dataset | id | messages
---|---|---|
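The rows below are two-turn chat records keyed by a numeric id: a user message carrying a Cochrane review abstract plus a plain-language query, and an assistant message carrying the lay answer. As a minimal sketch of consuming such records — assuming they are exported as JSON Lines; the filename and helper name are hypothetical, and only the `id`, `role`, and `content` fields come from the rows themselves:

```python
import json

def iter_records(path):
    """Yield (id, query_content, lay_answer) triples from a JSONL export of this table."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            # Each row's "messages" list holds exactly two turns:
            # a user turn (abstract + query) and an assistant turn (lay summary).
            user, assistant = row["messages"]
            assert user["role"] == "user" and assistant["role"] == "assistant"
            yield row["id"], user["content"], assistant["content"]

# Hypothetical export path, for illustration only.
for rec_id, query, answer in iter_records("query_laysum.jsonl"):
    print(rec_id, answer[:60])
```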
query-laysum
|
8184
|
[
{
"role": "user",
"content": "Abstract: Background\nDelirium is a syndrome characterised by an acute disturbance of attention and awareness which develops over a short time period and fluctuates in severity over the course of the day. It is commonly experienced during inpatient admission in the terminal phase of illness. It can cause symptoms such as agitation and hallucinations and is distressing for terminally ill people, their families and staff. Delirium may arise from any number of causes and treatment should aim to address these causes. When this is not possible, or treatment is unsuccessful, drug therapy to manage the symptoms may become necessary.\nThis is the second update of the review first published in 2004.\nObjectives\nTo evaluate the effectiveness and safety of drug therapies to manage delirium symptoms in terminally ill adults.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and PsycINFO from inception to July 2019, reference lists of retrieved papers, and online trial registries.\nSelection criteria\nWe included randomised controlled trials of drug therapies in any dose by any route, compared to another drug therapy, a non-pharmacological approach, placebo, standard care or wait-list control, for the management of delirium symptoms in terminally ill adults (18 years or older).\nData collection and analysis\nWe independently screened citations, extracted data and assessed risk of bias. Primary outcomes were delirium symptoms; agitation score; adverse events. Secondary outcomes were: use of rescue medication; cognitive status; survival. We applied the GRADE approach to assess the overall quality of the evidence for each outcome and we include eight 'Summary of findings' tables.\nMain results\nWe included four studies (three new to this update), with 399 participants. Most participants had advanced cancer or advanced AIDS, and mild- to moderate-severity delirium. Meta-analysis was not possible because no two studies examined the same comparison. Each study was at high risk of bias for at least one criterion. Most evidence was low to very low quality, downgraded due to very serious study limitations, imprecision or because there were so few data. Most studies reported delirium symptoms; two reported agitation scores; three reported adverse events with data on extrapyramidal effects; and none reported serious adverse events.\n1. Haloperidol versus placebo\nThere may be little to no difference between placebo and haloperidol in delirium symptoms within 24 hours (mean difference (MD) 0.34, 95% confidence interval (CI) −0.07 to 0.75; 133 participants). Haloperidol may slightly worsen delirium symptoms compared with placebo at 48 hours (MD 0.49, 95% CI 0.10 to 0.88; 123 participants with mild- to moderate-severity delirium).\nHaloperidol may reduce agitation slightly compared with placebo between 24 and 48 hours (MD −0.14, 95% −0.28 to −0.00; 123 participants with mild- to moderate-severity delirium).\nHaloperidol probably increases extrapyramidal adverse effects compared with placebo (MD 0.79, 95% CI 0.17 to 1.41; 123 participants with mild- to moderate-severity delirium).\n2. Haloperidol versus risperidone\nThere may be little to no difference in delirium symptoms with haloperidol compared with risperidone within 24 hours (MD −0.42, 95% CI −0.90 to 0.06; 126 participants) or 48 hours (MD −0.36, 95% CI −0.92 to 0.20; 106 participants with mild- to moderate-severity delirium). Agitation scores and adverse events were not reported for this comparison.\n3. 
Haloperidol versus olanzapine\nWe are uncertain whether haloperidol reduces delirium symptoms compared with olanzapine within 24 hours (MD 2.36, 95% CI −0.75 to 5.47; 28 participants) or 48 hours (MD 1.90, 95% CI −1.50 to 5.30, 24 participants). Agitation scores and adverse events were not reported for this comparison.\n4. Risperidone versus placebo\nRisperidone may slightly worsen delirium symptoms compared with placebo within 24 hours (MD 0.76, 95% CI 0.30 to 1.22; 129 participants); and at 48 hours (MD 0.85, 95% CI 0.32 to 1.38; 111 participants with mild- to moderate-severity delirium).\nThere may be little to no difference in agitation with risperidone compared with placebo between 24 and 48 hours (MD −0.05, 95% CI −0.19 to 0.09; 111 participants with mild- to moderate-severity delirium).\nRisperidone may increase extrapyramidal adverse effects compared with placebo (MD 0.73, 95% CI 0.09 to 1.37; 111 participants with mild- to moderate-severity delirium).\n5. Lorazepam plus haloperidol versus placebo plus haloperidol\nWe are uncertain whether lorazepam plus haloperidol compared with placebo plus haloperidol improves delirium symptoms within 24 hours (MD 2.10, 95% CI −1.00 to 5.20; 50 participants with moderate to severe delirium), reduces agitation within 24 hours (MD 1.90, 95% CI 0.90 to 2.80; 52 participants), or increases adverse events (RR 0.70, 95% CI −0.19 to 2.63; 31 participants with moderate to severe delirium).\n6. Haloperidol versus chlorpromazine\nWe are uncertain whether haloperidol reduces delirium symptoms compared with chlorpromazine at 48 hours (MD 0.37, 95% CI −4.58 to 5.32; 24 participants). Agitation scores were not reported. We are uncertain whether haloperidol increases adverse events compared with chlorpromazine (MD 0.46, 95% CI −4.22 to 5.14; 24 participants).\n7. Haloperidol versus lorazepam\nWe are uncertain whether haloperidol reduces delirium symptoms compared with lorazepam at 48 hours (MD −4.88, 95% CI −9.70 to 0.06; 17 participants). Agitation scores were not reported. We are uncertain whether haloperidol increases adverse events compared with lorazepam (MD −6.66, 95% CI −14.85 to 1.53; 17 participants).\n8. Lorazepam versus chlorpromazine\nWe are uncertain whether lorazepam reduces delirium symptoms compared with chlorpromazine at 48 hours (MD 5.25, 95% CI 0.38 to 10.12; 19 participants), or increases adverse events (MD 7.12, 95% CI 1.08 to 15.32; 18 participants). Agitation scores were not reported.\nSecondary outcomes: use of rescue medication, cognitive impairment, survival\nThere were insufficient data to draw conclusions or assess GRADE.\nAuthors' conclusions\nWe found no high-quality evidence to support or refute the use of drug therapy for delirium symptoms in terminally ill adults. We found low-quality evidence that risperidone or haloperidol may slightly worsen delirium symptoms of mild to moderate severity for terminally ill people compared with placebo. We found moderate- to low-quality evidence that haloperidol and risperidone may slightly increase extrapyramidal adverse events for people with mild- to moderate-severity delirium. Given the small number of studies and participants on which current evidence is based, further research is essential.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our search to July 2019 found four trials, involving 399 adults in total. Participants had advanced cancer (three studies) or advanced AIDS (one study), and all had symptoms of delirium. The drugs evaluated were antipsychotics (three studies) or benzodiazepines (one study), compared to placebo or each other, on their own or in combination with another drug or placebo.\nMost studies reported the outcomes we deemed most important: delirium symptoms, agitation, and adverse events (side effects).\nIt was not possible to combine the data from different studies due to a lack of similarity between them. We found low-quality evidence that certain drugs (haloperidol and risperidone) may slightly worsen delirium symptoms for terminally ill adults with mild- to moderate-severity delirium. We found moderate-quality evidence that haloperidol probably slightly increases adverse side effects for people with mild- to moderate-severity delirium."
}
] |
query-laysum
|
8185
|
[
{
"role": "user",
"content": "Abstract: Background\nAlopecia areata is an autoimmune disease leading to nonscarring hair loss on the scalp or body. There are different treatments including immunosuppressants, hair growth stimulants, and contact immunotherapy.\nObjectives\nTo assess the benefits and harms of the treatments for alopecia areata (AA), alopecia totalis (AT), and alopecia universalis (AU) in children and adults.\nSearch methods\nThe Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase, ClinicalTrials.gov and WHO ICTRP were searched up to July 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated classical immunosuppressants, biologics, small molecule inhibitors, contact immunotherapy, hair growth stimulants, and other therapies in paediatric and adult populations with AA.\nData collection and analysis\nWe used the standard procedures expected by Cochrane including assessment of risks of bias using RoB2 and the certainty of the evidence using GRADE. The primary outcomes were short-term hair regrowth ≥ 75% (between 12 and 26 weeks of follow-up), and incidence of serious adverse events. The secondary outcomes were long-term hair regrowth ≥ 75% (greater than 26 weeks of follow-up) and health-related quality of life. We could not perform a network meta-analysis as very few trials compared the same treatments. We presented direct comparisons and made a narrative description of the findings.\nMain results\nWe included 63 studies that tested 47 different treatments in 4817 randomised participants. All trials used a parallel-group design except one that used a cross-over design. The mean sample size was 78 participants. All trials recruited outpatients from dermatology clinics. Participants were between 2 and 74 years old. The trials included patients with AA (n = 25), AT (n = 1), AU (n = 1), mixed cases (n = 31), and unclear types of alopecia (n = 4).\nThirty-three out of 63 studies (52.3%) reported the proportion of participants achieving short-term hair regrowth ≥ 75% (between 12 and 26 weeks). Forty-seven studies (74.6%) reported serious adverse events and only one study (1.5%) reported health-related quality of life. Five studies (7.9%) reported the proportion of participants with long-term hair regrowth ≥ 75% (greater than 26 weeks).\nAmongst the variety of interventions found, we prioritised some groups of interventions for their relevance to clinical practice: systemic therapies (classical immunosuppressants, biologics, and small molecule inhibitors), and local therapies (intralesional corticosteroids, topical small molecule inhibitors, contact immunotherapy, hair growth stimulants and cryotherapy).\nConsidering only the prioritised interventions, 14 studies from 12 comparisons reported short-term hair regrowth ≥ 75% and 22 studies from 10 comparisons reported serious adverse events (18 reported zero events and 4 reported at least one). 
One study (1 comparison) reported quality of life, and two studies (1 comparison) reported long-term hair regrowth ≥ 75%.\nFor the main outcome of short-term hair regrowth ≥ 75%, the evidence is very uncertain about the effect of oral prednisolone or cyclosporine versus placebo (RR 4.68, 95% CI 0.57 to 38.27; 79 participants; 2 studies; very low-certainty evidence), intralesional betamethasone or triamcinolone versus placebo (RR 13.84, 95% CI 0.87 to 219.76; 231 participants; 1 study; very low-certainty evidence), oral ruxolitinib versus oral tofacitinib (RR 1.08, 95% CI 0.77 to 1.52; 80 participants; 1 study; very low-certainty evidence), diphencyprone or squaric acid dibutyl ester versus placebo (RR 1.16, 95% CI 0.79 to 1.71; 99 participants; 1 study; very low-certainty evidence), diphencyprone or squaric acid dibutyl ester versus topical minoxidil (RR 1.16, 95% CI 0.79 to 1.71; 99 participants; 1 study; very low-certainty evidence), diphencyprone plus topical minoxidil versus diphencyprone (RR 0.67, 95% CI 0.13 to 3.44; 30 participants; 1 study; very low-certainty evidence), topical minoxidil 1% and 2% versus placebo (RR 2.31, 95% CI 1.34 to 3.96; 202 participants; 2 studies; very low-certainty evidence) and cryotherapy versus fractional CO2 laser (RR 0.31, 95% CI 0.11 to 0.86; 80 participants; 1 study; very low-certainty evidence). The evidence suggests oral betamethasone may increase short-term hair regrowth ≥ 75% compared to prednisolone or azathioprine (RR 1.67, 95% CI 0.96 to 2.88; 80 participants; 2 studies; low-certainty evidence). There may be little to no difference between subcutaneous dupilumab and placebo in short-term hair regrowth ≥ 75% (RR 3.59, 95% CI 0.19 to 66.22; 60 participants; 1 study; low-certainty evidence) as well as between topical ruxolitinib and placebo (RR 5.00, 95% CI 0.25 to 100.89; 78 participants; 1 study; low-certainty evidence). However, baricitinib results in an increase in short-term hair regrowth ≥ 75% when compared to placebo (RR 7.54, 95% CI 3.90 to 14.58; 1200 participants; 2 studies; high-certainty evidence).\nFor the incidence of serious adverse events, the evidence is very uncertain about the effect of topical ruxolitinib versus placebo (RR 0.33, 95% CI 0.01 to 7.94; 78 participants; 1 study; very low-certainty evidence). Baricitinib and apremilast may result in little to no difference in the incidence of serious adverse events versus placebo (RR 1.47, 95% CI 0.60 to 3.60; 1224 participants; 3 studies; low-certainty evidence). The same result is observed for subcutaneous dupilumab compared to placebo (RR 1.54, 95% CI 0.07 to 36.11; 60 participants; 1 study; low-certainty evidence).\nFor health-related quality of life, the evidence is very uncertain about the effect of oral cyclosporine compared to placebo (MD 0.01, 95% CI -0.04 to 0.07; very low-certainty evidence).\nBaricitinib results in an increase in long-term hair regrowth ≥ 75% compared to placebo (RR 8.49, 95% CI 4.70 to 15.34; 1200 participants; 2 studies; high-certainty evidence).\nRegarding the risk of bias, the most relevant issues were the lack of details about randomisation and allocation concealment, the limited efforts to keep patients and assessors unaware of the assigned intervention, and losses to follow-up.\nAuthors' conclusions\nWe found that treatment with baricitinib results in an increase in short- and long-term hair regrowth compared to placebo. 
Although we found inconclusive results for the risk of serious adverse effects with baricitinib, the reported small incidence of serious adverse events in the baricitinib arm should be balanced with the expected benefits. We also found that the impact of other treatments on hair regrowth is very uncertain. Evidence for health-related quality of life is still scant.\n\nGiven the provided abstract, please respond to the following query: \"How is alopecia areata treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most patients get better spontaneously and, in some cases, the doctor and patient may choose to wait for new hair to grow. However, spontaneous growth is rare in severe cases. For patients who need medication, there are multiple treatments, including topical therapies, oral treatments, and localised corticosteroid injections."
}
] |
query-laysum
|
8186
|
[
{
"role": "user",
"content": "Abstract: Background\nThe development of severe coronavirus disease 2019 (COVID-19) and poor clinical outcomes are associated with hyperinflammation and a complex dysregulation of the immune response. Colchicine is an anti-inflammatory medicine and is thought to improve disease outcomes in COVID-19 through a wide range of anti-inflammatory mechanisms. Patients and healthcare systems need more and better treatment options for COVID-19 and a thorough understanding of the current body of evidence.\nObjectives\nTo assess the effectiveness and safety of Colchicine as a treatment option for COVID-19 in comparison to an active comparator, placebo, or standard care alone in any setting, and to maintain the currency of the evidence, using a living systematic review approach.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (comprising CENTRAL, MEDLINE (PubMed), Embase, ClinicalTrials.gov, WHO International Clinical Trials Registry Platform, and medRxiv), Web of Science (Science Citation Index Expanded and Emerging Sources Citation Index), and WHO COVID-19 Global literature on coronavirus disease to identify completed and ongoing studies without language restrictions to 21 May 2021.\nSelection criteria\nWe included randomised controlled trials evaluating colchicine for the treatment of people with COVID-19, irrespective of disease severity, age, sex, or ethnicity.\nWe excluded studies investigating the prophylactic effects of colchicine for people without severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection but at high risk of SARS-CoV-2 exposure.\nData collection and analysis\nWe followed standard Cochrane methodology. We used the Cochrane risk of bias tool (ROB 2) to assess bias in included studies and GRADE to rate the certainty of evidence for the following prioritised outcome categories considering people with moderate or severe COVID-19: all-cause mortality, worsening and improvement of clinical status, quality of life, adverse events, and serious adverse events and for people with asymptomatic infection or mild disease: all-cause mortality, admission to hospital or death, symptom resolution, duration to symptom resolution, quality of life, adverse events, serious adverse events.\nMain results\nWe included three RCTs with 11,525 hospitalised participants (8002 male) and one RCT with 4488 (2067 male) non-hospitalised participants. Mean age of people treated in hospital was about 64 years, and was 55 years in the study with non-hospitalised participants. 
Further, we identified 17 ongoing studies and 11 studies completed or terminated, but without published results.\nColchicine plus standard care versus standard care (plus/minus placebo)\nTreatment of hospitalised people with moderate to severe COVID-19\nAll-cause mortality: colchicine plus standard care probably results in little to no difference in all-cause mortality up to 28 days compared to standard care alone (risk ratio (RR) 1.00, 95% confidence interval (CI) 0.93 to 1.08; 2 RCTs, 11,445 participants; moderate-certainty evidence).\nWorsening of clinical status: colchicine plus standard care probably results in little to no difference in worsening of clinical status assessed as new need for invasive mechanical ventilation or death compared to standard care alone (RR 1.02, 95% CI 0.96 to 1.09; 2 RCTs, 10,916 participants; moderate-certainty evidence).\nImprovement of clinical status: colchicine plus standard care probably results in little to no difference in improvement of clinical status, assessed as number of participants discharged alive up to day 28 without clinical deterioration or death compared to standard care alone (RR 0.99, 95% CI 0.96 to 1.01; 1 RCT, 11,340 participants; moderate-certainty evidence).\nQuality of life, including fatigue and neurological status: we identified no studies reporting this outcome.\nAdverse events: the evidence is very uncertain about the effect of colchicine on adverse events compared to placebo (RR 1.00, 95% CI 0.56 to 1.78; 1 RCT, 72 participants; very low-certainty evidence).\nSerious adverse events: the evidence is very uncertain about the effect of colchicine plus standard care on serious adverse events compared to standard care alone (0 events observed in 1 RCT of 105 participants; very low-certainty evidence).\nTreatment of non-hospitalised people with asymptomatic SARS-CoV-2 infection or mild COVID-19\nAll-cause mortality: the evidence is uncertain about the effect of colchicine on all-cause mortality at 28 days (Peto odds ratio (OR) 0.57, 95% CI 0.20 to 1.62; 1 RCT, 4488 participants; low-certainty evidence).\nAdmission to hospital or death within 28 days: colchicine probably slightly reduces the need for hospitalisation or death within 28 days compared to placebo (RR 0.80, 95% CI 0.62 to 1.03; 1 RCT, 4488 participants; moderate-certainty evidence).\nSymptom resolution: we identified no studies reporting this outcome.\nQuality of life, including fatigue and neurological status: we identified no studies reporting this outcome.\nAdverse events: the evidence is uncertain about the effect of colchicine on adverse events compared to placebo. Results are from one RCT reporting treatment-related events only in 4412 participants (low-certainty evidence).\nSerious adverse events: colchicine probably slightly reduces serious adverse events (RR 0.78, 95% CI 0.61 to 1.00; 1 RCT, 4412 participants; moderate-certainty evidence).\nColchicine versus another active treatment (e.g. corticosteroids, anti-viral drugs, monoclonal antibodies)\nNo studies evaluated this comparison.\nDifferent formulations, doses, or schedules of colchicine\nNo studies assessed this.\nAuthors' conclusions\nBased on the current evidence, in people hospitalised with moderate to severe COVID-19 the use of colchicine probably has little to no influence on mortality or clinical progression in comparison to placebo or standard care alone. 
We do not know whether colchicine increases the risk of (serious) adverse events.\nWe are uncertain about the evidence of the effect of colchicine on all-cause mortality for people with asymptomatic infection or mild disease. However, colchicine probably results in a slight reduction of hospital admissions or deaths within 28 days, and the rate of serious adverse events compared with placebo.\nNone of the studies reported data on quality of life or compared the benefits and harms of colchicine versus other drugs, or different dosages of colchicine.\nWe identified 17 ongoing and 11 completed but not published RCTs, which we expect to incorporate in future versions of this review as their results become available.\nEditorial note: due to the living approach of this work, we monitor newly published results of RCTs on colchicine on a weekly basis and will update the review when the evidence or our certainty in the evidence changes.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What did we do?\nWe searched for studies that compared colchicine together with usual care to usual care (plus/minus placebo). Studies could take place anywhere in the world and include people with mild or no symptoms, moderate or severe COVID-19, of any age, sex, or ethnicity.\nWe compared and summarised the results of the studies and rated our certainty in the evidence, based on factors such as study methods and sizes."
}
] |
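The risk ratios quoted in the record above (e.g. RR 1.00, 95% CI 0.93 to 1.08 for all-cause mortality) are summarised on the log scale, so the standard error can be back-calculated from the reported interval. A worked example under the usual normal approximation — a standard meta-analysis manipulation, not a calculation reported in the review itself:

$$
\mathrm{SE}(\ln \mathrm{RR}) = \frac{\ln(\text{upper}) - \ln(\text{lower})}{2 \times 1.96} = \frac{\ln 1.08 - \ln 0.93}{3.92} \approx 0.038
$$

$$
95\%\,\mathrm{CI} = \exp\!\left(\ln 1.00 \pm 1.96 \times 0.038\right) \approx (0.93,\ 1.08)
$$

Recovering the reported bounds confirms the interval is internally consistent.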
query-laysum
|
8187
|
[
{
"role": "user",
"content": "Abstract: Background\nGolimumab is a humanized inhibitor of Tumor necrosis factor-alpha, recently approved by the Food and Drug Administration (FDA) for the treatment of Rheumatoid arthritis (RA).\nObjectives\nThe objective of this systematic review was to compare the efficacy and safety of golimumab (alone or in combination with DMARDs or biologics) to placebo (alone or in combination with DMARDs or biologics) in randomized or quasi-randomized clinical trials in adults with RA.\nSearch methods\nAn expert librarian searched six databases for any clinical trials of golimumab in RA, including the Cochrane Central Register of Controlled Trials (CENTRAL), OVID MEDLINE, CINAHL, EMBASE, Science Citation Index (Web of Science) and Current Controlled Trials databases.\nSelection criteria\nStudies were included if they used golimumab in adults with RA, were randomized or quasi-randomized and provided clinical outcomes.\nData collection and analysis\nTwo review authors (JS, SN) independently reviewed all titles and abstracts, selected appropriate studies for full review and reviewed the full-text articles for the final selection of included studies. For each study, they independently abstracted study characteristics, safety and efficacy data and performed risk of bias assessment. Disagreements were resolved by consensus. For continuous measures, we calculated mean differences or standardized mean differences and for categorical measures, relative risks. 95% confidence intervals were calculated.\nMain results\nFour RCTs with 1,231 patients treated with golimumab and 483 patients treated with placebo were included. Of these, 436 were treated with the FDA-approved dose of golimumab 50 mg every four weeks. Compared to patients treated with placebo+methotrexate, patients treated with the FDA-approved dose of golimumab+methotrexate were 2.6 times more likely to reach ACR50 (95% confidence interval (CI) 1.3 to 4.9; P=0.005 and NNT= 5, 95% confidence interval 2 to 20), no more likely to have any adverse event (relative risk 1.1, 95% Cl 0.9 to 1.2; P=0.44), and 0.5 times as likely to have overall withdrawals (95% Cl 0.3 to 0.8; P=0.005). Golimumab-treated patients were significantly more likely to achieve remission, low disease activity and improvement in functional ability compared to placebo (all statistically significant). No significant differences were noted between golimumab and placebo regarding serious adverse events, infections, serious infections, lung infections, tuberculosis, cancer, withdrawals due to adverse events and inefficacy and deaths. No radiographic data were reported.\nAuthors' conclusions\nWith an overall high grade of evidence, at the FDA-approved dose, golimumab is significantly more efficacious than placebo in treatment of patients with active RA , when used in combination with methotrexate. The short-term safety profile, based on short-term RCTs, is reasonable with no differences in total adverse events, serious infections, cancer, tuberculosis or deaths. Long-term surveillance studies are needed for safety assessment.\n\nGiven the provided abstract, please respond to the following query: \"The review shows that in people with rheumatoid arthritis;\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review shows that in people with rheumatoid arthritis;\n- Golimumab improves the number of tender or swollen joints and other outcomes such as pain and disability (ACR 50).\n- Golimumab increases the number of people in remission.\n- Slightly more people who take Golimumab will have minor side effects (such as a minor infection), but this may be the result of chance. There is no difference in the number of people who will have a serious adverse event, compared to people who took a placebo.\nWe do not have precise information about side effects and complications. This is particularly true for rare but serious side effects. Possible side effects may include a serious infection or upper respiratory infection. Rare complications may include certain types of cancer."
}
] |
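The abstract in the record above pairs RR 2.6 for ACR50 with NNT = 5. The two are linked through the absolute risk reduction. With an assumed control-group ACR50 rate of about 15% — an illustrative figure, not reported in the abstract — the arithmetic reproduces the stated NNT:

$$
\mathrm{ARR} = \mathrm{CER}\times(\mathrm{RR}-1) \approx 0.15 \times (2.6 - 1) = 0.24,
\qquad
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} \approx 4.2 \;\Rightarrow\; 5 \text{ (rounded up)}
$$

NNT is conventionally rounded up to the next whole person, which is why 4.2 is reported as 5.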
query-laysum
|
8188
|
[
{
"role": "user",
"content": "Abstract: Background\nMechanical ventilation is a potentially painful and discomforting intervention that is widely used in neonatal intensive care. Newborn infants demonstrate increased sensitivity to pain, which may affect clinical and neurodevelopmental outcomes. The use of drugs that reduce pain might be important in improving survival and neurodevelopmental outcomes.\nObjectives\nTo determine the benefits and harms of opioid analgesics for neonates (term or preterm) receiving mechanical ventilation compared to placebo or no drug, other opioids, or other analgesics or sedatives.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 9), in the Cochrane Library; MEDLINE via PubMed (1966 to 29 September 2020); Embase (1980 to 29 September 2020); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to 29 September 2020). We searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials comparing opioids to placebo or no drug, to other opioids, or to other analgesics or sedatives in newborn infants on mechanical ventilation. We excluded cross-over trials. We included term (≥ 37 weeks' gestational age) and preterm (< 37 weeks' gestational age) newborn infants on mechanical ventilation. We included any duration of drug treatment and any dosage given continuously or as bolus; we excluded studies that gave opioids to ventilated infants for procedures.\nData collection and analysis\nFor each of the included trials, we independently extracted data (e.g. number of participants, birth weight, gestational age, types of opioids) using Cochrane Effective Practice and Organisation of Care Group (EPOC) criteria and assessed the risk of bias (e.g. adequacy of randomisation, blinding, completeness of follow-up). We evaluated treatment effects using a fixed-effect model with risk ratio (RR) for categorical data and mean difference (MD) for continuous data. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included 23 studies (enrolling 2023 infants) published between 1992 and 2019. Fifteen studies (1632 infants) compared the use of morphine or fentanyl versus placebo or no intervention. Four studies included both term and preterm infants, and one study only term infants; all other studies included only preterm infants, with five studies including only very preterm infants. We are uncertain whether opioids have an effect on the Premature Infant Pain Profile (PIPP) Scale in the first 12 hours after infusion (MD -5.74, 95% confidence interval (CI) -6.88 to -4.59; 50 participants, 2 studies) and between 12 and 48 hours after infusion (MD -0.98, 95% CI -1.35 to -0.61; 963 participants, 3 studies) because of limitations in study design, high heterogeneity (inconsistency), and imprecision of estimates (very low-certainty evidence - GRADE). The use of morphine or fentanyl probably has little or no effect in reducing duration of mechanical ventilation (MD 0.23 days, 95% CI -0.38 to 0.83; 1259 participants, 7 studies; moderate-certainty evidence because of unclear risk of bias in most studies) and neonatal mortality (RR 1.12, 95% CI 0.80 to 1.55; 1189 participants, 5 studies; moderate-certainty evidence because of imprecision of estimates). 
We are uncertain whether opioids have an effect on neurodevelopmental outcomes at 18 to 24 months (RR 2.00, 95% CI 0.39 to 10.29; 78 participants, 1 study; very low-certainty evidence because of serious imprecision of the estimates and indirectness). Limited data were available for the other comparisons (i.e. two studies (54 infants) on morphine versus midazolam, three (222 infants) on morphine versus fentanyl, and one each on morphine versus diamorphine (88 infants), morphine versus remifentanil (20 infants), fentanyl versus sufentanil (20 infants), and fentanyl versus remifentanil (24 infants)). For these comparisons, no meta-analysis was conducted because each outcome was reported by only one study.\nAuthors' conclusions\nWe are uncertain whether opioids have an effect on pain and neurodevelopmental outcomes at 18 to 24 months; the use of morphine or fentanyl probably has little or no effect in reducing the duration of mechanical ventilation and neonatal mortality. Data on the other comparisons planned in this review (opioids versus analgesics; opioids versus other opioids) are extremely limited and do not allow any conclusions. In the absence of firm evidence to support a routine policy, opioids should be used selectively - based on clinical judgement and evaluation of pain indicators - although pain measurement in newborns has limitations.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are uncertain whether opioids have an effect on pain and neurodevelopmental outcomes at 18 to 24 months; use of morphine or fentanyl probably has little or no effect in reducing the duration of mechanical ventilation and neonatal mortality. Further research is needed."
}
] |
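The record above pools mean differences across studies with a fixed-effect (inverse-variance) model. A minimal sketch of that computation, recovering each study's standard error from its reported 95% CI; the two input triples below are made up for illustration and are not data from the review:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Standard error recovered from a reported 95% CI (normal approximation)."""
    return (upper - lower) / (2 * z)

def fixed_effect_md(studies):
    """Inverse-variance fixed-effect pooled MD from (md, ci_lower, ci_upper) triples."""
    weighted = []
    for md, lo, hi in studies:
        se = se_from_ci(lo, hi)
        weighted.append((md, 1.0 / se**2))  # weight = 1 / variance
    total_w = sum(w for _, w in weighted)
    pooled = sum(md * w for md, w in weighted) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Illustrative study results only.
print(fixed_effect_md([(-0.9, -1.4, -0.4), (-1.1, -1.8, -0.4)]))
```

Larger studies (narrower CIs) receive proportionally more weight, which is the defining property of the inverse-variance method.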
query-laysum
|
8189
|
[
{
"role": "user",
"content": "Abstract: Background\nUncertainty exists about the optimal point at which multi-component fortifier should be added to human milk for promoting growth in preterm infants. The most common practice is to start fortification when the infant’s daily enteral feed volume reaches 100 mL/kg body weight. Another approach is to commence fortification earlier, in some cases as early as the first enteral feed. Early fortification of human milk could increase nutrient intake and growth rates but may increase the risk of feed intolerance and necrotising enterocolitis (NEC).\nObjectives\nTo assess effects on growth and safety of early fortification of human milk versus late fortification in preterm infants\nTo assess whether effects vary based upon gestational age (≤ 27 weeks; 28 to 31 weeks; ≥ 32 weeks), birth weight (< 1000 g; 1000 to 1499 g; ≥ 1500 g), small or appropriate for gestational age, or type of fortifier (bovine milk-based human milk fortifier (HMF); human milk-based HMF; formula powder)\nSearch methods\nWe used the standard strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 8); OVID MEDLINE (R) and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions (R) (1946 to 15 August 2019); MEDLINE via PubMed (1 August 2018 to 15 August 2019) for the previous year; and the Cumulative Index to Nursing and Allied Health Literatue (CINAHL) (1981 to 15 August 2019). We searched clinical trials databases and reference lists of included studies.\nSelection criteria\nWe included randomised controlled trials that compared early versus late fortification of human milk in preterm infants. We defined early fortification as fortification started at < 100 mL/kg/d enteral feed volume or < 7 days postnatal age, and late fortification as fortification started at ≥ 100 mL/kg/d feeds or ≥ 7 days postnatal age.\nData collection and analysis\nBoth review authors assessed trial eligibility and risk of bias and independently extracted data. We analysed treatment effects in individual trials, and we reported risk ratio (RR) for dichotomous data and mean difference (MD) for continuous data, with respective 95% confidence intervals (CIs). We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included two trials with a total of 237 infants. All participants were very low birth weight infants (birth weight < 1500 g). Early fortification was started at 20 mL/kg/d enteral feeds in one study and 40 mL/kg/d in the other study. Late fortification was started at 100 mL/kg/d feeds in both studies. One study used bovine milk-based fortifier, and the other used human milk-based fortifier.\nMeta-analysis showed that early fortification may have little or no effect on growth outcomes including time to regain birth weight (MD -0.06 days, 95% CI -1.32 to 1.20 days), linear growth (MD 0.10 cm/week, 95% CI -0.03 to 0.22 cm/week), or head growth (MD -0.01 cm/week, 95% CI -0.07 to 0.06 cm/week) during the initial hospitalisation period. Early fortification may have little or no effect on the risk of NEC (MD -0.01, 95% CI -0.07 to 0.06). 
The certainty of evidence was low for these outcomes due to risk of bias (lack of blinding) and imprecision (small sample size).\nEarly fortification may have little or no effect on incidence of surgical NEC, time to reach full enteral feeds, extrauterine growth restriction at discharge, proportion of infants with feed interruption episodes, duration of total parenteral nutrition (TPN), duration of central venous line usage, or incidence of invasive infection, all-cause mortality, and duration of hospital stay. The certainty of evidence was low for these outcomes due to risk of bias (lack of blinding) and imprecision (small sample size).\nWe did not have data for other outcomes such as subsequent weight gain after birth weight is regained, parenteral nutrition-associated liver disease, postdischarge growth, and neurodevelopmental outcomes.\nAuthors' conclusions\nAvailable evidence is insufficient to support or refute early fortification of human milk in preterm infants. Further large trials would be needed to provide data of sufficient quality and precision to inform policy and practice.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The available evidence is insufficient to support or refute early addition of HMF to human milk to promote growth in preterm infants. More trials are needed to examine whether adding HMF early is beneficial or harmful for preterm infants."
}
] |
query-laysum
|
8190
|
[
{
"role": "user",
"content": "Abstract: Background\nPre-eclampsia is a pregnancy-specific multi-organ disorder, which is characterised by hypertension and multisystem organ involvement and which has significant maternal and fetal morbidity and mortality. Failure of the placental vascular remodelling and reduced uteroplacental flow form the etiopathological basis of pre-eclampsia. There are several established therapies for pre-eclampsia including antihypertensives and anticonvulsants. Most of these therapies aim at controlling the blood pressure or preventing complications of elevated blood pressure, or both. Epidural therapy aims at blocking the vasomotor tone of the arteries, thereby increasing uteroplacental blood flow. This review was aimed at evaluating the available evidence about the possible benefits and risks of epidural therapy in the management of severe pre-eclampsia, to define the current evidence level of this therapy, and to determine what (if any) further evidence is required.\nObjectives\nTo assess the effectiveness, safety and cost of the extended use of epidural therapy for treating severe pre-eclampsia in non-labouring women. This review aims to compare the use of extended epidural therapy with other methods, which include intravenous magnesium sulphate, anticonvulsants other than magnesium sulphate, with or without use of the antihypertensive drugs and adjuncts in the treatment of severe pre-eclampsia.\nThis review only considered the use of epidural anaesthesia in the management of severe pre-eclampsia in the antepartum period and not as pain relief in labour.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) (13 July 2017) and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs comparing epidural therapy versus traditional therapy for pre-eclampsia in the form of antihypertensives, anticonvulsants, magnesium sulphate, low-dose dopamine, corticosteroids or a combination of these, were eligible for inclusion. Trials using a cluster design, and studies published in abstract form only are also eligible for inclusion in this review. Cross-over trials were not eligible for inclusion in this review.\nData collection and analysis\nThe two review authors independently assessed trials for inclusion and trial quality. There were no relevant data available for extraction.\nMain results\nWe included one small study (involving 24 women). The study was a single-centre randomised trial conducted in Mexico. This study compared a control group who received antihypertensive therapy, anticonvulsant therapy, plasma expanders, corticosteroids and dypyridamole with an intervention group that received epidural block instead of the antihypertensives, as well as all the other four drugs. Lumbar epidural block was given using 0.25% bupivacaine, 10 mg bolus and 5 mg each hour on continuous epidural infusion for six hours. This study was at low risk of bias in three domains but was assessed to be high risk of bias in two domains due to lack of allocation concealment and blinding of women and staff, and unclear for random sequence generation and outcome assessor blinding.\nThe included study did not report on any of this review's important outcomes. 
Meta-analysis was not possible.\nFor the mother, these were: maternal death (death during pregnancy or up to 42 days after the end of the pregnancy, or death more than 42 days after the end of the pregnancy); development of eclampsia or recurrence of seizures; stroke; any serious morbidity: defined as at least one of stroke, kidney failure, liver failure, HELLP syndrome (haemolysis, elevated liver enzymes and low platelets), disseminated intravascular coagulation, pulmonary oedema.\nFor the baby, these were: death: stillbirths (death in utero at or after 20 weeks' gestation), perinatal deaths (stillbirths plus deaths in the first week of life), death before discharge from the hospital, neonatal deaths (death within the first 28 days after birth), deaths after the first 28 days; preterm birth (defined as the birth before 37 completed weeks' gestation); and side effects of the intervention.\nReported outcomes\nThe included study only reported on a single secondary outcome of interest to this review: the Apgar score of the baby at birth and after five minutes, and there was no clear difference between the intervention and control groups.\nThe included study also reported a reduction in maternal diastolic arterial pressure. However, the change in maternal mean arterial pressure and systolic arterial pressure, which were the other reported outcomes of this trial, were not significantly different between the two groups.\nAuthors' conclusions\nCurrently, there is insufficient evidence from randomised controlled trials to evaluate the effectiveness, safety or cost of using epidural therapy for treating severe pre-eclampsia in non-labouring women.\nHigh-quality randomised controlled trials are needed to evaluate the use of epidural agents as therapy for treatment of severe pre-eclampsia. The rationale for the use of epidural is well-founded. However, there is insufficient evidence from randomised controlled trials to show that the effect of epidural translates into improved maternal and fetal outcomes. Thus, there is a need for larger, well-designed studies to come to an evidence-based conclusion as to whether the lowering of vasomotor tone by epidural therapy results in better maternal and fetal outcomes and for how long that could be maintained. Another important question that needs to be answered is how long should extended epidural be used to ensure any potential clinical benefits and what could be the associated side effects and costs. Interactions with other modalities of treatment and women's satisfaction could represent other avenues of research.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Women with severe pre-eclampsia are treated with drugs to lower their blood pressure, magnesium sulphate or other anticonvulsants to prevent seizures (eclampsia), and drugs to control blood clotting.\nExtended epidural anaesthesia may have a role in reducing the risk of stroke or cerebral bleeding, kidney and liver failure with severe pre-eclampsia. This could give time for optimal planning of delivery so that outcomes are improved for the mother and her baby. Blood flow to the uterus and placenta may be increased so that birth outcomes are improved. Extended epidural analgesia has also been reported to be well-tolerated for up to one week. The purpose of this review was to evaluate the use of epidural therapy as a treatment modality for severe pre-eclampsia and to compare this therapy with other established treatments."
}
] |
query-laysum
|
8191
|
[
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers (also known as pressure injuries, pressure sores, decubitus ulcers and bed sores) are localised injuries to the skin or underlying soft tissue, or both, caused by unrelieved pressure, shear or friction. Alternating pressure (active) air surfaces are widely used with the aim of preventing pressure ulcers.\nObjectives\nTo assess the effects of alternating pressure (active) air surfaces (beds, mattresses or overlays) compared with any support surface on the incidence of pressure ulcers in any population in any setting.\nSearch methods\nIn November 2019, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included randomised controlled trials that allocated participants of any age to alternating pressure (active) air beds, overlays or mattresses. Comparators were any beds, overlays or mattresses.\nData collection and analysis\nAt least two review authors independently assessed studies using predetermined inclusion criteria. We carried out data extraction, 'Risk of bias' assessment using the Cochrane 'Risk of bias' tool, and the certainty of the evidence assessment according to Grading of Recommendations, Assessment, Development and Evaluations methodology.\nMain results\nWe included 32 studies (9058 participants) in the review. Most studies were small (median study sample size: 83 participants). The average age of participants ranged from 37.2 to 87.0 years (median: 69.1 years). Participants were largely from acute care settings (including accident and emergency departments). We synthesised data for six comparisons in the review: alternating pressure (active) air surfaces versus: foam surfaces, reactive air surfaces, reactive water surfaces, reactive fibre surfaces, reactive gel surfaces used in the operating room followed by foam surfaces used on the ward bed, and another type of alternating pressure air surface. Of the 32 included studies, 25 (78.1%) presented findings which were considered at high overall risk of bias.\nPrimary outcome: pressure ulcer incidence\nAlternating pressure (active) air surfaces may reduce the proportion of participants developing a new pressure ulcer compared with foam surfaces (risk ratio (RR) 0.63, 95% confidence interval (CI) 0.34 to 1.17; I2 = 63%; 4 studies, 2247 participants; low-certainty evidence). 
Alternating pressure (active) air surfaces applied on both operating tables and hospital beds may reduce the proportion of people developing a new pressure ulcer compared with reactive gel surfaces used on operating tables followed by foam surfaces applied on hospital beds (RR 0.22, 95% CI 0.06 to 0.76; I2 = 0%; 2 studies, 415 participants; low-certainty evidence).\nIt is uncertain whether there is a difference in the proportion of people developing new pressure ulcers between alternating pressure (active) air surfaces and the following surfaces, as all these comparisons have very low-certainty evidence: (1) reactive water surfaces; (2) reactive fibre surfaces; and (3) reactive air surfaces.\nThe comparisons between different types of alternating pressure air surfaces are presented narratively. Overall, all comparisons suggest little to no difference between these surfaces in pressure ulcer incidence (7 studies, 2833 participants; low-certainty evidence).\nIncluded studies have data on time to pressure ulcer incidence for three comparisons. When time to pressure ulcer development is considered using a hazard ratio (HR), it is uncertain whether there is a difference in the risk of developing new pressure ulcers, over 90 days' follow-up, between alternating pressure (active) air surfaces and foam surfaces (HR 0.41, 95% CI 0.10 to 1.64; I2 = 86%; 2 studies, 2105 participants; very low-certainty evidence). For the comparison with reactive air surfaces, there is low-certainty evidence that people treated with alternating pressure (active) air surfaces may have a higher risk of developing an incident pressure ulcer than those treated with reactive air surfaces over 14 days' follow-up (HR 2.25, 95% CI 1.05 to 4.83; 1 study, 308 participants). Neither of the two studies with time to ulcer incidence data suggested a difference in the risk of developing an incident pressure ulcer over 60 days' follow-up between different types of alternating pressure air surfaces.\nSecondary outcomes\nThe included studies have data on (1) support-surface-associated patient comfort for comparisons involving foam surfaces, reactive air surfaces, reactive fibre surfaces and alternating pressure (active) air surfaces; (2) adverse events for comparisons involving foam surfaces, reactive gel surfaces and alternating pressure (active) air surfaces; and (3) health-related quality of life outcomes for the comparison involving foam surfaces. However, all these outcomes and comparisons have low or very low-certainty evidence and it is uncertain whether there are any differences in these outcomes.\nIncluded studies have data on cost effectiveness for two comparisons. Moderate-certainty evidence suggests that alternating pressure (active) air surfaces are probably more cost-effective than foam surfaces (1 study, 2029 participants) and that alternating pressure (active) air mattresses are probably more cost-effective than overlay versions of this technology for people in acute care settings (1 study, 1971 participants).\nAuthors' conclusions\nCurrent evidence is uncertain about the difference in pressure ulcer incidence between using alternating pressure (active) air surfaces and other surfaces (reactive water surfaces, reactive fibre surfaces and reactive air surfaces). Alternating pressure (active) air surfaces may reduce pressure ulcer risk compared with foam surfaces and reactive gel surfaces used on operating tables followed by foam surfaces applied on hospital beds. 
People using alternating pressure (active) air surfaces may be more likely to develop new pressure ulcers over 14 days' follow-up than those treated with reactive air surfaces in the nursing home setting; but as the result is sensitive to the choice of outcome measure it should be interpreted cautiously. Alternating pressure (active) air surfaces are probably more cost-effective than reactive foam surfaces in preventing new pressure ulcers.\nFuture studies should include time-to-event outcomes and assessment of adverse events and trial-level cost-effectiveness. Further review using network meta-analysis adds to the findings reported here.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to November 2019."
}
] |
query-laysum
|
8192
|
[
{
"role": "user",
"content": "Abstract: Background\nHeavy alcohol consumption causes alcoholic liver disease and is a causal factor of many types of liver injuries and concomitant diseases. It is a true systemic disease that may damage the digestive tract, the nervous system, the heart and vascular system, the bone and skeletal muscle system, and the endocrine and immune system, and can lead to cancer. Liver damage in turn, can present as multiple alcoholic liver diseases, including fatty liver, steatohepatitis, fibrosis, alcoholic cirrhosis, and hepatocellular carcinoma, with presence or absence of hepatitis B or C virus infection. There are three scarring types (fibrosis) that are most commonly found in alcoholic liver disease: centrilobular scarring, pericellular fibrosis, and periportal fibrosis. When liver fibrosis progresses, alcoholic cirrhosis occurs. Hepatocellular carcinoma occurs in 5% to 15% of people with alcoholic cirrhosis, but people in whom hepatocellular carcinoma has developed are often co-infected with hepatitis B or C virus.\nAbstinence from alcohol may help people with alcoholic disease in improving their prognosis of survival at any stage of their disease; however, the more advanced the stage, the higher the risk of complications, co-morbidities, and mortality, and lesser the effect of abstinence. Being abstinent one month after diagnosis of early cirrhosis will improve the chance of a seven-year life expectancy by 1.6 times. Liver transplantation is the only radical method that may change the prognosis of a person with alcoholic liver disease; however, besides the difficulties of finding a suitable liver transplant organ, there are many other factors that may influence a person's survival.\nUltrasound is an inexpensive method that has been used for years in clinical practice to diagnose alcoholic cirrhosis. Ultrasound parameters for assessing cirrhosis in people with alcoholic liver disease encompass among others liver size, bluntness of the liver edge, coarseness of the liver parenchyma, nodularity of the liver surface, size of the lymph nodes around the hepatic artery, irregularity and narrowness of the inferior vena cava, portal vein velocity, and spleen size.\nDiagnosis of cirrhosis by ultrasound, especially in people who are asymptomatic, may have its advantages for the prognosis, motivation, and treatment of these people to decrease their alcohol consumption or become abstinent.\nTimely diagnosis of alcoholic cirrhosis in people with alcoholic liver disease is the cornerstone for evaluation of prognosis or choosing treatment strategies.\nObjectives\nTo determine the diagnostic accuracy of ultrasonography for detecting the presence or absence of cirrhosis in people with alcoholic liver disease compared with liver biopsy as reference standard.\nTo determine the diagnostic accuracy of any of the ultrasonography tests, B-mode or echo-colour Doppler ultrasonography, used singly or combined, or plus ultrasonography signs, or a combination of these, for detecting hepatic cirrhosis in people with alcoholic liver disease compared with liver biopsy as a reference standard, irrespective of sequence.\nSearch methods\nWe performed searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, The Cochrane Hepato-Biliary Group Diagnostic Test Accuracy Studies Register, The Cochrane Library (Wiley), MEDLINE (OvidSP), EMBASE (OvidSP), and the Science Citation Index Expanded to 8 January 2015. 
We applied no language limitations.\nWe screened the reference lists of the retrieved studies to identify other potentially relevant studies for inclusion in the review and read abstract and poster publications.\nSelection criteria\nThree review authors independently identified studies for possible inclusion in the review. We excluded references not fulfilling the inclusion criteria of the review protocol. We sent e-mails to study authors.\nThe included studies had to evaluate ultrasound in the diagnosis of hepatic cirrhosis using only liver biopsy as the reference standard.\nThe maximum time interval between liver biopsy and ultrasonography should not have exceeded six months. In addition, ultrasonography could have been performed before or after liver biopsy.\nData collection and analysis\nWe followed the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy.\nMain results\nThe review included two studies that provided numerical data regarding alcoholic cirrhosis in 205 men and women with alcoholic liver disease. Although there were no applicability concerns in terms of participant selection, index test, and reference standard, we judged the two studies at high risk of bias. Participants in both studies had undergone both liver biopsy and ultrasonography investigations. The studies shared only a few comparable clinical signs and symptoms (index tests).\nWe decided not to perform a meta-analysis due to the high risk of bias and the high degree of heterogeneity of the included studies.\nAuthors' conclusions\nAs the accuracy of ultrasonography in the two included studies was not informative enough, we could not recommend the use of ultrasonography as a diagnostic tool for liver cirrhosis in people with alcoholic liver disease. In order to be able to answer the review questions, we need diagnostic ultrasonography prospective studies of adequate sample size, enrolling only participants with alcoholic liver disease.\nThe design and report of the studies should follow the Standards for Reporting of Diagnostic Accuracy. The sonographic features, with validated cut-offs, which may help identify clinical signs used for diagnosis of fibrosis in alcoholic liver disease, should be carefully selected to achieve maximum diagnostic accuracy on ultrasonography.\n\nGiven the provided abstract, please respond to the following query: \"Aim\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The primary review aim was to determine the diagnostic accuracy of ultrasound for detecting the presence or absence of cirrhosis in people with alcoholic liver disease compared with liver biopsy (where a small needle is inserted into the liver to collect a sample, which is then examined in a laboratory) as reference standard (i.e., the best available test). The secondary aim of the review was to determine the diagnostic accuracy of any of the ultrasound tests, B-mode (a two-dimensional ultrasound image display composed of bright dots representing the ultrasound echoes) or echo-colour Doppler ultrasound (a colour ultrasound image showing blood flow through the liver), used singly or combined, or plus ultrasound signs, or a combination of these, for detecting hepatic cirrhosis in people with alcoholic liver disease compared with liver biopsy as a reference standard."
}
] |
query-laysum
|
8193
|
[
{
"role": "user",
"content": "Abstract: Background\nPoor medication adherence decreases treatment efficacy and worsens clinical outcomes, but average rates of adherence to long-term pharmacological treatments for chronic illnesses are only about 50%. Interventions for improving medication adherence largely focus on patients rather than on physicians; however, the strategies shown to be effective are complex and difficult to implement in clinical practice. There is a need for new care models addressing the problem of medication adherence, integrating this problem into the patient care process. Physicians tend to overestimate how well patients take their medication as prescribed. This can lead to missed opportunities to change medications, solve adverse effects, or propose the use of reminders in order to improve patients' adherence. Thus, providing physicians with feedback on medication adherence has the potential to prompt changes that improve their patients' adherence to prescribed medications.\nObjectives\nTo assess the effects of providing physicians with feedback about their patients' medication adherence for improving adherence. We also assessed the effects of the intervention on patient outcomes, health resource use, and processes of care.\nSearch methods\nWe conducted a systematic search of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and Embase, all from database inception to December 2016 and without any language restriction. We also searched ISI Web of Science, two trials registers, and grey literature.\nSelection criteria\nWe included randomised trials, controlled before-after studies, and interrupted time series studies that compared the effects of providing feedback to physicians about their patients' adherence to prescribed long-term medications for chronic diseases versus usual care. We included published or unpublished studies in any language. Participants included any physician and any patient prescribed with long-term medication for chronic disease. We included interventions providing the prescribing physician with information about patient adherence to medication. Only studies in which feedback to the physician was the sole intervention or the essential component of a multifaceted intervention were eligible. In the comparison groups, the physicians should not have had access to information about their patients' adherence to medication. We considered the following outcomes: medication adherence, patient outcomes, health resource use, processes of care, and adverse events.\nData collection and analysis\nTwo independent review authors extracted and analysed all data using standard methodological procedures expected by Cochrane and the Effective Practice and Organisation of Care group. Due to heterogeneity in study methodology, comparison groups, intervention settings, and measurements of outcomes, we did not carry out meta-analysis. We describe the impact of interventions on outcomes in tabular form and make a qualitative assessment of the effects of studies.\nMain results\nWe included nine studies (23,255 patient participants): eight randomised trials and one interrupted time series analysis. The studies took place in primary care and other outpatient settings in the USA and Canada. Seven interventions involved the systematic provision of feedback to physicians concerning all their patients' adherence to medication, and two interventions involved issuing an alert for non-adherent patients only. 
Seven studies used pharmacy refill data to assess medication adherence, and two used an electronic device or self-reporting. The definition of adherence differed across studies, making comparisons difficult. Eight studies were at high risk of bias, and one study was at unclear risk of bias. The most frequent source of bias was lack of protection against contamination.\nProviding physicians with feedback may lead to little or no difference in medication adherence (seven studies, 22,924 patients), patient outcomes (two studies, 1292 patients), or health resource use (two studies, 4181 patients). Providing physicians with feedback on medication adherence may improve processes of care (e.g. more medication changes, dialogue with patient, management of uncontrolled hypertension) compared to usual care (four studies, 2780 patients). None of the studies reported an adverse event due to the intervention. The certainty of evidence was low for all outcomes, mainly due to high risk of bias, high heterogeneity across studies, and indirectness of evidence.\nAuthors' conclusions\nAcross nine studies, we observed little or no evidence that provision of feedback to physicians regarding their patients' adherence to prescribed medication improved medication adherence, patient outcomes, or health resource use. Feedback about medication adherence may improve processes of care, but due to the small number of studies assessing this outcome and high risk of bias, we cannot draw firm conclusions on the effect of feedback on this outcome. Future research should use a clear, standardised definition of medication adherence and cluster-randomisation to avoid the risk of contamination.\n\nGiven the provided abstract, please respond to the following query: "Key messages". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "Providing physicians with feedback about the medication adherence of their patients may lead to little or no difference in patients' adherence to prescribed medications, patient outcomes, and health resource use, but it may improve processes of care. The certainty of the evidence is low."
}
] |
query-laysum
|
8194
|
[
{
"role": "user",
"content": "Abstract: Background\nNonsteroidal anti-inflammatory drugs (NSAIDs) are one of the most frequently prescribed drugs for the treatment of sciatica. A previous Cochrane review on the efficacy of NSAIDs summarised findings for acute and chronic low back pain (LBP) and sciatica. This is an update of the original review (2008) focusing on people suffering from sciatica.\nObjectives\nTo determine the efficacy of NSAIDs in pain reduction, overall improvement, and reported side effects in people with sciatica.\nSearch methods\nWe performed electronic searches up to 24 June 2015 in the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PubMed, and two trials registers. We searched reference lists of included studies and relevant reviews on the topics for additional trials.\nSelection criteria\nWe included randomised controlled trials (double-blind, single-blind, and open-label) that assessed the efficacy of NSAIDs in sciatica. We included all trials that compared NSAIDs to placebo, to other NSAIDs, or to other medication. Additional interventions were allowed if there was a clear contrast for the treatment with NSAIDs in the trial.\nData collection and analysis\nThree review authors independently assessed the risk of bias and extracted the data. Where feasible we calculated pooled results using Review Manager 5.3. We reported pain relief outcomes using mean difference (MD) with 95% confidence intervals (95% CI). We used risk ratios (RR) with 95% CI to report global improvement of treatment, adverse effects, and additional medication. We performed a meta-analysis if possible. We assessed level of evidence using the GRADE approach. We used standard methodological procedures recommended by The Cochrane Collaboration.\nMain results\nWe included 10 trials reported in 9 publications (N = 1651). Only one trial out of 10 was assessed at low risk of bias. Five trials used the currently recommended daily dose for the drug, and two trials used lower daily doses available over the counter. Three trials investigated NSAIDs no longer approved for human use. The follow-up duration was short in all studies but one.\nThree trials (n = 918) compared the effects of NSAIDs to those of placebo on pain reduction. The pooled mean difference showed comparable pain reduction (visual analogue scale, 0 to 100) in the NSAIDs and placebo groups (MD -4.56, 95% CI -11.11 to 1.99). Heterogeneity was high (I2 = 82%), and the quality of the evidence was very low. When we excluded one trial with a short follow-up of eight hours, the mean difference further decreased (MD -0.09, 95% CI -9.89 to 9.71). Three trials (n = 753) compared NSAIDs to placebo regarding global improvement. We found low-quality evidence that NSAIDs are more effective than placebo with a risk ratio of 1.14 (95% CI 1.03 to 1.27). One trial (n = 214) studied the effect of NSAIDs on disability, finding very low-quality evidence that NSAIDs are no more effective than placebo on disability. Four trials (n = 967) comparing NSAIDs to placebo reported adverse effects, with low-quality evidence that the risk for adverse effects is higher in the NSAID group than in the placebo group (RR 1.40, 95% CI 1.02 to 1.93). 
The adverse effects reported in this review are consistent with those previously reported in the literature.\nAuthors' conclusions\nThis updated systematic review, including 10 trials evaluating the efficacy of NSAIDs versus placebo or other drugs in people with sciatica, reports low- to very low-level evidence using the GRADE criteria. NSAIDs were not significantly more effective than placebo for pain reduction. NSAIDs showed a better global improvement compared to placebo. These findings must be interpreted with caution, as the level of evidence according to the GRADE classification was very low for the outcome pain reduction and low for global improvement due to small study samples, inconsistent results, imprecision, and a high risk of bias in the included trials. While the trials included in the analysis were not powered to detect potential rare side effects, we found an increased risk of side effects with short-term NSAID use. As NSAIDs are frequently prescribed, the risk-benefit ratio of prescribing the drug needs to be considered.\n\nGiven the provided abstract, please respond to the following query: "Background". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "NSAIDs are the most frequently prescribed medication worldwide and are commonly used to treat low back pain with or without pain radiating to the leg (sciatica)."
}
] |
query-laysum
|
8195
|
[
{
"role": "user",
"content": "Abstract: Background\nAccurate rapid diagnostic tests for SARS-CoV-2 infection would be a useful tool to help manage the COVID-19 pandemic. Testing strategies that use rapid antigen tests to detect current infection have the potential to increase access to testing, speed detection of infection, and inform clinical and public health management decisions to reduce transmission. This is the second update of this review, which was first published in 2020.\nObjectives\nTo assess the diagnostic accuracy of rapid, point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. We consider accuracy separately in symptomatic and asymptomatic population groups. Sources of heterogeneity investigated included setting and indication for testing, assay format, sample site, viral load, age, timing of test, and study design.\nSearch methods\nWe searched the COVID-19 Open Access Project living evidence database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) on 08 March 2021. We included independent evaluations from national reference laboratories, FIND and the Diagnostics Global Health website. We did not apply language restrictions.\nSelection criteria\nWe included studies of people with either suspected SARS-CoV-2 infection, known SARS-CoV-2 infection or known absence of infection, or those who were being screened for infection. We included test accuracy studies of any design that evaluated commercially produced, rapid antigen tests. We included evaluations of single applications of a test (one test result reported per person) and evaluations of serial testing (repeated antigen testing over time). Reference standards for presence or absence of infection were any laboratory-based molecular test (primarily reverse transcription polymerase chain reaction (RT-PCR)) or pre-pandemic respiratory sample.\nData collection and analysis\nWe used standard screening procedures with three people. Two people independently carried out quality assessment (using the QUADAS-2 tool) and extracted study results. Other study characteristics were extracted by one review author and checked by a second. We present sensitivity and specificity with 95% confidence intervals (CIs) for each test, and pooled data using the bivariate model. We investigated heterogeneity by including indicator variables in the random-effects logistic regression models. We tabulated results by test manufacturer and compliance with manufacturer instructions for use and according to symptom status.\nMain results\nWe included 155 study cohorts (described in 166 study reports, with 24 as preprints). The main results relate to 152 evaluations of single test applications including 100,462 unique samples (16,822 with confirmed SARS-CoV-2). Studies were mainly conducted in Europe (101/152, 66%), and evaluated 49 different commercial antigen assays. Only 23 studies compared two or more brands of test.\nRisk of bias was high because of participant selection (40, 26%); interpretation of the index test (6, 4%); weaknesses in the reference standard for absence of infection (119, 78%); and participant flow and timing 41 (27%). Characteristics of participants (45, 30%) and index test delivery (47, 31%) differed from the way in which and in whom the test was intended to be used. Nearly all studies (91%) used a single RT-PCR result to define presence or absence of infection.\nThe 152 studies of single test applications reported 228 evaluations of antigen tests. 
Estimates of sensitivity varied considerably between studies, with consistently high specificities. Average sensitivity was higher in symptomatic (73.0%, 95% CI 69.3% to 76.4%; 109 evaluations; 50,574 samples, 11,662 cases) compared to asymptomatic participants (54.7%, 95% CI 47.7% to 61.6%; 50 evaluations; 40,956 samples, 2641 cases). Average sensitivity was higher in the first week after symptom onset (80.9%, 95% CI 76.9% to 84.4%; 30 evaluations, 2408 cases) than in the second week of symptoms (53.8%, 95% CI 48.0% to 59.6%; 40 evaluations, 1119 cases). For those who were asymptomatic at the time of testing, sensitivity was higher when an epidemiological exposure to SARS-CoV-2 was suspected (64.3%, 95% CI 54.6% to 73.0%; 16 evaluations; 7677 samples, 703 cases) compared to where COVID-19 testing was reported to be widely available to anyone on presentation for testing (49.6%, 95% CI 42.1% to 57.1%; 26 evaluations; 31,904 samples, 1758 cases). Average specificity was similarly high for symptomatic (99.1%) or asymptomatic (99.7%) participants.\nWe observed a steady decline in summary sensitivities as measures of sample viral load decreased.\nSensitivity varied between brands. When tests were used according to manufacturer instructions, average sensitivities by brand ranged from 34.3% to 91.3% in symptomatic participants (20 assays with eligible data) and from 28.6% to 77.8% for asymptomatic participants (12 assays). For symptomatic participants, summary sensitivities for seven assays were 80% or more (meeting acceptable criteria set by the World Health Organization (WHO)). The WHO acceptable performance criterion of 97% specificity was met by 17 of 20 assays when tests were used according to manufacturer instructions, 12 of which demonstrated specificities above 99%. For asymptomatic participants, the sensitivities of only two assays approached but did not meet WHO acceptable performance standards in one study each; specificities for asymptomatic participants were in a similar range to those observed for symptomatic people.\nAt 5% prevalence using summary data in symptomatic people during the first week after symptom onset, the positive predictive value (PPV) of 89% means that 1 in 10 positive results will be a false positive, and around 1 in 5 cases will be missed. At 0.5% prevalence using summary data for asymptomatic people, where testing was widely available and where epidemiological exposure to COVID-19 was suspected, resulting PPVs would be 38% to 52%, meaning that between 1 in 2 and 3 in 5 positive results will be false positives, and between 1 in 2 and 1 in 3 cases will be missed.\nAuthors' conclusions\nAntigen tests vary in sensitivity. In people with signs and symptoms of COVID-19, sensitivities are highest in the first week of illness when viral loads are higher. Assays that meet appropriate performance standards, such as those set by WHO, could replace laboratory-based RT-PCR when immediate decisions about patient care must be made, or where RT-PCR cannot be delivered in a timely manner. However, they are more suitable for use as triage to RT-PCR testing. The variable sensitivity of antigen tests means that people who test negative may still be infected. Many commercially available rapid antigen tests have not been evaluated in independent validation studies.\nEvidence for testing in asymptomatic cohorts has increased, however sensitivity is lower and there is a paucity of evidence for testing in different settings. 
Questions remain about the use of antigen test-based repeat testing strategies. Further research is needed to evaluate the effectiveness of screening programmes at reducing transmission of infection, whether through mass screening or targeted approaches, including screening in schools and healthcare settings and traveller screening.\n\nGiven the provided abstract, please respond to the following query: "Why is this question important?". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "People with suspected COVID-19 need to know quickly whether they are infected, so that they can self-isolate, receive treatment, and inform close contacts. Currently, COVID-19 infection is confirmed by a laboratory test called RT-PCR, which uses specialist equipment and often takes at least 24 hours to produce a result.\nIn many places, rapid antigen tests have opened access to testing for many more people, with and without symptoms, and in locations other than healthcare settings. Faster diagnosis of COVID-19 infection could allow people to take appropriate action more quickly, with the potential to reduce the spread of COVID-19, but it is important to understand how accurate they are and the best way to use them."
}
] |
query-laysum
|
8196
|
[
{
"role": "user",
"content": "Abstract: Background\nMechanical ventilation is a potentially painful and discomforting intervention that is widely used in neonatal intensive care. Newborn infants demonstrate increased sensitivity to pain, which may affect clinical and neurodevelopmental outcomes. The use of drugs that reduce pain might be important in improving survival and neurodevelopmental outcomes.\nObjectives\nTo determine the benefits and harms of opioid analgesics for neonates (term or preterm) receiving mechanical ventilation compared to placebo or no drug, other opioids, or other analgesics or sedatives.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 9), in the Cochrane Library; MEDLINE via PubMed (1966 to 29 September 2020); Embase (1980 to 29 September 2020); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to 29 September 2020). We searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials comparing opioids to placebo or no drug, to other opioids, or to other analgesics or sedatives in newborn infants on mechanical ventilation. We excluded cross-over trials. We included term (≥ 37 weeks' gestational age) and preterm (< 37 weeks' gestational age) newborn infants on mechanical ventilation. We included any duration of drug treatment and any dosage given continuously or as bolus; we excluded studies that gave opioids to ventilated infants for procedures.\nData collection and analysis\nFor each of the included trials, we independently extracted data (e.g. number of participants, birth weight, gestational age, types of opioids) using Cochrane Effective Practice and Organisation of Care Group (EPOC) criteria and assessed the risk of bias (e.g. adequacy of randomisation, blinding, completeness of follow-up). We evaluated treatment effects using a fixed-effect model with risk ratio (RR) for categorical data and mean difference (MD) for continuous data. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included 23 studies (enrolling 2023 infants) published between 1992 and 2019. Fifteen studies (1632 infants) compared the use of morphine or fentanyl versus placebo or no intervention. Four studies included both term and preterm infants, and one study only term infants; all other studies included only preterm infants, with five studies including only very preterm infants. We are uncertain whether opioids have an effect on the Premature Infant Pain Profile (PIPP) Scale in the first 12 hours after infusion (MD -5.74, 95% confidence interval (CI) -6.88 to -4.59; 50 participants, 2 studies) and between 12 and 48 hours after infusion (MD -0.98, 95% CI -1.35 to -0.61; 963 participants, 3 studies) because of limitations in study design, high heterogeneity (inconsistency), and imprecision of estimates (very low-certainty evidence - GRADE). The use of morphine or fentanyl probably has little or no effect in reducing duration of mechanical ventilation (MD 0.23 days, 95% CI -0.38 to 0.83; 1259 participants, 7 studies; moderate-certainty evidence because of unclear risk of bias in most studies) and neonatal mortality (RR 1.12, 95% CI 0.80 to 1.55; 1189 participants, 5 studies; moderate-certainty evidence because of imprecision of estimates). 
We are uncertain whether opioids have an effect on neurodevelopmental outcomes at 18 to 24 months (RR 2.00, 95% CI 0.39 to 10.29; 78 participants, 1 study; very low-certainty evidence because of serious imprecision of the estimates and indirectness). Limited data were available for the other comparisons (i.e. two studies (54 infants) on morphine versus midazolam, three (222 infants) on morphine versus fentanyl, and one each on morphine versus diamorphine (88 infants), morphine versus remifentanil (20 infants), fentanyl versus sufentanil (20 infants), and fentanyl versus remifentanil (24 infants)). For these comparisons, no meta-analysis was conducted because each outcome was reported by only one study.\nAuthors' conclusions\nWe are uncertain whether opioids have an effect on pain and neurodevelopmental outcomes at 18 to 24 months; the use of morphine or fentanyl probably has little or no effect in reducing the duration of mechanical ventilation and neonatal mortality. Data on the other comparisons planned in this review (opioids versus analgesics; opioids versus other opioids) are extremely limited and do not allow any conclusions. In the absence of firm evidence to support a routine policy, opioids should be used selectively, based on clinical judgement and evaluation of pain indicators, although pain measurement in newborns has limitations.\n\nGiven the provided abstract, please respond to the following query: "Background". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "Background"
}
] |
query-laysum
|
8197
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatorenal syndrome is defined as renal failure in people with cirrhosis in the absence of other causes. In addition to supportive treatment such as albumin to restore fluid balance, the other potential treatments include systemic vasoconstrictor drugs (such as vasopressin analogues or noradrenaline), renal vasodilator drugs (such as dopamine), transjugular intrahepatic portosystemic shunt (TIPS), and liver support with molecular adsorbent recirculating system (MARS). There is uncertainty over the best treatment regimen for hepatorenal syndrome.\nObjectives\nTo compare the benefits and harms of different treatments for hepatorenal syndrome in people with decompensated liver cirrhosis.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, World Health Organization International Clinical Trials Registry Platform, and trial registers until December 2018 to identify randomised clinical trials on hepatorenal syndrome in people with cirrhosis.\nSelection criteria\nWe included only randomised clinical trials (irrespective of language, blinding, or publication status) in adults with cirrhosis and hepatorenal syndrome. We excluded randomised clinical trials in which participants had previously undergone liver transplantation.\nData collection and analysis\nTwo authors independently identified eligible trials and collected data. The outcomes for this review included mortality, serious adverse events, any adverse events, resolution of hepatorenal syndrome, liver transplantation, and other decompensation events. We performed a network meta-analysis with OpenBUGS using Bayesian methods and calculated the odds ratio (OR), rate ratio, hazard ratio (HR), and mean difference (MD) with 95% credible intervals (CrI) based on an available-case analysis, according to National Institute of Health and Care Excellence Decision Support Unit guidance.\nMain results\nWe included a total of 25 trials (1263 participants; 12 interventions) in the review. Twenty-three trials (1185 participants) were included in one or more outcomes. All the trials but two were at high risk of bias, and all the evidence was of low or very low certainty. The trials included participants with liver cirrhosis of varied aetiologies as well as a mixture of type I hepatorenal syndrome only, type II hepatorenal syndrome only, or people with both type I and type II hepatorenal syndrome. Participant age ranged from 42 to 60 years, and the proportion of females ranged from 5.8% to 61.5% in the trials that reported this information. The follow-up in the trials ranged from one week to six months. Overall, 59% of participants died during this period and about 35% of participants recovered from hepatorenal syndrome. The most common interventions compared were albumin plus terlipressin, albumin plus noradrenaline, and albumin alone.\nThere was no evidence of a difference in mortality (22 trials; 1153 participants) at maximal follow-up between the different interventions. None of the trials reported health-related quality of life. 
There was no evidence of differences in the proportion of people with serious adverse events (three trials; 428 participants), number of serious adverse events per participant (two trials; 166 participants), proportion of participants with any adverse events (four trials; 402 participants), the proportion of people who underwent liver transplantation at maximal follow-up (four trials; 342 participants), or other features of decompensation at maximal follow-up (one trial; 466 participants). Five trials (293 participants) reported number of any adverse events, and five trials (219 participants) reported treatment costs. Albumin plus noradrenaline had fewer adverse events per participant (rate ratio 0.51, 95% CrI 0.28 to 0.87). Eighteen trials (1047 participants) reported recovery from hepatorenal syndrome (as per definition of hepatorenal syndrome). In the direct comparisons, albumin plus midodrine plus octreotide and albumin plus octreotide had lower rates of recovery from hepatorenal syndrome than albumin plus terlipressin (HR 0.04, 95% CrI 0.00 to 0.25 and HR 0.26, 95% CrI 0.07 to 0.80, respectively). There was no evidence of differences between the groups in any of the other direct comparisons. In the network meta-analysis, albumin and albumin plus midodrine plus octreotide had lower recovery from hepatorenal syndrome compared with albumin plus terlipressin.\nFunding: two trials were funded by pharmaceutical companies; five trials were funded by parties who had no vested interest in the results of the trial; and 18 trials did not report the source of funding.\nAuthors' conclusions\nBased on very low-certainty evidence, there is no evidence of benefit or harm of any of the interventions for hepatorenal syndrome with regard to the following outcomes: all-cause mortality, serious adverse events (proportion), number of serious adverse events per participant, any adverse events (proportion), liver transplantation, or other decompensation events. Low-certainty evidence suggests that albumin plus noradrenaline had fewer 'any adverse events per participant' than albumin plus terlipressin. Low- or very low-certainty evidence also found that albumin plus midodrine plus octreotide and albumin alone had lower recovery from hepatorenal syndrome compared with albumin plus terlipressin.\nFuture randomised clinical trials should be adequately powered; employ blinding; avoid post-randomisation dropouts or planned cross-overs (or perform an intention-to-treat analysis); and report clinically important outcomes such as mortality, health-related quality of life, adverse events, and recovery from hepatorenal syndrome. Albumin plus noradrenaline and albumin plus terlipressin appear to be the interventions that should be compared in future trials.\n\nGiven the provided abstract, please respond to the following query: "Date of literature search". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "December 2018"
}
] |
query-laysum
|
8198
|
[
{
"role": "user",
"content": "Abstract: Background\nPosterior capsule opacification (PCO) is a clouding of the posterior part of the lens capsule, a skin-like transparent structure, which surrounds the crystalline lens in the human eye. PCO is the most common postoperative complication following modern cataract surgery with implantation of a posterior chamber intraocular lens (IOL). The main symptoms of PCO are a decrease in visual acuity, 'cloudy', blurred vision and reduced contrast sensitivity. PCO is treated with a neodymium:YAG (Nd:YAG) laser to create a small opening in the opaque capsule and regain a clear central visual axis. This capsulotomy might cause further ocular complications, such as raised intraocular pressure or swelling of the central retina (macular oedema). This procedure is also a significant financial burden for health care systems worldwide. In recent decades, there have been advances in the selection of IOL materials and optimisation of IOL designs to help prevent PCO formation after cataract surgery. These include changes to the side structures holding the lens in the centre of the lens capsule bag, called IOL haptics, and IOL optic edge designs.\nObjectives\nTo compare the effects of different IOL optic edge designs on PCO after cataract surgery.\nSearch methods\nWe searched CENTRAL, Ovid MEDLINE, Ovid Embase, Latin American and Caribbean Health Sciences Literature Database (LILACS), the ISRCTN registry, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) up to 17 November 2020.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared different types of IOL optic edge design. Our prespecified primary outcome was the proportion of eyes with Nd:YAG capsulotomy one year after surgery. Secondary outcomes included PCO score, best-corrected distance visual acuity (BCDVA) and quality of life score at one year. Due to availability of important long-term data, we also presented data at longer-term follow-up which is a post hoc change to our protocol.\nData collection and analysis\nWe used standard methods expected by Cochrane and the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included 10 studies (1065 people, 1834 eyes) that compared sharp- and round-edged IOLs. Eight of these studies were within-person studies whereby one eye received a sharp-edged IOL and the fellow eye a round-edged IOL. The IOL materials were acrylic (2 studies), silicone (4 studies), polymethyl methacrylate (PMMA, 3 studies) and different materials (1 study). The studies were conducted in Austria, Germany, India, Japan, Sweden and the UK. Five studies were at high risk of bias in at least one domain. We judged two studies to be at low risk of bias in all domains.\nThere were few cases of Nd:YAG capsulotomy at one year (primary outcome): 1/371 in sharp-edged and 4/371 in round-edged groups. The effect estimate was in favour of sharp-edged IOLs but the confidence intervals were very wide and compatible with higher or lower chance of Nd:YAG capsulotomy in sharp-edged compared with round-edged lenses (Peto odds ratio (OR) 0.30, 95% CI 0.05 to 1.74; I2 = 0%; 6 studies, 742 eyes). This corresponds to seven fewer cases of Nd:YAG capsulotomy per 1000 sharp-edged IOLs inserted compared with round-edged IOLs (95% CI 9 fewer to 7 more). 
We judged this as low-certainty evidence, downgrading for imprecision and risk of bias.\nA similar reduced risk of Nd:YAG capsulotomy in sharp-edged compared with round-edged IOLs was seen at two, three and five years, but as the number of Nd:YAG capsulotomy events increased over time, this effect was more precisely measured at longer follow-up: two years, risk ratio (RR) 0.35 (0.16 to 0.80); 703 eyes (6 studies); 89 fewer cases per 1000; three years, RR 0.21 (0.11 to 0.41); 538 eyes (6 studies); 170 fewer cases per 1000; five years, RR 0.21 (0.10 to 0.45); 306 eyes (4 studies); 331 fewer cases per 1000. Data at 9 years and 12 years were only available from one study.\nAll studies reported a PCO score. Four studies reported the AQUA (Automated Quantification of After-Cataract) score, four studies reported the EPCO (Evaluation of PCO) score and two studies reported another method of quantifying PCO. It was not possible to pool these data due to the way they were reported, but all studies consistently reported a statistically significant lower average PCO score (of the order of 0.5 to 3 units) with sharp-edged IOLs compared with round-edged IOLs. We judged this to be moderate-certainty evidence, downgrading for risk of bias.\nThe logMAR visual acuity score was lower (better) in eyes that received a sharp-edged IOL, but the difference was small and likely to be clinically unimportant at one year (mean difference (MD) -0.06 logMAR, 95% CI -0.12 to 0; 2 studies, 153 eyes; low-certainty evidence). Similar effects were seen at longer follow-up periods, but non-significant results were less fully reported: two years MD -0.01 logMAR (-0.05 to 0.02); 2 studies, 311 eyes; three years MD -0.09 logMAR (-0.22 to 0.03); 2 studies, 117 eyes; data at five years were only available from one study.\nNone of the studies reported quality of life. Very low-certainty evidence on adverse events did not suggest any important differences between the groups.\nAuthors' conclusions\nThis review provides evidence that sharp-edged IOLs are likely to be associated with less PCO formation than round-edged IOLs, with less Nd:YAG capsulotomy. The effects on visual acuity were less certain. The impact of these lenses on quality of life has not been assessed and there are only very low-certainty comparative data on adverse events.\n\nGiven the provided abstract, please respond to the following query: "Our confidence in our results". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "We are moderately confident about lenses with sharp edges showing a lower average score for clouding than lenses with round edges. Further evidence may increase our confidence in this result. We are less confident about vision being clearer and the lower need for laser surgery for eyes given sharp-edged lenses. Further research is likely to increase our confidence in these results."
}
] |
query-laysum
|
8199
|
[
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease (SCD) is a group of inherited disorders that result in haemoglobin abnormalities and other complications. Injury to the spleen, among other factors, contribute to persons with SCD being particularly susceptible to infection. Infants and very young children are especially vulnerable. The 'Co-operative Study of Sickle Cell Disease' observed an incidence rate for pneumococcal septicaemia of 10 per 100 person-years in children under the age of three years. Vaccines, including customary pneumococcal vaccines, may be of limited use in this age group. Therefore, prophylactic penicillin regimens may be advisable for this population. This is an update of a Cochrane Review which was first published in 2002, and previously updated, most recently in 2017.\nObjectives\nTo compare the effects of antibiotic prophylaxis against pneumococcus in children with SCD receiving antibiotic prophylaxis compared to those without in relation to:\n1. incidence of Streptococcus pneumoniae infection;\n2. mortality (as reported in the included studies);\n3. drug-related adverse events (as reported in the included studies) to the individual and the community;\n4. the impact of discontinuing at various ages on incidence of infection and mortality.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, which is comprised of references identified from comprehensive electronic database searches and also two clinical trials registries: ClinicalTrials.gov and the WHO International Registry Platform (not in 2020 given access issues relating to Covid-19 pandemic). Additionally, we carried out hand searching of relevant journals and abstract books of conference proceedings.\nDate of the most recent search: 25 January 2021.\nSelection criteria\nAll randomised or quasi-randomised controlled trials comparing prophylactic antibiotics to prevent pneumococcal infection in children with SCD with placebo, no treatment or a comparator drug.\nData collection and analysis\nThe standard methodological procedures expected by Cochrane were used. Both authors independently extracted data and assessed trial quality. The authors used the GRADE criteria to assess the certainty of the evidence.\nMain results\nSix trials were identified by the searches, of which three trials were eligible for inclusion. A total of 880 children, who were between three months to five years of age at randomization were included. The included studies were conducted in centres in the USA and in Kingston, Jamaica. In trials that investigated initiation of penicillin on risk of pneumococcal infection, the odds ratio was 0.37 (95% confidence interval 0.16 to 0.86) (two trials, 457 children) (low-certainty evidence), while for withdrawal the odds ratio was 0.49 (95% confidence interval 0.09 to 2.71) (one trial, 400 children) (low-certainty evidence). Adverse drug effects were rare and minor. Rates of pneumococcal infection were found to be relatively low in children over the age of five years.\nOverall, the certainty of the evidence for all outcomes was judged to be low. The results from the risk of bias assessment undertaken identified two domains in which the risk of bias was considered to be high, these were incomplete outcome data (attrition bias) (two trials) and allocation concealment (selection bias) (one trial). 
Domains considered to have a low risk of bias for all three trials were selective reporting (reporting bias) and blinding (performance and detection bias).\nAuthors' conclusions\nThe evidence examined was determined to be of low certainty and suggests that prophylactic penicillin significantly reduces the risk of pneumococcal infection in children with homozygous SCD, and is associated with minimal adverse reactions. Further research may help to determine the ideal age at which penicillin can safely be withdrawn.\n\nGiven the provided abstract, please respond to the following query: "Search date". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "The evidence is current to 25 January 2021."
}
] |
query-laysum
|
8200
|
[
{
"role": "user",
"content": "Abstract: Background\nThe clinical management of intermittent exotropia (X(T)) has been discussed extensively in the literature, yet there remains a lack of clarity regarding indications for intervention, the most effective form of treatment, and whether there is an optimal time in the evolution of the disease at which any given treatment should be carried out.\nObjectives\nThe objective of this review was to analyze the effects of various surgical and non-surgical treatments in randomized controlled trials (RCTs) of participants with intermittent exotropia, and to report intervention criteria and determine whether the treatment effect varies by age and subtype of X(T).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2021, Issue 1), which contains the Cochrane Eyes and Vision Trials Register; Ovid MEDLINE; Ovid Embase; Latin American and Caribbean Health Science Information database (LILACS); the ISRCTN registry; ClinicalTrials.gov, and the WHO ICTRP. The date of the search was 20 January 2021. We performed manual searches of the British Orthoptic Journal up to 2002, and the proceedings of the European Strabismological Association (ESA), International Strabismological Association (ISA), and American Association for Pediatric Ophthalmology and Strabismus meeting (AAPOS) up to 2001.\nSelection criteria\nWe included RCTs of any surgical or non-surgical treatment for intermittent exotropia.\nData collection and analysis\nWe followed standard Cochrane methodology.\nMain results\nWe included six RCTs, four of which took place in the United States, and the remaining two in Asia (Turkey, India). A total of 890 participants with basic or distance X(T) were included, most of whom were children aged 12 months to 10 years. Three of these six studies were from the 2013 version of this review. Overall, the included studies had a high risk of performance bias as masking of participants and personnel administering treatment was not possible.\nTwo RCTs compared bilateral lateral rectus recession versus unilateral lateral rectus recession with medial rectus resection, but only one RCT (n = 197) reported on the primary outcomes of this review. Bilateral lateral rectus recession likely results in little difference in motor alignment at near (MD 1.00, 95% CI −2.69 to 4.69) and distance (MD 2.00, 95% CI −1.22 to 5.22) fixation as measured in pupillary distance using PACT (moderate-certainty evidence). Bilateral lateral rectus recession may result in little to no difference in stereoacuity at near fixation (risk ratio (RR) 0.77, 95% CI 0.35 to 1.71), adverse events (RR 7.36, 95% CI 0.39 to 140.65), or quality of life measures (low-certainty evidence).\nWe conducted a meta-analysis of two RCTs comparing patching (n = 249) with active observation (n = 252), but were unable to conduct further meta-analyses due to the clinical and methodological heterogeneity in the remaining trials. We found evidence that patching was clinically more effective than active observation in improving motor alignment at near (mean difference (MD) −2.23, 95% confidence interval (CI) −4.02 to −0.44) and distance (MD −2.00, 95% CI −3.40 to −0.61) fixation as measured by prism and alternate cover test (PACT) at six months (high-certainty evidence). The evidence suggests that patching results in little to no difference in stereoacuity at near fixation (MD 0.00, 95% CI −0.07 to 0.07) (low-certainty evidence). 
Stereoacuity at distance, motor fusion test, and quality of life measures were not reported. Adverse events were also not reported, but study authors explained that they were not anticipated due to the non-surgical nature of patching.\nOne RCT (n = 38) compared prism adaptation test with eye muscle surgery versus eye muscle surgery alone. No review outcomes were reported.\nOne RCT (n = 60) compared lateral rectus recession and medial rectus plication versus lateral rectus recession and medial rectus resection. Lateral rectus recession and medial rectus plication may not improve motor alignment at distance (MD 0.66, 95% CI −1.06 to 2.38) (low-certainty evidence). The evidence for the effect of lateral rectus recession and medial rectus plication on motor fusion test performance is very uncertain (RR 0.92, 95% CI 0.48 to 1.74) (very low-certainty evidence).\nAuthors' conclusions\nPatching confers a clinical benefit in children aged 12 months to 10 years with basic- or distance-type X(T) compared with active observation. There is insufficient evidence to determine whether interventions such as bilateral lateral rectus recession versus unilateral lateral rectus recession with medial rectus resection; lateral rectus recession and medial rectus plication versus lateral rectus recession and medial rectus resection; and prism adaptation test prior to eye muscle surgery versus eye muscle surgery alone may confer any benefit.\n\nGiven the provided abstract, please respond to the following query: "What did we do?". Use language that can be easily comprehended by individuals without domain knowledge:\n
},
{
"role": "assistant",
"content": "We searched for studies that compared different treatments for intermittent exotropia against each other. We compared and summarized the results of the studies and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
] |
query-laysum
|
8201
|
[
{
"role": "user",
"content": "Abstract: Background\nMonoclonal antibodies (mAbs) are laboratory-produced molecules derived from the B cells of an infected host. They are being investigated as potential prophylaxis to prevent coronavirus disease 2019 (COVID-19).\nObjectives\nTo assess the effects of SARS-CoV-2-neutralising mAbs, including mAb fragments, to prevent infection with SARS-CoV-2 causing COVID-19; and to maintain the currency of the evidence, using a living systematic review approach.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register, MEDLINE, Embase, and three other databases on 27 April 2022. We checked references, searched citations, and contacted study authors to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated SARS-CoV-2-neutralising mAbs, including mAb fragments, alone or combined, versus an active comparator, placebo, or no intervention, for pre-exposure prophylaxis (PrEP) and postexposure prophylaxis (PEP) of COVID-19. We excluded studies of SARS-CoV-2-neutralising mAbs to treat COVID-19, as these are part of another review.\nData collection and analysis\nTwo review authors independently assessed search results, extracted data, and assessed risk of bias using Cochrane RoB 2. Prioritised outcomes were infection with SARS-CoV-2, development of clinical COVID-19 symptoms, all-cause mortality, admission to hospital, quality of life, adverse events (AEs), and serious adverse events (SAEs). We rated the certainty of evidence using GRADE.\nMain results\nWe included four RCTs of 9749 participants who were previously uninfected and unvaccinated at baseline. Median age was 42 to 76 years. Around 20% to 77.5% of participants in the PrEP studies and 35% to 100% in the PEP studies had at least one risk factor for severe COVID-19. At baseline, 72.8% to 82.2% were SARS-CoV-2 antibody seronegative.\nWe identified four ongoing studies, and two studies awaiting classification.\nPre-exposure prophylaxis\nTixagevimab/cilgavimab versus placebo\nOne study evaluated tixagevimab/cilgavimab versus placebo in participants exposed to SARS-CoV-2 wild-type, Alpha, Beta, and Delta variant. About 39.3% of participants were censored for efficacy due to unblinding and 13.8% due to vaccination. Within six months, tixagevimab/cilgavimab probably decreases infection with SARS-CoV-2 (risk ratio (RR) 0.45, 95% confidence interval (CI) 0.29 to 0.70; 4685 participants; moderate-certainty evidence), decreases development of clinical COVID-19 symptoms (RR 0.18, 95% CI 0.09 to 0.35; 5172 participants; high-certainty evidence), and may decrease admission to hospital (RR 0.03, 95% CI 0 to 0.59; 5197 participants; low-certainty evidence). Tixagevimab/cilgavimab may result in little to no difference on mortality within six months, all-grade AEs, and SAEs (low-certainty evidence). Quality of life was not reported.\nCasirivimab/imdevimab versus placebo\nOne study evaluated casirivimab/imdevimab versus placebo in participants who may have been exposed to SARS-CoV-2 wild-type, Alpha, and Delta variant. About 36.5% of participants opted for SARS-CoV-2 vaccination and had a mean of 66.1 days between last dose of intervention and vaccination. Within six months, casirivimab/imdevimab may decrease infection with SARS-CoV-2 (RR 0.01, 95% CI 0 to 0.14; 825 seronegative participants; low-certainty evidence) and may decrease development of clinical COVID-19 symptoms (RR 0.02, 95% CI 0 to 0.27; 969 participants; low-certainty evidence). 
We are uncertain whether casirivimab/imdevimab affects mortality regardless of the SARS-CoV-2 antibody serostatus. Casirivimab/imdevimab may increase all-grade AEs slightly (RR 1.14, 95% CI 0.98 to 1.31; 969 participants; low-certainty evidence). The evidence is very uncertain about the effects on grade 3 to 4 AEs and SAEs within six months. Admission to hospital and quality of life were not reported.\nPostexposure prophylaxis\nBamlanivimab versus placebo\nOne study evaluated bamlanivimab versus placebo in participants who may have been exposed to SARS-CoV-2 wild-type. Bamlanivimab probably decreases infection with SARS-CoV-2 versus placebo by day 29 (RR 0.76, 95% CI 0.59 to 0.98; 966 participants; moderate-certainty evidence), may result in little to no difference in all-cause mortality by day 60 (RR 0.83, 95% CI 0.25 to 2.70; 966 participants; low-certainty evidence), may increase all-grade AEs by week eight (RR 1.12, 95% CI 0.86 to 1.46; 966 participants; low-certainty evidence), and may slightly increase SAEs (RR 1.46, 95% CI 0.73 to 2.91; 966 participants; low-certainty evidence). Development of clinical COVID-19 symptoms, admission to hospital within 30 days, and quality of life were not reported.\nCasirivimab/imdevimab versus placebo\nOne study evaluated casirivimab/imdevimab versus placebo in participants who may have been exposed to SARS-CoV-2 wild-type, Alpha, and potentially, but less likely, the Delta variant. Within 30 days, casirivimab/imdevimab decreases infection with SARS-CoV-2 (RR 0.34, 95% CI 0.23 to 0.48; 1505 participants; high-certainty evidence), development of clinical COVID-19 symptoms (broad-term definition) (RR 0.19, 95% CI 0.10 to 0.35; 1505 participants; high-certainty evidence), may result in little to no difference in mortality (RR 3.00, 95% CI 0.12 to 73.43; 1505 participants; low-certainty evidence), and may result in little to no difference in admission to hospital. Casirivimab/imdevimab may slightly decrease grade 3 to 4 AEs (RR 0.50, 95% CI 0.24 to 1.02; 2617 participants; low-certainty evidence), decreases all-grade AEs (RR 0.70, 95% CI 0.61 to 0.80; 2617 participants; high-certainty evidence), and may result in little to no difference in SAEs in participants regardless of SARS-CoV-2 antibody serostatus. Quality of life was not reported.\nAuthors' conclusions\nFor PrEP, there is a decrease in development of clinical COVID-19 symptoms (high certainty), infection with SARS-CoV-2 (moderate certainty), and admission to hospital (low certainty) with tixagevimab/cilgavimab. There is low certainty of a decrease in infection with SARS-CoV-2 and development of clinical COVID-19 symptoms, and a higher rate for all-grade AEs, with casirivimab/imdevimab.\nFor PEP, there is moderate certainty of a decrease in infection with SARS-CoV-2 and low certainty for a higher rate for all-grade AEs with bamlanivimab. There is high certainty of a decrease in infection with SARS-CoV-2, development of clinical COVID-19 symptoms, and a higher rate for all-grade AEs with casirivimab/imdevimab.\nAlthough there is high-to-moderate certainty evidence for some outcomes, it is insufficient to draw meaningful conclusions. These findings only apply to people unvaccinated against COVID-19. They are only applicable to the variants prevailing during the study and not other variants (e.g. Omicron). In vitro, tixagevimab/cilgavimab is effective against Omicron, but there are no clinical data. 
Bamlanivimab and casirivimab/imdevimab are ineffective against Omicron in vitro.\nFurther studies are needed and publication of four ongoing studies may resolve the uncertainties.\n\nGiven the provided abstract, please respond to the following query: \"Pre-exposure prophylaxis\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One study (5197 people) compared tixagevimab/cilgavimab to placebo. Participants had been exposed to wild-type, Alpha, Beta, and Delta variants.\nTixagevimab/cilgavimab:\n– reduces development of symptoms;\n– probably reduces number of people infected;\n– may reduce number of hospital admissions;\n– may have little or no effect on deaths from any cause, unwanted effects (any severity), and serious unwanted effect.\nWe found no data for quality of life, or mild and severe unwanted effects.\nOne study (969 people) compared casirivimab/imdevimab to placebo. Participants may have been exposed to wild-type, Alpha, and Delta variants.\nCasirivimab/imdevimab:\n– may reduce number of people infected and development of symptoms;\n– resulted in no deaths;\n– may increase unwanted effects (any severity) slightly;\n– we are uncertain whether casirivimab/imdevimab may have an effect on severe and serious unwanted effects.\nWe found no data for number of people with COVID-19 within 30 days, development of symptoms within 30 days, hospital admissions within 30 days, quality of life, and mild unwanted effects."
}
] |
query-laysum
|
8202
|
[
{
"role": "user",
"content": "Abstract: Background\nDelivery arrangements include changes in who receives care and when, who provides care, the working conditions of those who provide care, coordination of care amongst different providers, where care is provided, the use of information and communication technology to deliver care, and quality and safety systems. How services are delivered can have impacts on the effectiveness, efficiency and equity of health systems. This broad overview of the findings of systematic reviews can help policymakers and other stakeholders identify strategies for addressing problems and improve the delivery of services.\nObjectives\nTo provide an overview of the available evidence from up-to-date systematic reviews about the effects of delivery arrangements for health systems in low-income countries. Secondary objectives include identifying needs and priorities for future evaluations and systematic reviews on delivery arrangements and informing refinements of the framework for delivery arrangements outlined in the review.\nMethods\nWe searched Health Systems Evidence in November 2010 and PDQ-Evidence up to 17 December 2016 for systematic reviews. We did not apply any date, language or publication status limitations in the searches. We included well-conducted systematic reviews of studies that assessed the effects of delivery arrangements on patient outcomes (health and health behaviours), the quality or utilisation of healthcare services, resource use, healthcare provider outcomes (such as sick leave), or social outcomes (such as poverty or employment) and that were published after April 2005. We excluded reviews with limitations important enough to compromise the reliability of the findings. Two overview authors independently screened reviews, extracted data, and assessed the certainty of evidence using GRADE. We prepared SUPPORT Summaries for eligible reviews, including key messages, 'Summary of findings' tables (using GRADE to assess the certainty of the evidence), and assessments of the relevance of findings to low-income countries.\nMain results\nWe identified 7272 systematic reviews and included 51 of them in this overview. We judged 6 of the 51 reviews to have important methodological limitations and the other 45 to have only minor limitations. We grouped delivery arrangements into eight categories. Some reviews provided more than one comparison and were in more than one category. 
Across these categories, the following interventions were effective; that is, they have desirable effects on at least one outcome with moderate- or high-certainty evidence and no moderate- or high-certainty evidence of undesirable effects.\nWho receives care and when: queuing strategies and antenatal care to groups of mothers.\nWho provides care: lay health workers for caring for people with hypertension, lay health workers to deliver care for mothers and children or infectious diseases, lay health workers to deliver community-based neonatal care packages, midlevel health professionals for abortion care, social support to pregnant women at risk, midwife-led care for childbearing women, non-specialist providers in mental health and neurology, and physician-nurse substitution.\nCoordination of care: hospital clinical pathways, case management for people living with HIV and AIDS, interactive communication between primary care doctors and specialists, hospital discharge planning, adding a service to an existing service and integrating delivery models, referral from primary to secondary care, physician-led versus nurse-led triage in emergency departments, and team midwifery.\nWhere care is provided: high-volume institutions, home-based care (with or without multidisciplinary team) for people living with HIV and AIDS, home-based management of malaria, home care for children with acute physical conditions, community-based interventions for childhood diarrhoea and pneumonia, out-of-facility HIV and reproductive health services for youth, and decentralised HIV care.\nInformation and communication technology: mobile phone messaging for patients with long-term illnesses, mobile phone messaging reminders for attendance at healthcare appointments, mobile phone messaging to promote adherence to antiretroviral therapy, women carrying their own case notes in pregnancy, and interventions to improve childhood vaccination.\nQuality and safety systems: decision support with clinical information systems for people living with HIV/AIDS.\nComplex interventions (cutting across delivery categories and other health system arrangements): emergency obstetric referral interventions.\nAuthors' conclusions\nA wide range of strategies have been evaluated for improving delivery arrangements in low-income countries, using sound systematic review methods in both Cochrane and non-Cochrane reviews. These reviews have assessed a range of outcomes. Most of the available evidence focuses on who provides care, where care is provided and coordination of care. For all the main categories of delivery arrangements, we identified gaps in primary research related to uncertainty about the applicability of the evidence to low-income countries, low- or very low-certainty evidence or a lack of studies.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this overview?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Overview is to provide a broad summary of what is known about the effects of delivery arrangements for health systems in low-income countries.\nThis overview is based on 51 systematic reviews. These systematic reviews searched for studies that evaluated different types of delivery arrangements. The reviews included a total of 850 studies.\nThis overview is one of a series of four Cochrane Overviews that evaluate health system arrangements."
}
] |
query-laysum | 8203 | [
{
"role": "user",
"content": "Abstract: Background\nPrimary hyperparathyroidism (PHPT), a disorder in which the parathyroid glands produce excessive amounts of parathyroid hormone, is most common in older adults and postmenopausal women. While most people with PHPT are asymptomatic at diagnosis, symptomatic disease can lead to hypercalcaemia, osteoporosis, renal stones, cardiovascular abnormalities and reduced quality of life. Surgical removal of abnormal parathyroid tissue (parathyroidectomy) is the only established treatment for adults with symptomatic PHPT to prevent exacerbation of symptoms and to be cured of PHPT. However, the benefits and risks of parathyroidectomy compared to simple observation or medical therapy for asymptomatic and mild PHPT are not well established.\nObjectives\nTo evaluate the benefits and harms of parathyroidectomy in adults with PHPT compared to simple observation or medical therapy.\nSearch methods\nWe searched CENTRAL, MEDLINE, LILACS, ClinicalTrials.gov and WHO ICTRP from their date of inception until 26 November 2021. We applied no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing parathyroidectomy with simple observation or medical therapy for the treatment of adults with PHPT.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. cure of PHPT, 2. morbidity related to PHPT and 3. serious adverse events. Our secondary outcomes were 1. all-cause mortality, 2. health-related quality of life and 3. hospitalisation for hypercalcaemia, acute renal impairment or pancreatitis. We used GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe identified eight eligible RCTs that included 447 adults with (mostly asymptomatic) PHPT; 223 participants were randomised to parathyroidectomy. Follow-up duration varied from six months to 24 months.\nOf the 223 participants (37 men) randomised to surgery, 164 were included in the analyses, of whom 163 were cured at six to 24 months (overall cure rate 99%). Parathyroidectomy compared to observation probably results in a large increase in cure rate at six to 24 months follow-up: 163/164 participants (99.4%) in the parathyroidectomy group and 0/169 participants in the observation or medical therapy group were cured of their PHPT (8 studies, 333 participants; moderate certainty).\nNo studies explicitly reported intervention effects on morbidities related to PHPT, such as osteoporosis, osteopenia, kidney dysfunction, urolithiasis, cognitive dysfunction or cardiovascular disease, although some studies reported surrogate outcomes for osteoporosis and cardiovascular disease. A post-hoc analysis revealed that parathyroidectomy, compared to observation or medical therapy, may have little or no effect after one to two years on bone mineral density (BMD) at the lumbar spine (mean difference (MD) 0.03 g/cm2,95% CI −0.05 to 0.12; 5 studies, 287 participants; very low certainty). Similarly, compared to observation, parathyroidectomy may have little or no effect on femoral neck BMD after one to two years (MD −0.01 g/cm2, 95% CI −0.13 to 0.11; 3 studies, 216 participants; very low certainty). However, the evidence is very uncertain for both BMD outcomes. Furthermore, the evidence is very uncertain about the effect of parathyroidectomy on improving left ventricular ejection fraction (MD −2.38%, 95% CI −4.77 to 0.01; 3 studies, 121 participants; very low certainty).\nFour studies reported serious adverse events. 
Three of these reported zero events in both the intervention and control groups; consequently, we were unable to include data from these three studies in the pooled analysis. The evidence suggests that parathyroidectomy compared to observation may have little or no effect on serious adverse events (RR 3.35, 95% CI 0.14 to 78.60; 4 studies, 168 participants; low certainty).\nOnly two studies reported all-cause mortality. One study could not be included in the pooled analysis as zero events were observed in both the intervention and control groups. Parathyroidectomy compared to observation may have little or no effect on all-cause mortality, but the evidence is very uncertain (RR 2.11, 95% CI 0.20 to 22.60; 2 studies, 133 participants; very low certainty).\nThree studies measured health-related quality of life using the 36-Item Short Form Health Survey (SF-36) and reported inconsistent differences in scores for different domains of the questionnaire between parathyroidectomy and observation.\nSix studies reported hospitalisations for the correction of hypercalcaemia. Two studies reported zero events in both the intervention and control groups and could not be included in the pooled analysis. Parathyroidectomy, compared to observation, may have little or no effect on hospitalisation for hypercalcaemia (RR 0.91, 95% CI 0.20 to 4.25; 6 studies, 287 participants; low certainty). There were no reported hospitalisations for renal impairment or pancreatitis.\nAuthors' conclusions\nIn accordance with the literature, our review findings suggest that parathyroidectomy, compared to simple observation or medical (etidronate) therapy, probably results in a large increase in cure rates of PHPT (with normalisation of serum calcium and parathyroid hormone levels to laboratory reference values). Parathyroidectomy, compared with observation, may have little or no effect on serious adverse events or hospitalisation for hypercalcaemia, and the evidence is very uncertain about the effect of parathyroidectomy on other short-term outcomes, such as BMD, all-cause mortality and quality of life. The high uncertainty of evidence limits the applicability of our findings to clinical practice; indeed, this systematic review provides no new insights with regard to treatment decisions for people with (asymptomatic) PHPT. In addition, the methodological limitations of the included studies, and the characteristics of the study populations (mainly comprising white women with asymptomatic PHPT), warrant caution when extrapolating the results to other populations with PHPT.\nLarge-scale multi-national, multi-ethnic and long-term RCTs are needed to explore the potential short- and long-term benefits of parathyroidectomy compared to non-surgical treatment options with regard to osteoporosis or osteopenia, urolithiasis, hospitalisation for acute kidney injury, cardiovascular disease and quality of life.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to know if parathyroidectomy was better than medical therapy or simple observation for curing mild PHPT with no symptoms and improving disease-associated complications such as osteoporosis, low bone density, kidney stones, kidney disease, cardiovascular disease (conditions affecting the heart and blood vessels) and cognitive dysfunction. We also wanted to determine if parathyroidectomy had any unwanted effects and whether it improved the health-related quality of life of people with PHPT."
}
] |
query-laysum | 8204 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a Cochrane review first published in The Cochrane Library in Issue 2, 2002 and previously updated in 2004, 2007 and 2010.\nRadiotherapy, open surgery and endolaryngeal excision (with or without laser) are all accepted modalities of treatment for early-stage glottic cancer. Case series suggest that they confer a similar survival advantage, however radiotherapy and endolaryngeal surgery offer the advantage of voice preservation. There has been an observed trend away from open surgery in recent years, however equipoise remains between radiotherapy and endolaryngeal surgery as both treatment modalities offer laryngeal preservation with similar survival rates. Opinions on optimal therapy vary across disciplines and between countries.\nObjectives\nTo compare the effectiveness of open surgery, endolaryngeal excision (with or without laser) and radiotherapy in the management of early glottic laryngeal cancer.\nSearch methods\nWe searched the Cochrane Ear, Nose and Throat Disorders Group Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL 2014, Issue 8); PubMed; EMBASE; CINAHL; Web of Science; Cambridge Scientific Abstracts; ICTRP and additional sources for published and unpublished trials. The date of the most recent search was 18 September 2014.\nSelection criteria\nRandomised controlled trials comparing open surgery, endolaryngeal resection (with or without laser) and radiotherapy.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe identified only one randomised controlled trial, which compared open surgery and radiotherapy in 234 patients with early glottic laryngeal cancer. The overall risk of bias in this study was high.\nFor T1 tumours, the five-year survival was 91.7% following radiotherapy and 100% following surgery and for T2 tumours, 88.8% following radiotherapy and 97.4% following surgery. There were no significant differences in survival between the two groups.\nFor T1 tumours, the five-year disease-free survival rate was 71.1% following radiotherapy and 100.0% following surgery, and for the T2 tumours, 60.1% following radiotherapy and 78.7% following surgery. Only the latter comparison was statistically significant (P value = 0.036), but statistical significance would not have been achieved with a two-sided test.\nData were not available on side effects, quality of life, voice outcomes or cost.\nWe identified no randomised controlled trials that included endolaryngeal surgery. A number of trials comparing endolaryngeal resection and radiotherapy have terminated early because of difficulty recruiting participants. One randomised controlled trial is still ongoing.\nAuthors' conclusions\nThere is only one randomised controlled trial comparing open surgery and radiotherapy but its interpretation is limited because of concerns about the adequacy of treatment regimens and deficiencies in the reporting of the study design and analysis.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The included study is of low quality. The evidence in this review is up to date to September 2014."
}
] |
query-laysum | 8205 | [
{
"role": "user",
"content": "Abstract: Background\n'Blue-light filtering', or 'blue-light blocking', spectacle lenses filter ultraviolet radiation and varying portions of short-wavelength visible light from reaching the eye. Various blue-light filtering lenses are commercially available. Some claims exist that they can improve visual performance with digital device use, provide retinal protection, and promote sleep quality. We investigated clinical trial evidence for these suggested effects, and considered any potential adverse effects.\nObjectives\nTo assess the effects of blue-light filtering lenses compared with non-blue-light filtering lenses, for improving visual performance, providing macular protection, and improving sleep quality in adults.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; containing the Cochrane Eyes and Vision Trials Register; 2022, Issue 3); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and WHO ICTRP, with no date or language restrictions. We last searched the electronic databases on 22 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs), involving adult participants, where blue-light filtering spectacle lenses were compared with non-blue-light filtering spectacle lenses.\nData collection and analysis\nPrimary outcomes were the change in visual fatigue score and critical flicker-fusion frequency (CFF), as continuous outcomes, between baseline and one month of follow-up. Secondary outcomes included best-corrected visual acuity (BCVA), contrast sensitivity, discomfort glare, proportion of eyes with a pathological macular finding, colour discrimination, proportion of participants with reduced daytime alertness, serum melatonin levels, subjective sleep quality, and patient satisfaction with their visual performance. We evaluated findings related to ocular and systemic adverse effects.\nWe followed standard Cochrane methods for data extraction and assessed risk of bias using the Cochrane Risk of Bias 1 (RoB 1) tool. We used GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe included 17 RCTs, with sample sizes ranging from five to 156 participants, and intervention follow-up periods from less than one day to five weeks. About half of included trials used a parallel-arm design; the rest adopted a cross-over design. A variety of participant characteristics was represented across the studies, ranging from healthy adults to individuals with mental health and sleep disorders.\nNone of the studies had a low risk of bias in all seven Cochrane RoB 1 domains. We judged 65% of studies to have a high risk of bias due to outcome assessors not being masked (detection bias) and 59% to be at high risk of bias of performance bias as participants and personnel were not masked. Thirty-five per cent of studies were pre-registered on a trial registry. We did not perform meta-analyses for any of the outcome measures, due to lack of available quantitative data, heterogenous study populations, and differences in intervention follow-up periods.\nThere may be no difference in subjective visual fatigue scores with blue-light filtering lenses compared to non-blue-light filtering lenses, at less than one week of follow-up (low-certainty evidence). One RCT reported no difference between intervention arms (mean difference (MD) 9.76 units (indicating worse symptoms), 95% confidence interval (CI) -33.95 to 53.47; 120 participants). 
Further, two studies (46 participants, combined) that measured visual fatigue scores reported no significant difference between intervention arms.\nThere may be little to no difference in CFF with blue-light filtering lenses compared to non-blue-light filtering lenses, measured at less than one day of follow-up (low-certainty evidence). One study reported no significant difference between intervention arms (MD -1.13 Hz lower (indicating poorer performance), 95% CI -3.00 to 0.74; 120 participants). Another study reported a less negative change in CFF (indicating less visual fatigue) with high- compared to low-blue-light filtering and no blue-light filtering lenses.\nCompared to non-blue-light filtering lenses, there is probably little or no effect with blue-light filtering lenses on visual performance (BCVA) (MD 0.00 logMAR units, 95% CI -0.02 to 0.02; 1 study, 156 participants; moderate-certainty evidence), and unknown effects on daytime alertness (2 RCTs, 42 participants; very low-certainty evidence); uncertainty in these effects was due to lack of available data and the small number of studies reporting these outcomes. We do not know if blue-light filtering spectacle lenses are equivalent or superior to non-blue-light filtering spectacle lenses with respect to sleep quality (very low-certainty evidence). Inconsistent findings were evident across six RCTs (148 participants); three studies reported a significant improvement in sleep scores with blue-light filtering lenses compared to non-blue-light filtering lenses, and the other three studies reported no significant difference between intervention arms. We noted differences in the populations across studies and a lack of quantitative data.\nDevice-related adverse effects were not consistently reported (9 RCTs, 333 participants; low-certainty evidence). Nine studies reported on adverse events related to study interventions; three studies described the occurrence of such events. Reported adverse events related to blue-light filtering lenses were infrequent, but included increased depressive symptoms, headache, discomfort wearing the glasses, and lower mood. Adverse events associated with non-blue-light filtering lenses were occasional hyperthymia and discomfort wearing the spectacles.\nWe were unable to determine whether blue-light filtering lenses affect contrast sensitivity, colour discrimination, discomfort glare, macular health, serum melatonin levels or overall patient visual satisfaction, compared to non-blue-light filtering lenses, as none of the studies evaluated these outcomes.\nAuthors' conclusions\nThis systematic review found that blue-light filtering spectacle lenses may not attenuate symptoms of eye strain with computer use, over a short-term follow-up period, compared to non-blue-light filtering lenses. Further, this review found no clinically meaningful difference in changes to CFF with blue-light filtering lenses compared to non-blue-light filtering lenses. Based on the current best available evidence, there is probably little or no effect of blue-light filtering lenses on BCVA compared with non-blue-light filtering lenses. Potential effects on sleep quality were also indeterminate, with included trials reporting mixed outcomes among heterogeneous study populations. There was no evidence from RCT publications relating to the outcomes of contrast sensitivity, colour discrimination, discomfort glare, macular health, serum melatonin levels, or overall patient visual satisfaction.
Future high-quality randomised trials are required to define more clearly the effects of blue-light filtering lenses on visual performance, macular health and sleep, in adult populations.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The Cochrane Review authors searched for studies that had been published up to 22 March 2022."
}
] |
query-laysum | 8206 | [
{
"role": "user",
"content": "Abstract: Background\nThe use of mechanical thrombectomy to restore intracranial blood flow after proximal large artery occlusion by a thrombus has increased over time and led to better outcomes than intravenous thrombolytic therapy alone. Currently, the type of anaesthetic technique during mechanical thrombectomy is under debate as having a relevant impact on neurological outcomes.\nObjectives\nTo assess the effects of different types of anaesthesia for endovascular interventions in people with acute ischaemic stroke.\nSearch methods\nWe searched the Cochrane Stroke Group Specialised Register of Trials on 5 July 2022, and CENTRAL, MEDLINE, and seven other databases on 21 March 2022. We performed searches of reference lists of included trials, grey literature sources, and other systematic reviews.\nSelection criteria\nWe included all randomised controlled trials with a parallel design that compared general anaesthesia versus local anaesthesia, conscious sedation anaesthesia, or monitored care anaesthesia for mechanical thrombectomy in acute ischaemic stroke. We also included studies reported as full-text, those published as abstract only, and unpublished data. We excluded quasi-randomised trials, studies without a comparator group, and studies with a retrospective design.\nData collection and analysis\nTwo review authors independently applied the inclusion criteria, extracted data, and assessed the risk of bias and the certainty of the evidence using the GRADE approach. The outcomes were assessed at different time periods, ranging from the onset of the stroke symptoms to 90 days after the start of the intervention. The main outcomes were functional outcome, neurological impairment, stroke-related mortality, all intracranial haemorrhage, target artery revascularisation status, time to revascularisation, adverse events, and quality of life. All included studies reported data for early (up to 30 days) and long-term (above 30 days) time points.\nMain results\nWe included seven trials with 982 participants, which investigated the type of anaesthesia for endovascular treatment in large vessel occlusion in the intracranial circulation. The outcomes were assessed at different time periods, ranging from the onset of stroke symptoms to 90 days after the procedure. Therefore, all included studies reported data for early (up to 30 days) and long-term (above 30 up to 90 days) time points.\nGeneral anaesthesia versus non-general anaesthesia(early)\nWe are uncertain about the effect of general anaesthesia on functional outcomes compared to non-general anaesthesia (mean difference (MD) 0, 95% confidence interval (CI) –0.31 to 0.31; P = 1.0; 1 study, 90 participants; very low-certainty evidence) and in time to revascularisation from groin puncture until the arterial reperfusion (MD 2.91 minutes, 95% CI –5.11 to 10.92; P = 0.48; I² = 48%; 5 studies, 498 participants; very low-certainty evidence). General anaesthesia may lead to no difference in neurological impairment up to 48 hours after the procedure (MD –0.29, 95% CI –1.18 to 0.59; P = 0.52; I² = 0%; 7 studies, 982 participants; low-certainty evidence), and in stroke-related mortality (risk ratio (RR) 0.98, 95% CI 0.52 to 1.84; P = 0.94; I² = 0%; 3 studies, 330 participants; low-certainty evidence), all intracranial haemorrhages (RR 0.92, 95% CI 0.65 to 1.29; P = 0.63; I² = 0%; 5 studies, 693 participants; low-certainty evidence) compared to non-general anaesthesia. 
General anaesthesia may reduce adverse events (haemodynamic instability) compared to non-general anaesthesia (RR 0.21, 95% CI 0.05 to 0.79; P = 0.02; I² = 71%; 2 studies, 229 participants; low-certainty evidence). General anaesthesia improves target artery revascularisation compared to non-general anaesthesia (RR 1.10, 95% CI 1.02 to 1.18; P = 0.02; I² = 29%; 7 studies, 982 participants; moderate-certainty evidence). There were no available data for quality of life.\nGeneral anaesthesia versus non-general anaesthesia (long-term)\nThere is no difference between general anaesthesia and non-general anaesthesia for dichotomous and continuous functional outcomes (dichotomous: RR 1.21, 95% CI 0.93 to 1.58; P = 0.16; I² = 29%; 4 studies, 625 participants; low-certainty evidence; continuous: MD –0.14, 95% CI –0.34 to 0.06; P = 0.17; I² = 0%; 7 studies, 978 participants; low-certainty evidence). General anaesthesia showed no changes in stroke-related mortality compared to non-general anaesthesia (RR 0.88, 95% CI 0.64 to 1.22; P = 0.44; I² = 12%; 6 studies, 843 participants; low-certainty evidence). There were no available data for neurological impairment, all intracranial haemorrhages, target artery revascularisation status, time to revascularisation from groin puncture until the arterial reperfusion, adverse events (haemodynamic instability), or quality of life.\nOngoing studies\nWe identified eight ongoing studies. Five studies compared general anaesthesia versus conscious sedation anaesthesia, one study compared general anaesthesia versus conscious sedation anaesthesia plus local anaesthesia, and two studies compared general anaesthesia versus local anaesthesia. Of these studies, seven plan to report data on functional outcomes using the modified Rankin Scale, five studies on neurological impairment, six studies on stroke-related mortality, two studies on all intracranial haemorrhage, five studies on target artery revascularisation status, four studies on time to revascularisation, and four studies on adverse events. One ongoing study plans to report data on quality of life. One study did not plan to report any outcome of interest for this review.\nAuthors' conclusions\nIn early outcomes, general anaesthesia improves target artery revascularisation compared to non-general anaesthesia with moderate-certainty evidence. General anaesthesia may reduce adverse events (haemodynamic instability) compared to non-general anaesthesia with low-certainty evidence. We found no evidence of a difference in neurological impairment, stroke-related mortality, all intracranial haemorrhage and haemodynamic instability adverse events between groups with low-certainty evidence. We are uncertain whether general anaesthesia improves functional outcomes and time to revascularisation because the certainty of the evidence is very low.\nHowever, regarding long-term outcomes, general anaesthesia makes no difference to functional outcomes compared to non-general anaesthesia with low-certainty evidence. General anaesthesia did not change stroke-related mortality when compared to non-general anaesthesia with low-certainty evidence.
There were no reported data for other outcomes.\nIn view of the limited evidence of effect, more randomised controlled trials with a large number of participants and good protocol design with a low risk of bias should be performed to reduce our uncertainty and to aid decision-making in the choice of anaesthesia.\n\nGiven the provided abstract, please respond to the following query: \"Reliability of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have either little or moderate confidence in these results because, in most studies, it was possible that researchers collecting information about the outcomes of surgery knew which type of anaesthetic people had been given. This could have influenced their assessments. Also, a small number of trials were included with a small population. Furthermore, the variability between included studies, management and anaesthetic type, type of recanalisation therapy, and the experience of the healthcare provider involved in the procedure may have had a significant influence on outcomes."
}
] |
query-laysum | 8207 | [
{
"role": "user",
"content": "Abstract: Background\nWork disability such as sickness absence is common in people with depression.\nObjectives\nTo evaluate the effectiveness of interventions aimed at reducing work disability in employees with depressive disorders.\nSearch methods\nWe searched CENTRAL (The Cochrane Library), MEDLINE, Embase, CINAHL, and PsycINFO until April 4th 2020.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster-RCTs of work-directed and clinical interventions for depressed people that included sickness absence days or being off work as an outcome. We also analysed the effects on depression and work functioning.\nData collection and analysis\nTwo review authors independently extracted the data and rated the certainty of the evidence using GRADE. We used standardised mean differences (SMDs) or risk ratios (RR) with 95% confidence intervals (CI) to pool study results in studies we judged to be sufficiently similar.\nMain results\nIn this update, we added 23 new studies. In total, we included 45 studies with 88 study arms, involving 12,109 participants with either a major depressive disorder or a high level of depressive symptoms.\nRisk of bias\nThe most common types of bias risk were detection bias (27 studies) and attrition bias (22 studies), both for the outcome of sickness absence.\nWork-directed interventions\nWork-directed interventionscombined with clinical interventions\nA combination of a work-directed intervention and a clinical intervention probably reduces sickness absence days within the first year of follow-up (SMD -0.25, 95% CI -0.38 to -0.12; 9 studies; moderate-certainty evidence). This translates back to 0.5 fewer (95% CI -0.7 to -0.2) sick leave days in the past two weeks or 25 fewer days during one year (95% CI -37.5 to -11.8). The intervention does not lead to fewer persons being off work beyond one year follow-up (RR 1.08, 95% CI 0.64 to 1.83; 2 studies, high-certainty evidence). The intervention may reduce depressive symptoms (SMD -0.25, 95% CI -0.49 to -0.01; 8 studies, low-certainty evidence) and probably has a small effect on work functioning (SMD -0.19, 95% CI -0.42 to 0.06; 5 studies, moderate-certainty evidence) within the first year of follow-up.\nStand alone work-directed interventions\nA specific work-directed intervention alone may increase the number of sickness absence days compared with work-directed care as usual (SMD 0.39, 95% CI 0.04 to 0.74; 2 studies, low-certainty evidence) but probably does not lead to more people being off work within the first year of follow-up (RR 0.93, 95% CI 0.77 to 1.11; 1 study, moderate-certainty evidence) or beyond (RR 1.00, 95% CI 0.82 to 1.22; 2 studies, moderate-certainty evidence). There is probably no effect on depressive symptoms (SMD -0.10, 95% -0.30 CI to 0.10; 4 studies, moderate-certainty evidence) within the first year of follow-up and there may be no effect on depressive symptoms beyond that time (SMD 0.18, 95% CI -0.13 to 0.49; 1 study, low-certainty evidence). The intervention may also not lead to better work functioning (SMD -0.32, 95% CI -0.90 to 0.26; 1 study, low-certainty evidence) within the first year of follow-up.\nPsychological interventions\nA psychological intervention, either face-to-face, or an E-mental health intervention, with or without professional guidance, may reduce the number of sickness absence days, compared with care as usual (SMD -0.15, 95% CI -0.28 to -0.03; 9 studies, low-certainty evidence). 
It may also reduce depressive symptoms (SMD -0.30, 95% CI -0.45 to -0.15, 8 studies, low-certainty evidence). We are uncertain whether these psychological interventions improve work ability (SMD -0.15, 95% CI -0.46 to 0.57; 1 study; very low-certainty evidence).\nPsychological intervention combined with antidepressant medication\nTwo studies compared the effect of a psychological intervention combined with antidepressants to antidepressants alone. One study combined psychodynamic therapy with tricyclic antidepressant (TCA) medication and another combined telephone-administered cognitive behavioural therapy (CBT) with a selective serotonin reuptake inhibitor (SSRI). We are uncertain if this intervention reduces the number of sickness absence days (SMD -0.38, 95% CI -0.99 to 0.24; 2 studies, very low-certainty evidence) but found that there may be no effect on depressive symptoms (SMD -0.19, 95% CI -0.50 to 0.12; 2 studies, low-certainty evidence).\nAntidepressant medication only\nThree studies compared the effectiveness of SSRI to selective norepinephrine reuptake inhibitor (SNRI) medication on reducing sickness absence and yielded highly inconsistent results.\nImproved care\nOverall, interventions to improve care did not lead to fewer sickness absence days, compared to care as usual (SMD -0.05, 95% CI -0.16 to 0.06; 7 studies, moderate-certainty evidence). However, in studies with a low risk of bias, the intervention probably leads to fewer sickness absence days in the first year of follow-up (SMD -0.20, 95% CI -0.35 to -0.05; 2 studies; moderate-certainty evidence). Improved care probably leads to fewer depressive symptoms (SMD -0.21, 95% CI -0.35 to -0.07; 7 studies, moderate-certainty evidence) but may possibly lead to a decrease in work functioning (SMD 0.5, 95% CI 0.34 to 0.66; 1 study; moderate-certainty evidence).\nExercise\nSupervised strength exercise may reduce sickness absence, compared to relaxation (SMD -1.11; 95% CI -1.68 to -0.54; one study, low-certainty evidence). However, aerobic exercise probably is not more effective than relaxation or stretching (SMD -0.06; 95% CI -0.36 to 0.24; 2 studies, moderate-certainty evidence). Both studies found no differences between the two conditions in depressive symptoms.\nAuthors' conclusions\nA combination of a work-directed intervention and a clinical intervention probably reduces the number of sickness absence days, but at the end of one year or longer follow-up, this does not lead to more people in the intervention group being at work. The intervention may also reduce depressive symptoms and probably increases work functioning more than care as usual. Specific work-directed interventions may not be more effective than usual work-directed care alone. Psychological interventions may reduce the number of sickness absence days, compared with care as usual. Interventions to improve clinical care probably lead to lower sickness absence and lower levels of depression, compared with care as usual. There was no evidence of a difference in effect on sickness absence of one antidepressant medication compared to another. Further research is needed to assess which combination of work-directed and clinical interventions works best.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 45 studies in 12,109 people with depression. The studies took place in Europe (34 studies), the USA (8), Australia (2) and Canada (1).\nThe effects of 'care as usual' were compared with those of workplace changes and clinical programmes to find out:\nhow many days people with depression were on sick leave\nhow many people with depression were off work;\npeople's symptoms of depression; and\nhow well people with depression could cope with their work."
}
] |
query-laysum | 8208 | [
{
"role": "user",
"content": "Abstract: Background\nDrowning is responsible for an estimated 320,000 deaths a year, and over 90% of drowning mortality occurs in low- to middle-income countries (LMICs), with peak drowning rates among children aged 1 to 4 years. In this age group, mortality due to drowning is particularly common in rural settings and about 75% of drowning accidents happen in natural bodies of water close to the home. Providing adequate child supervision can protect children from drowning, and organized formal day care programs could offer a way to achieve this.\nObjectives\nPrimary objective\n• To assess the effects of day care programs for children under 6 years of age on drowning-related mortality or morbidity, or on total drowning accidents (fatal and non-fatal), in LMICs, compared to no day care programs or other drowning prevention interventions\nSecondary objectives\n• To assess the effects of day care programs in LMICs for children under 6 years of age on unsafe water exposure\n• To assess safety within these programs (e.g. transmission of infection within day care, physical or sexual abuse of children within day care)\n• To assess the incidence of unintentional injury within these programs\n• To describe the cost-effectiveness of such programs, in relation to averted drowning-related mortality or morbidity\nSearch methods\nOn November 23, 2019, and for an update on August 18, 2020, we searched MEDLINE (PubMed), Embase, CENTRAL, ERIC, and CINAHL, as well as two trial registries. On December 16, 2019, and for an update on February 9, 2021, we searched 12 other resources, including websites of organizations that develop programs targeted to children.\nSelection criteria\nWe included randomized, quasi-randomized, and non-randomized controlled studies (with explicitly listed specific study design features) that implemented formal day care programs as a single program or combined with additional out-of-day care components (such as educational activities aimed at preventing injury or drowning or early childhood development activities) for children of preschool age (below 6 years of age) in LMICs for comparison with no such programs or with other drowning prevention interventions. Studies had to report at least one outcome related to drowning or injury prevention for the children enrolled.\nData collection and analysis\nTwo review authors independently performed study selection and data extraction, as well as risk of bias and GRADE assessment.\nMain results\nTwo non-randomized observational studies, conducted in rural Bangladesh, involving a total of 252,631 participants, met the inclusion criteria for this review. One of these studies compared a formal day care program combined with parent education, playpens provided to parents, and community-based activities as additional out-of-day care components versus no such program. Overall we assessed this study to be at moderate risk of bias (moderate risk of bias due to confounding, low risk of bias for other domains). This study showed that implementation of a formal day care program combined with parent education, provision of playpens to parents, and community-based activities, in a rural area with a high drowning incidence, likely reduces the risk of death from drowning over the study period of 4 years and 8 months compared to no day care program (hazard ratio 0.18, 95% confidence interval [CI] 0.06 to 0.58; 1 study, 136,577 participants; moderate-certainty evidence). 
Drowning morbidity (non-fatal drowning resulting in complications), total drowning (fatal and non-fatal), unsafe water exposure, and program safety (e.g. transmission of infection within day care, physical or sexual abuse of children within day care) were not reported, nor was the incidence of other unintentional injuries. Cost-effectiveness was reported as 812 USD (95% CI 589 to 1777) per disability-adjusted life-year averted as a consequence of drowning (moderate-certainty evidence). The second study compared day care programs with or without playpens provided to parents as an additional component versus only playpens provided to parents as an alternative drowning prevention intervention. Overall we assessed the study to be at critical risk of bias because we judged bias due to confounding to be at critical risk. As the certainty of evidence was very low, we are uncertain about the effects on drowning mortality rate of implementing a day care program compared to providing playpens (rate ratio 0.25, 95% CI 0.15 to 0.41; 1 study; 76,575 participants; very low-certainty evidence). Likewise, we are uncertain about the effects of a day care program with playpens provided as an additional component versus playpens provided alone (rate ratio 0.06, 95% CI 0.02 to 0.12; 1 study, 45,460 participants; very low-certainty evidence). The other outcomes of interest - drowning morbidity, total drowning, unsafe water exposure, program safety, incidence of other unintentional injuries, and cost-effectiveness - were not reported.\nAuthors' conclusions\nThis review provides evidence suggesting that a day care program with additional out-of-day care components such as community-based education, parent education, and playpens provided to parents likely reduces the drowning mortality risk in regions with a high burden of drowning compared to no intervention.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 2 studies that involved 252,631 children and took place in rural Bangladesh.\nOne study lasted 4 years and 8 months. It assessed the effects of a day care program that combined parent education, playpens provided to parents, and community education as compared to the effects of no day care program.\nThe other study lasted 3 years and 4 months. The effects of a day care program alone and the effects of a combined program of day care with playpens provided to parents were each compared with the effects of only playpens provided to parents."
}
] |
query-laysum | 8209 | [
{
"role": "user",
"content": "Abstract: Background\nSelf-harm (SH; intentional self-poisoning or self-injury) is common, often repeated, and associated with suicide. This is an update of a broader Cochrane review first published in 1998, previously updated in 1999, and now split into three separate reviews. This review focuses on psychosocial interventions in adults who engage in self-harm.\nObjectives\nTo assess the effects of specific psychosocial treatments versus treatment as usual, enhanced usual care or other forms of psychological therapy, in adults following SH.\nSearch methods\nThe Cochrane Depression, Anxiety and Neurosis Group (CCDAN) trials coordinator searched the CCDAN Clinical Trials Register (to 29 April 2015). This register includes relevant randomised controlled trials (RCTs) from: the Cochrane Library (all years), MEDLINE (1950 to date), EMBASE (1974 to date), and PsycINFO (1967 to date).\nSelection criteria\nWe included RCTs comparing psychosocial treatments with treatment as usual (TAU), enhanced usual care (EUC) or alternative treatments in adults with a recent (within six months) episode of SH resulting in presentation to clinical services.\nData collection and analysis\nWe used Cochrane's standard methodological procedures.\nMain results\nWe included 55 trials, with a total of 17,699 participants. Eighteen trials investigated cognitive-behavioural-based psychotherapy (CBT-based psychotherapy; comprising cognitive-behavioural, problem-solving therapy or both). Nine investigated interventions for multiple repetition of SH/probable personality disorder, comprising emotion-regulation group-based psychotherapy, mentalisation, and dialectical behaviour therapy (DBT). Four investigated case management, and 11 examined remote contact interventions (postcards, emergency cards, telephone contact). Most other interventions were evaluated in only single small trials of moderate to very low quality.\nThere was a significant treatment effect for CBT-based psychotherapy compared to TAU at final follow-up in terms of fewer participants repeating SH (odds ratio (OR) 0.70, 95% confidence interval (CI) 0.55 to 0.88; number of studies k = 17; N = 2665; GRADE: low quality evidence), but with no reduction in frequency of SH (mean difference (MD) -0.21, 95% CI -0.68 to 0.26; k = 6; N = 594; GRADE: low quality).\nFor interventions typically delivered to individuals with a history of multiple episodes of SH/probable personality disorder, group-based emotion-regulation psychotherapy and mentalisation were associated with significantly reduced repetition when compared to TAU: group-based emotion-regulation psychotherapy (OR 0.34, 95% CI 0.13 to 0.88; k = 2; N = 83; GRADE: low quality), mentalisation (OR 0.35, 95% CI 0.17 to 0.73; k = 1; N = 134; GRADE: moderate quality). Compared with TAU, dialectical behaviour therapy (DBT) showed a significant reduction in frequency of SH at final follow-up (MD -18.82, 95% CI -36.68 to -0.95; k = 3; N = 292; GRADE: low quality) but not in the proportion of individuals repeating SH (OR 0.57, 95% CI 0.21 to 1.59, k = 3; N = 247; GRADE: low quality). Compared with an alternative form of psychological therapy, DBT-oriented therapy was also associated with a significant treatment effect for repetition of SH at final follow-up (OR 0.05, 95% CI 0.00 to 0.49; k = 1; N = 24; GRADE: low quality). 
However, neither DBT vs 'treatment by expert' (OR 1.18, 95% CI 0.35 to 3.95; k = 1; N = 97; GRADE: very low quality) nor prolonged exposure DBT vs standard exposure DBT (OR 0.67, 95% CI 0.08 to 5.68; k = 1; N = 18; GRADE: low quality) were associated with a significant reduction in repetition of SH.\nCase management was not associated with a significant reduction in repetition of SH at post intervention compared to either TAU or enhanced usual care (OR 0.78, 95% CI 0.47 to 1.30; k = 4; N = 1608; GRADE: moderate quality). Continuity of care by the same therapist vs a different therapist was also not associated with a significant treatment effect for repetition (OR 0.28, 95% CI 0.07 to 1.10; k = 1; N = 136; GRADE: very low quality). None of the following remote contact interventions were associated with fewer participants repeating SH compared with TAU: adherence enhancement (OR 0.57, 95% CI 0.32 to 1.02; k = 1; N = 391; GRADE: low quality), mixed multimodal interventions (comprising psychological therapy and remote contact-based interventions) (OR 0.98, 95% CI 0.68 to 1.43; k = 1 study; N = 684; GRADE: low quality), including a culturally adapted form of this intervention (OR 0.83, 95% CI 0.44 to 1.55; k = 1; N = 167; GRADE: low quality), postcards (OR 0.87, 95% CI 0.62 to 1.23; k = 4; N = 3277; GRADE: very low quality), emergency cards (OR 0.82, 95% CI 0.31 to 2.14; k = 2; N = 1039; GRADE: low quality), general practitioner's letter (OR 1.15, 95% CI 0.93 to 1.44; k = 1; N = 1932; GRADE: moderate quality), telephone contact (OR 0.74, 95% CI 0.42 to 1.32; k = 3; N = 840; GRADE: very low quality), and mobile telephone-based psychological therapy (OR not estimable due to zero cell counts; GRADE: low quality).\nNone of the following mixed interventions were associated with reduced repetition of SH compared to either alternative forms of psychological therapy: interpersonal problem-solving skills training, behaviour therapy, home-based problem-solving therapy, long-term psychotherapy; or to TAU: provision of information and support, treatment for alcohol misuse, intensive inpatient and community treatment, general hospital admission, or intensive outpatient treatment.\nWe had only limited evidence on whether the intervention had different effects in men and women. Data on adverse effects, other than planned outcomes relating to suicidal behaviour, were not reported.\nAuthors' conclusions\nCBT-based psychological therapy can result in fewer individuals repeating SH; however, the quality of this evidence, assessed using GRADE criteria, ranged between moderate and low. Dialectical behaviour therapy for people with multiple episodes of SH/probable personality disorder may lead to a reduction in frequency of SH, but this finding is based on low quality evidence. Case management and remote contact interventions did not appear to have any benefits in terms of reducing repetition of SH. Other therapeutic approaches were mostly evaluated in single trials of moderate to very low quality such that the evidence relating to these interventions is inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Self harm (SH), which includes non-fatal intentional self-poisoning/overdose and self-injury, is a major problem in many countries and is linked to risk of future suicide. It is distressing for both patients and their families and friends, and places large demands on clinical services. It is therefore important to assess the evidence on treatments for SH patients."
}
] |
query-laysum | 8210 | [
{
"role": "user",
"content": "Abstract: Background\nCrimean Congo haemorrhagic fever (CCHF) is a tick-borne disease that occurs in parts of Asia, Europe and Africa. Since 2000 the infection has caused epidemics in Turkey, Iran, Russia, Uganda and Pakistan. Good-quality general supportive medical care helps reduce mortality. There is uncertainty and controversy about treating CCHF with the antiviral drug ribavirin.\nObjectives\nTo assess the effects of ribavirin for treating people with Crimean Congo haemorrhagic fever.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register; the Central Register of Controlled Trials (CENTRAL); MEDLINE (PubMed); Embase (OVID); Science Citation Index-Expanded, Social Sciences Citation index, conference proceedings (Web of Science); and CINAHL (EBSCOHost). We also searched the WHO International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov for trials in progress. We conducted all searches up to 16 October 2017. We also contacted experts in the field and obtained further studies from these sources.\nSelection criteria\nWe evaluated studies assessing the use of ribavirin in people with suspected or confirmed Crimean Congo haemorrhagic fever. We included randomised control trials (RCTs); non-randomised studies (NRSs) that included more than 10 participants designed as cohort studies with comparators; and case-control studies.\nData collection and analysis\nTwo review authors assessed eligibility, risk of bias, and extracted data. For non-randomized studies we used the ROBINS-I tool to assess risk of bias. The main effects analysis included all studies where we judged the risk of bias to be low, moderate or high. We summarized dichotomous outcomes using risk ratios (RRs) and continuous outcomes using mean differences (MDs), and used meta-analyses where appropriate. We carried out a subsidiary appraisal and analysis of studies with critical risk of bias for the primary outcome, as these are often cited to support using ribavirin.\nMain results\nFor the main effects analysis, five studies met our inclusion criteria: one RCT with 136 participants and four non-randomized studies with 612 participants. We excluded 18 non-randomized studies with critical risk of bias, where none had attempted to control for confounding.\nWe do not know if ribavirin reduces mortality (1 RCT; RR 1.13, 95% confidence interval (CI) 0.29 to 4.32; 136 participants; very low-certainty evidence; 3 non-randomized studies; RR 0.72, 95% CI 0.41 to 1.28; 549 participants; very low-certainty evidence). We do not know if ribavirin reduces the length of stay in hospital (1 RCT: mean difference (MD) 0.70 days, 95% CI -0.39 to 1.79; 136 participants; and 1 non-randomized study: MD -0.80, 95% CI -2.70 to 1.10; 50 participants; very low-certainty evidence). We do not know if it reduces the risk of patients needing platelet transfusions (1 RCT: RR 1.23, 95% CI 0.77 to 1.96; 136 participants; very low-certainty evidence). For adverse effects (including haemolytic anaemia and a need to discontinue treatment), we do not know whether there is an increased risk with ribavirin in people with CCHF as data are insufficient.\nWe do not know if adding ribavirin to early supportive care improves outcomes. 
One non-randomized study assessed mortality in people receiving ribavirin and supportive care within four days or less from symptom onset compared to after four days since symptom onset: mortality was lower in the group receiving early supportive care and ribavirin, but it is not possible to distinguish between the effects of ribavirin and early supportive medical care alone.\nIn the subsidiary analysis, 18 studies compared people receiving ribavirin with those not receiving ribavirin. All had a critical risk of bias due to confounding, reflected in the mortality point estimates favouring ribavirin.\nAuthors' conclusions\nWe do not know if ribavirin is effective for treating Crimean Congo haemorrhagic fever. Non-randomized studies are often cited as evidence of an effect, but the risk of bias in these studies is high.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is insufficient reliable evidence to show whether ribavirin is effective in treating Crimean Congo haemorrhagic fever. A randomised clinical trial could help answer this question."
}
] |
query-laysum | 8211 | [
{
"role": "user",
"content": "Abstract: Background\nFollowing middle ear ventilation tube (tympanostomy tube or grommet) insertion, most surgeons advise that a child's ears should be kept dry during the immediate postoperative period. Following the initial period some surgeons will permit swimming or bathing, whereas other surgeons will recommend ongoing water precautions. A large number of studies have been conducted to explore the association between water exposure and ear infections in children with ventilation tubes, however a range of differing conclusions exist regarding the need for water precautions and there is wide variation in clinical practice.\nObjectives\nTo assess the effectiveness of water precautions for the prevention of ear infections in children with ventilation tubes (grommets), at any time while the tubes are in place.\nSearch methods\nThe Cochrane ENT Trials Search Co-ordinator searched the ENT Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 8); PubMed; EMBASE; CINAHL; Web of Science; Clinicaltrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 1 September 2015.\nSelection criteria\nRandomised controlled trials recruiting children (0 to 17 years) with ventilation tubes and assessing the effect of water precautions while the tubes are in place. We considered all forms of water precautions, including behavioural (i.e. avoidance or swimming/bathing restrictions) and mechanical (ear plugs/moulds or hats/bands).\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcome measures were episodes of otorrhoea and adverse effects; secondary outcomes were antimicrobial prescriptions for ear infections, ventilation tube extrusion, surgical intervention to remove ventilation tubes and hearing outcomes.\nMain results\nTwo randomised controlled trials recruiting a total of 413 patients met the criteria for inclusion in our review; one study had a low risk of bias and the other study had a high risk of bias.\nEar plugs versus control\nOne study recruited 201 children (aged six months to six years) who underwent myringotomy and ventilation tube insertion. The study compared an intervention group who were instructed to swim and bathe with ear plugs with a control group; the participants were followed up at one-month intervals for one year. This study, with low risk of bias, showed that the use of ear plugs results in a small but statistically significant reduction in the rate of otorrhoea from 1.2 episodes to 0.84 episodes in the year of follow-up (mean difference (MD) -0.36 episodes per year, 95% confidence interval (CI) -0.45 to -0.27). There was no significant difference in ventilation tube extrusion or hearing outcomes between the two study arms. No child required surgical intervention to remove ventilation tubes and no adverse events were reported.\nWater avoidance versus control\nAnother study recruited 212 children (aged three months to 12 years) who underwent myringotomy and ventilation tube insertion. The study compared an intervention group who were instructed not to swim or submerge their heads while bathing with a control group; the participants were followed up at three-month intervals for one year. This study, with high risk of bias, did not show any evidence of a reduction or increase in the rate of otorrhoea (1.17 episodes per year in both groups; MD 0 episodes, 95% CI -0.14 to 0.14). 
No other outcomes were reported for this study and no adverse events were reported.\nQuality of evidence\nThe overall quality (GRADE) of the body of evidence for the effect of ear plugs on the rate of otorrhoea and the effect of water avoidance on the rate of otorrhoea is low and very low, respectively.\nAuthors' conclusions\nThe baseline rate of ventilation tube otorrhoea and the morbidity associated with it is usually low and therefore careful prior consideration must be given to the efficacy, costs and burdens of any intervention aimed at reducing this rate.\nWhile there is some evidence to suggest that wearing ear plugs reduces the rate of otorrhoea in children with ventilation tubes, clinicians and parents should understand that the absolute reduction in the number of episodes of otorrhoea appears to be very small and is unlikely to be clinically significant. Based on the data available, an average child would have to wear ear plugs for 2.8 years to prevent one episode of otorrhoea.\nSome evidence suggests that advising children to avoid swimming or head immersion during bathing does not affect rates of otorrhoea, although good quality data are lacking in this area. Currently, consensus guidelines therefore recommend against the routine use of water precautions on the basis that the limited clinical benefit is outweighed by the associated cost, inconvenience and anxiety.\nFuture high-quality studies could be undertaken but may not be thought necessary. It is uncertain whether further trials in this area would change the findings of this review or have an impact on practice. Any future high-quality research should focus on determining whether particular groups of children benefit more from water precautions than others, as well as on developing clinical guidelines and their implementation.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Is there is any evidence to show that keeping ears dry after ventilation tube (grommet) insertion helps to prevent problems such as infections?"
}
] |
query-laysum
|
8212
|
[
{
"role": "user",
"content": "Abstract: Background\nAnaemia is a condition in which the number of red blood cells is insufficient to meet physiologic needs; it is caused by many conditions, particularly iron deficiency. Traditionally, daily iron supplementation has been a standard practice for preventing and treating anaemia. However, its long-term use has been limited, as it has been associated with adverse side effects such as nausea, constipation, and teeth staining. Intermittent iron supplementation has been suggested as an effective and safer alternative to daily iron supplementation for preventing and reducing anaemia at the population level, especially in areas where this condition is highly prevalent.\nObjectives\nTo assess the effects of intermittent oral iron supplementation, alone or in combination with other nutrients, on anaemia and its associated impairments among menstruating women, compared with no intervention, a placebo, or daily supplementation.\nSearch methods\nIn February 2018, we searched CENTRAL, MEDLINE, Embase, nine other databases, and two trials registers. In March 2018, we also searched LILACS, IBECS and IMBIOMED. In addition, we examined reference lists, and contacted authors and known experts to identify additional studies.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs with either individual or cluster randomisation. Participants were menstruating women; that is, women beyond menarche and prior to menopause who were not pregnant or lactating and did not have a known condition that impeded the presence of menstrual periods. The intervention was the use of iron supplements intermittently (one, two or three times a week on non-consecutive days) compared with placebo, no intervention, or the same supplements provided on a daily basis.\nData collection and analysis\nBoth review authors independently assessed the eligibility of studies against the inclusion criteria, extracted data from included studies, checked data entry for accuracy, assessed the risk of bias of the included studies, and rated the quality of the evidence using GRADE.\nMain results\nWe included 25 studies involving 10,996 women. Study methods were not well described in many of the included studies and thus assessing risk of bias was difficult. The main limitations of the studies were lack of blinding and high attrition. Studies were mainly funded by international organisations, universities, and ministries of health within the countries. Approximately one third of the included studies did not provide a funding source.\nAlthough quality across studies was variable, the results consistently showed that intermittent iron supplementation (alone or with any other vitamins and minerals) compared with no intervention or a placebo, reduced the risk of having anaemia (risk ratio (RR) 0.65, 95% confidence interval (CI) 0.49 to 0.87; 11 studies, 3135 participants; low-quality evidence), and improved the concentration of haemoglobin (mean difference (MD) 5.19 g/L, 95% CI 3.07 to 7.32; 15 studies, 2886 participants; moderate-quality evidence), and ferritin (MD 7.46 μg/L, 95% CI 5.02 to 9.90; 7 studies, 1067 participants; low-quality evidence). 
Intermittent regimens may also reduce the risk of having iron deficiency (RR 0.50, 95% CI 0.24 to 1.04; 3 studies, 624 participants; low-quality evidence), but evidence was inconclusive regarding iron deficiency anaemia (RR 0.07, 95% CI 0.00 to 1.16; 1 study, 97 participants; very low-quality evidence) and all-cause morbidity (RR 1.12, 95% CI 0.82 to 1.52; 1 study, 119 participants; very low-quality evidence). Women in the control group were less likely to have any adverse side effects than those receiving intermittent iron supplements (RR 1.98, 95% CI 0.31 to 12.72; 3 studies, 630 participants; moderate-quality evidence).\nIn comparison with daily supplementation, results showed that intermittent supplementation (alone or with any other vitamins and minerals) produced similar effects to daily supplementation (alone or with any other vitamins and minerals) on anaemia (RR 1.09, 95% CI 0.93 to 1.29; 8 studies, 1749 participants; moderate-quality evidence). Intermittent supplementation may produce similar haemoglobin concentrations (MD 0.43 g/L, 95% CI −1.44 to 2.31; 10 studies, 2127 participants; low-quality evidence) but lower ferritin concentrations on average (MD −6.07 μg/L, 95% CI −10.66 to −1.48; 4 studies, 988 participants; low-quality evidence) compared to daily supplementation. Compared to daily regimens, intermittent regimens may also increase the risk of having iron deficiency (RR 4.30, 95% CI 0.56 to 33.20; 1 study, 198 participants; very low-quality evidence). Women receiving iron supplements intermittently were less likely to have any adverse side effects than those receiving iron supplements daily (RR 0.41, 95% CI 0.21 to 0.82; 6 studies, 1166 participants; moderate-quality evidence). No studies reported on the effect of intermittent regimens versus daily regimens on iron deficiency anaemia and all-cause morbidity.\nInformation on disease outcomes, adherence, economic productivity, and work performance was scarce, and evidence about the effects of intermittent supplementation on these outcomes was unclear.\nOverall, whether the supplements were given once or twice weekly, for less or more than three months, contained less or more than 60 mg of elemental iron per week, or given to populations with different degrees of anaemia at baseline did not seem to affect the findings. Furthermore, the response did not differ in areas where malaria was frequent, although very few trials were conducted in these settings.\nAuthors' conclusions\nIntermittent iron supplementation may reduce anaemia and may improve iron stores among menstruating women in populations with different anaemia and malaria backgrounds. In comparison with daily supplementation, the provision of iron supplements intermittently is probably as effective in preventing or controlling anaemia. More information is needed on morbidity (including malaria outcomes), side effects, work performance, economic productivity, depression, and adherence to the intervention. The quality of this evidence base ranged from very low to moderate quality, suggesting that we are uncertain about these effects.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Women with anaemia may have less energy for physical work and become more prone to infections. Most women throughout the world enter pregnancy with anaemia, putting them at greater risk of having low birth-weight babies and other complications during delivery.\nSome scientists believe that taking iron a few times a week – instead of every day – can help women with anaemia to feel better and improve their haemoglobin without giving them as many side effects. If women have fewer side effects, they may be more likely to take iron supplements more regularly and for longer periods."
}
] |
query-laysum
|
8213
|
[
{
"role": "user",
"content": "Abstract: Background\nWhile most guidance recommends the use of insulin in women whose pregnancies are affected by pre-existing diabetes, oral anti-diabetic agents may be more acceptable to women. The effects of these oral anti-diabetic agents on maternal and infant health outcomes need to be established in pregnant women with pre-existing diabetes or impaired glucose tolerance, as well as in women with previous gestational diabetes mellitus preconceptionally or during a subsequent pregnancy. This review is an update of a review that was first published in 2010.\nObjectives\nTo investigate the effects of oral anti-diabetic agents in women with established diabetes, impaired glucose tolerance or previous gestational diabetes who are planning a pregnancy, or pregnant women with pre-existing diabetes, on maternal and infant health. The use of oral anti-diabetic agents for the management of gestational diabetes in a current pregnancy is evaluated in a separate Cochrane Review.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 October 2016) and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs assessing the effects of oral anti-diabetic agents in women with established diabetes, impaired glucose tolerance or previous gestational diabetes who were planning a pregnancy, or pregnant women with pre-existing diabetes. Cluster-RCTs were eligible for inclusion, but none were identified.\nData collection and analysis\nTwo review authors independently assessed study eligibility, extracted data and assessed the risk of bias of the included RCTs. Review authors checked the data for accuracy, and assessed the quality of the evidence using the GRADE approach.\nMain results\nWe identified six RCTs (707 women), eligible for inclusion in this updated review, however, three RCTs had mixed populations (that is, they included pregnant women with gestational diabetes) and did not report data separately for the relevant subset of women for this review. Therefore we have only included outcome data from three RCTs; data were available for 241 women and their infants. The three RCTs all compared an oral anti-diabetic agent (metformin) with insulin. The women in the RCTs that contributed data had type 2 diabetes diagnosed before or during their pregnancy. Overall, the RCTs were judged to be at varying risk of bias. We assessed the quality of the evidence for selected important outcomes using GRADE; the evidence was low- or very low-quality, due to downgrading because of design limitations (risk of bias) and imprecise effect estimates (for many outcomes only one or two RCTs contributed data).\nFor our primary outcomes there was no clear difference between metformin and insulin groups for pre-eclampsia (risk ratio (RR) 0.63, 95% confidence interval (CI) 0.33 to 1.20; RCTs = 2; participants = 227; very low-quality evidence) although in one RCT women receiving metformin were less likely to have pregnancy-induced hypertension (RR 0.58, 95% CI 0.37 to 0.91; RCTs = 1; participants = 206; low-quality evidence). Women receiving metformin were less likely to have a caesarean section compared with those receiving insulin (RR 0.73, 95% CI 0.61 to 0.88; RCTs = 3; participants = 241; low-quality evidence). In one RCT there was no clear difference between groups for large-for-gestational-age infants (RR 1.12, 95% CI 0.73 to 1.72; RCTs = 1; participants = 206; very low-quality evidence). 
There were no perinatal deaths in two RCTs (very low-quality evidence). Neonatal mortality or morbidity composite outcome and childhood/adulthood neurosensory disability were not reported.\nFor other secondary outcomes we assessed using GRADE, there were no clear differences between metformin and insulin groups for induction of labour (RR 1.42, 95% CI 0.62 to 3.28; RCTs = 2; participants = 35; very low-quality evidence), though infant hypoglycaemia was reduced in the metformin group (RR 0.34, 95% CI 0.18 to 0.62; RCTs = 3; infants = 241; very low-quality evidence). Perineal trauma, maternal postnatal depression and postnatal weight retention, and childhood/adulthood adiposity and diabetes were not reported.\nAuthors' conclusions\nThere are insufficient RCT data to evaluate the use of oral anti-diabetic agents in women with established diabetes, impaired glucose tolerance or previous gestational diabetes who are planning a pregnancy, or in pregnant women with pre-existing diabetes. Low to very low-quality evidence suggests possible reductions in pregnancy-induced hypertension, caesarean section birth and neonatal hypoglycaemia with metformin compared with insulin for women with type 2 diabetes diagnosed before or during their pregnancy, and no clear differences in pre-eclampsia, induction of labour and babies that are large-for-gestational age. Further high-quality RCTs that compare any combination of oral anti-diabetic agent, insulin and dietary and lifestyle advice for these women are needed. Future RCTs could be powered to evaluate effects on short- and long-term clinical outcomes; such RCTs could attempt to collect and report on the standard outcomes suggested in this review. We have identified three ongoing studies and four are awaiting classification. We will consider these when this review is updated.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is not enough evidence to guide us on the effects of oral anti-diabetic agents in women with established diabetes, impaired glucose tolerance or previous gestational diabetes who are planning a pregnancy, or pregnant women with pre-existing diabetes. Further large, well-designed, RCTs are required and could assess and report on the outcomes suggested in this review, including both short- and long-term outcomes for mothers and their infants."
}
] |
query-laysum
|
8214
|
[
{
"role": "user",
"content": "Abstract: Background\nThe coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems worldwide. Multiple reports on thromboembolic complications related to COVID-19 have been published, and researchers have described that people with COVID-19 are at high risk for developing venous thromboembolism (VTE). Anticoagulants have been used as pharmacological interventions to prevent arterial and venous thrombosis, and their use in the outpatient setting could potentially reduce the prevalence of vascular thrombosis and associated mortality in people with COVID-19. However, even lower doses used for a prophylactic purpose may result in adverse events such as bleeding. It is important to consider the evidence for anticoagulant use in non-hospitalised people with COVID-19.\nObjectives\nTo evaluate the benefits and harms of prophylactic anticoagulants versus active comparators, placebo or no intervention, or non-pharmacological interventions in non-hospitalised people with COVID-19.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 18 April 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing prophylactic anticoagulants with placebo or no treatment, another active comparator, or non-pharmacological interventions in non-hospitalised people with COVID-19. We included studies that compared anticoagulants with a different dose of the same anticoagulant. We excluded studies with a duration of under two weeks.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Our primary outcomes were all-cause mortality, VTE (deep vein thrombosis (DVT) or pulmonary embolism (PE)), and major bleeding. Our secondary outcomes were DVT, PE, need for hospitalisation, minor bleeding, adverse events, and quality of life. We used GRADE to assess the certainty of the evidence.\nMain results\nWe included five RCTs with up to 90 days of follow-up (short term). Data were available for meta-analysis from 1777 participants.\nAnticoagulant compared to placebo or no treatment\nFive studies compared anticoagulants with placebo or no treatment and provided data for three of our outcomes of interest (all-cause mortality, major bleeding, and adverse events). The evidence suggests that prophylactic anticoagulants may lead to little or no difference in all-cause mortality (risk ratio (RR) 0.36, 95% confidence interval (CI) 0.04 to 3.61; 5 studies; 1777 participants; low-certainty evidence) and probably reduce VTE from 3% in the placebo group to 1% in the anticoagulant group (RR 0.36, 95% CI 0.16 to 0.85; 4 studies; 1259 participants; number needed to treat for an additional beneficial outcome (NNTB) = 50; moderate-certainty evidence). There may be little to no difference in major bleeding (RR 0.36, 95% CI 0.01 to 8.78; 5 studies; 1777 participants; low-certainty evidence). Anticoagulants probably result in little or no difference in DVT (RR 1.02, 95% CI 0.30 to 3.46; 3 studies; 1009 participants; moderate-certainty evidence), but probably reduce the risk of PE from 2.7% in the placebo group to 0.7% in the anticoagulant group (RR 0.25, 95% CI 0.08 to 0.79; 3 studies; 1009 participants; NNTB 50; moderate-certainty evidence). 
Anticoagulants probably lead to little or no difference in reducing hospitalisation (RR 1.01, 95% CI 0.59 to 1.75; 4 studies; 1459 participants; moderate-certainty evidence) and may lead to little or no difference in adverse events (minor bleeding, RR 2.46, 95% CI 0.90 to 6.72; 5 studies, 1777 participants; low-certainty evidence).\nAnticoagulant compared to a different dose of the same anticoagulant\nOne study compared anticoagulant (higher-dose apixaban) with a different (standard) dose of the same anticoagulant and reported five relevant outcomes. No cases of all-cause mortality, VTE, or major bleeding occurred in either group during the 45-day follow-up (moderate-certainty evidence). Higher-dose apixaban compared to standard-dose apixaban may lead to little or no difference in reducing the need for hospitalisation (RR 1.89, 95% CI 0.17 to 20.58; 1 study; 278 participants; low-certainty evidence) or in the number of adverse events (minor bleeding, RR 0.47, 95% CI 0.09 to 2.54; 1 study; 278 participants; low-certainty evidence).\nAnticoagulant compared to antiplatelet agent\nOne study compared anticoagulant (apixaban) with antiplatelet agent (aspirin) and reported five relevant outcomes. No cases of all-cause mortality or major bleeding occurred during the 45-day follow-up (moderate-certainty evidence). Apixaban may lead to little or no difference in VTE (RR 0.36, 95% CI 0.01 to 8.65; 1 study; 279 participants; low-certainty evidence), need for hospitalisation (RR 3.20, 95% CI 0.13 to 77.85; 1 study; 279 participants; low-certainty evidence), or adverse events (minor bleeding, RR 2.13, 95% CI 0.40 to 11.46; 1 study; 279 participants; low-certainty evidence).\nNo included studies reported on quality of life or investigated anticoagulants compared to a different anticoagulant, or anticoagulants compared to non-pharmacological interventions.\nAuthors' conclusions\nWe found low- to moderate-certainty evidence from five RCTs that prophylactic anticoagulants result in little or no difference in major bleeding, DVT, need for hospitalisation, or adverse events when compared with placebo or no treatment in non-hospitalised people with COVID-19. Low-certainty evidence indicates that prophylactic anticoagulants may result in little or no difference in all-cause mortality when compared with placebo or no treatment, but moderate-certainty evidence indicates that prophylactic anticoagulants probably reduce the incidence of VTE and PE.\nLow-certainty evidence suggests that comparing different doses of the same prophylactic anticoagulant may result in little or no difference in need for hospitalisation or adverse events. Prophylactic anticoagulants may result in little or no difference in risk of VTE, hospitalisation, or adverse events when compared with antiplatelet agents (low-certainty evidence). Given that there were only short-term data from one study, these results should be interpreted with caution.\nAdditional trials of sufficient duration are needed to clearly determine any effect on clinical outcomes.\n\nGiven the provided abstract, please respond to the following query: \"How is VTE treated and how can VTE be prevented in people who are at risk?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The initial treatment includes drugs such as anticoagulants to prevent the formation of further new blood clots. Patients may also receive compression stockings and clinical care (e.g. physical exercise, skin hydration, and physical therapy). Anticoagulants such as rivaroxaban and apixaban act by inhibiting the blood elements involved in the formation of blood clots. For this reason, they are also used to prevent blood clots from forming in people who are considered to be at risk, such as people with COVID-19. This is known as prophylactic treatment. However, the use of anticoagulants can cause side effects such as bleeding."
}
] |
query-laysum
|
8215
|
[
{
"role": "user",
"content": "Abstract: Background\nSeveral rehabilitation programmes are available for individuals after lumbar disc surgery.\nObjectives\nTo determine whether active rehabilitation after lumbar disc surgery is more effective than no treatment, and to describe which type of active rehabilitation is most effective. This is the second update of a Cochrane Review first published in 2002.\nFirst, we clustered treatments according to the start of treatment.\n1. Active rehabilitation that starts immediately postsurgery.\n2. Active rehabilitation that starts four to six weeks postsurgery.\n3. Active rehabilitation that starts longer than 12 months postsurgery.\nFor every cluster, the following comparisons were investigated.\nA. Active rehabilitation versus no treatment, placebo or waiting list control.\nB. Active rehabilitation versus other kinds of active rehabilitation.\nC. Specific intervention in addition to active rehabilitation versus active rehabilitation alone.\nSearch methods\nWe searched CENTRAL (2013, Issue 4) and MEDLINE, EMBASE, CINAHL, PEDro and PsycINFO to May 2013.\nSelection criteria\nWe included only randomised controlled trials (RCTs).\nData collection and analysis\nPairs of review authors independently assessed studies for eligibility and risk of bias. Meta-analyses were performed if studies were clinically homogeneous. The GRADE approach was used to determine the overall quality of evidence.\nMain results\nIn this update, we identified eight new studies, thereby including a total of 22 trials (2503 participants), 10 of which had a low risk of bias. Most rehabilitation programmes were assessed in only one study. Both men and women were included, and overall mean age was 41.4 years. All participants had received standard discectomy, microdiscectomy and in one study standard laminectomy and (micro)discectomy. Mean duration of the rehabilitation intervention was 12 weeks; eight studies assessed six to eight-week exercise programmes, and eight studies assessed 12 to 13-week exercise programmes. Programmes were provided in primary and secondary care facilities and were started immediately after surgery (n = 4) or four to six weeks (n = 16) or one year after surgery (n = 2). In general, the overall quality of the evidence is low to very low. Rehabilitation programmes that started immediately after surgery were not more effective than their control interventions, which included exercise. Low- to very low-quality evidence suggests that there were no differences between specific rehabilitation programmes (multidisciplinary care, behavioural graded activity, strength and stretching) that started four to six weeks postsurgery and their comparators, which included some form of exercise. Low-quality evidence shows that physiotherapy from four to six weeks postsurgery onward led to better function than no treatment or education only, and that multidisciplinary rehabilitation co-ordinated by medical advisors led to faster return to work than usual care. Statistical pooling was performed only for three comparisons in which the rehabilitation programmes started four to six weeks postsurgery: exercise programmes versus no treatment, high- versus low-intensity exercise programmes and supervised versus home exercise programmes. 
Very low-quality evidence (five RCTs, N = 272) shows that exercises are more effective than no treatment for pain at short-term follow-up (standardised mean difference (SMD) -0.90; 95% confidence interval (CI) -1.55 to -0.24), and low-quality evidence (four RCTs, N = 252) suggests that exercises are more effective for functional status on short-term follow-up (SMD -0.67; 95% CI -1.22 to -0.12) and that no difference in functional status was noted on long-term follow-up (three RCTs, N = 226; SMD -0.22; 95% CI -0.49 to 0.04). None of these studies reported that exercise increased the reoperation rate. Very low-quality evidence (two RCTs, N = 103) shows that high-intensity exercise programmes are more effective than low-intensity exercise programmes for pain in the short term (weighted mean difference (WMD) -10.67; 95% CI -17.04 to -4.30), and low-quality evidence (two RCTs, N = 103) shows that they are more effective for functional status in the short term (SMD -0.77; 95% CI -1.17 to -0.36). Very low-quality evidence (four RCTs, N = 154) suggests no significant differences between supervised and home exercise programmes for short-term pain relief (SMD -0.76; 95% CI -2.04 to 0.53) or functional status (four RCTs, N = 154; SMD -0.36; 95% CI -0.88 to 0.15).\nAuthors' conclusions\nConsiderable variation was noted in the content, duration and intensity of the rehabilitation programmes included in this review, and for none of them was high- or moderate-quality evidence identified. Exercise programmes starting four to six weeks postsurgery seem to lead to a faster decrease in pain and disability than no treatment, with small to medium effect sizes, and high-intensity exercise programmes seem to lead to a slightly faster decrease in pain and disability than is seen with low-intensity programmes, but the overall quality of the evidence is only low to very low. No significant differences were noted between supervised and home exercise programmes for pain relief, disability or global perceived effect. None of the trials reported an increase in reoperation rate after first-time lumbar surgery. High-quality randomised controlled trials are strongly needed.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence on the effects of rehabilitation programmes on pain, recovery, function and return to work in people who have had lumbar disc surgery."
}
] |
query-laysum
|
8216
|
[
{
"role": "user",
"content": "Abstract: Background\nPresbyopia occurs when the lens of the eyes loses its elasticity leading to loss of accommodation. The lens may also progress to develop cataract, affecting visual acuity and contrast sensitivity. One option of care for individuals with presbyopia and cataract is the use of multifocal or extended depth of focus intraocular lens (IOL) after cataract surgery. Although trifocal and bifocal IOLs are designed to restore three and two focal points respectively, trifocal lens may be preferable because it restores near, intermediate, and far vision, and may also provide a greater range of useful vision and allow for greater spectacle independence in individuals with presbyopia.\nObjectives\nTo assess the effectiveness and safety of implantation with trifocal versus bifocal IOLs during cataract surgery among people with presbyopia.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2022, Issue 3); Ovid MEDLINE; Embase.com; PubMed; ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 31 March 2022.\nSelection criteria\nWe included randomized controlled trials that compared trifocal and bifocal IOLs among participants 30 years of age or older with presbyopia undergoing cataract surgery.\nData collection and analysis\nWe used standard Cochrane methodology and graded the certainty of the body of evidence according to the GRADE classification.\nMain results\nWe identified seven studies conducted in Europe and Turkey with a total of 331 participants. All included studies assessed visual acuity using a logarithm of the minimum angle of resolution (LogMAR chart). Of them, six (86%) studies assessed uncorrected distance visual acuity (the primary outcome of this review). Some studies also examined our secondary outcomes including uncorrected near, intermediate, and best-corrected distance visual acuity, as well as contrast sensitivity.\nStudy characteristics\nAll participants had bilateral cataracts with no pre-existing ocular pathologies or ocular surgery. Participants' mean age ranged from 55 to 74 years. Three studies reported on gender of participants, and they were mostly women. We assessed all of the included studies as being at unclear risk of bias for most domains. Two studies received financial support from manufacturers of lenses evaluated in this review, and at least one author of another study reported receiving payments for delivering lectures with lens manufacturers.\nFindings\nAll studies compared trifocal versus bifocal IOL implantation on visual acuity outcomes measured on a LogMAR scale. At one year, trifocal IOL showed no evidence of effect on uncorrected distance visual acuity (mean difference (MD) 0.00, 95% confidence interval (CI) −0.04 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence) and uncorrected near visual acuity (MD 0.01, 95% CI −0.04 to 0.06; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). 
Trifocal IOL implantation may improve uncorrected intermediate visual acuity at one year (MD −0.16, 95% CI −0.22 to −0.10; I2 = 0%; 2 studies, 107 participants; low-certainty evidence), but showed no evidence of effect on best-corrected distance visual acuity at one year (MD 0.00, 95% CI −0.03 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). No study reported on contrast sensitivity or quality of life at one-year follow-up. Data from one study at three months suggest that contrast sensitivity did not differ between groups under photopic conditions, but may be worse in the trifocal group in one of the four frequencies under mesopic conditions (MD −0.19, 95% CI −0.33 to −0.05; 1 study; I2 = 0%, 25 participants; low-certainty evidence). One study examined vision-related quality of life using the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25) at six months, and suggested no evidence of a difference between trifocal and bifocal IOLs (MD 1.41, 95% CI −1.78 to 4.60; 1 study, 40 participants; low-certainty evidence).\nAdverse events\nAdverse events reporting varied among studies. Of five studies reporting information on adverse events, two studies observed no intraoperative and postoperative complications or no posterior capsular opacification at six months. One study reported that glare and halos were similar to the preoperative measurements. One study reported that 4 (20%) and 10 (50%) participants had glare complaints at 6 months in the trifocal and bifocal groups, respectively (risk ratio 0.40, 95% CI 0.15 to 1.07; 40 participants). One study reported that four eyes (11.4%) in the bifocal group and three eyes (7.5%) in the trifocal group developed significant posterior capsular opacification requiring YAG capsulotomy at one year. The certainty of the evidence for adverse events was low.\nAuthors' conclusions\nWe found low-certainty evidence that compared with bifocal IOL, implantation of trifocal IOL may improve uncorrected intermediate visual acuity at one year. However, there was no evidence of a difference between trifocal and bifocal IOL for uncorrected distance visual acuity, uncorrected near visual acuity, and best-corrected visual acuity at one year. Future research should include the comparison of both trifocal IOL and specific bifocal IOLs that correct intermediate visual acuity to evaluate important outcomes such as contrast sensitivity, quality of life, and vision-related adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found seven relevant studies conducted in Europe and Turkey with a total of 331 people with presbyopia.\nWe found that the implantation of trifocal lens at the time of cataract surgery may improve intermediate visual acuity at one year. We found no evidence of a difference between trifocal and bifocal lens in uncorrected distance visual acuity, uncorrected near visual acuity, and best-corrected distance visual acuity at one year. It is uncertain whether trifocal compared with bifocal lens implantation has any effect on quality of life or contrast sensitivity."
}
] |
query-laysum
|
8217
|
[
{
"role": "user",
"content": "Abstract: Background\nHigh intracranial pressure (ICP) is the most frequent cause of death and disability after severe traumatic brain injury (TBI). It is usually treated with general maneuvers (normothermia, sedation, etc.) and a set of first-line therapeutic measures (moderate hypocapnia, mannitol, etc.). When these measures fail, second-line therapies are initiated, which include: barbiturates, hyperventilation, moderate hypothermia, or removal of a variable amount of skull bone (secondary decompressive craniectomy).\nObjectives\nTo assess the effects of secondary decompressive craniectomy (DC) on outcomes of patients with severe TBI in whom conventional medical therapeutic measures have failed to control raised ICP.\nSearch methods\nThe most recent search was run on 8 December 2019. We searched the Cochrane Injuries Group's Specialised Register, CENTRAL (Cochrane Library), Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R), Embase Classic + Embase (OvidSP) and ISI Web of Science (SCI-EXPANDED & CPCI-S). We also searched trials registries and contacted experts.\nSelection criteria\nWe included randomized studies assessing patients over the age of 12 months with severe TBI who either underwent DC to control ICP refractory to conventional medical treatments or received standard care.\nData collection and analysis\nWe selected potentially relevant studies from the search results, and obtained study reports. Two review authors independently extracted data from included studies and assessed risk of bias. We used a random-effects model for meta-analysis. We rated the quality of the evidence according to the GRADE approach.\nMain results\nWe included three trials (590 participants). One single-site trial included 27 children; another multicenter trial (three countries) recruited 155 adults, the third trial was conducted in 24 countries, and recruited 408 adolescents and adults. Each study compared DC combined with standard care (this could include induced barbiturate coma or cooling of the brain, or both). All trials measured outcomes up to six months after injury; one also measured outcomes at 12 and 24 months (the latter data remain unpublished). All trials were at a high risk of bias for the criterion of performance bias, as neither participants nor personnel could be blinded to these interventions. The pediatric trial was at a high risk of selection bias and stopped early; another trial was at risk of bias because of atypical inclusion criteria and a change to the primary outcome after it had started.\nMortality: pooled results for three studies provided moderate quality evidence that risk of death at six months was slightly reduced with DC (RR 0.66, 95% CI 0.43 to 1.01; 3 studies, 571 participants; I2 = 38%; moderate-quality evidence), and one study also showed a clear reduction in risk of death at 12 months (RR 0.59, 95% CI 0.45 to 0.76; 1 study, 373 participants; high-quality evidence).\nNeurological outcome: conscious of controversy around the traditional dichotomization of the Glasgow Outcome Scale (GOS) scale, we chose to present results in three ways, in order to contextualize factors relevant to clinical/patient decision-making.\nFirst, we present results of death in combination with vegetative status, versus other outcomes. Two studies reported results at six months for 544 participants. 
One employed a lower ICP threshold than the other studies, and showed an increase in the risk of death/vegetative state for the DC group. The other study used a more conventional ICP threshold, and results favoured the DC group (15.7% absolute risk reduction (ARR), 95% CI 6% to 25%). The number needed to treat for one beneficial outcome (NNTB) (i.e. to avoid death or vegetative status) was seven. The pooled result for DC compared with standard care showed no clear benefit for either group (RR 0.99, 95% CI 0.46 to 2.13; 2 studies, 544 participants; I2 = 86%; low-quality evidence). One study reported data for this outcome at 12 months, when the risk for death or vegetative state was clearly reduced by DC compared with medical treatment (RR 0.68, 95% CI 0.54 to 0.86; 1 study, 373 participants; high-quality evidence).\nSecond, we assessed the risk of an 'unfavorable outcome' evaluated on a non-traditional dichotomized GOS-Extended scale (GOS-E), that is, grouping the category 'upper severe disability' into the 'good outcome' grouping. Data were available for two studies (n = 571). Pooling indicated little difference between DC and standard care regarding the risk of an unfavorable outcome at six months following injury (RR 1.06, 95% CI 0.69 to 1.63; 544 participants); heterogeneity was high, with an I2 value of 82%. One trial reported data at 12 months and indicated a clear benefit of DC (RR 0.81, 95% CI 0.69 to 0.95; 373 participants).\nThird, we assessed the risk of an 'unfavorable outcome' using the (traditional) dichotomized GOS/GOS-E cutoff into 'favorable' versus 'unfavorable' results. There was little difference between DC and standard care at six months (RR 1.00, 95% CI 0.71 to 1.40; 3 studies, 571 participants; low-quality evidence), and heterogeneity was high (I2 = 78%). At 12 months one trial suggested a similar finding (RR 0.95, 95% CI 0.83 to 1.09; 1 study, 373 participants; high-quality evidence).\nWith regard to ICP reduction, pooled results for two studies provided moderate quality evidence that DC was superior to standard care for reducing ICP within 48 hours (MD −4.66 mmHg, 95% CI −6.86 to −2.45; 2 studies, 182 participants; I2 = 0%). Data from the third study were consistent with these, but could not be pooled.\nData on adverse events are difficult to interpret, as mortality and complications are high, and it can be difficult to distinguish between treatment-related adverse events and the natural evolution of the condition. In general, there was low-quality evidence that surgical patients experienced a higher risk of adverse events.\nAuthors' conclusions\nDecompressive craniectomy holds promise of reduced mortality, but the effects on long-term neurological outcome remain controversial, and involve an examination of the priorities of participants and their families. Future research should focus on identifying clinical and neuroimaging characteristics to identify those patients who would survive with an acceptable quality of life; the best timing for DC; the most appropriate surgical techniques; and whether some synergistic treatments used with DC might improve patient outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane Review investigated the effects of a surgical procedure, decompressive craniectomy (DC), on survival and neurological (functional) outcomes for people who have a traumatic brain injury (TBI) that does not penetrate the skull, and high pressure inside the skull that does not respond to medical treatment. In DC part of the skull is removed so the brain has room to expand, and pressure inside the skull can decrease. We compared DC to conventional medical treatments in patients over 12 months old."
}
] |
query-laysum
|
8218
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is ongoing debate about the frequency with which patients should attend for a dental check-up and the effects on oral health of the interval between check-ups. Recommendations regarding optimal recall intervals vary between countries and dental healthcare systems, but 6-month dental check-ups have traditionally been advocated by general dental practitioners in many high-income countries.\nThis review updates a version first published in 2005, and updated in 2007 and 2013.\nObjectives\nTo determine the optimal recall interval of dental check-up for oral health in a primary care setting.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 17 January 2020), the Cochrane Central Register of Controlled Trials (CENTRAL; in the Cochrane Library, 2019, Issue 12), MEDLINE Ovid (1946 to 17 January 2020), and Embase Ovid (1980 to 17 January 2020). We also searched the US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform for ongoing trials. We placed no restrictions on the language or date of publication when searching.\nSelection criteria\nWe included randomised controlled trials (RCTs) assessing the effects of different dental recall intervals in a primary care setting.\nData collection and analysis\nTwo review authors screened search results against inclusion criteria, extracted data and assessed risk of bias, independently and in duplicate. We contacted study authors for clarification or further information where necessary and feasible. We expressed the estimate of effect as mean difference (MD) with 95% confidence intervals (CIs) for continuous outcomes and risk ratios (RR) with 95% CIs for dichotomous outcomes. We assessed the certainty of the evidence using GRADE.\nMain results\nWe included two studies with data from 1736 participants. One study was conducted in a public dental service clinic in Norway and involved participants under 20 years of age who were regular attenders at dental appointments. It compared 12-month with 24-month recall intervals and measured outcomes at two years. The other study was conducted in UK general dental practices and involved adults who were regular attenders, which was defined as having attended the dentist at least once in the previous two years. It compared the effects of 6-month, 24-month and risk-based recall intervals, and measured outcomes at four years. The main outcomes we considered were dental caries, gingival bleeding and oral-health-related quality of life. Neither study measured other potential adverse effects.\n24-month versus 12-month recall at 2 years' follow-up\nDue to the very low certainty of evidence from one trial, it is unclear if there is an important difference in caries experience between assignment to a 24-month or a 12-month recall. For 3- to 5-year-olds with primary teeth, the mean difference (MD) in dmfs (decayed, missing, and filled tooth surfaces) increment was 0.90 (95% CI −0.16 to 1.96; 58 participants). For 16- to 20-year-olds with permanent teeth, the MD in DMFS increment was 0.86 (95% CI −0.03 to 1.75; 127 participants). 
The trial did not assess other clinical outcomes of relevance to this review.\nRisk-based recall versus 6-month recall at 4 years' follow-up\nWe found high-certainty evidence from one trial of adults that there is little to no difference between risk-based and 6-month recall intervals for the outcomes: number of tooth surfaces with any caries (ICDAS 1 to 6; MD 0.15, 95% CI −0.77 to 1.08; 1478 participants); proportion of sites with gingival bleeding (MD 0.78%, 95% CI −1.17% to 2.73%; 1472 participants); oral-health-related quality of life (MD in OHIP-14 scores −0.35, 95% CI −1.02 to 0.32; 1551 participants). There is probably little to no difference in the prevalence of moderate to extensive caries (ICDAS 3 to 6) between the groups (RR 1.04, 95% CI 0.99 to 1.09; 1478 participants; moderate-certainty evidence).\n24-month recall versus 6-month recall at 4 years' follow-up\nWe found moderate-certainty evidence from one trial of adults that there is probably little to no difference between 24-month and 6-month recall intervals for the outcomes: number of tooth surfaces with any caries (MD −0.60, 95% CI −2.54 to 1.34; 271 participants); percentage of sites with gingival bleeding (MD −0.91%, 95% CI −5.02% to 3.20%; 271 participants). There may be little to no difference between the groups in the prevalence of moderate to extensive caries (RR 1.05, 95% CI 0.92 to 1.20; 271 participants; low-certainty evidence). We found high-certainty evidence that there is little to no difference in oral-health-related quality of life between the groups (MD in OHIP-14 scores −0.24, 95% CI −1.55 to 1.07; 305 participants).\nRisk-based recall versus 24-month recall at 4 years' follow-up\nWe found moderate-certainty evidence from one trial of adults that there is probably little to no difference between risk-based and 24-month recall intervals for the outcomes: prevalence of moderate to extensive caries (RR 1.06, 95% CI 0.95 to 1.19; 279 participants); number of tooth surfaces with any caries (MD 1.40, 95% CI −0.69 to 3.49; 279 participants). We found high-certainty evidence that there is no important difference between the groups in the percentage of sites with gingival bleeding (MD −0.07%, 95% CI −4.10% to 3.96%; 279 participants); or in oral-health-related quality of life (MD in OHIP-14 scores −0.37, 95% CI −1.69 to 0.95; 298 participants).\nAuthors' conclusions\nFor adults attending dental check-ups in primary care settings, there is little to no difference between risk-based and 6-month recall intervals in the number of tooth surfaces with any caries, gingival bleeding and oral-health-related quality of life over a 4-year period (high-certainty evidence). There is probably little to no difference between the recall strategies in the prevalence of moderate to extensive caries (moderate-certainty evidence).\nWhen comparing 24-month with either 6-month or risk-based recall intervals for adults, there is moderate- to high-certainty evidence that there is little to no difference in the number of tooth surfaces with any caries, gingival bleeding and oral-health-related quality of life over a 4-year period.\nThe available evidence on recall intervals between dental check-ups for children and adolescents is uncertain.\nThe two trials we included in the review did not assess adverse effects of different recall strategies.\n\nGiven the provided abstract, please respond to the following query: \"What are the results of our review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In adults, there was little to no difference between six-monthly and risk-based check-ups in tooth decay (number of tooth surfaces affected), gum disease and quality of life after four years; and probably little to no difference in how many people had moderate-to-extensive tooth decay.\nThere was probably little to no difference between 24-monthly and six-monthly or risk-based check-ups in tooth decay (number of people and number of tooth surfaces affected), gum disease or well‑being, and may be little to no difference in how many people had moderate-to-extensive tooth decay.\nWe did not find enough reliable evidence about the effects of 12-monthly and 24-monthly check-ups in children and adolescents after two years. This was because of problems with the way that the study was conducted."
}
] |
query-laysum
|
8219
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is the third update of the review first published in 2017.\nHypertension is a prominent preventable cause of premature morbidity and mortality. People with hypertension and established cardiovascular disease are at particularly high risk, so reducing blood pressure to below standard targets may be beneficial. This strategy could reduce cardiovascular mortality and morbidity but could also increase adverse events. The optimal blood pressure target in people with hypertension and established cardiovascular disease remains unknown.\nObjectives\nTo determine if lower blood pressure targets (systolic/diastolic 135/85 mmHg or less) are associated with reduction in mortality and morbidity compared with standard blood pressure targets (140 mmHg to 160mmHg/90 mmHg to 100 mmHg or less) in the treatment of people with hypertension and a history of cardiovascular disease (myocardial infarction, angina, stroke, peripheral vascular occlusive disease).\nSearch methods\nFor this updated review, we used standard, extensive Cochrane search methods. The latest search date was January 2022. We applied no language restrictions.\nSelection criteria\nWe included randomized controlled trials (RCTs) with more than 50 participants per group that provided at least six months' follow-up. Trial reports had to present data for at least one primary outcome (total mortality, serious adverse events, total cardiovascular events, cardiovascular mortality). Eligible interventions involved lower targets for systolic/diastolic blood pressure (135/85 mmHg or less) compared with standard targets for blood pressure (140 mmHg to 160 mmHg/90 mmHg to 100 mmHg or less).\nParticipants were adults with documented hypertension and adults receiving treatment for hypertension with a cardiovascular history for myocardial infarction, stroke, chronic peripheral vascular occlusive disease, or angina pectoris.\nData collection and analysis\nWe used standard Cochrane methods. We used GRADE to assess the certainty of the evidence.\nMain results\nWe included seven RCTs that involved 9595 participants. Mean follow-up was 3.7 years (range 1.0 to 4.7 years). Six of seven RCTs provided individual participant data. None of the included studies was blinded to participants or clinicians because of the need to titrate antihypertensive drugs to reach a specific blood pressure goal. However, an independent committee blinded to group allocation assessed clinical events in all trials. Hence, we assessed all trials at high risk of performance bias and low risk of detection bias. We also considered other issues, such as early termination of studies and subgroups of participants not predefined, to downgrade the certainty of the evidence.\nWe found there is probably little to no difference in total mortality (risk ratio (RR) 1.05, 95% confidence interval (CI) 0.91 to 1.23; 7 studies, 9595 participants; moderate-certainty evidence) or cardiovascular mortality (RR 1.03, 95% CI 0.82 to 1.29; 6 studies, 9484 participants; moderate-certainty evidence). Similarly, we found there may be little to no differences in serious adverse events (RR 1.01, 95% CI 0.94 to 1.08; 7 studies, 9595 participants; low-certainty evidence) or total cardiovascular events (including myocardial infarction, stroke, sudden death, hospitalization, or death from congestive heart failure (CHF)) (RR 0.89, 95% CI 0.80 to 1.00; 7 studies, 9595 participants; low-certainty evidence). The evidence was very uncertain about withdrawals due to adverse effects. 
However, studies suggest more participants may withdraw due to adverse effects in the lower target group (RR 8.16, 95% CI 2.06 to 32.28; 3 studies, 801 participants; very low-certainty evidence). Systolic and diastolic blood pressure readings were lower in the lower target group (systolic: mean difference (MD) –8.77 mmHg, 95% CI –12.82 to –4.73; 7 studies, 8657 participants; diastolic: MD –4.50 mmHg, 95% CI –6.35 to –2.65; 6 studies, 8546 participants). More drugs were needed in the lower target group (MD 0.56, 95% CI 0.16 to 0.96; 5 studies, 7910 participants), but blood pressure targets at one year were achieved more frequently in the standard target group (RR 1.20, 95% CI 1.17 to 1.23; 7 studies, 8699 participants).\nAuthors' conclusions\nWe found there is probably little to no difference in total mortality and cardiovascular mortality between people with hypertension and cardiovascular disease treated to a lower compared to a standard blood pressure target. There may also be little to no difference in serious adverse events or total cardiovascular events. This suggests that no net health benefit is derived from a lower systolic blood pressure target. We found very limited evidence on withdrawals due to adverse effects, which led to high uncertainty. At present, evidence is insufficient to justify lower blood pressure targets (135/85 mmHg or less) in people with hypertension and established cardiovascular disease. Several trials are still ongoing, which may provide an important input to this topic in the near future.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared lower blood pressure targets to standard blood pressure targets in people with high blood pressure and a history of cardiovascular disease (heart disease, angina, stroke, vascular disease). Studies had to talk about results such as deaths or other events caused by diseases of the heart or the blood vessels, such as heart attack, stroke, or heart failure. Studies could also talk about other types of health-related side effects. We only chose randomized studies (where people were randomly put into one of two or more treatment groups) with 50 or more people in each group and that lasted at least six months."
}
] |
query-laysum
|
8220
|
[
{
"role": "user",
"content": "Abstract: Background\nEpilepsy is a highly prevalent neurological condition characterised by repeated unprovoked seizures with various aetiologies. Although antiepileptic medications produce clinical improvement in many individuals, nearly a third of individuals have drug-resistant epilepsy that carries significant morbidity and mortality, and even individuals who have clinical improvement from antiepileptic medications often report iatrogenic symptoms. There remains a need for non-invasive and more effective therapies for this population. Transcranial magnetic stimulation (TMS) uses electromagnetic coils to excite or inhibit neurons, with repetitive pulses at low-frequency producing an inhibitory effect that could conceivably reduce cortical excitability associated with epilepsy.\nThis is an updated version of the original Cochrane Review published in 2016.\nObjectives\nTo assess the evidence for the use of TMS in individuals with drug-resistant epilepsy compared with other available treatments in reducing seizure frequency, improving quality of life, reducing epileptiform discharges, antiepileptic medication use, and side effects.\nSearch methods\nFor the latest update, we searched the Cochrane Register of Studies (CRS Web) and MEDLINE (Ovid 1946 to 2 June 2020). CRS Web includes randomised or quasi-randomised controlled trials from PubMed, Embase, ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), the Cochrane Central Register of Controlled Trials (CENTRAL), and the specialised registers of Cochrane Review Groups including Epilepsy.\nSelection criteria\nWe included randomised controlled trials that were double-blinded, single-blinded, or unblinded, and placebo controlled, no treatment, or active controlled, which used repetitive transcranial magnetic stimulation (rTMS) without restriction of frequency, coil, duration or intensity on participants with drug-resistant epilepsy.\nData collection and analysis\nWe extracted information from each trial including methodological data; participant demographics including baseline seizure frequency, type of epileptic drugs taken; intervention details and intervention groups for comparison; potential biases; and outcomes and time points, primarily change in seizure frequency or responder rates, as well as quality of life and epileptiform discharges, adverse effects, and changes in medication use.\nMain results\nThe original search revealed 274 records from the databases that after selection provided seven full-text relevant studies for inclusion. The latest search identified 179 new records from the databases that after evaluation against the inclusion and exclusion criteria provided one additional full-text relevant study. The eight included studies (241 participants) were all randomised trials; seven of the studies were blinded. Methodological and design information in the included studies was unclear, particularly relating to randomisation and allocation concealment methods. We were not able to combine the results of the trials in analysis due to differences in the studies' designs.\nFor the current update, two of the eight studies analysed showed a statistically significant reduction in seizure rate from baseline (72% and 78.9% reduction of seizures per week from the baseline rate, respectively), whilst the other six studies showed no statistically significant difference in seizure frequency following rTMS treatment compared with controls (low-certainty evidence). 
One study assessed quality of life and found that more participants showed improvement in quality of life scores with active treatments compared to the sham treatment, but this study involved only seven participants (very low-certainty evidence).\nFour studies evaluated our secondary endpoint of mean number of epileptiform discharges, three of which showed a statistically significant reduction in discharges after active rTMS treatment. Adverse effects were uncommon in the studies and typically involved headache, dizziness, and tinnitus; however, increased seizure frequency did occur in a small number of individuals. The included trials reported no significant changes in medication use. Overall, the risk of bias was either low or unclear, and the certainty of the evidence was low to very low.\nAuthors' conclusions\nOverall, we judged the certainty of evidence for the primary outcomes of this review to be low to very low. We found some evidence to suggest that rTMS is safe, although some adverse events were experienced. The variability in technique and outcome reporting prevented meta-analysis, and the evidence for efficacy of rTMS for seizure reduction is still lacking, despite reasonable evidence that it is effective at reducing epileptiform discharges.\n\nGiven the provided abstract, please respond to the following query: \"Methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The latest search for trials was 2 June 2020. We assessed the evidence from eight randomised controlled trials (studies in which participants are assigned to one of two or more treatment groups using a random method) involving a total of 241 participants comparing rTMS to control treatments (sham treatment, antiepileptic medication, or low-frequency rTMS)."
}
] |
query-laysum
|
8221
|
[
{
"role": "user",
"content": "Abstract: Background\nPsychosis is an illness characterised by alterations in thoughts and perceptions resulting in delusions and hallucinations. Psychosis is rare in adolescents but can have serious consequences. Antipsychotic medications are the mainstay treatment, and have been shown to be effective. However, there is emerging evidence on psychological interventions such as cognitive remediation therapy, psycho-education, family therapy and group psychotherapy that may be useful for adolescents with psychosis.\nObjectives\nTo assess the effects of various psychological interventions for adolescents with psychosis.\nSearch methods\nWe searched the Cochrane Schizophrenia Group's study-based Register of Trials including clinical trials registries (latest, 8 March 2019).\nSelection criteria\nAll randomised controlled trials comparing various psychological interventions with treatment-as-usual or other psychological treatments for adolescents with psychosis. For analyses, we included trials meeting our inclusion criteria and reporting useable data.\nData collection and analysis\nWe independently and reliably screened studies and we assessed risk of bias of the included studies. For dichotomous data, we calculated risk ratios (RRs) and 95% confidence intervals (CIs) on an intention-to-treat basis. For continuous data, we used mean differences (MDs) and the 95% CIs. We used a random-effects model for analyses. We created a ‘Summary of findings' table using GRADE.\nMain results\nThe current review includes 7 studies (n = 319) assessing a heterogenous group of psychological interventions with variable risk of bias. Adverse events were not reported by any of the studies. None of the studies was sponsored by industry. Below, we summarise the main results from four of six comparisons, and the certainty of these results (based on GRADE). All scale scores are average endpoint scores.\nCognitive Remediation Therapy (CRT) + Treatment-as-Usual (TAU) versus TAU\nTwo studies compared adding CRT to participants' TAU with TAU alone. Global state (CGAS, high = good) was reported by one study. There was no clear difference between treatment groups (MD -4.90, 95% CI -11.05 to 1.25; participants = 50; studies = 1, very low-certainty). Mental state (PANSS, high = poor) was reported by one study. Scores were clearly lower in the TAU group (MD 8.30, 95% CI 0.46 to 16.14; participants = 50; studies = 1; very low-certainty). Clearly more participants in the CRT group showed improvement in cognitive functioning (Memory digit span test) compared to numbers showing improvement in the TAU group (1 study, n = 31, RR 0.58, 95% CI 0.37 to 0.89; very low-certainty). For global functioning (VABS, high = good), our analysis of reported scores showed no clear difference between treatment groups (MD 5.90, 95% CI -3.03 to 14.83; participants = 50; studies = 1; very low-certainty). The number of participants leaving the study early from each group was similar (RR 0.93, 95% CI 0.32 to 2.71; participants = 91; studies = 2; low-certainty).\nGroup Psychosocial Therapy (GPT) + TAU versus TAU\nOne study assessed the effects of adding GPT to participants' usual medication. 
Global state scores (CGAS, high = good) were clearly higher in the GPT group (MD 5.10, 95% CI 1.35 to 8.85; participants = 56; studies = 1; very low-certainty) but there was little or no clear difference between groups for mental state scores (PANSS, high = poor, MD -4.10, 95% CI -8.28 to 0.08; participants = 56; studies = 1; very low-certainty) and no clear difference between groups for numbers of participants leaving the study early (RR 0.43, 95% CI 0.15 to 1.28; participants = 56; studies = 1; very low-certainty).\nCognitive Remediation Programme (CRP) + Psychoeducational Treatment Programme (PTP) versus PTP\nOne study assessed the effects of combining two types of psychological interventions (CRP + PTP) with PTP alone. Global state scores (GAS, high = good) were not clearly different (MD 1.60, 95% CI -6.48 to 9.68; participants = 25; studies = 1; very low-certainty), nor were mental state scores (BPRS total, high = poor, MD -5.40, 95% CI -16.42 to 5.62; participants = 24; studies = 1; very low-certainty) or cognitive functioning scores (SPAN-12, high = good, MD 2.40, 95% CI -2.67 to 7.47; participants = 25; studies = 1; very low-certainty).\nPsychoeducational (PE) + Multifamily Treatment (MFT) Versus Nonstructured Group Therapy (NSGT, all long-term)\nOne study compared (PE + MFT) with NSGT. Analysis of reported global state scores (CGAS, high = good, MD 3.38, 95% CI -4.87 to 11.63; participants = 49; studies = 1; very low-certainty) and mental state scores (PANSS total, high = poor, MD -8.23, 95% CI -17.51 to 1.05; participants = 49; studies = 1; very low-certainty) showed no clear differences. The number of participants needing hospital admission (RR 0.84, 95% CI 0.36 to 1.96; participants = 49; studies = 1) and the number of participants leaving the study early from each group were also similar (RR 0.52, 95% CI 0.10 to 2.60; participants = 55; studies = 1; low-certainty).\nAuthors' conclusions\nMost of our estimates of effect for our main outcomes are equivocal. An effect is suggested for only four outcomes in the SOF tables presented. Compared to TAU, CRT may have a positive effect on cognitive functioning; however, the same study reports data suggesting TAU may have a positive effect on mental state. Another study comparing GPT with TAU reports data suggesting GPT may have a positive effect on global state. However, the estimates of effect for all the main outcomes in our review should be viewed with considerable caution as they are based on data from a small number of studies with variable risk of bias. Further data could change these results, and larger and better-quality studies are needed before any firm conclusions regarding the effects of psychological interventions for adolescents with psychosis can be made.\n\nGiven the provided abstract, please respond to the following query: \"Relative effects of PIs (comparing one type of PI with another type of PI)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our analyses showed no real differences between the different types of PIs."
}
] |
query-laysum
|
8222
|
[
{
"role": "user",
"content": "Abstract: Background\nAcute liver failure is a rare and serious disease. Acute liver failure may be paracetamol-induced or non-paracetamol-induced. Acute liver failure not caused by paracetamol (acetaminophen) has a poor prognosis with limited treatment options. N-acetylcysteine has been successful in treating paracetamol-induced acute liver failure and reduces the risk of needing to undergo liver transplantation. Recent randomised clinical trials have explored whether the benefit can be extrapolated to treat non-paracetamol-related acute liver failure. The American Association for the Study of Liver Diseases (AASLD) 2011 guideline suggested that N-acetylcysteine could improve spontaneous survival when given during early encephalopathy stages for patients with non-paracetamol-related acute liver failure.\nObjectives\nTo assess the benefits and harms of N-acetylcysteine compared with placebo or no N-acetylcysteine, as an adjunct to usual care, in people with non-paracetamol-related acute liver failure.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register (searched 25 June 2020), Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 6) in The Cochrane Library, MEDLINE Ovid (1946 to 25 June 2020), Embase Ovid (1974 to 25 June 2020), Latin American and Caribbean Health Science Information database (LILACS) (1982 to 25 June 2020), Science Citation Index Expanded (1900 to 25 June 2020), and Conference Proceedings Citation Index – Science (1990 to 25 June 2020).\nSelection criteria\nWe included randomised clinical trials that compared N-acetylcysteine at any dose or route with placebo or no intervention in participants with non-paracetamol-induced acute liver failure.\nData collection and analysis\nWe used standard methodological procedures as described in the Cochrane Handbook for Systematic Reviews of Interventions. We conducted meta-analyses and presented results using risk ratios (RR) with 95% confidence intervals (CIs). We quantified statistical heterogeneity by calculating I2. We assessed bias using the Cochrane risk of bias tool and determined the certainty of the evidence using the GRADE approach.\nMain results\nWe included two randomised clinical trials: one with 183 adults and one with 174 children (birth through age 17 years). We classified both trials at overall high risk of bias. One unregistered study in adults is awaiting classification while we are awaiting responses from study authors for details on trial methodology (e.g. randomisation processes).\nWe did not meta-analyse all-cause mortality because of significant clinical heterogeneity in the two trials. For all-cause mortality at 21 days between adults receiving N-acetylcysteine versus placebo, there was inconclusive evidence of effect (N-acetylcysteine 24/81 (29.6%) versus placebo 31/92 (33.7%); RR 0.88, 95% CI 0.57 to 1.37; low certainty evidence). The certainty of the evidence was low due to risk of bias and imprecision. Similarly, for all-cause mortality at one year between children receiving N-acetylcysteine versus placebo, there was inconclusive evidence of effect (25/92 (27.2%) versus 17/92 (18.5%); RR 1.47, 95% CI 0.85 to 2.53; low certainty evidence). We downgraded the certainty of evidence due to very serious imprecision.\nWe did not meta-analyse serious adverse events and liver transplantation at one year due to incomplete reporting and clinical heterogeneity. 
For liver transplantation at 21 days in the trial with adults, there was inconclusive evidence of effect (RR 0.72, 95% CI 0.49 to 1.06; low certainty evidence). We downgraded the certainty of the evidence due to serious risk of bias and imprecision. For liver transplantation at one year in the trial with children, there was inconclusive evidence of effect (RR 1.23, 95% CI 0.84 to 1.81; low certainty evidence). We downgraded the certainty of the evidence due to very serious imprecision.\nThere was inconclusive evidence of effect on serious adverse events in the trial with children (RR 1.25, 95% CI 0.35 to 4.51; low certainty evidence). We downgraded the certainty of the evidence due to very serious imprecision.\nWe did not meta-analyse non-serious adverse events due to clinical heterogeneity. There was inconclusive evidence of effect on non-serious adverse events in adults (RR 1.07, 95% CI 0.79 to 1.45; 173 participants; low certainty evidence) and children (RR 1.19, 95% CI 0.62 to 2.16; 184 participants; low certainty evidence). None of the trials reported the proportion of participants with resolution of encephalopathy and coagulopathy, or health-related quality of life.\nThe National Institutes of Health in the United States funded both trials through grants. One of the trials received additional funding from two hospital foundations' grants. Pharmaceutical companies provided the study drug and matching placebo, but they did not have input into study design or involvement in analysis.\nAuthors' conclusions\nThe available evidence is inconclusive regarding the effect of N-acetylcysteine compared with placebo or no N-acetylcysteine, as an adjunct to usual care, on mortality or transplant rate in non-paracetamol-induced acute liver failure. Current evidence does not support the guideline suggestion to use N-acetylcysteine in adults with non-paracetamol-related acute liver failure, nor the rising use observed in clinical practice. The uncertainty based on the current scant evidence warrants additional randomised clinical trials in people with non-paracetamol-related acute liver failure, evaluating N-acetylcysteine versus placebo, as well as investigations to identify predictors of response and the optimal N-acetylcysteine dose and duration.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Acute liver failure is a condition where previously healthy liver cells are severely damaged. This damage results in loss of important liver functions, which include clearing the body of wastes and toxins and producing proteins to help blood clot. People with acute liver failure are at high risk of death. The most common cause of acute liver failure is ingesting too much paracetamol (acetaminophen). Other reasons for acute liver failures include viral infections (e.g. hepatitis), drug-related liver injury, the body's immune system attacking the liver, or lack of liver blood flow.\nFor acute liver failure that is caused by ingesting too much paracetamol, N-acetylcysteine is a treatment that seems to work well and is the standard treatment for people who ingested too much paracetamol. For all other causes of acute liver failure, transplanting a liver is the usual accepted treatment, but it has many limitations (e.g. organ availability). Certain clinicians have started N-acetylcysteine administration also in patients with acute liver failure not caused by paracetamol."
}
] |
query-laysum
|
8223
|
[
{
"role": "user",
"content": "Abstract: Background\nRadiofrequency (RF) denervation, an invasive treatment for chronic low back pain (CLBP), is used most often for pain suspected to arise from facet joints, sacroiliac (SI) joints or discs. Many (uncontrolled) studies have shown substantial variation in its use between countries and continued uncertainty regarding its effectiveness.\nObjectives\nThe objective of this review is to assess the effectiveness of RF denervation procedures for the treatment of patients with CLBP. The current review is an update of the review conducted in 2003.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, three other databases, two clinical trials registries and the reference lists of included studies from inception to May 2014 for randomised controlled trials (RCTs) fulfilling the inclusion criteria. We updated this search in June 2015, but we have not yet incorporated these results.\nSelection criteria\nWe included RCTs of RF denervation for patients with CLBP who had a positive response to a diagnostic block or discography. We applied no language or date restrictions.\nData collection and analysis\nPairs of review authors independently selected RCTs, extracted data and assessed risk of bias (RoB) and clinical relevance using standardised forms. We performed meta-analyses with clinically homogeneous studies and assessed the quality of evidence for each outcome using the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach.\nMain results\nIn total, we included 23 RCTs (N = 1309), 13 of which (56%) had low RoB. We included both men and women with a mean age of 50.6 years. We assessed the overall quality of the evidence as very low to moderate. Twelve studies examined suspected facet joint pain, five studies disc pain, two studies SI joint pain, two studies radicular CLBP, one study suspected radiating low back pain and one study CLBP with or without suspected radiation. Overall, moderate evidence suggests that facet joint RF denervation has a greater effect on pain compared with placebo over the short term (mean difference (MD) -1.47, 95% confidence interval (CI) -2.28 to -0.67). Low-quality evidence indicates that facet joint RF denervation is more effective than placebo for function over the short term (MD -5.53, 95% CI -8.66 to -2.40) and over the long term (MD -3.70, 95% CI -6.94 to -0.47). Evidence of very low to low quality shows that facet joint RF denervation is more effective for pain than steroid injections over the short (MD -2.23, 95% CI -2.38 to -2.08), intermediate (MD -2.13, 95% CI -3.45 to -0.81), and long term (MD -2.65, 95% CI -3.43 to -1.88). RF denervation used for disc pain produces conflicting results, with no effects for RF denervation compared with placebo over the short and intermediate term, and small effects for RF denervation over the long term for pain relief (MD -1.63, 95% CI -2.58 to -0.68) and improved function (MD -6.75, 95% CI -13.42 to -0.09). Lack of evidence of short-term effectiveness undermines the clinical plausibility of intermediate-term or long-term effectiveness. When RF denervation is used for SI joint pain, low-quality evidence reveals no differences from placebo in effects on pain (MD -2.12, 95% CI -5.45 to 1.21) and function (MD -14.06, 95% CI -30.42 to 2.30) over the short term, and one study shows a small effect on both pain and function over the intermediate term. 
RF denervation is an invasive procedure that can cause a variety of complications. The quality and size of original studies were inadequate to permit assessment of how often complications occur.\nAuthors' conclusions\nThe review authors found no high-quality evidence suggesting that RF denervation provides pain relief for patients with CLBP. Similarly, we identified no convincing evidence to show that this treatment improves function. Overall, the current evidence for RF denervation for CLBP is very low to moderate in quality; high-quality evidence is lacking. High-quality RCTs with larger patient samples are needed, as are data on long-term effects.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to May 2014. This review includes 23 randomised controlled trials with a total of 1309 participants whose chronic low back pain was evaluated with nerve blocks or other diagnostic tests. Both men and women, with a mean age of 50.6 years, were included. Patients with a positive response to a diagnostic block or to discography were given radiofrequency denervation, a placebo or a comparison treatment."
}
] |
query-laysum
|
8224
|
[
{
"role": "user",
"content": "Abstract: Background\nVery early mobilisation (VEM) is performed in some stroke units and recommended in some acute stroke clinical guidelines. However, it is unclear whether very early mobilisation independently improves outcome after stroke.\nObjectives\nTo determine whether very early mobilisation (started as soon as possible, and no later than 48 hours after onset of symptoms) in people with acute stroke improves recovery (primarily the proportion of independent survivors) compared with usual care.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched 31 July 2017). We also systematically searched 19 electronic databases including; CENTRAL; 2017, Issue 7 in the Cochrane Library (searched July 2017), MEDLINE Ovid (1950 to August 2017), Embase Ovid (1980 to August 2017), CINAHL EBSCO (Cumulative Index to Nursing and Allied Health Literature; 1937 to August 2017) , PsycINFO Ovid (1806 to August 2017), AMED Ovid (Allied and Complementary Medicine Database), SPORTDiscus EBSCO (1830 to August 2017). We searched relevant ongoing trials and research registers (searched December 2016), the Chinese medical database, Wanfangdata (searched to November 2016), and reference lists, and contacted researchers in the field.\nSelection criteria\nRandomised controlled trials (RCTs) of people with acute stroke, comparing an intervention group that started out-of-bed mobilisation within 48 hours of stroke, and aimed to reduce time-to-first mobilisation, with or without an increase in the amount or frequency (or both) of mobilisation activities, with usual care, where time-to-first mobilisation was commenced later.\nData collection and analysis\nTwo review authors independently selected trials, extracted data, assessed risk of bias, and applied the GRADE approach to assess the quality of the evidence. The primary outcome was death or poor outcome (dependency or institutionalisation) at the end of scheduled follow-up. Secondary outcomes included death, dependency, institutionalisation, activities of daily living (ADL), extended ADL, quality of life, walking ability, complications (e.g. deep vein thrombosis), patient mood, and length of hospital stay. We also analysed outcomes at three-month follow-up.\nMain results\nWe included nine RCTs with 2958 participants; one trial provided most of the information (2104 participants). The median (range) delay to starting mobilisation after stroke onset was 18.5 (13.1 to 43) hours in the VEM group and 33.3 (22.5 to 71.5) hours in the usual care group. The median difference within trials was 12.7 (4 to 45.6) hours. Other differences in intervention varied between trials; in five trials, the VEM group were also reported to have received more time in therapy, or more mobilisation activity.\nPrimary outcome data were available for 2542 of 2618 (97.1%) participants randomized and followed up for a median of three months. VEM probably led to similar or slightly more deaths and participants who had a poor outcome, compared with delayed mobilisation (51% versus 49%; odds ratio (OR) 1.08, 95% confidence interval (CI) 0.92 to 1.26; P = 0.36; 8 trials; moderate-quality evidence). 
Death occurred in 7% of participants who received delayed mobilisation, and 8.5% of participants who received VEM (OR 1.27, 95% CI 0.95 to 1.70; P = 0.11; 8 trials, 2570 participants; moderate-quality evidence), and the effects on experiencing any complication were unclear (OR 0.88; 95% CI 0.73 to 1.06; P = 0.18; 7 trials, 2778 participants; low-quality evidence). Analysis using outcomes collected only at three-month follow-up did not alter the conclusions.\nThe mean ADL score (measured at end of follow-up, with the 20-point Barthel Index) was higher in those who received VEM compared with the usual care group (mean difference (MD) 1.94, 95% CI 0.75 to 3.13, P = 0.001; 8 trials, 9 comparisons, 2630/2904 participants (90.6%); low-quality evidence), but there was substantial heterogeneity (93%). Effect sizes were smaller for outcomes collected at three-month follow-up, rather than later.\nThe mean length of stay was shorter in those who received VEM compared with the usual care group (MD -1.44, 95% CI -2.28 to -0.60, P = 0.0008; 8 trials, 2532/2618 participants (96.7%); low-quality evidence). Confidence in the answer was limited by the variable definitions of length of stay. The other secondary outcome analyses (institutionalisation, extended activities of daily living, quality of life, walking ability, patient mood) were limited by lack of data.\nSensitivity analyses by trial quality: none of the outcome conclusions were altered if we restricted analyses to trials with the lowest risk of bias (based on method of randomisation, allocation concealment, completeness of follow-up, and blinding of final assessment), or information about the amount of mobilisation.\nSensitivity analysis by intervention characteristics: analyses restricted to trials where the mean VEM time-to-first mobilisation was less than 24 hours showed an odds of death of 1.35 (95% CI 0.99 to 1.83; P = 0.06; I² = 25%; 5 trials). Analyses restricted to the trials that clearly reported a more prolonged out-of-bed activity showed a similar primary outcome (OR 1.14; 0.96 to 1.35; P = 0.13; I² = 28%; 5 trials), and odds of death (OR 1.27; 0.93 to 1.73; P = 0.13; I² = 0%; 4 trials) to the main analysis.\nExploratory network meta-analysis (NMA): we were unable to analyse by the amount of therapy, but low-quality evidence indicated that time-to-first mobilisation at around 24 hours was associated with the lowest odds of death or poor outcome, compared with earlier or later mobilisation.\nAuthors' conclusions\nVEM, which usually involved first mobilisation within 24 hours of stroke onset, did not increase the number of people who survived or made a good recovery after their stroke. VEM may have reduced the length of stay in hospital by about one day, but this was based on low-quality evidence. Based on the potential hazards reported in the single largest RCT, the sensitivity analysis of trials commencing mobilisation within 24 hours, and the NMA, there was concern that VEM commencing within 24 hours may carry an increased risk, at least in some people with stroke. Given the uncertainty around these effect estimates, more detailed research is still required.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review is up-to-date to July 2017."
}
] |
query-laysum
|
8225
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is growing research and policy interest in the potential for using the natural environment to enhance human health and well-being. This resource may be underused as a health promotion tool to address the increasing burden of common health problems such as increased chronic diseases and mental health concerns. Outdoor environmental enhancement and conservation activities (EECA) (for instance unpaid litter picking, tree planting or path maintenance) offer opportunities for physical activity alongside greater connectedness with local environments, enhanced social connections within communities and improved self-esteem through activities that improve the locality which may, in turn, further improve well-being.\nObjectives\nTo assess the health and well-being impacts on adults following participation in environmental enhancement and conservation activities.\nSearch methods\nWe contacted or searched the websites of more than 250 EECA organisations to identify grey literature. Resource limitations meant the majority of the websites were from UK, USA, Canada and Australia. We searched the following databases (initially in October 2012, updated October 2014, except CAB Direct, OpenGrey, SPORTDiscus, and TRIP Database), using a search strategy developed with our project advisory groups (predominantly leaders of EECA-type activities and methodological experts): ASSIA; BIOSIS; British Education Index; British Nursing Index; CAB Abstracts; Campbell Collaboration; Cochrane Public Health Specialized Register; DOPHER; EMBASE; ERIC; Global Health; GreenFILE; HMIC; MEDLINE-in-Process; MEDLINE; OpenGrey; PsychINFO; Social Policy and Practice; SPORTDiscus; TRoPHI; Social Services Abstracts; Sociological Abstracts; The Cochrane Library; TRIP database; and Web of Science. Citation and related article chasing was used. Searches were limited to studies in English published after 1990.\nSelection criteria\nTwo review authors independently screened studies. Included studies examined the impact of EECA on adult health and well-being. Eligible interventions needed to include each of the following: intended to improve the outdoor natural or built environment at either a local or wider level; took place in urban or rural locations in any country; involved active participation; and were NOT experienced through paid employment.\nWe included quantitative and qualitative research. Includable quantitative study designs were: randomised controlled trials (RCTs), cluster RCTs, quasi-RCTs, cluster quasi-RCTs, controlled before-and-after studies, interrupted-time-series, cohort studies (prospective or retrospective), case-control studies and uncontrolled before-and-after studies (uBA). We included qualitative research if it used recognised qualitative methods of data collection and analysis.\nData collection and analysis\nOne reviewer extracted data, and another reviewer checked the data. Two review authors independently appraised study quality using the Effective Public Health Practice Project tool (for quantitative studies) or Wallace criteria (for qualitative studies). Heterogeneity of outcome measures and poor reporting of intervention specifics prevented meta-analysis so we synthesised the results narratively. We synthesised qualitative research findings using thematic analysis.\nMain results\nDatabase searches identified 21,420 records, with 21,304 excluded at title/abstract. Grey literature searches identified 211 records. 
We screened 327 full-text articles, from which we included 21 studies (reported in 28 publications): two case-studies (which were not included in the synthesis due to inadequate robustness), one case-control, one retrospective cohort, five uBA, three mixed-method (uBA, qualitative), and nine qualitative studies. The 19 studies included in the synthesis detailed the impacts on a total of 3,603 participants: 647 from quantitative intervention studies and 2630 from a retrospective cohort study; and 326 from qualitative studies (one not reporting sample size).\nIncluded studies shared the key elements of EECA defined above, but the range of activities varied considerably. Quantitative evaluation methods were heterogeneous. The designs or reporting of quantitative studies, or both, were rated as ‘weak’ quality with high risk of bias due to one or more of the following: inadequate study design, intervention detail, participant selection, outcome reporting and blinding.\nParticipants’ characteristics were poorly reported; eight studies did not report gender or age and none reported socio-economic status. Three quantitative studies reported that participants were referred through health or social services, or due to mental ill health (five quantitative studies); however, participants' engagement routes were often not clear.\nWhilst the majority of quantitative studies (n = 8) reported no effect on one or more outcomes, positive effects were reported in six quantitative studies relating to short-term physiological, mental/emotional health, and quality-of-life outcomes. Negative effects were reported in two quantitative studies; one study reported higher levels of anxiety amongst participants, another reported increased mental health stress.\nThe design or reporting, or both, of the qualitative studies was rated as good in three studies and poor in nine, mainly due to missing detail about participants, methods and interventions. Included qualitative evidence provided rich data about the experience of participation. Thematic analysis identified eight themes supported by at least one good quality study, regarding participants' positive experiences and related to personal/social identity, physical activity, developing knowledge, spirituality, benefits of place, personal achievement, psychological benefits and social contact. There was one report of negative experiences.\nAuthors' conclusions\nThere is little quantitative evidence of positive or negative health and well-being benefits from participating in EECA. However, the qualitative research showed high levels of perceived benefit among participants. Quantitative evidence resulted from study designs with high risk of bias, and qualitative evidence lacked reporting detail. The majority of included studies were programme evaluations, conducted internally or funded by the provider.\nThe conceptual framework illustrates the range of interlinked mechanisms through which people believe they potentially achieve health and well-being benefits, such as opportunities for social contact. It also considers potential moderators and mediators of effect.\nOne main finding of the review is the inherent difficulty associated with generating robust evidence of effectiveness for complex interventions. We developed the conceptual framework to illustrate how people believed they benefited. 
Investigating such mechanisms in a subsequent theory-led review might be one way of examining evidence of effect for these activities.\nThe conceptual framework needs further refinement through linked reviews and more reliable evidence. Future research should use more robust study designs and report key intervention and participant detail.\n\nGiven the provided abstract, please respond to the following query: \"Methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to know if taking part in nature conservation, or other activities that enhance the environment (such as litter-picking), can impact on health. The activities examined aimed to improve the outdoor environment in urban or rural locations. Participants were adult volunteers or were referred by a healthcare professional.\nWe conducted a systematic review. We searched databases and contacted experts to identify all relevant academic and unpublished research (grey literature) from any country."
}
] |
query-laysum
|
8227
|
[
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is an inherited disorder of hemoglobin, resulting in abnormal red blood cells. These are rigid and may block blood vessels leading to acute painful crises and other complications. Recent research has focused on therapies to rehydrate the sickled cells by reducing the loss of water and ions from them. Little is known about the effectiveness and safety of such drugs. This is an updated version of a previously published review.\nObjectives\nTo assess the relative risks and benefits of drugs to rehydrate sickled red blood cells.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register. We also searched online trials registries for any ongoing trials (01 July 2018).\nLast search of the Group's Haemoglobinopathies Trials Register: 08 October 2018.\nSelection criteria\nRandomized or quasi-randomized controlled trials of drugs to rehydrate sickled red blood cells compared to placebo or an alternative treatment.\nData collection and analysis\nBoth authors independently selected studies for inclusion, assessed study quality and extracted data.\nMain results\nOf the 51 studies identified, three met the inclusion criteria, including 524 people with sickle cell disease aged between 12 and 65 years of age. One study tested the effectiveness of zinc sulphate as compared to placebo and the remaining two assessed senicapoc versus placebo. No deaths were seen in any of the studies (low-quality evidence). The zinc sulphate study showed a significant reduction in painful crises (in a total of 145 participants) over one and a half years, mean difference -2.83 (95% confidence interval -3.51 to -2.15) (moderate-quality evidence). However, analysis was restricted due to limited statistical data. Changes to red blood cell parameters and blood counts were inconsistent (very low-quality evidence). No serious adverse events were noted in the study. The Phase II dose-finding study of senicapoc (a Gardos channel blocker) compared to placebo showed that the high dose senicapoc showed significant improvement in change in hemoglobin level, the number and proportion of dense red blood cells, red blood cell count and indices and hematocrit value (very low-quality evidence). The results with low-dose senicapoc were similar to the high-dose senicapoc group but of lesser magnitude. There was no difference in the frequency of painful crises between the three groups (low-quality evidence). A subsequent Phase III study of senicapoc was terminated early since there was no difference observed between the treatment and control groups in the primary end point of painful crises.\nAuthors' conclusions\nWhile the results of zinc for reducing sickle-related crises are encouraging, larger and longer-term multicenter studies are needed to evaluate the effectiveness of this therapy for people with sickle cell disease.\nWhile the Phase II and the prematurely terminated phase III studies of senicapoc showed that the drug improved red blood cell survival (depending on dose), this did not lead to fewer painful crises.\nGiven this is no longer an active area of research, this review will no longer be regularly updated.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sickle cell disease is an inherited condition that causes red blood cells to become sickle shaped when they lose water. This leads to a high risk of the blood vessels becoming blocked. Such blockages can cause pain, stroke and damage to organs. Recent therapies aim to stop the cells becoming sickle shaped by preventing them losing water."
}
] |
query-laysum
|
8228
|
[
{
"role": "user",
"content": "Abstract: Background\nCarotid artery stenosis is an important cause of stroke and transient ischemic attack. Correctly and rapidly identifying patients with symptomatic carotid artery stenosis is essential for adequate treatment with early cerebral revascularization. Doubts about the diagnostic value regarding the accuracy of duplex ultrasound (DUS) and the possibility of using DUS as the single diagnostic test before carotid revascularization are still debated.\nObjectives\nTo estimate the accuracy of DUS in individuals with symptomatic carotid stenosis verified by either digital subtraction angiography (DSA), computed tomography angiography (CTA), or magnetic resonance angiography (MRA).\nSearch methods\nWe searched CRDTAS, CENTRAL, MEDLINE (Ovid), Embase (Ovid), ISI Web of Science, HTA, DARE, and LILACS up to 15 February 2021. We handsearched the reference lists of all included studies and other relevant publications and contacted experts in the field to identify additional studies or unpublished data.\nSelection criteria\nWe included studies assessing DUS accuracy against an acceptable reference standard (DSA, MRA, or CTA) in symptomatic patients. We considered the classification of carotid stenosis with DUS defined with validated duplex velocity criteria, and the NASCET criteria for carotid stenosis measures on DSA, MRA, and CTA. We excluded studies that included < 70% of symptomatic patients; the time between the index test and the reference standard was longer than four weeks or not described, or that presented no objective criteria to estimate carotid stenosis.\nData collection and analysis\nThe review authors independently screened articles, extracted data, and assessed the risk of bias and applicability concerns using the QUADAS-2 domain list. We extracted data with an effort to complete a 2 × 2 table (true positives, true negatives, false positives, and false negatives) for each of the different categories of carotid stenosis and reference standards. We produced forest plots and summary receiver operating characteristic (ROC) plots to summarize the data. Where meta-analysis was possible, we used a bivariate meta-analysis model.\nMain results\nWe identified 25,087 unique studies, of which 22 were deemed eligible for inclusion (4957 carotid arteries). The risk of bias varied considerably across the studies, and studies were generally of moderate to low quality. We narratively described the results without meta-analysis in seven studies in which the criteria used to determine stenosis were too different from the duplex velocity criteria proposed in our protocol or studies that provided insufficient data to complete a 2 × 2 table for at least in one category of stenosis. Nine studies (2770 carotid arteries) presented DUS versus DSA results for 70% to 99% carotid artery stenosis, and two (685 carotid arteries) presented results from DUS versus CTA in this category. Seven studies presented results for occlusion with DSA as the reference standard and three with CTA as the reference standard. Five studies compared DUS versus DSA for 50% to 99% carotid artery stenosis. 
Only one study presented results from 50% to 69% carotid artery stenosis.\nFor DUS versus DSA, for < 50% carotid artery stenosis, the summary sensitivity was 0.63 (95% confidence interval [CI] 0.48 to 0.76) and the summary specificity was 0.99 (95% CI 0.96 to 0.99); for the 50% to 69% range, only one study was included and meta-analysis not performed; for the 50% to 99% range, the summary sensitivity was 0.97 (95% CI 0.95 to 0.98) and the summary specificity was 0.70 (95% CI 0.67 to 0.73); for the 70% to 99% range, the summary sensitivity was 0.85 (95% CI 0.77 to 0.91) and the summary specificity was 0.98 (95% CI 0.74 to 0.90); for occlusion, the summary sensitivity was 0.91 (95% CI 0.81 to 0.97) and the summary specificity was 0.95 (95% CI 0.76 to 0.99).\nFor sensitivity analyses, excluding studies in which participants were selected based on the presence of occlusion on DUS had an impact on specificity: 0.98 (95% CI 0.97 to 0.99). For DUS versus CTA, we found two studies in the range of 70% to 99%; the sensitivity varied from 0.57 to 0.94 and the specificity varied from 0.87 to 0.98. For occlusion, the summary sensitivity was 0.95 (95% CI 0.80 to 0.99) and the summary specificity was 0.91 (95% CI 0.09 to 0.99). For DUS versus MRA, there was one study with results for 50% to 99% carotid artery stenosis, with a sensitivity of 0.88 (95% CI 0.70 to 0.98) and specificity of 0.60 (95% CI 0.15 to 0.95); in the 70% to 99% range, two studies were included, with sensitivity that varied from 0.54 to 0.99 and specificity that varied from 0.78 to 0.89. We could perform only a few of the proposed sensitivity analyses because of the small number of studies included.\nAuthors' conclusions\nThis review provides evidence that the diagnostic accuracy of DUS is high, especially at discriminating between the presence or absence of significant carotid artery stenosis (< 50% or 50% to 99%). This evidence, plus its less invasive nature, supports the early use of DUS for the detection of carotid artery stenosis. The accuracy for 70% to 99% carotid artery stenosis and occlusion is high. Clinicians should exercise caution when using DUS as the single preoperative diagnostic method, and the limitations should be considered. There was little evidence of the accuracy of DUS when compared with CTA or MRA. The results of this review should be interpreted with caution because they are based on studies of low methodological quality, mainly due to the patient selection method. Methodological problems in participant inclusion criteria from the studies discussed above apparently influenced an overestimated estimate of prevalence values. Most of the studies included failed to precisely describe inclusion criteria and previous testing. Future diagnostic accuracy studies should include direct comparisons of the various modalities of diagnostic tests (mainly DUS, CTA, and MRA) for carotid artery stenosis since DSA is no longer considered to be the best method for diagnosing carotid stenosis and less invasive tests are now used as reference standards in clinical practice. Also, for future studies, the participant inclusion criteria require careful attention.\n\nGiven the provided abstract, please respond to the following query: \"How reliable are the results of the studies in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were some problems with how the studies were conducted that could impair the correct estimates of the diagnostic accuracy. Many of the studies were of poor or unclear quality."
}
] |
query-laysum
|
8229
|
[
{
"role": "user",
"content": "Abstract: Background\nMany people with dementia are cared for at home by unpaid informal caregivers, usually family members. Caregivers may experience a range of physical, emotional, financial and social harms, which are often described collectively as caregiver burden. The degree of burden experienced is associated with characteristics of the caregiver, such as gender, and characteristics of the person with dementia, such as dementia stage, and the presence of behavioural problems or neuropsychiatric disturbances. It is a strong predictor of admission to residential care for people with dementia.\nPsychoeducational interventions might prevent or reduce caregiver burden. Overall, they are intended to improve caregivers' knowledge about the disease and its care; to increase caregivers' sense of competence and their ability to cope with difficult situations; to relieve feelings of isolation and allow caregivers to attend to their own emotional and physical needs. These interventions are heterogeneous, varying in their theoretical framework, components, and delivery formats. Interventions that are delivered remotely, using printed materials, telephone or video technologies, may be particularly suitable for caregivers who have difficulty accessing face-to-face services because of their own health problems, poor access to transport, or absence of substitute care. During the COVID-19 pandemic, containment measures in many countries required people to be isolated in their homes, including people with dementia and their family carers. In such circumstances, there is no alternative to remote delivery of interventions.\nObjectives\nTo assess the efficacy and acceptability of remotely delivered interventions aiming to reduce burden and improve mood and quality of life of informal caregivers of people with dementia.\nSearch methods\nWe searched the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group, MEDLINE, Embase and four other databases, as well as two international trials registries, on 10 April 2020. We also examined the bibliographies of relevant review papers and published trials.\nSelection criteria\nWe included only randomised controlled trials that assessed the remote delivery of structured interventions for informal caregivers who were providing care for people with dementia living at home. Caregivers had to be unpaid adults (relatives or members of the person's community). The interventions could be delivered using printed materials, the telephone, the Internet or a mixture of these, but could not involve any face-to-face contact with professionals. We categorised intervention components as information, training or support. Information interventions included two key elements: (i) they provided standardised information, and (ii) the caregiver played a passive role. Support interventions promoted interaction with other people (professionals or peers). Training interventions trained caregivers in practical skills to manage care. We excluded interventions that were primarily individual psychotherapy.\nOur primary outcomes were caregiver burden, mood, health-related quality of life and dropout for any reason. 
Secondary outcomes were caregiver knowledge and skills, use of health and social care resources, admission of the person with dementia to institutional care, and quality of life of the person with dementia.\nData collection and analysis\nStudy selection, data extraction and assessment of the risk of bias in included studies were done independently by two review authors. We used the Template for Intervention Description and Replication (TIDieR) to describe the interventions. We conducted meta-analyses using a random-effects model to derive estimates of effect size. We used GRADE methods to describe our degree of certainty about effect estimates.\nMain results\nWe included 26 studies in this review (2367 participants). We compared (1) interventions involving training, support or both, with or without information (experimental interventions) with usual treatment, waiting list or attention control (12 studies, 944 participants); and (2) the same experimental interventions with provision of information alone (14 studies, 1423 participants).\nWe downgraded evidence for study limitations and, for some outcomes, for inconsistency between studies. There was a frequent risk of bias from self-rating of subjective outcomes by participants who were not blind to the intervention. Randomisation methods were not always well-reported and there was potential for attrition bias in some studies. Therefore, all evidence was of moderate or low certainty.\nIn the comparison of experimental interventions with usual treatment, waiting list or attention control, we found that the experimental interventions probably have little or no effect on caregiver burden (nine studies, 597 participants; standardised mean difference (SMD) -0.06, 95% confidence interval (CI) -0.35 to 0.23); depressive symptoms (eight studies, 638 participants; SMD -0.05, 95% CI -0.22 to 0.12); or health-related quality of life (two studies, 311 participants; SMD 0.10, 95% CI -0.13 to 0.32). The experimental interventions probably result in little or no difference in dropout for any reason (eight studies, 661 participants; risk ratio (RR) 1.15, 95% CI 0.87 to 1.53).\nIn the comparison of experimental interventions with a control condition of information alone, we found that experimental interventions may result in a slight reduction in caregiver burden (nine studies, 650 participants; SMD -0.24, 95% CI -0.51 to 0.04); probably result in a slight improvement in depressive symptoms (11 studies, 1100 participants; SMD -0.25, 95% CI -0.43 to -0.06); may result in little or no difference in caregiver health-related quality of life (two studies, 257 participants; SMD -0.03, 95% CI -0.28 to 0.21); and probably result in an increase in dropouts for any reason (12 studies, 1266 participants; RR 1.51, 95% CI 1.04 to 2.20).\nAuthors' conclusions\nRemotely delivered interventions including support, training or both, with or without information, may slightly reduce caregiver burden and improve caregiver depressive symptoms when compared with provision of information alone, but not when compared with usual treatment, waiting list or attention control. They seem to make little or no difference to health-related quality of life. Caregivers receiving training or support were more likely than those receiving information alone to drop out of the studies, which might limit applicability. 
The efficacy of these interventions may depend on the nature and availability of usual services in the study settings.\n\nGiven the provided abstract, please respond to the following query: \"How reliable are these results?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that most studies were well-conducted, but because most outcomes were subjective, there was a risk that the expectations of the caregivers and researchers could have influenced results. For some outcomes, there were inconsistent results among studies. Overall, our confidence in our findings was moderate or low so the results could be affected by further research."
}
] |
query-laysum
|
8230
|
[
{
"role": "user",
"content": "Abstract: Background\nWalk-in clinics are growing in popularity around the world as a substitute for traditional medical care delivered in physician offices and emergency rooms, but their clinical efficacy is unclear.\nObjectives\nTo assess the quality of care and patient satisfaction of walk-in clinics compared to that of traditional physician offices and emergency rooms for people who present with basic medical complaints for either acute or chronic issues.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, six other databases, and two trials registers on 22 March 2016 together with reference checking, citation searching, and contact with study authors to identify additional studies. We applied no restrictions on language, publication type, or publication year.\nSelection criteria\nStudy design: randomized trials, non-randomized trials, and controlled before-after studies. Population: standalone physical clinics not requiring advance appointments or registration, that provided basic medical care without expectation of follow-up. Comparisons: traditional primary care practices or emergency rooms.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane and the Cochrane Effective Practice and Organisation of Care (EPOC) Group.\nMain results\nThe literature search identified 6587 citations, of which we considered 65 to be potentially relevant. We reviewed the abstracts of all 65 potentially relevant studies and retrieved the full texts of 12 articles thought to fit our study criteria. However, following independent author assessment of the full texts, we excluded all 12 articles.\nAuthors' conclusions\nControlled trial evidence about the mortality, morbidity, quality of care, and patient satisfaction of walk-in clinics is currently not available.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Frequently offering extended hours, shorter wait times, and lower prices, retail clinics have become popular alternatives to traditional physician offices and emergency rooms for people with low acuity illnesses. Despite their growing popularity, walk-in clinics have been controversial. Surveys have shown that some doctors in the UK, Canada, and Australia are concerned that walk-in clinics may provide lower quality care than physician offices. In the US, prominent physician groups have voiced similar concerns. A systematic review of the research literature on the quality and patient satisfaction of walk-in clinics as compared to physician offices and emergency rooms would give patients, practitioners, and health policymakers an objective understanding of this increasingly important but controversial healthcare resource."
}
] |
query-laysum
|
8231
|
[
{
"role": "user",
"content": "Abstract: Background\nTorture is widespread, with potentially broad and long-lasting impact across physical, psychological, social and other areas of life. Its complex and diverse effects interact with ethnicity, gender, and refugee experience. Health and welfare agencies offer varied rehabilitation services, from conventional mental health treatment to eclectic or needs-based interventions. This review is needed because relatively little outcome research has been done in this field, and no previous systematic review has been conducted. Resources are scarce, and the challenges of providing services can be considerable.\nObjectives\nTo assess beneficial and adverse effects of psychological, social and welfare interventions for torture survivors, and to compare these effects with those reported by active and inactive controls.\nSearch methods\nRandomised controlled trials (RCTs) were identified through a search of PsycINFO, MEDLINE, EMBASE, Web of Science, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Depression, Anxiety and Neurosis Specialised Register (CCDANCTR), the Latin American and Caribbean Health Science Information Database (LILACS), the Open System for Information on Grey Literature in Europe (OpenSIGLE), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and Published International Literature On Traumatic Stress (PILOTS) all years to 11 April 2013; searches of Cochrane resources, international trial registries and the main biomedical databases were updated on 20 June 2014. We also searched the Online Library of Dignity (Danish Institute against Torture), reference lists of reviews and included studies and the most frequently cited journals, up to April 2013 but not repeated for 2014. Investigators were contacted to provide updates or details as necessary.\nSelection criteria\nFull publications of RCTs or quasi-RCTs of psychological, social or welfare interventions for survivors of torture against any active or inactive comparison condition.\nData collection and analysis\nWe included all major sources of grey literature in our search and used standard methodological procedures as expected by The Cochrane Collaboration for collecting data, evaluating risk of bias and using GRADE (Grades of Recommendation, Assessment, Development and Evaluation) methods to assess the quality of evidence.\nMain results\nNine RCTs were included in this review. All were of psychological interventions; none provided social or welfare interventions. The nine trials provided data for 507 adults; none involved children or adolescents. Eight of the nine studies described individual treatment, and one discussed group treatment. Six trials were conducted in Europe, and three in different African countries. Most people were refugees in their thirties and forties; most met the criteria for post-traumatic stress disorder (PTSD) at the outset. Four trials used narrative exposure therapy (NET), one cognitive-behavioural therapy (CBT) and the other four used mixed methods for trauma symptoms, one of which included reconciliation methods. Five interventions were compared with active controls, such as psychoeducation; four used treatment as usual or waiting list/no treatment; we analysed all control conditions together. Duration of therapy varied from one hour to longer than 20 hours with a median of around 12 to 15 hours. 
All trials reported effects on distress and on PTSD, and two reported on quality of life. Five studies followed up participants for at least six months.\nNo immediate benefits of psychological therapy were noted in comparison with controls in terms of our primary outcome of distress (usually depression), nor for PTSD symptoms, PTSD caseness, or quality of life. At six-month follow-up, three NET and one CBT study (86 participants) showed moderate effect sizes for intervention over control in reduction of distress (standardised mean difference (SMD) -0.63, 95% confidence interval (CI) -1.07 to -0.19) and of PTSD symptoms (SMD -0.52, 95% CI -0.97 to -0.07). However, the quality of evidence was very low, and risk of bias resulted from researcher/therapist allegiance to treatment methods, effects of uncertain asylum status of some people and real-time non-standardised translation of assessment measures. No measures of adverse events were described, nor of participation, social functioning, quantity of social or family relationships, proxy measures by third parties or satisfaction with treatment. Too few studies were identified for review authors to attempt sensitivity analyses.\nAuthors' conclusions\nVery low-quality evidence suggests no differences between psychological therapies and controls in terms of immediate effects on post-traumatic symptoms, distress or quality of life; however, NET and CBT were found to confer moderate benefits in reducing distress and PTSD symptoms over the medium term (six months after treatment). Evidence was of very low quality, mainly because non-standardised assessment methods using interpreters were applied, and sample sizes were very small. Most eligible trials also revealed medium to high risk of bias. Further, attention to the cultural appropriateness of interventions or to their psychometric qualities was inadequate, and assessment measures used were unsuitable. As such, these findings should be interpreted with caution.\nNo data were available on whether symptom reduction enabled improvements in quality of life, participation in community life, or in social and family relationships in the medium term. Details of adverse events and treatment satisfaction were not available immediately after treatment nor in the medium term. Future research should aim to address these gaps in the evidence and should include larger sample sizes when possible. Problems of torture survivors need to be defined far more broadly than by PTSD symptoms, and recognition given to the contextual influences of being a torture survivor, including as an asylum seeker or refugee, on psychological and social health.\n\nGiven the provided abstract, please respond to the following query: \"What should happen next?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Studies need to identify broader aims for intervention and should continue to test a range of interventions consistent with the breadth of problems of torture survivors. Investigators also need to assess changes with validity across cultures and languages."
}
] |
query-laysum
|
8232
|
[
{
"role": "user",
"content": "Abstract: Background\nIt remains unclear whether people with non-muscle invasive bladder cancer (NMIBC) benefit from intravesical gemcitabine compared to other agents in the primary or recurrent setting following transurethral resection of a bladder tumor. This is an update of a Cochrane Review first published in 2012. Since that time, several randomized controlled trials (RCTs) have been reported, making this update relevant.\nObjectives\nTo assess the comparative effectiveness and toxicity of intravesical gemcitabine instillation for NMIBC.\nSearch methods\nWe performed a comprehensive literature search of the Cochrane Library, MEDLINE, Embase, four other databases, trial registries, and conference proceedings to 11 September 2020, with no restrictions on the language or status of publication.\nSelection criteria\nWe included RCTs in which participants received intravesical gemcitabine for primary or recurrent NMIBC.\nData collection and analysis\nTwo review authors independently assessed the included studies and extracted data for the primary outcomes: time to recurrence, time to progression, grade III to V adverse events determined by the Common Terminology Criteria for Adverse Events version 5.0 (CTCAE v5.0), and the secondary outcomes: time to death from bladder cancer, time to death from any cause, grade I or II adverse events determined by the CTCAE v5.0 and disease-specific quality of life. We performed statistical analyses using a random-effects model and rated the certainty of the evidence using GRADE.\nMain results\nWe included seven studies with 1222 participants with NMIBC across five comparisons. This abstract focuses on the primary outcomes of the three most clinically relevant comparisons.\n1. Gemcitabine versus saline: based on two years' to four years' follow-up, gemcitabine may reduce the risk of recurrence over time compared to saline (39% versus 47% recurrence rate, hazard ratio [HR] 0.77, 95% confidence interval [CI] 0.54 to 1.09; studies = 2, participants = 734; I2 = 49%; low-certainty evidence), but the CI included the possibility of no effect. Gemcitabine may result in little to no difference in the risk of progression over time compared to saline (4.6% versus 4.8% progression rate, HR 0.96, 95% CI 0.19 to 4.71; studies = 2, participants = 654; I2 = 53%; low-certainty evidence). Gemcitabine may result in little to no difference in the CTCAE grade III to V adverse events compared to saline (5.9% versus 4.7% adverse events rate, risk ratio [RR] 1.26, 95% CI 0.58 to 2.75; studies = 2, participants = 668; I2 = 24%; low-certainty evidence).\n2. Gemcitabine versus mitomycin: based on three years' follow-up (studies = 1, participants = 109), gemcitabine may reduce the risk of recurrence over time compared to mitomycin (17% versus 40% recurrence rate, HR 0.36, 95% CI 0.19 to 0.69; low-certainty evidence). Gemcitabine may reduce the risk of progression over time compared to mitomycin (11% versus 18% progression rate, HR 0.57, 95% CI 0.32 to 1.01; low-certainty evidence), but the CI included the possibility of no effect. We are very uncertain about the effect of gemcitabine on the CTCAE grade III to V adverse events compared to mitomycin (RR 0.51, 95% CI 0.13 to 1.93; very low-certainty evidence). The analysis was only based on recurrent NMIBC.\n3. 
Gemcitabine versus Bacillus Calmette-Guérin (BCG) for recurrent (one-course BCG failure) high-risk NMIBC: based on 6 months' to 22 months' follow-up (studies = 1, participants = 80), gemcitabine may reduce the risk of recurrence compared to BCG (41% versus 97% recurrence rate, HR 0.15, 95% CI 0.09 to 0.26; low-certainty evidence) and progression over time (16% versus 33% progression rate, HR 0.45, 95% CI 0.27 to 0.76; low-certainty evidence). We are very uncertain about the effect of gemcitabine on the CTCAE grade III to V adverse events compared to BCG (RR 1.00, 95% CI 0.21 to 4.66; very low-certainty evidence).\nIn addition, the review provides information on the comparison of gemcitabine versus BCG and gemcitabine versus one-third dose BCG.\nAuthors' conclusions\nBased on findings of this review, gemcitabine may have a more favorable impact on recurrence and progression-free survival than mitomycin but we are very uncertain as to how major adverse events compare. The same is true when comparing gemcitabine to BCG in individuals with high risk disease who have previously failed BCG. The underlying low- to very low-certainty evidence indicates that our confidence in these results is limited; the true effects may be substantially different from these findings; therefore, better quality studies are needed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "NMIBC can be taken out of the bladder using small instruments and a light source (called transurethral surgery). However, these tumors often come back (recurrence) with an aggressive feature such as spread into the deep layers of the bladder. To prevent this, we can put various medicines into the bladder. In this review, we wanted to know whether gemcitabine (a chemotherapy medication) was better or worse than other medicines."
}
] |
query-laysum
|
8233
|
[
{
"role": "user",
"content": "Abstract: Background\nAnterior cruciate ligament (ACL) tears are frequently treated with surgical reconstruction with grafts, frequently patella tendon or hamstrings. Interference screws are often used to secure the graft in bone tunnels in the femur and tibia. This review examines whether bioabsorbable interference screws give better results than metal interference screws when used for graft fixation in ACL reconstruction.\nObjectives\nTo assess the effects (benefits and harms) of bioabsorbable versus metallic interference screws for graft fixation in ACL reconstruction.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, CENTRAL (the Cochrane Library), MEDLINE, Embase, LILACS, trial registers and reference lists of articles. Date of search: January 2016.\nSelection criteria\nWe included randomised controlled trials and quasi-randomised trials comparing bioabsorbable with metallic interferences screws in ACL reconstruction. The main outcomes sought were subjective-rated knee function, failure of treatment, and activity level.\nData collection and analysis\nAt least two review authors selected eligible trials, independently assessed risk of bias, and cross-checked data. Data were pooled whenever relevant and possible. Requests for further information were sent to the original study authors.\nMain results\nWe included 12 trials (11 randomised and one quasi-randomised) involving a total of 944 participants, and reporting follow-up results for 774. Participants in the 12 trials underwent ACL reconstruction with either hamstring tendon grafts (five trials) or patellar tendon grafts (seven trials). Trials participants were randomly allocated to bioabsorbable or metallic interference screws for graft fixation in both femur and tibia (seven trials); femur only (three trials); tibia only (one trial); location was not reported in the remaining trial. A variety of materials was used for the bioabsorbable screws, Poly-L-lactic acid (PLLA) being the most common. The metallic screws, where reported, were titanium.\nAll trials were at high risk of bias, which invariably included performance bias. Seven trials were at high risk of attrition bias and eight at high risk of reporting bias. The quasi-randomised trial was assessed as being at high risk for selection bias. Based on these study limitations and insufficiency of the available data, we judged the quality of evidence for all outcomes was very low.\nThe majority of the available data for patient-reported knee function was presented as Lysholm scores (0 to 100; higher scores = better function). There was very low quality but consistent evidence of no clinically important differences between the two groups in Lysholm scores at 12 months follow-up (mean difference (MD) -0.08, 95% confidence interval (CI) -1.48 to 1.32; three trials, 168 participants); 24 months (MD 0.35, 95% CI -1.27 to 1.98; three trials, 113 participants) or five or more years follow-up (MD 1.23, 95% CI -2.00 to 4.47; two trials, 71 participants). 
This lack of between-group differences was also reported for Lysholm scores in several trials that did not provide sufficient data for pooling, as well as for other self-reported knee function scores reported in several trials.\nTreatment failure was represented by the summed data for implant breakage during surgery and major postoperative complications (implant failure, graft rupture, symptomatic foreign body reactions, effusion and treated arthrofibrosis and related conditions) that were usually described in the trial reports as requiring further substantive treatment. There is very low-quality evidence of greater treatment failure in the bioabsorbable screw group (60/451 versus 29/434; risk ratio (RR) 1.94 favouring metallic screw fixation, 95% CI 1.29 to 2.93; 885 participants, 11 studies). In a population with an assumed risk (based on the median control group risk) of 56 participants per 1000 having treatment failure after metallic screw fixation, this equates to 53 more (95% CI 17 to 108 more) per 1000 participants having treatment failure after bioabsorbable screw fixation. All 16 intraoperative complications in the bioabsorbable screw group were implant breakages upon screw insertion. Treatment failure defined as postoperative complications only still favoured the metallic screw group, but the 95% CI also included the potential for a greater risk of treatment failure after metallic screw fixation: 44/451 versus 29/434; RR 1.44, 95% CI 0.93 to 2.23. Based on the assumed risk of 56 participants per 1000 having postoperative treatment failure after metallic screw fixation, this equates to 25 more (95% CI 4 fewer to 69 more) per 1000 participants having this outcome after bioabsorbable screw fixation.\nThere was very low-quality evidence of very similar activity levels in the two groups at 12 and 24 months' follow-up measured via the Tegner score (0 to 10; higher scores = greater activity): 12 months (MD 0.08, 95% CI -0.39 to 0.55; 122 participants, two studies); 24 months (MD 0.01, 95% CI -0.54 to 0.57; 72 participants, two studies).\nAuthors' conclusions\nThere is very low-quality evidence of no difference in self-reported knee function and levels of activity between bioabsorbable and metallic interference screws for graft fixation in ACL reconstruction. There is very low-quality evidence that bioabsorbable screws may be associated with more overall treatment failures, including implant breakage during surgery. Further research does not appear to be a priority, but if undertaken, should also examine costs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found evidence that self-reported measures of knee function were similar at one, two and five of more years in those treated whose grafts were fixed with bioabsorbable screws to those whose grafts were fixed with metal screws. Similarly, no differences were seen between the two types of screws in levels of activity at one and two years. However, there was evidence that bioabsorbable screws may be associated with more treatment failures. These include screw breakage during surgery, and greater numbers of graft rupture."
}
] |
query-laysum
|
8234
|
[
{
"role": "user",
"content": "Abstract: Background\nCorticosteroids are often preferred over enteral nutrition (EN) as induction therapy for Crohn's disease (CD). Prior meta-analyses suggest that corticosteroids are superior to EN for induction of remission in CD. Treatment failures in EN trials are often due to poor compliance, with dropouts frequently due to poor acceptance of a nasogastric tube and unpalatable formulations. This systematic review is an update of a previously published Cochrane review.\nObjectives\nTo evaluate the effectiveness and safety of exclusive EN as primary therapy to induce remission in CD and to examine the importance of formula composition on effectiveness.\nSearch methods\nWe searched MEDLINE, Embase and CENTRAL from inception to 5 July 2017. We also searched references of retrieved articles and conference abstracts.\nSelection criteria\nRandomized controlled trials involving patients with active CD were considered for inclusion. Studies comparing one type of EN to another type of EN or conventional corticosteroids were selected for review.\nData collection and analysis\nData were extracted independently by at least two authors. The primary outcome was clinical remission. Secondary outcomes included adverse events, serious adverse events and withdrawal due to adverse events. For dichotomous outcomes, we calculated the risk ratio (RR) and 95% confidence interval (CI). A random-effects model was used to pool data. We performed intention-to-treat and per-protocol analyses for the primary outcome. Heterogeneity was explored using the Chi2 and I2 statistics. The studies were separated into two comparisons: one EN formulation compared to another EN formulation and EN compared to corticosteroids. Subgroup analyses were based on formula composition and age. Sensitivity analyses included abstract publications and poor quality studies. We used the Cochrane risk of bias tool to assess study quality. We used the GRADE criteria to assess the overall quality of the evidence supporting the primary outcome and selected secondary outcomes.\nMain results\nTwenty-seven studies (1,011 participants) were included. Three studies were rated as low risk of bias. Seven studies were rated as high risk of bias and 17 were rated as unclear risk of bias due to insufficient information. Seventeen trials compared different formulations of EN, 13 studies compared one or more elemental formulas to a non-elemental formula, three studies compared EN diets of similar protein composition but different fat composition, and one study compared non-elemental diets differing in glutamine enrichment. Meta-analysis of 11 trials (378 participants) demonstrated no difference in remission rates. Sixty-four per cent (134/210) of patients in the elemental group achieved remission compared to 62% (105/168) of patients in the non-elemental group (RR 1.02, 95% CI 0.88 to 1.18; GRADE very low quality). A per-protocol analysis (346 participants) produced similar results (RR 1.04, 95% CI 0.91 to 1.18). Subgroup analyses performed to evaluate the different types of elemental and non-elemental diets (elemental, semi-elemental and polymeric) showed no differences in remission rates. An analysis of 7 trials including 209 patients treated with EN formulas of differing fat content (low fat: < 20 g/1000 kCal versus high fat: > 20 g/1000 kCal) demonstrated no difference in remission rates (RR 1.03; 95% CI 0.85 to 1.26). 
Very low fat content (< 3 g/1000 kCal) and very low long chain triglyceride content demonstrated higher remission rates than higher content EN formulas. There was no difference between elemental and non-elemental diets in adverse event rates (RR 1.00, 95% CI 0.63 to 1.60; GRADE very low quality), or withdrawals due to adverse events (RR 1.29, 95% CI 0.80 to 2.09; GRADE very low quality). Common adverse events included nausea, vomiting, diarrhea and bloating.\nTen trials compared EN to steroid therapy. Meta-analysis of eight trials (223 participants) demonstrated no difference in remission rates between EN and steroids. Fifty per cent (111/223) of patients in the EN group achieved remission compared to 72% (133/186) of patients in the steroid group (RR 0.77, 95% CI 0.58 to 1.03; GRADE very low quality). Subgroup analysis by age showed a difference in remission rates for adults but not for children. In adults, 45% (87/194) of EN patients achieved remission compared to 73% (116/158) of steroid patients (RR 0.65, 95% CI 0.52 to 0.82; GRADE very low quality). In children, 83% (24/29) of EN patients achieved remission compared to 61% (17/28) of steroid patients (RR 1.35, 95% CI 0.92 to 1.97; GRADE very low quality). A per-protocol analysis produced similar results (RR 0.93, 95% CI 0.75 to 1.14). The per-protocol subgroup analysis showed a difference in remission rates for both adults (RR 0.82, 95% CI 0.70 to 0.95) and children (RR 1.43, 95% CI 1.03 to 1.97). There was no difference in adverse event rates (RR 1.39, 95% CI 0.62 to 3.11; GRADE very low quality). However, patients on EN were more likely to withdraw due to adverse events than those on steroid therapy (RR 2.95, 95% CI 1.02 to 8.48; GRADE very low quality). Common adverse events reported in the EN group included heartburn, flatulence, diarrhea and vomiting, and for steroid therapy acne, moon facies, hyperglycemia, muscle weakness and hypoglycemia. The most common reason for withdrawal was inability to tolerate the EN diet.\nAuthors' conclusions\nVery low quality evidence suggests that corticosteroid therapy may be more effective than EN for induction of clinical remission in adults with active CD. Very low quality evidence also suggests that EN may be more effective than steroids for induction of remission in children with active CD. Protein composition does not appear to influence the effectiveness of EN for the treatment of active CD. EN should be considered in pediatric CD patients or in adult patients who can comply with nasogastric tube feeding or perceive the formulations to be palatable, or when steroid side effects are not tolerated or better avoided. Further research is required to confirm the superiority of corticosteroids over EN in adults. Further research is required to confirm the benefit of EN in children. More effort should be made by industry to develop palatable polymeric formulations that can be delivered without use of a nasogastric tube, as this may lead to increased patient adherence to this therapy.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The researchers found twenty-seven studies (1,011 participants) that fulfilled the search criteria. Eleven of these studies, which included 378 patients, compared elemental to non-elemental diets at producing remission in Crohn's disease. Eight studies, which included 352 patients, investigated enteral nutrition compared to steroid therapy at inducing remission in Crohn's disease. The researchers searched the medical literature extensively up to July 5, 2017.\nVery low quality evidence suggests that steroids may be more effective than enteral nutrition for induction of remission in adults with active Crohn's disease. Very low quality evidence also suggests that enteral nutrition may be more effective than steroids for induction of remission in children with active Crohn's disease. There was no difference in remission rates between elemental and non-elemental diets. An increase in side effects was not seen with elemental diets compared to non-elemental diets, nor with enteral nutrition compared to steroids. Common side effects experienced with enteral nutrition included vomiting, diarrhea, heartburn and flatulence . Common side effects associated with steroid use included acne, moon facies, muscle weakness, hyperglycemia (high blood sugar) and hypoglycemia (low blood sugar) . Patients on enteral nutrition were more likely to withdraw from the study due to side effects than those on steroids. The most common reason for study withdrawal was inability to tolerate the taste of the enteral nutrition diet. Enteral nutrition should be considered in pediatric Crohn's patients or in adult patients who can comply with nasogastric tube feeding or perceive the formulations to be palatable, or when steroid side effects are not tolerated or better avoided. Further research is required to confirm the superiority of corticosteroids over EN in adults. Further research is required to confirm the benefit of EN in children. More effort from industry should be taken to develop palatable polymeric formulations that can be delivered without use of a nasogastric tube as this may lead to increased patient compliance with this therapy."
}
] |
query-laysum
|
8235
|
[
{
"role": "user",
"content": "Abstract: Background\nThe pattern of infections among neutropenic patients with cancer has shifted in the last decades to a predominance of gram-positive infections. Some of these gram-positive bacteria are increasingly resistant to beta-lactams and necessitate specific antibiotic treatment.\nObjectives\nTo assess the effectiveness of empirical anti-gram-positive (antiGP) antibiotic treatment for febrile neutropenic patients with cancer in terms of mortality and treatment failure. To assess the rate of resistance development, further infections and adverse events associated with additional antiGP treatment.\nSearch methods\nFor the review update we searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2017, Issue 2), MEDLINE (May 2012 to 2017), Embase (May 2012 to 2017), LILACS (2012 to 2017), conference proceedings, ClinicalTrials.gov trial registry, and the references of the included studies. We contacted the first authors of all included and potentially relevant trials.\nSelection criteria\nRandomised controlled trials (RCTs) comparing one antibiotic regimen versus the same regimen with the addition of an antiGP antibiotic for the treatment of febrile neutropenic patients with cancer.\nData collection and analysis\nTwo review authors independently assessed trial eligibility and risk of bias, and extracted all data. Risk ratios (RR) with 95% confidence intervals (CIs) were calculated. A random-effects model was used for all comparisons showing substantial heterogeneity (I2 > 50%). Outcomes were extracted by intention-to-treat and the analysis was patient-based whenever possible.\nMain results\nFourteen trials and 2782 patients or episodes were included. Empirical antiGP antibiotics were tested at the onset of treatment in 12 studies, and for persistent fever in two studies. The antiGP treatment was a glycopeptide in nine trials. Eight studies were assessed in the overall mortality comparison and no significant difference was seen between the comparator arms, RR of 0.90 (95% CI 0.64 to 1.25; 8 studies, 1242 patients; moderate-quality data). Eleven trials assessed failure, including modifications as failures, while seven assessed overall failure disregarding treatment modifications. Failure with modifications was reduced, RR of 0.72 (95% CI 0.65 to 0.79; 11 studies, 2169 patients; very low-quality data), while overall failure was the same, RR of 1.00 (95% CI 0.79 to 1.27; 7 studies, 943 patients; low-quality data). Sensitivity analysis for allocation concealment and incomplete outcome data did not change the results. Failure among patients with gram-positive infections was reduced with antiGP treatment, RR of 0.56 (95% CI 0.38 to 0.84, 5 studies, 175 patients), although, mortality among these patients was not changed.\nData regarding other patient subgroups likely to benefit from antiGP treatment were not available. Glycopeptides did not increase fungal superinfection rates and were associated with a reduction in documented gram-positive superinfections. Resistant colonisation was not documented in the studies.\nAuthors' conclusions\nBased on very low- or low-quality evidence using the GRADE approach and overall low risk of bias, the current evidence shows that the empirical routine addition of antiGP treatment, namely glycopeptides, does not improve the outcomes of febrile neutropenic patients with cancer.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "we assessed whether the addition of specific anti gram-positive antibiotics prior to identification of a causative bacteria improves survival and cure among cancer patients with fever and neutropenia."
}
] |
query-laysum
|
8236
|
[
{
"role": "user",
"content": "Abstract: Background\nBiologic disease-modifying anti-rheumatic drugs (DMARDs: referred to as biologics) are effective in treating rheumatoid arthritis (RA), however there are few head-to-head comparison studies. Our systematic review, standard meta-analysis and network meta-analysis (NMA) updates the 2009 Cochrane overview, 'Biologics for rheumatoid arthritis (RA)' and adds new data. This review is focused on biologic or tofacitinib therapy in people with RA who had previously been treated unsuccessfully with biologics.\nObjectives\nTo compare the benefits and harms of biologics (abatacept, adalimumab, anakinra, certolizumab pegol, etanercept, golimumab, infliximab, rituximab, tocilizumab) and small molecule tofacitinib versus comparator (placebo or methotrexate (MTX)/other DMARDs) in people with RA, previously unsuccessfully treated with biologics.\nMethods\nOn 22 June 2015 we searched for randomized controlled trials (RCTs) in CENTRAL, MEDLINE, and Embase; and trials registries (WHO trials register, Clinicaltrials.gov). We carried out article selection, data extraction, and risk of bias and GRADE assessments in duplicate. We calculated direct estimates with 95% confidence intervals (CI) using standard meta-analysis. We used a Bayesian mixed treatment comparison (MTC) approach for NMA estimates with 95% credible intervals (CrI). We converted odds ratios (OR) to risk ratios (RR) for ease of understanding. We have also presented results in absolute measures as risk difference (RD) and number needed to treat for an additional beneficial outcome (NNTB). Outcomes measured included four benefits (ACR50, function measured by Health Assessment Questionnaire (HAQ) score, remission defined as DAS < 1.6 or DAS28 < 2.6, slowing of radiographic progression) and three harms (withdrawals due to adverse events, serious adverse events, and cancer).\nMain results\nThis update includes nine new RCTs for a total of 12 RCTs that included 3364 participants. The comparator was placebo only in three RCTs (548 participants), MTX or other traditional DMARD in six RCTs (2468 participants), and another biologic in three RCTs (348 participants). Data were available for four tumor necrosis factor (TNF)-biologics: (certolizumab pegol (1 study; 37 participants), etanercept (3 studies; 348 participants), golimumab (1 study; 461 participants), infliximab (1 study; 27 participants)), three non-TNF biologics (abatacept (3 studies; 632 participants), rituximab (2 studies; 1019 participants), and tocilizumab (2 studies; 589 participants)); there was only one study for tofacitinib (399 participants). The majority of the trials (10/12) lasted less than 12 months.\nWe judged 33% of the studies at low risk of bias for allocation sequence generation, allocation concealment and blinding, 25% had low risk of bias for attrition, 92% were at unclear risk for selective reporting; and 92% had low risk of bias for major baseline imbalance. We downgraded the quality of the evidence for most outcomes to moderate or low due to study limitations, heterogeneity, or rarity of direct comparator trials.\nBiologic monotherapy versus placebo\nCompared to placebo, biologics were associated with clinically meaningful and statistically significant improvement in RA as demonstrated by higher ACR50 and RA remission rates. RR was 4.10 for ACR50 (95% CI 1.97 to 8.55; moderate-quality evidence); absolute benefit RD 14% (95% CI 6% to 21%); and NNTB = 8 (95% CI 4 to 23). 
RR for RA remission was 13.51 (95% CI 1.85 to 98.45, one study available; moderate-quality evidence); absolute benefit RD 9% (95% CI 5% to 13%); and NNTB = 11 (95% CI 3 to 136). Results for withdrawals due to adverse events and serious adverse events did not show any statistically significant or clinically meaningful differences. There were no studies available for analysis for function measured by HAQ, radiographic progression, or cancer outcomes. There were not enough data for any of the outcomes to look at subgroups.\nBiologic + MTX versus active comparator (MTX/other traditional DMARDs)\nCompared to MTX/other traditional DMARDs, biologic + MTX was associated with a clinically meaningful and statistically significant improvement in ACR50, function measured by HAQ, and RA remission rates in direct comparisons. RR for ACR50 was 4.07 (95% CI 2.76 to 5.99; high-quality evidence); absolute benefit RD 16% (10% to 21%); NNTB = 7 (95% CI 5 to 11). HAQ scores showed an improvement with a mean difference (MD) of 0.29 (95% CI 0.21 to 0.36; high-quality evidence); absolute benefit RD 9.7% improvement (95% CI 7% to 12%); and NNTB = 5 (95% CI 4 to 7). Remission rates showed an improved RR of 20.73 (95% CI 4.13 to 104.16; moderate-quality evidence); absolute benefit RD 10% (95% CI 8% to 13%); and NNTB = 17 (95% CI 4 to 96), among the biologic + MTX group compared to MTX/other DMARDs. There were no studies for radiographic progression. Results were not clinically meaningful or statistically significantly different for withdrawals due to adverse events or serious adverse events, and were inconclusive for cancer.\nTofacitinib monotherapy versus placebo\nThere were no published data.\nTofacitinib + MTX versus active comparator (MTX)\nIn one study, compared to MTX, tofacitinib + MTX was associated with a clinically meaningful and statistically significant improvement in ACR50 (RR 3.24, 95% CI 1.78 to 5.89; absolute benefit RD 19% (95% CI 12% to 26%); NNTB = 6 (95% CI 3 to 14); moderate-quality evidence) and in function measured by HAQ (MD 0.27 improvement, 95% CI 0.14 to 0.39; absolute benefit RD 9% (95% CI 4.7% to 13%); NNTB = 5 (95% CI 4 to 10); high-quality evidence). RA remission rates were not statistically significantly different but the observed difference may be clinically meaningful (RR 15.44, 95% CI 0.93 to 256.1; high-quality evidence); absolute benefit RD 6% (95% CI 3% to 9%); NNTB could not be calculated. There were no studies for radiographic progression. There were no statistically significant or clinically meaningful differences for withdrawals due to adverse events and serious adverse events, and results were inconclusive for cancer.\nAuthors' conclusions\nBiologic (with or without MTX) or tofacitinib (with MTX) use was associated with clinically meaningful and statistically significant benefits (ACR50, HAQ, remission) compared to placebo or an active comparator (MTX/other traditional DMARDs) among people with RA previously unsuccessfully treated with biologics.\nNo studies examined radiographic progression. Results were not clinically meaningful or statistically significant for withdrawals due to adverse events and serious adverse events, and were inconclusive for cancer.\n\nGiven the provided abstract, please respond to the following query: \"Tofacitinib + MTX versus MTX\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "56 people out of 1000 who were on tofacitinib + MTX had their RA symptoms disappear compared to 0 people out of 1000 who were on MTX/DMARD (6% absolute improvement), based on one study."
}
] |
query-laysum
|
8237
|
[
{
"role": "user",
"content": "Abstract: Background\nExchange transfusion and phototherapy have traditionally been used to treat jaundice and avoid the associated neurological complications. Because of the risks and burdens of exchange transfusion, intravenous immunoglobulin (IVIg) has been suggested as an alternative therapy for alloimmune hemolytic disease of the newborn (HDN) to reduce the need for exchange transfusion.\nObjectives\nTo assess the effect and complications of IVIg in newborn infants with alloimmune HDN on the need for and number of exchange transfusions.\nSearch methods\nWe performed electronic searches of CENTRAL, PubMed, Embase (Ovid), Web of Science, CINAHL (EBSCOhost), Academic Search Premier, and the trial registers ClinicalTrials.gov and controlled-trials.com in May 2017. We also searched reference lists of included and excluded trials and relevant reviews for further relevant studies.\nSelection criteria\nWe considered all randomized and quasi-randomized controlled trials of IVIg in the treatment of alloimmune HDN. Trials must have used predefined criteria for the use of IVIg and exchange transfusion therapy to be included.\nData collection and analysis\nWe used the standard methods of Cochrane and its Neonatal Review Group. We assessed studies for inclusion and two review authors independently assessed quality and extracted data. We discussed any differences of opinion to reach consensus. We contacted investigators for additional or missing information. We calculated risk ratio (RR), risk difference (RD) and number needed to treat for an additional beneficial outcome (NNTB) for categorical outcomes. We calculated mean difference (MD) for continuous variables. We used GRADE criteria to assess the risk of bias for major outcomes and to summarize the level of evidence.\nMain results\nNine studies with 658 infants fulfilled the inclusion criteria. Term and preterm infants with Rh or ABO (or both) incompatibility were included. The use of exchange transfusion decreased significantly in the immunoglobulin treated group (typical RR 0.35, 95% CI 0.25 to 0.49; typical RD -0.22, 95% CI -0.27 to -0.16; NNTB 5). The mean number of exchange transfusions per infant was also significantly lower in the immunoglobulin treated group (MD -0.34, 95% CI -0.50 to -0.17). However, sensitivity analysis by risk of bias showed that in the only two studies in which the treatment was masked by use of a placebo and outcome assessment was blinded, the results differed; there was no difference in the need for exchange transfusions (RR 0.98, 95% CI 0.48 to 1.98) or number of exchange transfusions (MD -0.04, 95% CI -0.18 to 0.10). Two studies assessed long-term outcomes and found no cases of kernicterus, deafness or cerebral palsy.\nAuthors' conclusions\nAlthough overall results show a significant reduction in the need for exchange transfusion in infants treated with IVIg, the applicability of the results is limited because of low to very low quality of evidence. Furthermore, the two studies at lowest risk of bias show no benefit of IVIg in reducing the need for and number of exchange transfusions. Based on these results, we have insufficient confidence in the effect estimate for benefit of IVIg to make even a weak recommendation for the use of IVIg for the treatment of alloimmune HDN. 
Further studies are needed before the use of IVIg for the treatment of alloimmune HDN can be recommended, and should include blinding of the intervention by use of a placebo as well as sufficient sample size to assess the potential for serious adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Background\nIn alloimmune HDN, maternal antibodies (circulating proteins that are produced by the immune system in response to the presence of a foreign substance) are produced against fetal blood cells. These antibodies are transferred across the placenta and destroy red blood cells, leading to fetal anemia (deficiency of red cells in the unborn baby). Intrauterine (within the womb) blood transfusion is used to treat severe fetal anemia. After birth, the antibodies persist in the infant and cause hyperbilirubinemia (a raised blood level of an orange-yellow pigment (bilirubin, a waste product of a degrading red blood cell) with the risk of serious brain damage (kernicterus) and anemia. Traditional treatment of hyperbilirubinemia consists of (intensive) phototherapy (light treatment) and exchange transfusion (where the baby's blood is replaced with that of a donor; ET). Because ET is an invasive, high risk procedure, alternative treatments such as intravenous immunoglobulin (IVIg), have been investigated. IVIg is thought to reduce the rate of hemolysis and consequently the need for ETs."
}
] |
query-laysum
|
8238
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic pain is common and can be challenging to manage. Despite increased utilisation of opioids, the safety and efficacy of long-term use of these compounds for chronic non-cancer pain (CNCP) remains controversial. This overview of Cochrane Reviews complements the overview entitled 'High-dose opioids for chronic non-cancer pain: an overview of Cochrane Reviews'.\nObjectives\nTo provide an overview of the occurrence and nature of adverse events associated with any opioid agent (any dose, frequency, or route of administration) used on a medium- or long-term basis for the treatment of CNCP in adults.\nMethods\nWe searched the Cochrane Database of Systematic Reviews (the Cochrane Library) Issue 3, 2017 on 8 March 2017 to identify all Cochrane Reviews of studies of medium- or long-term opioid use (2 weeks or more) for CNCP in adults aged 18 and over. We assessed the quality of the reviews using the AMSTAR criteria (Assessing the Methodological Quality of Systematic Reviews) as adapted for Cochrane Overviews. We assessed the quality of the evidence for the outcomes using the GRADE framework.\nMain results\nWe included a total of 16 reviews in our overview, of which 14 presented unique quantitative data. These 14 Cochrane Reviews investigated 14 different opioid agents that were administered for time periods of two weeks or longer. The longest study was 13 months in duration, with most in the 6- to 16-week range. The quality of the included reviews was high using AMSTAR criteria, with 11 reviews meeting all 10 criteria, and 5 of the reviews meeting 9 out of 10, not scoring a point for either duplicate study selection and data extraction, or searching for articles irrespective of language and publication type. The quality of the evidence for the generic adverse event outcomes according to GRADE ranged from very low to moderate, with risk of bias and imprecision being identified for the following generic adverse event outcomes: any adverse event, any serious adverse event, and withdrawals due to adverse events. A GRADE assessment of the quality of the evidence for specific adverse events led to a downgrading to very low- to moderate-quality evidence due to risk of bias, indirectness, and imprecision.\nWe calculated the equivalent milligrams of morphine per 24 hours for each opioid studied (buprenorphine, codeine, dextropropoxyphene, dihydrocodeine, fentanyl, hydromorphone, levorphanol, methadone, morphine, oxycodone, oxymorphone, tapentadol, tilidine, and tramadol). In the 14 Cochrane Reviews providing unique quantitative data, there were 61 studies with a total of 18,679 randomised participants; 12 of these studies had a cross-over design with two to four arms and a total of 796 participants. Based on the 14 selected Cochrane Reviews, there was a significantly increased risk of experiencing any adverse event with opioids compared to placebo (risk ratio (RR) 1.42, 95% confidence interval (CI) 1.22 to 1.66) as well as with opioids compared to a non-opioid active pharmacological comparator, with a similar risk ratio (RR 1.21, 95% CI 1.10 to 1.33). There was also a significantly increased risk of experiencing a serious adverse event with opioids compared to placebo (RR 2.75, 95% CI 2.06 to 3.67). 
Furthermore, we found significantly increased risk ratios with opioids compared to placebo for a number of specific adverse events: constipation, dizziness, drowsiness, fatigue, hot flushes, increased sweating, nausea, pruritus, and vomiting.\nThere was no data on any of the following prespecified adverse events of interest in any of the included reviews in this overview of Cochrane Reviews: addiction, cognitive dysfunction, depressive symptoms or mood disturbances, hypogonadism or other endocrine dysfunction, respiratory depression, sexual dysfunction, and sleep apnoea or sleep-disordered breathing. We found no data for adverse events analysed by sex or ethnicity.\nAuthors' conclusions\nA number of adverse events, including serious adverse events, are associated with the medium- and long-term use of opioids for CNCP. The absolute event rate for any adverse event with opioids in trials using a placebo as comparison was 78%, with an absolute event rate of 7.5% for any serious adverse event. Based on the adverse events identified, clinically relevant benefit would need to be clearly demonstrated before long-term use could be considered in people with CNCP in clinical practice. A number of adverse events that we would have expected to occur with opioid use were not reported in the included Cochrane Reviews. Going forward, we recommend more rigorous identification and reporting of all adverse events in randomised controlled trials and systematic reviews on opioid therapy. The absence of data for many adverse events represents a serious limitation of the evidence on opioids. We also recommend extending study follow-up, as a latency of onset may exist for some adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In March 2017, we found 16 Cochrane Reviews of 14 different opioid medicines, including codeine, morphine, and oxycodone. These papers included 61 studies with more than 18,000 participants. We found that people who take opioids have a higher risk of having any side effect, such as constipation, dizziness, and nausea, as well as having a serious side effect. We did not find any information in the Cochrane Reviews about many of the known and sometimes serious side effects of opioids, such as addiction, depression, and sleep problems."
}
] |
query-laysum
|
8239
|
[
{
"role": "user",
"content": "Abstract: Background\nAsthma is a chronic respiratory condition that affects over 300 million adults and children worldwide. It is characterised by wheeze, cough, chest tightness, and shortness of breath. Symptoms typically are intermittent and may worsen over a short time, leading to an exacerbation. Asthma exacerbations can be serious, leading to hospitalisation or even death in rare cases. Exacerbations may be treated by increasing an individual's usual medication and providing additional medication, such as oral steroids. Although antibiotics are sometimes included in the treatment regimen, bacterial infections are thought to be responsible for only a minority of exacerbations, and current guidance states that antibiotics should be reserved for cases in which clear signs, symptoms, or laboratory test results are suggestive of bacterial infection.\nObjectives\nTo determine the efficacy and safety of antibiotics in the treatment of asthma exacerbations.\nSearch methods\nWe searched the Cochrane Airways Trials Register, which contains records compiled from multiple electronic and handsearched resources. We also searched trial registries and reference lists of primary studies. We conducted the most recent search in October 2017.\nSelection criteria\nWe included studies comparing antibiotic therapy for asthma exacerbations in adults or children versus placebo or usual care not involving an antibiotic. We allowed studies including any type of antibiotic, any dose, and any duration, providing the aim was to treat the exacerbation. We included parallel studies of any duration conducted in any setting and planned to include cluster trials. We excluded cross-over trials. We included studies reported as full-text articles, those published as abstracts only, and unpublished data.\nData collection and analysis\nAt least two review authors screened the search results for eligible studies. We extracted outcome data, assessed risk of bias in duplicate, and resolved discrepancies by involving another review author. We analysed dichotomous data as odds ratios (ORs) or risk differences (RDs), and continuous data as mean differences (MDs), all with a fixed-effect model. We described skewed data narratively. We graded the results and presented evidence in 'Summary of findings' tables for each comparison. Primary outcomes were intensive care unit/high dependence unit (ICU/HDU) admission, duration of symptoms/exacerbations, and all adverse events. Seconday outcomes were mortality, length of hospital admission, relapse after index presentation, and peak expiratory flow rate (PEFR).\nMain results\nSix studies met our inclusion criteria and included a total of 681 adults and children with exacerbations of asthma. Mean age in the three studies in adults ranged from 36.2 to 41.2 years. The three studies in children applied varied inclusion criteria, ranging from one to 18 years of age. Five studies explicitly excluded participants with obvious signs and symptoms of bacterial infection (i.e. those clearly meeting current guidance to receive antibiotics). Four studies investigated macrolide antibiotics, and two studies investigated penicillin (amoxicillin and ampicillin) antibiotics; both studies using penicillin were conducted over 35 years ago. Five studies compared antibiotics versus placebo, and one was open-label. Study follow-up ranged from one to twelve weeks. 
Trials were of varied methodological quality, and we were able to perform only limited meta-analysis.\nNone of the included trials reported ICU/HDU admission, although one participant in the placebo group of a study including children with status asthmaticus experienced a respiratory arrest and was ventilated. Four studies reported asthma symptoms, but we were able to combine results for only two macrolide studies of 416 participants; the MD in diary card symptom score was -0.34 (95% confidence interval (CI) -0.60 to -0.08), with lower scores (on a 7-point scale) denoting improved symptoms. Two macrolide studies reported symptom-free days. In one study of 255 adults, the authors reported the percentage of symptom-free days at 10 days as 16% in the antibiotic group and 8% in the placebo group. In a further study of 40 children, the study authors reported significantly more symptom-free days at all time points in the antibiotic group compared with the usual care group. The same study reported the duration in days of the index asthma exacerbation, again favouring the antibiotic group. One study of a penicillin, including 69 participants, reported asthma symptoms at hospital discharge; the between-group difference for both studies was reported as non-significant.\nWe combined data for serious adverse events from three studies involving 502 participants, but events were rare; the three trials reported only 10 events: five in the antibiotic group and five in the placebo group. We combined data for all adverse events (AEs) from three studies, but the effect estimate is imprecise (OR 0.99, 95% CI 0.69 to 1.43). No deaths were reported in any of the included studies.\nTwo studies investigating penicillins reported admission duration; neither study reported a between-group difference. In one study (263 participants) of macrolides, two participants in each arm were reported as experiencing a relapse, defined as a further exacerbation, by the six-week time point. We combined PEFR endpoint results at 10 days for two macrolide studies; the result favoured antibiotics over placebo (MD 23.42 L/min, 95% CI 5.23 to 41.60). One study in children reported the maximum peak flow recorded during the follow-up period, favouring the clarithromycin group, but the confidence interval includes no difference (MD 38.80, 95% CI -11.19 to 88.79).\nGrading of outcomes ranged from moderate to very low quality, with quality of outcomes downgraded for suspicion of publication bias, indirectness, imprecision, and poor methodological quality of studies.\nAuthors' conclusions\nWe found limited evidence that antibiotics given at the time of an asthma exacerbation may improve symptoms and PEFR at follow-up compared with standard care or placebo. However, findings were inconsistent across the six heterogeneous studies included; two of the studies were conducted over 30 years ago, and most of the participants included in this review were recruited from emergency departments, limiting the applicability of findings to this population. Therefore we have limited confidence in the results. We found insufficient evidence about several patient-important outcomes (e.g. hospital admission) to form conclusions. We were unable to rule out a difference between groups in terms of all adverse events, but serious adverse events were rare.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked for studies that compared a group of people given any type or dose of antibiotic with a group of people not given an antibiotic for an exacerbation. We included only studies in which it was decided by chance who would get an antibiotic. We included studies in adults and children carried out at any time and anywhere in the world."
}
] |
query-laysum
|
8240
|
[
{
"role": "user",
"content": "Abstract: Background\nHeart transplantation is considered to be the gold standard treatment for selected patients with end-stage heart disease when medical therapy has been unable to halt progression of the underlying pathology. Evidence suggests that aerobic exercise training may be effective in reversing the pathophysiological consequences associated with cardiac denervation and prevent immunosuppression-induced adverse effects in heart transplant recipients.\nObjectives\nTo determine the effectiveness and safety of exercise-based rehabilitation on the mortality, hospital admissions, adverse events, exercise capacity, health-related quality of life, return to work and costs for people after heart transplantation.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE (Ovid), Embase (Ovid), CINAHL (EBSCO) and Web of Science Core Collection (Thomson Reuters) to June 2016. We also searched two clinical trials registers and handsearched the reference lists of included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of parallel group, cross-over or cluster design, which compared exercise-based interventions with (i) no exercise control (ii) a different dose of exercise training (e.g. low- versus high-intensity exercise training); or (iii) an active intervention (i.e. education, psychological intervention). The study population comprised adults aged 18 years or over who had received a heart transplant.\nData collection and analysis\nTwo review authors independently screened all identified references for inclusion based on pre-specified inclusion criteria. Disagreements were resolved by consensus or by involving a third person. Two review authors extracted outcome data from the included trials and assessed their risk of bias. One review author extracted study characteristics from included studies and a second author checked them against the trial report for accuracy.\nMain results\nWe included 10 RCTs that involved a total of 300 participants whose mean age was 54.4 years. Women accounted for fewer than 25% of all study participants. Nine trials which randomised 284 participants to receive exercise-based rehabilitation (151 participants) or no exercise (133 participants) were included in the main analysis. One cross-over RCT compared high-intensity interval training with continued moderate-intensity training in 16 participants. We reported findings for all trials at their longest follow-up (median 12 weeks).\nExercise-based cardiac rehabilitation increased exercise capacity (VO2peak) compared with no exercise control (MD 2.49 mL/kg/min, 95% CI 1.63 to 3.36; N = 284; studies = 9; moderate quality evidence). There was evidence from one trial that high-intensity interval exercise training was more effective in improving exercise capacity than continuous moderate-intensity exercise (MD 2.30 mL/kg/min, 95% CI 0.59 to 4.01; N = 16; 1 study). Four studies reported health-related quality of life (HRQoL) measured using SF-36, Profile of Quality of Life in the Chronically Ill (PLC) and the World Health Organization Quality Of Life (WHOQoL) - BREF. Due to the variation in HRQoL outcomes and methods of reporting we were unable to meta-analyse results across studies, but there was no evidence of a difference between exercise-based cardiac rehabilitation and control in 18 of 21 HRQoL domains reported, or between high and moderate intensity exercise in any of the 10 HRQoL domains reported. 
One adverse event was reported by one study.\nExercise-based cardiac rehabilitation improves exercise capacity, but exercise was found to have no impact on health-related quality of life in the short term (median 12 weeks' follow-up) in heart transplant recipients whose health is stable.\nThere was no evidence of statistical heterogeneity across trials for exercise capacity and no evidence of small study bias. The overall risk of bias in included studies was judged as low or unclear; more than 50% of included studies were assessed at unclear risk of bias with respect to allocation concealment, blinding of outcome assessors and declaration of conflicts of interest. Evidence quality was assessed as moderate according to GRADE criteria.\nAuthors' conclusions\nWe found moderate quality evidence suggesting that exercise-based cardiac rehabilitation improves exercise capacity, and that exercise has no impact on health-related quality of life in the short term (median 12 weeks' follow-up), in heart transplant recipients. Cardiac rehabilitation appears to be safe in this population, but long-term follow-up data are incomplete and further good quality and adequately-powered trials are needed to demonstrate the longer-term benefits of exercise on safety and impact on both clinical and patient-related outcomes, such as health-related quality of life, and healthcare costs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 10 trials that studied 300 people who were heart transplant recipients. Nine studies compared exercise with no exercise; one study compared high-intensity interval training with continuous moderate-intensity exercise.\nWe found that exercise-based cardiac rehabilitation led to an increase in the exercise capacity of heart transplant recipients compared to not undertaking exercise. There was evidence of better exercise capacity following high-intensity interval training compared to continuous moderate-intensity exercise. Four studies reported health-related quality of life, but there was no evidence of differences between exercise training and no exercise training in most (18/21) aspects reported, or between high- and moderate-intensity exercise.\nOne adverse event was reported in one study.\nRisk of bias in the included studies was assessed as low or unclear; lack of reporting made assessment for more than half of included studies challenging."
}
] |
query-laysum
|
8241
|
[
{
"role": "user",
"content": "Abstract: Background\nMissed hospital outpatient appointments is a commonly reported problem in healthcare services around the world; for example, they cost the National Health Service (NHS) in the UK millions of pounds every year and can cause operation and scheduling difficulties worldwide. In 2002, the World Health Organization (WHO) published a report highlighting the need for a model of care that more readily meets the needs of people with chronic conditions. Patient-initiated appointment systems may be able to meet this need at the same time as improving the efficiency of hospital appointments.\nObjectives\nTo assess the effects of patient-initiated appointment systems compared with consultant-led appointment systems for people with chronic or recurrent conditions managed in secondary care.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and six other databases. We contacted authors of identified studies and conducted backwards and forwards citation searching. We searched for current/ongoing research in two trial registers. Searches were run on 13 March 2019.\nSelection criteria\nWe included randomised trials, published and unpublished in any language that compared the use of patient-initiated appointment systems to consultant-led appointment systems for adults with chronic or recurrent conditions managed in secondary care if they reported one or more of the following outcomes: physical measures of health status or disease activity (including harms), quality of life, service utilisation or cost, adverse effects, patient or clinician satisfaction, or failures of the 'system'.\nData collection and analysis\nTwo review authors independently screened all references at title/abstract stage and full-text stage using prespecified inclusion criteria. We resolved disagreements though discussion. Two review authors independently completed data extraction for all included studies. We discussed and resolved discrepancies with a third review author. Where needed, we contacted authors of included papers to provide more information. Two review authors independently assessed the risk of bias using the Cochrane Effective Practice and Organisation of Care 'Risk of bias' tool, resolving any discrepancies with a third review author. Two review authors independently assessed the certainty of the evidence using GRADE.\nMain results\nThe 17 included randomised trials (3854 participants; mean age 41 to 76 years; follow-up 12 to 72 months) covered six broad health conditions: cancer, rheumatoid arthritis, asthma, chronic obstructive pulmonary disease, psoriasis and inflammatory bowel disease. The certainty of the evidence using GRADE ratings was mainly low to very low. The results suggest that patient-initiated clinics may make little or no difference to anxiety (odds ratio (OR) 0.87, 95% confidence interval (CI) 0.68 to 1.12; 5 studies, 1019 participants; low-certainty evidence) or depression (OR 0.79 95% CI 0.51 to 1.23; 6 studies, 1835 participants; low-certainty evidence) compared to the consultant-led appointment system. The results also suggest that patient-initiated clinics may make little or no difference to quality of life (standardised mean difference (SMD) 0.12, 95% CI 0.00 to 0.25; 7 studies, 1486 participants; low-certainty evidence) compared to the consultant-led appointment system. 
Results for service utilisation (contacts) suggest there may be little or no difference in service utilisation in terms of contacts between the patient-initiated and consultant-led appointment groups; however, the effect is not certain as the rate ratio ranged from 0.68 to 3.83 across the studies (median rate ratio 1.11, interquartile range (IQR) 0.93 to 1.37; 15 studies, 3348 participants; low-certainty evidence). It is uncertain whether service utilisation (costs) is reduced in the patient-initiated compared to the consultant-led appointment groups (8 studies, 2235 participants; very low-certainty evidence). The results suggest that there may be little or no reduction in adverse events, such as relapses in some conditions (inflammatory bowel disease and cancer), in the patient-initiated appointment group in comparison with the consultant-led appointment group (MD –0.20, 95% CI –0.54 to 0.14; 3 studies, 888 participants; low-certainty evidence). The results are unclear about any differences the intervention may make to patient satisfaction (SMD 0.05, 95% CI –0.41 to 0.52; 2 studies, 375 participants) because the certainty of the evidence is low, as each study used different questions to collect their data at different time points and across different health conditions. Some areas of risk of bias across all the included studies were consistently high (i.e. blinding of participants and personnel and blinding of outcome assessment); other areas were largely at low risk of bias or were affected by poor reporting, making the assessment unclear.\nAuthors' conclusions\nPatient-initiated appointment systems may have little or no effect on patient anxiety, depression and quality of life compared to consultant-led appointment systems. Other aspects of disease status and experience also appear to show little or no difference between patient-initiated and consultant-led appointment systems. Patient-initiated appointment systems may have little or no effect on service utilisation in terms of service contact and there is uncertainty about costs compared to consultant-led appointment systems. Patient-initiated appointment systems may have little or no effect on adverse events such as relapse or patient satisfaction compared to consultant-led appointment systems.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for studies that had been published up to March 2019."
}
] |
query-laysum
|
8242
|
[
{
"role": "user",
"content": "Abstract: Background\nSleep positioning systems can be prescribed for children with cerebral palsy to help reduce or prevent hip migration, provide comfort to ease pain and/or improve sleep. As sleep disturbance is common in children with developmental disabilities, with impact on their carers' sleep, and as sleep positioning systems can be expensive, guidance is needed to support decisions as to their use.\nObjectives\nTo determine whether commercially-available sleep positioning systems, compared with usual care, reduce or prevent hip migration in children with cerebral palsy. Any negative effect of sleep positioning systems on hip migration will be considered within this objective.\nSecondary objectives were to determine the effect of sleep positioning systems on: (1) number or frequency of hip problems; (2) sleep patterns and quality; (3) quality of life of the child and family; (4) pain; and (5) physical functioning. We also sought to identify any adverse effects from using sleep positioning systems.\nSearch methods\nIn December 2014, we searched CENTRAL, Ovid MEDLINE, Embase, and 13 other databases. We also searched two trials registers. We applied no restrictions on date of publication, language, publication status or study design. We checked references and contacted manufacturers and authors for potentially relevant literature, and searched the internet using Google.\nSelection criteria\nWe included all randomised controlled trials (RCTs) evaluating whole body sleep positioning systems for children and adolescents (up to 18 years of age) with cerebral palsy.\nData collection and analysis\nTwo review authors independently screened reports retrieved from the search against pre-determined inclusion criteria and assessed the quality of eligible studies.\nMembers of the public (parent carers of children with neurodisability) contributed to this review by suggesting the topic, refining the research objectives, interpreting the findings, and reviewing the plain language summary.\nMain results\nWe did not identify any randomised controlled trials that evaluated the effectiveness of sleep positioning systems on hip migration.\nWe did find two randomised cross-over trials that met the inclusion criteria in respect of secondary objectives relating to sleep quality and pain. Neither study reported any important difference between sleeping in sleep positioning systems and not for sleep patterns or sleep quality (two studies, 21 children, very low quality evidence) and pain (one study, 11 children, very low quality evidence). These were small studies with established users of sleep positioning systems and were judged to have high risk of bias.\nWe found no eligible trials that explored the other secondary objectives (number or frequency of hip problems, quality of life of the child and family, physical functioning, and adverse effects).\nAuthors' conclusions\nWe found no randomised trials that evaluated the effectiveness of sleep positioning systems to reduce or prevent hip migration in children with cerebral palsy. 
Nor did we find any randomised trials that evaluated the effect of sleep positioning systems on the number or frequency of hip problems, the quality of life of the child and family, or physical functioning.\nLimited data from two randomised trials, which evaluated the effectiveness of sleep positioning systems on sleep quality and pain for children with cerebral palsy, showed no significant differences in these aspects of health when children were using and not using a sleep positioning system.\nIn order to inform clinical decision-making and the prescription of sleep positioning systems, more rigorous research is needed to determine effectiveness, cost-effectiveness, and the likelihood of adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "No randomised controlled trials were found that evaluated the effectiveness of sleep positioning systems to reduce or prevent hip migration.\nTwo small randomised controlled trials compared children's quality of sleep when they were using and not using their sleep positioning system. One of these studies also examined pain when sleeping in and out of a sleep positioning system. These were cross-over trials (see Cochrane Glossary). The children in these studies spent a few nights using their sleep positioning system and then a few nights not using it, or the other way round. The order in which they either used or did not use the equipment was randomised.\nTwenty-one children with cerebral palsy, aged between 5 and 16 years, who were used to sleeping in sleep positioning systems took part in the studies. One of the studies took place in a sleep laboratory and the other study took place in the children's homes. Neither study reported any differences in quality of sleep or pain whether using or not using their sleep positioning system. These results need to be interpreted cautiously due to the small numbers of children involved and the fact that the children who participated were already accustomed users of the equipment. There were also various weaknesses in the way the research was designed and reported."
}
] |
query-laysum
|
8243
|
[
{
"role": "user",
"content": "Abstract: Background\nSeptal surgery is a well-established procedure used to treat nasal obstruction due to deviation of the nasal septum, which is carried out under local or general anaesthesia. Local anaesthesia is used for postoperative pain control, but its effectiveness and safety are unclear.\nObjectives\nTo assess the effectiveness of perioperative local anaesthesia for reducing pain in septal surgery and to evaluate the risk of associated complications.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trial Register; Central Register of Controlled Trials; Ovid MEDLINE; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 9 January 2018.\nSelection criteria\nRandomised controlled trials and cluster-randomised controlled trials involving adults or children (or both) who underwent septal surgery. We included studies comparing local anaesthesia versus no treatment/placebo. We also included studies comparing different types of local anaesthesia to each other (i.e. local injection, the addition of an anaesthetic agent to nasal packing, where used, and sphenopalatine ganglion block).\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. The primary outcome was postoperative pain intensity at 12, 24 and 48 hours measured by visual analogue scale (VAS) or another pain outcome tool including numerical or verbal rating scales. Secondary outcomes were requirement for additional analgesia, duration of hospitalisation and adverse effects (postoperative bleeding and postoperative vomiting). We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nWe included seven randomised controlled trials involving 493 participants. In all studies the participants were adults undergoing septoplasty. These studies were heterogeneous and the quality of the body of evidence ranged from low to very low. Few of the studies provided reliable data for the primary outcome in this review.\nLocal anaesthetic injection versus no treatment/placebo\nTwo studies (142 participants) compared local anaesthetic injection versus placebo but these studies did not report postoperative pain at 12, 24 or 48 hours. It is unclear whether local anaesthetic injection changed the risk of vomiting (odds ratio (OR) 3.10, 95% confidence interval (CI) 0.12 to 79.23; 60 participants; one study) (low-quality evidence). Neither study reported the requirement for additional analgesia, duration of hospitalisation or uncontrollable postoperative bleeding.\nLocal anaesthetic application via nasal packing versus no packing/packing with placebo\nFour studies (301 participants) used nasal packing postoperatively and compared the addition of local anaesthetic to the pack versus packing with a placebo added. Compared with packing with placebo, the addition of local anaesthetic to nasal packing reduced the pain score on a VAS (ranging from 0 to 100) at 12 hours (mean difference (MD) -16.95, 95% CI -22.27 to -11.62; 151 participants; two studies; I2 = 49%) (low-quality evidence) and at 24 hours postoperatively (MD -7.53, 95% CI -9.76 to -5.29; 268 participants; four studies; I2 = 83%) (very low-quality evidence). These studies did not report postoperative pain at 48 hours. 
The addition of local anaesthetic to nasal packing decreased the requirement for additional analgesia (OR 0.15, 95% CI 0.07 to 0.34; 151 participants; two studies; I2 = 15%) (moderate-quality evidence). No studies reported duration of hospitalisation, postoperative vomiting or uncontrollable postoperative bleeding.\nNo studies compared the addition of local anaesthetic to nasal packing versus no packing.\nSphenopalatine ganglion block versus no treatment/placebo\nOne study (50 participants) compared sphenopalatine ganglion block versus no treatment but this study did not report postoperative pain, requirement for additional analgesia, duration of hospitalisation, vomiting or uncontrollable postoperative bleeding.\nAuthors' conclusions\nThe addition of local anaesthesia to nasal packs (if these are being used) following septal surgery may reduce postoperative pain within the first 12 hours, compared to nasal packing with a placebo added. The effect is uncertain at 24 hours because the quality of the evidence is very low. Evidence was lacking for other outcomes, including adverse effects. There is a lack of evidence about the effects of local anaesthesia added to nasal packing compared to no nasal packing. There is also a lack of evidence about the effects of local anaesthesia given by injection and the effects of sphenopalatine ganglion block.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to January 2018. We included seven studies, recruiting 493 patients. The participants were adults having septoplasty in all of the included studies and the percentage of females ranged from 20.7% to 58.3%. Two studies, recruiting 142 participants, assessed local anaesthetic injection. Four studies, recruiting 301 participants, used nasal packing postoperatively and assessed the addition of a local anaesthetic to the nasal pack. One study, recruiting 50 participants, assessed a regional nerve block. No studies were funded by the anaesthetic drug manufacturer."
}
] |
query-laysum
|
8244
|
[
{
"role": "user",
"content": "Abstract: Background\nMelanoma accounts for a small proportion of all skin cancer cases but is responsible for most skin cancer-related deaths. Early detection and treatment can improve survival. Smartphone applications are readily accessible and potentially offer an instant risk assessment of the likelihood of malignancy so that the right people seek further medical attention from a clinician for more detailed assessment of the lesion. There is, however, a risk that melanomas will be missed and treatment delayed if the application reassures the user that their lesion is low risk.\nObjectives\nTo assess the diagnostic accuracy of smartphone applications to rule out cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in adults with concerns about suspicious skin lesions.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception to August 2016: Cochrane Central Register of Controlled Trials; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design evaluating smartphone applications intended for use by individuals in a community setting who have lesions that might be suspicious for melanoma or atypical intraepidermal melanocytic variants versus a reference standard of histological confirmation or clinical follow-up and expert opinion.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). Due to scarcity of data and poor quality of studies, we did not perform a meta-analysis for this review. For illustrative purposes, we plotted estimates of sensitivity and specificity on coupled forest plots for each application under consideration.\nMain results\nThis review reports on two cohorts of lesions published in two studies. Both studies were at high risk of bias from selective participant recruitment and high rates of non-evaluable images. Concerns about applicability of findings were high due to inclusion only of lesions already selected for excision in a dermatology clinic setting, and image acquisition by clinicians rather than by smartphone app users.\nWe report data for five mobile phone applications and 332 suspicious skin lesions with 86 melanomas across the two studies. Across the four artificial intelligence-based applications that classified lesion images (photographs) as melanomas (one application) or as high risk or 'problematic' lesions (three applications) using a pre-programmed algorithm, sensitivities ranged from 7% (95% CI 2% to 16%) to 73% (95% CI 52% to 88%) and specificities from 37% (95% CI 29% to 46%) to 94% (95% CI 87% to 97%). The single application using store-and-forward review of lesion images by a dermatologist had a sensitivity of 98% (95% CI 90% to 100%) and specificity of 30% (95% CI 22% to 40%).\nThe number of test failures (lesion images analysed by the applications but classed as 'unevaluable' and excluded by the study authors) ranged from 3 to 31 (or 2% to 18% of lesions analysed). The store-and-forward application had one of the highest rates of test failure (15%). 
At least one melanoma was classed as unevaluable in three of the four application evaluations.\nAuthors' conclusions\nSmartphone applications using artificial intelligence-based analysis have not yet demonstrated sufficient promise in terms of accuracy, and they are associated with a high likelihood of missing melanomas. Applications based on store-and-forward images could have a potential role in the timely presentation of people with potentially malignant lesions by facilitating active self-management health practices and early engagement of those with suspicious skin lesions; however, they may incur a significant increase in resource use and workload. Given the paucity of evidence and low methodological quality of existing studies, it is not possible to draw any implications for practice. Nevertheless, this is a rapidly advancing field, and new and better applications with robust reporting of studies could change these conclusions substantially.\n\nGiven the provided abstract, please respond to the following query: \"Why is improving the diagnosis of malignant melanoma skin cancer important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Melanoma is one of the most dangerous forms of skin cancer. Not recognising a melanoma (a false negative test result) could delay seeking appropriate advice and surgery to remove it. This increases the risk of the cancer spreading to other organs in the body and possibly causing death. Diagnosing a skin lesion as a melanoma when it is not present (a false positive result) may cause anxiety and lead to unnecessary surgery and further investigations."
}
] |
query-laysum
|
8245
|
[
{
"role": "user",
"content": "Abstract: Background\nThe high levels of healthcare worker shortage is recognised as a severe impediment to increasing patients' access to antiretroviral therapy. This is particularly of concern where the burden of disease is greatest and the access to trained doctors is limited.This review aims to better inform HIV care programmes that are currently underway, and those planned, by assessing if task-shifting care from doctors to non-doctors provides both high quality and safe care for all patients requiring antiretroviral treatment.\nObjectives\nTo evaluate the quality of initiation and maintenance of HIV/AIDS care in models that task shift care from doctors to non-doctors.\nSearch methods\nWe conducted a comprehensive search to identify all relevant studies regardless of language or publication status (published, unpublished, in press, and in progress) from 1 January 1996 to 28 March 2014, with major HIV/AIDS conferences searched 23 May 2014. We had also contacted relevant organizations and researchers. Key words included MeSH terms and free-text terms relevant to 'task shifting', 'skill mix', 'integration of tasks', 'service delivery' and 'health services accessibility'.\nSelection criteria\nWe included controlled trials (randomised or non-randomised), controlled-before and after studies, and cohort studies (prospective or retrospective) comparing doctor-led antiretroviral therapy delivery to delivery that included another cadre of health worker other than a doctor, for initiating treatment, continuing treatment, or both, in HIV infected patients.\nData collection and analysis\nTwo authors independently screened titles, abstracts and descriptor terms of the results of the electronic search and applied our eligibility criteria using a standardized eligibility form to full texts of potentially eligible or uncertain abstracts. Two reviewers independently extracted data on standardized data extraction forms. Where possible, data were pooled using random effects meta-analysis. We assessed evidence quality with GRADE methodology.\nMain results\nTen studies met our inclusion criteria, all of which were conducted in Africa. Of these four were randomised controlled trials while the remaining six were cohort studies.\nFrom the trial data, when nurses initiated and provided follow-up HIV therapy, there was high quality evidence of no difference in death at one year, unadjusted risk ratio was 0.96 (95% CI 0.82 to 1.12), one trial, cluster adjusted n = 2770. There was moderate quality evidence of lower rates of losses to follow-up at one year, relative risk of 0.73 (95% CI 0.55 to 0.97). From the cohort data, there was low quality evidence that there may be an increased risk of death in the task shifting group, relative risk 1.23 (95% CI 1.14 to 1.33, two cohorts, n = 39 160) and very low quality data reporting no difference in patients lost to follow-up between groups, relative risk 0.30 (95% CI 0.05 to 1.94).\nFrom the trial data, when doctors initiated therapy and nurses provided follow-up, there was moderate quality evidence that there is probably no difference in death compared with doctor-led care at one year, relative risk of 0.89 (95% CI 0.59 to 1.32), two trials, cluster adjusted n = 4332. There was moderate quality evidence that there is probably no difference in the numbers of patients lost to follow-up at one year, relative risk 1.27 (95% CI 0.92 to 1.77), P = 0.15. 
From the cohort data, there is very low quality evidence that death at one year may be lower in the task shifting group, relative risk 0.19 (95% CI 0.05 to 0.78), one cohort, n = 2772, and very low quality evidence that loss to follow-up was reduced, relative risk 0.34 (95% CI 0.18 to 0.66).\nFrom the trial data, for maintenance therapy delivered in the community, there was moderate quality evidence that there is probably no difference in mortality when doctors deliver care in the hospital or specially trained field workers provide home-based maintenance care and antiretroviral therapy at one year, relative risk 1.0 (95% CI 0.62 to 1.62), 1 trial, cluster adjusted n = 559. There is moderate quality evidence from this trial that losses to follow-up are probably no different at one year, relative risk 0.52 (95% CI 0.12 to 2.3), P = 0.39. The cohort studies did not report on one year follow-up for these outcomes.\nAcross the studies that reported on virological and immunological outcomes, there was no clear evidence of a difference whether a doctor, nurse or clinical officer delivered therapy. Three studies report on costs to patients, indicating a reduction in travel costs to treatment facilities where task shifting was occurring closer to patients' homes. There is conflicting evidence regarding the relative cost to the health system, as implementation of the strategy may increase costs. The two studies reporting patient and staff perceptions of the quality of care report good acceptability of the service by patients, and general acceptance by doctors of the shifting of roles. One trial reported on the time to initiation of antiretroviral therapy, finding no clear evidence of a difference between groups. The same trial reports on new diagnoses of tuberculosis, with results favouring nurse initiation of HIV care for increasing the number of tuberculosis diagnoses made.\nAuthors' conclusions\nOur review found moderate quality evidence that shifting responsibility from doctors to adequately trained and supported nurses or community health workers for managing HIV patients probably does not decrease the quality of care and, in the case of nurse-initiated care, may decrease the numbers of patients lost to follow-up.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies up to March 2014. We found 10 studies, including four randomised controlled trials and 6 cohort studies collecting data from HIV care programmes. All the studies were conducted in Africa in adults who were followed up for up to one year.\nWe describe three types of care:\n- Doctor versus nurse or clinical officer care for initiation and maintenance of antiretrovirals\n- Doctor versus nurse or clinical officer care for maintenance of antiretroviral therapy\n- Doctor versus community health workers for maintenance of antiretroviral therapy."
}
] |
query-laysum
|
8246
|
[
{
"role": "user",
"content": "Abstract: Background\nImplant overdentures are one of the most common treatment options used to rehabilitate edentulous patients. Attachment systems are used to anchor the overdentures to implants. The plethora of attachment systems available dictates a need for clinicians to understand their prosthodontic and patient-related outcomes.\nObjectives\nTo compare different attachment systems for maxillary and mandibular implant overdentures by assessing prosthodontic success, prosthodontic maintenance, patient preference, patient satisfaction/quality of life and costs.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 24 January 2018); Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 12) in the Cochrane Library (searched 24 January 2018); MEDLINE Ovid (1946 to 24 January 2018); and Embase Ovid (1980 to 24 January 2018). The US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials on 24 January 2018. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nAll randomised controlled trials (RCTs), including cross-over trials on maxillary or mandibular implant overdentures with different attachment systems with at least 1 year follow-up.\nData collection and analysis\nFour review authors extracted data independently and assessed risk of bias for each included trial. Several corresponding authors were subsequently contacted to obtain missing information. Fixed-effect meta-analysis was used to combine the outcomes with risk ratios (RR) for dichotomous outcomes and mean differences (MD) for continuous outcomes, with 95% confidence intervals (95% CI). We used the GRADE approach to assess the quality of evidence and create 'Summary of findings' tables.\nMain results\nWe identified six RCTs with a total of 294 mandibular overdentures (including one cross-over trial). No trials on maxillary overdentures were eligible. Due to the poor reporting of the outcomes across the included trials, only limited analyses between mandibular overdenture attachment systems were possible.\nComparing ball and bar attachments, upon pooling the data regarding short-term prosthodontic success, we identified substantial heterogeneity (I2 = 97%) with inconsistency in the direction of effect, which was unexplained by clinical or methodological differences between the studies, and accordingly we did not perform meta-analyses for this outcome. Short-term re-treatment (repair of attachment system) was higher with ball attachments (RR 3.11, 95% CI 1.68 to 5.75; 130 participants; 2 studies; very low-quality evidence), and there was no difference between both attachment systems in short-term re-treatment (replacement of attachment system) (RR 1.18, 95% CI 0.38 to 3.71; 130 participants; 2 studies; very low-quality evidence). It is uncertain whether there is a difference in short-term prosthodontic success when ball attachments are compared with bar attachments.\nComparing ball and magnet attachments, there was no difference between them in medium-term prosthodontic success (RR 0.84, 95% CI 0.64 to 1.10; 69 participants; 1 study; very low-quality evidence), or in medium-term re-treatment (repair of attachment system) (RR 1.75, 95% CI 0.65 to 4.72; 69 participants; 1 study; very low-quality evidence). 
However, after 5 years, prosthodontic maintenance costs were higher when magnet attachments were used (MD -247.37 EUR, 95% CI -346.32 to -148.42; 69 participants; 1 study; very low-quality evidence). It is uncertain whether there is a difference in medium-term prosthodontic success when ball attachments are compared with magnet attachments.\nOne trial provided data for ball versus telescopic attachments and reported no difference in prosthodontic maintenance between the two systems in short-term patrix replacement (RR 6.00, 95% CI 0.86 to 41.96; 22 participants; 1 study; very low-quality evidence), matrix activation (RR 11.00, 95% CI 0.68 to 177.72; 22 participants; 1 study; very low-quality evidence), matrix replacement (RR 1.75, 95% CI 0.71 to 4.31; 22 participants; 1 study; very low-quality evidence), or in relining of the implant overdenture (RR 2.33, 95% CI 0.81 to 6.76; 22 participants; 1 study; very low-quality evidence). It is uncertain whether there is a difference in short-term prosthodontic maintenance when ball attachments are compared with telescopic attachments.\nIn the only cross-over trial included, patient preference between different attachment systems was assessed after only 3 months and not for the entire trial period of 10 years.\nAuthors' conclusions\nFor mandibular overdentures, there is insufficient evidence to determine the relative effectiveness of different attachment systems on prosthodontic success, prosthodontic maintenance, patient satisfaction, patient preference or costs. In the short term, there is some evidence, but it is insufficient to show a difference, and for several outcomes no evidence was reported. It was not possible to determine any preferred attachment system for mandibular overdentures.\nFor maxillary overdentures, there is no evidence (with no trials identified) to determine the relative effectiveness of different attachment systems on prosthodontic success, prosthodontic maintenance, patient satisfaction, patient preference or costs.\nFurther RCTs on edentulous cohorts must pay attention to trial design, specifically using the same number of implants of the same implant system, but with different attachment systems clearly identified in control and test groups. Trials should also determine the longevity of different attachment systems and patient preferences. Trials on the current array of computer-aided design/computer-assisted manufacturing (CAD/CAM) bar attachment systems are encouraged.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We judged the quality of the evidence to be very low. In all the included trials there were relatively few participants and few events, and there were serious limitations in the trial designs with data missing or not all outcomes reported."
}
] |
query-laysum
|
8247
|
[
{
"role": "user",
"content": "Abstract: Background\nPediatric cervical spine injury (CSI) after blunt trauma is rare. Nonetheless, missing these injuries can have severe consequences. To prevent the overuse of radiographic imaging, two clinical decision tools have been developed: The National Emergency X-Radiography Utilization Study (NEXUS) criteria and the Canadian C-spine Rule (CCR). Both tools are proven to be accurate in deciding whether or not diagnostic imaging is needed in adults presenting for blunt trauma screening at the emergency department. However, little information is known about the accuracy of these triage tools in a pediatric population.\nObjectives\nTo determine the diagnostic accuracy of the NEXUS criteria and the Canadian C-spine Rule in a pediatric population evaluated for CSI following blunt trauma.\nSearch methods\nWe searched the following databases to 24 February 2015: CENTRAL, MEDLINE, MEDLINE Non-Indexed and In-Process Citations, PubMed, Embase, Science Citation Index, ProQuest Dissertations & Theses Database, OpenGrey, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, the Health Technology Assessment, and the Aggressive Research Intelligence Facility.\nSelection criteria\nWe included all retrospective and prospective studies involving children following blunt trauma that evaluated the accuracy of the NEXUS criteria, the Canadian C-spine Rule, or both. Plain radiography, computed tomography (CT) or magnetic resonance imaging (MRI) of the cervical spine, and follow-up were considered as adequate reference standards.\nData collection and analysis\nTwo review authors independently assessed the quality of included studies using the QUADAS-2 checklists. They extracted data on study design, patient characteristics, inclusion and exclusion criteria, clinical parameters, target condition, reference standard, and the diagnostic two-by-two table. We calculated and plotted sensitivity, specificity and negative predictive value in ROC space, and constructed forest plots for visual examination of variation in test accuracy.\nMain results\nThree cohort studies were eligible for analysis, including 3380 patients ; 96 children were diagnosed with CSI. One study evaluated the accuracy of the Canadian C-spine Rule and the NEXUS criteria, and two studies evaluated the accuracy of the NEXUS criteria. The studies were of moderate quality. Due to the small number of included studies and the diverse outcomes of those studies, we could not describe a pooled estimate for the diagnostic test accuracy. The sensitivity of the NEXUS criteria of the individual studies was 0.57 (95% confidence interval (CI) 0.18 to 0.90), 0.98 (95% CI 0.91 to 1.00) and 1.00 (95% CI 0.88 to 1.00). The specificity of the NEXUS criteria was 0.35 (95% CI 0.25 to 0.45), 0.54 (95% CI 0.45 to 0.62) and 0.2 (95% CI 0.18 to 0.21). For the Canadian C-spine Rule the sensitivity was 0.86 (95% CI 0.42 to 1.00) and specificity was 0.15 (95% CI 0.08 to 0.23). Since the quantity of the data was small we were not able to investigate heterogeneity.\nAuthors' conclusions\nThere are currently few studies assessing the diagnostic test accuracy of the NEXUS criteria and CCR in children. At the moment, there is not enough evidence to determine the accuracy of the Canadian C-spine Rule to detect CSI in pediatric trauma patients following blunt trauma. 
The confidence intervals for the sensitivity of the NEXUS criteria in the individual studies showed a wide range, with lower limits varying from 0.18 to 0.91, and a total of four false negative test results, meaning that if physicians use the NEXUS criteria in children, there is a chance of missing CSI. Since missing CSI could have severe consequences with the risk of significant morbidity, we consider that the NEXUS criteria are at best a guide to clinical assessment, with current evidence not supporting strict or protocolized adoption of the tool into pediatric trauma care. Moreover, we have to keep in mind that the sensitivity differs among the studies, and the individual confidence intervals of these studies show a wide range. Our main conclusion is therefore that additional well-designed studies with large sample sizes are required to better evaluate the accuracy of the NEXUS criteria or the Canadian C-spine Rule, or both, in order to determine whether they are appropriate triage tools for the clearance of the cervical spine in children following blunt trauma.\n\nGiven the provided abstract, please respond to the following query: \"Index tests\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The NEXUS criteria and Canadian C-spine Rules are tools that doctors use to decide whether adults are at risk for cervical spine injury and whether they need to use radiographic imaging. These tools are already being used in children, but little information is known about how accurate they are in children. The goal of this review was therefore to evaluate whether these tools can also be used safely and effectively in children."
}
] |
query-laysum
|
8248
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatic encephalopathy is a common and devastating neuropsychiatric complication of acute liver failure or chronic liver disease. Ammonia content in the blood seems to play a role in the development of hepatic encephalopathy. Treatment for hepatic encephalopathy is complex. Acetyl-L-carnitine is a substance that may reduce ammonia toxicity. This review assessed the benefits and harms of acetyl-L-carnitine for patients with hepatic encephalopathy.\nObjectives\nTo assess the benefits and harms of acetyl-L-carnitine for patients with hepatic encephalopathy.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE Ovid, Embase Ovid, LILACS, and Science Citation Index Expanded for randomised clinical trials. We sought additional randomised clinical trials from the World Health Organization Clinical Trials Search Portal and ClinicalTrials.gov. We performed all electronic searches until 10 September 2018. We looked through the reference lists of retrieved publications and review articles, and we searched the FDA and EMA websites.\nSelection criteria\nWe searched for randomised clinical trials in any setting, recruiting people with hepatic encephalopathy. Trials were eligible for inclusion if they compared acetyl-L-carnitine plus standard care (e.g. antibiotics, lactulose) versus placebo or no acetyl-L-carnitine plus standard care. We are well aware that by selecting randomised clinical trials, we placed greater focus on potential benefits than on potential harms.\nData collection and analysis\nWe selected randomised clinical trials, assessed risk of bias in eight domains, and extracted data in a duplicate and independent fashion. We estimated risk ratios (RRs) for dichotomous outcomes and mean differences (MDs) for continuous outcomes. We measured statistical heterogeneity using I² and D² statistics. We subjected our analyses to fixed-effect and random-effects model meta-analyses. We assessed bias risk domains to control systematic errors. We assessed overall quality of the data for each individual outcome by using the GRADE approach.\nMain results\nWe identified five randomised clinical trials involving 398 participants. All trials included only participants with cirrhosis as the underlying cause of hepatic encephalopathy. Trials included participants with covert or overt hepatic encephalopathy. All trials were conducted in Italy by a single team and assessed acetyl-L-carnitine compared with placebo. Oral intervention was the most frequent route of administration. All trials were at high risk of bias and were underpowered. None of the trials were sponsored by the pharmaceutical industry.\nNone of the identified trials reported information on all-cause mortality, serious adverse events, or days of hospitalisation. Only one trial assessed quality of life using the Short Form (SF)-36 scale (67 participants; very low-quality evidence). The effects of acetyl-L-carnitine compared with placebo on general health at 90 days are uncertain (MD -6.20 points, 95% confidence interval (CI) -9.51 to -2.89). Results for additional domains of the SF-36 are also uncertain. One trial assessed fatigue using the Wessely and Powell test (121 participants; very low-quality evidence). 
The effects are uncertain in people with moderate-grade hepatic encephalopathy (mental fatigue: MD 0.40 points, 95% CI -0.21 to 1.01; physical fatigue: MD -0.20 points, 95% CI -0.92 to 0.52) and mild-grade hepatic encephalopathy (mental fatigue: MD -0.80 points, 95% CI -1.48 to -0.12; physical fatigue: MD 0.20 points, 95% CI -0.72 to 1.12). Meta-analysis showed a reduction in blood ammonium levels favouring acetyl-L-carnitine versus placebo (MD -13.06 mg/dL, 95% CI -17.24 to -8.99; 387 participants; 5 trials; very low-quality evidence). It is unclear whether acetyl-L-carnitine versus placebo increases the risk of non-serious adverse events (8/126 (6.34%) vs 3/120 (2.50%); RR 2.51, 95% CI 0.68 to 9.22; 2 trials; very low-quality evidence). Overall, adverse event data were poorly reported and harms may have been underestimated.\nAuthors' conclusions\nThis Cochrane systematic review analysed a heterogeneous group of five trials at high risk of bias and with high risk of random errors, conducted by only one research team. We assessed acetyl-L-carnitine versus placebo in participants with cirrhosis with covert or overt hepatic encephalopathy. Hence, we have no data on the drug for hepatic encephalopathy in acute liver failure. We found no information about all-cause mortality, serious adverse events, or days of hospitalisation. We found no clear differences in effect between acetyl-L-carnitine and placebo regarding quality of life, fatigue, and non-serious adverse events. Acetyl-L-carnitine reduces blood ammonium levels compared with placebo. We rated all evidence as of very low quality due to pitfalls in design and execution, inconsistency, small sample sizes, and very few events. The harms profile for acetyl-L-carnitine is presently unclear. Accordingly, we need further randomised clinical trials to assess acetyl-L-carnitine versus placebo conducted according to the SPIRIT statements and reported according to the CONSORT statements.\n\nGiven the provided abstract, please respond to the following query: \"Shortcomings of the trials included in this review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "None of the included trials reported data on participants’ all-cause mortality, serious adverse events, or days of hospitalisation. Researchers poorly reported harms caused by acetyl-L-carnitine, so the harms profile remains unclear. Risks of bias, imprecision, and outcome reporting bias all make the certainty of evidence low or very low. A reduction in blood ammonium levels favoured participants receiving acetyl-L-carnitine, but study authors observed no clinical benefits.\nIt is clear that additional randomised clinical trials are required to assess the benefits and harms of acetyl-L-carnitine compared with placebo in the treatment of people with hepatic encephalopathy. These trials should be well designed, conducted by independent researchers, and collaborative, and should include large numbers of participants."
}
] |
query-laysum
|
8249
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review last published in Issue 7, 2019; it includes two additional studies.\nEpilepsy is a common neurological disease that affects approximately 1% of the UK population. Approximately one-third of these people continue to have seizures despite drug treatment. Pregabalin is one of the newer antiepileptic drugs that has been developed to improve outcomes. In this review we summarised the current evidence regarding pregabalin when used as an add-on treatment for drug-resistant focal epilepsy.\nObjectives\nTo assess the efficacy and tolerability of pregabalin when used as an add-on treatment for drug-resistant focal epilepsy.\nSearch methods\nFor the latest update we searched the following databases on 16 November 2020: Cochrane Register of Studies (CRS Web), and MEDLINE (Ovid, 1946 to 16 November 2020). CRS Web includes randomised or quasi-randomised, controlled trials from PubMed, Embase, ClinicalTrials.gov, the World Health Organisation International Clinical Trials Registry Platform (ICTRP), the Cochrane Central Register of Controlled Trials (CENTRAL), and the Specialised Registers of Cochrane Review Groups, including Epilepsy.\nWe imposed no language restrictions. We contacted the manufacturers of pregabalin and authors in the field to identify any relevant unpublished studies.\nSelection criteria\nWe included randomised controlled trials comparing pregabalin with placebo or an alternative antiepileptic drug as an add-on for people of any age with drug-resistant focal epilepsy. Double-blind and single-blind trials were eligible for inclusion. The primary outcome was 50% or greater reduction in seizure frequency; secondary outcomes were seizure freedom, treatment withdrawal for any reason, treatment withdrawal due to adverse effects, and proportion of individuals experiencing adverse effects.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted the relevant data. Primary analyses were intention-to-treat (ITT). We presented summary risk ratios (RRs) and odds ratios (ORs) with 95% confidence intervals (CIs). We evaluated dose response in regression models. We carried out a risk of bias assessment for each included study using the Cochrane risk of bias tool and assessed the overall certainty of evidence using the GRADE approach.\nMain results\nWe included 11 randomised controlled trials (3949 participants). Nine trials compared pregabalin to placebo. For the primary outcome, participants randomised to pregabalin were significantly more likely to attain a 50% or greater reduction in seizure frequency compared to placebo (RR 1.95, 95% CI 1.40 to 2.72, 9 trials, 2663 participants, low-certainty evidence). The odds of response doubled with an increase in dose from 300 mg/day to 600 mg/day (OR 1.99, 95% CI 1.74 to 2.28), indicating a dose-response relationship. Pregabalin was significantly associated with seizure freedom (RR 3.94, 95% CI 1.50 to 10.37, 4 trials, 1125 participants, moderate-certainty evidence). Participants were significantly more likely to withdraw from pregabalin treatment than placebo for any reason (RR 1.33, 95% CI 1.10 to 1.60; 9 trials, 2663 participants; moderate-certainty evidence) and for adverse effects (RR 2.60, 95% CI 1.86 to 3.64; 9 trials, 2663 participants; moderate-certainty evidence).\nThree trials compared pregabalin to three active-control drugs: lamotrigine, levetiracetam and gabapentin. 
Participants allocated to pregabalin were significantly more likely to achieve a 50% or greater reduction in seizure frequency than those allocated to lamotrigine (RR 1.47, 95% CI 1.03 to 2.12; 1 trial, 293 participants) but not those allocated to levetiracetam (RR 0.94, 95% CI 0.80 to 1.11; 1 trial, 509 participants) or gabapentin (RR 0.96, 95% CI 0.82 to 1.12; 1 trial, 484 participants). We found no significant differences between pregabalin and lamotrigine for seizure freedom (RR 1.39, 95% CI 0.40 to 4.83). However, significantly fewer participants achieved seizure freedom with add-on pregabalin compared to levetiracetam (RR 0.50, 95% CI 0.30 to 0.85). No data were reported for this outcome for pregabalin versus gabapentin. We detected no significant differences in treatment withdrawal rates, either for any reason or due to adverse effects, in either the pooled analysis or the subgroup analysis. Ataxia, dizziness, somnolence, weight gain, headache and fatigue were significantly more often associated with pregabalin than with active control.\nWe rated the overall risk of bias in the included studies as low or unclear due to the possibility of publication bias and lack of methodological details provided. We assessed all the studies to be at a high risk of funding bias as they were all sponsored by Pfizer. We rated the certainty of the evidence as very low to moderate using the GRADE approach.\nAuthors' conclusions\nFor people with drug-resistant focal epilepsy, pregabalin, when used as an add-on treatment, was significantly more effective than placebo at producing a 50% or greater seizure reduction and seizure freedom. Results demonstrated efficacy for doses from 150 mg/day to 600 mg/day, with increasing effectiveness at 600 mg doses, although there were issues with tolerability at higher doses. However, the trials included in this review were of short duration, and longer-term trials are needed to inform clinical decision-making. This review focused on the use of pregabalin in drug-resistant focal epilepsy, and the results cannot be generalised to add-on treatment for generalised epilepsies. Likewise, no inference can be made about the effects of pregabalin when used as monotherapy.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We rated all included studies as being at low or unclear risk of bias due to missing information about the methods used to conduct the trial and a suspicion of publication bias. Publication bias can occur when studies that report non-significant findings are not published. We suspected publication bias because the majority of included studies showed significant findings and were sponsored by the same drug company. We assessed the certainty of the evidence for the primary outcome of reduction in seizure frequency as low, meaning that we cannot be certain that the finding reported is accurate. However, we rated the certainty of the evidence for the outcomes seizure freedom and treatment withdrawal as moderate, so we can be fairly confident that these results are accurate. There were no data regarding the longer-term effectiveness of pregabalin, and the use of pregabalin in drug-resistant generalised epilepsy, which should be investigated in future studies.\nThe evidence is current to 16 November 2020."
}
] |
query-laysum
|
8250
|
[
{
"role": "user",
"content": "Abstract: Background\nHyperimmune immunoglobulin (hIVIG) contains polyclonal antibodies, which can be prepared from large amounts of pooled convalescent plasma or prepared from animal sources through immunisation. They are being investigated as a potential therapy for coronavirus disease 2019 (COVID-19). This review was previously part of a parent review addressing convalescent plasma and hIVIG for people with COVID-19 and was split to address hIVIG and convalescent plasma separately.\nObjectives\nTo assess the benefits and harms of hIVIG therapy for the treatment of people with COVID-19, and to maintain the currency of the evidence using a living systematic review approach.\nSearch methods\nTo identify completed and ongoing studies, we searched the World Health Organization (WHO) COVID-19 Research Database, the Cochrane COVID-19 Study Register, the Epistemonikos COVID-19 L*OVE Platform and Medline and Embase from 1 January 2019 onwards. We carried out searches on 31 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated hIVIG for COVID-19, irrespective of disease severity, age, gender or ethnicity.\nWe excluded studies that included populations with other coronavirus diseases (severe acute respiratory syndrome (SARS) or Middle East respiratory syndrome (MERS)), as well as studies that evaluated standard immunoglobulin.\nData collection and analysis\nWe followed standard Cochrane methodology.\nTo assess bias in included studies, we used RoB 2. We rated the certainty of evidence, using the GRADE approach, for the following outcomes: all-cause mortality, improvement and worsening of clinical status (for individuals with moderate to severe disease), quality of life, adverse events, and serious adverse events.\nMain results\nWe included five RCTs with 947 participants, of whom 688 received hIVIG prepared from humans, 18 received heterologous swine glyco-humanised polyclonal antibody, and 241 received equine-derived processed and purified F(ab’)2 fragments. All participants were hospitalised with moderate-to-severe disease, most participants were not vaccinated (only 12 participants were vaccinated). The studies were conducted before or during the emergence of several SARS-CoV-2 variants of concern.\nThere are no data for people with COVID-19 with no symptoms (asymptomatic) or people with mild COVID-19. We identified a further 10 ongoing studies evaluating hIVIG.\nBenefits of hIVIG prepared from humans\nWe included data on one RCT (579 participants) that assessed the benefits and harms of hIVIG 0.4 g/kg compared to saline placebo. hIVIG may have little to no impact on all-cause mortality at 28 days (risk ratio (RR) 0.79, 95% confidence interval (CI) 0.43 to 1.44; absolute effect 77 per 1000 with placebo versus 61 per 1000 (33 to 111) with hIVIG; low-certainty evidence). The evidence is very uncertain about the effect on worsening of clinical status at day 7 (RR 0.85, 95% CI 0.58 to 1.23; very low-certainty evidence). It probably has little to no impact on improvement of clinical status on day 28 (RR 1.02, 95% CI 0.97 to 1.08; moderate-certainty evidence). We did not identify any studies that reported quality-of-life outcomes, so we do not know if hIVIG has any impact on quality of life.\nHarms of hIVIG prepared from humans\nhIVIG may have little to no impact on adverse events at any grade on day 1 (RR 0.98, 95% CI 0.81 to 1.18; 431 per 1000; 1 study 579 participants; low-certainty evidence). 
Patients receiving hIVIG probably experience more adverse events at grade 3-4 severity than patients who receive placebo (RR 4.09, 95% CI 1.39 to 12.01; moderate-certainty evidence). hIVIG may have little to no impact on the composite outcome of serious adverse events or death up to day 28 (RR 0.72, 95% CI 0.45 to 1.14; moderate-certainty evidence).\nWe also identified additional results on the benefits and harms of other dose ranges of hIVIG, not included in the summary of findings table, but summarised in additional tables.\nBenefits of animal-derived polyclonal antibodies\nWe included data on one RCT (241 participants) to assess the benefits and harms of receptor-binding domain-specific polyclonal F(ab´)2 fragments of equine antibodies (EpAbs) compared to saline placebo. EpAbs may reduce all-cause mortality at 28 days (RR 0.60, 95% CI 0.26 to 1.37; absolute effect 114 per 1000 with placebo versus 68 per 1000 (30 to 156) ; low-certainty evidence). EpAbs may reduce worsening of clinical status up to day 28 (RR 0.67, 95% CI 0.38 to 1.18; absolute effect 203 per 1000 with placebo versus 136 per 1000 (77 to 240); low-certainty evidence). It may have some effect on improvement of clinical status on day 28 (RR 1.06, 95% CI 0.96 to 1.17; low-certainty evidence). We did not identify any studies that reported quality-of-life outcomes, so we do not know if EpAbs have any impact on quality of life.\nHarms of animal-derived polyclonal antibodies\nEpAbs may have little to no impact on the number of adverse events at any grade up to 28 days (RR 0.99, 95% CI 0.74 to 1.31; low-certainty evidence). Adverse events at grade 3-4 severity were not reported. Individuals receiving EpAbs may experience fewer serious adverse events than patients receiving placebo (RR 0.67, 95% CI 0.38 to 1.19; low-certainty evidence).\nWe also identified additional results on the benefits and harms of other animal-derived polyclonal antibody doses, not included in the summary of findings table, but summarised in additional tables.\nAuthors' conclusions\nWe included data from five RCTs that evaluated hIVIG compared to standard therapy, with participants with moderate-to-severe disease. As the studies evaluated different preparations (from humans or from various animals) and doses, we could not pool them. hIVIG prepared from humans may have little to no impact on mortality, and clinical improvement and worsening. hIVIG may increase grade 3-4 adverse events. Studies did not evaluate quality of life. RBD-specific polyclonal F(ab´)2 fragments of equine antibodies may reduce mortality and serious adverse events, and may reduce clinical worsening. However, the studies were conducted before or during the emergence of several SARS-CoV-2 variants of concern and prior to widespread vaccine rollout.\nAs no studies evaluated hIVIG for participants with asymptomatic infection or mild disease, benefits for these individuals remains uncertain.\nThis is a living systematic review. We search monthly for new evidence and update the review when we identify relevant new evidence.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• We do not know whether hyperimmune immunoglobulin (a preparation made with antibodies from people who have recovered from COVID-19) reduces deaths or serious unwanted effects in people with moderate to severe COVID-19. But a similar preparation made by injecting animals with certain antibodies may reduce deaths, and serious unwanted effects and may stop people’s condition getting worse.\n• We found no studies that investigated people with COVID-19 but without symptoms or people with mild COVID-19, so we do not know how effective human or animal hyperimmune immunoglobulin is for them.\n• We found 10 ongoing studies. We will update this review when their results become available."
}
] |
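The hIVIG record above pairs each risk ratio with a GRADE-style absolute effect per 1000 people (e.g. "77 per 1000 with placebo versus 61 per 1000 (33 to 111) with hIVIG"). A minimal sketch of how such per-1000 figures follow from a risk ratio and an assumed control-group risk; the helper name and rounding convention are illustrative assumptions, not taken from the review:

```python
def absolute_effect_per_1000(control_risk_per_1000, rr, rr_low, rr_high):
    """Re-express a risk ratio (with its 95% CI) as an absolute effect per
    1000 people, given the assumed risk in the control group."""
    estimate = round(control_risk_per_1000 * rr)
    ci = (round(control_risk_per_1000 * rr_low),
          round(control_risk_per_1000 * rr_high))
    return estimate, ci

# Figures from the record above: 77 per 1000 with placebo, RR 0.79 (0.43 to 1.44)
print(absolute_effect_per_1000(77, 0.79, 0.43, 1.44))  # -> (61, (33, 111))
```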
query-laysum | 8251 | [
{
"role": "user",
"content": "Abstract: Background\nFibrates are a class of drugs characterised by mainly lowering high triglyceride, raising high-density lipoprotein (HDL) cholesterol, and lowering the small dense fraction of low-density lipoprotein (LDL) cholesterol. Their efficacy for secondary prevention of serious vascular events is unclear, and to date no systematic review focusing on secondary prevention has been undertaken.\nObjectives\nTo assess the efficacy and safety of fibrates for the prevention of serious vascular events in people with previous cardiovascular disease (CVD), including coronary heart disease and stroke.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 9, 2014) on the Cochrane Library, MEDLINE (OVID, 1946 to October week 1 2014), EMBASE (OVID, 1980 to 2014 week 41), the China Biological Medicine Database (CBM) (1978 to 2014), the Chinese National Knowledge Infrastructure (CNKI) (1979 to 2014), Chinese Science and Technique Journals Database (VIP) (1989 to 2014). We also searched other resources, such as ongoing trials registers and databases of conference abstracts, to identify further published, unpublished, and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) in which a fibrate (for example gemfibrozil, fenofibrate) was compared with placebo or no treatment. We excluded RCTs with only laboratory outcomes. We also excluded trials comparing two different fibrates without a placebo or no-treatment control.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, assessed risk of bias, and extracted the data. We contacted authors of trials for missing data.\nMain results\nWe included 13 trials involving a total of 16,112 participants. Eleven trials recruited participants with history of coronary heart disease, two trials recruited participants with history of stroke, and one trial recruited participants with a mix of people with CVD. We judged overall risk of bias to be moderate. The meta-analysis (including all fibrate trials) showed evidence for a protective effect of fibrates primarily compared to placebo for the primary composite outcome of non-fatal stroke, non-fatal myocardial infarction (MI), and vascular death (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.83 to 0.94; participants = 16,064; studies = 12; I2 = 45%, fixed effect). Fibrates were moderately effective for preventing MI occurrence (RR 0.86, 95% CI 0.80 to 0.93; participants = 13,942; studies = 10; I2 = 24%, fixed effect). Fibrates were not effective against all-cause mortality (RR 0.98, 95% CI 0.91 to 1.06; participants = 13,653; studies = 10; I2 = 23%), death from vascular causes (RR 0.95, 95% CI 0.86 to 1.05; participants = 13,653; studies = 10; I2 = 11%, fixed effect), and stroke events (RR 1.03, 95% CI 0.91 to 1.16; participants = 11,719; studies = 6; I2 = 11%, fixed effect). Excluding clofibrate trials, as the use of clofibrate was discontinued in 2002 due to safety concerns, the remaining class of fibrates were no longer effective in preventing the primary composite outcome (RR 0.90, 95% CI 0.79 to 1.03; participants = 10,320; studies = 7; I2 = 50%, random effects). However, without clofibrate data, fibrates remained effective in preventing MI (RR 0.85, 95% CI 0.76 to 0.94; participants = 8304; studies = 6; I2 = 47%, fixed effect). There was no increase in adverse events with fibrates compared to control. 
Subgroup analyses showed the benefit of fibrates on the primary composite outcome to be consistent irrespective of age, gender, and diabetes mellitus.\nAuthors' conclusions\nModerate evidence showed that the fibrate class can be effective in the secondary prevention of composite outcome of non-fatal stroke, non-fatal MI, and vascular death. However, this beneficial effect relies on the inclusion of clofibrate data, a drug that was discontinued in 2002 due to its unacceptably large adverse effects. Further trials of the use of fibrates in populations with previous stroke and also against a background treatment with statins (standard of care) are required.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Fibrates have been used for many years as a treatment to prevent further heart attacks and strokes in people who already have disease of their circulatory system. Fibrates are a class of drugs that work by positively influencing fats in the the blood, that is lowering triglyceride, raising high-density lipoprotein ('good') cholesterol and lowering the small dense fraction of low-density lipoprotein ('bad') cholesterol. The drug class includes clofibrate, gemfibrozil, fenofibrate, bezafibrate, and ciprofibrate. Successful adjustment of the blood fats in people with disease of their circulatory system could mean these individuals are less likely to have heart attacks, stroke, and die from their circulatory disease. There is currently no clear evidence for benefit of fibrates in such people."
}
] |
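The fibrates abstract above reports fixed-effect pooled risk ratios with I² heterogeneity statistics (e.g. RR 0.88, 95% CI 0.83 to 0.94; I² = 45%). The sketch below shows the standard inverse-variance fixed-effect method behind such pooled estimates; the two study inputs are hypothetical illustrations, not the trial data from the review:

```python
import math

def pool_fixed_effect(log_rrs, ses):
    """Inverse-variance fixed-effect pooling of log risk ratios.
    Returns the pooled RR and its 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            (math.exp(pooled_log - 1.96 * pooled_se),
             math.exp(pooled_log + 1.96 * pooled_se)))

# Hypothetical inputs: two studies with RRs of 0.85 and 0.92,
# with standard errors of 0.05 and 0.08 on the log scale
print(pool_fixed_effect([math.log(0.85), math.log(0.92)], [0.05, 0.08]))
```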
query-laysum | 8252 | [
{
"role": "user",
"content": "Abstract: Background\nHip osteoarthritis (OA) is a major cause of pain and functional limitation. Few hip OA treatments have been evaluated for safety and effectiveness. Acupuncture is a traditional Chinese medical therapy which aims to treat disease by inserting very thin needles at specific points on the body.\nObjectives\nTo assess the benefits and harms of acupuncture in patients with hip OA.\nSearch methods\nWe searched Cochrane CENTRAL, MEDLINE, and Embase all through March 2018.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared acupuncture with sham acupuncture, another active treatment, or no specific treatment; and RCTs that evaluated acupuncture as an addition to another treatment. Major outcomes were pain and function at the short term (i.e. < 3 months after randomization) and adverse events.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nSix RCTs with 413 participants were included. Four RCTs included only people with OA of the hip, and two included a mix of people with OA of the hip and knee. All RCTs included primarily older participants, with a mean age range from 61 to 67 years, and a mean duration of hip OA pain from two to eight years. Approximately two-thirds of participants were women. Two RCTs compared acupuncture versus sham acupuncture; the other four RCTs were not blinded. All results were evaluated at short term (i.e. four to nine weeks after randomization).\nIn the two RCTs that compared acupuncture to sham acupuncture, the sham acupuncture control interventions were judged believable, but each sham acupuncture intervention was also judged to have a risk of weak acupuncture-specific effects, due to placement of non-penetrating needles at the correct acupuncture points in one RCT, and the use of penetrating needles not inserted at the correct points in the other RCT. For these two sham-controlled RCTs, the risk of bias was low for all outcomes.\nThe combined analysis of two sham-controlled RCTs gave moderate quality evidence of little or no effect in reduction in pain for acupuncture relative to sham acupuncture. Due to the small sample sizes in the studies, the confidence interval includes both the possibility of moderate benefit and the possibility of no effect of acupuncture (120 participants; Standardized Mean Difference (SMD) -0.13, (95% Confidence Interval (CI) -0.49 to 0.22); 2.1 points greater improvement with acupuncture compared to sham acupuncture on 100 point scale (i.e., absolute percent change -2.1% (95% CI -7.9% to 3.6%)); relative percent change -4.1% (95% CI -15.6% to 7.0%)). Estimates of effect were similar for function (120 participants; SMD -0.15, (95% CI -0.51 to 0.21)). 
No pooled estimate, representative of the two sham-controlled RCTs, could be calculated or reported for the quality of life outcome.\nThe four other RCTs were unblinded comparative effectiveness RCTs, which compared (additional) acupuncture to four different active control treatments.\nThere was low quality evidence that addition of acupuncture to the routine primary care that RCT participants were receiving from their physicians was associated with statistically significant and clinically relevant benefits, compared to the routine primary physician care alone, in pain (1 RCT; 137 participants; mean percent difference -22.9% (95% CI -29.2% to -16.6%); relative percent difference -46.5% (95% CI -59.3% to -33.7%)) and function (mean percent difference -19.0% (95% CI -24.41 to -13.59); relative percent difference -38.6% (95% CI -49.6% to -27.6%)). There was no statistically significant difference for mental quality of life and acupuncture showed a small, significant benefit for physical quality of life.\nThe effects of acupuncture compared with either advice plus exercise or NSAIDs are uncertain.\nWe are also uncertain whether acupuncture plus patient education improves pain, function, and quality of life, when compared to patient education alone.\nIn general, the overall quality of the evidence for the four comparative effectiveness RCTs was low to very low, mainly due to the potential for biased reporting of patient-assessed outcomes due to lack of blinding and sparse data.\nInformation on safety was reported in four RCTs. Two RCTs reported minor side effects of acupuncture, which were primarily minor bruising, bleeding, or pain at needle insertion sites. Four RCTs reported on adverse events, and none reported any serious adverse events attributed to acupuncture.\nAuthors' conclusions\nAcupuncture probably has little or no effect in reducing pain or improving function relative to sham acupuncture in people with hip osteoarthritis. Due to the small sample size in the studies, the confidence intervals include both the possibility of moderate benefits and the possibility of no effect of acupuncture. One unblinded trial found that acupuncture as an addition to routine primary physician care was associated with benefits on pain and function. However, these reported benefits are likely due at least partially to RCT participants' greater expectations of benefit from acupuncture. Possible side effects associated with acupuncture treatment were minor.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
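The acupuncture abstract above re-expresses a standardized mean difference (SMD) of -0.13 as roughly 2.1 points on a 100-point scale. The usual back-conversion multiplies the SMD by a representative standard deviation on the target scale; the pooled SD of about 16 points assumed below is inferred from the reported figures rather than stated in the abstract:

```python
def smd_to_scale_units(smd, scale_sd):
    """Re-express a standardized mean difference in the units of a specific
    instrument by multiplying by that instrument's standard deviation."""
    return smd * scale_sd

# SMD -0.13 with an assumed pooled SD of ~16 points on a 0-100 pain scale
print(round(smd_to_scale_units(-0.13, 16), 1))  # -> -2.1 points
```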
query-laysum | 8253 | [
{
"role": "user",
"content": "Abstract: Background\nCancer of ovarian, fallopian tube and peritoneal origin, referred to collectively as ovarian cancer, is the eighth most common cancer in women and is often diagnosed at an advanced stage. Women with relapsed epithelial ovarian cancer (EOC) are less well and have a limited life expectancy, therefore maintaining quality of life with effective symptom control is an important aim of treatment. However, the unwanted effects of chemotherapy agents may be severe, and optimal treatment regimens are unclear. Pegylated liposomal doxorubicin (PLD), which contains a cytotoxic drug called doxorubicin hydrochloride, is one of several treatment modalities that may be considered for treatment of relapsed EOCs. This is an update of the original Cochrane Review which was published in Issue 7, 2013.\nObjectives\nTo evaluate the efficacy and safety of PLD, with or without other anti-cancer drugs, in women with relapsed high grade epithelial ovarian cancer (EOC).\nSearch methods\nWe searched CENTRAL, MEDLINE (via Ovid) and Embase (via Ovid) from 1990 to January 2022. We also searched online registers of clinical trials, abstracts of scientific meetings and reference lists of included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated PLD in women diagnosed with relapsed epithelial ovarian cancer.\nData collection and analysis\nTwo review authors independently extracted data to a pre-designed data collection form and assessed the risk of bias according to the Cochrane Handbook for Systematic Reviews of Interventions guidelines. Where possible, we pooled collected data in meta-analyses.\nMain results\nThis is an update of a previous review with 12 additional studies, so this updated review includes a total of 26 RCTs with 8277 participants that evaluated the effects of PLD alone or in combination with other drugs in recurrent EOC: seven in platinum-sensitive disease (2872 participants); 11 in platinum-resistant disease (3246 participants); and eight that recruited individuals regardless of platinum sensitivity status (2079 participants). The certainty of the evidence was assessed for the three most clinically relevant comparisons out of eight comparisons identified in the included RCTs.\nRecurrent platinum-sensitive EOC\nPLD with conventional chemotherapy agent compared to alternative combination chemotherapy likely results in little to no difference in overall survival (OS) (hazard ratio (HR) 0.93, 95% confidence interval (CI) 0.83 to 1.04; 5 studies, 2006 participants; moderate-certainty evidence) but likely increases progression-free survival (PFS) (HR 0.81, 95% CI 0.74 to 0.89; 5 studies, 2006 participants; moderate-certainty evidence). The combination may slightly improve quality of life at three months post-randomisation, measured using European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire C30 (mean difference 4.80, 95% CI 0.92 to 8.68; 1 study, 608 participants; low-certainty evidence), but this may not represent a clinically meaningful difference.\nPLD in combination with another chemotherapy agent compared to alternative combination chemotherapy likely results in little to no difference in the rate of overall severe adverse events (grade ≥ 3) (risk ratio (RR) 1.11, 95% CI 0.95 to 1.30; 2 studies, 834 participants; moderate-certainty evidence). PLD with chemotherapy likely increases anaemia (grade ≥ 3) (RR 1.37, 95% CI 1.02 to 1.85; 5 studies, 1961 participants; moderate-certainty evidence). 
The evidence is very uncertain about the effect of PLD with conventional chemotherapy on hand-foot syndrome (HFS) (grade ≥ 3) (RR 4.01, 95% CI 1.00 to 16.01; 2 studies, 1028 participants; very low-certainty evidence) and neurological events (grade ≥ 3) (RR 0.38, 95% CI 0.20 to 0.74; 4 studies, 1900 participants; very low-certainty evidence).\nRecurrent platinum-resistant EOC\nPLD alone compared to another conventional chemotherapy likely results in little to no difference in OS (HR 0.96, 95% CI 0.77 to 1.19; 6 studies, 1995 participants; moderate-certainty evidence). The evidence is very uncertain about the effect of PLD on PFS (HR 0.94, 95% CI 0.85 to 1.04; 4 studies, 1803 participants; very low-certainty evidence), overall severe adverse events (grade ≥ 3) (RR ranged from 0.61 to 0.97; 2 studies, 964 participants; very low-certainty evidence), anaemia (grade ≥ 3) (RR ranged from 0.19 to 0.82; 5 studies, 1968 participants; very low-certainty evidence), HFS (grade ≥ 3) (RR ranged from 15.19 to 109.15; 6 studies, 2184 participants; very low-certainty evidence), and the rate of neurological events (grade ≥ 3) (RR ranged from 0.08 to 3.09; 3 studies, 1222 participants; very low-certainty evidence).\nPLD with conventional chemotherapy compared to PLD alone likely results in little to no difference in OS (HR 0.92, 95% CI 0.70 to 1.21; 1 study, 242 participants; moderate-certainty evidence) and it may result in little to no difference in PFS (HR 0.94, 95% CI 0.73 to 1.22; 2 studies, 353 participants; low-certainty evidence). The combination likely increases overall severe adverse events (grade ≥ 3) (RR 2.48, 95% CI 1.98 to 3.09; 1 study, 663 participants; moderate-certainty evidence) and anaemia (grade ≥ 3) (RR 2.38, 95% CI 1.46 to 3.87; 2 studies, 785 participants; moderate-certainty evidence), but likely results in a large reduction in HFS (grade ≥ 3) (RR 0.24, 95% CI 0.14 to 0.40; 2 studies, 785 participants; moderate-certainty evidence). It may result in little to no difference in neurological events (grade ≥ 3) (RR 1.40, 95% CI 0.85 to 2.31; 1 study, 663 participants; low-certainty evidence).\nAuthors' conclusions\nIn platinum-sensitive relapsed EOC, including PLD in a combination chemotherapy regimen probably makes little to no difference in OS compared to other combinations, but likely improves PFS. Choice of chemotherapy will therefore be guided by symptoms from previous chemotherapy and other patient considerations. Single-agent PLD remains a useful agent for platinum-resistant relapsed EOC and choice of agent at relapse will depend on patient factors, e.g. degree of bone marrow suppression or neurotoxicity from previous treatments. Adding another agent to PLD likely increases overall grade ≥ 3 adverse events with little to no improvement in survival outcomes. The limited evidence relating to PLD in combination with other agents in platinum-resistant relapsed EOC does not indicate a benefit, but there is some evidence of increased side effects.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We added 12 studies to the previous review, so now include 26 studies with a total of 8277 women with recurrent EOC. Seven studies looked at platinum-sensitive disease (2827 women); 11 platinum-resistant disease (3246 women); and eight recruited women who had both platinum-sensitivity and platinum resistant disease (2079 women).\nRecurrent platinum-sensitive EOC\nWe found five studies for women with platinum-sensitive disease using PLD in combination with chemotherapy versus alternative combination chemotherapy. The PLD combination likely makes little difference in how long women survive (overall survival, OS), but likely increases the time to further relapse (progression-free survival, PFS). There may be a slight improvement in quality of life. There may be little to no difference in the overall number of severe unwanted effects, although adding PLD causes more anaemia. We are uncertain about the effect of PLD with chemotherapy on other individual unwanted effects.\nRecurrent platinum-resistant EOC\nWe found six studies for women with platinum-resistant disease using PLD alone compared to conventional chemotherapy. PLD alone likely makes little difference in OS. We are very uncertain about the effect on PFS, overall severe unwanted effects (i.e. those that require hospital treatment, e.g. blood transfusion), severe anaemia (grade ≥ 3), HFS, and the rate of severe unwanted effects on the nervous system (e.g. permanent numbness in fingers and toes).\nWe found two studies that compared PLD alongside other chemotherapy combination with PLD alone. PLD in combination likely makes little difference in OS, and it may make little difference in PFS. The combination likely increases overall severe unwanted effects and anaemia. Combination treatment likely results in a large reduction in HFS, but may result in little difference in unwanted effects on the nervous system.\nSeveral studies compared PLD alone with new targeted agents or immunotherapy, but we are very uncertain about the benefit of adding these to PLD."
}
] |
query-laysum | 8254 | [
{
"role": "user",
"content": "Abstract: Background\nDiabetes is the commonest cause of chronic kidney disease (CKD). Both conditions commonly co-exist. Glucometabolic changes and concurrent dialysis in diabetes and CKD make glucose-lowering challenging, increasing the risk of hypoglycaemia. Glucose-lowering agents have been mainly studied in people with near-normal kidney function. It is important to characterise existing knowledge of glucose-lowering agents in CKD to guide treatment.\nObjectives\nTo examine the efficacy and safety of insulin and other pharmacological interventions for lowering glucose levels in people with diabetes and CKD.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 12 February 2018 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nAll randomised controlled trials (RCTs) and quasi-RCTs looking at head-to-head comparisons of active regimens of glucose-lowering therapy or active regimen compared with placebo/standard care in people with diabetes and CKD (estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2) were eligible.\nData collection and analysis\nFour authors independently assessed study eligibility, risk of bias, and quality of data and performed data extraction. Continuous outcomes were expressed as post-treatment mean differences (MD). Adverse events were expressed as post-treatment absolute risk differences (RD). Dichotomous clinical outcomes were presented as risk ratios (RR) with 95% confidence intervals (CI).\nMain results\nForty-four studies (128 records, 13,036 participants) were included. Nine studies compared sodium glucose co-transporter-2 (SGLT2) inhibitors to placebo; 13 studies compared dipeptidyl peptidase-4 (DPP-4) inhibitors to placebo; 2 studies compared glucagon-like peptide-1 (GLP-1) agonists to placebo; 8 studies compared glitazones to no glitazone treatment; 1 study compared glinide to no glinide treatment; and 4 studies compared different types, doses or modes of administration of insulin. In addition, 2 studies compared sitagliptin to glipizide; and 1 study compared each of sitagliptin to insulin, glitazars to pioglitazone, vildagliptin to sitagliptin, linagliptin to voglibose, and albiglutide to sitagliptin. Most studies had a high risk of bias due to funding and attrition bias, and an unclear risk of detection bias.\nCompared to placebo, SGLT2 inhibitors probably reduce HbA1c (7 studies, 1092 participants: MD -0.29%, -0.38 to -0.19 (-3.2 mmol/mol, -4.2 to -2.2); I2 = 0%), fasting blood glucose (FBG) (5 studies, 855 participants: MD -0.48 mmol/L, -0.78 to -0.19; I2 = 0%), systolic blood pressure (BP) (7 studies, 1198 participants: MD -4.68 mmHg, -6.69 to -2.68; I2 = 40%), diastolic BP (6 studies, 1142 participants: MD -1.72 mmHg, -2.77 to -0.66; I2 = 0%), heart failure (3 studies, 2519 participants: RR 0.59, 0.41 to 0.87; I2 = 0%), and hyperkalaemia (4 studies, 2788 participants: RR 0.58, 0.42 to 0.81; I2 = 0%); but probably increase genital infections (7 studies, 3086 participants: RR 2.50, 1.52 to 4.11; I2 = 0%), and creatinine (4 studies, 848 participants: MD 3.82 μmol/L, 1.45 to 6.19; I2 = 16%) (all effects of moderate certainty evidence). 
SGLT2 inhibitors may reduce weight (5 studies, 1029 participants: MD -1.41 kg, -1.8 to -1.02; I2 = 28%) and albuminuria (MD -8.14 mg/mmol creatinine, -14.51 to -1.77; I2 = 11%; low certainty evidence). SGLT2 inhibitors may have little or no effect on the risk of cardiovascular death, hypoglycaemia, acute kidney injury (AKI), and urinary tract infection (low certainty evidence). It is uncertain whether SGLT2 inhibitors have any effect on death, end-stage kidney disease (ESKD), hypovolaemia, fractures, diabetic ketoacidosis, or discontinuation due to adverse effects (very low certainty evidence).\nCompared to placebo, DPP-4 inhibitors may reduce HbA1c (7 studies, 867 participants: MD -0.62%, -0.85 to -0.39 (-6.8 mmol/mol, -9.3 to -4.3); I2 = 59%) but may have little or no effect on FBG (low certainty evidence). DPP-4 inhibitors probably have little or no effect on cardiovascular death (2 studies, 5897 participants: RR 0.93, 0.77 to 1.11; I2 = 0%) and weight (2 studies, 210 participants: MD 0.16 kg, -0.58 to 0.90; I2 = 29%; moderate certainty evidence). Compared to placebo, DPP-4 inhibitors may have little or no effect on heart failure, upper respiratory tract infections, and liver impairment (low certainty evidence). Compared to placebo, it is uncertain whether DPP-4 inhibitors have any effect on eGFR, hypoglycaemia, pancreatitis, pancreatic cancer, or discontinuation due to adverse effects (very low certainty evidence).\nCompared to placebo, GLP-1 agonists probably reduce HbA1c (7 studies, 867 participants: MD -0.53%, -1.01 to -0.06 (-5.8 mmol/mol, -11.0 to -0.7); I2 = 41%; moderate certainty evidence) and may reduce weight (low certainty evidence). GLP-1 agonists may have little or no effect on eGFR, hypoglycaemia, or discontinuation due to adverse effects (low certainty evidence). It is uncertain whether GLP-1 agonists reduce FBG, increase gastrointestinal symptoms, or affect the risk of pancreatitis (very low certainty evidence).\nCompared to placebo, it is uncertain whether glitazones have any effect on HbA1c, FBG, death, weight, and risk of hypoglycaemia (very low certainty evidence).\nCompared to glipizide, sitagliptin probably reduces hypoglycaemia (2 studies, 551 participants: RR 0.40, 0.23 to 0.69; I2 = 0%; moderate certainty evidence). Compared to glipizide, sitagliptin may have had little or no effect on HbA1c, FBG, weight, and eGFR (low certainty evidence). Compared to glipizide, it is uncertain if sitagliptin has any effect on death or discontinuation due to adverse effects (very low certainty).\nFor types, dosages or modes of administration of insulin and other head-to-head comparisons only individual studies were available so no conclusions could be made.\nAuthors' conclusions\nEvidence concerning the efficacy and safety of glucose-lowering agents in diabetes and CKD is limited. SGLT2 inhibitors and GLP-1 agonists are probably efficacious for glucose-lowering and DPP-4 inhibitors may be efficacious for glucose-lowering. Additionally, SGLT2 inhibitors probably reduce BP, heart failure, and hyperkalaemia but increase genital infections, and slightly increase creatinine. The safety profile for GLP-1 agonists is uncertain. No further conclusions could be made for the other classes of glucose-lowering agents including insulin. More high quality studies are required to help guide therapeutic choice for glucose-lowering in diabetes and CKD.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 44 studies involving 13,036 people. Most studies compared different medication types - sodium glucose co-transporter-2 (SGLT2) inhibitors, dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagon-like peptide-1 (GLP-1) agonists, and glitazones to no treatment. Two studies compared the medications sitagliptin to glipizide.\nSGLT2 inhibitors probably reduce glucose levels, blood pressure, heart failure and high potassium levels but increase genital infections and slightly reduce kidney function. SGLT2 inhibitors may reduce weight. Their effect on the risk of death, hypoglycaemia, acute kidney injury, urinary tract infection, end-stage kidney disease, low blood volume, bone fractures, diabetic ketoacidosis is uncertain.\nDPP-4 inhibitors may reduce glucose levels. Their effect on the risk of death due to heart attacks and strokes, heart failure, upper respiratory tract infections, liver problems, kidney function, hypoglycaemia, pancreatitis and pancreatic cancer is uncertain.\nGLP-1 agonists probably reduce glucose levels and may reduce weight. Their effect on kidney function, hypoglycaemia, gastrointestinal symptoms and pancreatitis is uncertain.\nCompared to glipizide, sitagliptin probably has a lower risk of hypoglycaemia.\nNo conclusions could be made regarding other glucose-lowering medications when compared to another medication or no treatment because of the lack of studies."
}
] |
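The diabetes record above reports each HbA1c difference in both NGSP percentage points and IFCC mmol/mol (e.g. MD -0.29% alongside -3.2 mmol/mol). For a difference, the conversion reduces to multiplying by the slope of the NGSP-IFCC master equation (about 10.929), since the equation's constant offset cancels; a quick check against the reported figures:

```python
def hba1c_diff_percent_to_mmol_mol(diff_percent):
    """Convert a *difference* in HbA1c from NGSP % points to IFCC mmol/mol.
    For differences the constant offset of the master equation cancels out."""
    return diff_percent * 10.929

# MDs from the record above: SGLT2 inhibitors -0.29%, DPP-4 inhibitors -0.62%
print(round(hba1c_diff_percent_to_mmol_mol(-0.29), 1))  # -> -3.2 mmol/mol
print(round(hba1c_diff_percent_to_mmol_mol(-0.62), 1))  # -> -6.8 mmol/mol
```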
query-laysum | 8255 | [
{
"role": "user",
"content": "Abstract: Background\nMetastatic breast cancer is not a curable disease, but women with metastatic disease are living longer. Surgery to remove the primary tumour is associated with an increased survival in other types of metastatic cancer. Breast surgery is not standard treatment for metastatic disease, however several recent retrospective studies have suggested that breast surgery could increase the women's survival. These studies have methodological limitations including selection bias. A systematic review mapping all randomised controlled trials addressing the benefits and potential harms of breast surgery is ideal to answer this question.\nObjectives\nTo assess the effects of breast surgery in women with metastatic breast cancer.\nSearch methods\nWe conducted searches using the MeSH terms 'breast neoplasms', 'mastectomy', and 'analysis, survival' in the following databases: the Cochrane Breast Cancer Specialised Register, CENTRAL, MEDLINE (by PubMed) and Embase (by OvidSP) on 22 February 2016. We also searched ClinicalTrials.gov (22 February 2016) and the WHO International Clinical Trials Registry Platform (24 February 2016). We conducted an additional search in the American Society of Clinical Oncology (ASCO) conference proceedings in July 2016 that included reference checking, citation searching, and contacting study authors to identify additional studies.\nSelection criteria\nThe inclusion criteria were randomised controlled trials of women with metastatic breast cancer at initial diagnosis comparing breast surgery plus systemic therapy versus systemic therapy alone. The primary outcomes were overall survival and quality of life. Secondary outcomes were progression-free survival (local and distant control), breast cancer-specific survival, and toxicity from local therapy.\nData collection and analysis\nTwo review authors independently conducted trial selection, data extraction, and 'Risk of bias' assessment (using Cochrane's 'Risk of bias' tool), which a third review author checked. We used the GRADE tool to assess the quality of the body of evidence. We used the risk ratio (RR) to measure the effect of treatment for dichotomous outcomes and the hazard ratio (HR) for time-to-event outcomes. We calculated 95% confidence intervals (CI) for these measures. We used the random-effects model, as we expected clinical or methodological heterogeneity, or both, among the included studies.\nMain results\nWe included two trials enrolling 624 women in the review. It is uncertain whether breast surgery improves overall survival as the quality of the evidence has been assessed as very low (HR 0.83, 95% CI 0.53 to 1.31; 2 studies; 624 women). The two studies did not report quality of life. Breast surgery may improve local progression-free survival (HR 0.22, 95% CI 0.08 to 0.57; 2 studies; 607 women; low-quality evidence), while it probably worsened distant progression-free survival (HR 1.42, 95% CI 1.08 to 1.86; 1 study; 350 women; moderate-quality evidence). The two included studies did not measure breast cancer-specific survival. Toxicity from local therapy was reported by 30-day mortality and did not appear to differ between the two groups (RR 0.99, 95% CI 0.14 to 6.90; 1 study; 274 women; low-quality evidence).\nAuthors' conclusions\nBased on existing evidence from two randomised clinical trials, it is not possible to make definitive conclusions on the benefits and risks of breast surgery associated with systemic treatment for women diagnosed with metastatic breast cancer. 
Until the ongoing clinical trials are finalised, the decision to perform breast surgery in these women should be individualised and shared between the physician and the patient considering the potential risks, benefits, and costs of each intervention.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors are uncertain whether breast surgery improves overall survival as the quality of the evidence has been assessed as very low. The included studies did not report any information relating to quality of life. Breast surgery may improve the control of local disease but it probably worsened control at distant sites. The two included studies did not measure breast cancer-specific survival. Toxicity from local therapy appeared to be the same in the group undergoing breast surgery combined with medical treatment and in the group receiving only medical treatment."
}
] |
query-laysum | 8256 | [
{
"role": "user",
"content": "Abstract: Background\nThere are a limited number of treatment options for people with corticosteroid-refractory ulcerative colitis. Animal models of inflammatory bowel disease and uncontrolled studies in humans suggest that tacrolimus may be an effective treatment for ulcerative colitis.\nObjectives\nTo evaluate the efficacy and safety of tacrolimus for induction of remission in people with corticosteroid-refractory ulcerative colitis.\nSearch methods\nWe searched the Cochrane Gut group specialised register, CENTRAL, MEDLINE (PubMed), Embase, Clinicaltrials.gov and WHO ICTRP from inception to October 2021 to identify relevant randomised controlled trials (RCT).\nSelection criteria\nTwo review authors independently selected potentially relevant studies to determine eligibility based on the prespecified criteria.\nData collection and analysis\nTwo review authors independently extracted data and analysed them using Review Manager Web. The primary outcomes were induction of remission and clinical improvement, as defined by the studies and expressed as a percentage of the participants randomised (intention-to-treat analysis).\nMain results\nThis review included five RCTs with 347 participants who had active ulcerative colitis or ulcerative proctitis. The duration of intervention varied between two weeks and eight weeks.\nTacrolimus versus placebo\nTacrolimus (oral and rectal) may be superior in achieving clinical remission compared to placebo (oral and rectal) (14/87 participants with tacrolimus versus 1/61 participants with placebo; risk ratio (RR) 3.76, 95% confidence interval (CI) 1.03 to 13.73; 3 studies). These results are of low certainty due to imprecision and risk of bias.\nTacrolimus (oral and rectal) may be superior for clinical improvement compared to placebo (oral and rectal) (45/87 participants with tacrolimus versus 7/61 participants with placebo; RR 4.47, 95% CI 2.15 to 9.29; 3 studies). These results are of low certainty due to imprecision and risk of bias.\nThe evidence is very uncertain about the effects of tacrolimus (oral and rectal) on serious adverse events compared to placebo (oral and rectal) (2/87 participants with tacrolimus versus 0/61 participants with placebo; RR 2.44, 95% CI 0.12 to 48.77; 3 studies). These results are of very low certainty due to high imprecision and risk of bias.\nTacrolimus versus ciclosporin\nOne study compared oral tacrolimus to intravenous ciclosporin, with an intervention lasting two weeks and 113 randomised participants. The evidence is very uncertain about the effect of tacrolimus on achievement of clinical remission compared to ciclosporin (15/33 participants with tacrolimus versus 24/80 participants with ciclosporin; RR 1.52, 95% CI 0.92 to 2.50). The results are of very low certainty due to risk of bias and high imprecision.\nThe evidence is very uncertain about the effect of tacrolimus on clinical improvement compared to intravenous ciclosporin (23/33 participants with tacrolimus versus 62/80 participants with ciclosporin; RR 0.90, 95% CI 0.70 to 1.16). The results are of very low certainty due to risk of bias and imprecision.\nTacrolimus versus beclometasone\nOne study compared tacrolimus suppositories with beclometasone suppositories in an intervention lasting four weeks with 88 randomised participants. There may be little to no difference in achievement of clinical remission (16/44 participants with tacrolimus versus 15/44 participants with beclometasone; RR 1.07, 95% CI 0.60 to 1.88). 
The results are of low certainty due to high imprecision.\nThere may be little to no difference in clinical improvement when comparing tacrolimus suppositories to beclometasone suppositories (22/44 participants with tacrolimus versus 22/44 with beclometasone; RR 1.00, 95% CI 0.66 to 1.52). The results are of low certainty due to high imprecision.\nThere may be little to no difference in serious adverse events when comparing tacrolimus suppositories to beclometasone suppositories (1/44 participants with tacrolimus versus 0/44 with beclometasone; RR 3.00, 95% CI 0.13 to 71.70). These results are of low certainty due to high imprecision.\nThere may be little to no difference in total adverse events when comparing tacrolimus suppositories to beclometasone suppositories (21/44 participants with tacrolimus versus 14/44 participants with beclometasone; RR 1.50, 95% CI 0.88 to 2.55). These results are of low certainty due to high imprecision.\nNo secondary outcomes were reported for people requiring rescue medication or to undergo surgery.\nAuthors' conclusions\nThere is low-certainty evidence that tacrolimus may be superior to placebo for achievement of clinical remission and clinical improvement in corticosteroid-refractory colitis or corticosteroid-refractory proctitis. The evidence is very uncertain about the effect of tacrolimus compared to ciclosporin for achievement of clinical remission or clinical improvement. There may be no difference between tacrolimus and beclometasone for inducing clinical remission or clinical improvement.\nThe cohorts studied to date were small, with missing data sets, offered short follow-up and the clinical endpoints used were not in line with those suggested by regulatory bodies. Therefore, no clinical practice conclusions can be made.\nThis review highlights the need for further research that targets the relevant clinical questions, uses appropriate trial methodology and reports key findings in a systematic manner that facilitates future integration of findings with current evidence to better inform clinicians and patients. Future studies need to be adequately powered and of pertinent duration so as to capture the efficacy and effectiveness of tacrolimus in the medium to long term. Well-structured efficacy studies need to be followed up by long-term phase 4 extensions to provide key outputs and inform in a real-world setting.\n\nGiven the provided abstract, please respond to the following query: \"What did the review study?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this review, we examined data from five studies that compared tacrolimus to placebo (dummy treatment) and two other medicines called beclometasone and ciclosporin.\nWe wanted to see if tacrolimus is better in stopping the symptoms of ulcerative colitis (achieving remission) or improving them, and if it is safe to use."
}
] |
query-laysum | 8257 | [
{
"role": "user",
"content": "Abstract: Background\nGastro-oesophageal reflux disease (GORD) is a common condition with 3% to 33% of people from different parts of the world suffering from GORD. There is considerable uncertainty about whether people with GORD should receive an operation or medical treatment for controlling the condition.\nObjectives\nTo assess the benefits and harms of laparoscopic fundoplication versus medical treatment for people with gastro-oesophageal reflux disease.\nSearch methods\nWe searched the Cochrane Upper Gastrointestinal and Pancreatic Diseases Group (UGPD) Trials Register (June 2015), Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 6, 2015), Ovid MEDLINE (1966 to June 2015), and EMBASE (1980 to June 2015) to identify randomised controlled trials. We also searched the references of included trials to identify further trials.\nSelection criteria\nWe considered only randomised controlled trials (RCT) comparing laparoscopic fundoplication with medical treatment in people with GORD irrespective of language, blinding, or publication status for inclusion in the review.\nData collection and analysis\nTwo review authors independently identified trials and independently extracted data. We calculated the risk ratio (RR) or standardised mean difference (SMD) with 95% confidence intervals (CI) using both fixed-effect and random-effects models with RevMan 5 based on available case analysis.\nMain results\nFour studies met the inclusion criteria for the review, and provided information on one or more outcomes for the review. A total of 1160 participants in the four RCTs were either randomly assigned to laparoscopic fundoplication (589 participants) or medical treatment with proton pump inhibitors (571 participants). All the trials included participants who had had reflux symptoms for at least six months and had received long-term acid suppressive therapy. All the trials included only participants who could undergo surgery if randomised to the surgery arm. All of the trials were at high risk of bias. The overall quality of evidence was low or very low. 
None of the trials reported long-term health-related quality of life (HRQoL) or GORD-specific quality of life (QoL).\nThe difference between laparoscopic fundoplication and medical treatment was imprecise for overall short-term HRQOL (SMD 0.14, 95% CI -0.02 to 0.30; participants = 605; studies = 3), medium-term HRQOL (SMD 0.03, 95% CI -0.19 to 0.24; participants = 323; studies = 2), medium-term GORD-specific QoL (SMD 0.28, 95% CI -0.27 to 0.84; participants = 994; studies = 3), proportion of people with adverse events (surgery: 7/43 (adjusted proportion = 14.0%); medical: 0/40 (0.0%); RR 13.98, 95% CI 0.82 to 237.07; participants = 83; studies = 1), long-term dysphagia (surgery: 27/118 (adjusted proportion = 22.9%); medical: 28/110 (25.5%); RR 0.90, 95% CI 0.57 to 1.42; participants = 228; studies = 1), and long-term reflux symptoms (surgery: 29/118 (adjusted proportion = 24.6%); medical: 41/115 (35.7%); RR 0.69, 95% CI 0.46 to 1.03; participants = 233; studies = 1).\nThe short-term GORD-specific QoL was better in the laparoscopic fundoplication group than in the medical treatment group (SMD 0.58, 95% CI 0.46 to 0.70; participants = 1160; studies = 4).\nThe proportion of people with serious adverse events (surgery: 60/331 (adjusted proportion = 18.1%); medical: 38/306 (12.4%); RR 1.46, 95% CI 1.01 to 2.11; participants = 637; studies = 2), short-term dysphagia (surgery: 44/331 (adjusted proportion = 12.9%); medical: 11/306 (3.6%); RR 3.58, 95% CI 1.91 to 6.71; participants = 637; studies = 2), and medium-term dysphagia (surgery: 29/288 (adjusted proportion = 10.2%); medical: 5/266 (1.9%); RR 5.36, 95% CI 2.1 to 13.64; participants = 554; studies = 1) was higher in the laparoscopic fundoplication group than in the medical treatment group.\nThe proportion of people with heartburn at short term (surgery: 29/288 (adjusted proportion = 10.0%); medical: 59/266 (22.2%); RR 0.45, 95% CI 0.30 to 0.69; participants = 554; studies = 1), medium term (surgery: 12/288 (adjusted proportion = 4.2%); medical: 59/266 (22.2%); RR 0.19, 95% CI 0.10 to 0.34; participants = 554; studies = 1), long term (surgery: 46/111 (adjusted proportion = 41.2%); medical: 78/106 (73.6%); RR 0.56, 95% CI 0.44 to 0.72); participants = 217; studies = 1) and those with reflux symptoms at short-term (surgery: 6/288 (adjusted proportion = 2.0%); medical: 53/266 (19.9%); RR 0.10, 95% CI 0.05 to 0.24; participants = 554; studies = 1) and medium term (surgery: 6/288 (adjusted proportion = 2.1%); medical: 37/266 (13.9%); RR 0.15, 95% CI 0.06 to 0.35; participants = 554; studies = 1) was less in the laparoscopic fundoplication group than in the medical treatment group.\nAuthors' conclusions\nThere is considerable uncertainty in the balance of benefits versus harms of laparoscopic fundoplication compared to long-term medical treatment with proton pump inhibitors. Further RCTs of laparoscopic fundoplication versus medical management in patients with GORD should be conducted with outcome-assessor blinding and should include all participants in the analysis. Such trials should include long-term patient-orientated outcomes such as treatment-related adverse events (including severity), quality of life, and also report on the social and economic impact of the adverse events and symptoms.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Four studies met the inclusion criteria for the review, and provided information for the review. A total of 1160 participants received either laparoscopic fundoplication (589 participants) or medical treatment (571 participants). The decision on whether a participant received surgery or medical treatment was made using methods similar to tossing a coin, ensuring that the participants in the two groups were similar. All the trials included people who had had reflux symptoms for at least six months, had received long-term acid suppressive therapy, and could undergo laparoscopic fundoplication if necessary."
}
] |
query-laysum | 8258 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease (SCD) is a group of genetic haemoglobin disorders, that occurs in about 2.2 per 1000 births worldwide. Increasingly, some people with SCD develop secondary iron overload due to occasional red blood cell transfusions or are on long-term transfusion programmes for e.g. secondary stroke prevention. Iron chelation therapy can prevent long-term complications.\nDeferoxamine and deferiprone have been found to be efficacious. However, questions exist about the effectiveness and safety of the newer oral chelator deferasirox.\nObjectives\nTo assess the effectiveness and safety of oral deferasirox in people with SCD and secondary iron overload.\nSearch methods\nWe searched the Cystic Fibrosis & Genetic Disorders Group's Haemoglobinopathies Trials Register: date of most recent search:13 March 2014.\nWe searched MEDLINE, Embase, Biosis Previews, Web of Science, Derwent Drug File, XTOXLINE, EBMR and The Cochrane Library, respectively; date of most recent searches: 02 August 2013.\nWe searched four trial registries: www.controlled-trials.com; www.clinicaltrials.gov; www.who.int./ictrp/en/; www.drks.de; date of most recent searches: 03 June 2013.\nSelection criteria\nRandomised controlled trials comparing deferasirox with no therapy or placebo or with another iron chelating treatment schedule.\nData collection and analysis\nTwo authors independently assessed risk of bias and extracted data. We contacted the corresponding study authors for additional information.\nMain results\nTwo studies (with 203 and 212 people) comparing the efficacy and safety of deferasirox and deferoxamine after 12 months and 24 weeks, respectively, were included. The overall quality, according to GRADE, for the main outcomes was moderate to low. Only limited data were available on mortality and end-organ damage, although one study did assess mortality, relative risk 1.26 (95% confidence interval 0.05 to 30.41), the 24-week follow up was too short to allow us to draw firm conclusions. One study reported a relative risk of 1.26 for the incidence of type 2 diabetes mellitus (95% confidence interval 0.05 to 30.41). Serum ferritin reduction was significantly greater with deferoxamine, mean difference of change of 440.69 µg/l (95% confidence interval 11.73 to 869.64). Liver iron concentration (reported in one study) measured by superconduction quantum interference device showed no significant difference for the overall group of patients adjusted for transfusion category, mean difference -0.20 mg Fe/g dry weight (95% confidence interval -3.15 to 2.75).\nThe occurrence of serious adverse events did not differ between drugs. Nausea, diarrhoea and rash occurred significantly more often in people treated with deferasirox, while adverse events of any kind were more often reported for patients treated with deferoxamine (one study). The mean increase of creatinine was also significantly higher with deferasirox, mean difference 3.24 (95% confidence interval 0.45 to 6.03). Long-term adverse events could not be measured in the included studies (follow up 52 weeks and 24 weeks). Patient satisfaction and the likelihood of continuing treatment, were significantly better with deferasirox.\nAuthors' conclusions\nDeferasirox appears to be of similar efficacy to deferoxamine depending on depending on the appropriate ratio of doses of deferoxamine and deferasirox being compared. However, only limited evidence is available assessing the efficacy regarding patient-important outcomes. 
The short-term safety of deferasirox seems to be acceptable, however, follow up in the available studies was too short to assess long-term side effects. Long-term safety and efficacy data are available from a non-controlled extension phase not included in our review; however, no valid comparative conclusions can be drawn and future studies should assess comparatively long-term outcomes both for safety and efficacy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of deferasirox on secondary iron load in people with sickle cell disease."
}
] |
query-laysum | 8259 | [
{
"role": "user",
"content": "Abstract: Background\nMénière's disease is a condition that causes recurrent episodes of vertigo, associated with hearing loss and tinnitus. Lifestyle or dietary modifications (including reducing the amount of salt or caffeine in the diet) are sometimes suggested to be of benefit for this condition. The underlying cause of Ménière's disease is unknown, as is the way in which these interventions may work. The efficacy of these different interventions at preventing vertigo attacks, and their associated symptoms, is currently unclear.\nObjectives\nTo evaluate the benefits and harms of lifestyle and dietary interventions versus placebo or no treatment in people with Ménière's disease.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 14 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with Ménière's disease comparing any lifestyle or dietary intervention with either placebo or no treatment. We excluded studies with follow-up of less than three months, or with a cross-over design (unless data from the first phase of the study could be identified).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) change in hearing, 6) change in tinnitus and 7) other adverse effects. We considered outcomes reported at three time points: 3 to < 6 months, 6 to ≤ 12 months and > 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included two RCTs, one related to diet, and the other related to fluid intake and sleep. In a Swedish study, 51 participants were randomised to receive 'specially processed cereals' or standard cereals. The specially processed cereals are thought to stimulate the production of anti-secretory factor - a protein that reduces inflammation and fluid secretion. Participants received the cereals for three months. The only outcome reported by this study was disease-specific health-related quality of life.\nThe second study was conducted in Japan. The participants (223) were randomised to receive abundant water intake (35 mL/kg/day), or to sleep in darkness (in an unlit room for six to seven hours per night), or to receive no intervention. The duration of follow-up was two years. The outcomes assessed were 'improvement in vertigo' and hearing.\nAs these studies considered different interventions we were unable to carry out any meta-analysis, and for almost all outcomes the certainty of the evidence was very low. We are unable to draw meaningful conclusions from the numerical results.\nAuthors' conclusions\nThe evidence for lifestyle or dietary interventions for Ménière's disease is very uncertain. We did not identify any placebo-controlled RCTs for interventions that are frequently recommended for those with Ménière's disease, such as salt restriction or caffeine restriction. 
We identified only two RCTs that compared a lifestyle or dietary intervention to placebo or no treatment, and the evidence that is currently available from these studies is of low or very low certainty. This means that we have very low confidence that the effects reported are accurate estimates of the true effect of these interventions. Consensus on the appropriate outcomes to measure in studies of Ménière's disease is needed (i.e. a core outcome set) in order to guide future studies in this area and enable meta-analyses of the results. This must include appropriate consideration of the potential harms of treatment, as well as the benefits.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This evidence is up-to-date to September 2022."
}
] |
query-laysum
|
8260
|
[
{
"role": "user",
"content": "Abstract: Background\nAge-related macular degeneration (AMD) is the most common cause of uncorrectable severe vision loss in people aged 55 years and older in the developed world. Choroidal neovascularization (CNV) secondary to AMD accounts for most cases of AMD-related severe vision loss. Intravitreous injection of anti-vascular endothelial growth factor (anti-VEGF) agents aims to block the growth of abnormal blood vessels in the eye to prevent vision loss and, in some instances, to improve vision.\nObjectives\n• To investigate ocular and systemic effects of, and quality of life associated with, intravitreous injection of three anti-VEGF agents (pegaptanib, ranibizumab, and bevacizumab) versus no anti-VEGF treatment for patients with neovascular AMD\n• To compare the relative effects of one of these anti-VEGF agents versus another when administered in comparable dosages and regimens\nSearch methods\nTo identify eligible studies for this review, we searched the Cochrane Central Register of Controlled Trials (CENTRAL), which contains the Cochrane Eyes and Vision Trials Register (searched January 31, 2018); MEDLINE Ovid (1946 to January 31, 2018); Embase Ovid (1947 to January 31, 2018); the Latin American and Caribbean Health Sciences Literature Database (LILACS) (1982 to January 31, 2018); the International Standard Randomized Controlled Trials Number (ISRCTN) Registry (www.isrctn.com/editAdvancedSearch - searched January 31, 2018); ClinicalTrials.gov (www.clinicaltrials.gov - searched November 28, 2018); and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en - searched January 31, 2018). We did not impose any date or language restrictions in electronic searches for trials.\nSelection criteria\nWe included randomized controlled trials (RCTs) that evaluated pegaptanib, ranibizumab, or bevacizumab versus each other or versus a control treatment (e.g. sham treatment, photodynamic therapy), in which participants were followed for at least one year.\nData collection and analysis\nTwo review authors independently screened records, extracted data, and assessed risks of bias. We contacted trial authors for additional data. We compared outcomes using risk ratios (RRs) or mean differences (MDs). We used the standard methodological procedures expected by Cochrane.\nMain results\nWe included 16 RCTs that had enrolled a total of 6347 participants with neovascular AMD (the number of participants per trial ranged from 23 to 1208) and identified one potentially relevant ongoing trial. Six trials compared anti-VEGF treatment (pegaptanib, ranibizumab, or bevacizumab) versus control, and 10 trials compared bevacizumab versus ranibizumab. Pharmaceutical companies conducted or sponsored four trials but funded none of the studies that evaluated bevacizumab. Researchers conducted these trials at various centers across five continents (North and South America, Europe, Asia, and Australia). The overall certainty of the evidence was moderate to high, and most trials had an overall low risk of bias. 
All but one trial had been registered prospectively.\nWhen compared with those who received control treatment, more participants who received intravitreous injection of any of the three anti-VEGF agents had gained 15 letters or more of visual acuity (risk ratio [RR] 4.19, 95% confidence interval [CI] 2.32 to 7.55; moderate-certainty evidence), had lost fewer than 15 letters of visual acuity (RR 1.40, 95% CI 1.27 to 1.55; high-certainty evidence), and showed mean improvement in visual acuity (mean difference 6.7 letters, 95% CI 4.4 to 9.0 in one pegaptanib trial; mean difference 17.8 letters, 95% CI 16.0 to 19.7 in three ranibizumab trials; moderate-certainty evidence) after one year of follow-up. Participants treated with anti-VEGF agents showed improvement in morphologic outcomes (e.g. size of CNV, central retinal thickness) compared with participants not treated with anti-VEGF agents (moderate-certainty evidence). No trial directly compared pegaptanib versus another anti-VEGF agent and followed participants for one year; however, when compared with control treatments, ranibizumab and bevacizumab each yielded larger improvements in visual acuity outcomes than pegaptanib.\nVisual acuity outcomes after bevacizumab and ranibizumab were similar when the same RCTs compared the same regimens with respect to gain of 15 or more letters of visual acuity (RR 0.95, 95% CI 0.81 to 1.12; high-certainty evidence) and loss of fewer than 15 letters of visual acuity (RR 1.00, 95% CI 0.98 to 1.02; high-certainty evidence); results showed similar mean improvement in visual acuity (mean difference [MD] -0.5 letters, 95% CI -1.5 to 0.5; high-certainty evidence) after one year of follow-up, despite the substantially lower cost of bevacizumab compared with ranibizumab. Reduction in central retinal thickness was less among bevacizumab-treated participants than among ranibizumab-treated participants after one year (MD -11.6 μm, 95% CI -21.6 to -1.7; high-certainty evidence); however, this difference is within the range of measurement error, and we did not interpret it to be clinically meaningful.\nOcular inflammation and increased intraocular pressure (IOP) after intravitreal injection were the most frequently reported serious ocular adverse events. Researchers reported endophthalmitis in less than 1% of anti-VEGF-treated participants and in no cases among control groups. The occurrence of serious systemic adverse events was comparable across anti-VEGF-treated groups and control groups; however, the numbers of events and trial participants may have been insufficient to show a meaningful difference between groups (evidence of low- to moderate-certainty). Investigators rarely measured and reported data on visual function, quality of life, or economic outcomes.\nAuthors' conclusions\nResults of this review show the effectiveness of anti-VEGF agents (pegaptanib, ranibizumab, and bevacizumab) in terms of maintaining visual acuity; studies show that ranibizumab and bevacizumab improved visual acuity in some eyes that received these agents and were equally effective. Available information on the adverse effects of each medication does not suggest a higher incidence of potentially vision-threatening complications with intravitreous injection of anti-VEGF agents compared with control interventions; however, clinical trial sample sizes were not sufficient to estimate differences in rare safety outcomes. 
Future Cochrane Reviews should incorporate research evaluating variable dosing regimens of anti-VEGF agents, effects of long-term use, use of combination therapies (e.g. anti-VEGF treatment plus photodynamic therapy), and other methods of delivering these agents.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Wet AMD is a common cause of severe vision loss among people 55 years of age and older. The macula, located in the central retina in the back of the eye, is important for vision. Wet AMD occurs when abnormal growth of blood vessels in the back of the eye damages the macula. Wet AMD causes blurriness, darkness, or distortion in the center of the field of vision, thus reducing the individual's ability to read, drive, and see faces.\nInjection into the eye of medicines like pegaptanib, ranibizumab, and bevacizumab can help block abnormal growth of blood vessels in the back of the eye. These drugs are known as anti-VEGF agents. We conducted this review to compare benefits and risks of treatment with anti-VEGF agents versus treatment without anti-VEGF agents and to compare different types of anti-VEGF agents."
}
] |
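Editor's note: the record above reports effects as risk ratios (RRs) with 95% confidence intervals (CIs). As a rough illustration of that arithmetic, here is a minimal Python sketch that computes an RR and its CI from a 2×2 table; the function name and all counts below are hypothetical and are not taken from the trials in the review.

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a 95% CI, built on the log scale because
    log(RR) is approximately normally distributed."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR) for two independent binomial arms.
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 90/200 treated vs 60/200 control patients improved.
print(risk_ratio(90, 200, 60, 200))  # approx (1.50, 1.16, 1.95)
```

An RR above 1 favours the treated group for a beneficial outcome, which is how estimates such as RR 4.19 in the abstract should be read.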
query-laysum
|
8261
|
[
{
"role": "user",
"content": "Abstract: Background\nHypertension is a major public health challenge affecting more than one billion people worldwide; it disproportionately affects populations in low- and middle-income countries (LMICs), where health systems are generally weak. The increasing prevalence of hypertension is associated with population growth, ageing, genetic factors, and behavioural risk factors, such as excessive salt and fat consumption, physical inactivity, being overweight and obese, harmful alcohol consumption, and poor management of stress. Over the long term, hypertension leads to risk for cardiovascular events, such as heart disease, stroke, kidney failure, disability, and premature mortality.\nCardiovascular events can be preventable when high-risk populations are targeted, for example, through population-wide screening strategies. When available resources are limited, taking a total risk approach whereby several risk factors of hypertension are taken into consideration (e.g. age, gender, lifestyle factors, diabetes, blood cholesterol) can enable more accurate targeting of high-risk groups. Targeting of high-risk groups can help reduce costs in that resources are not spent on the entire population.\nEarly detection in the form of screening for hypertension (and associated risk factors) can help identify high-risk groups, which can result in timely treatment and management of risk factors. Ultimately, early detection can help reduce morbidity and mortality linked to it and can help contain health-related costs, for example, those associated with hospitalisation due to severe illness and poorly managed risk factors and comorbidities.\nObjectives\nTo assess the effectiveness of different screening strategies for hypertension (mass, targeted, or opportunistic) to reduce morbidity and mortality associated with hypertension.\nSearch methods\nAn Information Specialist searched the Cochrane Register of Studies (CRS-Web), the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Latin American Caribbean Health Sciences Literature (LILACS) Bireme, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) without language, publication year, or publication status restrictions. The searches were conducted from inception until 9 April 2020.\nSelection criteria\nRandomised controlled trials (RCTs) and non-RCTs (NRCTs), that is, controlled before and after (CBA), interrupted time series (ITS), and prospective analytic cohort studies of healthy adolescents, adults, and elderly people participating in mass, targeted, or opportunistic screening of hypertension.\nData collection and analysis\nScreening of all retrieved studies was done in Covidence. A team of reviewers, in pairs, independently assessed titles and abstracts of identified studies and acquired full texts for studies that were potentially eligible. Studies were deemed to be eligible for full-text screening if two review authors agreed, or if consensus was reached through discussion with a third review author. It was planned that at least two review authors would independently extract data from included studies, assess risk of bias using pre-specified Cochrane criteria, and conduct a meta-analysis of sufficiently similar studies or present a narrative synthesis of the results.\nMain results\nWe screened 9335 titles and abstracts. We identified 54 potentially eligible studies for full-text screening. 
However, no studies met the eligibility criteria.\nAuthors' conclusions\nThere is an implicit assumption that early detection of hypertension through screening can reduce the burden of morbidity and mortality, but this assumption has not been tested in rigorous research studies. High-quality evidence from RCTs or programmatic evidence from NRCTs on the effectiveness and costs or harms of different screening strategies for hypertension (mass, targeted, or opportunistic) to reduce hypertension-related morbidity and mortality is lacking.\n\nGiven the provided abstract, please respond to the following query: \"Review methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched various electronic databases on this topic until 9 April 2020. We searched for studies written in any language, whether published or not. We planned to include studies that compared one type of screening strategy for hypertension versus no screening strategy, that is, mass screening versus no screening, targeted screening versus no screening, and opportunistic screening versus no screening. We were interested in studies in which participants were healthy adolescents, adults, and elderly people, and in which researchers measured clinical outcomes, health system outcomes, and adverse events."
}
] |
query-laysum
|
8262
|
[
{
"role": "user",
"content": "Abstract: Background\nThe use of a mesh in primary ventral or incisional hernia repair lowers the recurrence rate and is the accepted standard of care for larger defects. In laparoscopic primary ventral or incisional hernia repair the insertion of a mesh is indispensable. Different mesh fixation techniques have been used and refined over the years. The type of fixation technique is claimed to have a major impact on recurrence rates, chronic pain, health-related quality of life (HRQOL) and complication rates.\nObjectives\nTo determine the impact of different mesh fixation techniques for primary and incisional ventral hernia repair on hernia recurrence, chronic pain, HRQOL and complications.\nSearch methods\nOn 2 October 2020 we searched CENTRAL, MEDLINE (Ovid MEDLINE(R)) Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid MEDLINE(R)), Ovid Embase, and two trials registries. We also performed handsearches, and contacted experts from the European Hernia Society (EHS).\nSelection criteria\nWe included randomised controlled trials (RCTs) including adults with primary ventral or incisional hernia that compared different types of mesh fixation techniques (absorbable/nonabsorbable sutures, absorbable/nonabsorbable tacks, fibrin glue, and combinations of these techniques).\nData collection and analysis\nWe extracted data in standardised piloted tables, or if necessary, directly into Review Manager 5. We assessed risks of bias with the Cochrane 'Risk of bias' tool. Two review authors independently selected the publications, and extracted data on results. We calculated risk ratios (RRs) for binary outcomes and mean differences (MDs) for continuous outcomes. For pooling we used an inverse-variance random-effects meta-analysis or the Peto method in the case of rare events. We prepared GRADE 'Summary of findings' tables.\nFor laparoscopic repair we considered absorbable tacks compared to nonabsorbable tacks, and nonabsorbable tacks compared to nonabsorbable sutures as key comparisons.\nMain results\nWe included 10 trials with a total of 787 participants. The number of randomised participants ranged from 40 to 199 per comparison. Eight studies included participants with both primary and incisional ventral hernia. One study included only participants with umbilical hernia, and another only participants with incisional hernia. Hernia size varied between studies.\nWe judged the risk of bias as moderate to high.\nAbsorbable tacks compared to nonabsorbable tacks\nRecurrence rates in the groups were similar (RR 0.74, 95% confidence interval (CI) 0.17 to 3.22; 2 studies, 101 participants). It is uncertain whether there is a difference between absorbable tacks and nonabsorbable tacks in recurrence because the certainty of evidence was very low. Evidence suggests that the difference between groups in early postoperative, late follow-up, chronic pain and HRQOL is negligible.\nNonabsorbable tacks compared to nonabsorbablesutures\nAt six months there was one recurrence in each group (RR 1.00, 95% CI 0.07 to 14.79; 1 study, 36 participants). It is uncertain whether there is a difference between nonabsorbable tacks and nonabsorbable sutures in recurrence because the certainty of evidence was very low. Evidence suggests that the difference between groups in early postoperative, late follow-up and chronic pain is negligible. 
We found no study that assessed HRQOL.\nAbsorbable tacks compared to absorbable sutures\nNo recurrence was observed at one year (very low certainty of evidence). Early postoperative pain was higher in the tacks group (VAS 0 - 10: MD −2.70, 95% CI −6.67 to 1.27; 1 study, 48 participants). It is uncertain whether there is a difference between absorbable tacks and absorbable sutures in early postoperative pain because the certainty of evidence was very low. The MD for late follow-up pain was −0.30 (95% CI −0.74 to 0.14; 1 study, 48 participants). We found no study that assessed HRQOL.\nCombination of different fixation types (tacks and sutures) or materials (absorbable and nonabsorbable)\nThere were mostly negligible or only small differences between combinations (e.g. tacks plus sutures) compared to a single technique (e.g. sutures only), as well as combinations compared to other combinations (e.g. absorbable sutures combined with nonabsorbable sutures compared to absorbable tacks combined with nonabsorbable tacks) in all outcomes. It is uncertain whether there is an advantage for combining different fixation types or materials for recurrence, chronic pain, HRQOL and complications, because the evidence certainty was very low or low, or we found no study on important outcomes.\nNonabsorbable tacks compared to fibrin sealant\nThe two studies showed different directions of effects: one showed higher rates for nonabsorbable tacks, and the other showed higher rates for fibrin sealant. Low-certainty evidence suggests that the difference between groups in early postoperative pain, pain at late follow-up, chronic pain and HRQOL is negligible.\nAbsorbable tacks compared to fibrin sealant\nOne recurrence in the tacks group and none in the fibrin sealant group were noted after one year (low certainty of evidence). Early postoperative pain might be slightly lower using tacks (VAS 0 - 100; MD −12.40, 95% CI −27.60 to 2.80; 1 study, 50 participants; low-certainty evidence). The pattern of pain and HRQOL course over time (up to 1 year) was similar in the groups (low certainty of evidence).\nAuthors' conclusions\nCurrently none of the techniques can be considered superior to any other, because the certainty of evidence was low or very low for all outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 10 studies involving 787 persons, with ages ranging from 31 to 62 years. Eight studies included people with primary as well as incisional ventral hernia, one study included people with umbilical (navel area) hernia only, and another study with incisional hernia only. Hernia size varied widely between studies. The number of included participants ranged from 40 to 199. Participant follow-up was mostly short (less than 12 months)."
}
] |
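Editor's note: the abstract above pools studies with an inverse-variance random-effects meta-analysis. The sketch below shows that computation under the DerSimonian-Laird estimator of between-study variance; the review does not state which estimator its software applied, so treat that choice, the function name and the input numbers as assumptions for illustration only.

```python
import math

def random_effects_pool(effects, variances, z=1.96):
    """Inverse-variance random-effects pooling (DerSimonian-Laird).

    `effects` are study estimates on an additive scale (log RRs or
    mean differences); `variances` are their squared standard errors.
    """
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    # Between-study variance estimate, floored at zero.
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - z * se, pooled + z * se

# Hypothetical log risk ratios and variances from three small trials.
print(random_effects_pool([-0.10, 0.05, -0.20], [0.04, 0.09, 0.06]))
```

Setting tau2 to zero recovers the fixed-effect pooled estimate; a larger tau2 spreads weight more evenly across studies.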
query-laysum
|
8263
|
[
{
"role": "user",
"content": "Abstract: Background\nRoutine use of whole-cell pertussis (wP) vaccines was suspended in some countries in the 1970s and 1980s because of concerns about adverse effects. Following this action, there was a resurgence of whooping cough. Acellular pertussis (aP) vaccines, containing purified or recombinant Bordetella pertussis (B. pertussis) antigens, were developed in the hope that they would be as effective, but less reactogenic than the whole-cell vaccines. This is an update of a Cochrane review first published in 1999, and previously updated in 2012. In this update, we included no new studies.\nObjectives\nTo assess the efficacy and safety of acellular pertussis vaccines in children and to compare them with the whole-cell vaccines.\nSearch methods\nWe searched CENTRAL (2013, Issue 12), MEDLINE (1950 to January week 2, 2014), EMBASE (1974 to January 2014), Biosis Previews (2009 to January 2014) and CINAHL (2009 to January 2014).\nSelection criteria\nWe selected double-blind randomised efficacy and safety trials of aP vaccines in children up to six years old, with active follow-up of participants and laboratory verification of pertussis cases.\nData collection and analysis\nTwo review authors independently extracted data and assessed the risk of bias in the studies. Differences in trial design precluded a meta-analysis of the efficacy data. We pooled the safety data from individual trials using a random-effects meta-analysis model.\nMain results\nWe included six efficacy trials with a total of 46,283 participants and 52 safety trials with a total of 136,541 participants. Most of the safety trials did not report the methods for random sequence generation, allocation concealment and blinding, which made it difficult to assess the risk of bias in the studies. The efficacy of multi-component (≥ three) vaccines varied from 84% to 85% in preventing typical whooping cough (characterised by 21 or more consecutive days of paroxysmal cough with confirmation of B. pertussis infection by culture, appropriate serology or contact with a household member who has culture-confirmed pertussis), and from 71% to 78% in preventing mild pertussis disease (characterised by seven or more consecutive days of cough with confirmation of B. pertussis infection by culture or appropriate serology). In contrast, the efficacy of one- and two-component vaccines varied from 59% to 78% against typical whooping cough and from 41% to 58% against mild pertussis disease. Multi-component acellular vaccines are more effective than low-efficacy whole-cell vaccines, but may be less effective than the highest-efficacy whole-cell vaccines. Most systemic and local adverse events were significantly less common with aP vaccines than with wP vaccines for the primary series as well as for the booster dose.\nAuthors' conclusions\nMulti-component (≥ three) aP vaccines are effective in preventing whooping cough in children. Multi-component aP vaccines have higher efficacy than low-efficacy wP vaccines, but they may be less efficacious than the highest-efficacy wP vaccines. Acellular vaccines have fewer adverse effects than whole-cell vaccines for the primary series as well as for booster doses.\n\nGiven the provided abstract, please respond to the following query: \"Implications for practice\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The implications of the findings of this review for clinical practice may be different in high-income and low-income countries. In high-income countries, death from whooping cough is rare and parental acceptance is a major determinant of immunisation uptake. In these circumstances, the improved side effect profile of acellular vaccines argues in favour of their use, even though they might sacrifice some degree of effectiveness compared to the best whole-cell vaccines. In low-income countries, where the risk of pertussis is higher and cases are more likely to be fatal, greater weight needs to be given to vaccine efficacy. If an acellular vaccine has been shown to be less effective than a high-efficacy whole-cell vaccine it is intended to replace, the safety advantage of the acellular vaccine may be offset by increased mortality and morbidity due to a significantly higher rate of pertussis. However, most of the whole-cell vaccines used in low-income countries have not been adequately studied for efficacy and, therefore, it is not known where on the wide spectrum of whole-cell vaccine efficacy an individual product lies."
}
] |
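Editor's note: efficacy figures such as the 84% to 85% quoted above are conventionally computed as one minus the relative risk of disease in vaccinated versus unvaccinated children. A small illustration, with made-up attack rates chosen to land near 85%:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_unvax, n_unvax):
    """Vaccine efficacy = 1 - relative risk of disease when vaccinated."""
    relative_risk = (cases_vax / n_vax) / (cases_unvax / n_unvax)
    return 1 - relative_risk

# Hypothetical attack rates: 15/1000 vaccinated vs 100/1000 unvaccinated.
print(f"{vaccine_efficacy(15, 1000, 100, 1000):.0%}")  # -> 85%
```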
query-laysum
|
8264
|
[
{
"role": "user",
"content": "Abstract: Background\nRetained placenta is a common complication of pregnancy affecting 1% to 6% of all births. If a retained placenta is left untreated, spontaneous delivery of the placenta may occur, but there is a high risk of bleeding and infection. Manual removal of the placenta (MROP) in an operating theatre under anaesthetic is the usual treatment, but is invasive and may have complications. An effective non-surgical alternative for retained placenta would potentially reduce the physical and psychological trauma of the procedure, and costs. It could also be lifesaving by providing a therapy for settings without easy access to modern operating theatres or anaesthetics. Injection of uterotonics into the uterus via the umbilical vein and placenta is an attractive low-cost option for this. This is an update of a review last published in 2011.\nObjectives\nTo assess the use of umbilical vein injection (UVI) of saline solution with or without uterotonics compared to either expectant management or with an alternative solution or other uterotonic agent for retained placenta.\nSearch methods\nFor this update, we searched Cochrane Pregnancy and Childbirth's Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (14 June 2020), and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (RCTs) comparing UVI of saline or other fluids (with or without uterotonics), either with expectant management or with an alternative solution or other uterotonic agent, in the management of retained placenta. We considered quasi-randomised, cluster-randomised, and trials reported only in abstract form.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data, and checked them for accuracy. We assessed the certainty of the evidence using the GRADE approach. We calculated pooled risk ratios (RRs) and mean differences (MDs) with 95% confidence intervals (CIs), and presented results using 'Summary of findings' tables.\nMain results\nWe included 24 trials (n = 2348). All included trials were RCTs, one was quasi-randomised, and none were cluster-randomised. Risk of bias was variable across the included studies. We assessed certainty of evidence for four comparisons: saline versus expectant management, oxytocin versus expectant management, oxytocin versus saline, and oxytocin versus plasma expander. Evidence was moderate to very-low certainty and downgraded for risk of bias of included studies, imprecision, and inconsistency of effect estimates.\nSaline solution versus expectant management\nThere is probably little or no difference in the incidence of MROP between saline and expectant management (RR 0.93, 95% CI 0.80 to 1.10; 5 studies, n = 445; moderate-certainty evidence). Evidence for the following remaining primary outcomes was very-low certainty: severe postpartum haemorrhage 1000 mL or greater, blood transfusion, and infection. There were no events reported for maternal mortality or postpartum anaemia (24 to 48 hours postnatal). No studies reported addition of therapeutic uterotonics.\nOxytocin solution versus expectant management\nUVI of oxytocin solution might slightly reduce in the need for manual removal compared with expectant management (mean RR 0.73, 95% CI 0.56 to 0.95; 7 studies, n = 546; low-certainty evidence). 
There may be little to no difference in the incidence of blood transfusion between groups (RR 0.81, 95% CI 0.47 to 1.38; 4 studies, n = 339; low-certainty evidence). There were no maternal deaths reported (2 studies, n = 93). Evidence for severe postpartum haemorrhage of 1000 mL or greater, additional uterotonics, and infection was of very low certainty. There were no events for postpartum anaemia (24 to 48 hours postnatal).\nOxytocin solution versus saline solution\nUVI of oxytocin solution may reduce the use of MROP compared with saline solution, but there was high heterogeneity (RR 0.82, 95% CI 0.69 to 0.97; 14 studies, n = 1370; I² = 54%; low-certainty evidence). There were no differences between subgroups according to risk of bias or oxytocin dose for the outcome MROP. There may be little to no difference between groups in severe postpartum haemorrhage of 1000 mL or greater, blood transfusion, use of additional therapeutic uterotonics, and antibiotic use. There were no events for postpartum anaemia (24 to 48 hours postnatal) (very low-certainty evidence) and there was only one event for maternal mortality (low-certainty evidence).\nOxytocin solution versus plasma expander\nOne small study reported UVI of oxytocin compared with plasma expander (n = 109). The evidence was very unclear about any effect on MROP or blood transfusion between the two groups (very low-certainty evidence). No other primary outcomes were reported.\nFor other comparisons there were little to no differences for most outcomes examined. However, there was some evidence to suggest that there may be a reduction in MROP with prostaglandins in comparison to oxytocin (4 studies, n = 173) and ergometrine (1 study, n = 52), although further large-scale studies are needed to confirm these findings.\nAuthors' conclusions\nUVI of oxytocin solution is an inexpensive and simple intervention that can be performed when placental delivery is delayed. This review identified low-certainty evidence that oxytocin solution may slightly reduce the need for manual removal. However, there were little or no differences for other outcomes. Small studies examining injection of prostaglandin (such as dissolved misoprostol) into the umbilical vein show promise and deserve to be studied further.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The injection of oxytocin (a hormone released from the brain into the blood during labour) solution into the umbilical cord after the cord is cut is a cheap and simple intervention that could be performed to deliver the placenta. It is especially attractive for low-income countries where there is not easy access to doctors or an operating theatre."
}
] |
query-laysum
|
8265
|
[
{
"role": "user",
"content": "Abstract: Background\nAggressive, agitated or violent behaviour due to psychosis constitutes an emergency psychiatric treatment where fast-acting interventions are required. Risperidone is a widely accessible antipsychotic that can be used to manage psychosis-induced aggression or agitation.\nObjectives\nTo examine whether oral risperidone alone is an effective treatment for psychosis-induced aggression or agitation.\nSearch methods\nWe searched the Cochrane Schizophrenia Group's Study-Based Register of Trials (up to April 2017); this register is compiled by systematic searches of major resources (including AMED, BIOSIS CINAHL, Embase, MEDLINE, PsycINFO, PubMed, and registries of clinical trials) and their monthly updates, handsearches, grey literature, and conference proceedings. There are no language, date, document type, or publication status limitations for inclusion of records into the register.\nSelection criteria\nRandomised controlled trials (RCTs) comparing rapid use of risperidone and other drugs, combinations of drugs or placebo for people exhibiting aggression or agitation (or both) thought to be due to psychosis.\nData collection and analysis\nWe independently inspected all citations from searches, identified relevant abstracts, and independently extracted data from all included studies. For binary data we calculated risk ratio (RR) and for continuous data we calculated mean difference (MD), all with 95% confidence intervals (CI) and used a fixed-effect model. We assessed risk of bias for the included studies and used the GRADE approach to produce a ’Summary of findings’ tables.\nMain results\nThe review now contains data from nine trials (total n = 582) reporting on five comparisons. Due to risk of bias, small size of trials, indirectness of outcome measures and a paucity of investigated and reported 'pragmatic' outcomes, evidence was graded as very-low quality. None of the included studies provided useable data on our primary outcome 'tranquillisation or asleep' by 30 minutes, repeated need for tranquillisation or any economic outcomes. Data were available for our other main outcomes of agitation or aggression, needing restraint, and incidence of adverse effects.\nRisperidone versus haloperidol (up to 24 hours follow-up)\nFor the outcome, specific behaviour - agitation, no clear difference was found between risperidone and haloperidol in terms of efficacy, measured as at least 50% reduction in the Positive and Negative Syndrome Scale - Psychotic Agitation Sub-score (PANSS-PAS) (RR 1.04, 95% CI 0.86 to 1.26; participants = 124; studies = 1; very low-quality evidence) and no effect was observed for need to use restraints (RR 2.00, 95% CI 0.43 to 9.21; participants = 28; studies = 1; very low-quality evidence). Incidence of adverse effects was similar between treatment groups (RR 0.94, 95% CI 0.54 to 1.66; participants = 124; studies = 1; very low-quality evidence).\nRisperidone versus olanzapine\nOne small trial (n = 29) reported useable data for the comparison risperidone versus olanzapine. 
No effect was observed for agitation measured as PANSS-PAS endpoint score at two hours (MD 2.50, 95% CI -2.46 to 7.46; very low-quality evidence); need to use restraints at four days (RR 1.43, 95% CI 0.39 to 5.28; very low-quality evidence); specific movement disorders measured as Behavioural Activity Rating Scale (BARS) endpoint score at four days (MD 0.20, 95% CI -0.43 to 0.83; very low-quality evidence).\nRisperidone versus quetiapine\nOne trial (n = 40) reported useable data for the comparison risperidone versus quetiapine. Aggression was measured using the Modified Overt Aggression Scale (MOAS) endpoint score at two weeks. A clear difference, favouring quetiapine, was observed (MD 1.80, 95% CI 0.20 to 3.40; very low-quality evidence). No evidence of a difference between treatment groups could be observed for incidence of akathisia after 24 hours (RR 1.67, 95% CI 0.46 to 6.06; very low-quality evidence). Two participants allocated to risperidone and one allocated to quetiapine experienced myocardial ischaemia during the trial.\nRisperidone versus risperidone + oxcarbazepine\nOne trial (n = 68) measured agitation using the Positive and Negative Syndrome Scale - Excited Component (PANSS-EC) endpoint score and found a clear difference, favouring the combination treatment at one week (MD 2.70, 95% CI 0.42 to 4.98; very low-quality evidence), but no effect was observed for global state using Clinical Global Impression - Improvement (CGI-I) endpoint score at one week (MD -0.20, 95% CI -0.61 to 0.21; very low-quality evidence). Incidence of extrapyramidal symptoms after 24 hours was similar between treatment groups (RR 1.59, 95% CI 0.49 to 5.14; very low-quality evidence).\nRisperidone versus risperidone + valproic acid\nTwo trials compared risperidone with a combination of risperidone plus valproic acid. No clear differences between the treatment groups were observed for aggression (MOAS endpoint score at three days: MD 1.07, 95% CI -0.20 to 2.34; participants = 54; studies = 1; very low-quality evidence) or incidence of akathisia after 24 hours (RR 0.75, 95% CI 0.28 to 2.03; participants = 122; studies = 2; very low-quality evidence).\nAuthors' conclusions\nOverall, results for the main outcomes show no real effect for risperidone. The only data available for use in this review are from nine under-sampled trials and the evidence available is of very low quality. This casts uncertainty on the role of risperidone in rapid tranquillisation for people with psychosis-induced aggression. High-quality pragmatic RCTs are feasible and are needed before clear recommendations can be drawn on the use of risperidone for psychosis-induced aggression or agitation.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Nine studies, with 582 participants, are included in the review but the information provided is poor in quality and tended to provide information only partially relevant to the main aim of this review, particularly a lack of information regarding immediate (i.e. under one hour after treatment) calming effects and the need for repeated tranquillisation. Economic data were also not reported. In the trials, risperidone was compared to other antipsychotics, which included haloperidol, olanzapine and quetiapine. The review found risperidone was no better or worse than haloperidol for calming aggression within 24 hours, and that two weeks after treatment, people receiving risperidone had higher (worse) scores on scales measuring levels of aggression than those receiving quetiapine. Both these results, however, were graded as very low-quality evidence. One small study found a combination of antipsychotics (risperidone plus oxcarbazepine) was better than risperidone alone at reducing levels of agitation but these data were collected after one week and again, this evidence was rated as very low quality. No clear differences in the incidence of side effects such as movement disorders were observed."
}
] |
query-laysum
|
8266
|
[
{
"role": "user",
"content": "Abstract: Background\nIn many countries emergency departments (EDs) are facing an increase in demand for services, long waits, and severe crowding. One response to mitigate overcrowding has been to provide primary care services alongside or within hospital EDs for patients with non-urgent problems. However, it is unknown how this impacts the quality of patient care and the utilisation of hospital resources, or if it is cost-effective. This is the first update of the original Cochrane Review published in 2012.\nObjectives\nTo assess the effects of locating primary care professionals in hospital EDs to provide care for patients with non-urgent health problems, compared with care provided by regularly scheduled emergency physicians (EPs).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (the Cochrane Library; 2017, Issue 4), MEDLINE, Embase, CINAHL, PsycINFO, and King's Fund, from inception until 10 May 2017. We searched ClinicalTrials.gov and the WHO ICTRP for registered clinical trials, and screened reference lists of included papers and relevant systematic reviews.\nSelection criteria\nRandomised trials, non-randomised trials, controlled before-after studies, and interrupted time series studies that evaluated the effectiveness of introducing primary care professionals to hospital EDs attending to patients with non-urgent conditions, as compared to the care provided by regularly scheduled EPs.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe identified four trials (one randomised trial and three non-randomised trials), one of which is newly identified in this update, involving a total of 11,463 patients, 16 general practitioners (GPs), 9 emergency nurse practitioners (NPs), and 69 EPs. These studies evaluated the effects of introducing GPs or emergency NPs to provide care to patients with non-urgent problems in the ED, as compared to EPs for outcomes such as resource use. The studies were conducted in Ireland, the UK, and Australia, and had an overall high or unclear risk of bias. The outcomes investigated were similar across studies, and there was considerable variation in the triage system used, the level of expertise and experience of the medical practitioners, and type of hospital (urban teaching, suburban community hospital). Main sources of funding were national or regional health authorities and a medical research funding body.\nThere was high heterogeneity across studies, which precluded pooling data. It is uncertain whether the intervention reduces time from arrival to clinical assessment and treatment or total length of ED stay (1 study; 260 participants), admissions to hospital, diagnostic tests, treatments given, or consultations or referrals to hospital-based specialist (3 studies; 11,203 participants), as well as costs (2 studies; 9325 participants), as we assessed the evidence as being of very low-certainty for all outcomes.\nNo data were reported on adverse events (such as ED returns and mortality).\nAuthors' conclusions\nWe assessed the evidence from the four included studies as of very low-certainty overall, as the results are inconsistent and safety has not been examined. 
The evidence is insufficient to draw conclusions for practice or policy regarding the effectiveness and safety of care provided to non-urgent patients by GPs and NPs versus EPs in the ED to mitigate problems of overcrowding, wait times, and patient flow.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies published up to May 2017."
}
] |
query-laysum
|
8267
|
[
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis is caused by a defective gene encoding a protein called the cystic fibrosis transmembrane conductance regulator (CFTR), and is characterised by chronic lung infection resulting in inflammation and progressive lung damage that results in a reduced life expectancy.\nObjectives\nTo determine whether topical CFTR gene replacement therapy to the lungs in people with cystic fibrosis is associated with improvements in clinical outcomes, and to assess any adverse effects.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches, handsearching relevant journals and abstract books of conference proceedings.\nDate of most recent search: 05 May 2016.\nAn additional search of the National Institutes for Health (NIH) Genetic Modification Clinical Research Information System (GeMCRIS) was also performed for the years 1992 to 2015.\nDate of most recent search: 20 April 2016.\nSelection criteria\nRandomised controlled studies comparing topical CFTR gene delivery to the lung, using either viral or non-viral delivery systems, with placebo or an alternative delivery system in people with confirmed cystic fibrosis.\nData collection and analysis\nThe authors independently extracted data and assessed study quality. Authors of included studies were contacted and asked for any available additional data. Meta-analysis was limited due to differing study designs.\nMain results\nFour randomised controlled studies met the inclusion criteria for this review, involving a total of 302 participants lasting from 29 days to 13 months; 14 studies were excluded. The included studies differed in terms of CFTR gene replacement agent and study design, which limited the meta-analysis. One study only enrolled adult males, the remaining studies included both males and females aged 12 years and over.\nRisk of bias in the studies was moderate. Random sequence generation and allocation concealment was only described in the more recent study; the remaining three studies were judged to have an unclear risk of bias. All four studies documented double-blinding to the intervention, but there is some uncertainty with regards to participant blinding in one study. Some outcome data were missing from all four studies.\nThere were no differences in either the number of respiratory exacerbations or the number of participants with an exacerbation between replacement therapy or placebo groups at any time point. Meta-analysis of most respiratory function tests showed no difference between treatment and placebo groups, but the smallest study (n = 16) reported forced vital capacity (litres) increased more in the placebo group at up to 24 hours. A further study reported a significant improvement in forced expiratory volume at one second (litres) at 30 days after participants had received their first dose of favouring the gene therapy agent, but this finding was not confirmed when combined with at second study in the meta-analysis. The more recent study (n = 140) demonstrated a small improvement in forced vital capacity (per cent predicted) at two and three months and again at 11 and 12 months for participants receiving CFTR gene replacement therapy compared to those receiving placebo. 
The same study reported a significant difference in the relative change in forced expiratory volume at one second (per cent predicted) at two months, three months and 12 months.\nOne small study reported significant concerns with \"influenza-like\" symptoms in participants treated with CFTR gene replacement therapy; this was not reported on repeated use of the same agent in a larger recent study.\nThere was no other evidence of positive impact on outcomes, in particular improved quality of life or reduced treatment burden.\nTwo studies measured ion transport in the lower airways; one (n = 16) demonstrated significant changes toward normal values in the participants who received gene transfer agents (P < 0.0001), mean difference 6.86 (95% confidence interval 3.77 to 9.95). The second study (n = 140) also reported significant changes toward normal values (P = 0.032); however, aggregate data were not available for analysis. In the most recent study, there was also evidence of increased salt transport in cells obtained by brushing the lower airway. These outcomes, whilst important, are not of direct clinical relevance.\nAuthors' conclusions\nOne study of liposome-based CFTR gene transfer therapy demonstrated some improvements in respiratory function in people with CF, but this limited evidence of efficacy does not support this treatment as a routine therapy at present. There was no evidence of efficacy for viral-mediated gene delivery.\nFuture studies need to investigate clinically important outcome measures.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies with 302 people to include in this review. The studies lasted from 29 days to 13 months. Three of these studies included both men and women aged 12 years and over and one study only included adult men. The studies compare gene therapy to a dummy treatment (placebo) both of which are inhaled as a mist into the lungs. The studies were of different designs and used different agents. This meant we could not combine their results."
}
] |
query-laysum
|
8268
|
[
{
"role": "user",
"content": "Abstract: Background\nExercise or physical activity is recommended for improving pain and functional status in people with knee or hip osteoarthritis. These are complex interventions whose effectiveness depends on one or more components that are often poorly identified. It has been suggested that health benefits may be greater with high-intensity rather than low-intensity exercise or physical activity.\nObjectives\nTo determine the benefits and harms of high- versus low-intensity physical activity or exercise programs in people with hip or knee osteoarthritis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; issue 06, 2014), MEDLINE (194 8 to June 2014) , EMBASE (198 0 to June 2014), CINAHL (1982 to June 2014), PEDro (1929 to June 2014), SCOPUS (to June 2014) and the World Health Organization (WHO) International Clinical Registry Platform (to June 2014) for articles, without a language restriction. We also handsearched relevant conference proceedings, trials, and reference lists and contacted researchers and experts in the field to identify additional studies.\nSelection criteria\nWe included randomized controlled trials of people with knee or hip osteoarthritis that compared high- versus low-intensity physical activity or exercise programs between the experimental and control group.\nHigh-intensity physical activity or exercise programs training had to refer to an increase in the overall amount of training time (frequency, duration, number of sessions) or the amount of work (strength, number of repetitions) or effort/energy expenditure (exertion, heart rate, effort).\nData collection and analysis\nTwo review authors independently assessed study eligibility and extracted data on trial details. We contacted authors for additional information if necessary. We assessed the quality of the body of evidence for these outcomes using the GRADE approach.\nMain results\nWe included reports for six studies of 656 participants that compared high- and low-intensity exercise programs; five studies exclusively recruited people with symptomatic knee osteoarthritis (620 participants), and one study exclusively recruited people with hip or knee osteoarthritis (36 participants). The majority of the participants were females (70%). No studies evaluated physical activity programs. We found the overall quality of evidence to be low to very low due to concerns about study limitations and imprecision (small number of studies, large confidence intervals) for the major outcomes using the GRADE approach. Most of the studies had an unclear or high risk of bias for several domains, and we judged five of the six studies to be at high risk for performance, detection, and attrition bias.\nLow-quality evidence indicated reduced pain on a 20-point Western Ontario and McMaster Universities Arthritis Index (WOMAC) pain scale (mean difference (MD) -0.84, 95% confidence interval (CI) -1.63 to -0.04; 4% absolute reduction, 95% CI -8% to 0%; number needed to treat for an additional beneficial outcome (NNTB) 11, 95% CI 14 to 22) and improved physical function on the 68-point WOMAC disability subscale (MD -2.65, 95% CI -5.29 to -0.01; 4% absolute reduction; NNTB 10, 95% CI 8 to 13) immediately at the end of the exercise programs (from 8 to 24 weeks). However, these results are unlikely to be of clinical importance. These small improvements did not continue at longer-term follow-up (up to 40 weeks after the end of the intervention). 
We are uncertain of the effect on quality of life, as only one study reported this outcome (0 to 200 scale; MD 4.3, 95% CI -6.5 to 15.2; 2% absolute reduction; very low level of evidence).\nOur subgroup analyses provided uncertain evidence as to whether increased exercise time (duration, number of sessions) and level of resistance (strength or effort) have an impact on the exercise program effects.\nThree studies reported withdrawals due to adverse events. The number of dropouts was small. Only one study systematically monitored adverse effects, but four studies reported some adverse effects related to knee pain associated with an exercise program. We are uncertain as to whether high intensity increases the number of adverse effects (Peto odds ratio 1.72, 95% CI 0.51 to 5.81; −2% absolute risk reduction; very low level of evidence). None of the included studies reported serious adverse events.\nAuthors' conclusions\nWe found very low-quality to low-quality evidence for no important clinical benefit of high-intensity compared to low-intensity exercise programs in improving pain and physical function in the short term. There was insufficient evidence to determine the effect of different types of intensity of exercise programs.\nWe are uncertain as to whether higher-intensity exercise programs may induce more harmful effects than those of lower intensity; this must be evaluated by further studies. Withdrawals due to adverse events were poorly monitored and not reported systematically in each group. We downgraded the evidence to low or very low because of the risk of bias, inconsistency, and imprecision.\nThe small number of studies comparing high- and low-intensity exercise programs in osteoarthritis underscores the need for more studies investigating the dose–response relationship in exercise programs. In particular, further studies are needed to establish the minimal intensity of exercise programs needed for clinical effect and the highest intensity patients can tolerate. Larger studies should comply with the Consolidated Standards of Reporting Trials (CONSORT) checklist and systematically report harms data to evaluate the potential impact of the highest intensities of exercise programs in people with joint damage.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We graded the quality of evidence as low for pain and function and very low for quality of life. The small number of studies and participants included in some analyses reduced the robustness and precision of these findings.\nAdverse effects were poorly recorded. Very low quality evidence shows we are uncertain whether higher-intensity exercise programs may result in more side effects than lower-intensity exercise programs. Further research may change the result."
}
] |
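Editor's note: the abstract above quotes numbers needed to treat for an additional beneficial outcome (NNTB). For a dichotomous outcome the calculation is simply the reciprocal of the absolute risk difference, as sketched below with hypothetical responder risks; the review's own NNTBs were derived from continuous WOMAC scores, which requires a responder conversion not shown here.

```python
import math

def nntb(risk_control, risk_treatment):
    """NNTB: how many patients must be treated for one extra
    beneficial outcome; 1 / absolute risk difference, rounded up."""
    arr = risk_treatment - risk_control
    return math.ceil(1 / arr)

# Hypothetical: 25% of controls vs 50% of exercisers improve.
print(nntb(0.25, 0.50))  # -> 4
```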
query-laysum
|
8269
|
[
{
"role": "user",
"content": "Abstract: Background\nIncreasing numbers of incidental pancreatic lesions are being detected each year. Accurate characterisation of pancreatic lesions into benign, precancerous, and cancer masses is crucial in deciding whether to use treatment or surveillance. Distinguishing benign lesions from precancerous and cancerous lesions can prevent patients from undergoing unnecessary major surgery. Despite the importance of accurately classifying pancreatic lesions, there is no clear algorithm for management of focal pancreatic lesions.\nObjectives\nTo determine and compare the diagnostic accuracy of various imaging modalities in detecting cancerous and precancerous lesions in people with focal pancreatic lesions.\nSearch methods\nWe searched the CENTRAL, MEDLINE, Embase, and Science Citation Index until 19 July 2016. We searched the references of included studies to identify further studies. We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively.\nSelection criteria\nWe planned to include studies reporting cross-sectional information on the index test (CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), EUS (endoscopic ultrasound), EUS elastography, and EUS-guided biopsy or FNA (fine-needle aspiration)) and reference standard (confirmation of the nature of the lesion was obtained by histopathological examination of the entire lesion by surgical excision, or histopathological examination for confirmation of precancer or cancer by biopsy and clinical follow-up of at least six months in people with negative index tests) in people with pancreatic lesions irrespective of language or publication status or whether the data were collected prospectively or retrospectively.\nData collection and analysis\nTwo review authors independently searched the references to identify relevant studies and extracted the data. We planned to use the bivariate analysis to calculate the summary sensitivity and specificity with their 95% confidence intervals and the hierarchical summary receiver operating characteristic (HSROC) to compare the tests and assess heterogeneity, but used simpler models (such as univariate random-effects model and univariate fixed-effect model) for combining studies when appropriate because of the sparse data. We were unable to compare the diagnostic performance of the tests using formal statistical methods because of sparse data.\nMain results\nWe included 54 studies involving a total of 3,196 participants evaluating the diagnostic accuracy of various index tests. In these 54 studies, eight different target conditions were identified with different final diagnoses constituting benign, precancerous, and cancerous lesions. None of the studies was of high methodological quality. None of the comparisons in which single studies were included was of sufficiently high methodological quality to warrant highlighting of the results. For differentiation of cancerous lesions from benign or precancerous lesions, we identified only one study per index test. The second analysis, of studies differentiating cancerous versus benign lesions, provided three tests in which meta-analysis could be performed. 
The sensitivities and specificities for diagnosing cancer were: EUS-FNA: sensitivity 0.79 (95% confidence interval (CI) 0.07 to 1.00), specificity 1.00 (95% CI 0.91 to 1.00); EUS: sensitivity 0.95 (95% CI 0.84 to 0.99), specificity 0.53 (95% CI 0.31 to 0.74); PET: sensitivity 0.92 (95% CI 0.80 to 0.97), specificity 0.65 (95% CI 0.39 to 0.84). The third analysis, of studies differentiating precancerous or cancerous lesions from benign lesions, only provided one test (EUS-FNA) in which meta-analysis was performed. EUS-FNA had moderate sensitivity for diagnosing precancerous or cancerous lesions (sensitivity 0.73 (95% CI 0.01 to 1.00) and high specificity 0.94 (95% CI 0.15 to 1.00), the extremely wide confidence intervals reflecting the heterogeneity between the studies). The fourth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (dysplasia) provided three tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing invasive carcinoma were: CT: sensitivity 0.72 (95% CI 0.50 to 0.87), specificity 0.92 (95% CI 0.81 to 0.97); EUS: sensitivity 0.78 (95% CI 0.44 to 0.94), specificity 0.91 (95% CI 0.61 to 0.98); EUS-FNA: sensitivity 0.66 (95% CI 0.03 to 0.99), specificity 0.92 (95% CI 0.73 to 0.98). The fifth analysis, of studies differentiating cancerous (high-grade dysplasia or invasive carcinoma) versus precancerous (low- or intermediate-grade dysplasia) provided six tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing cancer (high-grade dysplasia or invasive carcinoma) were: CT: sensitivity 0.87 (95% CI 0.00 to 1.00), specificity 0.96 (95% CI 0.00 to 1.00); EUS: sensitivity 0.86 (95% CI 0.74 to 0.92), specificity 0.91 (95% CI 0.83 to 0.96); EUS-FNA: sensitivity 0.47 (95% CI 0.24 to 0.70), specificity 0.91 (95% CI 0.32 to 1.00); EUS-FNA carcinoembryonic antigen 200 ng/mL: sensitivity 0.58 (95% CI 0.28 to 0.83), specificity 0.51 (95% CI 0.19 to 0.81); MRI: sensitivity 0.69 (95% CI 0.44 to 0.86), specificity 0.93 (95% CI 0.43 to 1.00); PET: sensitivity 0.90 (95% CI 0.79 to 0.96), specificity 0.94 (95% CI 0.81 to 0.99). The sixth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (low-grade dysplasia) provided no tests in which meta-analysis was performed. The seventh analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) provided two tests in which meta-analysis was performed. The sensitivity and specificity for diagnosing cancer were: CT: sensitivity 0.83 (95% CI 0.68 to 0.92), specificity 0.83 (95% CI 0.64 to 0.93) and MRI: sensitivity 0.80 (95% CI 0.58 to 0.92), specificity 0.81 (95% CI 0.53 to 0.95), respectively. The eighth analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) or benign lesions provided no test in which meta-analysis was performed.\nThere were no major alterations in the subgroup analysis of cystic pancreatic focal lesions (42 studies; 2086 participants). 
None of the included studies evaluated EUS elastography or sequential testing.\nAuthors' conclusions\nWe were unable to arrive at any firm conclusions because of the differences in the way that study authors classified focal pancreatic lesions into cancerous, precancerous, and benign lesions; the inclusion of few studies with wide confidence intervals for each comparison; poor methodological quality in the studies; and heterogeneity in the estimates within comparisons.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All of the studies were of low methodological quality, which may result in arriving at false conclusions."
}
] |
query-laysum
|
8270
|
[
{
"role": "user",
"content": "Abstract: Background\nThe use of anaesthetics in the elderly surgical population (more than 60 years of age) is increasing. Postoperative delirium, an acute condition characterized by reduced awareness of the environment and a disturbance in attention, typically occurs between 24 and 72 hours after surgery and can affect up to 60% of elderly surgical patients. Postoperative cognitive dysfunction (POCD) is a new-onset of cognitive impairment which may persist for weeks or months after surgery.\nTraditionally, surgical anaesthesia has been maintained with inhalational agents. End-tidal concentrations require adjustment to balance the risks of accidental awareness and excessive dosing in elderly people. As an alternative, propofol-based total intravenous anaesthesia (TIVA) offers a more rapid recovery and reduces postoperative nausea and vomiting. Using TIVA with a target controlled infusion (TCI) allows plasma and effect-site concentrations to be calculated using an algorithm based on age, gender, weight and height of the patient.\nTIVA is a viable alternative to inhalational maintenance agents for surgical anaesthesia in elderly people. However, in terms of postoperative cognitive outcomes, the optimal technique is unknown.\nObjectives\nTo compare maintenance of general anaesthesia for elderly people undergoing non-cardiac surgery using propofol-based TIVA or inhalational anaesthesia on postoperative cognitive function, mortality, risk of hypotension, length of stay in the postanaesthesia care unit (PACU), and hospital stay.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 11), MEDLINE (1946 to November 2017), Embase (1974 to November 2017), PsycINFO (1887 to November 2017). We searched clinical trials registers for ongoing studies, and conducted backward and forward citation searching of relevant articles.\nSelection criteria\nWe included randomized controlled trials (RCTs) with participants over 60 years of age scheduled for non-cardiac surgery under general anaesthesia. We planned to also include quasi-randomized trials. We compared maintenance of anaesthesia with propofol-based TIVA versus inhalational maintenance of anaesthesia.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, assessed risk of bias, and synthesized findings.\nMain results\nWe included 28 RCTs with 4507 randomized participants undergoing different types of surgery (predominantly cardiovascular, laparoscopic, abdominal, orthopaedic and ophthalmic procedures). We found no quasi-randomized trials. Four studies are awaiting classification because we had insufficient information to assess eligibility.\nAll studies compared maintenance with propofol-based TIVA versus inhalational maintenance of anaesthesia. Six studies were multi-arm and included additional TIVA groups, additional inhalational maintenance or both. Inhalational maintenance agents included sevoflurane (19 studies), isoflurane (eight studies), and desflurane (three studies), and was not specified in one study (reported as an abstract). Some studies also reported use of epidural analgesia/anaesthesia, fentanyl and remifentanil.\nWe found insufficient reporting of randomization methods in many studies and all studies were at high risk of performance bias because it was not feasible to blind anaesthetists to study groups. Thirteen studies described blinding of outcome assessors. 
Three studies had a high risk of attrition bias, and we noted differences in the use of analgesics between groups in six studies, and differences in baseline characteristics in five studies. Few studies reported clinical trials registration, which prevented assessment of risk of selective reporting bias.\nWe found no evidence of a difference in incidences of postoperative delirium according to type of anaesthetic maintenance agents (odds ratio (OR) 0.59, 95% confidence interval (CI) 0.15 to 2.26; 321 participants; five studies; very low-certainty evidence); we noted during sensitivity analysis that using different time points in one study may influence the direction of this result. Thirteen studies (3215 participants) reported POCD, and of these, six studies reported data that could not be pooled; we noted no difference in scores of POCD in four of these; in one study, data were at a time point incomparable to other studies. We excluded one large study from meta-analysis because study investigators had used non-standard anaesthetic management and this study was not methodologically comparable to other studies. We combined data for seven studies and found low-certainty evidence that TIVA may reduce POCD (OR 0.52, 95% CI 0.31 to 0.87; 869 participants).\nWe found no evidence of a difference in mortality at 30 days (OR 1.21, 95% CI 0.33 to 4.45; 271 participants; three studies; very low-certainty evidence). Twelve studies reported intraoperative hypotension. We did not perform meta-analysis for 11 studies for this outcome. We noted visual inconsistencies in these data, which may be explained by possible variation in clinical management and medication used to manage hypotension in each study (downgraded to low-certainty evidence); one study reported data in a format that could not be combined and we noted little or no difference between groups in intraoperative hypotension for this study. Eight studies reported length of stay in the PACU, and we did not perform meta-analysis for seven studies. We noted visual inconsistencies in these data, which may be explained by possible differences in definition of time points for this outcome (downgraded to very low-certainty evidence); data were unclearly reported in one study. We found no evidence of a difference in length of hospital stay according to type of anaesthetic maintenance agent (mean difference (MD) 0 days, 95% CI -1.32 to 1.32; 175 participants; four studies; very low-certainty evidence).\nWe used the GRADE approach to downgrade the certainty of the evidence for each outcome. Reasons for downgrading included: study limitations, because some included studies insufficiently reported randomization methods, had high attrition bias, or high risk of selective reporting bias; imprecision, because we found few studies; inconsistency, because we noted heterogeneity across studies.\nAuthors' conclusions\nWe are uncertain whether maintenance with propofol-based TIVA or with inhalational agents affects incidences of postoperative delirium, mortality, or length of hospital stay because certainty of the evidence was very low. We found low-certainty evidence that maintenance with propofol-based TIVA may reduce POCD. We were unable to perform meta-analysis for intraoperative hypotension or length of stay in the PACU because of heterogeneity between studies. 
We identified 11 ongoing studies from clinical trials register searches; inclusion of these studies in future review updates may provide more certainty for the review outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Many studies did not report randomization methods adequately and all studies were at high risk of bias from anaesthetists, who needed to be aware of which anaesthetic agent they used. Outcome assessors in some studies were aware of which study group participants were in. We noted a large loss of participants in three studies, and some studies had differences between groups in the types of drugs used for pain, the types of monitors used to assess how deeply-unconscious the patients were, and participant characteristics at the start of the studies; these factors may have influenced the results. Few studies had reported clinical trials registration. We found few studies for two outcomes (mortality and length of hospital stay), which made the results less precise. We judged evidence for postoperative delirium, number of deaths, length of stay in the PACU, and length of hospital stay to be very low certainty, and evidence for postoperative cognitive dysfunction, and low blood pressure during the operation to be low certainty.\nTIVA with propofol may reduce postoperative cognitive dysfunction. We are uncertain whether the choice of anaesthetic agents (TIVA with propofol, or inhalational agents) affects postoperative delirium, mortality and length of hospital stay. We found 11 ongoing studies in database and clinical trials register searches. Inclusion of these studies in future review updates will provide more certainty for the review outcomes."
}
] |
query-laysum
|
8271
|
[
{
"role": "user",
"content": "Abstract: Background\nAlcohol use in young people is a risk factor for a range of short- and long-term harms and is a cause of concern for health services, policy-makers, youth workers, teachers, and parents.\nObjectives\nTo assess the effectiveness of universal, selective, and indicated family-based prevention programmes in preventing alcohol use or problem drinking in school-aged children (up to 18 years of age).\nSpecifically, on these outcomes, the review aimed:\n• to assess the effectiveness of universal family-based prevention programmes for all children up to 18 years (‘universal interventions’);\n• to assess the effectiveness of selective family-based prevention programmes for children up to 18 years at elevated risk of alcohol use or problem drinking (‘selective interventions’); and\n• to assess the effectiveness of indicated family-based prevention programmes for children up to 18 years who are currently consuming alcohol, or who have initiated use or regular use (‘indicated interventions’).\nSearch methods\nWe identified relevant evidence from the Cochrane Central Register of Controlled Trials (CENTRAL), in the Cochrane Library, MEDLINE (Ovid 1966 to June 2018), Embase (1988 to June 2018), Education Resource Information Center (ERIC; EBSCOhost; 1966 to June 2018), PsycINFO (Ovid 1806 to June 2018), and Google Scholar. We also searched clinical trial registers and handsearched references of topic-related systematic reviews and the included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster RCTs (C-RCTs) involving the parents of school-aged children who were part of the general population with no known risk factors (universal interventions), were at elevated risk of alcohol use or problem drinking (selective interventions), or were already consuming alcohol (indicated interventions). Psychosocial or educational interventions involving parents with or without involvement of children were compared with no intervention, or with alternate (e.g. child only) interventions, allowing experimental isolation of parent components.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 46 studies (39,822 participants), with 27 classified as universal, 12 as selective, and seven as indicated. We performed meta-analyses according to outcome, including studies reporting on the prevalence, frequency, or volume of alcohol use. The overall quality of evidence was low or very low, and there was high, unexplained heterogeneity.\nUpon comparing any family intervention to no intervention/standard care, we found no intervention effect on the prevalence (standardised mean difference (SMD) 0.00, 95% confidence interval (CI) -0.08 to 0.08; studies = 12; participants = 7490; I² = 57%; low-quality evidence) or frequency (SMD -0.31, 95% CI -0.83 to 0.21; studies = 8; participants = 1835; I² = 96%; very low-quality evidence) of alcohol use in comparison with no intervention/standard care. 
The effect of any parent/family interventions on alcohol consumption volume compared with no intervention/standard care was very small (SMD -0.14, 95% CI -0.27 to 0.00; studies = 5; participants = 1825; I² = 42%; low-quality evidence).\nWhen comparing parent/family and adolescent interventions versus interventions with young people alone, we found no difference in alcohol use prevalence (SMD -0.39, 95% CI -0.91 to 0.14; studies = 4; participants = 5640; I² = 99%; very low-quality evidence) or frequency (SMD -0.16, 95% CI -0.42 to 0.09; studies = 4; participants = 915; I² = 73%; very low-quality evidence). For this comparison, no trials reporting on the volume of alcohol use could be pooled in meta-analysis.\nIn general, the results remained consistent in separate subgroup analyses of universal, selective, and indicated interventions. No adverse effects were reported.\nAuthors' conclusions\nThe results of this review indicate that there are no clear benefits of family-based programmes for alcohol use among young people. Patterns differ slightly across outcomes, but overall, the variation, heterogeneity, and number of analyses performed preclude any conclusions about intervention effects. Additional independent studies are required to strengthen the evidence and clarify the marginal effects observed.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence was current to June 2018."
}
] |
query-laysum
|
8272
|
[
{
"role": "user",
"content": "Abstract: Background\nSystematic screening in high-burden settings is recommended as a strategy for early detection of pulmonary tuberculosis disease, reducing mortality, morbidity and transmission, and improving equity in access to care. Questioning for symptoms and chest radiography (CXR) have historically been the most widely available tools to screen for tuberculosis disease. Their accuracy is important for the design of tuberculosis screening programmes and determines, in combination with the accuracy of confirmatory diagnostic tests, the yield of a screening programme and the burden on individuals and the health service.\nObjectives\nTo assess the sensitivity and specificity of questioning for the presence of one or more tuberculosis symptoms or symptom combinations, CXR, and combinations of these as screening tools for detecting bacteriologically confirmed pulmonary tuberculosis disease in HIV-negative adults and adults with unknown HIV status who are considered eligible for systematic screening for tuberculosis disease. Second, to investigate sources of heterogeneity, especially in relation to regional, epidemiological, and demographic characteristics of the study populations.\nSearch methods\nWe searched the MEDLINE, Embase, LILACS, and HTA (Health Technology Assessment) databases using pre-specified search terms and consulted experts for unpublished reports, for the period 1992 to 2018. The search date was 10 December 2018. This search was repeated on 2 July 2021.\nSelection criteria\nStudies were eligible if participants were screened for tuberculosis disease using symptom questions, or abnormalities on CXR, or both, and were offered confirmatory testing with a reference standard. We included studies if diagnostic two-by-two tables could be generated for one or more index tests, even if not all participants were subjected to a microbacteriological reference standard. We excluded studies evaluating self-reporting of symptoms.\nData collection and analysis\nWe categorized symptom and CXR index tests according to commonly used definitions. We assessed the methodological quality of included studies using the QUADAS-2 instrument. We examined the forest plots and receiver operating characteristic plots visually for heterogeneity. We estimated summary sensitivities and specificities (and 95% confidence intervals (CI)) for each index test using bivariate random-effects methods. We analyzed potential sources of heterogeneity in a hierarchical mixed-model.\nMain results\nThe electronic database search identified 9473 titles and abstracts. Through expert consultation, we identified 31 reports on national tuberculosis prevalence surveys as eligible (of which eight were already captured in the search of the electronic databases), and we identified 957 potentially relevant articles through reference checking. After removal of duplicates, we assessed 10,415 titles and abstracts, of which we identified 430 (4%) for full text review, whereafter we excluded 364 articles. In total, 66 articles provided data on 59 studies. We assessed the 2 July 2021 search results; seven studies were potentially eligible but would make no material difference to the review findings or grading of the evidence, and were not added in this edition of the review.\nWe judged most studies at high risk of bias in one or more domains, most commonly because of incorporation bias and verification bias. 
We judged applicability concerns to be low in more than 80% of studies in all three domains.\nThe three most common symptom index tests, cough for two or more weeks (41 studies), any cough (21 studies), and any tuberculosis symptom (29 studies), showed a summary sensitivity of 42.1% (95% CI 36.6% to 47.7%), 51.3% (95% CI 42.8% to 59.7%), and 70.6% (95% CI 61.7% to 78.2%, all very low-certainty evidence), and a specificity of 94.4% (95% CI 92.6% to 95.8%, high-certainty evidence), 87.6% (95% CI 81.6% to 91.8%, low-certainty evidence), and 65.1% (95% CI 53.3% to 75.4%, low-certainty evidence), respectively. The data on symptom index tests were more heterogeneous than those for CXR. The studies on any tuberculosis symptom were the most heterogeneous, but had the lowest number of variables explaining this variation. Symptom index tests also showed regional variation.\nThe summary sensitivity of any CXR abnormality (23 studies) was 94.7% (95% CI 92.2% to 96.4%, very low-certainty evidence) and 84.8% (95% CI 76.7% to 90.4%, low-certainty evidence) for CXR abnormalities suggestive of tuberculosis (19 studies), and specificity was 89.1% (95% CI 85.6% to 91.8%, low-certainty evidence) and 95.6% (95% CI 92.6% to 97.4%, high-certainty evidence), respectively. Sensitivity was more heterogeneous than specificity, and could be explained by regional variation.\nThe addition of cough for two or more weeks, whether to any (pulmonary) CXR abnormality or to CXR abnormalities suggestive of tuberculosis, resulted in a summary sensitivity and specificity of 99.2% (95% CI 96.8% to 99.8%) and 84.9% (95% CI 81.2% to 88.1%) (15 studies; certainty of evidence not assessed).\nAuthors' conclusions\nThe summary estimates of the symptom and CXR index tests may inform the choice of screening and diagnostic algorithms in any given setting or country where screening for tuberculosis is being implemented. The high sensitivity of CXR index tests, with or without symptom questions in parallel, suggests a high yield of persons with tuberculosis disease. However, additional considerations will determine the design of screening and diagnostic algorithms, such as the availability and accessibility of CXR facilities or the resources to fund them, and the need for more or fewer diagnostic tests to confirm the diagnosis (depending on screening test specificity), which also has resource implications.\nThese review findings should be interpreted with caution due to methodological limitations in the included studies and regional variation in sensitivity and specificity. The sensitivity and specificity of an index test in a specific setting cannot be predicted with great precision due to heterogeneity. This should be borne in mind when planning for and implementing tuberculosis screening programmes.\n\nGiven the provided abstract, please respond to the following query: \"Why is improving screening for tuberculosis of the lungs important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Systematic screening in settings where tuberculosis is common is a recommended strategy for early detection of tuberculosis. Screening helps identify people who are more likely to have tuberculosis so they can have confirmatory testing. These are additional tests to confirm the presence of Mycobacterium tuberculosis, the bacterium that causes tuberculosis. Asking about tuberculosis symptoms (for example, cough, coughing up blood, fever, and fatigue) and doing a chest X-ray (CXR), which shows lung abnormalities, are commonly used screening methods. Tuberculosis is treatable with antibiotics, which means that early detection may result in lower mortality and morbidity, less transmission of tuberculosis, and more equitable access to care.\nNot recognizing lung tuberculosis when it is present (a false-negative result) may result in delayed treatment and further transmission. Conversely, a screening result that is thought to be positive while it is not may result in unnecessary confirmatory tests, which burdens both the individual and the public health system.\nKnowing how often screening tests lead to false-positive and false-negative results – this is called accuracy - may help in choosing a screening method."
}
] |
query-laysum
|
8273
|
[
{
"role": "user",
"content": "Abstract: Background\nOn the American continent, cutaneous and mucocutaneous leishmaniasis (CL and MCL) are diseases associated with infection by several species of Leishmania parasites. Pentavalent antimonials remain the first-choice treatment. There are alternative interventions, but reviewing their effectiveness and safety is important as availability is limited. This is an update of a Cochrane Review first published in 2009.\nObjectives\nTo assess the effects of interventions for all immuno-competent people who have American cutaneous and mucocutaneous leishmaniasis (ACML).\nSearch methods\nWe updated our database searches of the Cochrane Skin Group Specialised Register, CENTRAL, MEDLINE, Embase, LILACS and CINAHL to August 2019. We searched five trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) assessing either single or combination treatments for ACML in immuno-competent people, diagnosed by clinical presentation and Leishmania infection confirmed by smear, culture, histology, or polymerase chain reaction on a biopsy specimen. The comparators were either no treatment, placebo only, or another active compound.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Our key outcomes were the percentage of participants 'cured' at least three months after the end of treatment, adverse effects, and recurrence. We used GRADE to assess evidence certainty for each outcome.\nMain results\nWe included 75 studies (37 were new), totalling 6533 randomised participants with ATL. The studies were mainly conducted in Central and South America at regional hospitals, local healthcare clinics, and research centres. More male participants were included (mean age: roughly 28.9 years (SD: 7.0)). The most common confirmed species were L. braziliensis, L. panamensis, and L. mexicana. The most assessed interventions and comparators were non-antimonial systemics (particularly oral miltefosine) and antimonials (particularly meglumine antimoniate (MA), which was also a common intervention), respectively.\nThree studies included moderate-to-severe cases of mucosal leishmaniasis but none included cases with diffuse cutaneous or disseminated CL, considered the severe cutaneous form. Lesions were mainly ulcerative and located in the extremities and limbs. The follow-up (FU) period ranged from 28 days to 7 years. All studies had high or unclear risk of bias in at least one domain (especially performance bias). None of the studies reported the degree of functional or aesthetic impairment, scarring, or quality of life.\nCompared to placebo, at one-year FU, intramuscular (IM) MA given for 20 days to treat L. braziliensis and L. panamensis infections in ACML may increase the likelihood of complete cure (risk ratio (RR) 4.23, 95% confidence interval (CI) 0.84 to 21.38; 2 RCTs, 157 participants; moderate-certainty evidence), but may also make little to no difference, since the 95% CI includes the possibility of both increased and reduced healing (cure rates), and IMMA probably increases severe adverse effects such as myalgias and arthralgias (RR 1.51, 95% CI 1.17 to 1.96; 1 RCT, 134 participants; moderate-certainty evidence). IMMA may make little to no difference to the recurrence risk, but the 95% CI includes the possibility of both increased and reduced risk (RR 1.79, 95% CI 0.17 to 19.26; 1 RCT, 127 participants; low-certainty evidence).\nCompared to placebo, at six-month FU, oral miltefosine given for 28 days to treat L. mexicana, L. 
panamensis and L. braziliensis infections in American cutaneous leishmaniasis (ACL) probably improves the likelihood of complete cure (RR 2.25, 95% CI 1.42 to 3.38), and probably increases nausea rates (RR 3.96, 95% CI 1.49 to 10.48) and vomiting (RR 6.92, 95% CI 2.68 to 17.86) (moderate-certainty evidence). Oral miltefosine may make little to no difference to the recurrence risk (RR 2.97, 95% CI 0.37 to 23.89; low-certainty evidence), but the 95% CI includes the possibility of both increased and reduced risk (all based on 1 RCT, 133 participants).\nCompared to IMMA, at 6 to 12 months FU, oral miltefosine given for 28 days to treat L. braziliensis, L. panamensis, L. guyanensis and L. amazonensis infections in ACML may make little to no difference to the likelihood of complete cure (RR 1.05, 95% CI 0.90 to 1.23; 7 RCTs, 676 participants; low-certainty evidence). Based on moderate-certainty evidence (3 RCTs, 464 participants), miltefosine probably increases nausea rates (RR 2.45, 95% CI 1.72 to 3.49) and vomiting (RR 4.76, 95% CI 1.82 to 12.46) compared to IMMA. Recurrence risk was not reported.\nFor the rest of the key comparisons, recurrence risk was not reported, and risk of adverse events could not be estimated.\nCompared to IMMA, at 6 to 12 months FU, oral azithromycin given for 20 to 28 days to treat L. braziliensis infections in ACML probably reduces the likelihood of complete cure (RR 0.51, 95% CI 0.34 to 0.76; 2 RCTs, 93 participants; moderate-certainty evidence).\nCompared to intravenous MA (IVMA) and placebo, at 12 months FU, adding topical imiquimod to IVMA, given for 20 days to treat L. braziliensis, L. guyanensis and L. peruviana infections in ACL probably makes little to no difference to the likelihood of complete cure (RR 1.30, 95% CI 0.95 to 1.80; 1 RCT, 80 participants; moderate-certainty evidence).\nCompared to MA, at 6 months FU, one session of local thermotherapy to treat L. panamensis and L. braziliensis infections in ACL reduces the likelihood of complete cure (RR 0.80, 95% CI 0.68 to 0.95; 1 RCT, 292 participants; high-certainty evidence).\nCompared to IMMA and placebo, at 26 weeks FU, adding oral pentoxifylline to IMMA to treat CL (species not stated) probably makes little to no difference to the likelihood of complete cure (RR 0.86, 95% CI 0.63 to 1.18; 1 RCT, 70 participants; moderate-certainty evidence).\nAuthors' conclusions\nEvidence certainty was mostly moderate or low, due to methodological shortcomings, which precluded conclusive results. Overall, both IMMA and oral miltefosine probably result in an increase in cure rates, and nausea and vomiting are probably more common with miltefosine than with IMMA.\nFuture trials should investigate interventions for mucosal leishmaniasis and evaluate recurrence rates of cutaneous leishmaniasis and its progression to mucosal disease.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Antimonials\nCompared to placebo, MA may increase chances of complete healing of ACML, but treatment effects vary, so it is possible that it may make little to no difference. MA probably increases the likelihood of pain in the muscles or joints. There may be little to no difference in the risk of developing the disease again, but there is also a possibility of increased or reduced risk due to the wide range of effects seen.\nNon-antimonials\nMiltefosine probably improves chances of complete healing of American cutaneous leishmaniasis (ACL) compared to placebo, but there may be little to no difference compared to treatment with MA in ACML. Miltefosine may make little to no difference to the risk of developing ACL again when compared to placebo, but treatment effects on recurrence varied, so it may also increase or decrease the risk. Miltefosine probably increases the likelihood of vomiting or nausea when compared to either placebo or MA in ACML. We do not know the effect on recurrence of miltefosine compared to MA.\nAzithromycin probably reduces chances of complete healing of ACML compared to MA.\nImiquimod in combination with MA probably makes little to no difference to the chance of complete healing of ACL compared to MA in combination with placebo.\nPhysical therapies\nThermotherapy lowers the chance of complete healing of ACL compared to MA.\nImmuno-chemotherapy\nPentoxifylline plus MA probably makes little to no difference to chances of complete healing of ACML compared with MA plus placebo.\nNo study reported information about damage, prevention of scarring, or quality of life."
}
] |
query-laysum
|
8274
|
[
{
"role": "user",
"content": "Abstract: Background\nPostoperative pain remains a significant problem following paediatric surgery. Premedication with a suitable agent may improve its management. Clonidine is an alpha-2 adrenergic agonist which has sedative, anxiolytic and analgesic properties. It may therefore be a useful premedication for reducing postoperative pain in children.\nObjectives\nTo evaluate the evidence for the effectiveness of clonidine, when given as a premedication, in reducing postoperative pain in children less than 18 years of age. We also sought evidence of any clinically significant side effects.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library (Issue 12, 2012), Ovid MEDLINE (1966 to 21 December 2012) and Ovid EMBASE (1982 to 21 December 2012), as well as reference lists of other relevant articles and online trial registers.\nSelection criteria\nWe included all randomized (or quasi-randomized), controlled trials comparing clonidine premedication to placebo, a higher dose of clonidine, or another agent when used for surgical or other invasive procedures in children under the age of 18 years and where pain or a surrogate (principally the need for supplementary analgesia) was reported.\nData collection and analysis\nTwo authors independently performed the database search, decided on the inclusion eligibility of publications, ascertained study quality and extracted data. They then resolved any differences between their results by discussion. The data were entered into RevMan 5 for analyses and presentation. Sensitivity analyses were performed, as appropriate, to exclude studies with a high risk of bias.\nMain results\nWe identified 11 trials investigating a total of 742 children in treatment arms relevant to our study question. Risks of bias in the studies were mainly low or unclear, but two studies had aspects of their methodology that had a high risk of bias. Overall, the quality of the evidence from pooled studies was low or had unclear risk of bias. Four trials compared clonidine with a placebo or no treatment, six trials compared clonidine with midazolam, and one trial compared clonidine with fentanyl. There was substantial methodological heterogeneity between trials; the dose and route of clonidine administration varied as did the patient populations, the types of surgery and the outcomes measured. It was therefore difficult to combine the outcomes of some trials for meta-analysis.\nWhen clonidine was compared to placebo, pooling studies of low or unclear risk of bias, the need for additional analgesia was reduced when clonidine premedication was given orally at 4 µg/kg (risk ratio (RR) 0.24, 95% confidence interval (CI) 0.11 to 0.51). Only one small trial (15 patients per arm) compared clonidine to midazolam for the same outcome; this also found a reduction in the need for additional postoperative analgesia (RR 0.25, 95% CI 0.09 to 0.71) when clonidine premedication was given orally at 2 or 4 µg/kg compared to oral midazolam at 0.5 mg/kg. A trial comparing oral clonidine at 4 µg/kg with intravenous fentanyl at 3 µg/kg found no statistically significant difference in the need for rescue analgesia (RR 0.89, 95% CI 0.56 to 1.42). 
When clonidine 4 µg/kg was compared to clonidine 2 µg/kg, there was a statistically significant difference in the number of patients requiring additional analgesia, in favour of the higher dose, as reported by a single, higher-quality trial (RR 0.38, 95% CI 0.23 to 0.65).\nThe effect of clonidine on pain scores was hard to interpret due to differences in study methodology, the doses and route of drug administration, and the pain scale used. However, when given at a dose of 4 µg/kg, clonidine may have reduced analgesia requirements after surgery. No significant side effects of clonidine, such as severe hypotension, bradycardia, or excessive sedation requiring intervention, were reported. However, several studies used atropine prophylactically with the aim of preventing such adverse effects.\nAuthors' conclusions\nThere were only 11 relevant trials studying 742 children having surgery where premedication with clonidine was compared to placebo or other drug treatment. Despite heterogeneity between trials, clonidine premedication in an adequate dosage (4 µg/kg) was likely to have a beneficial effect on postoperative pain in children. Side effects were minimal, but some of the studies used atropine prophylactically with the intention of preventing bradycardia and hypotension. Further research is required to determine under what conditions clonidine premedication is most effective in providing postoperative pain relief in children.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of giving clonidine before anaesthesia (that is, as a premedication) on postoperative pain in children."
}
] |
query-laysum
|
8275
|
[
{
"role": "user",
"content": "Abstract: Background\nThe sedation needs of critically ill patients have been recognized as a core component of critical care that is vital to assist recovery and ensure humane treatment. Evidence suggests that sedation requirements are not always optimally managed. Suboptimal sedation, both under- and over-sedation, have been linked to short-term (e.g. length of stay) and long-term (e.g. psychological recovery) outcomes. Strategies to improve sedation assessment and management have been proposed. This review was originally published in 2015 and updated in 2018.\nObjectives\nTo assess the effects of protocol-directed sedation management compared to usual care on the duration of mechanical ventilation, intensive care unit (ICU) and hospital mortality and other patient outcomes in mechanically ventilated ICU adults and children.\nSearch methods\nWe used the standard search strategy of the Cochrane Anaesthesia, Critical and Emergency Care Group (ACE). We searched the Cochrane Central Register of Controlled trials (CENTRAL) (December 2017), MEDLINE (OvidSP) (2013 to December 2017), Embase (OvidSP) (2013 to December 2017), CINAHL (BIREME host) (2013 to December 2017), LILACS (2013 to December 2017), trial registries and reference lists of articles. (The original search was run in November 2013).\nSelection criteria\nWe included randomized controlled trials (RCTs) and quasi-randomized controlled trials conducted in ICUs comparing management with and without protocol-directed sedation in intensive care adults and children.\nData collection and analysis\nTwo authors screened the titles and abstracts and then full-text reports identified from our electronic search. We assessed seven domains of potential risk of bias for the included studies. We examined clinical, methodological and statistical heterogeneity and used the random-effects model for meta-analysis where we considered it appropriate. We calculated the mean difference (MD) for duration of mechanical ventilation and risk ratio (RR) for mortality across studies, with 95% confidence intervals (CIs).\nMain results\nWe included four studies with a total of 3323 participants (864 adults and 2459 paediatrics) in this update. Three studies were single-centre, patient-level RCTs and one study was a multicentre cluster-RCT. The settings were in metropolitan centres and included general, mixed medical-surgical, medical only and a range of paediatric units. All four included studies compared the use of protocol-directed sedation, specifically protocols delivered by nurses, with usual care. We rated the risk of selection bias due to random sequence generation low for two studies and unclear for two studies. The risk of bias was highly variable across the domains and studies, with the risk of selection and performance bias generally rated high and the risk of detection and attrition bias generally rated low.\nWhen comparing protocol-directed sedation with usual care, there was no clear evidence of difference in duration of mechanical ventilation in hours for the entire duration of the first ICU stay for each patient (MD -28.15 hours, 95% CI -69.15 to 12.84; I2 = 85%; 4 studies; adjusted sample 2210 participants; low-quality evidence). There was no clear evidence of difference in ICU mortality (RR 0.77, 95% CI 0.39 to 1.50; I2 = 67%; 2 studies; 513 participants; low-quality evidence), or hospital mortality (RR 0.90, 95% CI 0.72 to 1.13; I2 = 10%; 3 studies; adjusted sample 2088 participants; low-quality evidence). 
There was no clear evidence of difference in ICU length of stay (MD -1.70 days, 95% CI -3.71 to 0.31; I2 = 82%; 4 studies; adjusted sample of 2123 participants; low-quality evidence); however, there was evidence of a significant reduction in hospital length of stay (MD -3.09 days, 95% CI -5.08 to -1.10; I2 = 2%; 3 studies; adjusted sample of 1922 participants; moderate-quality evidence). There was no clear evidence of difference in the incidence of self-extubation (RR 0.88, 95% CI 0.55 to 1.42; I2 = 0%; 2 studies; adjusted sample of 1687 participants; high-quality evidence), or incidence of tracheostomy (RR 0.67, 95% CI 0.35 to 1.30; I2 = 66%; 3 studies; adjusted sample of 2008 participants; low-quality evidence). Only one study examined incidence of reintubation, therefore we could not pool data; there was no clear evidence of difference (RR 0.65, 95% CI 0.35 to 1.24; 1 study; 321 participants; low-quality evidence).\nAuthors' conclusions\nThere is currently limited evidence from RCTs evaluating the effectiveness of protocol-directed sedation on patient outcomes. The four included RCTs reported conflicting results and heterogeneity limited the interpretation of results for the primary outcomes of duration of mechanical ventilation and mortality. Further studies, taking into account differing contextual characteristics, are necessary to inform future practice. Methodological strategies to reduce the risk of bias need to be considered in future studies.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for studies that examined protocol-directed sedation in adult and paediatric intensive care patients. We identified four studies with 3308 participants (864 adults and 2459 paediatrics) to include in this review."
}
] |
query-laysum
|
8276
|
[
{
"role": "user",
"content": "Abstract: Background\nRed blood cell (RBC) transfusion is a common treatment for anaemia in many conditions. The safety and efficacy of transfusing RBC units that have been stored for different durations before a transfusion is a current concern. The duration of storage for a RBC unit can be up to 42 days. If evidence from randomised controlled trials (RCT) were to indicate that clinical outcomes are affected by storage duration, the implications for inventory management and clinical practice would be significant.\nObjectives\nTo assess the effects of using red blood cells (RBCs) stored for a shorter versus a longer duration, or versus RBCs stored for standard practice duration, in people requiring a RBC transfusion.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, PubMed (for epublications), LILACS, Transfusion Evidence Library, Web of Science CPCI-S and four international clinical trial registries on 20 November 2017.\nSelection criteria\nWe included RCTs that compared transfusion of RBCs of shorter versus longer storage duration, or versus standard practice storage duration.\nData collection and analysis\nWe used standard Cochrane methods.\nMain results\nWe included 22 trials (42,835 participants) in this review.\nThe GRADE quality of evidence ranged from very low to moderate for our primary outcome of in-hospital and short-term mortality reported at different time points.\nTransfusion of RBCs of shorter versus longer storage duration\nEleven trials (2249 participants) compared transfusion of RBCs of shorter versus longer storage duration. Two trials enrolled low birth weight neonates, two enrolled children with severe anaemia secondary to malaria or sickle cell disease, and eight enrolled adults across a range of clinical settings (intensive care, cardiac surgery, major elective surgery, hospitalised in-patients, haematology outpatients). We judged only two trials to be at low risk of bias across all domains; most trials had an unclear risk for multiple domains.\nTransfusion of RBCs of shorter versus longer storage duration probably leads to little or no difference in mortality at seven-day follow-up (risk ratio (RR) 1.42, 95% confidence interval (CI) 0.66 to 3.06; 1 trial, 3098 participants; moderate quality evidence) or 30-day follow-up (RR 0.85, 95%CI 0.50 to 1.45; 2 trials, 1121 participants; moderate quality evidence) in adults undergoing major elective cardiac or non-cardiac surgery.\nFor neonates, no studies reported on the primary outcome of in-hospital or short-term mortality. At 40 weeks gestational age, the effect of RBCs of shorter versus longer storage duration on the risk of death was uncertain, as the quality of evidence is very low (RR 0.90, 95% CI 0.41 to 1.85; 1 trial, 52 participants).\nThe effect of RBCs of shorter versus longer storage duration on the risk of death in children with severe anaemia was also uncertain within 24 hours of transfusion (RR 1.50, 95% CI 0.43 to 5.25; 2 trials, 364 participants; very low quality evidence), or at 30-day follow-up (RR 1.40, 95% CI 0.45 to 4.31; 1 trial, 290 participants; low quality evidence).\nOnly one trial, in children with severe anaemia (290 participants), reported adverse transfusion reactions. 
Only one child in each arm experienced an adverse reaction within 24 hours of transfusion.\nTransfusion of RBCs of shorter versus standard practice storage duration\nEleven trials (40,588 participants) compared transfusion of RBCs of shorter versus standard practice storage duration. Three trials enrolled critically ill term neonates; two of these enrolled very low birth weight neonates. There were no trials in children. Eight trials enrolled critically ill and non-critically ill adults, with most being hospitalised. We judged four trials to be at low risk of bias across all domains, with the others having an unclear risk of bias across multiple domains.\nTransfusion of RBCs of shorter versus standard practice storage duration probably leads to little or no difference in adult in-hospital mortality (RR 1.05, 95% CI 0.97 to 1.14; 4 trials, 25,704 participants; moderate quality evidence), ICU mortality (RR 1.06, 95% CI 0.98 to 1.15; 3 trials, 13,066 participants; moderate quality evidence), or 30-day mortality (RR 1.04, 95% CI 0.96 to 1.13; 4 trials, 7510 participants; moderate quality evidence).\nTwo of the three trials that enrolled neonates reported that there were no adverse transfusion reactions. One trial reported an isolated case of cytomegalovirus infection in participants assigned to the standard practice storage duration group. Two trials in critically ill adults reported data on transfusion reactions: one observed no difference in acute transfusion reactions between arms (RR 0.67, 95% CI 0.19 to 2.36, 2413 participants), but the other observed more febrile nonhaemolytic reactions in the shorter storage duration arm (RR 1.48, 95% CI 1.13 to 1.95, 4919 participants).\nTrial sequential analysis showed that we may now have sufficient evidence to reject a 5% relative risk increase or decrease of death within 30 days when transfusing RBCs of shorter versus longer storage duration across all patient groups.\nAuthors' conclusions\nThe effect of storage duration on clinically important outcomes has now been investigated in large, high quality RCTs, predominantly in adults. There appears to be no evidence of an effect on mortality that is related to length of storage of transfused RBCs. However, the quality of evidence in neonates and children is low. The current practice in blood banks of using the oldest available RBCs can be continued safely. Additional RCTs are not required, but research using alternative study designs should focus on particular subgroups (e.g. those requiring multiple RBC units) and on factors affecting RBC quality.\n\nGiven the provided abstract, please respond to the following query: \"Key results\nTransfusion of blood stored for a shorter duration versus blood stored for a longer duration\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Transfusion of blood stored for a shorter duration versus blood stored for a longer duration\nEight studies focused on adults, two on children with severe anaemia (low blood count) and two on low birth weight neonates. In adults undergoing major surgery, transfusion of blood stored for a shorter duration versus blood stored for a longer duration probably leads to little or no difference in risk of death up to 30 days but the quality of evidence is moderate. In children we were uncertain whether transfusion of blood stored for a shorter duration increases or decreases risk of death, because the quality of the evidence is very low."
}
] |
query-laysum
|
8277
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the 2009 Cochrane overview and network meta-analysis (NMA) of biologics for rheumatoid arthritis (RA).\nObjectives\nTo assess the benefits and harms of nine biologics (abatacept, adalimumab, anakinra, certolizumab pegol, etanercept, golimumab, infliximab, rituximab, tocilizumab) and small molecule tofacitinib, versus comparator (MTX, DMARD, placebo (PL), or a combination) in adults with rheumatoid arthritis who have failed to respond to methotrexate (MTX) or other disease-modifying anti-rheumatic drugs (DMARDs), i.e., MTX/DMARD incomplete responders (MTX/DMARD-IR).\nMethods\nWe searched for randomized controlled trials (RCTs) in the Cochrane Central Register of Controlled Trials (CENTRAL) (via The Cochrane Library Issue 6, June 2015), MEDLINE (via OVID 1946 to June 2015), and EMBASE (via OVID 1947 to June 2015). Data extraction, risk of bias and GRADE assessments were done in duplicate. We calculated both direct estimates using standard meta-analysis and used Bayesian mixed treatment comparisons approach for NMA estimates to calculate odds ratios (OR) and 95% credible intervals (CrI). We converted OR to risk ratios (RR) which are reported in the abstract for the ease of interpretation.\nMain results\nThis update included 73 new RCTs for a total of 90 RCTs; 79 RCTs with 32,874 participants provided usable data. Few trials were at high risk of bias for blinding of assessors/participants (13% to 21%), selective reporting (4%) or major baseline imbalance (8%); a large number had unclear risk of bias for random sequence generation (68%) or allocation concealment (74%).\nBased on direct evidence of moderate quality (downgraded for inconsistency), biologic+MTX/DMARD was associated with a statistically significant and clinically meaningful improvement in ACR50 versus comparator (RR 2.71 (95% confidence interval (CI) 2.36 to 3.10); absolute benefit 24% more patients (95% CI 19% to 29%), number needed to treat for an additional beneficial outcome (NNTB) = 5 (4 to 6). NMA estimates for ACR50 in tumor necrosis factor (TNF) biologic+MTX/DMARD (RR 3.23 (95% credible interval (Crl) 2.75 to 3.79), non-TNF biologic+MTX/DMARD (RR 2.99; 95% Crl 2.36 to 3.74), and anakinra + MTX/DMARD (RR 2.37 (95% Crl 1.00 to 4.70) were similar to the direct estimates.\nBased on direct evidence of moderate quality (downgraded for inconsistency), biologic+MTX/DMARD was associated with a clinically and statistically important improvement in function measured by the Health Assessment Questionnaire (0 to 3 scale, higher = worse function) with a mean difference (MD) based on direct evidence of -0.25 (95% CI -0.28 to -0.22); absolute benefit of -8.3% (95% CI -9.3% to -7.3%), NNTB = 3 (95% CI 2 to 4). NMA estimates for TNF biologic+MTX/DMARD (absolute benefit, -10.3% (95% Crl -14% to -6.7%) and non-TNF biologic+MTX/DMARD (absolute benefit, -7.3% (95% Crl -13.6% to -0.67%) were similar to respective direct estimates.\nBased on direct evidence of moderate quality (downgraded for inconsistency), biologic+MTX/DMARD was associated with clinically and statistically significantly greater proportion of participants achieving remission in RA (defined by disease activity score DAS < 1.6 or DAS28 < 2.6) versus comparator (RR 2.81 (95% CI, 2.23 to 3.53); absolute benefit 18% more patients (95% CI 12% to 25%), NNTB = 6 (4 to 9)). 
NMA estimates for TNF biologic+MTX/DMARD (absolute improvement 17% (95% CrI 11% to 23%)) and non-TNF biologic+MTX/DMARD (absolute improvement 19% (95% CrI 12% to 28%)) were similar to respective direct estimates.\nBased on direct evidence of moderate quality (downgraded for inconsistency), radiographic progression (scale 0 to 448) was statistically significantly reduced in those on biologics + MTX/DMARDs versus comparator, MD -2.61 (95% CI -4.08 to -1.14). The absolute reduction was small, -0.58% (95% CI -0.91% to -0.25%), and we are unsure of the clinical relevance of this reduction. NMA estimates of TNF biologic+MTX/DMARD (absolute reduction -0.67% (95% CrI -1.4% to -0.12%)) and non-TNF biologic+MTX/DMARD (absolute reduction -0.68% (95% CrI -2.36% to 0.92%)) were similar to respective direct estimates.\nBased on direct evidence of moderate quality (downgraded for imprecision), results for withdrawals due to adverse events were inconclusive, with wide confidence intervals encompassing the null effect and evidence of an important increase in withdrawals, RR 1.11 (95% CI 0.96 to 1.30). The NMA estimates of TNF biologic+MTX/DMARD (RR 1.24 (95% CrI 0.99 to 1.57)) and non-TNF biologic+MTX/DMARD (RR 1.20 (95% CrI 0.87 to 1.67)) were similarly inconclusive and downgraded to low for both imprecision and indirectness.\nBased on direct evidence of high quality, biologic+MTX/DMARD was associated with a clinically significantly increased risk (statistically borderline significant) of serious adverse events (Peto OR [can be interpreted as RR due to low event rate] 1.12 (95% CI 0.99 to 1.27); absolute risk 1% (0% to 2%)). As well, the NMA estimate for TNF biologic+MTX/DMARD (Peto OR 1.20 (95% CrI 1.01 to 1.43)) showed moderate quality evidence of an increase in the risk of serious adverse events. The other two NMA estimates were downgraded to low quality due to imprecision and indirectness and had wide confidence intervals resulting in uncertainty around the estimates: non-TNF biologics + MTX/DMARD: 1.07 (95% CrI 0.89 to 1.29) and anakinra: RR 1.06 (95% CrI 0.65 to 1.75).\nBased on direct evidence of low quality (downgraded for serious imprecision), results were inconclusive for cancer (Peto OR 1.07 (95% CI 0.68 to 1.68)) for all biologic+MTX/DMARD combinations. The NMA estimates of TNF biologic+MTX/DMARD (Peto OR 1.21 (95% CrI 0.63 to 2.38)) and non-TNF biologic+MTX/DMARD (Peto OR 0.99 (95% CrI 0.58 to 1.78)) were similarly inconclusive and downgraded to low quality for both imprecision and indirectness.\nThe main results text shows the results for tofacitinib and differences between medications.\nAuthors' conclusions\nBased primarily on RCTs of 6 months' to 12 months' duration, there is moderate quality evidence that the use of biologic+MTX/DMARD in people with rheumatoid arthritis who have failed to respond to MTX or other DMARDs results in clinically important improvement in function and higher ACR50 and remission rates, and a greater risk of serious adverse events than the comparator (MTX/DMARD/PL; high quality evidence). Radiographic progression is slowed but its clinical relevance is uncertain. Results were inconclusive for whether biologics + MTX/DMARDs are associated with an increased risk of cancer or withdrawals due to adverse events.\n\nGiven the provided abstract, please respond to the following query: \"ACR50 (number of tender or swollen joints and other doctor/patient assessed aspects)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Twenty-four more people out of 100 experienced improvement in the symptoms of their rheumatoid arthritis with biologics + MTX/DMARD (24% absolute improvement).\nThirty-eight people out of 100 who were on biologic+ MTX experienced improvement compared to 14 people out of 100 who were on MTX/DMARD/placebo."
}
] |
query-laysum
|
8278
|
[
{
"role": "user",
"content": "Abstract: Background\nVulvovaginal candidiasis (VVC) is estimated to be the second most common form of infection after bacterial vaginosis. The ability of probiotics in maintaining and recovering the normal vaginal microbiota, and their potential ability to resist Candidas give rise to the concept of using probiotics for the treatment of VVC.\nObjectives\nTo assess the effectiveness and safety of probiotics for the treatment of vulvovaginal candidiasis in non-pregnant women.\nSearch methods\nWe searched the following databases to October 2017: Sexually Transmitted Infections Cochrane Review Group's Specialized Register, CENTRAL, MEDLINE, Embase and eight other databases. We searched in following international resources: World Health Organization International Clinical Trials Registry Platform, ClinicalTrials.gov, Web of Science and OpenGrey. We checked specialty journals, reference lists of published articles and conference proceedings. We collected information from pharmaceutical companies and experts in the field.\nSelection criteria\nRandomized controlled trials (RCT) using probiotics, alone or as adjuvants to conventional antifungal drugs, to treat VVC in non-pregnant women. Trials recruiting women with recurrent VVC, coinfection with other vulvovaginal infections, diabetes mellitus, immunosuppressive disorders or taking immunosuppressant medication were ineligible for inclusion. Probiotics were included if they were made from single or multiple species and in any preparation type/dosage/route of administration.\nData collection and analysis\nTwo review authors independently assessed trials for eligibility and quality and extracted data. We resolved any disagreements through consensus. The quality of the evidence was assessed using the GRADE approach.\nMain results\nTen RCTs (1656 participants) met our inclusion criteria, and pharmaceutical industry funded none of these trials. All trials used probiotics as adjuvant therapy to antifungal drugs. Probiotics increased the rate of short-term clinical cure (risk ratio (RR) 1.14, 95% confidence interval (CI) 1.05 to 1.24, 695 participants, 5 studies, low quality evidence) and mycological cure (RR 1.06, 95% CI 1.02 to 1.10, 969 participants, 7 studies, low quality evidence) and decreased relapse rate at one month (RR 0.34, 95% CI 0.17 to 0.68, 388 participants, 3 studies, very low quality evidence). However, this effect did not translate into a higher frequency of long-term clinical cure (one month after treatment: RR 1.07, 95% CI 0.86 to 1.33, 172 participants, 1 study, very low quality evidence; three months after treatment: RR 1.30, 95% CI 1.00 to 1.70, 172 participants, one study, very low quality evidence) or mycological cure (one month after treatment: RR 1.26, 95% CI 0.93 to 1.71, 627 participants, 3 studies, very low quality evidence; three months after treatment: RR 1.16, 95% CI 1.00 to 1.35, 172 participants, one study, very low quality evidence). Probiotics use did not increase the frequency of serious (RR 0.80, 95% CI 0.22 to 2.94; 440 participants, 2 studies, low quality evidence). 
We found no eligible RCTs reporting outcomes such as time to first relapse, need for additional treatment at the end of therapy, patient satisfaction, or cost-effectiveness.\nAuthors' conclusions\nLow and very low quality evidence shows that, compared with conventional treatment alone, the use of probiotics as an adjuvant therapy could increase the rate of short-term clinical and mycological cure and decrease the relapse rate at one month, but this did not translate into a higher frequency of long-term clinical or mycological cure. Probiotics use does not seem to increase the frequency of serious or non-serious adverse events. There is a need for well-designed RCTs with standardized methodologies, longer follow-up and larger sample sizes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was low or very low in this review, so we have very little confidence in the results."
}
] |
query-laysum
|
8279
|
[
{
"role": "user",
"content": "Abstract: Background\nAcne is a common, economically burdensome condition that can cause psychological harm and, potentially, scarring. Topical benzoyl peroxide (BPO) is a widely used acne treatment; however, its efficacy and safety have not been clearly evaluated.\nObjectives\nTo assess the effects of BPO for acne.\nSearch methods\nWe searched the following databases up to February 2019: the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase, and LILACS. We also searched five trials registers and checked the reference lists of relevant randomised controlled trials (RCTs) and systematic reviews.\nSelection criteria\nWe included RCTs that compared topical BPO used alone (including different formulations and concentrations of BPO) or as part of combination treatment against placebo, no treatment, or other active topical medications for clinically diagnosed acne (used alone or in combination with other topical drugs not containing BPO) on the face or trunk.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane. Primary outcome measures were 'participant global self-assessment of acne improvement' and 'withdrawal due to adverse events in the whole course of a trial'. 'Percentage of participants experiencing any adverse event in the whole course of a trial' was a key secondary outcome.\nMain results\nWe included 120 trials (29,592 participants randomised in 116 trials; in four trials the number of randomised participants was unclear). Ninety-one studies included males and females. When reported, 72 trials included participants with mild to moderate acne, 26 included participants with severe acne, and the mean age of participants ranged from 18 to 30 years.\nOur included trials assessed BPO as monotherapy, as add-on treatment, or combined with other active treatments, as well as BPO of different concentrations and BPO delivered through different vehicles. Comparators included different concentrations or formulations of BPO, placebo, no treatment, or other active treatments given alone or combined. Treatment duration in 80 trials was longer than eight weeks and was only up to 12 weeks in 108 trials. Industry funded 50 trials; 63 trials did not report funding. We commonly found high or unclear risk of performance, detection, or attrition bias. Trial setting was under-reported but included hospitals, medical centres/departments, clinics, general practices, and student health centres. We reported on outcomes assessed at the end of treatment, and we classified treatment periods as short-term (two to four weeks), medium-term (five to eight weeks), or long-term (longer than eight weeks).\nFor 'participant-reported acne improvement', BPO may be more effective than placebo or no treatment (risk ratio (RR) 1.27, 95% confidence interval (CI) 1.12 to 1.45; 3 RCTs; 2234 participants; treatment for 10 to 12 weeks; low-certainty evidence). 
Based on low-certainty evidence, there may be little to no difference between BPO and adapalene (RR 0.99, 95% CI 0.90 to 1.10; 5 RCTs; 1472 participants; treatment for 11 to 12 weeks) or between BPO and clindamycin (RR 0.95, 95% CI 0.68 to 1.34; 1 RCT; 240 participants; treatment for 10 weeks) (outcome not reported for BPO versus erythromycin or salicylic acid).\nFor 'withdrawal due to adverse effects', risk of treatment discontinuation may be higher with BPO compared with placebo or no treatment (RR 2.13, 95% CI 1.55 to 2.93; 24 RCTs; 13,744 participants; treatment for 10 to 12 weeks; low-certainty evidence); the most common causes of withdrawal were erythema, pruritus, and skin burning. Only very low-certainty evidence was available for the following comparisons: BPO versus adapalene (RR 1.85, 95% CI 0.94 to 3.64; 11 RCTs; 3295 participants; treatment for 11 to 24 weeks; causes of withdrawal not clear), BPO versus clindamycin (RR 1.93, 95% CI 0.90 to 4.11; 8 RCTs; 3330 participants; treatment for 10 to 12 weeks; causes of withdrawal included local hypersensitivity, pruritus, erythema, face oedema, rash, and skin burning), erythromycin (RR 1.00, 95% CI 0.07 to 15.26; 1 RCT; 60 participants; treatment for 8 weeks; withdrawal due to dermatitis), and salicylic acid (no participants had adverse event-related withdrawal; 1 RCT; 59 participants; treatment for 12 weeks). There may be little to no difference between these groups in terms of withdrawal; however, we are unsure of the results because the evidence is of very low certainty.\nFor 'proportion of participants experiencing any adverse event', very low-certainty evidence leaves us uncertain about whether BPO increased adverse events when compared with placebo or no treatment (RR 1.40, 95% CI 1.15 to 1.70; 21 RCTs; 11,028 participants; treatment for 10 to 12 weeks), with adapalene (RR 0.71, 95% CI 0.50 to 1.00; 7 RCTs; 2120 participants; treatment for 11 to 24 weeks), with erythromycin (no participants reported any adverse events; 1 RCT; 89 participants; treatment for 10 weeks), or with salicylic acid (RR 4.77, 95% CI 0.24 to 93.67; 1 RCT; 41 participants; treatment for 6 weeks). Moderate-certainty evidence shows that the risk of adverse events may be increased for BPO versus clindamycin (RR 1.24, 95% CI 0.97 to 1.58; 6 RCTs; 3018 participants; treatment for 10 to 12 weeks); however, the 95% CI indicates that BPO might make little to no difference. Most reported adverse events were mild to moderate, and local dryness, irritation, dermatitis, erythema, application site pain, and pruritus were the most common.\nAuthors' conclusions\nCurrent evidence suggests that BPO as monotherapy or add-on treatment may be more effective than placebo or no treatment for improving acne, and there may be little to no difference between BPO and either adapalene or clindamycin. Our key efficacy evidence is based on participant self-assessment; trials of BPO versus erythromycin or salicylic acid did not report this outcome.\nFor adverse effects, the evidence is very uncertain regarding BPO compared with adapalene, erythromycin, or salicylic acid. However, risk of treatment discontinuation may be higher with BPO compared with placebo or no treatment. Withdrawal may be linked to tolerability rather than to safety. Risk of mild to moderate adverse events may be higher with BPO compared with clindamycin.\nFurther trials should assess the comparative effects of different preparations or concentrations of BPO and combination BPO versus monotherapy. 
These trials should fully assess and report adverse effects and patient-reported outcomes measured on a standardised scale.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "As a common skin disease, acne vulgaris affects the physical, mental, and social well-being of millions of adolescents and young adults. A wide range of treatments for acne vulgaris are available, and topical BPO has been recommended as a priority therapy to be used alone or combined with other topical or oral treatments, depending on acne severity. However, the benefits and harms of BPO have yet to be evaluated."
}
] |
query-laysum
|
8280
|
[
{
"role": "user",
"content": "Abstract: Background\nPostherpetic neuralgia (PHN) is a painful and refractory complication of herpes zoster. Treatments are either partially or totally ineffective for many people with PHN. Antiviral agents, used at the time of the rash, have been proposed as an intervention to prevent the development of PHN. This is the first update since the first publication of the review in 2009.\nObjectives\nTo assess the effectiveness of antiviral agents in preventing PHN.\nSearch methods\nOn 26 April 2013, we updated the searches in the Cochrane Neuromuscular Disease Group Specialized Register, CENTRAL, MEDLINE, EMBASE, LILACS, and the Chinese Biomedical Retrieval System. We checked the references of published studies to identify additional trials, and contacted authors to obtain additional data. We searched other databases in The Cochrane Library for information for the Discussion and two clinical trials registries for ongoing trials.\nSelection criteria\nWe considered all randomised controlled trials (RCTs) of antiviral treatment given within 72 hours after the onset of herpes zoster for preventing PHN. There were no language restrictions.\nData collection and analysis\nTwo authors independently selected trials, evaluated the risk of bias in included trials, and extracted and analysed data.\nMain results\nSix RCTs with a total of 1211 participants were eligible; five trials evaluated oral aciclovir, and one, with 419 participants, evaluated oral famciclovir. We were able to conduct meta-analyses as there were sufficient similarities in the included studies, such as the reporting of the presence of PHN, duration of rash before treatment initiation and treatment regimen. For our primary outcome, based on three trials (609 participants) we found no significant difference between the aciclovir and control groups in the incidence of PHN four months after the onset of the acute herpetic rash (risk ratio (RR) 0.75, 95% confidence interval (CI) 0.51 to 1.11), nor was there a significant difference at six months (RR 1.05, 95% CI 0.87 to 1.27, two trials, 476 participants). In four of the trials (692 participants), there was some evidence for a reduction in the incidence of pain four weeks after the onset of rash. In the trial of famciclovir versus placebo, neither 500 mg nor 750 mg doses of famciclovir reduced the incidence of herpetic neuralgia significantly. The most commonly reported adverse events were nausea, vomiting, diarrhoea and headache for aciclovir, and headache and nausea for famciclovir. For neither treatment was the incidence of adverse events significantly different from placebo. None of the studies were at high risk of bias, although the risk of bias was unclear in at least one domain for all but one study. We found no new RCTs when we updated the searches in April 2013.\nAuthors' conclusions\nThere is high quality evidence that oral aciclovir does not reduce the incidence of PHN significantly. In addition, there is insufficient evidence to determine the effect of other antiviral treatments; therefore, further well-designed RCTs are needed to investigate famciclovir or other new antiviral agents in preventing PHN. Future trials should pay more attention to the severity of pain and quality of life of participants, and should be conducted among different subgroups of people, such as people who are immunocompromised.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of antiviral medicines for preventing postherpetic neuralgia (PHN)."
}
] |
query-laysum
|
8281
|
[
{
"role": "user",
"content": "Abstract: Background\nThe accessibility of health services is an important factor that affects the health outcomes of populations. A mobile clinic provides a wide range of services but in most countries the main focus is on health services for women and children. It is anticipated that improvement of the accessibility of health services via mobile clinics will improve women's and children's health.\nObjectives\nTo evaluate the impact of mobile clinic services on women's and children's health.\nSearch methods\nFor related systematic reviews, we searched the Database of Abstracts of Reviews of Effectiveness (DARE), CRD; Health Technology Assessment Database (HTA), CRD; NHS Economic Evaluation Database (NHS EED), CRD (searched 20 February 2014).\nFor primary studies, we searched ISI Web of Science, for studies that have cited the included studies in this review (searched 18 January 2016); WHO ICTRP, and ClinicalTrials.gov (searched 23 May 2016); Cochrane Central Register of Controlled Trials (CENTRAL), part of The Cochrane Library.www.cochranelibrary.com (including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register) (searched 7 April 2015); MEDLINE, OvidSP (searched 7 April 2015); Embase, OvidSP (searched 7 April 2015); CINAHL, EbscoHost (searched 7 April 2015); Global Health, OvidSP (searched 8 April 2015); POPLINE, K4Health (searched 8 April 2015); Science Citation Index and Social Sciences Citation Index, ISI Web of Science (searched 8 April 2015); Global Health Library, WHO (searched 8 April 2015); PAHO, VHL (searched 8 April 2015); WHOLIS, WHO (searched 8 April 2015); LILACS, VHL (searched 9 April 2015).\nSelection criteria\nWe included individual- and cluster-randomised controlled trials (RCTs) and non-RCTs. We included controlled before-and-after (CBA) studies provided they had at least two intervention sites and two control sites. Also, we included interrupted time series (ITS) studies if there was a clearly defined point in time when the intervention occurred and at least three data points before and three after the intervention. We defined the intervention of a mobile clinic as a clinic vehicle with a healthcare provider (with or without a nurse) and a driver that visited areas on a regular basis. The participants were women (18 years or older) and children (under the age of 18 years) in low-, middle-, and high-income countries.\nData collection and analysis\nTwo review authors independently screened the titles and abstracts of studies identified by the search strategy, extracted data from the included studies using a specially-designed data extraction form based on the Cochrane EPOC Group data collection checklist, and assessed full-text articles for eligibility. All authors performed analyses, 'Risk of bias' assessments, and assessed the quality of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nTwo cluster-RCTs met the inclusion criteria of this review. Both studies were conducted in the USA.\nOne study tested whether offering onsite mobile mammography combined with health education was more effective at increasing breast cancer screening rates than offering health education only, including reminders to attend a static clinic for mammography. 
Women in the group offered mobile mammography and health education may be more likely to undergo mammography within three months of the intervention than those in the comparison group (55% versus 40%; odds ratio (OR) 1.83, 95% CI 1.22 to 2.74; low certainty evidence).\nA cost-effectiveness analysis of mammography at mobile versus static units found that the total cost per patient screened may be higher for mobile units than for static units. The incremental costs per patient screened for a mobile over a stationary unit were USD 61 and USD 45 for a mobile full digital unit and a mobile film unit respectively.\nThe second study compared asthma outcomes for children aged two to six years who received asthma care from a mobile asthma clinic and children who received standard asthma care from the usual (static) primary provider. Children who receive asthma care from a mobile asthma clinic may experience little or no difference in symptom-free days, urgent care use and caregiver-reported medication use compared to children who receive care from their usual primary care provider. All of the evidence was of low certainty.\nAuthors' conclusions\nThe paucity of evidence and the restricted range of contexts from which evidence is available make it difficult to draw conclusions on the impacts of mobile clinics on women's and children's health compared to static clinics. Further rigorous studies are needed in low-, middle-, and high-income countries to evaluate the impacts of mobile clinics on women's and children's health.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to assess the effects of mobile clinics on women’s and children’s health. Cochrane researchers searched for and analysed all relevant studies to answer this question."
}
] |
query-laysum
|
8282
|
[
{
"role": "user",
"content": "Abstract: Background\nPrimary healthcare, particularly Indigenous-led services, are well placed to deliver services that reflect the needs of Indigenous children and their families. Important characteristics identified by families for primary health care include services that support families, accommodate sociocultural needs, recognise extended family child-rearing practices, and Indigenous ways of knowing and doing business. Indigenous family-centred care interventions have been developed and implemented within primary healthcare services to plan, implement, and support the care of children, immediate and extended family and the home environment. The delivery of family-centred interventions can be through environmental, communication, educational, counselling, and family support approaches.\nObjectives\nTo evaluate the benefits and harms of family-centred interventions delivered by primary healthcare services in Canada, Australia, New Zealand, and the USA on a range of physical, psychosocial, and behavioural outcomes of Indigenous children (aged from conception to less than five years), parents, and families.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 22 September 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs), cluster RCTs, quasi-RCTs, controlled before-after studies, and interrupted time series of family-centred care interventions that included Indigenous children aged less than five years from Canada, Australia, New Zealand, and the USA. Interventions were included if they met the assessment criteria for family-centred interventions and were delivered in primary health care. Comparison interventions could include usual maternal and child health care or one form of family-centred intervention versus another.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. overall health and well-being, 2. psychological health and emotional behaviour of children, 3. physical health and developmental health outcomes of children, 4. family health-enhancing lifestyle or behaviour outcomes, 5. psychological health of parent/carer. 6. adverse events or harms. Our secondary outcomes were 7. parenting knowledge and awareness, 8. family evaluation of care, 9. service access and utilisation, 10. family-centredness of consultation processes, and 11. economic costs and outcomes associated with the interventions. We used GRADE to assess the certainty of the evidence for our primary outcomes.\nMain results\nWe included nine RCTs and two cluster-RCTs that investigated the effect of family-centred care interventions delivered by primary healthcare services for Indigenous early child well-being. There were 1270 mother–child dyads and 1924 children aged less than five years recruited. Seven studies were from the USA, two from New Zealand, one from Canada, and one delivered in both Australia and New Zealand. The focus of interventions varied and included three studies focused on early childhood caries; three on childhood obesity; two on child behavioural problems; and one each on negative parenting patterns, child acute respiratory illness, and sudden unexpected death in infancy. Family-centred education was the most common type of intervention delivered. Three studies compared family-centred care to usual care and seven studies provided some 'minimal' intervention to families such as education in the form of pamphlets or newsletters. 
One study provided a minimal intervention during the child's first 24 months and then the family-centred care intervention for one year. No studies had low or unclear risk of bias across all domains. All studies had a high risk of bias for the blinding of participants and personnel domain.\nFamily-centred care may improve the overall health and well-being of Indigenous children and their families, but the evidence is very uncertain. The pooled effect estimate from 11 studies suggests that family-centred care improved the overall health and well-being of Indigenous children and their families compared with no family-centred care (standardised mean difference (SMD) 0.14, 95% confidence interval (CI) 0.03 to 0.24; 2386 participants).\nWe are very uncertain whether family-centred care compared to no family-centred care improves the psychological health and emotional behaviour of children as measured by the Infant Toddler Social Emotional Assessment (ITSEA) (Competence domain) (mean difference (MD) 0.04, 95% CI −0.03 to 0.11; 2 studies, 384 participants). We assessed the evidence as being very uncertain about the effect of family-centred care on physical health and developmental health outcomes of children. Pooled data from eight trials on physical health and developmental outcomes found there was little to no difference between the intervention and the control groups (SMD 0.13, 95% CI −0.00 to 0.26; 1961 participants). It is also very unclear whether family-centred care improved family health-enhancing lifestyle and behaviour outcomes. Nine studies measured family health-enhancing lifestyle and behaviours, and pooled analysis found there was little to no difference between groups (SMD 0.16, 95% CI −0.06 to 0.39; 1969 participants; very low-certainty evidence). There was very low-certainty evidence of little to no difference in the psychological health of parents and carers when they participated in family-centred care compared to any control group (SMD 0.10, 95% CI −0.03 to 0.22; 5 studies, 975 parents/carers).\nTwo studies stated that there were no adverse events as a result of the intervention. No additional data were provided. No studies reported outcomes from the perspective of health service providers, on families' evaluation of care, or on the family-centredness of consultation processes.\nAuthors' conclusions\nThere is some evidence to suggest that family-centred care delivered by primary healthcare services improves the overall health and well-being of Indigenous children, parents, and families. However, due to a lack of data, there was not enough evidence to determine whether specific outcomes such as child health and development improved as a result of family-centred interventions. Seven of the 11 studies delivered family-centred education interventions. Seven studies were from the USA and centred on two particular trials, the 'Healthy Children, Strong Families' and 'Family Spirit' trials. As the evidence is of very low certainty for all outcomes, further high-quality trials are needed to provide robust evidence for the use of family-centred care interventions for Indigenous children aged less than five years.\n\nGiven the provided abstract, please respond to the following query: \"What is family-centred care?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Family-centred care is a way of providing care that focuses on the needs of children and provides planned care around the whole family unit. It recognises that all family members are care recipients and aims to involve families in partnership with primary healthcare services."
}
] |
query-laysum
|
8283
|
[
{
"role": "user",
"content": "Abstract: Background\nSudden cardiac death is a significant cause of mortality in both the US and globally. However, 5% to 15% of people with sudden cardiac death have no structural abnormalities, and most of these events are attributed to underlying cardiac ion channelopathies. Rates of cardiac ion channelopathy diagnosis are increasing. However, the optimal treatment for such people is poorly understood and current guidelines rely primarily on expert opinion.\nObjectives\nTo compare the effect of implantable cardioverter defibrillators (ICD) with antiarrhythmic drugs or usual care in reducing the risk of all-cause mortality, fatal and non-fatal cardiovascular events, and adverse events in people with cardiac ion channelopathies.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, 2015, Issue 6), EMBASE, MEDLINE, Conference Proceedings Citation Index - Science (CPCI-S), ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) in July 2015. We applied no language restrictions.\nSelection criteria\nWe included all randomized controlled trials of people aged 18 years and older with ion channelopathies, including congenital long QT syndrome, congenital short QT syndrome, Brugada syndrome, or catecholaminergic polymorphic ventricular tachycardia. Participants must have been randomized to ICD implantation and compared to antiarrhythmic drug therapy or usual care.\nData collection and analysis\nTwo authors independently selected studies for inclusion and extracted the data. We included all-cause mortality, fatal and non-fatal cardiovascular events, and adverse events for our primary outcome analyses and non-fatal cardiovascular events, rates of inappropriate ICD firing, quality of life, and cost for our secondary outcome analyses. We calculated risk ratios (RR) and associated 95% confidence intervals (CIs) for dichotomous outcomes, both for independent and pooled study analyses.\nMain results\nFrom the 468 references identified after removing duplicates, we found two trials comprising 86 participants that met our inclusion criteria. Both trials included participants with Brugada syndrome who were randomized to ICD versus β-blocker therapy for secondary prevention for sudden cardiac death. Both studies were small, were performed by the same investigators, and exhibited a high risk of bias across multiple domains. In the group randomized to ICD therapy, there was a nine-fold lower risk of mortality compared with people randomized to medical therapy (0% with ICD versus 18% with medical therapy; RR 0.11, 95% CI 0.01 to 0.83; 2 trials, 86 participants). There was low quality evidence of a difference in the rates of combined fatal and non-fatal cardiovascular events, and the results were imprecise (26% with ICD versus 18% with medical therapy; RR 1.49, 95% CI 0.66 to 3.34; 2 trials, 86 participants). The rates of adverse events were higher in the ICD group, but these results were imprecise (28% with ICD versus 10% with medical therapy; RR 2.44, 95% CI 0.92 to 6.44; 2 trials, 86 participants). For secondary outcomes, the risk of non-fatal cardiovascular events was higher in the ICD group, but these results were imprecise and were driven entirely by appropriate ICD-termination of cardiac arrhythmias (26% with ICD versus 0% with medical therapy; RR 11.4, 95% CI 1.57 to 83.3; 2 trials, 86 participants). 
Approximately 25% of the ICD group experienced inappropriate ICD firing, all of which was corrected by device reprogramming. No data were available for quality of life or cost. We considered the quality of evidence low using the GRADE methodology, due to study limitations and imprecision of effects.\nAuthors' conclusions\nAmong people with Brugada syndrome who have survived a prior episode of sudden cardiac death, ICD therapy appeared to reduce mortality when compared to β-blocker therapy, but the true magnitude may be substantially different from the estimate of the effect because of study limitations and imprecision. Due to the large magnitude of effect, it is unlikely that there will be additional studies evaluating the role of ICDs for secondary prevention in this population. Further studies are necessary to determine the optimal treatment, if any, to prevent an initial episode of sudden cardiac death in people with cardiac ion channelopathies.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was low quality evidence that implantable cardioverter defibrillators lower mortality in people with inherited heart rate abnormalities who have survived a sudden cardiac arrest event because of few, small, low quality studies. These devices carry a higher risk of side effects than medical therapy. Further research is very likely to have an important impact on our confidence in the results."
}
] |
query-laysum
|
8284
|
[
{
"role": "user",
"content": "Abstract: Background\nAutism spectrum disorder (ASD; also known as autism) is a developmental disability that begins in childhood and is typically seen in around 1% to 2% of children. It is characterised by social communication difficulties and repetitive and restricted behaviours and routines that can have a negative impact on a child's quality of life, achievement at school, and social interactions with others. It has been hypothesised that memantine, which is traditionally used to treat dementia, may be effective in reducing the core symptoms of autism as well as some co-occurring symptoms such as hyperactivity and language difficulties. If memantine is being used to treat the core symptoms of autism, it is important to review the evidence of its effectiveness.\nObjectives\nTo assess the effects of memantine on the core symptoms of autism, including, but not limited to, social communication and stereotypical behaviours.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, nine other databases and three trials registers up to February 2022. We also checked reference lists of key studies and checked with experts in the field for any additional papers. We searched for retractions of the included studies in MEDLINE, Embase, and the Retraction Watch Database. No retractions or corrections were found.\nSelection criteria\nWe included randomised controlled trials (RCTs) of any dose of memantine compared with placebo in autistic people. We also included RCTs in which only one group received memantine, but both groups received the same additional therapy (e.g. a behaviour intervention).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were core autism symptoms and adverse effects. Secondary outcomes were language, intelligence, memory, adaptive behaviour, hyperactivity, and irritability. We used GRADE to assess certainty of evidence.\nMain results\nWe included three RCTs (two double-blind and one single-blind) with 204 participants that examined the short-term effect (immediately postintervention) of memantine in autistic people. Two studies took place in the USA and the other in Iran. All three studies focused on children and adolescents, with a mean age of 9.40 (standard deviation (SD) 2.26) years. Most participants were male (range across studies 73% to 87%). The diagnosis of ASD was based on the Diagnostic and Statistical Manual of Mental Disorders (4th edition; 4th edition, text revision; or 5th edition). To confirm the diagnosis, one study used the Autism Diagnostic Observation Schedule (ADOS) and the Autism Diagnostic Interview-Revised (ADI-R); one used ADOS, ADI-R or the Autism Diagnostic Interview Screener; and one used the Gilliam Autism Rating Scale. Dosage of memantine was based on the child's weight and ranged from 3 mg to 15 mg per day.\nComparisons \nTwo studies examined memantine compared with placebo; in the other study, both groups had a behavioural intervention while only one group was given memantine.\nRisk of bias\nAll studies were rated at high risk of bias overall, as they were at high or unclear risk of bias across all but four domains in one study, and all but two domains in the other two studies. One study was funded by Forest Laboratories, LLC, (Jersey City, New Jersey), Allergan. The study sponsor was involved in the study design, data collection (via contracted clinical investigator sites), analysis and interpretation of data, and the decision to present these results. 
The other two studies reported no financial support or sponsorship; though in one of the two, the study medication was an in-kind contribution from Forest Pharmaceuticals.\nPrimary outcomes\nThere was no clear evidence of a difference between memantine and placebo with respect to severity of core symptoms of autism, although we are very uncertain about the evidence. The standardised mean difference in autism symptoms score in the intervention group versus the control group was –0.74 standard deviations (95% confidence interval (CI) −2.07 to 0.58; 2 studies, 181 participants; very low-certainty evidence; medium effect size); lower scores indicate less severe autistic symptoms. Two studies (144 participants) recorded adverse effects that the authors deemed related to the study and found there may be no difference between memantine and placebo (odds ratio (OR) 0.64, 95% CI 0.17 to 2.39; low-certainty evidence).\nSecondary outcomes\nThere may be no difference between memantine and placebo on language (2 studies, 144 participants; low-certainty evidence); memory or adaptive behaviour (1 study, 23 participants; both low-certainty evidence); or hyperactivity or irritability (1 study, 121 participants; both low-certainty evidence).\nAuthors' conclusions\nIt is unclear whether memantine is an effective treatment for autistic children. None of the three included trials reported on the effectiveness of memantine in adults. Further studies using rigorous designs, larger samples, longer follow-up and clinically meaningful outcome measures that are important to autistic people and their families will strengthen our knowledge of the effects of memantine in autism.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found three studies with 204 people that had evaluated the effectiveness of memantine in autism. All studies were randomised controlled trials, meaning participants were randomly allocated to receive either the treatment or a dummy pill (placebo). This is the best design for assessing the effectiveness of treatments. All three studies included children diagnosed with autism spectrum disorder, with an average age of 9.40 years. We found no studies in adults. The children received memantine (for 12 weeks in two studies and for 24 weeks in one study), and their behaviour was assessed before treatment and immediately after treatment."
}
] |