dataset (stringclasses: 1 value) | id (stringlengths: 1–5) | messages (listlengths: 2–2)
---|---|---|
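Each row below pairs a Cochrane review abstract plus a plain-language query (the `user` turn) with a lay-language answer (the `assistant` turn). A minimal sketch of how records with this schema could be iterated, assuming the table has been exported as JSON Lines; the file name `query_laysum.jsonl` and the helper function are illustrative assumptions, not part of the dataset:

```python
import json

def iter_records(path="query_laysum.jsonl"):
    """Yield (dataset, id, messages) tuples from an assumed JSON Lines export.

    Assumes one record per line with the three fields shown in the header
    above: dataset (always "query-laysum"), id (a short string), and
    messages (a list of exactly two {"role", "content"} dicts).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            yield rec["dataset"], rec["id"], rec["messages"]

# Example usage: print each record id with the start of its lay answer.
if __name__ == "__main__":
    for dataset, rec_id, messages in iter_records():
        user_turn, assistant_turn = messages  # always exactly two turns
        print(rec_id, assistant_turn["content"][:60])
```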
query-laysum | 5963 | [
{
"role": "user",
"content": "Abstract: Background\nUterine fibroids occur in up to 40% of women aged over 35 years. Some are asymptomatic, but up to 50% cause symptoms that warrant therapy. Symptoms include anaemia caused by heavy menstrual bleeding, pelvic pain, dysmenorrhoea, infertility and low quality of life. Surgery is the first choice of treatment. In recent years, medical therapies have been used before surgery to improve intraoperative and postoperative outcomes. However, such therapies tend to be expensive.\nFibroid growth is stimulated by oestrogen. Gonadotropin-hormone releasing analogues (GnRHa) induce a state of hypo-oestrogenism that shrinks fibroids , but has unacceptable side effects if used long-term. Other potential hormonal treatments, include progestins and selective progesterone-receptor modulators (SPRMs).\nThis is an update of a Cochrane Review published in 2000 and 2001; the scope has been broadened to include all preoperative medical treatments.\nObjectives\nTo assess the effectiveness and safety of medical treatments prior to surgery for uterine fibroids.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group specialised register, CENTRAL, MEDLINE, Embase, PsycINFO and CINAHL in June 2017. We also searched trials registers (ClinicalTrials.com; WHO ICTRP), theses and dissertations and the grey literature, handsearched reference lists of retrieved articles and contacted pharmaceutical companies for additional trials.\nSelection criteria\nWe included randomised comparisons of medical therapy versus placebo, no treatment, or other medical therapy before surgery, myomectomy, hysterectomy or endometrial resection, for uterine fibroids.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included a total of 38 RCTs (3623 women); 19 studies compared GnRHa to no pretreatment (n = 19), placebo (n = 8), other medical pretreatments (progestin, SPRMs, selective oestrogen receptor modulators (SERMs), dopamine agonists, oestrogen receptor antagonists) (n = 7), and four compared SPRMs with placebo. Most results provided low-quality evidence due to limitations in study design (poor reporting of randomisation procedures, lack of blinding), imprecision and inconsistency.\nGnRHa versus no treatment or placebo\nGnRHa treatments were associated with reductions in both uterine (MD -175 mL, 95% CI -219.0 to -131.7; 13 studies; 858 participants; I² = 67%; low-quality evidence) and fibroid volume (heterogeneous studies, MD 5.7 mL to 155.4 mL), and increased preoperative haemoglobin (MD 0.88 g/dL, 95% CI 0.7 to 1.1; 10 studies; 834 participants; I² = 0%; moderate-quality evidence), at the expense of a greater likelihood of adverse events, particularly hot flushes (OR 7.68, 95% CI 4.6 to 13.0; 6 studies; 877 participants; I² = 46%; moderate-quality evidence).\nDuration of hysterectomy surgery was reduced among women who received GnRHa treatment (-9.59 minutes, 95% CI 15.9 to -3.28; 6 studies; 617 participants; I² = 57%; low-quality evidence) and there was less blood loss (heterogeneous studies, MD 25 mL to 148 mL), fewer blood transfusions (OR 0.54, 95% CI 0.3 to 1.0; 6 studies; 601 participants; I² = 0%; moderate-quality evidence), and fewer postoperative complications (OR 0.54, 95% CI 0.3 to 0.9; 7 studies; 772 participants; I² = 28%; low-quality evidence).\nGnRHa appeared to reduce intraoperative blood loss during myomectomy (MD 22 mL to 157 mL). 
There was no clear evidence of a difference among groups for other primary outcomes after myomectomy: duration of surgery (studies too heterogeneous for pooling), blood transfusions (OR 0.85, 95% CI 0.3 to 2.8; 4 studies; 121 participants; I² = 0%; low-quality evidence) or postoperative complications (OR 1.07, 95% CI 0.43 to 2.64; I² = 0%; 5 studies; 190 participants; low-quality evidence). No suitable data were available for analysis of preoperative bleeding.\nGnRHa versus other medical therapies\nGnRHa was associated with a greater reduction in uterine volume (-47% with GnRHa compared to -20% and -22% with 5 mg and 10 mg ulipristal acetate) but was more likely to cause hot flushes (OR 12.3, 95% CI 4.04 to 37.48; 5 studies; 183 participants; I² = 61%; low-quality evidence) compared with ulipristal acetate. There was no clear evidence of a difference in bleeding reduction (ulipristal acetate 5 mg: OR 0.71, 95% CI 0.3 to 1.7; 1 study; 199 participants; moderate-quality evidence; ulipristal acetate 10 mg: OR 0.39, 95% CI 0.1 to 1.1; 1 study; 203 participants; moderate-quality evidence) or haemoglobin levels (MD -0.2, 95% CI -0.6 to 0.2; 188 participants; moderate-quality evidence).\nThere was no clear evidence of a difference in fibroid volume between GnRHa and cabergoline (MD 12.71 mL, 95% CI -5.9 to 31.3; 2 studies; 110 participants; I² = 0%; low-quality evidence).\nThe included studies did not report usable data for any other primary outcomes.\nSPRMs versus placebo\nSPRMs (mifepristone, CDB-2914, ulipristal acetate and asoprisnil) were associated with greater reductions in uterine or fibroid volume than placebo (studies too heterogeneous to pool) and increased preoperative haemoglobin levels (MD 0.93 g/dL, 0.5 to 1.4; 2 studies; 173 participants; I² = 0%; high-quality evidence). Ulipristal acetate and asoprisnil were also associated with greater reductions in bleeding before surgery (ulipristal acetate 5 mg: OR 41.41, 95% CI 15.3 to 112.4; 1 study; 143 participants; low-quality evidence; ulipristal acetate 10 mg: OR 78.83, 95% CI 24.0 to 258.7; 1 study; 146 participants; low-quality evidence; asoprisnil: MD -166.9 mL; 95% CI -277.6 to -56.2; 1 study; 22 participants; low-quality evidence). There was no evidence of differences in preoperative complications. No other primary outcomes were measured.\nAuthors' conclusions\nA rationale for the use of preoperative medical therapy before surgery for fibroids is to make surgery easier. There is clear evidence that preoperative GnRHa reduces uterine and fibroid volume, and increases preoperative haemoglobin levels, although GnRHa increases the incidence of hot flushes. During hysterectomy, blood loss, operation time and complication rates were also reduced. Evidence suggests that ulipristal acetate may offer similar advantages (reduced fibroid volume and fibroid-related bleeding and increased haemoglobin levels) although replication of these studies is advised before firm conclusions can be made. Future research should focus on cost-effectiveness and distinguish between groups of women with fibroids who would most benefit.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 38 studies that involved 3623 women with fibroids that caused symptoms and who were scheduled for surgery to remove the fibroids. Surgeries were either hysterectomy (uterus removal) or myomectomy or resection (removal of fibroids from the uterus wall). Many women were anaemic (had low red blood cell or haemoglobin levels).\nThe studies compared GnRHa with no treatment or sham treatment, GnRHa with other medical treatments, and SPRMs with sham treatment."
}
] |
query-laysum | 5964 | [
{
"role": "user",
"content": "Abstract: Background\nThe unanticipated difficult airway is a potentially life-threatening event during anaesthesia or acute conditions. An unsuccessfully managed upper airway is associated with serious morbidity and mortality. Several bedside screening tests are used in clinical practice to identify those at high risk of difficult airway. Their accuracy and benefit however, remains unclear.\nObjectives\nThe objective of this review was to characterize and compare the diagnostic accuracy of the Mallampati classification and other commonly used airway examination tests for assessing the physical status of the airway in adult patients with no apparent anatomical airway abnormalities. We performed this individually for each of the four descriptors of the difficult airway: difficult face mask ventilation, difficult laryngoscopy, difficult tracheal intubation, and failed intubation.\nSearch methods\nWe searched major electronic databases including CENTRAL, MEDLINE, Embase, ISI Web of Science, CINAHL, as well as regional, subject specific, and dissertation and theses databases from inception to 16 December 2016, without language restrictions. In addition, we searched the Science Citation Index and checked the references of all the relevant studies. We also handsearched selected journals, conference proceedings, and relevant guidelines. We updated this search in March 2018, but we have not yet incorporated these results.\nSelection criteria\nWe considered full-text diagnostic test accuracy studies of any individual index test, or a combination of tests, against a reference standard. Participants were adults without obvious airway abnormalities, who were having laryngoscopy performed with a standard laryngoscope and the trachea intubated with a standard tracheal tube. Index tests included the Mallampati test, modified Mallampati test, Wilson risk score, thyromental distance, sternomental distance, mouth opening test, upper lip bite test, or any combination of these. The target condition was difficult airway, with one of the following reference standards: difficult face mask ventilation, difficult laryngoscopy, difficult tracheal intubation, and failed intubation.\nData collection and analysis\nWe performed screening and selection of the studies, data extraction and assessment of methodological quality (using QUADAS-2) independently and in duplicate. We designed a Microsoft Access database for data collection and used Review Manager 5 and R for data analysis. For each index test and each reference standard, we assessed sensitivity and specificity. We produced forest plots and summary receiver operating characteristic (ROC) plots to summarize the data. Where possible, we performed meta-analyses to calculate pooled estimates and compare test accuracy indirectly using bivariate models. We investigated heterogeneity and performed sensitivity analyses.\nMain results\nWe included 133 (127 cohort type and 6 case-control) studies involving 844,206 participants. We evaluated a total of seven different prespecified index tests in the 133 studies, as well as 69 non-prespecified, and 32 combinations. For the prespecified index tests, we found six studies for the Mallampati test, 105 for the modified Mallampati test, six for the Wilson risk score, 52 for thyromental distance, 18 for sternomental distance, 34 for the mouth opening test, and 30 for the upper lip bite test. 
Difficult face mask ventilation was the reference standard in seven studies, difficult laryngoscopy in 92 studies, difficult tracheal intubation in 50 studies, and failed intubation in two studies. Across all studies, we judged the risk of bias to be variable for the different domains; we mostly observed low risk of bias for patient selection, flow and timing, and unclear risk of bias for reference standard and index test. Applicability concerns were generally low for all domains. For difficult laryngoscopy, the summary sensitivity ranged from 0.22 (95% confidence interval (CI) 0.13 to 0.33; mouth opening test) to 0.67 (95% CI 0.45 to 0.83; upper lip bite test) and the summary specificity ranged from 0.80 (95% CI 0.74 to 0.85; modified Mallampati test) to 0.95 (95% CI 0.88 to 0.98; Wilson risk score). The upper lip bite test for diagnosing difficult laryngoscopy provided the highest sensitivity compared to the other tests (P < 0.001). For difficult tracheal intubation, summary sensitivity ranged from 0.24 (95% CI 0.12 to 0.43; thyromental distance) to 0.51 (95% CI 0.40 to 0.61; modified Mallampati test) and the summary specificity ranged from 0.87 (95% CI 0.82 to 0.91; modified Mallampati test) to 0.93 (0.87 to 0.96; mouth opening test). The modified Mallampati test had the highest sensitivity for diagnosing difficult tracheal intubation compared to the other tests (P < 0.001). For difficult face mask ventilation, we could only estimate summary sensitivity (0.17, 95% CI 0.06 to 0.39) and specificity (0.90, 95% CI 0.81 to 0.95) for the modified Mallampati test.\nAuthors' conclusions\nBedside airway examination tests, for assessing the physical status of the airway in adults with no apparent anatomical airway abnormalities, are designed as screening tests. Screening tests are expected to have high sensitivities. We found that all investigated index tests had relatively low sensitivities with high variability. In contrast, specificities were consistently and markedly higher than sensitivities across all tests. The standard bedside airway examination tests should be interpreted with caution, as they do not appear to be good screening tests. Among the tests we examined, the upper lip bite test showed the most favourable diagnostic test accuracy properties. Given the paucity of available data, future research is needed to develop tests with high sensitivities to make them useful, and to consider their use for screening difficult face mask ventilation and failed intubation. The 27 studies in 'Studies awaiting classification' may alter the conclusions of the review, once we have assessed them.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked for the most suitable and accurate rapid screening test in adults with no obvious airway abnormalities, to identify those who are likely to be difficult to intubate (i.e. insertion of a tube into the windpipe)."
}
] |
query-laysum | 5965 | [
{
"role": "user",
"content": "Abstract: Background\nCustom foot orthoses are commonly recommended for the treatment of foot pain.\nObjectives\nTo evaluate the effectiveness of custom foot orthoses for different types of foot pain.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2007, Issue 2), MEDLINE (from January 1966), EMBASE (from January 1980), CINAHL (from January 1982) and the Physiotherapy Evidence Database (PEDro) (to June 2007). We also contacted authors of included trials and known researchers in the field and checked the reference lists of included trials to identify trials. No language or publication restrictions were applied.\nSelection criteria\nRandomised controlled trials and controlled clinical trials evaluating custom-made foot orthoses for any type of foot pain. Outcomes included quantifiable levels of foot pain, function, disability, health-related quality of life, participant satisfaction, adverse effects and compliance.\nData collection and analysis\nTwo authors independently selected trials, rated methodological quality and cross checked data extraction. Data were analysed separately for different diagnoses of foot pain and follow-up time points.\nMain results\nEleven trials involving 1332 participants were included: five trials evaluated custom-made foot orthoses for plantar fasciitis (691 participants); three for foot pain in rheumatoid arthritis (231 participants); and one each for foot pain in pes cavus (154 participants), hallux valgus (209 participants) and juvenile idiopathic arthritis (JIA) (47 participants). Comparisons to custom-made foot orthoses included sham orthoses; no intervention; standardised interventions given to all participants; non-custom (prefabricated) foot orthoses; combined manipulation, mobilisation or stretching; night splints; and surgery. Follow up ranged from one week to three years. Custom-made foot orthoses were effective for painful pes cavus (NNTB:5), rearfoot pain in rheumatoid arthritis (NNTB:4), foot pain in JIA (NNTB:3) and painful hallux valgus (NNTB:6); however, surgery was even more effective for hallux valgus and non-custom foot orthoses appeared just as effective for JIA but the analysis may have lacked sufficient power to detect a difference in effect. It is unclear if custom-made foot orthoses were effective for plantar fasciitis or metatarsophalangeal joint pain in rheumatoid arthritis. Custom-made foot orthoses were a safe intervention in all studies.\nAuthors' conclusions\nThere is limited evidence on which to base clinical decisions regarding the prescription of custom-made foot orthoses for the treatment of foot pain. Currently, there is gold level evidence for painful pes cavus and silver level evidence for foot pain in JIA, rheumatoid arthritis, plantar fasciitis and hallux valgus.\n\nGiven the provided abstract, please respond to the following query: \"Best estimate of what happens to people with a painful bunion with hallux valgus who use custom foot orthoses\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After 6 months, custom-made foot orthoses improve foot pain by 9 more points on a scale of 0 to 100 (possibly as many as 17 points or as few as 1 point) compared to doing nothing.\nAfter 6 months, surgery improved foot pain by 10 more points on a scale of 0 to 100 (possibly as many as 18 or as few as 2 points) compared to the custom foot orthoses.\nAfter 12 months, surgery improved foot pain by 17 more points on a scale of 0 to 100 (possibly as many as 25 or as few as 9 points) compared to the custom foot orthoses."
}
] |
query-laysum | 5966 | [
{
"role": "user",
"content": "Abstract: Background\nBrain arteriovenous malformations (AVMs) are the single most common cause of intracerebral haemorrhage in young adults. Brain AVMs also cause seizure(s) and focal neurological deficits (in the absence of haemorrhage, migraine or an epileptic seizure); approximately one-fifth are incidental discoveries. Various interventions are used in an attempt to eradicate brain AVMs: neurosurgical excision, stereotactic radiosurgery, endovascular embolization, and staged combinations of these interventions. This is an update of a Cochrane Review first published in 2006, and last updated in 2009.\nObjectives\nTo determine the effectiveness and safety of the different interventions, alone or in combination, for treating brain AVMs in adults compared against either each other, or conservative management, in randomized controlled trials (RCTs).\nSearch methods\nThe Cochrane Stroke Group Information Specialist searched the Cochrane Stroke Group Trials Register (last searched 7 January 2019), the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 1) in the Cochrane Library, MEDLINE Ovid (1980 to 14 January 2019), and Embase OVID (1980 to 14 January 2019). We searched international registers of clinical trials, the contents pages of relevant journals, and bibliographies of relevant articles (November 2009). We also contacted manufacturers of interventional treatments for brain AVMs (March 2005).\nSelection criteria\nWe sought RCTs of any intervention for brain AVMs (used alone or in combination), compared against each other or against conservative management, with relevant clinical outcome measures.\nData collection and analysis\nOne author screened the results of the updated searches for potentially eligible RCTs for this updated review. Both authors independently read the potentially eligible RCTs in full and confirmed their inclusion according to the inclusion criteria. We resolved disagreement by discussion. We assessed the risk of bias in included studies and applied GRADE.\nMain results\nWe included one trial with 226 participants: A Randomized trial of Unruptured Brain Arteriovenous Malformations (ARUBA), comparing intervention versus conservative management for unruptured brain AVMs (that had never bled). The quality of evidence was moderate because we found just one trial that was at low risk of bias other than a high risk of performance bias due to participants and treating physicians not being blinded to allocated treatment. Data on functional outcome and death at a follow-up of 12 months were provided for 218 (96%) of the participants in ARUBA. In this randomized controlled trial (RCT), intervention compared to conservative management increased death or dependency (modified Rankin Scale score ≥ 2, risk ratio (RR) 2.53, 95% confidence interval (CI) 1.28 to 4.98; 1 trial, 226 participants; moderate-quality evidence) and the proportion of participants with symptomatic intracranial haemorrhage (RR 6.75, 95% CI 2.07 to 21.96; 1 trial, 226 participants; moderate-quality evidence), but there was no difference in the frequency of epileptic seizures (RR 1.14, 95% CI 0.63 to 2.06; 1 trial, 226 participants; moderate-quality evidence). Three RCTs are ongoing.\nAuthors' conclusions\nWe found moderate-quality evidence from one RCT including adults with unruptured brain AVMs that conservative management was superior to intervention with respect to functional outcome and symptomatic intracranial haemorrhage over one year after randomization. 
More RCTs will help to confirm or refute these findings.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Abnormal tangles of blood vessels in the brain, known as brain arteriovenous malformations (AVMs) are the single most common cause of stroke due to bleeding in the brain (known as intracerebral haemorrhage, or ICH) in young adults. Brain AVMs can also leave young people disabled for life and cause epilepsy. How they should be treated, if at all, is highly controversial. The main options are: 1) medical treatment of epileptic seizures and headaches (sometimes known as 'conservative management'); or 2) one or more of the following 'interventional' treatments: neurosurgery, endovascular embolization (glue, coils, or particles are lodged within the AVM via a catheter inserted temporarily in the groin), or radiosurgery (a non-invasive treatment involving focused beams of radiation)."
}
] |
query-laysum | 5967 | [
{
"role": "user",
"content": "Abstract: Background\nMyopia (near-sightedness or short-sightedness) is a condition in which the refractive power of the eye is greater than required. The most frequent complaint of people with myopia is blurred distance vision, which can be eliminated by conventional optical aids such as spectacles or contact lenses, or by refractive surgery procedures such as photorefractive keratectomy (PRK) and laser epithelial keratomileusis (LASEK). PRK uses laser to remove the corneal stroma. Similar to PRK, LASEK first creates an epithelial flap and then replaces it after ablating the corneal stroma. The relative benefits and harms of LASEK and PRK, as shown in different trials, warrant a systematic review.\nObjectives\nThe objective of this review is to compare LASEK versus PRK for correction of myopia by evaluating their efficacy and safety in terms of postoperative uncorrected visual acuity, residual refractive error, and associated complications.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision group Trials Register) (2015 Issue 12), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to December 2015), EMBASE (January 1980 to December 2015), Latin American and Caribbean Health Sciences (LILACS) (January 1982 to December 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 15 December 2015. We used the Science Citation Index and searched the reference lists of the included trials to identify relevant trials for this review.\nSelection criteria\nWe included in this review randomized controlled trials (RCTs) comparing LASEK versus PRK for correction of myopia. Trial participants were 18 years of age or older and had no co-existing ocular or systemic diseases that might affect refractive status or wound healing.\nData collection and analysis\nTwo review authors independently screened all reports and assessed the risk of bias of trials included in this review. We extracted data and summarized findings using risk ratios and mean differences. We used a random-effects model when we identified at least three trials, and we used a fixed-effect model when we found fewer than three trials.\nMain results\nWe included 11 RCTs with a total of 428 participants 18 years of age or older with low to moderate myopia. These trials were conducted in the Czech Republic, Brazil, Italy, Iran, China, Korea, Mexico, Turkey, USA, and UK. Investigators of 10 out of 11 trials randomly assigned one eye of each participant to be treated with LASEK and the other with PRK, but did not perform paired-eye (matched) analysis. Because of differences in outcome measures and follow-up times among the included trials, few trials contributed data for many of the outcomes we analyzed for this review. 
Overall, we judged RCTs to be at unclear risk of bias due to poor reporting; however, because of imprecision, inconsistency, and potential reporting bias, we graded the quality of the evidence from very low to moderate for outcomes assessed in this review.\nThe proportion of eyes with uncorrected visual acuity of 20/20 or better at 12-month follow-up was comparable in LASEK and PRK groups (risk ratio (RR) 0.98, 95% confidence interval (95% CI) 0.92 to 1.05). Although the 95% CI suggests little to no difference in effect between groups, we judged the quality of the evidence to be low because only one trial reported this outcome (102 eyes). At 12 months post treatment, data from two trials suggest no difference or a possibly small effect in favor of PRK over LASEK for the proportion of eyes achieving ± 0.50 D of target refraction (RR 0.93, 95% CI 0.84 to 1.03; 152 eyes; low-quality evidence). At 12 months post treatment, one trial reported that one of 51 eyes in the LASEK group lost one line or more best spectacle-corrected visual acuity compared with none of 51 eyes in the PRK group (RR 3.00, 95% CI 0.13 to 71.96; very low-quality evidence).\nThree trials reported adverse outcomes at 12 months of follow-up or longer. At 12 months post treatment, three trials reported corneal haze score; however, data were insufficient and were inconsistent among the trials, precluding meta-analysis. One trial reported little or no difference in corneal haze scores between groups; another trial reported that corneal haze scores were lower in the LASEK group than in the PRK group; and one trial did not report analyzable data to estimate a treatment effect. At 24 months post treatment, one trial reported a lower, but clinically unimportant, difference in corneal haze score for LASEK compared with PRK (MD -0.22, 95% CI -0.30 to -0.14; 184 eyes; low-quality evidence).\nAuthors' conclusions\nUncertainty surrounds differences in efficacy, accuracy, safety, and adverse effects between LASEK and PRK for eyes with low to moderate myopia. Future trials comparing LASEK versus PRK should follow reporting standards and apply correct analyses. Trial investigators should expand enrollment criteria to include participants with high myopia and should evaluate visual acuity, refraction, epithelial healing time, pain scores, and adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Available data were insufficient for investigation of whether LASEK or PRK is better at correcting near-sightedness. We judged the evidence for most outcomes as very low to moderate quality because of variation in reporting and differences in effects among trials."
}
] |
query-laysum | 5968 | [
{
"role": "user",
"content": "Abstract: Background\nApproximately 30% of adults undergoing non-cardiac surgery suffer from preoperative anaemia. Preoperative anaemia is a risk factor for mortality and adverse outcomes in different surgical specialties and is frequently the reason for blood transfusion. The most common causes are renal, chronic diseases, and iron deficiency. International guidelines recommend that the cause of anaemia guide preoperative anaemia treatment. Recombinant human erythropoietin (rHuEPO) with iron supplementation has frequently been used to increase preoperative haemoglobin concentrations in patients in order to avoid the need for perioperative allogeneic red blood cell (RBC) transfusion.\nObjectives\nTo evaluate the efficacy of preoperative rHuEPO therapy (subcutaneous or parenteral) with iron (enteral or parenteral) in reducing the need for allogeneic RBC transfusions in preoperatively anaemic adults undergoing non-cardiac surgery.\nSearch methods\nWe searched CENTRAL, Ovid MEDLINE(R), Ovid Embase, ISI Web of Science: SCI-EXPANDED and CPCI-S, and clinical trial registries WHO ICTRP and ClinicalTrials.gov on 29 August 2019.\nSelection criteria\nWe included all randomized controlled trials (RCTs) that compared preoperative rHuEPO + iron therapy to control treatment (placebo, no treatment, or standard of care with or without iron) for preoperatively anaemic adults undergoing non-cardiac surgery.\nWe used the World Health Organization (WHO) definition of anaemia: haemoglobin concentration (g/dL) less than 13 g/dL for males, and 12 g/dL for non-pregnant females (decision of inclusion based on mean haemoglobin concentration). We defined two subgroups of rHuEPO dosage: 'low' for 150 to 300 international units (IU)/kg body weight, and 'high' for 500 to 600 IU/kg body weight.\nData collection and analysis\nTwo review authors collected data from the included studies. Our primary outcome was the need for RBC transfusion (no autologous transfusion, fresh frozen plasma or platelets), measured in transfused participants during surgery (intraoperative) and up to five days after surgery. Secondary outcomes of interest were: haemoglobin concentration (directly before surgery), number of RBC units (where one unit contains 250 to 450 mL) transfused per participant (intraoperative and up to five days after surgery), mortality (within 30 days after surgery), length of hospital stay, and adverse events (e.g. renal dysfunction, thromboembolism, hypertension, allergic reaction, headache, fever, constipation).\nMain results\nMost of the included trials were in orthopaedic, gastrointestinal, and gynaecological surgery and included participants with mild and moderate preoperative anaemia (haemoglobin from 10 to 12 g/dL). The duration of preoperative rHuEPO treatment varied across the trials, ranging from once a week to daily or a 5-to-10-day period, and in one trial preoperative rHuEPO was given on the morning of surgery and for five days postoperatively.\nWe included 12 trials (participants = 1880) in the quantitative analysis of the need for RBC transfusion following preoperative treatment with rHuEPO + iron to correct preoperative anaemia in non-cardiac surgery; two studies were multiarmed trials with two different dose regimens.\nPreoperative rHuEPO + iron given to anaemic adults reduced the need RBC transfusion (risk ratio (RR) 0.55, 95% confidence interval (CI) 0.38 to 0.80; participants = 1880; studies = 12; I2 = 84%; moderate-quality evidence due to inconsistency). 
This analysis suggests that on average, the combined administration of rHuEPO + iron will mean 231 fewer individuals will need transfusion for every 1000 individuals compared to the control group.\nPreoperative high-dose rHuEPO + iron given to anaemic adults increased the haemoglobin concentration (mean difference (MD) 1.87 g/dL, 95% CI 1.26 to 2.49; participants = 852; studies = 3; I2 = 89%; low-quality evidence due to inconsistency and risk of bias) but not low-dose rHuEPO + iron (MD 0.11 g/dL, 95% CI −0.46 to 0.69; participants = 334; studies = 4; I2 = 69%; low-quality evidence due to inconsistency and risk of bias).\nThere was probably little or no difference in the number of RBC units when rHuEPO + iron was given preoperatively (MD −0.09, 95% CI −0.23 to 0.05; participants = 1420; studies = 6; I2 = 2%; moderate-quality evidence due to imprecision).\nThere was probably little or no difference in the risk of mortality within 30 days of surgery (RR 1.19, 95% CI 0.39 to 3.63; participants = 230; studies = 2; I2 = 0%; moderate-quality evidence due to imprecision) or of adverse events including local rash, fever, constipation, or transient hypertension (RR 0.93, 95% CI 0.68 to 1.28; participants = 1722; studies = 10; I2 = 0%; moderate-quality evidence due to imprecision).\nThe administration of rHuEPO + iron before non-cardiac surgery did not clearly reduce the length of hospital stay of preoperative anaemic adults (MD −1.07, 95% CI −4.12 to 1.98; participants = 293; studies = 3; I2 = 87%; low-quality evidence due to inconsistency and imprecision).\nAuthors' conclusions\nModerate-quality evidence suggests that preoperative rHuEPO + iron therapy for anaemic adults prior to non-cardiac surgery reduces the need for RBC transfusion and, when given at higher doses, increases the haemoglobin concentration preoperatively. The administration of rHuEPO + iron treatment did not decrease the mean number of units of RBC transfused per patient.\nThere were no important differences in the risk of adverse events or mortality within 30 days, nor in length of hospital stay. Further, well-designed, adequately powered RCTs are required to estimate the impact of this combined treatment more precisely.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Erythropoietin is a hormone produced by the kidneys. It stimulates the bone marrow to produce more red blood cells (RBCs). Erythropoietin is frequently used to correct anaemia caused by a reduced number of red blood cells or low haemoglobin concentration in cases of chronic renal failure. The haemoglobin in red blood cells carries oxygen around the body.\nAdult patients undergoing non-cardiac surgery, such as bone (orthopaedic) surgery, have blood tests before surgery to check their general health. Many individuals are found to be anaemic, which means they may need RBC transfusions with donor blood before, during, or after surgery to increase their haemoglobin concentrations. Giving donor blood can cause an allergic reaction such as a rash, headache, fever, raised blood pressure, temporary loss of kidney function, and blood clotting. If it is possible to diagnose the cause of the anaemia before the operation, then erythropoietin can be used together with iron to treat most of the causes of anaemia."
}
] |
query-laysum | 5969 | [
{
"role": "user",
"content": "Abstract: Background\nIschaemic stroke interrupts the flow of blood to part of the brain. Haemodilution is thought to improve the flow of blood to the affected areas of the brain and thus reduce infarct size.\nObjectives\nTo assess the effects of haemodilution in acute ischaemic stroke.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (February 2014), the Cochrane Central Register of Controlled Trials (Issue 1, 2014), MEDLINE (January 2008 to October 2013) and EMBASE (January 2008 to October 2013). We also searched trials registers, scanned reference lists and contacted authors. For the previous version of the review, the authors contacted manufacturers and investigators in the field.\nSelection criteria\nRandomised trials of haemodilution treatment in people with acute ischaemic stroke. We included only trials in which treatment was started within 72 hours of stroke onset.\nData collection and analysis\nTwo review authors assessed trial quality and one review author extracted the data.\nMain results\nWe included 21 trials involving 4174 participants. Nine trials used a combination of venesection and plasma volume expander. Twelve trials used plasma volume expander alone. The plasma volume expander was plasma alone in one trial, dextran 40 in 12 trials, hydroxyethyl starch (HES) in five trials and albumin in three trials. Two trials tested haemodilution in combination with another therapy. Evaluation was blinded in 14 trials. Five trials probably included some participants with intracerebral haemorrhage. Haemodilution did not significantly reduce deaths within the first four weeks (risk ratio (RR) 1.10; 95% confidence interval (CI) 0.90 to 1.34). Similarly, haemodilution did not influence deaths within three to six months (RR 1.05; 95% CI 0.93 to 1.20), or death and dependency or institutionalisation (RR 0.96; 95% CI 0.85 to 1.07). The results were similar in confounded and unconfounded trials, and in trials of isovolaemic and hypervolaemic haemodilution. No statistically significant benefits were documented for any particular type of haemodiluting agents, but the statistical power to detect effects of HES was weak. Six trials reported venous thromboembolic events. There was a tendency towards reduction in deep venous thrombosis or pulmonary embolism or both at three to six months' follow-up (RR 0.68; 95% CI 0.37 to 1.24). There was no statistically significant increased risk of serious cardiac events among haemodiluted participants.\nAuthors' conclusions\nThe overall results of this review showed no clear evidence of benefit of haemodilution therapy for acute ischaemic stroke.\nThese results are compatible with no persuasive beneficial evidence of haemodilution therapy for acute ischaemic stroke. This therapy has not been proven to improve survival or functional outcome.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review showed that, when all the studies are taken together, there is no clear evidence of benefit from haemodilution. There is also no clear evidence that any particular mode of haemodilution, with or without blood-letting, using various types of haemodiluting agents, etc, is beneficial. There were no significant serious side effects of this treatment. It is concluded that there is no clear scientific support for the use of haemodilution in the routine treatment of people with acute ischaemic stroke."
}
] |
query-laysum | 5970 | [
{
"role": "user",
"content": "Abstract: Background\nDeep venous thrombosis (DVT) occurs in approximately one in 1000 adults every year, and has an annual mortality of 14.6%. In particular, iliofemoral DVT can lead to recurrent thrombosis and post-thrombotic syndrome (PTS), a painful condition which can lead to chronic venous insufficiency, oedema, and ulceration. It causes significant disability, impaired quality of life, and economic burden. Early thrombus removal techniques have been advocated in patients with an iliofemoral DVT in order to improve vein patency, prevent valvular dysfunction, and reduce future complications, such as post-thrombotic syndrome and venous ulceration. One such technique is pharmacomechanical thrombectomy, a combination of catheter-based thrombectomy and catheter-directed thrombolysis.\nObjectives\nTo assess the effects of pharmacomechanical thrombectomy versus anticoagulation (alone or with compression stockings), mechanical thrombectomy, thrombolysis, or other endovascular techniques in the management of people with acute DVT of the iliofemoral vein.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Specialised Register (last searched December 2015) and the Cochrane Register of Studies (last searched December 2015). We searched clinical trials databases for details of ongoing or unpublished studies and the reference lists of relevant articles retrieved by electronic searches for additional citations.\nSelection criteria\nRandomised controlled trials in which patients with an iliofemoral deep vein thrombosis were allocated to receive pharmacomechanical thrombectomy versus anticoagulation, mechanical thrombectomy, thrombolysis (systemic or catheter directed thrombolysis), or other endovascular techniques for the treatment of iliofemoral DVT.\nData collection and analysis\nAt least two review authors independently assessed studies identified for potential inclusion.\nMain results\nWe found no randomised controlled trials that met the eligibility criteria for this review. We identified one ongoing study.\nAuthors' conclusions\nThere were no randomised controlled trials that assessed the effects of pharmacomechanical thrombectomy versus anticoagulation (alone or with compression stockings), mechanical thrombectomy, thrombolysis, or other endovascular techniques in the management of people with acute DVT of the iliofemoral vein that met the eligibility criteria for this review. Further high quality randomised controlled trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "At present, there is a lack of randomised controlled trials that examine the comparative effectiveness and safety of pharmacomechanical thrombectomy in the management of patients with DVT."
}
] |
query-laysum | 5971 | [
{
"role": "user",
"content": "Abstract: Background\nThe subclavian arteries are two major arteries of the upper chest, below the collar bone, which come from the arch of the aorta. Endovascular treatment for stenosis of the subclavian arteries includes angioplasty alone, and with stenting. There is insufficient evidence to guide the use of stents following angioplasty for subclavian artery stenosis. This is the second update of a review first published in 2011.\nObjectives\nThe aim of this review was to determine whether stenting was more effective than angioplasty alone for stenosis of the subclavian artery.\nSearch methods\nFor this update, the Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and LILACS databases, and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 2 February 2021.\nSelection criteria\nWe searched for randomised controlled trials of endovascular treatment of subclavian artery lesions that compared angioplasty alone and stent implantation.\nData collection and analysis\nTwo review authors independently evaluated studies to assess eligibility. Discrepancies were resolved by discussion. If there was no agreement, we asked a third review author to assess the study for inclusion. We planned to undertake data collection and analysis in accordance with recommendations described in the Cochrane Handbook for Systematic Reviews of Interventions, and assess the certainty of the evidence using a GRADE approach.\nMain results\nTo date, we have not identified any completed or ongoing randomised controlled trials that compare percutaneous transluminal angioplasty and stenting for subclavian artery stenosis.\nAuthors' conclusions\nThere is currently insufficient evidence to determine whether stenting is more effective than angioplasty alone for stenosis of the subclavian artery.\n\nGiven the provided abstract, please respond to the following query: \"Why is this question important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The subclavian arteries are two major blood vessels in the upper chest, below the collar bone, which come from the arch of the aorta. The left subclavian artery supplies blood to the left arm, and the right subclavian artery supplies blood to the right arm; some branches supply blood to the head and chest. A history of smoking, high blood pressure, lower levels of 'good' (high-density lipoprotein) cholesterol, and peripheral arterial disease are associated with an increased risk of subclavian artery stenosis. Subclavian artery stenosis often occurs without symptoms. When they occur, symptoms include short-lasting vertigo (commonly described as the environment spinning) due to decreased blood flow in the back part of the brain, and blood circulation problems in the hands and arms. Endovascular (minimally invasive) treatment for stenosis of the subclavian arteries includes angioplasty alone or with stenting. It is not clear if angioplasty alone or with stenting offers the most benefit for people with subclavian artery blockages."
}
] |
query-laysum | 5972 | [
{
"role": "user",
"content": "Abstract: Background\nAllergic rhinitis is a common condition affecting both adults and children. Patients experience symptoms of nasal obstruction, rhinorrhoea, sneezing and nasal itching, which may affect their quality of life.\nNasal irrigation with saline (salty water), also known as nasal douching, washing or lavage, is a procedure that rinses the nasal cavity with isotonic or hypertonic saline solutions. It can be performed with low positive pressure from a spray, pump or squirt bottle, with a nebuliser or with gravity-based pressure in which the person instils saline into one nostril and allows it to drain out of the other. Saline solutions are available over the counter and can be used alone or as an adjunct to other therapies.\nObjectives\nTo evaluate the effects of nasal saline irrigation in people with allergic rhinitis.\nSearch methods\nThe Cochrane ENT Information Specialist searched the ENT Trials Register; CENTRAL; Ovid MEDLINE; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 23 November 2017.\nSelection criteria\nRandomised controlled trials (RCTs) comparing nasal saline irrigation, delivered by any means and with any volume, tonicity and alkalinity, with (a) no nasal saline irrigation or (b) other pharmacological treatments in adults and children with allergic rhinitis. We included studies comparing nasal saline versus no saline, where all participants also received pharmacological treatment (intranasal corticosteroids or oral antihistamines).\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Primary outcomes were patient-reported disease severity and a common adverse effect - epistaxis. Secondary outcomes were disease-specific health-related quality of life (HRQL), individual symptom scores, general HRQL, the adverse effects of local irritation or discomfort, ear symptoms (pain or pressure) and nasal endoscopy scores. We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nWe included 14 studies (747 participants). The studies included children (seven studies, 499 participants) and adults (seven studies, 248 participants). No studies reported outcomes beyond three months follow-up. Saline volumes ranged from 'very low' to 'high' volume. Where stated, studies used either hypertonic or isotonic saline solution.\nNasal saline versus no saline treatment\nAll seven studies (112 adults; 332 children) evaluating this comparison used different scoring systems for patient-reported disease severity, so we pooled the data using the standardised mean difference (SMD). Saline irrigation may improve patient-reported disease severity compared with no saline at up to four weeks (SMD -1.32, 95% confidence interval (CI) -1.84 to -0.81; 407 participants; 6 studies; low quality) and between four weeks and three months (SMD -1.44, 95% CI -2.39 to -0.48; 167 participants; 5 studies; low quality). Although the evidence was low quality the SMD values at both time points are considered large effect sizes. Subgroup analysis showed the improvement in both adults and children. Subgroup analyses for volume and tonicity were inconclusive due to heterogeneity.\nTwo studies reported methods for recording adverse effects and five studies mentioned them. 
Two studies (240 children) reported no adverse effects (epistaxis or local discomfort) in either group and three only reported no adverse effects in the saline group.\nOne study (48 children) reported disease-specific HRQL using a modified RCQ-36 scale. It was uncertain whether there was a difference between the groups at any of the specified time points (very low quality). No other secondary outcomes were reported.\nNasal saline versus no saline with adjuvant use of intranasal steroids or oral antihistamines\nThree studies (40 adults; 79 children) compared saline with intranasal steroids versus intranasal steroids alone; one study (14 adults) compared saline with oral antihistamines versus oral antihistamines alone. It is uncertain if there is a difference in patient-reported disease severity at up to four weeks (SMD -0.60, 95% CI -1.34 to 0.15; 32 participants; 2 studies; very low quality) or from four weeks to three months (SMD -0.32, 95% CI -0.85 to 0.21; 58 participants; 2 studies; very low quality). Although none of the studies reported methods for recording adverse effects, three mentioned them: one study (40 adults; adjuvant intranasal steroids) reported no adverse effects (epistaxis or local discomfort) in either group; the other two only reported no adverse effects in the saline group.\nIt is uncertain if saline irrigation in addition to pharmacological treatment improved disease-specific HRQL at four weeks to three months, compared with pharmacological treatment alone (SMD -1.26, 95% CI -2.47 to -0.05; 54 participants; 2 studies; very low quality). No other secondary outcomes were reported.\nNasal saline versus intranasal steroids\nIt is uncertain if there was a difference in patient-reported disease severity between nasal saline and intranasal steroids at up to four weeks (MD 1.06, 95% CI -1.65 to 3.77; 14 participants; 1 study), or between four weeks and three months (SMD 1.26, 95% CI -0.92 to 3.43; 97 participants; 3 studies), or in disease-specific HRQL between four weeks and three months (SMD 0.01, 95% CI -0.73 to 0.75; 83 participants; 2 studies). Only one study reported methods for recording adverse effects although three studies mentioned them. One (21 participants) reported two withdrawals due to adverse effects but did not describe these or state which group. Three studies reported no adverse effects (epistaxis or local discomfort) with saline, although one study reported that 27% of participants experienced local discomfort with steroid use. No other secondary outcomes were reported.\nAuthors' conclusions\nSaline irrigation may reduce patient-reported disease severity compared with no saline irrigation at up to three months in both adults and children with allergic rhinitis, with no reported adverse effects. No data were available for any outcomes beyond three months. The overall quality of evidence was low or very low. The included studies were generally small and used a range of different outcome measures to report disease severity scores, with unclear validation. This review did not include direct comparisons of saline types (e.g. different volume, tonicity).\nSince saline irrigation could provide a cheap, safe and acceptable alternative to intranasal steroids and antihistamines, further high-quality, adequately powered research in this area is warranted.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Nasal saline irrigation compared with no saline irrigation\nNasal saline irrigation may have benefits in both adults and children in relieving the symptoms of allergic rhinitis compared to no saline irrigation and it is unlikely to be associated with adverse effects. It is not possible to tell from this review whether there is a difference between the different volumes and concentrations of saline solution.\nAdding nasal saline irrigation onto 'pharmacological' allergic rhinitis treatment\nIt is uncertain whether adding nasal saline irrigation to pharmacological treatment (intranasal steroids or oral antihistamines) helps to improve the symptoms of allergic rhinitis compared to using pharmacological treatments alone. The use of nasal saline irrigation is unlikely to be associated with adverse effects.\nNasal saline irrigation compared to 'pharmacological' allergic rhinitis treatment\nThere is not enough evidence to know whether nasal saline irrigation is better, worse or the same as using intranasal steroids. No studies reporting the outcomes we were interested in compared nasal saline irrigation with oral antihistamines."
}
] |
query-laysum | 5973 | [
{
"role": "user",
"content": "Abstract: Background\nAnderson-Fabry disease is an X-linked defect of glycosphingolipid metabolism. Progressive renal insufficiency is a major source of morbidity, additional complications result from cardio- and cerebro-vascular involvement. Survival is reduced among affected males and symptomatic female carriers.\nThis is an update of a Cochrane review first published in 2010, and previously updated in 2013.\nObjectives\nTo evaluate the effectiveness and safety of enzyme replacement therapy compared to other interventions, placebo or no interventions, for treating Anderson-Fabry disease.\nSearch methods\nWe searched the Cystic Fibrosis and Genetic Disorders Group's Inborn Errors of Metabolism Trials Register (date of the most recent search: 08 July 2016). We also searched 'Clinical Trials' on The Cochrane Library, MEDLINE, Embase and LILACS (date of the most recent search: 24 September 2015).\nSelection criteria\nRandomized controlled trials of agalsidase alfa or beta in participants diagnosed with Anderson-Fabry disease.\nData collection and analysis\nTwo authors selected relevant trials, assessed methodological quality and extracted data.\nMain results\nNine trials comparing either agalsidase alfa or beta in 351 participants fulfilled the selection criteria.\nBoth trials comparing agalsidase alfa to placebo reported on globotriaosylceramide concentration in plasma and tissue; aggregate results were non-significant. One trial reported pain scores measured by the Brief Pain Inventory severity, there was a statistically significant improvement for participants receiving treatment at up to three months, mean difference -2.10 (95% confidence interval -3.79 to -0.41; at up to five months, mean difference -1.90 (95% confidence interval -3.65 to -0.15); and at up to six months, mean difference -2.00 (95% confidence interval -3.66 to -0.34). There was a significant difference in the Brief Pain Inventory pain-related quality of life at over five months and up to six months, mean difference -2.10 (95% confidence interval -3.92 to -0.28) but not at other time points. Death was not an outcome in either of the trials.\nOne of the three trials comparing agalsidase beta to placebo reported on globotriaosylceramide concentration in plasma and tissue and showed significant improvement: kidney, mean difference -1.70 (95% confidence interval -2.09 to -1.31); heart, mean difference -0.90 (95% confidence interval -1.18 to -0.62); and composite results (renal, cardiac, and cerebrovascular complications and death), mean difference -4.80 (95% confidence interval -5.45 to -4.15). There was no significant difference between groups for death; no trials reported on pain.\nOnly two trials compared agalsidase alfa to agalsidase beta. One of them showed no significant difference between the groups regarding adverse events, risk ratio 0.36 (95% confidence interval 0.08 to 1.59), or any serious adverse events; risk ratio 0.30; (95% confidence interval 0.03 to 2.57).\nTwo trials compared different dosing schedules of agalsidase alfa. One of them involved three different doses (0.2 mg/kg every two weeks; 0.1 mg/kg weekly and; 0.2 mg/kg weekly), the other trial evaluated two further doses to the dosage schedules: 0.4 mg/kg every week and every other week. Both trials failed to show significant differences with various dosing schedules on globotriaosylceramide levels. 
No significant differences were found among the schedules for the primary efficacy outcome of self-assessed health state, or for pain scores.\nOne trial comparing agalsidase alfa to agalsidase beta showed no significant difference for any adverse events such as dyspnoea and hypertension.\nThe methodological quality of the included trials was generally unclear regarding random sequence generation and allocation concealment.\nAuthors' conclusions\nTrials comparing enzyme replacement therapy to placebo show significant improvement with enzyme replacement therapy in regard to microvascular endothelial deposits of globotriaosylceramide and in pain-related quality of life. There is, however, no evidence identifying whether the alfa or beta form is superior, or what the optimal dose or frequency of enzyme replacement therapy is. With regard to safety, adverse events (e.g. rigors, fever) were significantly more frequent with agalsidase beta than with placebo. The long-term influence of enzyme replacement therapy on risk of morbidity and mortality related to Anderson-Fabry disease remains to be established. This review highlights the need for continued research into the use of enzyme replacement therapy for Anderson-Fabry disease.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Anderson-Fabry disease, a rare disorder, is caused by a deficiency of the enzyme alpha-galactosidase A. This leads to the build-up of a fatty material called globotriaosylceramide in various cells in the body. Globotriaosylceramide is formed of three sugars and a fatty substance called ceramide, and is found in most cells of the body. Untreated individuals may suffer from pain, skin, eye and gastrointestinal problems. Fabry disease may cause potentially life-threatening complications such as kidney damage, heart attack and stroke. One type of treatment available is enzyme replacement therapy with either agalsidase alfa or beta, which replaces the missing or deficient enzyme."
}
] |
query-laysum
|
5974
|
[
{
"role": "user",
"content": "Abstract: Background\nA permanent upper (maxillary) canine tooth that grows into the roof of the mouth and frequently does not appear (erupt) is called a palatally displaced canine (PDC). The reported prevalence of PDC in the population varies between 1% and 3%. Management of the unerupted PDC can be lengthy, involving surgery to uncover the tooth and prolonged orthodontic (brace) treatment to straighten it; therefore, various procedures have been suggested to encourage a PDC to erupt without the need for surgical intervention.\nObjectives\nTo assess the efficacy, safety and cost-effectiveness of any interceptive procedure to promote the eruption of a PDC compared to no treatment or other interceptive procedures in young people aged 9 to 14 years old.\nSearch methods\nAn information specialist searched four bibliographic databases up to 3 February 2021 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCT) involving at least 80% of children aged between 9 and 14 years, who were diagnosed with an upper PDC and undergoing an intervention to enable the successful eruption of the unerupted PDC, which was compared with an untreated control group or another intervention.\nData collection and analysis\nTwo review authors, independently and in duplicate, examined titles, keywords, abstracts, full articles, extracted data and assessed risk of bias using the Cochrane Risk of Bias 1 tool (RoB1). The primary outcome was summarised with risk ratios (RR) and 95% confidence intervals (CI). We reported an intention-to-treat (ITT) analysis when data were available and a modified intention-to-treat (mITT) analysis if not. We also undertook several sensitivity analyses. We used summary of findings tables to present the main findings and our assessment of the certainty of the evidence.\nMain results\nWe included four studies, involving 199 randomised participants (164 analysed), 108 girls and 91 boys, 82 of whom were diagnosed with unilateral PDC and 117 with bilateral PDC. The participants were aged between 8 and 13 years at recruitment. The certainty of the evidence was very low and future research may change our conclusions.\nOne study (randomised 67 participants, 89 teeth) found that extracting the primary canine may increase the proportion of PDCs that successfully erupt into the mouth at 12 months compared with no extraction (RR 2.87, 95% CI 0.90 to 9.23; 45 participants, 45 PDCs analysed; very low-certainty evidence), but the CI included the possibility of no difference; therefore the evidence was uncertain. There was no evidence that extraction of the primary canine reduced the number of young people with a PDC referred for surgery at 12 months (RR 0.61 (95% CI 0.29 to 1.28).\nThree studies (randomised 132 participants, 227 teeth) found no difference in the proportion of successfully erupted PDCs at 18 months with a double primary tooth extraction compared with extraction of a single primary canine (RR 0.68, 95% CI 0.35 to 1.31; 119 participants analysed, 203 PDCs; mITT; very low-certainty evidence). 
Two of these studies found no difference in the proportions referred for surgical exposure between the single and the double primary extraction groups at 48 months (RR 0.31, 95% CI 0.06 to 1.45).\nThere are some descriptive data suggesting that the more severe the displacement of the PDC towards the midline, the lower the proportion of successfully erupted PDCs with or without intervention.\nAuthors' conclusions\nThe evidence that extraction of the primary canine in a young person aged between 9 and 14 years diagnosed with a PDC may increase the proportion of erupted PDCs, without surgical intervention, is very uncertain. There is no evidence that double extraction of primary teeth increases the proportion of erupted PDCs compared with a single primary tooth extraction at 18 months or the proportion referred for surgery by 48 months. Because we have only low to very low certainty in these findings, future research is necessary to help us know for sure the best way to deal with upper permanent teeth that are not erupting as expected.\n\nGiven the provided abstract, please respond to the following query: \"What are the treatments?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Management of a PDC can take a long time, involving surgery to uncover the tooth and prolonged orthodontic (brace) treatment to straighten it. Various quicker or easier alternatives have been suggested to encourage the tooth or teeth to erupt. These include extraction (taking out) of the primary (baby) canine, extraction of the primary canine and primary first molar (also called double primary tooth extraction), or using braces to create space in children's mouths."
}
] |
query-laysum
|
5975
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic pain, considered to be pain lasting more than three months, is a common and often difficult to treat condition that can significantly impact upon function and quality of life. Treatment typically includes pharmacological and non-pharmacological approaches. Transcutaneous electrical nerve stimulation (TENS) is an adjunct non-pharmacological treatment commonly recommended by clinicians and often used by people with pain.\nObjectives\nTo provide an overview of evidence from Cochrane Reviews of the effectiveness of TENS to reduce pain in adults with chronic pain (excluding headache or migraine).\nTo provide an overview of evidence from Cochrane Reviews of the safety of TENS when used to reduce pain in adults with chronic pain (excluding headache or migraine).\nTo identify possible sources of inconsistency in the approaches taken to evaluating the evidence related to TENS for chronic pain (excluding headache or migraine) in the Cochrane Library with a view to recommending strategies to improve consistency in methodology and reporting.\nTo highlight areas of remaining uncertainty regarding the effectiveness of TENS for chronic pain (excluding headache or migraine) with a view to recommending strategies to reduce any uncertainty.\nMethods\nSearch methods\nWe searched the Cochrane Database of Systematic Reviews (CDSR), in the Cochrane Library, across all years up to Issue 11 of 12, 2018.\nSelection of reviews\nTwo authors independently screened the results of the electronic search by title and abstract against inclusion/exclusion criteria. We included all Cochrane Reviews of randomised controlled trials (RCTs) assessing the effectiveness of TENS in people with chronic pain. We included reviews if they investigated the following: TENS versus sham; TENS versus usual care or no treatment/waiting list control; TENS plus active intervention versus active intervention alone; comparisons between different types of TENS; or TENS delivered using different stimulation parameters.\nData extraction and analysis\nTwo authors independently extracted relevant data, assessed review quality using the AMSTAR checklist and applied GRADE judgements where required to individual reviews. Our primary outcomes included pain intensity and nature/incidence of adverse effects; our secondary outcomes included disability, health-related quality of life, analgesic medication use and participant global impression of change.\nMain results\nWe included nine reviews investigating TENS use in people with defined chronic pain or in people with chronic conditions associated with ongoing pain. One review investigating TENS for phantom or stump-associated pain in people following amputation did not have any included studies. We therefore extracted data from eight reviews which represented 51 TENS-related RCTs representing 2895 TENS-comparison participants entered into the studies.\nThe included reviews followed consistent methods and achieved overall high scores on the AMSTAR checklist. The evidence reported within each review was consistently rated as very low quality. Using review authors' assessment of risk of bias, there were significant methodological limitations in included studies; and for all reviews, sample sizes were consistently small (the majority of studies included fewer than 50 participants per group).\nSix of the eight reviews presented a narrative synthesis of included studies. 
Two reviews reported a pooled analysis.\nPrimary and secondary outcomes\nOne review reported a beneficial effect of TENS versus sham therapy at reducing pain intensity on a 0 to 10 scale (MD −1.58, 95% CI −2.08 to −1.09, P < 0.001, I² = 29%, heterogeneity P = 0.22, 5 studies, 207 participants). However, the quality of the evidence was very low due to significant methodological limitations and imprecision. A second review investigating pain intensity performed a pooled analysis by combining studies that compared TENS to sham with studies that compared TENS to no intervention (SMD −0.85, 95% CI −1.36 to −0.34, P = 0.001, I² = 83%, heterogeneity P < 0.001). This pooled analysis was judged as offering very low quality evidence due to significant methodological limitations, large between-trial heterogeneity and imprecision. We considered the approach of combining sham and no intervention data to be problematic since we would predict these different comparisons may be estimating different true effects. All remaining reviews also reported pain intensity as an outcome measure; however, the data were presented in narrative review form only.\nDue to methodological limitations and lack of usable data, we were unable to offer any meaningful report on the remaining primary outcome regarding nature/incidence of adverse effects, nor for the remaining secondary outcomes: disability, health-related quality of life, analgesic medication use and participant global impression of change for any comparisons.\nWe found the included reviews had a number of inconsistencies when evaluating the evidence from TENS studies. Approaches to assessing risk of bias around participant, personnel and outcome-assessor blinding were perhaps the most obvious area of difference across included reviews. We also found wide variability in terms of primary and secondary outcome measures, and inclusion/exclusion criteria for studies varied with respect to including studies which assessed immediate effects of single interventions.\nAuthors' conclusions\nWe found the methodological quality of the reviews was good, but the quality of the evidence within them was very low. We were therefore unable to conclude with any confidence that, in people with chronic pain, TENS is harmful, or beneficial for pain control, disability, health-related quality of life, use of pain relieving medicines, or global impression of change. We make recommendations with respect to future TENS study designs which may meaningfully reduce the uncertainty relating to the effectiveness of this treatment in people with chronic pain.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "As of November 2018, we found nine reviews eligible for inclusion. Seven reviews specifically investigated TENS for the treatment of pain/function in a variety of chronic conditions in adults. We also included one review investigating a range of electrotherapy modalities for neck pain and one review examining non-pharmacological interventions in people with spinal cord injury. Both of these reviews included studies investigating TENS. Though the included reviews were of high quality, we found the quality of the evidence presented within the reviews to be very low."
}
] |
query-laysum
|
5976
|
[
{
"role": "user",
"content": "Abstract: Background\nFibromyalgia (FM) syndrome is a chronic condition of unknown aetiology characterised by musculoskeletal pain that often co-exists with sleep disturbance, cognitive dysfunction and fatigue. Patients often report high disability levels and poor quality of life. Since there is no specific treatment that alters the pathogenesis of FM, drug therapy focuses on pain reduction and improvement of other bothersome symptoms.\nObjectives\nThe objective of this review was to assess the effectiveness and safety of monoamine oxidase inhibitors (MAOIs) in the treatment of FM syndrome.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, Issue 10), MEDLINE (1966 to November 2010), EMBASE (1980 to November 2010) and the reference lists of reviewed articles.\nSelection criteria\nWe selected all randomised, double-blind trials of MAOIs used for the treatment of FM pain in adult participants.\nData collection and analysis\nTwo authors assessed risk of bias and extracted data independently onto a specially designed pro forma and a third review author cross-checked them.\nMain results\nWe included two studies of inconsistent risk of bias with a total of 230 patients diagnosed with FM. We evaluated two MAOIs: pirlindole and moclobemide. Pirlindole showed statistically significant results compared with placebo for several outcomes (pain, tender points and overall assessment by the patient and the physician), whereas moclobemide did not show statistically significant differences between groups. Pooled results of the two studies displayed a modest effect size in pain (mean difference (MD) -1.45 (121 patients; 95% confidence interval (CI) -2.71 to -0.20; number needed to treat (NNT) 2 (95% CI 1 to 12); I2 = 59%)), implying a minimal clinically important difference (MCID) and a small effect on tender points (standardised mean difference (SMD) -0.36 (121 patients; 95% CI -0.72 to -0.00; I2 = 31%)). No effect was seen on global assessment by patient. Physical function and sleep disturbance were not measured. The most frequent adverse events were nausea and vomiting, with statistically significant differences between groups (risk ratio (RR) 7.82 (89 patients; 95% CI 1.02 to 59.97; NNT 7 (95% CI 4 to 33)).\nAuthors' conclusions\nData suggest that the effectiveness of MAOIs for the treatment of FM symptoms is limited. Although we observed a moderate effect size on pain and a small one on tender points, these results should be taken with caution as they are only based on two studies with a small number of patients and inconsistent risk of bias among them.\n\nGiven the provided abstract, please respond to the following query: \"Global assessment (by patient)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- People who took MAOIs showed no difference in their global assessment compared to people who took placebo."
}
] |
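A note on the number needed to treat (NNT) figures quoted in the monoamine oxidase inhibitor entry above: for dichotomous outcomes such as adverse events, the NNT is the reciprocal of the absolute risk difference between groups, rounded up. The sketch below illustrates that arithmetic with invented event counts (the abstract does not reproduce the underlying 2×2 data), so the numbers are placeholders rather than trial data:

```python
import math

# Hypothetical event counts, chosen only to illustrate the calculation;
# they are NOT the data from the fibromyalgia trials summarised above.
events_maoi, n_maoi = 10, 45        # e.g. nausea/vomiting on MAOIs
events_placebo, n_placebo = 3, 44   # e.g. nausea/vomiting on placebo

risk_maoi = events_maoi / n_maoi            # ~0.222
risk_placebo = events_placebo / n_placebo   # ~0.068

absolute_risk_difference = risk_maoi - risk_placebo
# The NNT (here, strictly, a number needed to harm) is the reciprocal
# of the absolute risk difference, conventionally rounded up.
nnt = math.ceil(1 / abs(absolute_risk_difference))

print(f"Risk difference: {absolute_risk_difference:.3f}; NNT = {nnt}")
# With these made-up counts: risk difference ~0.154, NNT = 7.
```

An absolute difference of roughly 15 percentage points is therefore consistent with the NNT of 7 reported for nausea and vomiting; the NNT of 2 quoted for pain is derived from continuous data by other methods not shown here.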
query-laysum
|
5977
|
[
{
"role": "user",
"content": "Abstract: Background\nVenous leg ulcers are common, chronic wounds caused by venous diseases, with a high recurrence rate and heavy disease burden. Compression therapy (bandages or stockings) is the first choice treatment for venous leg ulcers. However, when ulcers remain unhealed, medication can also be used with or without compression therapy. Sulodexide, a highly purified glycosaminoglycan (a naturally occurring molecule) has antithrombotic and profibrinolytic properties (it reduces the formation of blood clots) as well as anti-inflammatory effects. Sulodexide has been studied as a potential treatment for venous leg ulcers.\nObjectives\nTo assess the efficacy and safety of sulodexide for treating venous leg ulcers.\nSearch methods\nIn July 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE; EBSCO CINAHL; Chinese Biomedical Literature Database (CBM); China National Knowledge Infrastructure Database (CNKI); Wan Fang and VIP. We also searched clinical trials registries to identify ongoing studies, as well as references listed in relevant publications. There were no restrictions based on date of publication, language or study setting.\nSelection criteria\nRandomised controlled trials (RCTs) involving people with a diagnosis of venous leg ulcers which compared sulodexide with placebo or any other drug therapy (such as pentoxifylline, flavonoids, aspirin), with or without compression therapy.\nData collection and analysis\nWe used standard Cochrane methodological procedures. The authors independently selected studies, extracted data and assessed risk of bias. We pooled data to present the risk ratio (RR) with 95% confidence interval (CI), or presented a narrative summary. We assessed overall evidence quality according to the GRADE approach.\nMain results\nWe included four RCTs with a total of 463 participants (aged 42 years to 93 years); one report was only available as a published abstract.\nMeta-analysis of three RCTs suggests an increase in the proportion of ulcers completely healed with sulodexide as an adjuvant to local treatment (including wound care and compression therapy) compared with local treatment alone (rate of complete healing with sulodexide 49.4% compared with 29.8% with local treatment alone; RR 1.66; 95% CI 1.30 to 2.12). This evidence for sulodexide increasing the rate of complete healing is low quality due to risk of bias. It is unclear whether sulodexide is associated with any increase in adverse events (4.4% with sulodexide versus 3.1% with no sulodexide; RR 1.44; 95% CI 0.48 to 4.34). The evidence for adverse events is very low quality, downgraded twice for risk of bias and once for imprecision.\nAuthors' conclusions\nSulodexide may increase the healing of venous ulcers, when used alongside local wound care, however the evidence is only low quality and the conclusion is likely to be affected by new research. It is not clear whether sulodexide is associated with adverse effects. The standard dosage, route and frequency of sulodexide reported in the trials was unclear. Further rigorous, adequately powered RCTs examining the effects of sulodexide on healing, ulcer recurrence, quality of life and costs are necessary.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of evidence for each outcome varied between low and very low, due to risk of bias, and imprecision (that is, for some outcomes, results from only one, small study were available).\nThis plain language summary is up to date as of 1 July 2015."
}
] |
query-laysum
|
5978
|
[
{
"role": "user",
"content": "Abstract: Background\nCarotid endarterectomy may significantly reduce the risk of stroke in people with recently symptomatic, severe carotid artery stenosis. However, there are significant perioperative risks that may be minimised by performing the operation under local rather than general anaesthetics. This is an update of a Cochrane Review first published in 1996, and previously updated in 2004, 2008, and 2013.\nObjectives\nTo determine whether carotid endarterectomy under local anaesthetic: 1) reduces the risk of perioperative stroke and death compared with general anaesthetic; 2) reduces the complication rate (other than stroke) following carotid endarterectomy; and 3) is acceptable to individuals and surgeons.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and two trials registers (to February 2021). We also reviewed reference lists of articles identified.\nSelection criteria\nRandomised controlled trials (RCTs) comparing the use of local anaesthetics to general anaesthetics for people having carotid endarterectomy were eligible.\nData collection and analysis\nThree review authors independently extracted data, assessed risk of bias, and evaluated quality of evidence using the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) tool. We calculated a pooled Peto odds ratio (OR) and corresponding 95% confidence interval (CI) for the following outcomes that occurred within 30 days of surgery: stroke, death, ipsilateral stroke, stroke or death, myocardial infarction, local haemorrhage, and arteries shunted.\nMain results\nWe included 16 RCTs involving 4839 participants, of which 3526 were obtained from the single largest trial (GALA). The main findings from our meta-analysis showed that, within 30 days of operation, neither incidence of stroke nor death were significantly different between local and general anaesthesia. Of these, the incidence of stroke in the local and general anaesthesia groups was 3.2% and 3.5%, respectively (Peto odds ratio (OR) 0.91, 95% confidence interval (CI) 0.66 to 1.26; P = 0.58; 13 studies, 4663 participants; low-quality evidence). The rate of ipsilateral stroke under both types of anaesthesia was 3.1% (Peto OR 1.03, 95% CI 0.71 to 1.48; P = 0.89; 2 studies, 3733 participants; low-quality evidence). The incidence of stroke or death in the local anaesthesia group was 3.5%, while stroke or death incidence was 4.1% in the general anaesthesia group (Peto OR 0.85, 95% CI 0.62 to 1.16; P = 0.31; 11 studies, 4391 participants; low-quality evidence). A lower rate of death was observed in the local anaesthetic group but evidence was of low quality (Peto OR 0.61, 95% CI 0.35 to 1.06; P = 0.08; 12 studies, 4421 participants).\nAuthors' conclusions\nThe incidence of stroke and death were not convincingly different between local and general anaesthesia for people undergoing carotid endarterectomy. The current evidence supports the choice of either approach. Further high-quality studies are still needed as the evidence is of limited reliability.\n\nGiven the provided abstract, please respond to the following query: \"Main limitations of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have either little, or moderate, confidence in these results. The quality of the evidence was reduced because, in most studies, it was possible that researchers collecting information about the outcomes of surgery knew which type of anaesthetic people had been given; this could have influenced their assessments. Also, information from some people who were meant to be included in the studies was left out, which also reduces the quality of the evidence."
}
] |
query-laysum
|
5979
|
[
{
"role": "user",
"content": "Abstract: Background\nVentilator-associated pneumonia (VAP) is a common infectious disease in intensive care units (ICUs). The best diagnostic approach to resolve this condition remains uncertain.\nObjectives\nTo evaluate whether quantitative cultures of respiratory secretions and invasive strategies are effective in reducing mortality in immunocompetent patients with VAP, compared with qualitative cultures and non-invasive strategies. We also considered changes in antibiotic use, length of ICU stay and mechanical ventilation.\nSearch methods\nWe searched CENTRAL (2014, Issue 9), MEDLINE (1966 to October week 2, 2014), EMBASE (1974 to October 2014) and LILACS (1982 to October 2014).\nSelection criteria\nRandomised controlled trials (RCTs) comparing respiratory samples processed quantitatively or qualitatively, obtained by invasive or non-invasive methods from immunocompetent patients with VAP and which analysed the impact of these methods on antibiotic use and mortality rates.\nData collection and analysis\nTwo review authors independently reviewed the trials identified in the search results and assessed studies for suitability, methodology and quality. We analysed data using Review Manager software. We pooled the included studies to yield the risk ratio (RR) for mortality and antibiotic change with 95% confidence intervals (CI).\nMain results\nOf the 5064 references identified from the electronic databases (605 from the updated search in October 2014), five RCTs (1367 patients) met the inclusion criteria. Three studies compared invasive methods using quantitative cultures versus non-invasive methods using qualitative cultures, and we used them to answer the main objective of this review. The other two studies compared invasive versus non-invasive methods, both using quantitative cultures. We combined all five studies to compare invasive versus non-invasive interventions for diagnosing VAP. The studies that compared quantitative and qualitative cultures (1240 patients) showed no statistically significant differences in mortality rates (RR 0.91; 95% CI 0.75 to 1.11). The analysis of all five RCTs showed there was no evidence of reduction in mortality in the invasive group versus the non-invasive group (RR 0.93; 95% CI 0.78 to 1.11). There were no significant differences between the interventions with respect to the number of days on mechanical ventilation, length of ICU stay or antibiotic change.\nAuthors' conclusions\nThere is no evidence that the use of quantitative cultures of respiratory secretions results in reduced mortality, reduced time in ICU and on mechanical ventilation, or higher rates of antibiotic change when compared to qualitative cultures in patients with VAP. We observed similar results when invasive strategies were compared with non-invasive strategies.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to determine whether quantitative cultures of respiratory secretions are effective in reducing mortality in immunocompetent patients with ventilator-associated pneumonia compared with qualitative cultures. We also evaluated changes in antibiotic use, length of intensive care unit (ICU) stay and mechanical ventilation.BackgroundVentilator-associated pneumonia (VAP) is a condition which occurs in patients mechanically ventilated for more than 48 hours, which can significantly increase the mortality of ICU patients. The best method for diagnosing VAP and identifying the causative organism (bacteria) is uncertain. Both invasive and non-invasive techniques are used to obtain samples of respiratory secretions and these can be analysed quantitatively (with a threshold count of the bacterial growth to differentiate between infection and colonisation of the lower airways) or qualitatively (presence or absence of pathogenic germs in the culture). The rationale for using quantitative cultures of respiratory secretions sampled from patients with VAP is to differentiate the infectious organisms (those with a higher concentration) from colonising organisms (those with lower concentration), thereby optimising antibiotic therapy."
}
] |
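The ventilator-associated pneumonia entry above states that the included studies were pooled to yield a risk ratio (RR) for mortality with a 95% confidence interval. As a hedged sketch of how such pooling can be done, the snippet below implements fixed-effect inverse-variance pooling of log risk ratios; the study-level counts are invented placeholders, and Cochrane analyses often use Mantel-Haenszel weighting instead, so this is illustrative only:

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios from 2x2 tables.
# The counts below are hypothetical placeholders, NOT data from the RCTs.
studies = [
    # (deaths_invasive, n_invasive, deaths_noninvasive, n_noninvasive)
    (30, 200, 33, 198),
    (25, 150, 28, 155),
    (40, 210, 41, 205),
]

total_weight = 0.0
weighted_sum = 0.0
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    variance = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of the log RR
    weight = 1 / variance                       # inverse-variance weight
    total_weight += weight
    weighted_sum += weight * log_rr

pooled_log_rr = weighted_sum / total_weight
se = math.sqrt(1 / total_weight)
lower = math.exp(pooled_log_rr - 1.96 * se)
upper = math.exp(pooled_log_rr + 1.96 * se)
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f}, "
      f"95% CI {lower:.2f} to {upper:.2f}")
```

A pooled RR below 1 with a confidence interval crossing 1, as reported in the abstract (RR 0.91, 95% CI 0.75 to 1.11), is exactly the pattern this kind of calculation produces when group risks are similar across studies.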
query-laysum
|
5980
|
[
{
"role": "user",
"content": "Abstract: Background\nUlcerative colitis is an inflammatory condition affecting the colon, with an annual incidence of approximately 10 to 20 per 100,000 people. The majority of people with ulcerative colitis can be put into remission, leaving a group who do not respond to first- or second-line therapies. There is a significant proportion of people who experience adverse effects with current therapies. Consequently, new alternatives for the treatment of ulcerative colitis are constantly being sought. Probiotics are live microbial feed supplements that may beneficially affect the host by improving intestinal microbial balance, enhancing gut barrier function and improving local immune response.\nObjectives\nThe primary objective was to determine the efficacy of probiotics compared to placebo, no treatment, or any other intervention for the maintenance of remission in people with ulcerative colitis. The secondary objective was to assess the occurrence of adverse events associated with the use of probiotics.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and two other databases on 31 October 2019. We contacted authors of relevant studies and manufacturers of probiotics regarding ongoing or unpublished trials that may be relevant to the review, and we searched ClinicalTrials.gov. We also searched references of trials for any additional trials.\nSelection criteria\nRandomised controlled trials (RCTs) that compared probiotics against placebo or any other intervention, in both adults and children, for the maintenance of remission in ulcerative colitis were eligible for inclusion. Maintenance therapy had to be for a minimum of three months when remission has been established by any clinical, endoscopic,histological or radiological relapse as defined by study authors.\nData collection and analysis\nTwo review authors independently conducted data extraction and 'Risk of bias' assessment of included studies. We analysed data using Review Manager 5. We expressed dichotomous and continuous outcomes as risk ratios (RRs) and mean differences (MDs) with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the GRADE methodology.\nMain results\nIn this review, we included 12 studies (1473 randomised participants) that met the inclusion criteria. Participants were mostly adults. The studies compared probiotics to placebo, probiotics to 5-aminosalicylic acid (5-ASA) and a combination of probiotics and 5-ASA to 5-ASA. The studies ranged in length from 12 to 52 weeks. The average age of participants was between 32 and 51, with a range between 18 and 88 years. Seven studies investigated a single bacterial strain, and five studies considered mixed preparations of multiple strains. The risk of bias was high in all except three studies due to selective reporting, incomplete outcome data and lack of blinding. This resulted in low- to very low-certainty of evidence.\nIt is uncertain if there is any difference in occurrence of clinical relapse when probiotics are compared with placebo (RR 0.87, 95% CI 0.63 to 1.18; 4 studies, 361 participants; very low-certainty evidence (downgraded for risk of bias, imbalance in baseline characteristics and imprecision)). 
It is also uncertain whether probiotics lead to a difference in the number of people who maintain clinical remission compared with placebo (RR 1.16, 95% CI 0.98 to 1.37; 2 studies, 141 participants; very low-certainty evidence (downgraded for risk of bias, imbalance in baseline characteristics and imprecision)).\nWhen probiotics are compared with 5-ASA, there may be little or no difference in clinical relapse (RR 1.01, 95% CI 0.84 to 1.22; 2 studies, 452 participants; low-certainty evidence) and maintenance of clinical remission (RR 1.06, 95% CI 0.90 to 1.25; 1 study, 125 participants; low-certainty evidence). It is uncertain if there is any difference in clinical relapse when probiotics, combined with 5-ASA, are compared with 5-ASA alone (RR 1.11, 95% CI 0.66 to 1.87; 2 studies, 242 participants; very low-certainty evidence (downgraded due to risk of bias and imprecision)). There may be little or no difference in maintenance of remission when probiotics, combined with 5-ASA, are compared with 5-ASA alone (RR 1.05, 95% CI 0.89 to 1.24; 1 study, 122 participants; low-certainty evidence).\nWhere reported, most of the studies which compared probiotics with placebo recorded no serious adverse events or withdrawals due to adverse events. For the comparison of probiotics and 5-ASA, one trial reported 11/110 withdrawals due to adverse events with probiotics and 11/112 with 5-ASA (RR 1.02, 95% CI 0.46 to 2.25; 222 participants; very low-certainty evidence). Discontinuation of therapy was due to gastrointestinal symptoms. One study (24 participants) comparing probiotics combined with 5-ASA with 5-ASA alone reported no withdrawals due to adverse events; and two studies reported two withdrawals in the probiotic arm, due to avascular necrosis of both femoral heads and pulmonary thromboembolism (RR 5.29, 95% CI 0.26 to 107.63; 127 participants; very low-certainty evidence).\nHealth-related quality of life and need for additional therapy were reported infrequently.\nAuthors' conclusions\nThe effectiveness of probiotics for the maintenance of remission in ulcerative colitis remains unclear. This is due to low- to very low-certainty evidence from poorly conducted studies, which contribute limited amounts of data from a small number of participants. Future trials comparing probiotics with 5-ASA rather than placebo will better reflect conventional care given to people with ulcerative colitis. Appropriately powered studies with a minimum length of 12 months are needed.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Ulcerative colitis is a chronic disease of the large bowel, which causes inflammation (swelling). Some of the symptoms include tummy pain, diarrhoea and tiredness. Probiotics are living microscopic organisms that are thought to change the growth of bacteria in the bowel and reduce inflammation."
}
] |
query-laysum
|
5981
|
[
{
"role": "user",
"content": "Abstract: Background\nApproximately 0.6% to 4% of cholecystectomies are performed because of gallbladder polyps. The decision to perform cholecystectomy is based on presence of gallbladder polyp(s) on transabdominal ultrasound (TAUS) or endoscopic ultrasound (EUS), or both. These polyps are currently considered for surgery if they grow more than 1 cm. However, non-neoplastic polyps (pseudo polyps) do not need surgery, even when they are larger than 1 cm. True polyps are neoplastic, either benign (adenomas) or (pre)malignant (dysplastic polyps/carcinomas). True polyps need surgery, especially if they are premalignant or malignant. There has been no systematic review and meta-analysis on the accuracy of TAUS and EUS in the diagnosis of gallbladder polyps, true gallbladder polyps, and (pre)malignant polyps.\nObjectives\nTo summarise and compare the accuracy of transabdominal ultrasound (TAUS) and endoscopic ultrasound (EUS) for the detection of gallbladder polyps, for differentiating between true and pseudo gallbladder polyps, and for differentiating between dysplastic polyps/carcinomas and adenomas/pseudo polyps of the gallbladder in adults.\nSearch methods\nWe searched the Cochrane Library, MEDLINE, Embase, Science Citation Index Expanded, and trial registrations (last date of search 09 July 2018). We had no restrictions regarding language, publication status, or prospective or retrospective nature of the studies.\nSelection criteria\nStudies reporting on the diagnostic accuracy data (true positive, false positive, false negative and true negative) of the index test (TAUS or EUS or both) for detection of gallbladder polyps, differentiation between true and pseudo polyps, or differentiation between dysplastic polyps/carcinomas and adenomas/pseudo polyps. We only accepted histopathology after cholecystectomy as the reference standard, except for studies on diagnosis of gallbladder polyp. For the latter studies, we also accepted repeated imaging up to six months by TAUS or EUS as the reference standard.\nData collection and analysis\nTwo authors independently screened abstracts, selected studies for inclusion, and collected data from each study. The quality of the studies was evaluated using the QUADAS-2 tool. The bivariate random-effects model was used to obtain summary estimates of sensitivity and specificity, to compare diagnostic performance of the index tests, and to assess heterogeneity.\nMain results\nA total of 16 studies were included. All studies reported on TAUS and EUS as separate tests and not as a combination of tests. All studies were at high or unclear risk of bias, ten studies had high applicability concerns in participant selection (because of inappropriate participant exclusions) or reference standards (because of lack of follow-up for non-operated polyps), and three studies had unclear applicability concerns in participant selection (because of high prevalence of gallbladder polyps) or index tests (because of lack of details on ultrasound equipment and performance). A meta-analysis directly comparing results of TAUS and EUS in the same population could not be performed because only limited studies executed both tests in the same participants. Therefore, the results below were obtained only from indirect test comparisons. There was significant heterogeneity amongst all comparisons (target conditions) on TAUS and amongst studies on EUS for differentiating true and pseudo polyps.\nDetection of gallbladder polyps: Six studies (16,260 participants) used TAUS. 
We found no studies on EUS. The summary sensitivity and specificity of TAUS for the detection of gallbladder polyps were 0.84 (95% CI 0.59 to 0.95) and 0.96 (95% CI 0.92 to 0.98), respectively. In a cohort of 1000 people, with a 6.4% prevalence of gallbladder polyps, this would result in 37 overdiagnosed and seven missed gallbladder polyps.\nDifferentiation between true polyp and pseudo gallbladder polyp: Six studies (1078 participants) used TAUS; the summary sensitivity was 0.68 (95% CI 0.44 to 0.85) and the summary specificity was 0.79 (95% CI 0.57 to 0.91). Three studies (209 participants) used EUS; the summary sensitivity was 0.85 (95% CI 0.46 to 0.97) and the summary specificity was 0.90 (95% CI 0.78 to 0.96). In a cohort of 1000 participants with gallbladder polyps, with 10% having true polyps, this would result in 189 overdiagnosed and 32 missed true polyps by TAUS, and 90 overdiagnosed and 15 missed true polyps by EUS. There was no evidence of a difference between the diagnostic accuracy of TAUS and EUS (relative sensitivity 1.06, P = 0.70, relative specificity 1.15, P = 0.12).\nDifferentiation between dysplastic polyps/carcinomas and adenomas/pseudo polyps of the gallbladder: Four studies (1,009 participants) used TAUS; the summary sensitivity was 0.79 (95% CI 0.62 to 0.90) and the summary specificity was 0.89 (95% CI 0.68 to 0.97). Three studies (351 participants) used EUS; the summary sensitivity was 0.86 (95% CI 0.76 to 0.92) and the summary specificity was 0.92 (95% CI 0.85 to 0.95). In a cohort of 1000 participants with gallbladder polyps, with 5% having a dysplastic polyp/carcinoma, this would result in 105 overdiagnosed and 11 missed dysplastic polyps/carcinomas by TAUS and 76 overdiagnosed and seven missed dysplastic polyps/carcinomas by EUS. There was no evidence of a difference between the diagnostic accuracy of TAUS and EUS (log likelihood test P = 0.74).\nAuthors' conclusions\nAlthough TAUS seems quite good at discriminating between gallbladder polyps and no polyps, it is less accurate in determining whether a polyp is a true polyp or a pseudo polyp, and whether it is a dysplastic polyp/carcinoma or an adenoma/pseudo polyp. In practice, this would lead to both unnecessary surgeries for pseudo polyps and missed cases of true polyps, dysplastic polyps, and carcinomas. There was insufficient evidence that EUS is better than TAUS in differentiating between true and pseudo polyps and between dysplastic polyps/carcinomas and adenomas/pseudo polyps. The conclusions are based on heterogeneous studies with unclear criteria for diagnosis of the target conditions and studies at high or unclear risk of bias. Therefore, results should be interpreted with caution. Further studies of high methodological quality, with clearly stated criteria for diagnosis of gallbladder polyps, true polyps, and dysplastic polyps/carcinomas are needed to accurately determine diagnostic accuracy of EUS and TAUS.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All studies were either at high or unclear risk of bias and 13 studies had either high or unclear applicability concerns. This may undermine the validity of the studies."
}
] |
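The 'cohort of 1000' projections in the gallbladder polyp entry above follow directly from the summary sensitivity, specificity and assumed prevalence. The short sketch below (the function name is ours, written as an illustration of the arithmetic rather than the review's analysis code) reproduces the true-versus-pseudo-polyp figures of 189 overdiagnosed/32 missed for TAUS and 90 overdiagnosed/15 missed for EUS:

```python
def cohort_projection(sensitivity, specificity, prevalence, cohort=1000):
    """Project overdiagnosed and missed cases per `cohort` people."""
    with_condition = cohort * prevalence
    without_condition = cohort - with_condition
    overdiagnosed = without_condition * (1 - specificity)  # false positives
    missed = with_condition * (1 - sensitivity)            # false negatives
    return round(overdiagnosed), round(missed)

# Differentiating true from pseudo polyps, with 10% of 1000 people
# having true polyps (summary estimates taken from the abstract above):
print("TAUS:", cohort_projection(0.68, 0.79, 0.10))  # -> (189, 32)
print("EUS: ", cohort_projection(0.85, 0.90, 0.10))  # -> (90, 15)
```

The dysplastic-polyp figures follow from the same formula with 5% prevalence, up to the rounding convention used in the abstract.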
query-laysum
|
5982
|
[
{
"role": "user",
"content": "Abstract: Background\nVarious approaches to physical rehabilitation may be used after stroke, and considerable controversy and debate surround the effectiveness of relative approaches. Some physiotherapists base their treatments on a single approach; others use a mixture of components from several different approaches.\nObjectives\nTo determine whether physical rehabilitation approaches are effective in recovery of function and mobility in people with stroke, and to assess if any one physical rehabilitation approach is more effective than any other approach.\nFor the previous versions of this review, the objective was to explore the effect of 'physiotherapy treatment approaches' based on historical classifications of orthopaedic, neurophysiological or motor learning principles, or on a mixture of these treatment principles. For this update of the review, the objective was to explore the effects of approaches that incorporate individual treatment components, categorised as functional task training, musculoskeletal intervention (active), musculoskeletal intervention (passive), neurophysiological intervention, cardiopulmonary intervention, assistive device or modality.\nIn addition, we sought to explore the impact of time after stroke, geographical location of the study, dose of the intervention, provider of the intervention and treatment components included within an intervention.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched December 2012), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 12, 2012), MEDLINE (1966 to December 2012), EMBASE (1980 to December 2012), AMED (1985 to December 2012) and CINAHL (1982 to December 2012). We searched reference lists and contacted experts and researchers who have an interest in stroke rehabilitation.\nSelection criteria\nRandomised controlled trials (RCTs) of physical rehabilitation approaches aimed at promoting the recovery of function or mobility in adult participants with a clinical diagnosis of stroke. Outcomes included measures of independence in activities of daily living (ADL), motor function, balance, gait velocity and length of stay. We included trials comparing physical rehabilitation approaches versus no treatment, usual care or attention control and those comparing different physical rehabilitation approaches.\nData collection and analysis\nTwo review authors independently categorised identified trials according to the selection criteria, documented their methodological quality and extracted the data.\nMain results\nWe included a total of 96 studies (10,401 participants) in this review. More than half of the studies (50/96) were carried out in China. Generally the studies were heterogeneous, and many were poorly reported.\nPhysical rehabilitation was found to have a beneficial effect, as compared with no treatment, on functional recovery after stroke (27 studies, 3423 participants; standardised mean difference (SMD) 0.78, 95% confidence interval (CI) 0.58 to 0.97, for Independence in ADL scales), and this effect was noted to persist beyond the length of the intervention period (nine studies, 540 participants; SMD 0.58, 95% CI 0.11 to 1.04). Subgroup analysis revealed a significant difference based on dose of intervention (P value < 0.0001, for independence in ADL), indicating that a dose of 30 to 60 minutes per day delivered five to seven days per week is effective. This evidence principally arises from studies carried out in China. 
Subgroup analyses also suggest significant benefit associated with a shorter time since stroke (P value 0.003, for independence in ADL).\nWe found physical rehabilitation to be more effective than usual care or attention control in improving motor function (12 studies, 887 participants; SMD 0.37, 95% CI 0.20 to 0.55), balance (five studies, 246 participants; SMD 0.31, 95% CI 0.05 to 0.56) and gait velocity (14 studies, 1126 participants; SMD 0.46, 95% CI 0.32 to 0.60). Subgroup analysis demonstrated a significant difference based on dose of intervention (P value 0.02 for motor function), indicating that a dose of 30 to 60 minutes delivered five to seven days a week provides significant benefit. Subgroup analyses also suggest significant benefit associated with a shorter time since stroke (P value 0.05, for independence in ADL).\nNo one physical rehabilitation approach was more (or less) effective than any other approach in improving independence in ADL (eight studies, 491 participants; test for subgroup differences: P value 0.71) or motor function (nine studies, 546 participants; test for subgroup differences: P value 0.41). These findings are supported by subgroup analyses carried out for comparisons of intervention versus no treatment or usual care, which identified no significant effects of different treatment components or categories of interventions.\nAuthors' conclusions\nPhysical rehabilitation, comprising a selection of components from different approaches, is effective for recovery of function and mobility after stroke. Evidence related to dose of physical therapy is limited by substantial heterogeneity and does not support robust conclusions. No one approach to physical rehabilitation is any more (or less) effective in promoting recovery of function and mobility after stroke. Therefore, evidence indicates that physical rehabilitation should not be limited to compartmentalised, named approaches, but rather should comprise clearly defined, well-described, evidence-based physical treatments, regardless of historical or philosophical origin.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review brings together evidence confirming that physical rehabilitation (often delivered by a physiotherapist, physical therapist or rehabilitation therapist) can improve function, balance and walking after stroke. It appears to be most beneficial when the therapist selects a mixture of different treatments for an individual patient from a wide range of available treatments.\nWe were able to combine the results from 27 studies (3243 stroke survivors) that compared physical rehabilitation versus no treatment. Twenty-five of these 27 studies were carried out in China. Results showed that physical rehabilitation improves functional recovery, and that this improvement may last long-term. When we looked at studies that compared additional physical rehabilitation versus usual care or a control intervention, we found evidence to show that the additional physical treatment improved motor function (12 studies, 887 stroke survivors), standing balance (five studies, 246 stroke survivors) and walking speed (14 studies, 1126 stroke survivors). Very limited evidence suggests that, for comparisons of physical rehabilitation versus no treatment and versus usual care, treatment that appeared to be effective was given between 30 and 60 minutes per day, five to seven days per week, but further research is needed to confirm this. We also found evidence of greater benefit associated with a shorter time since stroke, but again further research is needed to confirm this.\nWe found evidence showing that no one physical rehabilitation approach was more effective than any other approach. This finding means that physiotherapists should choose each individual patient's treatment according to the evidence available for that specific treatment, and should not limit their practice to a single 'named' approach."
}
] |
query-laysum
|
5983
|
[
{
"role": "user",
"content": "Abstract: Background\nThis review replaces part of an earlier review that evaluated oxycodone for both neuropathic pain and fibromyalgia, which has now been split into separate reviews for the two conditions. This review will consider pain in fibromyalgia only.\nOpioid drugs are commonly used to treat fibromyalgia, but they may not be beneficial for people with this condition. Most reviews have examined all opioids together. This review sought evidence specifically for oxycodone, at any dose, and by any route of administration.\nObjectives\nTo assess the analgesic efficacy and adverse events of oxycodone for treating pain in fibromyalgia in adults.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and EMBASE for randomised controlled trials from inception to 25 July 2016. We also searched the reference lists of retrieved studies and reviews, and searched online clinical trial registries.\nSelection criteria\nWe planned to include randomised, double-blind trials of eight weeks' duration or longer, comparing oxycodone (alone or in fixed-dose combination with naloxone) with placebo or another active treatment. We did not include observational studies.\nData collection and analysis\nThe plan was for two independent review authors to extract data and assess trial quality and potential bias. Where pooled analysis was possible, we planned to use dichotomous data to calculate risk ratio and numbers needed to treat for one additional event, using standard methods.\nMain results\nNo study satisfied the inclusion criteria. Effects of interventions were not assessed as there were no included studies. We have only very low quality evidence and are very uncertain about estimates of benefit and harm.\nAuthors' conclusions\nThere is no randomised trial evidence to support or refute the suggestion that oxycodone, alone or in combination with naloxone, reduces pain in fibromyalgia.\n\nGiven the provided abstract, please respond to the following query: \"Bottom line\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no good evidence to support or refute the suggestion that oxycodone, alone or in combination with naloxone, reduces pain in fibromyalgia."
}
] |
query-laysum
|
5984
|
[
{
"role": "user",
"content": "Abstract: Background\nGlobally, cardiovascular diseases (CVD, that is, coronary heart (CHD) and circulatory diseases combined) contribute to 31% of all deaths, more than any other cause. In line with guidance in the UK and globally, cardiac rehabilitation programmes are widely offered to people with heart disease, and include psychosocial, educational, health behaviour change, and risk management components. Social support and social network interventions have potential to improve outcomes of these programmes, but whether and how these interventions work is poorly understood.\nObjectives\nTo assess the effectiveness of social network and social support interventions to support cardiac rehabilitation and secondary prevention in the management of people with heart disease. The comparator was usual care with no element of social support (i.e. secondary prevention alone or with cardiac rehabilitation).\nSearch methods\nWe undertook a systematic search of the following databases on 9 August 2022: CENTRAL, MEDLINE, Embase, and the Web of Science. We also searched ClinicalTrials.gov and the WHO ICTRP. We reviewed the reference lists of relevant systematic reviews and included primary studies, and we contacted experts to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of social network or social support interventions for people with heart disease. We included studies regardless of their duration of follow-up, and included those reported as full text, published as abstract only, and unpublished data.\nData collection and analysis\nUsing Covidence, two review authors independently screened all identified titles. We retrieved full-text study reports and publications marked ‘included’, and two review authors independently screened these, and conducted data extraction. Two authors independently assessed risk of bias, and assessed the certainty of the evidence using GRADE. Primary outcomes were all-cause mortality, cardiovascular-related mortality, all-cause hospital admission, cardiovascular-related hospital admission, and health-related quality of life (HRQoL) measured at > 12 months follow-up.\nMain results\nWe included 54 RCTs (126 publications) reporting data for a total of 11,445 people with heart disease. The median follow-up was seven months and median sample size was 96 participants. Of included study participants, 6414 (56%) were male, and the mean age ranged from 48.6 to 76.3 years. Studies included heart failure (41%), mixed cardiac disease (31%), post-myocardial infarction (13%), post-revascularisation (7%), CHD (7%), and cardiac X syndrome (1%) patients. The median intervention duration was 12 weeks. We identified notable diversity in social network and social support interventions, across what was delivered, how, and by whom.\nWe assessed risk of bias (RoB) in primary outcomes at > 12 months follow-up as either 'low' (2/15 studies), 'some concerns' (11/15), or 'high' (2/15). 'Some concerns' or 'high' RoB resulted from insufficient detail on blinding of outcome assessors, data missingness, and absence of pre-agreed statistical analysis plans. In particular, HRQoL outcomes were at high RoB. 
Using the GRADE method, we assessed the certainty of evidence as low or very low across outcomes.\nSocial network or social support interventions had no clear effect on all-cause mortality (risk ratio (RR) 0.75, 95% confidence interval (CI) 0.49 to 1.13, I2 = 40%) or cardiovascular-related mortality (RR 0.85, 95% CI 0.66 to 1.10, I2 = 0%) at > 12 months follow-up. The evidence suggests that social network or social support interventions for heart disease may result in little to no difference in all-cause hospital admission (RR 1.03, 95% CI 0.86 to 1.22, I2 = 0%), or cardiovascular-related hospital admission (RR 0.92, 95% CI 0.77 to 1.10, I2 = 16%), with a low level of certainty. The evidence was very uncertain regarding the impact of social network interventions on HRQoL at > 12 months follow-up (SF-36 physical component score: mean difference (MD) 31.53, 95% CI -28.65 to 91.71, I2 = 100%, 2 trials/comparisons, 166 participants; mental component score MD 30.62, 95% CI -33.88 to 95.13, I2 = 100%, 2 trials/comparisons, 166 participants).\nRegarding secondary outcomes, there may be a decrease in both systolic and diastolic blood pressure with social network or social support interventions. We found no evidence of an impact on psychological well-being, smoking, cholesterol, myocardial infarction, revascularisation, return to work/education, social isolation or connectedness, patient satisfaction, or adverse events.\nResults of meta-regression did not suggest that the intervention effect was related to risk of bias, intervention type, duration, setting, delivery mode, population type, study location, participant age, or percentage of male participants.\nAuthors' conclusions\nWe found no strong evidence for the effectiveness of such interventions, although modest effects were identified in relation to blood pressure. While the data presented in this review are indicative of potential for positive effects, the review also highlights the lack of sufficient evidence to conclusively support such interventions for people with heart disease. Further high-quality, well-reported RCTs are required to fully explore the potential of social support interventions in this context. Future reporting of social network and social support interventions for people with heart disease needs to be significantly clearer, and more effectively theorised, in order to ascertain causal pathways and effect on outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no clear evidence to suggest that social support or social network programmes help people with heart disease.\nThese programmes may produce some improvement in quality of life and blood pressure.\nOur review suggests that, while social support or social network interventions may have potential to help people with heart disease, more high-quality, clearly reported trials are needed to prove any effectiveness."
}
] |
query-laysum
|
5985
|
[
{
"role": "user",
"content": "Abstract: Background\nFluvoxamine is a selective serotonin reuptake inhibitor (SSRI) that has been approved for the treatment of depression, obsessive–compulsive disorder, and a variety of anxiety disorders; it is available as an oral preparation. Fluvoxamine has not been approved for the treatment of infections, but has been used in the early treatment of people with mild to moderate COVID-19. As there are only a few effective therapies for people with COVID-19 in the community, a thorough understanding of the current evidence regarding the efficacy and safety of fluvoxamine as an anti-inflammatory and possible anti-viral treatment for COVID-19, based on randomised controlled trials (RCTs), is needed.\nObjectives\nTo assess the efficacy and safety of fluvoxamine in addition to standard care, compared to standard care (alone or with placebo), or any other active pharmacological comparator with proven efficacy for the treatment of COVID-19 outpatients and inpatients.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (including Cochrane Central Register of Controlled Trials, MEDLINE, Embase, ClinicalTrials.gov, WHO ICTRP, medRxiv), Web of Science and WHO COVID-19 Global literature on COVID-19 to identify completed and ongoing studies up to 1 February 2022.\nSelection criteria\nWe included RCTs that compared fluvoxamine in addition to standard care (also including no intervention), with standard care (alone or with placebo), or any other active pharmacological comparator with proven efficacy in clinical trials for the treatment of people with confirmed COVID-19, irrespective of disease severity, in both inpatients and outpatients. Co-interventions needed to be the same in both study arms. We excluded studies comparing fluvoxamine to other pharmacological interventions with unproven efficacy.\nData collection and analysis\nWe assessed risk of bias of primary outcomes using the Cochrane Risk of Bias 2 tool for RCTs. We used GRADE to rate the certainty of evidence to treat people with asymptomatic to severe COVID-19 for the primary outcomes including mortality, clinical deterioration, clinical improvement, quality of life, serious adverse events, adverse events of any grade, and suicide or suicide attempt.\nMain results\nWe identified two completed studies with a total of 1649 symptomatic participants. One study was conducted in the USA (study with 152 participants, 80 and 72 participants per study arm) and the other study in Brazil (study with 1497 high-risk participants for progression to severe disease, 741 and 756 participants per study arm) among outpatients with mild COVID-19. Both studies were double-blind, placebo-controlled trials in which participants were prescribed 100 mg fluvoxamine two or three times daily for a maximum of 15 days.\nWe identified five ongoing studies and two studies awaiting classification (due to translation issues, and due to missing published data). 
We found no published studies comparing fluvoxamine to other pharmacological interventions of proven efficacy.\nWe assessed both included studies to have an overall high risk of bias.\nFluvoxamine for the treatment of COVID-19 in inpatients\nWe did not identify any completed studies of inpatients.\nFluvoxamine for the treatment of COVID-19 in outpatients\nFluvoxamine in addition to standard care may slightly reduce all-cause mortality at day 28 (RR 0.69, 95% CI 0.38 to 1.27; risk difference (RD) 9 per 1000; 2 studies, 1649 participants; low-certainty evidence), and may reduce clinical deterioration defined as all-cause hospital admission or death before hospital admission (RR 0.55, 95% CI 0.16 to 1.89; RD 57 per 1000; 2 studies, 1649 participants; low-certainty evidence). We are very uncertain regarding the effect of fluvoxamine on serious adverse events (RR 0.56, 95% CI 0.15 to 2.03; RD 54 per 1000; 2 studies, 1649 participants; very low-certainty evidence) or adverse events of any grade (RR 1.06, 95% CI 0.82 to 1.37; RD 7 per 1000; 2 studies, 1649 participants; very low-certainty evidence).\nNeither of the studies reported on symptom resolution (clinical improvement), quality of life or suicide/suicide attempt.\nAuthors' conclusions\nBased on low-certainty evidence, fluvoxamine may slightly reduce all-cause mortality at day 28, and may reduce the risk of admission to hospital or death in outpatients with mild COVID-19. However, we are very uncertain regarding the effect of fluvoxamine on serious adverse events, or any adverse events.\nIn accordance with the living approach of this review, we will continually update our search and include eligible trials as they arise, to complete any gaps in the evidence.\n\nGiven the provided abstract, please respond to the following query: \"What is fluvoxamine?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Fluvoxamine is a type of medication known as a selective serotonin reuptake inhibitor (SSRI), available in tablet form. Recent research has found that fluvoxamine may have an effect on COVID-19. When the immune system fights the virus, the lungs and airways can become inflamed, causing breathing difficulties. Fluvoxamine could help reduce this inflammation, potentially reducing the risk of developing severe COVID-19 and its associated lung symptoms through its possible anti-inflammatory and anti-viral effects. We know that most people do not experience any serious side effects with fluvoxamine when it is taken as an antidepressant. Some people can, however, experience the following common side effects, especially when starting the medication: nausea, anxiety or restlessness, insomnia, or diarrhoea, and in rare cases, suicidal ideation."
}
] |
query-laysum
|
5986
|
[
{
"role": "user",
"content": "Abstract: Background\nMethylxanthines, including caffeine, theophylline, and aminophylline, work as stimulants of the respiratory drive, and decrease apnea of prematurity, a developmental disorder common in preterm infants. In particular, caffeine has been reported to improve important clinical outcomes, including bronchopulmonary dysplasia (BPD) and neurodevelopmental disability. However, there is uncertainty regarding the efficacy of caffeine compared to other methylxanthines.\nObjectives\nTo assess the effects of caffeine compared to aminophylline or theophylline in preterm infants at risk of apnea, with apnea, or in the peri-extubation phase.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Epistemonikos, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and clinicaltrials.gov in February 2023. We also checked the reference lists of relevant articles to identify additional studies.\nSelection criteria\nStudies: randomized controlled trials (RCTs) and quasi-RCTs\nParticipants: infants born before 34 weeks of gestation for prevention and extubation trials, and infants born before 37 weeks of gestation for treatment trials\nIntervention and comparison: caffeine versus theophylline or caffeine versus aminophylline. We included all doses and duration of treatment.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We evaluated treatment effects using a fixed-effect model with risk ratio (RR), risk difference (RD), and 95% confidence intervals (CI) for categorical data, and mean, standard deviation, and mean difference for continuous data. We used the GRADE approach to evaluate the certainty of evidence.\nMain results\nWe included 22 trials enrolling 1776 preterm infants. The indication for treatment was prevention of apnea in three studies, treatment of apnea in 13 studies, and extubation management in three studies. In three studies, there were multiple indications for treatment, and in one study, the indication for treatment was unclear. In 19 included studies, the infants had a mean gestational age between 28 and 32 weeks and a mean birth weight between 1000 g and 1500 g. One study's participants had a mean gestational age of more than 32 weeks, and two studies had participants with a mean birth weight of 1500 g or more.\nCaffeine administrated for any indication may result in little to no difference in all-cause mortality prior to hospital discharge compared to other methylxanthines (RR 1.12, 95% CI 0.68 to 1.84; RD 0.02, 95% CI -0.05 to 0.08; 2 studies, 396 infants; low-certainty evidence). Only one study enrolling 79 infants reported components of the outcome moderate to severe neurodevelopmental disability at 18 to 26 months. The evidence is very uncertain about the effect of caffeine on cognitive developmental delay compared to other methylxanthines (RR 0.17, 95% CI 0.02 to 1.37; RD -0.12, 95% CI -0.24 to 0.01; 1 study, 79 infants; very low-certainty evidence). The evidence is very uncertain about the effect of caffeine on language developmental delay compared to other methylxanthines (RR 0.76, 95% CI 0.37 to 1.58; RD -0.07, 95% CI -0.27 to 0.12; 1 study, 79 infants; very low-certainty evidence). The evidence is very uncertain about the effect of caffeine on motor developmental delay compared to other methylxanthines (RR 0.50, 95% CI 0.13 to 1.96; RD -0.07, 95% CI -0.21 to 0.07; 1 study, 79 infants; very low-certainty evidence). 
The evidence is very uncertain about the effect of caffeine on visual and hearing impairment compared to other methylxanthines. At 24 months of age, visual impairment was seen in 8 out of 11 infants and 10 out of 11 infants in the caffeine and other methylxanthines groups, respectively. Hearing impairment was seen in 2 out of 5 infants and 1 out of 1 infant in the caffeine and other methylxanthines groups, respectively. No studies reported the outcomes cerebral palsy, gross motor disability, and mental development. Compared to other methylxanthines, caffeine may result in little to no difference in BPD/chronic lung disease, defined as 28 days of oxygen exposure at 36 weeks' postmenstrual age (RR 1.40, 95% CI 0.92 to 2.11; RD 0.04, 95% CI -0.01 to 0.09; 3 studies, 481 infants; low-certainty evidence). The evidence is very uncertain about the effect of caffeine on side effects (tachycardia, agitation, or feed intolerance) leading to a reduction in dose or withholding of methylxanthines compared to other methylxanthines (RR 0.17, 95% CI 0.02 to 1.32; RD -0.29, 95% CI -0.57 to -0.02; 1 study, 30 infants; very low-certainty evidence). Caffeine may result in little to no difference in duration of hospital stay compared to other methylxanthines (median (interquartile range): caffeine 43 days (27.5 to 61.5); other methylxanthines 39 days (28 to 55)). No studies reported the outcome seizures.\nAuthors' conclusions\nAlthough caffeine has been shown to improve important clinical outcomes, in the few studies that compared caffeine to other methylxanthines, there might be little to no difference in mortality, bronchopulmonary dysplasia, and duration of hospital stay. The evidence is very uncertain about the effect of caffeine compared to other methylxanthines on long-term development and side effects.\nAlthough caffeine or other methylxanthines are widely used in preterm infants, there is little direct evidence to support the choice of which methylxanthine to use. More research is needed, especially on extremely preterm infants born before 28 weeks of gestation. Data from four ongoing studies might provide more evidence on the effects of caffeine or other methylxanthines.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• Caffeine and other similar substances belong to a group of stimulants called methylxanthines. They are often used to prevent and treat apnea – when breathing repeatedly stops and starts – in newborn babies.\n• Caffeine may result in little to no difference in the frequency of death in babies born too early compared to other methylxanthines.\n• More studies are needed, especially in babies born extremely early, since these babies tend to be the most poorly."
}
] |
query-laysum
|
5987
|
[
{
"role": "user",
"content": "Abstract: Background\nA new Cochrane Review entitled 'Cognitive behavioural therapy for anxiety disorders in children and adolescents' was published on 16 November 2020 which supersedes this publication.\nCitation: James AC, Reardon T, Soler A, James G, Creswell C. Cognitive behavioural therapy for anxiety disorders in children and adolescents. Cochrane Database of Systematic Reviews 2020, Issue 11. Art. No.: CD013162. DOI: 10.1002/14651858.CD013162.pub2.\nA previous Cochrane review (James 2005) showed that cognitive behavioural therapy (CBT) was effective in treating childhood anxiety disorders; however, questions remain regarding (1) the relative efficacy of CBT versus non-CBT active treatments; (2) the relative efficacy of CBT versus medication and the combination of CBT and medication versus placebo; and (3) the long-term effects of CBT.\nObjectives\nTo examine (1) whether CBT is an effective treatment for childhood and adolescent anxiety disorders in comparison with (a) wait-list controls; (b) active non-CBT treatments (i.e. psychological placebo, bibliotherapy and treatment as usual (TAU)); and (c) medication and the combination of medication and CBT versus placebo; and (2) the long-term effects of CBT.\nSearch methods\nSearches for this review included the Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Depression, Anxiety and Neurosis Group Register, which consists of relevant randomised controlled trials from the bibliographic databases-The Cochrane Library (1970 to July 2012), EMBASE, (1970 to July 2012) MEDLINE (1970 to July 2012) and PsycINFO (1970 to July 2012).\nSelection criteria\nAll randomised controlled trials (RCTs) of CBT versus waiting list, active control conditions, TAU or medication were reviewed. All participants must have met the criteria of the Diagnostic and Statistical Manual (DSM) or the International Classification of Diseases (ICD) for an anxiety diagnosis, excluding simple phobia, obsessive-compulsive disorder, post-traumatic stress disorder and elective mutism.\nData collection and analysis\nThe methodological quality of included trials was assessed by three reviewers independently. For the dichotomous outcome of remission of anxiety diagnosis, the odds ratio (OR) with 95% confidence interval (CI) based on the random-effects model, with pooling of data via the inverse variance method of weighting, was used. Significance was set at P < 0.05. Continuous data on each child’s anxiety symptoms were pooled using the standardised mean difference (SMD).\nMain results\nForty-one studies consisting of 1806 participants were included in the analyses. The studies involved children and adolescents with anxiety of mild to moderate severity in university and community clinics and school settings. For the primary outcome of remission of any anxiety diagnosis for CBT versus waiting list controls, intention-to-treat (ITT) analyses with 26 studies and 1350 participants showed an OR of 7.85 (95% CI 5.31 to 11.60, Z = 10.34, P < 0.0001), but with evidence of moderate heterogeneity (P = 0.05, I² = 30%). The number needed to treat (NNT) was 3.0 (95% CI 1.75 to 3.03). No difference in outcome was noted between individual, group and family/parental formats. ITT analyses revealed that CBT was no more effective than non-CBT active control treatments (six studies, 426 participants) or TAU in reducing anxiety diagnoses (two studies, 88 participants). 
The few controlled follow-up studies (n = 4) indicate that treatment gains in the remission of anxiety diagnosis are not statistically significant.\nAuthors' conclusions\nCognitive behavioural therapy is an effective treatment for childhood and adolescent anxiety disorders; however, the evidence suggesting that CBT is more effective than active controls, TAU or medication at follow-up is limited and inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Many children and young people suffer from anxiety. Children and young people with anxiety are more likely to have difficulty with friendships, family life and school. Treatments for children and young people with anxiety can help to prevent them from developing mental health problems or drug and alcohol misuse in later life. Talking therapies such as cognitive behavioural therapy (CBT) can help children and young people to deal with anxiety by using new ways of thinking. Many parents and children prefer to try talking therapies rather than medication such as antidepressants."
}
] |
query-laysum
|
5988
|
[
{
"role": "user",
"content": "Abstract: Background\nInterleukin-1 (IL-1) blocking agents have been used for treating severe coronavirus disease 2019 (COVID-19), on the premise that their immunomodulatory effect might be beneficial in people with COVID-19.\nObjectives\nTo assess the effects of IL-1 blocking agents compared with standard care alone or with placebo on effectiveness and safety outcomes in people with COVID-19.\nWe will update this assessment regularly.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register and the COVID-19 L-OVE Platform (search date 5 November 2021). These sources are maintained through regular searches of MEDLINE, Embase, CENTRAL, trial registers and other sources. We also checked the World Health Organization International Clinical Trials Registry Platform, regulatory agency websites, Retraction Watch (search date 3 November 2021).\nSelection criteria\nWe included randomised controlled trials (RCTs) evaluating IL-1 blocking agents compared with standard care alone or with placebo for people with COVID-19, regardless of disease severity.\nData collection and analysis\nWe followed Cochrane methodology. The protocol was amended to reduce the number of outcomes considered. Two researchers independently screened and extracted data and assessed the risk of bias with the Cochrane Risk of Bias 2 tool. We rated the certainty of evidence using the GRADE approach for the critical outcomes of clinical improvement (Day 28; ≥ D60); WHO Clinical Progression Score of level 7 or above (i.e. the proportion of participants with mechanical ventilation +/- additional organ support OR death) (D28; ≥ D60); all-cause mortality (D28; ≥ D60); incidence of any adverse events; and incidence of serious adverse events.\nMain results\nWe identified four RCTs of anakinra (three published in peer-reviewed journals, one reported as a preprint) and two RCTs of canakinumab (published in peer-reviewed journals). All trials were multicentre (2 to 133 centres). Two trials stopped early (one due to futility and one as the trigger for inferiority was met). The median/mean age range varied from 58 to 68 years; the proportion of men varied from 58% to 77%. All participants were hospitalised; 67% to 100% were on oxygen at baseline but not intubated; between 0% and 33% were intubated at baseline. 
We identified a further 16 registered trials with no results available, of which 15 assessed anakinra (four completed, four terminated, five ongoing, three not recruiting) and one (completed) trial assessed canakinumab.\nEffectiveness of anakinra for people with COVID-19\nAnakinra probably results in little or no increase in clinical improvement at D28 (risk ratio (RR) 1.08, 95% confidence interval (CI) 0.97 to 1.20; 3 RCTs, 837 participants; absolute effect: 59 more per 1000 (from 22 fewer to 147 more); moderate-certainty evidence).\nThe evidence is uncertain about an effect of anakinra on 1) the proportion of participants with a WHO Clinical Progression Score of level 7 or above at D28 (RR 0.67, 95% CI 0.36 to 1.22; 2 RCTs, 722 participants; absolute effect: 55 fewer per 1000 (from 107 fewer to 37 more); low-certainty evidence) and ≥ D60 (RR 0.54, 95% CI 0.30 to 0.96; 1 RCT, 606 participants; absolute effect: 47 fewer per 1000 (from 72 fewer to 4 fewer); low-certainty evidence); and 2) all-cause mortality at D28 (RR 0.69, 95% CI 0.34 to 1.39; 2 RCTs, 722 participants; absolute effect: 32 fewer per 1000 (from 68 fewer to 40 more); low-certainty evidence).\nThe evidence is very uncertain about an effect of anakinra on 1) the proportion of participants with clinical improvement at ≥ D60 (RR 0.93, 95% CI 0.78 to 1.12; 1 RCT, 115 participants; absolute effect: 59 fewer per 1000 (from 186 fewer to 102 more); very low-certainty evidence); and 2) all-cause mortality at ≥ D60 (RR 1.03, 95% CI 0.68 to 1.56; 4 RCTs, 1633 participants; absolute effect: 8 more per 1000 (from 84 fewer to 147 more); very low-certainty evidence).\nSafety of anakinra for people with COVID-19\nAnakinra probably results in little or no increase in adverse events (RR 1.02, 95% CI 0.94 to 1.11; 2 RCTs, 722 participants; absolute effect: 14 more per 1000 (from 43 fewer to 78 more); moderate-certainty evidence).\nThe evidence is uncertain regarding an effect of anakinra on serious adverse events (RR 0.95, 95% CI 0.58 to 1.56; 2 RCTs, 722 participants; absolute effect: 12 fewer per 1000 (from 104 fewer to 138 more); low-certainty evidence).\nEffectiveness of canakinumab for people with COVID-19\nCanakinumab probably results in little or no increase in clinical improvement at D28 (RR 1.05, 95% CI 0.96 to 1.14; 2 RCTs, 499 participants; absolute effect: 42 more per 1000 (from 33 fewer to 116 more); moderate-certainty evidence).\nThe evidence of an effect of canakinumab is uncertain on 1) the proportion of participants with a WHO Clinical Progression Score of level 7 or above at D28 (RR 0.72, 95% CI 0.44 to 1.20; 2 RCTs, 499 participants; absolute effect: 35 fewer per 1000 (from 69 fewer to 25 more); low-certainty evidence); and 2) all-cause mortality at D28 (RR 0.75, 95% CI 0.39 to 1.42; 2 RCTs, 499 participants; absolute effect: 20 fewer per 1000 (from 48 fewer to 33 more); low-certainty evidence).\nThe evidence is very uncertain about an effect of canakinumab on all-cause mortality at ≥ D60 (RR 0.55, 95% CI 0.16 to 1.91; 1 RCT, 45 participants; absolute effect: 112 fewer per 1000 (from 210 fewer to 227 more); very low-certainty evidence).\nSafety of canakinumab for people with COVID-19\nCanakinumab probably results in little or no increase in adverse events (RR 1.02, 95% CI 0.86 to 1.21; 1 RCT, 454 participants; absolute effect: 11 more per 1000 (from 74 fewer to 111 more); moderate-certainty evidence).\nThe evidence of an effect of canakinumab on serious adverse events is uncertain (RR 0.80, 95% CI 0.57 to 1.13; 2 RCTs, 499 
participants; absolute effect: 44 fewer per 1000 (from 94 fewer to 28 more); low-certainty evidence).\nAuthors' conclusions\nOverall, we did not find evidence for an important beneficial effect of IL-1 blocking agents. The evidence is uncertain or very uncertain for several outcomes. Sixteen trials of anakinra and canakinumab with no results are currently registered, of which four are completed, and four terminated. The findings of this review are updated on the COVID-NMA platform (covid-nma.com).\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that assessed the effects of IL-1 blockers to treat people with COVID-19 compared with standard care alone or with placebo. People in the studies could have suspected or confirmed COVID-19 of any severity (mild, moderate or severe), and be any age or sex.\nWe compared and summarised the results of the studies and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
] |
query-laysum
|
5989
|
[
{
"role": "user",
"content": "Abstract: Background\nDespite substantial improvements in myocardial preservation strategies, coronary artery bypass grafting (CABG) is still associated with severe complications. It has been reported that remote ischaemic preconditioning (RIPC) reduces reperfusion injury in people undergoing cardiac surgery and improves clinical outcome. However, there is a lack of synthesised information and a need to review the current evidence from randomised controlled trials (RCTs).\nObjectives\nTo assess the benefits and harms of remote ischaemic preconditioning in people undergoing coronary artery bypass grafting, with or without valve surgery.\nSearch methods\nIn May 2016 we searched CENTRAL, MEDLINE, Embase and Web of Science. We also conducted a search of ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP). We also checked reference lists of included studies. We did not apply any language restrictions.\nSelection criteria\nWe included RCTs in which people scheduled for CABG (with or without valve surgery) were randomly assigned to receive RIPC or sham intervention before surgery.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, extracted data and checked them for accuracy. We calculated mean differences (MDs), standardised mean differences (SMDs) and risk ratios (RR) using a random-effects model. We assessed quality of the trial evidence for all primary outcomes using the GRADE methodology. We completed a ’Risk of bias’ assessment for all studies and performed sensitivity analysis by excluding studies judged at high or unclear risk of bias for sequence generation, allocation concealment and incomplete outcome data. We contacted authors for missing data. Our primary endpoints were 1) composite endpoint (including all-cause mortality, non-fatal myocardial infarction or any new stroke, or both) assessed at 30 days after surgery, 2) cardiac troponin T (cTnT, ng/L) at 48 hours and 72 hours, and as area under the curve (AUC) 72 hours (µg/L) after surgery, and 3) cardiac troponin I (cTnI, ng/L) at 48 hours, 72 hours, and as area under the curve (AUC) 72 hours (µg/L) after surgery.\nMain results\nWe included 29 studies involving 5392 participants (mean age = 64 years, age range 23 to 86 years, 82% male). However, few studies contributed data to meta-analyses due to inconsistency in outcome definition and reporting. In general, risk of bias varied from low to high risk of bias across included studies, and insufficient detail was provided to inform judgement in several cases. The quality of the evidence of key outcomes ranged from moderate to low quality due to the presence of moderate or high statistical heterogeneity, imprecision of results or due to limitations in the design of individual studies.\nCompared with no RIPC, we found that RIPC has no treatment effect on the rate of the composite endpoint with RR 0.99 (95% confidence interval (CI) 0.78 to 1.25); 2 studies; 2463 participants; moderate-quality evidence. Participants randomised to RIPC showed an equivalent or better effect regarding the amount of cTnT release measured at 72 hours after surgery with SMD -0.32 (95% CI -0.65 to 0.00); 3 studies; 1120 participants; moderate-quality evidence; and expressed as AUC 72 hours with SMD -0.49 (95% CI -0.96 to -0.02); 3 studies; 830 participants; moderate-quality evidence. 
We found the same result in favour of RIPC for the cTnI release measured at 48 hours with SMD -0.21 (95% CI -0.40 to -0.02); 5 studies; 745 participants; moderate-quality evidence; and measured at 72 hours after surgery with SMD -0.37 (95% CI -0.59 to -0.15); 2 studies; 459 participants; moderate-quality evidence. All other primary outcomes showed no differences between groups (cTnT release measured at 48 hours with SMD -0.14, 95% CI -0.33 to 0.06; 4 studies; 1792 participants; low-quality evidence and cTnI release measured as AUC 72 hours with SMD -0.17, 95% CI -0.48 to 0.14; 2 studies; 159 participants; moderate-quality evidence).\nWe also found no differences between groups for all-cause mortality after 30 days, non-fatal myocardial infarction after 30 days, any new stroke after 30 days, acute renal failure after 30 days, length of stay on the intensive care unit (days), any complications and adverse effects related to ischaemic preconditioning. We did not assess many patient-centred/salutogenic-focused outcomes.\nAuthors' conclusions\nWe found no evidence that RIPC has a treatment effect on clinical outcomes (measured as a composite endpoint including all-cause mortality, non-fatal myocardial infarction or any new stroke, or both, assessed at 30 days after surgery). There is moderate-quality evidence that RIPC has no treatment effect on the rate of the composite endpoint including all-cause mortality, non-fatal myocardial infarction or any new stroke assessed at 30 days after surgery, or both. We found moderate-quality evidence that RIPC reduces the cTnT release measured at 72 hours after surgery and expressed as AUC (72 hours). There is moderate-quality evidence that RIPC reduces the amount of cTnI release measured at 48 hours, and measured 72 hours after surgery. Adequately-designed studies, especially focusing on influencing factors, e.g. with regard to anaesthetic management, are encouraged and should systematically analyse the commonly used medications of people with cardiovascular diseases.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for randomised trials in which people scheduled for CABG (with or without valve surgery) were randomly assigned to receive RIPC or sham intervention before surgery. The evidence is current to May 2016. We did not identify any source of bias related to the funding of included studies."
}
] |
query-laysum
|
5990
|
[
{
"role": "user",
"content": "Abstract: Background\nThe sedation needs of critically ill patients have been recognized as a core component of critical care that is vital to assist recovery and ensure humane treatment. Evidence suggests that sedation requirements are not always optimally managed. Suboptimal sedation, both under- and over-sedation, have been linked to short-term (e.g. length of stay) and long-term (e.g. psychological recovery) outcomes. Strategies to improve sedation assessment and management have been proposed. This review was originally published in 2015 and updated in 2018.\nObjectives\nTo assess the effects of protocol-directed sedation management compared to usual care on the duration of mechanical ventilation, intensive care unit (ICU) and hospital mortality and other patient outcomes in mechanically ventilated ICU adults and children.\nSearch methods\nWe used the standard search strategy of the Cochrane Anaesthesia, Critical and Emergency Care Group (ACE). We searched the Cochrane Central Register of Controlled trials (CENTRAL) (December 2017), MEDLINE (OvidSP) (2013 to December 2017), Embase (OvidSP) (2013 to December 2017), CINAHL (BIREME host) (2013 to December 2017), LILACS (2013 to December 2017), trial registries and reference lists of articles. (The original search was run in November 2013).\nSelection criteria\nWe included randomized controlled trials (RCTs) and quasi-randomized controlled trials conducted in ICUs comparing management with and without protocol-directed sedation in intensive care adults and children.\nData collection and analysis\nTwo authors screened the titles and abstracts and then full-text reports identified from our electronic search. We assessed seven domains of potential risk of bias for the included studies. We examined clinical, methodological and statistical heterogeneity and used the random-effects model for meta-analysis where we considered it appropriate. We calculated the mean difference (MD) for duration of mechanical ventilation and risk ratio (RR) for mortality across studies, with 95% confidence intervals (CIs).\nMain results\nWe included four studies with a total of 3323 participants (864 adults and 2459 paediatrics) in this update. Three studies were single-centre, patient-level RCTs and one study was a multicentre cluster-RCT. The settings were in metropolitan centres and included general, mixed medical-surgical, medical only and a range of paediatric units. All four included studies compared the use of protocol-directed sedation, specifically protocols delivered by nurses, with usual care. We rated the risk of selection bias due to random sequence generation low for two studies and unclear for two studies. The risk of bias was highly variable across the domains and studies, with the risk of selection and performance bias generally rated high and the risk of detection and attrition bias generally rated low.\nWhen comparing protocol-directed sedation with usual care, there was no clear evidence of difference in duration of mechanical ventilation in hours for the entire duration of the first ICU stay for each patient (MD -28.15 hours, 95% CI -69.15 to 12.84; I2 = 85%; 4 studies; adjusted sample 2210 participants; low-quality evidence). There was no clear evidence of difference in ICU mortality (RR 0.77, 95% CI 0.39 to 1.50; I2 = 67%; 2 studies; 513 participants; low-quality evidence), or hospital mortality (RR 0.90, 95% CI 0.72 to 1.13; I2 = 10%; 3 studies; adjusted sample 2088 participants; low-quality evidence). 
There was no clear evidence of difference in ICU length of stay (MD -1.70 days, 95% CI -3.71 to 0.31; I2 = 82%; 4 studies; adjusted sample of 2123 participants; low-quality evidence); however, there was evidence of a significant reduction in hospital length of stay (MD -3.09 days, 95% CI -5.08 to -1.10; I2 = 2%; 3 studies; adjusted sample of 1922 participants; moderate-quality evidence). There was no clear evidence of difference in the incidence of self-extubation (RR 0.88, 95% CI 0.55 to 1.42; I2 = 0%; 2 studies; adjusted sample of 1687 participants; high-quality evidence), or incidence of tracheostomy (RR 0.67, 95% CI 0.35 to 1.30; I2 = 66%; 3 studies; adjusted sample of 2008 participants; low-quality evidence). Only one study examined incidence of reintubation, so we could not pool data; there was no clear evidence of difference (RR 0.65, 95% CI 0.35 to 1.24; 1 study; 321 participants; low-quality evidence).\nAuthors' conclusions\nThere is currently limited evidence from RCTs evaluating the effectiveness of protocol-directed sedation on patient outcomes. The four included RCTs reported conflicting results, and heterogeneity limited the interpretation of results for the primary outcomes of duration of mechanical ventilation and mortality. Further studies, taking into account differing contextual characteristics, are necessary to inform future practice. Methodological strategies to reduce the risk of bias need to be considered in future studies.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence available to answer our review question is low to moderate. This is mainly due to the often conflicting results that were reported from the four eligible studies. Further studies need to be conducted to determine the effectiveness of this intervention."
}
] |
query-laysum
|
5991
|
[
{
"role": "user",
"content": "Abstract: Background\nThe ideal intravenous fluid for kidney transplantation has not been defined, despite the common use of normal saline during the peri-operative period. The high chloride content of normal saline is associated with an increased risk of hyperchloraemic metabolic acidosis, which may in turn increase the risk of hyperkalaemia and delayed graft function. Balanced electrolyte solutions have a lower chloride content which may decrease this risk and avoid the need for dialysis due to hyperkalaemia in the immediate post-transplant period. Randomised controlled trials (RCTs) addressing this issue have used biochemical outcomes to compare fluids and have been underpowered to address patient-centred outcomes such as delayed graft function.\nObjectives\nTo examine the effect of lower-chloride solutions versus normal saline on delayed graft function, hyperkalaemia and acid-base status in kidney transplant recipients.\nSearch methods\nWe searched the Cochrane Kidney and Transplant's Specialised Register to 26 November 2015 through contact with the Information Specialist using search terms relevant to this review.\nSelection criteria\nRCTs of kidney transplant recipients that compared peri-operative intravenous lower-chloride solutions to normal saline were included.\nData collection and analysis\nTwo independent investigators assessed studies for eligibility and risk of bias. Data from individual studies were extracted using standardised forms and pooled according to a published protocol. Summary estimates of effect were obtained using a random-effects model, and results were expressed as risk ratios (RR) and their 95% confidence intervals (CI) for dichotomous outcomes, and mean difference (MD) and 95% CI for continuous outcomes.\nMain results\nSix studies (477 participants) were included in the review. All participants were adult kidney transplant recipients and 70% of participants underwent live-donor kidney transplantation. The overall risk of bias was low for selection bias and unclear for remaining domains. There was no difference in the risk of delayed graft function (3 studies, 298 participants: RR 1.03, 95% CI 0.62 to 1.70) or hyperkalaemia (2 studies, 199 participants: RR 0.48, 95% CI 0.04 to 6.10) for participants who received balanced electrolyte solutions compared to normal saline. Intraoperative balanced electrolyte solutions compared to normal saline were associated with higher blood pH (3 studies, 193 participants: MD 0.07, 95% CI 0.05 to 0.09), higher serum bicarbonate (3 studies, 215 participants: MD 3.02 mEq/L, 95% CI 2.00 to 4.05) and lower serum chloride (3 studies, 215 participants: MD -9.93 mmol/L, 95% CI -19.96 to 0.11). There were four cases of graft loss in the normal saline group and one in the balanced electrolyte solution group, and four cases of acute rejection in the normal saline group compared to two cases in the balanced electrolyte solution group.\nAuthors' conclusions\nBalanced electrolyte solutions are associated with less hyperchloraemic metabolic acidosis compared to normal saline, however it remains uncertain whether lower-chloride solutions lead to improved graft outcomes compared to normal saline.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found six studies that included 477 kidney transplant patients. The majority of these patients had a kidney transplant from a living donor. The overall quality of the studies was low to average, and the main problem was the small number of studies and the small size of the studies. There was no information on funding source for most of the studies.\nCompared to normal saline, giving kidney transplant patients solutions that contain less chloride during their transplant operation resulted in lower blood acid levels but did not affect how the transplant kidney worked after surgery, or the number of patients who had high blood potassium levels. Harmful effects were not reported in many studies. In the group of patients who were given lower-chloride fluids, the transplant failed in one patient and one patient rejected the transplant. In the group of patients who were given normal saline, the transplant failed in four patients, and two patients rejected the transplant. However, this is probably an incomplete picture of harmful effects."
}
] |
query-laysum
|
5992
|
[
{
"role": "user",
"content": "Abstract: Background\nA blood transfusion is an acute intervention, implemented to solve life and health-threatening conditions on a short-term basis. However, blood transfusions have adverse events, some of them potentially related to immune modulation or to a direct transmission of infectious agents (e.g. cytomegalovirus). Leukoreduction is a process in which the white blood cells are intentionally reduced in packed red blood cells (PRBCs) in order to reduce the risk of adverse reactions. The potential benefits of leukoreduced PRBCs in all types of transfused patients for decreasing infectious and non-infectious complications remain unclear.\nObjectives\nTo determine the clinical effectiveness of leukoreduction of packed red blood cells for preventing adverse reactions following allogeneic blood transfusion.\nSearch methods\nWe ran the most recent search on 10th November 2015. We searched the Cochrane Injuries Group's Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL, the Cochrane Library), MEDLINE (OvidSP), Embase(OvidSP), CINAHL Plus (EBSCO), LILACS (BIREME), and clinical trials registers. In addition, we checked the reference lists of all relevant trials and reviews identified in the literature searches.\nSelection criteria\nRandomised clinical trials including patients of all ages requiring PRBC allogeneic transfusion. Any study was eligible for inclusion, regardless of the length of participant follow-up or country where the study was performed. The primary outcome was transfusion-related acute lung injury (TRALI). Secondary outcomes were death from any cause, infection from any cause, non-infectious complications and any other adverse event.\nData collection and analysis\nAt least two review authors independently performed study selection, 'Risk of bias' assessments and data extraction. We estimated pooled relative risk for dichotomous outcomes, and we measured statistical heterogeneity using I² statistic. The random-effects model was used to synthesise results. We conducted a trial sequential analysis to assess the risk of random errors in cumulative meta-analyses.\nMain results\nThirteen studies, most including adult patients, met the eligibility criteria. We found no clear evidence of an effect of leukoreduced PRBC versus non-leukoreduced PRBC in patients that were randomised to receive transfusion for the following outcomes:\nTRALI: RR 0.96, 95% CI 0.67 to 1.36, P = 0.80 from one trial reporting data on 1864 trauma patients. The accrued information of 1864 participants constituted only 28.5% of the diversity-adjusted required information size (DARIS) of 6548 participants. The quality of evidence was low.\nDeath from any cause: RR 0.81, 95% CI 0.58 to 1.12, I² statistic = 63%, P = 0.20 from nine trials reporting data on 6485 cardiovascular surgical patients, gastro-oncology surgical patients, trauma patients and HIV infected patients. The accrued information of 6485 participants constituted only 55.3% of the DARIS of 11,735 participants. The quality of evidence was very low.\nInfection from any cause: RR 0.80, 95% CI 0.62 to 1.03, I² statistic = 84%, P = 0.08 from 10 trials reporting data on 6709 cardiovascular surgical patients, gastro-oncology surgical patients, trauma patients and HIV infected patients. The accrued information of 6709 participants constituted only 60.6% of the DARIS of 11,062 participants. 
The quality of evidence was very low.\nAdverse events: The only adverse event reported was fever (RR 0.81, 95% CI 0.64 to 1.02; I² statistic = 0%, P = 0.07). Fever was reported in two trials on 634 cardiovascular surgical and gastro-oncology surgical patients. The accrued information of 634 participants constituted only 84.4% of the DARIS of 751 participants. The quality of evidence was low.\nIncidence of other non-infectious complications: This outcome was not assessed in any included trial.\nAuthors' conclusions\nThere is no clear evidence for supporting or rejecting the routine use of leukoreduction in all patients requiring PRBC transfusion for preventing TRALI, death, infection, non-infectious complications and other adverse events. As the quality of evidence is very low to low, more evidence is needed before a definitive conclusion can be drawn.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical journals for reports of clinical trials which examined the effects of removing white blood cells from donated blood. We were interested in finding out whether the removal of white blood cells from donated blood resulted in patients receiving a blood transfusion having few complications such as transfusion-related acute lung injury, death, infectious and non-infectious complications, or any other adverse event. We included people of any age or sex, who received a blood transfusion for any reason. The evidence is based on studies retrieved up to 05 December 2014."
}
] |
query-laysum
|
5993
|
[
{
"role": "user",
"content": "Abstract: Background\nSuperior oblique palsy is a common cause of vertical strabismus in adults and children. Patients may be symptomatic from binocular vertical diplopia or compensatory head tilt required to maintain single vision. Most patients who are symptomatic elect to undergo strabismus surgery, but the optimal surgical treatment for vertical strabismus in people with superior oblique palsy is unknown.\nObjectives\nTo assess the relative effects of surgical treatments compared with another surgical intervention, non-surgical intervention, or observation for vertical strabismus in people with superior oblique palsy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2016, Issue 12), MEDLINE Ovid (1946 to 13 December 2016), Embase Ovid (1947 to 13 December 2016), Latin American and Caribbean Health Sciences Literature Database (LILACS) (1982 to 13 December 2016), the ISRCTN registry (www.isrctn.com/editAdvancedSearch); searched 13 December 2016, ClinicalTrials.gov (www.clinicaltrials.gov); searched 13 December 2016, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en); searched 13 December 2016. We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe included randomized trials that compared at least one type of surgical intervention to another surgical or non-surgical intervention or observation.\nData collection and analysis\nTwo review authors independently completed eligibility screening, data abstraction, 'Risk of bias' assessment, and grading of the evidence.\nMain results\nWe identified two randomized trials comparing four different surgical treatments for this condition, two methods in each trial. The studies included a total of 45 children and adults. The surgical treatments were all procedures to weaken the ipsilateral inferior oblique muscle. One study compared inferior oblique myectomy to recession of 10 mm; the other study compared inferior oblique disinsertion to anterior transposition (2 mm anterior to the temporal border of the inferior rectus insertion).\nWe judged both studies to be at unclear risk of bias due to incomplete reporting of methods and other methodological deficiencies.\nNeither study reported data on the primary outcome of this review, which was the proportion of participants with postoperative surgical success, defined as hypertropia less than 3 prism diopters (PD) in primary gaze. However, both studies reported the average reduction in hypertropia in primary gaze. One study found that at 12 months' postoperatively the average decrease in hypertropia was higher in participants who underwent inferior oblique myectomy than in those who underwent recession, however data were not available for statistical comparison. The other trial found that after at least six months of follow-up, the mean decrease in primary position hypertropia was lower in participants who underwent inferior oblique disinsertion than in those who underwent anterior transposition (mean difference (MD) -5.20 PD, 95% confidence interval (CI) -7.76 to -2.64; moderate-quality evidence).\nBoth trials also reported the average postoperative reduction in vertical deviation in adduction. 
One study reported that the average reduction in hypertropia in adduction was greater in participants who underwent inferior oblique myectomy than in those who underwent recession, but data were not available for statistical comparison. The other study found a lower decrease in hypertropia in contralateral gaze in participants who underwent inferior oblique disinsertion than in those who underwent anterior transposition (MD -7.10 PD, 95% CI -13.85 to -0.35; moderate-quality evidence).\nSecondary outcomes with sufficient data for analysis included proportion of participants with preoperative head tilt that resolved postoperatively and proportion of participants who underwent a second surgery. These outcomes were assessed in the trial comparing inferior oblique anterior transposition to disinsertion; both outcomes favored anterior transposition (risk ratio 7.00, 95% CI 0.40 to 121.39 for both outcomes; very low-quality evidence). None of the participants who underwent inferior oblique anterior transposition or disinsertion developed postoperative hypotropia or reversal of the vertical deviation. All participants who underwent inferior oblique anterior transposition developed elevation deficiency, which the authors deemed to be clinically insignificant in all cases, whereas no participants who underwent inferior oblique disinsertion experienced this complication. Additionally, the trial comparing inferior oblique myectomy to recession reported that no participant in either group required another strabismus surgery during the postoperative period.\nAuthors' conclusions\nThe two trials included in this review evaluated four inferior oblique weakening procedures for surgical treatment of superior oblique palsy. We found no trials comparing other types of surgical procedures for this disorder. Both studies had enrolled a small number of participants and provided low-quality evidence due to limitations in completeness and applicability. We therefore found no high-quality evidence to support recommendations for optimal surgical treatment of superior oblique palsy. Rigorously designed, conducted, and reported randomized trials are needed to identify the optimal surgical treatment for vertical strabismus in this disorder.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Each of the two included trials compared two different surgical procedures to weaken the inferior oblique muscle, and thus balance the weakness in the superior oblique muscle. A total of four different inferior oblique muscle-weakening surgeries were studied: myectomy (removing part of the muscle), recession (moving the muscle to a position where it exerts less force), anterior transposition (moving the muscle to a position where the direction of force is altered), and disinsertion (detaching the muscle from the sclera).\nNeither of the trials examined the main outcome we wished to study, that is the proportion of participants deemed to have successful eye realignment after surgery. Additionally, we judged the quality of the data in both studies to be low."
}
] |
query-laysum
|
5994
|
[
{
"role": "user",
"content": "Abstract: Background\nFor people with physical, sensory and cognitive limitations due to stroke, the routine practice of oral health care (OHC) may become a challenge. Evidence-based supported oral care intervention is essential for this patient group.\nObjectives\nTo compare the effectiveness of OHC interventions with usual care or other treatment options for ensuring oral health in people after a stroke.\nSearch methods\nWe searched the Cochrane Stroke Group and Cochrane Oral Health Group trials registers, CENTRAL, MEDLINE, Embase, and six other databases in February 2019. We scanned reference lists from relevant papers and contacted authors and researchers in the field. We handsearched the reference lists of relevant articles and contacted other researchers. There were no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated one or more interventions designed to improve the cleanliness and health of the mouth, tongue and teeth in people with a stroke who received assisted OHC led by healthcare staff. We included trials with a mixed population provided we could extract the stroke-specific data. The primary outcomes were dental plaque or denture plaque. Secondary outcomes included presence of oral disease, presence of related infection and oral opportunistic pathogens related to OHC and pneumonia, stroke survivor and providers' knowledge and attitudes to OHC, and patient satisfaction and quality of life.\nData collection and analysis\nTwo review authors independently screened abstracts and full-text articles according to prespecified selection criteria, extracted data and assessed the methodological quality using the Cochrane 'Risk of bias' tool. We sought clarification from investigators when required. Where suitable statistical data were available, we combined the selected outcome data in pooled meta-analyses. We used GRADE to assess the quality of evidence for each outcome.\nMain results\nFifteen RCTs (22 randomised comparisons) involving 3631 participants with data for 1546 people with stroke met the selection criteria.\nOHC interventions compared with usual care\nSeven trials (2865 participants, with data for 903 participants with stroke, 1028 healthcare providers, 94 informal carers) investigated OHC interventions compared with usual care.\nMulti-component OHC interventions showed no evidence of a difference in the mean score (DMS) of dental plaque one month after the intervention was delivered (DMS –0.66, 95% CI –1.40 to 0.09; 2 trials, 83 participants; I2 = 83%; P = 0.08; very low-quality evidence).\nStroke survivors had less plaque on their dentures when staff had access to the multi-component OHC intervention (DMS –1.31, 95% CI –1.96 to –0.66; 1 trial, 38 participants; P < 0.0001; low-quality evidence).\nThere was no evidence of a difference in gingivitis (DMS –0.60, 95% CI –1.66 to 0.45; 2 trials, 83 participants; I2 = 93%; P = 0.26: very low-quality evidence) or denture-induced stomatitis (DMS –0.33, 95% CI –0.92 to 0.26; 1 trial, 38 participants; P = 0.69; low-quality evidence) among participants receiving the multi-component OHC protocol compared with usual care one month after the intervention. 
There was no difference in the incidence of pneumonia in participants receiving a multi-component OHC intervention (99 participants; 5 incidents of pneumonia) compared with those receiving usual care (105 participants; 1 incident of pneumonia) (OR 4.17, 95% CI 0.82 to 21.11; 1 trial, 204 participants; P = 0.08; low-quality evidence).\nOHC training for stroke survivors and healthcare providers significantly improved their OHC knowledge at one month after training (SMD 0.70, 95% CI 0.06 to 1.35; 3 trials, 728 participants; I2 = 94%; P = 0.03; very low-quality evidence). Pooled data one month after training also showed evidence of a difference in stroke survivors' and providers' oral health attitudes (SMD 0.28, 95% CI 0.01 to 0.54; 3 trials, 728 participants; I2 = 65%; P = 0.06; very low-quality evidence).\nOHC interventions compared with placebo\nThree trials (394 participants, with data for 271 participants with stroke) compared an OHC intervention with placebo. There were no data for primary outcomes. There was no evidence of a difference in the incidence of pneumonia in participants receiving an OHC intervention compared with placebo (OR 0.39, 95% CI 0.14 to 1.09; 2 trials, 242 participants; I2 = 42%; P = 0.07; low-quality evidence). However, decontamination gel reduced the incidence of pneumonia among the intervention group compared with the placebo gel group (OR 0.20, 95% CI 0.05 to 0.84; 1 trial, 203 participants; P = 0.028). There was no difference in the incidence of pneumonia in participants treated with povidone-iodine compared with a placebo (OR 0.81, 95% CI 0.18 to 3.51; 1 trial, 39 participants; P = 0.77).\nOne OHC intervention compared with another OHC intervention\nTwelve trials (372 participants with stroke) compared one OHC intervention with another OHC intervention. There was no difference in dental plaque scores in participants who received an enhanced multi-component OHC intervention compared with conventional OHC interventions at three months (MD –0.04, 95% CI –0.33 to 0.25; 1 trial, 61 participants; P = 0.78; low-quality evidence). There were no data for denture plaque.\nAuthors' conclusions\nWe found low- to very low-quality evidence suggesting that OHC interventions can improve the cleanliness of patients' dentures and stroke survivors' and providers' knowledge and attitudes. There is limited low-quality evidence that selective decontamination gel may be more beneficial than placebo at reducing the incidence of pneumonia. Improvements in the cleanliness of a patient's own teeth were limited. We judged the quality of the evidence included within meta-analyses to be low or very low quality, and this limits our confidence in the results. We still lack high-quality evidence of the optimal approach to providing OHC to people after stroke.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 15 studies (22 comparisons) involving 1546 people with stroke, 1028 staff and 94 carers in this updated review. Seven trials compared OHC with usual care; three trials compared OHC with placebo (pretend treatment or usual care), and 12 trials compared two different types of OHC."
}
] |
query-laysum
|
5995
|
[
{
"role": "user",
"content": "Abstract: Background\nAdult smoking usually has its roots in adolescence. If individuals do not take up smoking during this period it is unlikely that they ever will. Further, once smoking becomes established, cessation is challenging; the probability of subsequently quitting is inversely proportional to the age of initiation. One novel approach to reducing the prevalence of youth smoking is the use of incentives.\nObjectives\nTo assess the effect of incentives on preventing children and adolescents (aged 5 to 18 years) from starting to smoke. It was also our intention to assess, where possible, the dose-response of incentives, the costs of incentive programmes, whether incentives are more or less effective in combination with other interventions to prevent smoking initiation, and any unintended consequences arising from the use of incentives.\nSearch methods\nFor the original review (published 2012) we searched the Cochrane Tobacco Addiction Group Specialized Register, with additional searches of MEDLINE, Embase, CINAHL, CSA databases and PsycINFO for terms relating to incentives, in combination with terms for smoking and tobacco use, and children and adolescents. The most recent searches were of the Cochrane Tobacco Addiction Group Specialized Register, and were carried out in December 2016.\nSelection criteria\nWe considered randomized controlled trials (RCTs) allocating children and adolescents (aged 5 to 18 years) as individuals, groups or communities to intervention or control conditions, where the intervention included an incentive aimed at preventing smoking uptake. We also considered controlled trials (CTs) with baseline measures and post-intervention outcomes.\nData collection and analysis\nTwo review authors extracted and independently assessed the data. The primary outcome was the smoking status of children or adolescents at follow-up who reported no smoking at baseline. We required a minimum follow-up of six months from baseline and assessed each included study for risks of bias. We used the most rigorous definition of abstinence in each trial; we did not require biochemical validation of self-reported tobacco use for study inclusion. Where possible we combined eligible studies to calculate pooled estimates at the longest follow-up, using the Mantel-Haenszel fixed-effect method, grouping studies by study design.\nMain results\nWe identified three eligible RCTs and five CTs, including participants aged 11 to 14 years, who were non-smokers at baseline. Of the eight trials identified, six had analyzable data relevant for this review, which contributed to meta-analyses (7275 participants in total: 4003 intervention; 3272 control; 2484 participants after adjusting for clustering). All except one of the studies tested the 'Smokefree Class Competition' (SFC), which has been widely implemented throughout Europe. In this competition, classes with youth generally between the ages of 11 and 14 years commit to being smoke-free for a six-month period, and report their smoking status regularly. If 90% or more of the class are non-smokers at the end of the six months, the class goes into a competition to win prizes. The one study that was not a trial of the SFC was a controlled trial in which schools in two communities were assigned to the intervention, with schools in a third community acting as controls. 
Students in the intervention community with lower smoking rates at the end of the project (one school year) received rewards.\nMost studies resulted in statistically non-significant results. Only one study of the SFC reported a significant effect of the competition on the prevention of smoking at the longest follow-up. However, this study was at risk of multiple biases, and when we calculated the adjusted risk ratio (RR) we no longer detected a statistically significant difference. The pooled RR for the more robust RCTs (3 studies, n = 3056 participants/1107 adjusted for clustering) suggests that there is no statistically significant effect of incentives, in the form of the SFC, to prevent smoking initiation among children and adolescents in the long term (RR 1.00, 95% confidence interval (CI) 0.84 to 1.19). Pooled results from the non-randomized trials also did not detect a significant effect of the SFC, and we were unable to extract data on our outcome of interest from the one trial that did not study the SFC. There is little robust evidence to suggest that unintended consequences (such as making false claims about their smoking status and bullying of smoking students) are consistently associated with such interventions, although this has not been the focus of much research. There was insufficient information to assess the dose-response relationship or to report costs of incentives for preventing smoking uptake.\nWe judged the included RCTs to be at unclear risk of bias, and the non-RCTs to be at high risk of bias. Using GRADE, we rated the overall quality of the evidence for our primary outcome as 'low' (for RCTs) and 'very low' (for non-RCTs), because of imprecision (all studies had wide confidence intervals), and for the risks of bias identified. We further downgraded the non-RCT evidence, due to issues with the non-RCT study design, likely to introduce further bias.\nAuthors' conclusions\nThe very limited evidence currently available suggests that incentive programmes do not prevent smoking initiation among youth. However, there are relatively few published studies and these are of variable quality. In addition, trials included in the meta-analyses were all studies of the SFC, which distributed small to moderately-sized prizes to whole classes, usually through a lottery system. It is therefore possible that other incentive programmes could be more successful at preventing smoking uptake in young people.\nFuture studies might investigate the efficacy of a wider range of incentives, including those given to individual participants to prevent smoking uptake, whilst considering both the effect of incentives on smoking initiation and the progression to smoking. It would be useful if incentives were evaluated in varying populations from different socioeconomic and ethnic backgrounds, and if intervention components were described in detail.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This is an update of a previous review. The first version was published in 2012 and included seven studies. For this update we searched for new studies in December 2016 and found one. This review now includes eight trials. Seven of these were trials of the 'Smokefree Class Competition' (SFC), which has been widely used throughout Europe. In this competition, classes (generally between the ages of 11 and 14 years) promise to be smoke-free for a six-month period. They report regularly on their smoking status, and if 90% or more of the class are non-smokers at the end of the six months, the class goes into a competition to win prizes. In the one trial that did not test the SFC, classes with the smallest percentage of students smoking at the school year's end were given rewards."
}
] |
query-laysum
|
5996
|
[
{
"role": "user",
"content": "Abstract: Background\nSocial networking sites (SNSs) have great potential as a platform for public health interventions to address the unmet need for contraception.\nObjectives\nTo evaluate the effectiveness of interventions using SNSs to promote the uptake of and adherence to contraception in reproductive-age women.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and six other databases on January 2018. We also searched Google Scholar, key conference proceedings, checked the reference lists of included studies, and contacted study authors to identify additional studies.\nSelection criteria\nWe considered randomised controlled trials (RCTs) and non-randomised interventional studies (NRS) in women of reproductive age. SNSs requiring a social profile within a bounded or restricted-access system of shared connections were included. We also included trials that utilised SNSs only or as an adjunct to an intervention. Studies had to have a follow-up outcome assessment of at least three months.\nData collection and analysis\nTwo authors independently screened titles, abstracts and full-text studies, and extracted data from included studies. A third author was assigned to arbitrate areas of disagreement. Authors assessed risk of bias according to the Cochrane Handbook for Systematic Reviews of Interventions. We were unable to conduct a meta-analysis due to the heterogeneity of the study designs and outcome measures.\nMain results\nOf the 461 unique records found, only two studies met our inclusion criteria. Both studies were conducted in the USA and were at high risk of bias. One RCT included 2284 women exposed to a web-based SNS or nothing. The groups were no different post intervention in their self-reported consistency of contraceptive use or knowledge of the relative effectiveness of different methods. There was a small but significant increase in the use of more effective methods (long-acting reversible methods) at 12 months' follow-up.\nThe second study, a cluster RCT with 1578 women, used a closed Facebook page showing sexual health content compared to a modified Facebook news page that avoided sexual health content. They found no differences in the use of condoms at last act of sexual intercourse at six months or the intention to use condoms between the intervention and control groups.\nAuthors' conclusions\nDespite the prevalence of SNSs, we found little scientific evidence to support the use of SNSs to improve contraceptive use or adherence among women.\n\nGiven the provided abstract, please respond to the following query: \"Data collection and analysis\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Two authors independently screened titles, abstracts and full-text studies, and extracted data from included studies. A third author was assigned to resolve areas of disagreement."
}
] |
query-laysum
|
5997
|
[
{
"role": "user",
"content": "Abstract: Background\nMost cancer survivors receive follow-up care after completion of treatment with the primary aim of detecting recurrence. Traditional follow-up consisting of fixed visits to a cancer specialist for examinations and tests are expensive and may be burdensome for the patient. Follow-up strategies involving non-specialist care providers, different intensity of procedures, or addition of survivorship care packages have been developed and tested, however their effectiveness remains unclear.\nObjectives\nThe objective of this review is to compare the effect of different follow-up strategies in adult cancer survivors, following completion of primary cancer treatment, on the primary outcomes of overall survival and time to detection of recurrence. Secondary outcomes are health-related quality of life, anxiety (including fear of recurrence), depression and cost.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, four other databases and two trials registries on 11 December 2018 together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nWe included all randomised trials comparing different follow-up strategies for adult cancer survivors following completion of curatively-intended primary cancer treatment, which included at least one of the outcomes listed above. We compared the effectiveness of: 1) non-specialist-led follow-up (i.e. general practitioner (GP)-led, nurse-led, patient-initiated or shared care) versus specialist-led follow-up; 2) less intensive versus more intensive follow-up (based on clinical visits, examinations and diagnostic procedures) and 3) follow-up integrating additional care components relevant for detection of recurrence (e.g. patient symptom education or monitoring, or survivorship care plans) versus usual care.\nData collection and analysis\nWe used the standard methodological guidelines by Cochrane and Cochrane Effective Practice and Organisation of Care (EPOC). We assessed the certainty of the evidence using the GRADE approach. For each comparison, we present synthesised findings for overall survival and time to detection of recurrence as hazard ratios (HR) and for health-related quality of life, anxiety and depression as mean differences (MD), with 95% confidence intervals (CI). When meta-analysis was not possible, we reported the results from individual studies. For survival and recurrence, we used meta-regression analysis where possible to investigate whether the effects varied with regards to cancer site, publication year and study quality.\nMain results\nWe included 53 trials involving 20,832 participants across 12 cancer sites and 15 countries, mainly in Europe, North America and Australia. All the studies were carried out in either a hospital or general practice setting. Seventeen studies compared non-specialist-led follow-up with specialist-led follow-up, 24 studies compared intensity of follow-up and 12 studies compared patient symptom education or monitoring, or survivorship care plans with usual care. 
Risk of bias was generally low or unclear in most of the studies, with a higher risk of bias in the smaller trials.\nNon-specialist-led follow-up compared with specialist-led follow-up\nIt is uncertain how this strategy affects overall survival (HR 1.21, 95% CI 0.68 to 2.15; 2 studies; 603 participants), time to detection of recurrence (4 studies, 1691 participants) or cost (8 studies, 1756 participants) because the certainty of the evidence is very low.\nNon-specialist- versus specialist-led follow-up may make little or no difference to health-related quality of life at 12 months (MD 1.06, 95% CI −1.83 to 3.95; 4 studies; 605 participants; low-certainty evidence), and probably makes little or no difference to anxiety at 12 months (MD −0.03, 95% CI −0.73 to 0.67; 5 studies; 1266 participants; moderate-certainty evidence). We are more certain that it has little or no effect on depression at 12 months (MD 0.03, 95% CI −0.35 to 0.42; 5 studies; 1266 participants; high-certainty evidence).\nLess intensive follow-up compared with more intensive follow-up\nLess intensive versus more intensive follow-up may make little or no difference to overall survival (HR 1.05, 95% CI 0.96 to 1.14; 13 studies; 10,726 participants; low-certainty evidence) and probably increases time to detection of recurrence (HR 0.85, 95% CI 0.79 to 0.92; 12 studies; 11,276 participants; moderate-certainty evidence). Meta-regression analysis showed little or no difference in the intervention effects by cancer site, publication year or study quality.\nIt is uncertain whether this strategy has an effect on health-related quality of life (3 studies, 2742 participants), anxiety (1 study, 180 participants) or cost (6 studies, 1412 participants) because the certainty of evidence is very low. None of the studies reported on depression.\nFollow-up strategies integrating additional patient symptom education or monitoring, or survivorship care plans compared with usual care\nNone of the studies reported on overall survival or time to detection of recurrence.\nIt is uncertain whether this strategy makes a difference to health-related quality of life (12 studies, 2846 participants), anxiety (1 study, 470 participants), depression (8 studies, 2351 participants) or cost (1 study, 408 participants), as the certainty of evidence is very low.\nAuthors' conclusions\nEvidence regarding the effectiveness of the different follow-up strategies varies substantially. Less intensive follow-up may make little or no difference to overall survival but probably delays detection of recurrence. However, as we did not analyse the two outcomes together, we cannot make direct conclusions about the effect of interventions on survival after detection of recurrence. The effects of non-specialist-led follow-up on survival and detection of recurrence, and how intensity of follow-up affects health-related quality of life, anxiety and depression, are uncertain. There was little evidence for the effects of follow-up integrating additional patient symptom education/monitoring and survivorship care plans.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We analysed 53 studies, involving 20,832 participants with 12 types of cancer in 15 different countries, mainly in Europe, North America and Australia. All the studies were carried out in either a hospital or general practice setting."
}
] |
query-laysum
|
5998
|
[
{
"role": "user",
"content": "Abstract: Background\nBeta-blockers and inhibitors of the renin-angiotensin-aldosterone system improve survival and reduce morbidity in people with heart failure with reduced left ventricular ejection fraction (LVEF); a review of the evidence is required to determine whether these treatments are beneficial for people with heart failure with preserved ejection fraction (HFpEF).\nObjectives\nTo assess the effects of beta-blockers, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, angiotensin receptor neprilysin inhibitors, and mineralocorticoid receptor antagonists in people with HFpEF.\nSearch methods\nWe updated searches of CENTRAL, MEDLINE, Embase, and one clinical trial register on 14 May 2020 to identify eligible studies, with no language or date restrictions. We checked references from trial reports and review articles for additional studies.\nSelection criteria\nWe included randomised controlled trials with a parallel group design, enrolling adults with HFpEF, defined by LVEF greater than 40%.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 41 randomised controlled trials (231 reports), totalling 23,492 participants across all comparisons. The risk of bias was frequently unclear and only five studies had a low risk of bias in all domains.\nBeta-blockers (BBs)\nWe included 10 studies (3087 participants) investigating BBs. Five studies used a placebo comparator and in five the comparator was usual care. The mean age of participants ranged from 30 years to 81 years.\nA possible reduction in cardiovascular mortality was observed (risk ratio (RR) 0.78, 95% confidence interval (CI) 0.62 to 0.99; number needed to treat for an additional benefit (NNTB) 25; 1046 participants; three studies), however, the certainty of evidence was low. There may be little to no effect on all-cause mortality (RR 0.82, 95% CI 0.67 to 1.00; 1105 participants; four studies; low-certainty evidence). The effects on heart failure hospitalisation, hyperkalaemia, and quality of life remain uncertain.\nMineralocorticoid receptor antagonists (MRAs)\nWe included 13 studies (4459 participants) investigating MRA. Eight studies used a placebo comparator and in five the comparator was usual care. The mean age of participants ranged from 54.5 to 80 years.\nPooled analysis indicated that MRA treatment probably reduces heart failure hospitalisation (RR 0.82, 95% CI 0.69 to 0.98; NNTB = 41; 3714 participants; three studies; moderate-certainty evidence). However, MRA treatment probably has little or no effect on all-cause mortality (RR 0.91, 95% CI 0.78 to 1.06; 4207 participants; five studies; moderate-certainty evidence) and cardiovascular mortality (RR 0.90, 95% CI 0.74 to 1.11; 4070 participants; three studies; moderate-certainty evidence). MRA treatment may have little or no effect on quality of life measures (mean difference (MD) 0.84, 95% CI -2.30 to 3.98; 511 participants; three studies; low-certainty evidence). MRA treatment was associated with a higher risk of hyperkalaemia (RR 2.11, 95% CI 1.77 to 2.51; number needed to treat for an additional harmful outcome (NNTH) = 11; 4291 participants; six studies; high-certainty evidence).\nAngiotensin-converting enzyme inhibitors (ACEIs)\nWe included eight studies (2061 participants) investigating ACEIs. Three studies used a placebo comparator and in five the comparator was usual care. 
The mean age of participants ranged from 70 to 82 years.\nPooled analyses with moderate-certainty evidence suggest that ACEI treatment likely has little or no effect on cardiovascular mortality (RR 0.93, 95% CI 0.61 to 1.42; 945 participants; two studies), all-cause mortality (RR 1.04, 95% CI 0.75 to 1.45; 1187 participants; five studies) and heart failure hospitalisation (RR 0.86, 95% CI 0.64 to 1.15; 1019 participants; three studies), and may result in little or no effect on the quality of life (MD -0.09, 95% CI -3.66 to 3.48; 154 participants; two studies; low-certainty evidence). The effects on hyperkalaemia remain uncertain.\nAngiotensin receptor blockers (ARBs)\nEight studies (8755 participants) investigating ARBs were included. Five studies used a placebo comparator and in three the comparator was usual care. The mean age of participants ranged from 61 to 75 years.\nPooled analyses with high certainty of evidence suggest that ARB treatment has little or no effect on cardiovascular mortality (RR 1.02, 95% 0.90 to 1.14; 7254 participants; three studies), all-cause mortality (RR 1.01, 95% CI 0.92 to 1.11; 7964 participants; four studies), heart failure hospitalisation (RR 0.92, 95% CI 0.83 to 1.02; 7254 participants; three studies), and quality of life (MD 0.41, 95% CI -0.86 to 1.67; 3117 participants; three studies). ARB was associated with a higher risk of hyperkalaemia (RR 1.88, 95% CI 1.07 to 3.33; 7148 participants; two studies; high-certainty evidence).\nAngiotensin receptor neprilysin inhibitors (ARNIs)\nThree studies (7702 participants) investigating ARNIs were included. Two studies used ARBs as the comparator and one used standardised medical therapy, based on participants' established treatments at enrolment. The mean age of participants ranged from 71 to 73 years.\nResults suggest that ARNIs may have little or no effect on cardiovascular mortality (RR 0.96, 95% CI 0.79 to 1.15; 4796 participants; one study; moderate-certainty evidence), all-cause mortality (RR 0.97, 95% CI 0.84 to 1.11; 7663 participants; three studies; high-certainty evidence), or quality of life (high-certainty evidence). However, ARNI treatment may result in a slight reduction in heart failure hospitalisation, compared to usual care (RR 0.89, 95% CI 0.80 to 1.00; 7362 participants; two studies; moderate-certainty evidence). ARNI treatment was associated with a reduced risk of hyperkalaemia compared with valsartan (RR 0.88, 95% CI 0.77 to 1.01; 5054 participants; two studies; moderate-certainty evidence).\nAuthors' conclusions\nThere is evidence that MRA and ARNI treatment in HFpEF probably reduces heart failure hospitalisation but probably has little or no effect on cardiovascular mortality and quality of life. BB treatment may reduce the risk of cardiovascular mortality, however, further trials are needed. The current evidence for BBs, ACEIs, and ARBs is limited and does not support their use in HFpEF in the absence of an alternative indication. Although MRAs and ARNIs are probably effective at reducing the risk of heart failure hospitalisation, the treatment effect sizes are modest. There is a need for improved approaches to patient stratification to identify the subgroup of patients who are most likely to benefit from MRAs and ARNIs, as well as for an improved understanding of disease biology, and for new therapeutic approaches.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The certainty of evidence ranged from high to low across the outcomes and drug classes studied. With the exception of ARBs and ARNIs, there was a lack of large-scale trials in HFpEF for the interventions and outcomes tested."
}
] |
query-laysum
|
5999
|
[
{
"role": "user",
"content": "Abstract: Background\nIntussusception is a common abdominal emergency in children with significant morbidity. Prompt diagnosis and management reduces associated risks and the need for surgical intervention. Despite widespread agreement on the use of contrast enema as opposed to surgery for initial management in most cases, debate persists on the appropriate contrast medium, imaging modality, pharmacological adjuvant, and protocol for delayed repeat enema, and on the best approach for surgical management for intussusception in children.\nObjectives\nTo assess the safety and effectiveness of non-surgical and surgical approaches in the management of intussusception in children.\nSearch methods\nWe searched the following electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8) in the Cochrane Library; Ovid MEDLINE (1950 to September 2016); Ovid Embase (1974 to September 2016); Science Citation Index Expanded (via Web of Science) (1900 to September 2016); and BIOSIS Previews (1969 to September 2016).\nWe examined the reference lists of all eligible trials to identify additional studies. To locate unpublished studies, we contacted content experts, searched the World Health Organization International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov (September 2016), and explored proceedings from meetings of the British Association of Paedatric Surgeons (BAPS), the American Soceity of Pediatric Surgery, and the World Congress of Pediatric Surgery.\nSelection criteria\nWe included all randomised controlled trials comparing contrast media, imaging modalities, pharmacological adjuvants, protocols for delayed repeat enema, and/or surgical approaches for the management of intussusception in children. We applied no language, publication date, or publication status restrictions.\nData collection and analysis\nTwo review authors independently conducted study selection and data extraction and assessed risk of bias using a standardised form. We resolved disagreements by consensus with a third review author when necessary. We reported dichotomous outcomes as risk ratios (RRs) with 95% confidence intervals (CIs). We analysed data on an intention-to-treat basis and evaluated the overall quality of evidence supporting the outcomes by using GRADE criteria.\nMain results\nWe included six randomised controlled trials (RCTs) with a total of 822 participants. Two trials compared liquid enema reduction plus glucagon versus liquid enema alone. One trial compared liquid enema plus dexamethasone versus liquid enema alone. Another trial compared air enema plus dexamethasone versus air enema alone, and two trials compared use of liquid enema versus air enema.\n\nWe identified three ongoing trials.\nWe judged all included trials to be at risk of bias owing to omissions in reported methods. We judged five of six trials as having high risk of bias in at least one domain. Therefore, the quality of the evidence (GRADE) for outcomes was low. Interventions and data presentation varied greatly across trials; therefore meta-analysis was not possible for most review outcomes.\nEnema plus glucagon versus enema alone\nIt is uncertain whether use of glucagon improves the rate of successful reduction of intussusception when compared with enema alone (reported in two trials, 218 participants; RR 1.09, 95% CI 0.94 to 1.26;low quality of evidence). 
No trials in this comparison reported on the number of children with bowel perforation(s) or on the number of children with recurrent intussusception.\nEnema plus dexamethasone versus enema alone\nUse of the adjunct, dexamethasone, may be beneficial in reducing intussusception recurrence with liquid or air enema (two trials, 299 participants; RR 0.14, 95% CI 0.03 to 0.60; low quality of evidence). This equates to a number needed to treat for an additional beneficial outcome of 13 (95% CI 8 to 37). It is uncertain whether use of the adjunct, dexamethasone, improves the rate of successful reduction of intussusception when compared with enema alone (reported in two trials, 356 participants; RR 1.01, 95% CI 0.92 to 1.10; low quality of evidence).\nAir enema versus liquid enema\nAir enema may be more successful than liquid enema for reducing intussusception (two trials, 199 participants; RR 1.28, 95% CI 1.10 to 1.49; low quality of evidence). This equates to a number needed to treat for an additional beneficial outcome of 6 (95% CI 4 to 19). No trials in this comparison reported on the number of children with bowel perforation(s), the number of children with recurrent intussusception, any intraoperative complications (such as bowel perforation), or other adverse effects. Only one trial reported postoperative complications, but owing to the method of reporting used, a quantitative analysis was not possible. We identified no studies that exclusively evaluated surgical interventions for management of intussusception.\nAuthors' conclusions\nThis review identified a small number of trials that assessed a variety of interventions. All included trials provided evidence of low quality and were subject to serious concerns about imprecision, high risk of bias, or both. Air enema may be superior to liquid enema for successfully reducing intussusception in children; however, this finding is based on a few studies including small numbers of participants. Dexamethasone as an adjuvant may be more effective in reducing intussusception recurrence rates following air enema or liquid enema, but these results are also based on a few studies of small numbers of participants. This review highlights several points that need to be addressed in future studies, including reducing the risk of bias and including relevant outcomes. Specifically, surgical trials are lacking, and future research is needed to address this evidence gap.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Evidence is current to September 2016. We identified six randomised studies, with a total of 822 participants, that explored the management of intussusception in children and assessed different types of interventions. We also identified three ongoing trials."
}
] |
query-laysum
|
6000
|
[
{
"role": "user",
"content": "Abstract: Background\nIn 2011 the World Health Organization (WHO) recommended parenteral artesunate in preference to quinine as first-line treatment for people with severe malaria. Prior to this recommendation many countries, particularly in Africa, had begun to use artemether, an alternative artemisinin derivative. This Cochrane Review evaluates intramuscular artemether compared with both quinine and artesunate.\nObjectives\nTo assess the efficacy and safety of intramuscular artemether versus any other parenteral medication in the treatment of severe malaria in adults and children.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, CENTRAL (the Cochrane Library), MEDLINE, Embase, and LILACS, ISI Web of Science, conference proceedings, and reference lists of articles. We also searched the WHO International Clinical Trial Registry Platform, ClinicalTrials.gov, and the metaRegister of Controlled Trials (mRCT) for ongoing trials up to 7 September 2018. We checked the reference lists of all studies identified by the search. We examined references listed in review articles and previously compiled bibliographies to look for eligible studies.\nSelection criteria\nRandomized controlled trials (RCTs) comparing intramuscular artemether with intravenous/intramuscular quinine or artesunate for treating severe malaria.\nData collection and analysis\nThe primary outcome was all-cause death. Two review authors independently screened each article by title and abstract, and examined potentially relevant studies for inclusion using an eligibility form. Two review authors independently extracted data and assessed risk of bias of included studies. We summarized dichotomous outcomes using risk ratios (RRs) and continuous outcomes using mean differences (MDs), and have presented both measures with 95% confidence intervals (CIs). Where appropriate, we combined data in meta-analyses and used the GRADE approach to summarize the certainty of the evidence.\nMain results\nWe included 19 RCTs, enrolling 2874 adults and children with severe malaria, carried out in Africa (12 trials) and in Asia (7 trials).\nArtemether versus quinine\nFor children, there is probably little or no difference in the risk of death between intramuscular artemether and quinine (RR 0.97, 95% CI 0.77 to 1.21; 13 trials, 1659 participants, moderate-certainty evidence). Coma resolution time may be about five hours shorter with artemether (MD −5.45, 95% CI −7.90 to −3.00; six trials, 358 participants, low-certainty evidence). Artemether may make little difference to neurological sequelae (RR 0.84, 95% CI 0.66 to 1.07; seven trials, 968 participants, low-certainty evidence). 
Compared to quinine, artemether probably shortens the parasite clearance time by about nine hours (MD −9.03, 95% CI −11.43 to −6.63; seven trials, 420 participants, moderate-certainty evidence), and may shorten the fever clearance time by about three hours (MD −3.73, 95% CI −6.55 to −0.92; eight trials, 457 participants, low-certainty evidence).\nFor adults, treatment with intramuscular artemether probably results in fewer deaths than treatment with quinine (RR 0.59, 95% CI 0.42 to 0.83; four trials, 716 participants, moderate-certainty evidence).\nArtemether versus artesunate\nArtemether and artesunate have not been directly compared in randomized trials in children.\nFor adults, mortality is probably higher with intramuscular artemether (RR 1.80, 95% CI 1.09 to 2.97; two trials, 494 participants, moderate-certainty evidence).\nAuthors' conclusions\nArtemether appears to be more effective than quinine in children and adults. Artemether compared to artesunate has not been extensively studied, but in adults it appears inferior. These findings are consistent with the WHO recommendations that artesunate is the drug of choice, but artemether is acceptable when artesunate is not available.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Injection of artesunate is recommended by the World Health Organization (WHO) for treating adults and children that have severe malaria as studies have shown that it results in fewer deaths compared to quinine treatment.\nArtemether is an alternative artemisinin-based medicine but is only available as a pre-mixed oil-based solution for intramuscular injection. Artemether is now widely available and is used in many African countries, although it is not specifically recommended by the WHO. The aim of this review was to examine the effects of treating people that have severe malaria with artemether injected intramuscularly compared to treatment with other antimalarial medicines given intramuscularly or intravenously."
}
] |
query-laysum
|
6001
|
[
{
"role": "user",
"content": "Abstract: Background\nFebrile neutropenia (FN) and other infectious complications are some of the most serious treatment-related toxicities of chemotherapy for cancer, with a mortality rate of 2% to 21%. The two main types of prophylactic regimens are granulocyte (macrophage) colony-stimulating factors (G(M)-CSF) and antibiotics, frequently quinolones or cotrimoxazole. Current guidelines recommend the use of colony-stimulating factors when the risk of febrile neutropenia is above 20%, but they do not mention the use of antibiotics. However, both regimens have been shown to reduce the incidence of infections. Since no systematic review has compared the two regimens, a systematic review was undertaken.\nObjectives\nTo compare the efficacy and safety of G(M)-CSF compared to antibiotics in cancer patients receiving myelotoxic chemotherapy.\nSearch methods\nWe searched The Cochrane Library, MEDLINE, EMBASE, databases of ongoing trials, and conference proceedings of the American Society of Clinical Oncology and the American Society of Hematology (1980 to December 2015). We planned to include both full-text and abstract publications. Two review authors independently screened search results.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing prophylaxis with G(M)-CSF versus antibiotics for the prevention of infection in cancer patients of all ages receiving chemotherapy. All study arms had to receive identical chemotherapy regimes and other supportive care. We included full-text, abstracts, and unpublished data if sufficient information on study design, participant characteristics, interventions and outcomes was available. We excluded cross-over trials, quasi-randomised trials and post-hoc retrospective trials.\nData collection and analysis\nTwo review authors independently screened the results of the search strategies, extracted data, assessed risk of bias, and analysed data according to standard Cochrane methods. We did final interpretation together with an experienced clinician.\nMain results\nIn this updated review, we included no new randomised controlled trials. We included two trials in the review, one with 40 breast cancer patients receiving high-dose chemotherapy and G-CSF compared to antibiotics, a second one evaluating 155 patients with small-cell lung cancer receiving GM-CSF or antibiotics.\nWe judge the overall risk of bias as high in the G-CSF trial, as neither patients nor physicians were blinded and not all included patients were analysed as randomised (7 out of 40 patients). We considered the overall risk of bias in the GM-CSF to be moderate, because of the risk of performance bias (neither patients nor personnel were blinded), but low risk of selection and attrition bias.\nFor the trial comparing G-CSF to antibiotics, all cause mortality was not reported. There was no evidence of a difference for infection-related mortality, with zero events in each arm. Microbiologically or clinically documented infections, severe infections, quality of life, and adverse events were not reported. There was no evidence of a difference in frequency of febrile neutropenia (risk ratio (RR) 1.22; 95% confidence interval (CI) 0.53 to 2.84). 
The quality of the evidence for the two reported outcomes, infection-related mortality and frequency of febrile neutropenia, was very low, due to the low number of patients evaluated (high imprecision) and the high risk of bias.\nThere was no evidence of a difference in terms of median survival time in the trial comparing GM-CSF and antibiotics. Two-year survival times were 6% (0 to 12%) in both arms (high imprecision, low quality of evidence). There were four toxic deaths in the GM-CSF arm and three in the antibiotics arm (3.8%), without evidence of a difference (RR 1.32; 95% CI 0.30 to 5.69; P = 0.71; low quality of evidence). There were 28% grade III or IV infections in the GM-CSF arm and 18% in the antibiotics arm, without any evidence of a difference (RR 1.55; 95% CI 0.86 to 2.80; P = 0.15, low quality of evidence). There were 5 episodes out of 360 cycles of grade IV infections in the GM-CSF arm and 3 episodes out of 334 cycles in the cotrimoxazole arm (0.8%), with no evidence of a difference (RR 1.55; 95% CI 0.37 to 6.42; P = 0.55; low quality of evidence). There was no significant difference between the two arms for non-haematological toxicities like diarrhoea, stomatitis, infections, neurologic, respiratory, or cardiac adverse events. Grade III and IV thrombopenia occurred significantly more frequently in the GM-CSF arm (60.8%) compared to the antibiotics arm (28.9%); (RR 2.10; 95% CI 1.41 to 3.12; P = 0.0002; low quality of evidence). Neither infection-related mortality, incidence of febrile neutropenia, nor quality of life were reported in this trial.\nAuthors' conclusions\nAs we only found two small trials with 195 patients altogether, no conclusion for clinical practice is possible. More trials are necessary to assess the benefits and harms of G(M)-CSF compared to antibiotics for infection prevention in cancer patients receiving chemotherapy.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence for infection-related mortality and frequency of febrile neutropenia in the G-CSF trial was very low, because of the small number of patients that were evaluated, and the study design (high risk of bias). The trial that analysed GM-CSF versus antibiotics reported overall survival, toxic deaths, infections, severe infections, and adverse events. Because of the very small number of patients included, we judged that the overall quality for all these outcomes was low.\nThe evidence is current to December 2015."
}
] |
query-laysum
|
6002
|
[
{
"role": "user",
"content": "Abstract: Background\nAlcohol use disorder (AUD) confers a prodigious burden of disease, disability, premature mortality, and high economic costs from lost productivity, accidents, violence, incarceration, and increased healthcare utilization. For over 80 years, Alcoholics Anonymous (AA) has been a widespread AUD recovery organization, with millions of members and treatment free at the point of access, but it is only recently that rigorous research on its effectiveness has been conducted.\nObjectives\nTo evaluate whether peer-led AA and professionally-delivered treatments that facilitate AA involvement (Twelve-Step Facilitation (TSF) interventions) achieve important outcomes, specifically: abstinence, reduced drinking intensity, reduced alcohol-related consequences, alcohol addiction severity, and healthcare cost offsets.\nSearch methods\nWe searched the Cochrane Drugs and Alcohol Group Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Embase, CINAHL and PsycINFO from inception to 2 August 2019. We searched for ongoing and unpublished studies via ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) on 15 November 2018. All searches included non-English language literature. We handsearched references of topic-related systematic reviews and bibliographies of included studies.\nSelection criteria\nWe included randomized controlled trials (RCTs), quasi-RCTs and non-randomized studies that compared AA or TSF (AA/TSF) with other interventions, such as motivational enhancement therapy (MET) or cognitive behavioral therapy (CBT), TSF treatment variants, or no treatment. We also included healthcare cost offset studies. Participants were non-coerced adults with AUD.\nData collection and analysis\nWe categorized studies by: study design (RCT/quasi-RCT; non-randomized; economic); degree of standardized manualization (all interventions manualized versus some/none); and comparison intervention type (i.e. whether AA/TSF was compared to an intervention with a different theoretical orientation or an AA/TSF intervention that varied in style or intensity). For analyses, we followed Cochrane methodology calculating the standard mean difference (SMD) for continuous variables (e.g. percent days abstinent (PDA)) or the relative risk (risk ratios (RRs)) for dichotomous variables. We conducted random-effects meta-analyses to pool effects wherever possible.\nMain results\nWe included 27 studies containing 10,565 participants (21 RCTs/quasi-RCTs, 5 non-randomized, and 1 purely economic study). The average age of participants within studies ranged from 34.2 to 51.0 years. AA/TSF was compared with psychological clinical interventions, such as MET and CBT, and other 12-step program variants.\nWe rated selection bias as being at high risk in 11 of the 27 included studies, unclear in three, and as low risk in 13. We rated risk of attrition bias as high risk in nine studies, unclear in 14, and low in four, due to moderate (> 20%) attrition rates in the study overall (8 studies), or in study treatment group (1 study). Risk of bias due to inadequate researcher blinding was high in one study, unclear in 22, and low in four. Risks of bias arising from the remaining domains were predominantly low or unclear.\nAA/TSF (manualized) compared to treatments with a different theoretical orientation (e.g. CBT) (randomized/quasi-randomized evidence)\nRCTs comparing manualized AA/TSF to other clinical interventions (e.g. 
CBT), showed AA/TSF improves rates of continuous abstinence at 12 months (risk ratio (RR) 1.21, 95% confidence interval (CI) 1.03 to 1.42; 2 studies, 1936 participants; high-certainty evidence). This effect remained consistent at both 24 and 36 months.\nFor percentage days abstinent (PDA), AA/TSF appears to perform as well as other clinical interventions at 12 months (mean difference (MD) 3.03, 95% CI -4.36 to 10.43; 4 studies, 1999 participants; very low-certainty evidence), and better at 24 months (MD 12.91, 95% CI 7.55 to 18.29; 2 studies, 302 participants; very low-certainty evidence) and 36 months (MD 6.64, 95% CI 1.54 to 11.75; 1 study, 806 participants; very low-certainty evidence).\nFor longest period of abstinence (LPA), AA/TSF may perform as well as comparison interventions at six months (MD 0.60, 95% CI -0.30 to 1.50; 2 studies, 136 participants; low-certainty evidence).\nFor drinking intensity, AA/TSF may perform as well as other clinical interventions at 12 months, as measured by drinks per drinking day (DDD) (MD -0.17, 95% CI -1.11 to 0.77; 1 study, 1516 participants; moderate-certainty evidence) and percentage days heavy drinking (PDHD) (MD -5.51, 95% CI -14.15 to 3.13; 1 study, 91 participants; low-certainty evidence).\nFor alcohol-related consequences, AA/TSF probably performs as well as other clinical interventions at 12 months (MD -2.88, 95% CI -6.81 to 1.04; 3 studies, 1762 participants; moderate-certainty evidence).\nFor alcohol addiction severity, one study found evidence of a difference in favor of AA/TSF at 12 months (P < 0.05; low-certainty evidence).\nAA/TSF (non-manualized) compared to treatments with a different theoretical orientation (e.g. CBT) (randomized/quasi-randomized evidence)\nFor the proportion of participants completely abstinent, non-manualized AA/TSF may perform as well as other clinical interventions at three to nine months follow-up (RR 1.71, 95% CI 0.70 to 4.18; 1 study, 93 participants; low-certainty evidence).\nNon-manualized AA/TSF may also perform slightly better than other clinical interventions for PDA (MD 3.00, 95% CI 0.31 to 5.69; 1 study, 93 participants; low-certainty evidence).\nFor drinking intensity, AA/TSF may perform as well as other clinical interventions at nine months, as measured by DDD (MD -1.76, 95% CI -2.23 to -1.29; 1 study, 93 participants; very low-certainty evidence) and PDHD (MD 2.09, 95% CI -1.24 to 5.42; 1 study, 286 participants; low-certainty evidence).\nNone of the RCTs comparing non-manualized AA/TSF to other clinical interventions assessed LPA, alcohol-related consequences, or alcohol addiction severity.\nCost-effectiveness studies\nIn three studies, AA/TSF had higher healthcare cost savings than outpatient treatment, CBT, and no AA/TSF treatment. The fourth study found that total medical care costs decreased for participants attending CBT, MET, and AA/TSF treatment, but that among participants with worse prognostic characteristics AA/TSF had higher potential cost savings than MET (moderate-certainty evidence).\nAuthors' conclusions\nThere is high quality evidence that manualized AA/TSF interventions are more effective than other established treatments, such as CBT, for increasing abstinence. Non-manualized AA/TSF may perform as well as these other established treatments. AA/TSF interventions, both manualized and non-manualized, may be at least as effective as other treatments for other alcohol-related outcomes. 
AA/TSF probably produces substantial healthcare cost savings among people with alcohol use disorder.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review summarized research comparing the Alcoholics Anonymous (AA) and similarTwelve-Step Facilitation (TSF) programs (AA/TSF) to other treatments to see if they help people with drinking problems to stay sober, or reduce alcohol consumption and drinking-related consequences. We also examined whether AA/TSF reduces healthcare costs relative to other treatments."
}
] |
query-laysum
|
6003
|
[
{
"role": "user",
"content": "Abstract: Background\nMénière's disease is a condition that causes recurrent episodes of vertigo, associated with hearing loss and tinnitus. First-line treatments often involve dietary or lifestyle changes, medication or local (intratympanic) treatments. However, surgery may also be considered for people with persistent or severe symptoms. The efficacy of different surgical interventions at preventing vertigo attacks, and their associated symptoms, is currently unclear.\nObjectives\nTo evaluate the benefits and harms of surgical interventions versus placebo or no treatment in people with Ménière's disease.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 14 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with definite or probable Ménière's disease comparing ventilation tubes, endolymphatic sac surgery, semi-circular canal plugging/obliteration, vestibular nerve section or labyrinthectomy with either placebo (sham surgery) or no treatment. We excluded studies with follow-up of less than three months, or with a cross-over design (unless data from the first phase of the study could be identified).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) change in hearing, 6) change in tinnitus and 7) other adverse effects. We considered outcomes reported at three time points: 3 to < 6 months, 6 to ≤ 12 months and > 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included two studies with a total of 178 participants. One evaluated ventilation tubes compared to no treatment, the other evaluated endolymphatic sac decompression compared to sham surgery.\nVentilation tubes\nWe included a single RCT of 148 participants with definite Ménière's disease. It was conducted in a single centre in Japan from 2010 to 2013. Participants either received ventilation tubes with standard medical treatment, or standard medical treatment alone, and were followed up for two years. Some data were reported on the number of participants in whom vertigo resolved, and the effect of the intervention on hearing. Our other primary and secondary outcomes were not reported in this study. This is a single, small study and for all outcomes the certainty of evidence was low or very low. We are unable to draw meaningful conclusions from the numerical results.\nEndolymphatic sac decompression\nWe also included one RCT of 30 participants that compared endolymphatic sac decompression with sham surgery. This was a single-centre study conducted in Denmark during the 1980s. Follow-up was predominantly conducted at one year, but additional follow-up continued for up to nine years in some participants. Some data were reported on hearing and vertigo (both improvement in vertigo and change in vertigo), but our other outcomes of interest were not reported. 
Again, this is a single, very small study and we rated the certainty of the evidence as very low for all outcomes. We are therefore unable to draw meaningful conclusions from the numerical results.\nAuthors' conclusions\nWe are unable to draw clear conclusions about the efficacy of these surgical interventions for Ménière's disease. We identified evidence for only two of our five proposed comparisons, and we assessed all the evidence as low- or very low-certainty. This means that we have very low confidence that the effects reported are accurate estimates of the true effect of these interventions. Many of the outcomes that we planned to assess were not reported by the studies, such as the impact on quality of life, and adverse effects of the interventions. Consensus on the appropriate outcomes to measure in studies of Ménière's disease is needed (i.e. a core outcome set) in order to guide future studies in this area and enable meta-analyses of the results. This must include appropriate consideration of the potential harms of treatment, as well as the benefits.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This evidence is up-to-date to September 2022."
}
] |
query-laysum
|
6004
|
[
{
"role": "user",
"content": "Abstract: Background\nResilience can be defined as the maintenance or quick recovery of mental health during or after periods of stressor exposure, which may result from a potentially traumatising event, challenging life circumstances, a critical life transition phase, or physical illness. Healthcare professionals, such as nurses, physicians, psychologists and social workers, are exposed to various work-related stressors (e.g. patient care, time pressure, administration) and are at increased risk of developing mental disorders. This population may benefit from resilience-promoting training programmes.\nObjectives\nTo assess the effects of interventions to foster resilience in healthcare professionals, that is, healthcare staff delivering direct medical care (e.g. nurses, physicians, hospital personnel) and allied healthcare staff (e.g. social workers, psychologists).\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, 11 other databases and three trial registries from 1990 to June 2019. We checked reference lists and contacted researchers in the field. We updated this search in four key databases in June 2020, but we have not yet incorporated these results.\nSelection criteria\nRandomised controlled trials (RCTs) in adults aged 18 years and older who are employed as healthcare professionals, comparing any form of psychological intervention to foster resilience, hardiness or post-traumatic growth versus no intervention, wait-list, usual care, active or attention control. Primary outcomes were resilience, anxiety, depression, stress or stress perception and well-being or quality of life. Secondary outcomes were resilience factors.\nData collection and analysis\nTwo review authors independently selected studies, extracted data, assessed risks of bias, and rated the certainty of the evidence using the GRADE approach (at post-test only).\nMain results\nWe included 44 RCTs (high-income countries: 36). Thirty-nine studies solely focused on healthcare professionals (6892 participants), including both healthcare staff delivering direct medical care and allied healthcare staff. Four studies investigated mixed samples (1000 participants) with healthcare professionals and participants working outside of the healthcare sector, and one study evaluated training for emergency personnel in general population volunteers (82 participants). The included studies were mainly conducted in a hospital setting and included physicians, nurses and different hospital personnel (37/44 studies).\nParticipants mainly included women (68%) from young to middle adulthood (mean age range: 27 to 52.4 years). Most studies investigated group interventions (30 studies) of high training intensity (18 studies; > 12 hours/sessions), that were delivered face-to-face (29 studies). Of the included studies, 19 compared a resilience training based on combined theoretical foundation (e.g. mindfulness and cognitive-behavioural therapy) versus unspecific comparators (e.g. wait-list). The studies were funded by different sources (e.g. hospitals, universities), or a combination of different sources. 
Fifteen studies did not specify the source of their funding, and one study received no funding support.\nRisk of bias was high or unclear for most studies in performance, detection, and attrition bias domains.\nAt post-intervention, very-low certainty evidence indicated that, compared to controls, healthcare professionals receiving resilience training may report higher levels of resilience (standardised mean difference (SMD) 0.45, 95% confidence interval (CI) 0.25 to 0.65; 12 studies, 690 participants), lower levels of depression (SMD −0.29, 95% CI −0.50 to −0.09; 14 studies, 788 participants), and lower levels of stress or stress perception (SMD −0.61, 95% CI −1.07 to −0.15; 17 studies, 997 participants). There was little or no evidence of any effect of resilience training on anxiety (SMD −0.06, 95% CI −0.35 to 0.23; 5 studies, 231 participants; very-low certainty evidence) or well-being or quality of life (SMD 0.14, 95% CI −0.01 to 0.30; 13 studies, 1494 participants; very-low certainty evidence). Effect sizes were small except for resilience and stress reduction (moderate). Data on adverse effects were available for three studies, with none reporting any adverse effects occurring during the study (very-low certainty evidence).\nAuthors' conclusions\nFor healthcare professionals, there is very-low certainty evidence that, compared to control, resilience training may result in higher levels of resilience, lower levels of depression, stress or stress perception, and higher levels of certain resilience factors at post-intervention.\nThe paucity of medium- or long-term data, heterogeneous interventions and restricted geographical distribution limit the generalisability of our results. Conclusions should therefore be drawn cautiously. The findings suggest positive effects of resilience training for healthcare professionals, but the evidence is very uncertain. There is a clear need for high-quality replications and improved study designs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For healthcare professionals, resilience training may improve resilience, and may reduce symptoms of depression and stress immediately after the end of treatment. Resilience interventions do not appear to reduce anxiety symptoms or improve well-being. However, the evidence found in this review is limited and very uncertain. This means that, at present, we have very little confidence that resilience interventions make a difference to these outcomes. Further research is very likely to change the findings.\nVery few studies reported on the longer-term impact of resilience interventions. Studies used a variety of different outcome measures and intervention designs, making it difficult to draw general conclusions from the findings. Potential adverse events were only examined in three studies, showing no undesired effects. More research is needed of high methodological quality and with improved study designs."
}
] |
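A note on the effect sizes quoted in the row above: a standardised mean difference (SMD) is the difference between group means divided by a pooled standard deviation, which puts studies that used different questionnaires on a common scale. A minimal sketch of that calculation follows; the means, standard deviations and sample sizes are invented for illustration, not taken from the included trials.

```python
# Illustrative only: computing a standardised mean difference (SMD).
# The group summaries below are made up, not data from the review.
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# e.g. a resilience score of 68 (SD 10) after training vs 63 (SD 12) in controls
print(round(smd(68, 10, 50, 63, 12, 50), 2))  # 0.45
```

By the usual convention, SMDs of roughly 0.2, 0.5 and 0.8 are read as small, moderate and large, which is how an abstract arrives at labels such as "small" for most outcomes and "moderate" for an SMD near 0.5.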
query-laysum
|
6005
|
[
{
"role": "user",
"content": "Abstract: Background\nUrinary stone disease is a common condition characterised by increasing prevalence and high rates of recurrence. Observational studies have reported that increased water intake played a role in the prevention of urinary stone formation but with limited strength of evidence.\nObjectives\nTo compare the effects of increased water intake with standard water intake for the prevention of urinary stone formation in participants with or without a history of urinary stones.\nSearch methods\nWe performed a systematic search of PubMed (MEDLINE), EMBASE (Ovid) and the Cochrane Library to 15 October 2019. We handsearched review articles, clinical trial registries, and reference lists of retrieved articles. We did not apply any restrictions to publication language or publication status.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs looking at the benefits and harms of increased water intake versus standard water intake for the prevention of urinary stone formation in participants with or without a history of urinary stones.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently extracted data and assessed the risk of bias of included studies. We pooled dichotomous outcomes (e.g. incidence/recurrence rate of urinary stones; adverse events) using risk ratios (RRs) with 95% confidence intervals (CIs). We calculated hazard ratio (HRs) and corresponding 95% CIs to assess the intervention effect for time-to-event outcomes. We assessed the certainty of the evidence by using the GRADE criteria.\nMain results\nOur search identified no RCTs investigating the role of increased water intake for the prevention of urinary stone formation in participants with no history of urinary stones (primary prevention). We found one RCT assessing the effects of increased water intake versus standard water intake for the prevention of urinary stone formation in people with a history of urinary stones (secondary prevention). This trial randomised 220 participants (110 participants in the intervention group with increased water intake and 110 in the control group with standard water intake). Increased water intake was defined as achieving a urine volume of at least 2.0 L per day by drinking water.\nBased on this study, increased water intake may decrease stone recurrences (RR 0.45, 95% CI 0.24 to 0.84; 199 participants; low-certainty evidence); this corresponds to 149 fewer (43 fewer to 205 fewer) stone recurrences per 1000 participants with 270 stone recurrence per 1000 participants over five years in the control group.\nIncreased water intake may also prolong the time to urinary stone recurrence compared to standard water intake (HR 0.40, 95% CI 0.20 to 0.79; 199 participants; low-certainty evidence); based on a stone recurrence rate of 270 per 1000 participants over five years, this corresponds to 152 fewer (209 fewer to 50 fewer) recurrences per 1000 participants.\nFor both outcomes we downgraded the certainty of evidence for study limitations and imprecision. We found no evidence for the outcome of adverse events\nAuthors' conclusions\nWe found no RCT evidence on the role of increased water intake for primary prevention of urinary stones. For secondary prevention, increased water intake achieving a urine volume of at least 2.0 L/day may reduce urinary stone recurrence and prolong time to recurrence for people with a history of urinary stone disease. 
However, our confidence in these findings is limited. We did not find evidence for adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that drinking more water may reduce the risk of stones coming back. It may also prolong the time it takes for stones to come back. We found no evidence of unwanted effects."
}
] |
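The absolute effects in the abstract above follow from the relative risk and the control-group risk: fewer events per 1000 = control risk × (1 − RR). A short check of that arithmetic, using only figures stated in the abstract (the review's rounding differs slightly, e.g. 148.5 is reported as 149):

```python
# Converting a risk ratio into absolute effects, as the abstract does:
# control risk of 270 recurrences per 1000 over five years,
# RR 0.45 (95% CI 0.24 to 0.84).
control_risk = 270  # recurrences per 1000 participants in the control group

for rr in (0.45, 0.24, 0.84):
    fewer = control_risk * (1 - rr)
    print(f"RR {rr}: {fewer:.1f} fewer recurrences per 1000")
# prints 148.5, 205.2 and 43.2 -- reported as 149, 205 and 43 fewer per 1000
```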
query-laysum
|
6006
|
[
{
"role": "user",
"content": "Abstract: Background\nPhysical activity, a balanced diet, avoidance of tobacco exposure, and limited alcohol consumption may reduce morbidity and mortality from non-communicable diseases (NCDs). Mass media interventions are commonly used to encourage healthier behaviours in population groups. It is unclear whether targeted mass media interventions for ethnic minority groups are more or less effective in changing behaviours than those developed for the general population.\nObjectives\nTo determine the effects of mass media interventions targeting adult ethnic minorities with messages about physical activity, dietary patterns, tobacco use or alcohol consumption to reduce the risk of NCDs.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, ERIC, SweMed+, and ISI Web of Science until August 2016. We also searched for grey literature in OpenGrey, Grey Literature Report, Eldis, and two relevant websites until October 2016. The searches were not restricted by language.\nSelection criteria\nWe searched for individual and cluster-randomised controlled trials, controlled before-and-after studies (CBA) and interrupted time series studies (ITS). Relevant interventions promoted healthier behaviours related to physical activity, dietary patterns, tobacco use or alcohol consumption; were disseminated via mass media channels; and targeted ethnic minority groups. The population of interest comprised adults (≥ 18 years) from ethnic minority groups in the focal countries. Primary outcomes included indicators of behavioural change, self-reported behavioural change and knowledge and attitudes towards change. Secondary outcomes were the use of health promotion services and costs related to the project.\nData collection and analysis\nTwo authors independently reviewed the references to identify studies for inclusion. We extracted data and assessed the risk of bias in all included studies. We did not pool the results due to heterogeneity in comparisons made, outcomes, and study designs. We describe the results narratively and present them in 'Summary of findings' tables. We judged the quality of the evidence using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) methodology.\nMain results\nSix studies met the inclusion criteria, including three RCTs, two cluster-RCTs and one ITS. All were conducted in the USA and comprised targeted mass media interventions for people of African descent (four studies), Spanish-language dominant Latino immigrants (one study), and Chinese immigrants (one study). The two latter studies offered the intervention in the participants’ first language (Spanish, Cantonese, or Mandarin). Three interventions targeted towards women only, one pregnant women specifically. We judged all studies as being at unclear risk of bias in at least one domain and three studies as being at high risk of bias in at least one domain.\nWe categorised the findings into three comparisons. The first comparison examined mass media interventions targeted at ethnic minorities versus an equivalent mass media intervention intended for the general population. The one study in this category (255 participants of African decent) found little or no difference in effect on self-reported behavioural change for smoking and only small differences in attitudes to change between participants who were given a culturally specific smoking cessation booklet versus a booklet intended for the general population. 
We are uncertain about the effect estimates, as assessed by the GRADE methodology (very low quality evidence of effect). No study provided data for indicators of behavioural change or adverse effects.\nThe second comparison assessed targeted mass media interventions versus no intervention. One study (154 participants of African descent) reported effects for our primary outcomes. Participants in the intervention group had access to 12 one-hour live programmes on cable TV and received print material over three months regarding nutrition and physical activity to improve health and weight control. Change in body mass index (BMI) was comparable between groups 12 months after the baseline (low quality evidence). Food habits (fat behaviours) scores and total leisure activity scores changed favourably for the intervention group (very low quality evidence). Two other studies exposed entire populations in geographical areas to radio advertisements targeted towards African American communities. Authors presented effects on two of our secondary outcomes, use of health promotion services and project costs. The campaign message was to call smoking quit lines. The outcome was the number of calls received. After one year, one study reported 18 calls per estimated 10,000 targeted smokers from the intervention communities (estimated target population 310,500 persons), compared to 0.2 calls per estimated 10,000 targeted smokers from the control communities (estimated target population 331,400 persons) (moderate quality evidence). The ITS study also reported an increase in the number of calls from the target population during campaigns (low quality evidence). The proportion of African American callers increased in both studies (low to very low quality evidence). No study provided data on knowledge and attitudes for change and adverse effects. Information on costs was sparse.\nThe third comparison assessed targeted mass media interventions versus a mass media intervention plus personalised content. Findings are based on three studies (1361 participants). Participants in these comparison groups received personal feedback. Two of the studies recorded weight changes over time. Neither found significant differences between the groups (low quality evidence). Evidence on behavioural changes and on knowledge and attitudes typically showed some effects in favour of receiving personalised content, or no significant differences between groups (very low quality evidence). No study provided data on adverse effects. Information on costs was sparse.\nAuthors' conclusions\nThe available evidence is inadequate for understanding whether mass media interventions targeted toward ethnic minority populations are more effective in changing health behaviours than mass media interventions intended for the population at large. When compared to no intervention, a targeted mass media intervention may increase the number of calls to smoking quit lines, but the effect on health behaviours is unclear. These studies could not distinguish the impact of different components, for instance the effect of hearing a message regarding behavioural change, the cultural adaptation to the ethnic minority group, or increased reach to the target group through more appropriate mass media channels. 
New studies should explore targeted interventions for ethnic minorities with a first language other than the dominant language in their resident country, as well as directly compare targeted versus general population mass media interventions.\n\nGiven the provided abstract, please respond to the following query: \"Background and review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Health authorities and non-governmental organisations often use mass media interventions (e.g. leaflets, radio and TV advertisements, posters and social media) to encourage healthier behaviours related to physical activity, dietary patterns, tobacco use or alcohol consumption, among others. In this review, we consider the effects of mass media interventions targeted towards ethnic minorities. A targeted intervention is designed for and considers the characteristics of a specific group, ideally providing ethnic minority groups equal opportunities and resources to access information, life skills, and opportunities to make healthier choices. However, we do not know whether targeted mass media strategies are more effective in reaching and influencing ethnic minorities than mass media strategies developed for the general population."
}
] |
query-laysum
|
6007
|
[
{
"role": "user",
"content": "Abstract: Background\nMumps is an infectious disease caused by the mumps virus. Chinese physicians generally believe that Chinese medicinal herbs are effective in alleviating symptoms and reducing the duration of mumps. Herbalists tend to develop a treatment plan according to the individual's symptoms.\nObjectives\nTo evaluate the effectiveness and safety of Chinese medicinal herbs combined with routine treatments for mumps.\nSearch methods\nWe searched CENTRAL (2015, Issue 1), MEDLINE (1948 to January week 4, 2015), EMBASE (1974 to February 2015), CINAHL (1981 to February 2015), AMED (1985 to April 2014), the Chinese Biomedical Database (CBM) (1980 to February 2015), China National Knowledge Infrastructure (CNKI) (1979 to February 2015), VIP Information (1989 to February 2015), and relevant databases of ongoing trials.\nSelection criteria\nRandomised controlled trials (RCTs) of Chinese medicinal herbs for mumps (with or without complications).\nData collection and analysis\nTwo review authors independently evaluated trial quality and conducted data extraction. We contacted the trial authors for missing data regarding participant allocation. Some trials allocated participants according to the participants' admission sequence, making it a pseudo-random allocation. None of the trials concealed participants' allocation or used blinding.\nMain results\nWe did not identify any eligible trials for inclusion. We identified 108 studies that claimed to use random allocation. We excluded 104 studies because the allocation methods the authors had used were not actually randomised. We were unable to contact the trial authors of the remaining four studies. These trials require further evaluation and have been allocated to the 'Studies awaiting classification' section.\nAuthors' conclusions\nWe did not find any RCTs for or against Chinese herbal medicine used in the treatment of mumps. We hope more high-quality RCTs will be conducted in the future.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to February 2015. The review authors did not find any randomised controlled trials (RCTs) to provide reliable evidence for the effectiveness and safety of Chinese medicinal herbs for mumps. Although no trials were included, we reviewed the studies and found no reports of adverse events with Chinese medicinal herbs as a treatment for mumps."
}
] |
query-laysum
|
6008
|
[
{
"role": "user",
"content": "Abstract: Background\nGolimumab is a humanized inhibitor of Tumor necrosis factor-alpha, recently approved by the Food and Drug Administration (FDA) for the treatment of Rheumatoid arthritis (RA).\nObjectives\nThe objective of this systematic review was to compare the efficacy and safety of golimumab (alone or in combination with DMARDs or biologics) to placebo (alone or in combination with DMARDs or biologics) in randomized or quasi-randomized clinical trials in adults with RA.\nSearch methods\nAn expert librarian searched six databases for any clinical trials of golimumab in RA, including the Cochrane Central Register of Controlled Trials (CENTRAL), OVID MEDLINE, CINAHL, EMBASE, Science Citation Index (Web of Science) and Current Controlled Trials databases.\nSelection criteria\nStudies were included if they used golimumab in adults with RA, were randomized or quasi-randomized and provided clinical outcomes.\nData collection and analysis\nTwo review authors (JS, SN) independently reviewed all titles and abstracts, selected appropriate studies for full review and reviewed the full-text articles for the final selection of included studies. For each study, they independently abstracted study characteristics, safety and efficacy data and performed risk of bias assessment. Disagreements were resolved by consensus. For continuous measures, we calculated mean differences or standardized mean differences and for categorical measures, relative risks. 95% confidence intervals were calculated.\nMain results\nFour RCTs with 1,231 patients treated with golimumab and 483 patients treated with placebo were included. Of these, 436 were treated with the FDA-approved dose of golimumab 50 mg every four weeks. Compared to patients treated with placebo+methotrexate, patients treated with the FDA-approved dose of golimumab+methotrexate were 2.6 times more likely to reach ACR50 (95% confidence interval (CI) 1.3 to 4.9; P=0.005 and NNT= 5, 95% confidence interval 2 to 20), no more likely to have any adverse event (relative risk 1.1, 95% Cl 0.9 to 1.2; P=0.44), and 0.5 times as likely to have overall withdrawals (95% Cl 0.3 to 0.8; P=0.005). Golimumab-treated patients were significantly more likely to achieve remission, low disease activity and improvement in functional ability compared to placebo (all statistically significant). No significant differences were noted between golimumab and placebo regarding serious adverse events, infections, serious infections, lung infections, tuberculosis, cancer, withdrawals due to adverse events and inefficacy and deaths. No radiographic data were reported.\nAuthors' conclusions\nWith an overall high grade of evidence, at the FDA-approved dose, golimumab is significantly more efficacious than placebo in treatment of patients with active RA , when used in combination with methotrexate. The short-term safety profile, based on short-term RCTs, is reasonable with no differences in total adverse events, serious infections, cancer, tuberculosis or deaths. Long-term surveillance studies are needed for safety assessment.\n\nGiven the provided abstract, please respond to the following query: \"People who dropped out of the studies due to any reason\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People who dropped out of the studies due to any reason\n- 5 fewer people out of 100 who took golimumab dropped out of the studies for any reason. (2% absolute difference)\n- 5 people out of 100 who took golimumab dropped out of the studies for any reason.\n- 10 people out of 100 who used a placebo dropped out of the studies for any reason."
}
] |
query-laysum
|
6009
|
[
{
"role": "user",
"content": "Abstract: Background\nRefractory peptic ulcers are ulcers in the stomach or duodenum that do not heal after eight to 12 weeks of medical treatment or those that are associated with complications despite medical treatment. Recurrent peptic ulcers are peptic ulcers that recur after healing of the ulcer. Given the number of deaths due to peptic ulcer-related complications and the long-term complications of medical treatment (increased incidence of fracture), it is unclear whether medical or surgical intervention is the better treatment option in people with recurrent or refractory peptic ulcers.\nObjectives\nTo assess the benefits and harms of medical versus surgical treatment for people with recurrent or refractory peptic ulcer.\nSearch methods\nWe searched the specialised register of the Cochrane Upper GI and Pancreatic Diseases group, the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded, and trials registers until September 2015 to identify randomised trials and non-randomised studies, using search strategies. We also searched the references of included studies to identify further studies.\nSelection criteria\nWe considered randomised controlled trials and non-randomised studies comparing medical treatment with surgical treatment in people with refractory or recurrent peptic ulcer, irrespective of language, blinding, or publication status for inclusion in the review.\nData collection and analysis\nTwo review authors independently identified trials and extracted data. We planned to calculate the risk ratio, mean difference, standardised mean difference, or hazard ratio with 95% confidence intervals using both fixed-effect and random-effects models with Review Manager 5 based on intention-to-treat analysis.\nMain results\nWe included only one non-randomised study published 30 years ago in the review. This study included 77 participants who had gastric ulcer and in whom medical therapy (histamine H2 receptor blockers, antacids, and diet) had failed after an average duration of treatment of 29 months. The authors do not state whether these were recurrent or refractory ulcers. It appears that the participants did not have previous complications such as bleeding or perforation. Of the 77 included participants, 37 participants continued to have medical therapy while 40 participants received surgical therapy (antrectomy with or without vagotomy; subtotal gastrectomy with or without vagotomy; vagotomy; pyloroplasty and suture of the ulcer; suture or closure of ulcer without vagotomy or excision of the ulcer; proximal gastric or parietal cell vagotomy alone; suture or closure of the ulcer with proximal gastric or parietal cell vagotomy). Whether to use medical or surgical treatment was determined by participant's or treating physician's preference.\nThe study authors reported that two participants in the medical treatment group (2 out of 37; 5.4%) had gastric cancer, which was identified by repeated biopsy. They did not report the proportion of participants who had gastric cancer in the surgical treatment group. They also did not report the implications of the delayed diagnosis of gastric cancer in the medical treatment group. 
They did not report any other outcomes of interest for this review (that is health-related quality of life (using any validated scale), adverse events and serious adverse events, peptic ulcer bleeding, peptic ulcer perforation, abdominal pain, and long-term mortality).\nAuthors' conclusions\nWe found no studies that provide the relative benefits and harms of medical versus surgical treatment for recurrent or refractory peptic ulcers. Studies that evaluate the natural history of recurrent and refractory peptic ulcers are urgently required to determine whether randomised controlled trials comparing medical versus surgical management in patients with recurrent or refractory peptic ulcers or both are necessary. Such studies will also provide information for the design of such randomised controlled trials. A minimum follow-up of two to three years will allow the calculation of the incidence of complications and gastric cancer (in gastric ulcers only) in recurrent and refractory peptic ulcers. In addition to complications related to treatment and disease, health-related quality of life and loss of productivity should also be measured.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Approximately 1 in 100 to 1 in 800 people have peptic ulcers. The major causes of peptic ulcer are Helicobacter pylori infection, non-steroidal anti-inflammatory drug (NSAID) use, and smoking. People who have peptic ulcer have upper abdominal pain, which is sometimes accompanied by dyspepsia (that is fullness, bloating, loss of appetite after eating a small amount of food, or nausea). The most serious complications of peptic ulcers are bleeding from the ulcer and perforation of the peptic ulcer, which results in stomach or upper small intestinal contents or both leaking into the tummy. About 1 in 10 people with bleeding peptic ulcer and 1 in 4 people with perforated peptic ulcer die. Peptic ulcers cause approximately 3000 to 4500 deaths per year in the US.\nCurrently, medical management, usually with a group of drugs called proton pump inhibitors (such as omeprazole and lansoprazole), is the mainstay treatment for uncomplicated peptic ulcers. Recently concerns have arisen about the risk of fractures with long-term use of proton pump inhibitors. The alternative to medical treatment for refractory and recurrent peptic ulcer is surgical treatment to decrease the acid secretion in the stomach with the goal of curing the peptic ulcer. It is not known whether medical or surgical management is a better option for people with a refractory or recurrent peptic ulcer. We attempted to resolve this issue by searching the medical literature for studies comparing medical and surgical treatment in people with refractory or recurrent peptic ulcers."
}
] |
query-laysum
|
6010
|
[
{
"role": "user",
"content": "Abstract: Background\nNausea and vomiting is a common and distressing presenting complaint in emergency departments (ED). The aetiology of nausea and vomiting in EDs is diverse and drugs are commonly prescribed. There is currently no consensus as to the optimum drug treatment of nausea and vomiting in the adult ED setting.\nObjectives\nTo provide evidence of the efficacy and safety of antiemetic medications in the management of nausea and vomiting in the adult ED setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 8), MEDLINE (OvidSP) (January 1966 to August 2014), EMBASE (OvidSP) (January 1980 to August 2014) and ISI Web of Science (January 1955 to August 2014). We also searched relevant clinical trial registries and conference proceedings.\nSelection criteria\nWe included randomized controlled trials (RCTs) of any drug in the treatment of nausea and vomiting in the treatment of adults in the ED. Study eligibility was not restricted by language or publication status.\nData collection and analysis\nTwo review authors independently performed study selection, data extraction and assessment of risk of bias in included studies. We contacted authors of studies to obtain missing information if required.\nMain results\nWe included eight trials, involving 952 participants, of which 64% were women. Included trials were generally of adequate quality, with six trials at low risk of bias, and two trials at high risk of bias. Three trials with 518 participants compared five different drugs with placebo; all reported the primary outcome as mean change in visual analogue scale (VAS) (0 to 100) for nausea severity from baseline to 30 minutes. Trials did not routinely report other primary outcomes of the change in nausea VAS at 60 minutes or number of vomiting episodes. Differences in mean VAS change from baseline to 30 minutes between placebo and the drugs evaluated were: metoclopramide (three trials, 301 participants; mean difference (MD) -5.27, 95% confidence interval (CI) -11.33 to 0.80), ondansetron (two trials, 250 participants; MD -4.32, 95% CI -11.20 to 2.56), prochlorperazine (one trial, 50 participants; MD -1.80, 95% CI -14.40 to 10.80), promethazine (one trial, 82 participants; MD -8.47, 95% CI -19.79 to 2.85) and droperidol (one trial, 48 participants; MD -15.8, 95% CI -26.98 to -4.62). The only statistically significant change in baseline VAS to 30 minutes was for droperidol, in a single trial of 48 participants. No other drug was statistically significantly superior to placebo. Other included trials evaluated a drug compared to \"active controls\" (alternative antiemetic). There was no convincing evidence of superiority of any particular drug compared to active control. All trials included in this review reported adverse events, but they were variably reported precluding meaningful pooling of results. Adverse events were generally mild, there were no reported serious adverse events. Overall, the quality of the evidence was low, mainly because there were not enough data.\nAuthors' conclusions\nIn an ED population, there is no definite evidence to support the superiority of any one drug over any other drug, or the superiority of any drug over placebo. Participants receiving placebo often reported clinically significant improvement in nausea, implying general supportive treatment such as intravenous fluids may be sufficient for the majority of people. 
If a drug is considered necessary, choice of drug may be dictated by other considerations such as a person's preference, adverse-effect profile and cost. The review was limited by the paucity of clinical trials in this setting. Future research should include the use of placebo and consider focusing on specific diagnostic groups and controlling for factors such as intravenous fluid administered.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The main results of interest were the effect on nausea between zero and 60 minutes after the medicine was given, number of vomits and side effects to medicines. Of these, only nausea at 30 minutes and side effects were reported by all trials. From all trials, only one medicine was reported to be better than placebo and other medicines. That was droperidol, which was included in one small trial of 97 participants. No other single medicine was definitely better than any other medicine, and none of the other trials that included a placebo group showed that the active medicines definitely worked better than the placebo. Side effects were mild.\nOur results suggest that in people in the emergency department, nausea will generally improve, whether they are treated with specific medicines or placebo. Therefore, supportive treatment, such as intravenous fluids (where fluid is given directly into a blood vessel) may be sufficient for many people. Overall, the quality of the evidence was low, mainly because there was not enough data."
}
] |
query-laysum
|
6011
|
[
{
"role": "user",
"content": "Abstract: Background\nThe genital infection caused by Chlamydia trachomatis (CT) is a common sexually transmitted infection (STI) globally. The infection is mainly asymptomatic in women, thus it can produce infertility and chronic pelvic pain. In men infection is mainly symptomatic, but can evolve to prostatitis. Clinical practice guidelines for CT urogenital infections do not give any specific recommendation about which antibiotic use as first option\nObjectives\nTo assess the efficacy and safety of antibiotic treatment for CT genital infection in men and non-pregnant women.\nSearch methods\nThe Cochrane Sexually Transmitted Infections' (STI) Information Specialist developed the electronic searches in electronic databases (CENTRAL, MEDLINE, Embase and LILACS), and trials registers. We searched studies published from inception to June 2018.\nSelection criteria\nWe included parallel, randomised controlled trials (RCTs) of men, and sexually-active, non-pregnant women with CT infection (urethritis or uterine cervicitis or asymptomatic), diagnosed by cell culture for CT, nucleic acid amplification tests (NAAT) or antigen-based detection methods, who had been treated with any of the antibiotic regimens recommended by any of the updated to 2013 CT Guidelines.\nData collection and analysis\nFour review authors screened evidence according to selection criteria and independently extracted data and assessed risk of bias. Two authors developed the 'Summary of findings' tables. We used a fixed-effect meta-analysis model for combining data where it was reasonable to assume that studies were estimating the same underlying treatment effect. We estimated the pooled risk ratio in order to establish the effects of the comparisons. Our primary outcomes were microbiological failure and adverse events, and our secondary outcomes were clinical failure, antimicrobial resistance and reinfection.\nMain results\nWe selected 14 studies ( 2715 participants: 2147 (79.08%) men and 568 (20.92%) women). The studies were conducted mainly at STD clinics. Sample sizes ranged from 71 to 606 participants; follow-up was 29.7 days on average.\nFor the comparison: azithromycin single dose versus doxycycline once or twice daily for 7 days, in men treated for CT, the risk of microbiological failure was higher in the azithromycin group (RR 2.45, 95% CI 1.36 to 4.41; participants = 821; studies = 9; moderate-quality evidence), but regarding clinical failure, the results showed that the effect is uncertain (RR 0.94, 95% CI 0.43 to 2,05; I² = 55%; participants = 525; studies = 3; low-quality evidence). Regarding adverse events (AE) in men there could be little or no difference between the antibiotics (RR 0.83, 95% CI 0.67 to 1.02; participants = 1424; studies = 6; low-quality evidence). About women treated for CT, the effect on microbiological failure was uncertain (RR = 1.71, 95% CI 0.48 to 6.16; participants = 338; studies = 5; very low-quality evidence). 
There were no studies assessing clinical failure or adverse events in women; however, we found that azithromycin probably has fewer adverse events in both genders (RR 0.83, 95% CI 0.71 to 0.98; I² = 0%; participants = 2261; studies = 9; moderate-quality evidence).\nFor the second comparison, doxycycline compared to ofloxacin: for men treated for CT, the effect on microbiological failure was uncertain (RR 8.53, 95% CI 0.43 to 167.38; I² not applicable; participants = 80; studies = 2; very low-quality evidence), as was the effect on clinical failure (RR 0.85, 95% CI 0.28 to 2.62; participants = 36; studies = 1; very low-quality evidence). The effect in women on clinical failure was uncertain (RR 0.94, 95% CI 0.39 to 2.25; I² = 39%; participants = 127; studies = 2; very low-quality evidence). Regarding adverse events, the effect in both men and women was uncertain (RR 1.02, 95% CI 0.66 to 1.55; participants = 339; studies = 3; very low-quality evidence). The effect on microbiological failure in women and in men and women together was not estimable. The most frequently reported AEs were not serious and were of gastrointestinal origin. No studies assessed antimicrobial resistance or reinfection in either comparison.\nAuthors' conclusions\nIn men, regimens with azithromycin are probably less effective than doxycycline for microbiological failure; however, there might be little or no difference for clinical failure. For women, we are uncertain whether azithromycin compared to doxycycline increases the risk of microbiological failure. Azithromycin probably slightly reduces adverse events compared to doxycycline in men and women together but may make little difference in men alone. We are uncertain whether doxycycline compared to ofloxacin reduces microbiological failure in men or women alone, or in men and women together, or whether it reduces clinical failure or adverse events in men or women.\nBased on the fact that women suffer mainly asymptomatic infections, and in order to test the effectiveness and safety of the current recommendations (azithromycin, doxycycline and ofloxacin) for CT infection, especially in low- and middle-income countries, future RCTs should be designed and conducted to include a large enough sample size of women, and with low risk of bias. It is also important that future RCTs include adherence, CT resistance to antibiotic regimens, and risk of reinfection as outcomes to be measured. In addition, it is important to conduct a network meta-analysis in order to evaluate all those studies that included in one arm only the current antibiotic treatments for CT infection that are recommended by the updated clinical practice guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "CT is the most frequent cause of urinary tract and genital infections in women and men. However, women frequently show no symptoms when they are infected. CT infection can lead to complications or cause further problems in reproductive health in women (infertility, pelvic inflammatory disease, chronic pelvic pain), and men (prostatitis (swollen prostate gland), or chronic pelvic pain. Clinical guidelines for treating CT do not recommend a preferred antibiotic treatment. This Cochrane review evaluates all randomised controlled studies (where participants are assigned at random to one of the treatments), that included antibiotics for the treatment of genital CT infection that are recommended by the most up-to-date clinical guidelines."
}
] |
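The abstract above describes combining study results with a fixed-effect meta-analysis to estimate a pooled risk ratio. In practice this is usually inverse-variance weighting on the log scale: each study's log risk ratio is weighted by the reciprocal of its variance, and the weighted average is exponentiated back. A minimal sketch with invented study results (none of these numbers come from the review):

```python
# Fixed-effect (inverse-variance) pooling of risk ratios on the log scale.
# The study estimates below are invented for illustration only.
import math

studies = [(2.1, 4.2), (2.8, 5.6), (2.4, 4.0)]  # (RR, upper 95% CI limit)

num = den = 0.0
for rr, upper in studies:
    log_rr = math.log(rr)
    se = (math.log(upper) - log_rr) / 1.96  # standard error of log(RR)
    weight = 1 / se ** 2                    # inverse-variance weight
    num += weight * log_rr
    den += weight

pooled_log = num / den
half_width = 1.96 * math.sqrt(1 / den)
print(round(math.exp(pooled_log), 2),                 # pooled RR
      round(math.exp(pooled_log - half_width), 2),    # lower 95% CI limit
      round(math.exp(pooled_log + half_width), 2))    # upper 95% CI limit
```

A random-effects model would add a between-study variance term to each study's variance before weighting, which is why reviews report the heterogeneity statistic I² alongside the pooled estimate.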
query-laysum
|
6012
|
[
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a leading cause of irreversible blindness. Subconjunctival draining minimally-invasive glaucoma devices such as the Xen gelatin implant and InnFocus stent have been introduced as a treatment to prevent glaucoma progressing.\nThese implants provide a channel to allow aqueous humour from the anterior chamber of the eye to drain into the subconjunctival space on the surface of the eye thus reducing intraocular pressure (IOP) and mimicking the mechanism of the most commonly undertaken glaucoma surgery, trabeculectomy.\nObjectives\nTo evaluate the efficacy and safety of subconjunctival draining minimally-invasive glaucoma devices in treating people with open angle glaucoma and ocular hypertension whose condition is inadequately controlled with drops.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; which contains the Cochrane Eyes and Vision Trials Register; 2018, Issue 6); Ovid MEDLINE; Ovid Embase; the ISRCTN registry; ClinicalTrials.gov and the WHO ICTRP. The date of the search was 10 July 2018.\nSelection criteria\nWe searched for randomised controlled trials (RCTs) of Xen gelatin implant or InnFocus MicroShunt to other surgical treatments (other minimally-invasive glaucoma device techniques, trabeculectomy), laser treatment or medical treatment. We also planned to include trials where these devices were combined with phacoemulsification compared to phacoemulsification alone.\nData collection and analysis\nWe planned to have two review authors independently extract data from reports of included studies using a data collection form and analyse data based on methods expected by Cochrane. Our primary outcome was mean change in IOP. Secondary outcomes included proportion of participants who were drop-free; proportion of participants who achieved an IOP of 21 mmHg or less, 17 mmHg or less or 14 mmHg or less; and proportion of participants experiencing intra- and postoperative complications. We planned to measure all outcomes in the short-term (six to 18 months), medium-term (18 to 36 months), and long-term (36 months onwards).\nMain results\nWe found no completed RCTs that met our inclusion criteria. We found one ongoing study (NCT01881425). The study compares InnFocus MicroShunt to trabeculectomy in people with primary open angle glaucoma. The primary outcome is greater than 20% IOP reduction from baseline to 12 months' follow-up. A total of 889 people aged between 40 and 85 years have been enrolled. The estimated study completion date is November 2019.\nAuthors' conclusions\nThere is currently no high-quality evidence for the effects of subconjunctival draining minimally-invasive glaucoma devices for medically uncontrolled open angle glaucoma. Properly designed RCTs are needed to assess the medium- and long-term efficacy and safety of this technique.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The Cochrane Review authors searched for studies published up to 10 July 2018."
}
] |
query-laysum
|
6014
|
[
{
"role": "user",
"content": "Abstract: Background\nIt is not clear which fixation of total knee arthroplasty obtains the best clinical, functional and radiographic results in people with osteoarthritis and other non-traumatic diseases, such as rheumatoid arthritis.\nObjectives\nTo assess the benefits and harms of cemented, cementless and hybrid knee prostheses fixation techniques in participants with primary osteoarthritis (osteoarthritis following trauma was not included) and other non-traumatic diseases, such as rheumatoid arthritis.\nSearch methods\nWe searched CENTRAL (2011, issue 10), MEDLINE via PubMed, EMBASE, Current Controlled Trials, LILACS, The Cumulative Index to Nursing and Allied Health Literature, SPORTDiscus, Health Technology Assessment Database and the Database of Abstracts of Reviews of Effectiveness, all from implementation to October 2011, along with handsearches of high-yield journals and reference lists of articles. No language restrictions were applied.\nSelection criteria\nRandomized controlled trials (RCTs) evaluating cemented, cementless and hybrid fixation. Participants included patients that were 18 years or older with osteoarthritis and other non-traumatic diseases who were undergoing primary total knee arthroplasty.\nData collection and analysis\nThree authors independently selected the eligible trials, assessed the trial quality, risk of bias and extracted data. Researchers were contacted to obtain missing information.\nMain results\nFive RCTs and 297 participants were included in this review. Using meta-analysis on roentgen stereophotogrammetric analysis (RSA) we observed that cemented fixation of the tibial components demonstrated smaller displacement in relation to cementless fixation (with and without hydroxyapatite) after a follow-up of two years (maximum total point-motion, N = 167, two RCTs, mean difference (MD) = 0.52 mm, 95% confidence interval (CI) 0.31 to 0.74). However, the risk of future aseptic loosening with uncemented fixation was approximately half that of cemented fixation according to the arthroplasty instability classification (moderate quality as assessed by GRADE) inferred from RSA (N = 216, three RCTs, risk ratio (RR) = 0.47, 95% CI 0.24 to 0.92) with a 16% absolute risk difference between groups. The number needed to treat for an additional beneficial outcome (NNTB) to prevent future aseptic loosening was 7 (95% CI 5 to 44). There was a low risk of bias for RSA among the studies included. It was not possible to perform meta-analysis on patient-important outcomes, such as the survival rate of the implant (any change of a component), patient global assessments, functional measures, pain, health-related quality of life measures and adverse events. Almost all included studies recorded functional measures of Knee Society and Hospital for Special Surgery knee scores, but the authors of each study found no significant difference between the groups.\nAuthors' conclusions\nThere was a smaller displacement of the cemented tibial component in relation to the cementless fixation in studies with osteoarthritis and rheumatoid arthritis participants who underwent primary total knee prosthesis with a follow-up of two years; however, the cemented fixation presented a greater risk of future aseptic loosening than cementless fixation.\n\nGiven the provided abstract, please respond to the following query: \"What is osteoarthritis and other non-traumatic diseases of the knee and what types of knee implant fixation methods are available?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Osteoarthritis, also known as degenerative joint disease, has a variety of causes. Osteoarthritis can be classified as either primary or secondary, depending on whether there is an identifiable underlying cause. Most cases of the disease have no known cause and are referred to as primary osteoarthritis. Primary osteoarthritis is mostly related to aging. The causes of secondary osteoarthritis include rheumatoid arthritis, a disease in which the immune system attacks the joints (post-traumatic causes were not included in this review).\nIn some patients, damage and pain in the knee from arthritis may be severe enough for surgery. In these patients, the damaged joint surfaces can be replaced by an artificial joint or knee implant. In total knee replacement surgery, the ends of the long bones of the leg (femur and tibia) are usually replaced with metal ends, and an insert is placed between them. The femoral and tibial components can be fixed to the bone with or without cement. Cementation of the tibial component while leaving the femoral component cementless is a hybrid technique.\nIt is not clear which fixation obtains the best clinical, functional and radiographic results in people with osteoarthritis and other non-traumatic diseases, such as rheumatoid arthritis. The use of cement in total knee arthroplasty fixation is considered by many authors to be the gold standard but remains a controversial issue."
}
] |
query-laysum
|
6015
|
[
{
"role": "user",
"content": "Abstract: Background\nDiverticular disease is a common condition in Western industrialised countries. Most individuals remain asymptomatic throughout life; however, 25% experience acute diverticulitis. The standard treatment for acute diverticulitis is open surgery. Laparoscopic surgery - a minimal-access procedure - offers an alternative approach to open surgery, as it is characterised by reduced operative stress that may translate into shorter hospitalisation and more rapid recovery, as well as improved quality of life.\nObjectives\nTo evaluate the effectiveness of laparoscopic surgical resection compared with open surgical resection for individuals with acute sigmoid diverticulitis.\nSearch methods\nWe searched the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2) in the Cochrane Library; Ovid MEDLINE (1946 to 23 February 2017); Ovid Embase (1974 to 23 February 2017); clinicaltrials.gov (February 2017); and the World Health Organization (WHO) International Clinical Trials Registry (February 2017). We reviewed the bibliographies of identified trials to search for additional studies.\nSelection criteria\nWe included randomised controlled trials comparing elective or emergency laparoscopic sigmoid resection versus open surgical resection for acute sigmoid diverticulitis.\nData collection and analysis\nTwo review authors independently selected studies, assessed the domains of risk of bias from each included trial, and extracted data. For dichotomous outcomes, we calculated risk ratios (RRs) with 95% confidence intervals (CIs). For continuous outcomes, we planned to calculate mean differences (MDs) with 95% CIs for outcomes such as hospital stay, and standardised mean differences (SMDs) with 95% CIs for quality of life and global rating scales, if researchers used different scales.\nMain results\nThree trials with 392 participants met the inclusion criteria. Studies were conducted in three European countries (Switzerland, Netherlands, and Germany). The median age of participants ranged from 62 to 66 years; 53% to 64% were female. Inclusion criteria differed among studies. 
One trial included participants with Hinchey I characteristics as well as those who underwent Hartmann’s procedure; the second trial included only participants with \"a proven stage II/III disease according to the classification of Stock and Hansen\"; the third trial considered for inclusion patients with \"diverticular disease of sigmoid colon documented by colonoscopy and 2 episodes of uncomplicated diverticulitis, one at least being documented with CT scan, 1 episode of complicated diverticulitis, with a pericolic abscess (Hinchey stage I) or pelvic abscess (Hinchey stage II) requiring percutaneous drainage.\"\nWe determined that two studies were at low risk of selection bias; two that reported considerable dropouts were at high risk of attrition bias; none reported blinding of outcome assessors (unclear detection bias); and all were exposed to performance bias owing to the nature of the intervention.\nAvailable low-quality evidence suggests that laparoscopic surgical resection may lead to little or no difference in mean hospital stay compared with open surgical resection (3 studies, 360 participants; MD -0.62 (days), 95% CI -2.49 to 1.25; I² = 0%).\nLow-quality evidence suggests that operating time was longer in the laparoscopic surgery group than in the open surgery group (3 studies, 360 participants; MD 49.28 (minutes), 95% CI 40.64 to 57.93; I² = 0%).\nWe are uncertain whether laparoscopic surgery improves postoperative pain between day 1 and day 3 more effectively than open surgery. Low-quality evidence suggests that laparoscopic surgery may improve postoperative pain at the fourth postoperative day more effectively than open surgery (2 studies, 250 participants; MD = -0.65, 95% CI -1.04 to -0.25).\nResearchers reported quality of life differently across trials, hindering the possibility of meta-analysis. Low-quality evidence from one trial using the Short Form (SF)-36 questionnaire six weeks after surgery suggests that laparoscopic intervention may improve quality of life, whereas evidence from two other trials using the European Organization for Research and Treatment of Cancer core quality of life questionnaire (EORTC QLQ-C30) v3 and the Gastrointestinal Quality of Life Index score, respectively, suggests that laparoscopic surgery may make little or no difference in improving quality of life compared with open surgery.\nWe are uncertain whether laparoscopic surgery improves the following outcomes: 30-day postoperative mortality, early overall morbidity, major and minor complications, surgical complications, postoperative times to liquid and solid diets, and reoperations due to anastomotic leak.\nAuthors' conclusions\nResults from the present comprehensive review indicate that evidence to support or refute the safety and effectiveness of laparoscopic surgery versus open surgical resection for treatment of patients with acute diverticular disease is insufficient. Well-designed trials with adequate sample size are needed to investigate the efficacy of laparoscopic surgery towards important patient-oriented (e.g. postoperative pain) and health system-oriented outcomes (e.g. mean hospital stay).\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that laparoscopic surgical resection may lead to little or no difference in mean hospital stay when compared with open surgical resection. Operating time was longer in the laparoscopic group by an average of 49 minutes. No important differences were observed in terms of 30-day postoperative mortality, early overall morbidity, major and minor complications, surgical complications, postoperative times to liquid and solid diets, and reoperations due to anastomotic leak. To assess quality of life, researchers used different scales at different periods of time. Although one trial reported that patients who received laparoscopic surgery had better quality of life, the other two trials showed no benefit favouring either laparoscopic surgery or open surgery."
}
] |
query-laysum
|
6016
|
[
{
"role": "user",
"content": "Abstract: Background\nAnimal studies and trials in older children and adults suggest that a 'one dose per day' regimen of gentamicin is superior to a 'multiple doses per day' regimen.\nObjectives\nTo compare the efficacy and safety of one dose per day compared to multiple doses per day of gentamicin in suspected or proven sepsis in neonates.\nSearch methods\nEligible studies were identified by searching the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 3) in the Cochrane Library (searched 8 April 2016), MEDLINE (1966 to 8 April 2016), Embase (1980 to 8 April 2016), and CINAHL (December 1982 to 8 April 2016).\nSelection criteria\nAll randomised or quasi-randomised controlled trials comparing one dose per day ('once a day') compared to multiple doses per day ('multiple doses a day') of gentamicin to newborn infants.\nData collection and analysis\nData collection and analysis was performed according to the standards of the Cochrane Neonatal Review Group.\nMain results\nEleven RCTs were included (N = 574) and 28 excluded. All except one study enrolled infants of more than 32 weeks' gestation. Limited information suggested that infants in both 'once a day' as well as 'multiple doses a day' regimens showed adequate clearance of sepsis (typical RR 1.00, 95% CI 0.84 to 1.19; typical RD 0.00, 95% CI −0.19 to 0.19; 3 trials; N = 37). 'Once a day' gentamicin regimen was associated with fewer failures to attain peak level of at least 5 µg/ml (typical RR 0.22, 95% CI 0.11 to 0.47; typical RD −0.13, 95% CI −0.19 to −0.08; number needed to treat for an additional beneficial outcome (NNTB) = 8; 9 trials; N = 422); and fewer failures to achieve trough levels of 2 µg/ml or less (typical RR 0.38, 95% CI 0.27 to 0.55; typical RD −0.22, 95% CI −0.29 to −0.15; NNTB = 4; 11 trials; N = 503). 'Once a day' gentamicin achieved higher peak levels (MD 2.58, 95% CI 2.26 to 2.89; 10 trials; N = 440) and lower trough levels (MD −0.57, 95% CI −0.69 to −0.44; 10 trials; N = 440) than 'multiple doses a day' regimen. There was no significant difference in ototoxicity between two groups (typical RR 1.69, 95% CI 0.18 to 16.25; typical RD 0.01, 95% CI −0.04 to 0.05; 5 trials; N = 214). Nephrotoxicity was not noted with either of the treatment regimens. Overall, the quality of evidence was considered to be moderate on GRADE analysis, given the small sample size and unclear/high risk of bias in some of the domains in a few of the included studies.\nAuthors' conclusions\nThere is insufficient evidence from the currently available RCTs to conclude whether a 'once a day' or a 'multiple doses a day' regimen of gentamicin is superior in treating proven neonatal sepsis. However, data suggest that pharmacokinetic properties of a 'once a day' gentamicin regimen are superior to a 'multiple doses a day' regimen in that it achieves higher peak levels while avoiding toxic trough levels. There was no change in nephrotoxicity or auditory toxicity. Based on the assessment of pharmacokinetics, a 'once a day regimen' may be superior in treating sepsis in neonates of more than 32 weeks' gestation.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality was evidence was considered as moderate because the sample size was relatively small and two of the studies were scientifically less robust."
}
] |
query-laysum
|
6017
|
[
{
"role": "user",
"content": "Abstract: Background\nSedation reduces patient levels of anxiety and stress, facilitates the delivery of care and ensures safety. Alpha-2 agonists have a range of effects including sedation, analgesia and antianxiety. They sedate, but allow staff to interact with patients and do not suppress respiration. They are attractive alternatives for long-term sedation during mechanical ventilation in critically ill patients.\nObjectives\nTo assess the safety and efficacy of alpha-2 agonists for sedation of more than 24 hours, compared with traditional sedatives, in mechanically-ventilated critically ill patients.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 10, 2014), MEDLINE (1946 to 9 October 2014), EMBASE (1980 to 9 October 2014), CINAHL (1982 to 9 October 2014), Latin American and Caribbean Health Sciences Literature (1982 to 9 October 2014), ISI Web of Science (1987 to 9 October 2014), Chinese Biological Medical Database (1978 to 9 October 2014) and China National Knowledge Infrastructure (1979 to 9 October 2014), the World Health Organization international clinical trials registry platform (to 9 October 2014), Current Controlled Trials metaRegister of controlled trials active registers (to 9 October 2014), the ClinicalTrials.gov database (to 9 October 2014), the conference proceedings citation index (to 9 October 2014) and the reference lists of included studies and previously published meta-analyses and systematic reviews for relevant studies. We imposed no language restriction.\nSelection criteria\nWe included all randomized and quasi-randomized controlled trials comparing alpha-2 agonists (clonidine or dexmedetomidine) versus alternative sedatives for long-term sedation (more than 24 hours) during mechanical ventilation in critically ill patients.\nData collection and analysis\nTwo review authors independently assessed study quality and extracted data. We contacted study authors for additional information. We performed meta-analyses when more than three studies were included, and selected a random-effects model due to expected clinical heterogeneity. We calculated the geometric mean difference for continuous outcomes and the risk ratio for dichotomous outcomes. We described the effects by values and 95% confidence intervals (CIs). We considered two-sided P < 0.05 to be statistically significant.\nMain results\nSeven studies, covering 1624 participants, met the inclusion criteria. All included studies investigated adults and compared dexmedetomidine with traditional sedatives, including propofol, midazolam and lorazepam. Compared with traditional sedatives, dexmedetomidine reduced the geometric mean duration of mechanical ventilation by 22% (95% CI 10% to 33%; four studies, 1120 participants, low quality evidence), and consequently the length of stay in the intensive care unit (ICU) by 14% (95% CI 1% to 24%; five studies, 1223 participants, very low quality evidence). There was no evidence that dexmedetomidine decreased the risk of delirium (RR 0.85; 95% CI 0.63 to 1.14; seven studies, 1624 participants, very low quality evidence) as results were consistent with both no effect and appreciable benefit. Only one study assessed the risk of coma, but lacked methodological reliability (RR 0.69; 95% CI 0.55 to 0.86, very low quality evidence). 
Of all the adverse events included, the most commonly reported one was bradycardia, and we observed a more than twofold (111%) increase in its incidence (RR 2.11; 95% CI 1.39 to 3.20; six studies, 1587 participants, very low quality evidence). Our meta-analysis provided no evidence that dexmedetomidine had any impact on mortality (RR 0.99; 95% CI 0.79 to 1.24; six studies, 1584 participants, very low quality evidence). We observed high levels of heterogeneity in risk of delirium (I² = 70%), but due to the limited number of studies we were unable to determine the source of heterogeneity through subgroup analyses or meta-regression. We judged six of the seven studies to be at high risk of bias.\nAuthors' conclusions\nIn this review, we found no eligible studies for children or for clonidine. Compared with traditional sedatives, long-term sedation using dexmedetomidine in critically ill adults reduced the duration of mechanical ventilation and ICU length of stay. There was no evidence for a beneficial effect on risk of delirium and the heterogeneity was high. The evidence for risk of coma was inadequate. The most common adverse event was bradycardia. No evidence indicated that dexmedetomidine changed mortality. The general quality of evidence ranged from very low to low, due to high risks of bias, serious inconsistency and imprecision, and strongly suspected publication bias. Future studies could pay more attention to children and to using clonidine.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sedation is an important treatment for critically ill patients who need a machine to support breathing, because it reduces anxiety and stress and facilitates the delivery of nursing care. However, some commonly-used sedatives, such as propofol, midazolam and lorazepam, might decrease blood pressure, depress breathing, and delay awakening after a long-term infusion. They may prolong breathing support time and length of stay in hospital. Dexmedetomidine and clonidine sedate but allow staff to interact with patients, and they ease pain but do not depress breathing. Those treated with them could be more easy to awake, and more able to communicate their discomfort and pain. These drugs are therefore attractive alternatives for long-term sedation, and we planned to assess their efficacy and safety for long-term (more than 24 hours) sedation, compared with traditional sedatives."
}
] |
query-laysum
|
6018
|
[
{
"role": "user",
"content": "Abstract: Background\nDrug- and alcohol-related impairment in the workplace has been linked to an increased risk of injury for workers. Randomly testing populations of workers for these substances has become a practice in many jurisdictions, with the intention of reducing the risk of workplace incidents and accidents. Despite the proliferation of random drug and alcohol testing (RDAT), there is currently a lack of consensus about whether it is effective at preventing workplace injury, or improving other non-injury accident outcomes in the work place.\nObjectives\nTo assess the effectiveness of workplace RDAT to prevent injuries and improve non-injury accident outcomes (unplanned events that result in damage or loss of property) in workers compared with no workplace RDAT.\nSearch methods\nWe conducted a systematic literature search to identify eligible published and unpublished studies. The date of the last search was 1 November 2020. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, two other databases, Google Scholar, and three trials registers. We also screened the reference lists of relevant publications known to us.\nSelection criteria\nStudy designs that were eligible for inclusion in our review included randomised controlled trials (RCTs), cluster-randomised trials (CRTs), interrupted time-series (ITS) studies, and controlled before-after (CBA) studies. Studies needed to evaluate the effectiveness of RDAT in preventing workplace injury or improving other non-injury workplace outcomes. We also considered unpublished data from clinical trial registries. We included employees working in all safety-sensitive occupations, except for commercial drivers, who are the subject of another Cochrane Review.\nData collection and analysis\nIndependently, two review authors used a data collection form to extract relevant characteristics from the included study. They then analysed a line graph included in the study of the prevalence rate of alcohol violations per year. Independently, the review authors completed a GRADE assessment, as a means of rating the quality of the evidence.\nMain results\nAlthough our searching originally identified 4198 unique hits, only one study was eligible for inclusion in this review. This was an ITS study that measured the effect of random alcohol testing (RAT) on the test positivity rate of employees of major airlines in the USA from 1995 to 2002. The study included data from 511,745 random alcohol tests, and reported no information about testing for other substances. The rate of positive results was the only outcome of interest reported by the study.\nThe average rate of positive results found by RAT increased from 0.07% to 0.11% when the minimum percentage of workers who underwent RAT annually was reduced from 25% to 10%. Our analyses found this change to be a statistically significant increase (estimated change in level, where the level reflects the average percentage points of positive tests = 0.040, 95% confidence interval 0.005 to 0.075; P = 0.031). Our GRADE assessment, for the observed effect of lower minimum testing percentages associating with a higher rate of positive test results, found the quality of the evidence to be 'very low' across the five GRADE domains. 
The one included study did not address the following outcomes of interest: fatal injuries; non-fatal injuries; non-injury accidents; absenteeism; and adverse effects associated with RDAT.\nAuthors' conclusions\nIn the aviation industry in the USA, the only setting for which the eligible study reported data, there was a statistically significant increase in the rate of positive RAT results following a reduction in the percentage of workers tested, which we deem to be clinically relevant. This result suggests an inverse relationship between the proportion of positive test results and the rate of testing, which is consistent with a deterrent effect for testing. No data were reported on adverse effects related to RDAT.\nWe could not draw definitive conclusions regarding the effectiveness of RDAT for employees in safety-sensitive occupations (not including commercial driving), or with safety-sensitive job functions. We identified only one eligible study that reflected one industry in one country, was of non-randomised design, and tested only for alcohol, not for drugs or other substances. Our GRADE assessment resulted in a 'very low' rating for the quality of the evidence on the only outcome reported. The paucity of eligible research was a major limitation in our review, and additional studies evaluating the effect of RDAT on safety outcomes are needed.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to 1 November 2020."
}
] |
query-laysum
|
6019
|
[
{
"role": "user",
"content": "Abstract: Background\nReduced vitamin A concentration increases the risk of blindness in children infected with the measles virus. Promoting vitamin A supplementation in children with measles contributes to the control of blindness in children, which is a high priority within the World Health Organization (WHO) VISION 2020 The Right to Sight Program.\nObjectives\nTo assess the efficacy of vitamin A in preventing blindness in children with measles without prior clinical features of vitamin A deficiency.\nSearch methods\nWe searched CENTRAL 2015, Issue 11, MEDLINE (1950 to December week 3, 2015), Embase (1974 to December 2015) and LILACS (1985 to December 2015).\nSelection criteria\nRandomised controlled trials (RCTs) assessing the efficacy of vitamin A in preventing blindness in well-nourished children diagnosed with measles but with no prior clinical features of vitamin A deficiency.\nData collection and analysis\nFor the original review, two review authors independently assessed studies for eligibility and extracted data on reported outcomes. We contacted trial authors of the included studies for additional information on unpublished data. We included two RCTs which were clinically heterogenous. We presented the continuous outcomes reported as the mean difference (MD) with 95% confidence interval (CI) and dichotomous outcomes as risk ratio (RR) with 95% CI. Due to marked clinical heterogeneity we considered it inappropriate to perform a meta-analysis.\nMain results\nFor the first publication of this review, two RCTs involving 260 children with measles which compared vitamin A with placebo met the inclusion criteria. Neither study reported blindness or other ocular morbidities as end points. One trial of moderate quality suggested evidence of a significant increase in serum retinol levels in the vitamin A group one week after two doses of vitamin A (MD 9.45 µg/dL, 95% CI 2.19 to 16.71; 17 participants, moderate-quality evidence), but not six weeks after three doses of vitamin A (MD 2.56 µg/dL, 95% CI -5.28 to 10.40; 39 participants, moderate-quality evidence). There was no significant difference in weight gain six weeks (MD 0.39 kg, -0.04 to 0.82; 48 participants, moderate-quality evidence) and six months (MD 0.52 kg, 95% CI -0.08 to 1.12; 36 participants, moderate-quality evidence) after three doses of vitamin A.\nThe second trial found no significant difference in serum retinol levels two weeks after a single dose of vitamin A (MD 2.67 µg/dL, 95% CI -0.29 to 5.63; 155 participants, moderate-quality evidence). Percentage of undernutrition between the two groups did not differ significantly at one week (RR 0.93, 95% CI 0.56 to 1.54, 145 participants) and two weeks (RR 0.82, 95% CI 0.52 to 1.29, 147 participants) after a single dose of vitamin A. No adverse event was reported in either study. We did not find any new RCTS for this second update.\nAuthors' conclusions\nWe did not find any trials assessing whether or not vitamin A supplementation in children with measles prevents blindness, as neither study reported blindness or other ocular morbidities as end points.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence and methodology of both studies was moderate. The sample size of the included studies was relatively small, which could affect the accuracy of the results."
}
] |
query-laysum
|
6020
|
[
{
"role": "user",
"content": "Abstract: Background\nDevelopments in ultrasound assessment of pregnancy has resulted in the increasing diagnosis of antenatal fetal issues. Many structural fetal conditions as well as complications associated with multiple pregnancies have the potential for in-utero treatment to improve both pregnancy and neonatal outcomes. Procedures such as laser ablation for twin-twin syndrome or cord occlusion for selective fetal termination require fetal immobilisation. Immobilisation of the fetus can occur through administration of medication to the mother or directly to the fetus. This improves procedural success and reduces the ongoing risk to the pregnancy. Evidence regarding the best medication and mode of delivery helps to ensure the optimal decision is made for both the mother and the fetus.\nObjectives\nTo assess the effects of perioperative pharmacological interventions for fetal immobilisation during fetal surgery and invasive procedures on fetal, neonatal, and maternal outcomes.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (10 May 2021), and reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs (including published abstracts) which compared different classes of medication administered to the mother or fetus to allow in-utero procedures to be performed. We also included cluster-randomised trials but excluded cross-over trials.\nData collection and analysis\nWe used the standard Cochrane Pregnancy and Childbirth methods for data collection and analysis. Two review authors independently assessed trials for inclusion and risk of bias, extracted data, and checked them for accuracy.\nMain results\nOne study with three trial reports met the inclusion criteria. This involved 54 women with a multiple pregnancy. The study was conducted in a tertiary European hospital maternal-fetal medicine unit and compared remifentanil to diazepam for fetal immobilisation and maternal sedation during fetoscopic surgery.\nLow-certainty evidence suggested that remifentanil may reduce fetal movement more than diazepam for two outcomes of fetal movement, one of fetal immobilisation at 40 minutes using a visual analogue score (VAS) (where 0 = immobile and 100 = baseline mobility), and one of gross body and limb movements (score was absolute number of movements), both assessed by a sonographer evaluating a taped ultrasound sequence (mean difference (MD) -65.00, 95% confidence interval (CI) -69.38 to -60.62 and MD -10.00, 95% CI -11.62 to -8.38; 1 study, 50 women).\nSurgeons may also report being more satisfied with the procedure when using remifentanil rather than diazepam (risk ratio (RR) 2.88, 95% CI 1.60 to 5.15; 1 study, 50 women; low-certainty evidence). However, maternal respiratory rate may decrease more during the surgical procedure with remifentanil compared with diazepam (MD -6.00, 95% CI -8.29 to -3.71; 1 study, 50 women; low-certainty evidence). Maternal sedation may also be worse with remifentanil compared with diazepam (RR 0.09, 95% CI 0.01 to 0.65; 1 study, 50 women; low-certainty evidence) measured using an observer assessment of alertness/sedation (where a score of < 4 equates to profound sedation and > 4 equates to insufficient sedation). 
Perinatal mortality and time taken to perform the procedure were not reported in the trial.\nWe prespecified 20 outcomes and planned to use GRADE for 6 of them, all other outcomes were not able to be reported against for the purpose of meta-analysis due to data not being provided or unable to be interpreted.\nWe assessed the included study at low risk of selection bias (appropriate random sequence generation and allocation concealment), performance bias (blinding of participants and personnel), detection bias (outcome assessors were blinded), attrition bias (incomplete outcome data minimal), and reporting bias.\nOur GRADE assessment for certainty of the evidence indicates that there is low certainty of the evidence.\nAuthors' conclusions\nWe were only able to include one study with a small number of women, from a single centre, a European tertiary hospital. This study was published in 2005 with an abstract of this trial published in 2004. This study evaluated two intravenous medications administered to the mother - remifentanil and diazepam. This study reported our prespecified primary outcome but only evaluated several of our secondary outcomes, which limited further assessment. Low-certainty evidence suggested that remifentanil may be better at reducing fetal movements and surgeons were more satisfied with the procedure. However, maternal sedation and depression of breathing may be worse with remifentanil.\nFurther high-quality RCTs assessing both fetal and maternal medications are required to evaluate their efficacy for fetal immobilisation as well as safety for both mother and fetus.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Low-certainty evidence from one small trial means that we cannot be confident in the results. The information available on maternal, fetal, and newborn outcomes was limited. More high-quality trials are needed to increase our certainty about the longer term fetal and newborn impact of the medication used for fetal immobilisation for in-utero procedures, as well as other medications which may be suitable for use.\n"
}
] |
query-laysum
|
6021
|
[
{
"role": "user",
"content": "Abstract: Background\nFibromyalgia (FM) is a clinically well-defined chronic condition of unknown aetiology characterized by chronic widespread pain that often co-exists with sleep disturbances, cognitive dysfunction and fatigue. Patients often report high disability levels and negative mood. Psychotherapies focus on reducing key symptoms, improving daily functioning, mood and sense of personal control over pain.\nObjectives\nTo assess the benefits and harms of cognitive behavioural therapies (CBTs) for treating FM at end of treatment and at long-term (at least six months) follow-up.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 8), MEDLINE (1966 to 28 August 2013), PsycINFO (1966 to 28 August 2013) and SCOPUS (1980 to 28 August 2013). We searched http://www.clinicaltrials.gov (web site of the US National Institutes of Health) and the World Health Organization Clinical Trials Registry Platform (ICTRP) (http://www.who.int/ictrp/en/) for ongoing trials (last search 28 August,2013), and the reference lists of reviewed articles.\nSelection criteria\nWe selected randomised controlled trials of CBTs with children, adolescents and adults diagnosed with FM.\nData collection and analysis\nThe data of all included studies were extracted and the risks of bias of the studies were assessed independently by two review authors. Discrepancies were resolved by discussion.\nMain results\nTwenty-three studies with 24 study arms with CBTs were included. A total of 2031 patients were included; 1073 patients in CBT groups and 958 patients in control groups. Only two studies were without any risk of bias. The GRADE quality of evidence of the studies was low. CBTs were superior to controls in reducing pain at end of treatment by 0.5 points on a scale of 0 to 10 (standardised mean difference (SMD) - 0.29; 95% confidence interval (CI) -0.49 to -0.17) and by 0.6 points at long-term follow-up (median 6 months) (SMD -0.40; 95% CI -0.62 to -0.17); in reducing negative mood at end of treatment by 0.7 points on a scale of 0 to 10 (SMD - 0.33; 95% CI -0.49 to -0.17) and by 1.3 points at long-term follow-up (median 6 months) (SMD -0.43; 95% CI -0.75 to -0.11); and in reducing disability at end of treatment by 0.7 points on a scale of 0 to 10 (SMD - 0.30; 95% CI -0.51 to -0.08) and at long-term follow-up (median 6 months) by 1.2 points (SMD -0.52; 95% CI -0.86 to -0.18). There was no statistically significant difference in dropout rates for any reasons between CBTs and controls (risk ratio (RR) 0.94; 95% CI 0.65 to 1.35).\nAuthors' conclusions\nCBTs provided a small incremental benefit over control interventions in reducing pain, negative mood and disability at the end of treatment and at long-term follow-up. The dropout rates due to any reason did not differ between CBTs and controls.\n\nGiven the provided abstract, please respond to the following query: \"Pain\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "(higher scores mean worse or more severe pain):\n- People who used CBTs rated their pain to be 0.5 points lower at the end of treatment (6.3% absolute improvement) and to be 0.6 points lower six months after the end of treatment on a scale of 0 to 10 (4.2% absolute improvement).\n- People who used CBTs rates their pain to be 6.9 points on a scale of 0-10.\n- People who used a control treatment rated their pain to be 7.4 points on a scale of 0 to 10."
}
] |
query-laysum
|
6022
|
[
{
"role": "user",
"content": "Abstract: Background\nUterine leiomyomas, also referred to as myomas or fibroids, are benign tumours arising from the smooth muscle cells of the myometrium. They are the most common pelvic tumour in women. The estimated rate of leiomyosarcoma, found during surgery for presumed benign leiomyomas, is about 0.51 per 1000 procedures, or approximately 1 in 2000.\nTreatment options for symptomatic uterine leiomyomas include medical, surgical, and radiologically-guided interventions. Laparoscopic myomectomy is the gold standard surgical approach for women who want offspring, or otherwise wish to retain their uterus. A limitation of laparoscopy is the inability to remove large specimens from the abdominal cavity through the laparoscope. To overcome this challenge, the morcellation approach was developed, during which larger specimens are broken into smaller pieces in order to remove them from the abdominal cavity via the port site. However, intracorporeal power morcellation may lead to scattering of benign tissues, with the risk of spreading leiomyoma or endometriosis. In cases of unsuspected malignancy, power morcellation can cause unintentional dissemination of malignant cells, and lead to a poorer prognosis by upstaging the occult cancer.\nA strategy to optimise women's safety is to morcellate the specimens inside a bag. In-bag morcellation may avoid the dissemination of tissue fragments.\nObjectives\nTo evaluate the effectiveness and safety of protected in-bag extracorporeal manual morcellation during laparoscopic myomectomy compared to intra-abdominal uncontained power morcellation.\nSearch methods\nOn 1 July 2019, we searched; the Cochrane Gynaecology and Fertility Group Specialized Register of Controlled Trials, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, LILACS, PubMed, Google Scholar, and two trials registers.\nWe reviewed the reference lists of all retrieved full-text articles, and contacted experts in the field for additional and ongoing trials.\nSelection criteria\nWe included all randomised controlled trials comparing in-bag extracorporeal manual morcellation versus intracorporeal uncontained power morcellation during laparoscopic myomectomy in premenopausal women.\nData collection and analysis\nWe followed standard Cochrane methods.\nTwo review authors independently reviewed the eligibility of trials, extracted data, and evaluated the risk of bias. Data were checked for accuracy.\nThe summary measures were reported as risk ratios (RR) or mean differences (MD) with 95% confidence interval (CI).\nThe outcomes of interest were a composite of intraoperative and postoperative complications, operative times, ease of morcellation, length of hospital stay, postoperative pain, conversion to laparotomy, and postoperative diagnosis of leiomyosarcoma. Results for the five main outcomes follow.\nMain results\nWe included two trials, enrolling 176 premenopausal women with fibroids, who underwent laparoscopic myomectomy.\nThe experimental group received in-bag manual morcellation, during which each enucleated myoma was placed into a specimen retrieval bag, and manually morcellated with scalpel or scissors. 
In the control group, intracorporeal uncontained power morcellation was used to reduce the size of the myomas.\nNo intraoperative complications, including accidental morcellation of the liver, conversion to laparotomy, endoscopic bag disruption, bowel injury, bleeding, accidental injury to any viscus or vessel, were reported in either group in either trial.\nWe found very low-quality evidence of inconclusive results for total operative time (MD 9.93 minutes, 95% CI -1.35 to 21.20; 2 studies, 176 participants; I² = 35%), and ease of morcellation (MD -0.73 points, 95% CI -1.64 to 0.18; 1 study, 104 participants). The morcellation operative time was a little longer for the in-bag manual morcellation group, however the quality of the evidence was very low (MD 2.59 minutes, 95% CI 0.45 to 4.72; 2 studies, 176 participants; I² = 0%). There were no postoperative diagnoses of leiomyosarcoma made in either group in either trial.\nWe are very uncertain of any of these results. We downgraded the quality of the evidence due to indirectness and imprecision, because of limited sites in high-income settings and countries, small sample sizes, wide confidence intervals, and few events.\nAuthors' conclusions\nThere are limited data on the effectiveness and safety of in-bag morcellation at the time of laparoscopic myomectomy compared to uncontained power morcellation. We were unable to determine the effects of in-bag morcellation on intraoperative complications as no events were reported in either group. We are uncertain if in-bag morcellation improves total operative time or ease of morcellation compared to control. Regarding morcellation operative time, the quality of the evidence was also very low and we cannot be certain of the effect of in-bag morcellation compared to uncontained morcellation. No cases of postoperative diagnosis of leiomyosarcoma occurred in either group. We found only two trials comparing in-bag extracorporeal manual morcellation to intracorporeal uncontained power morcellation at the time of laparoscopic myomectomy. Both trials had morcellation operative time as primary outcome and were not powered for uncommon outcomes such as intraoperative complications, and postoperative diagnosis of leiomyosarcoma.\nLarge, well-planned and executed trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Myomectomy is a surgical procedure to remove uterine fibroids — also called leiomyomas. These common, non-cancerous growths usually develop in the uterus during childbearing years, but they can occur at any age. In laparoscopic myomectomy, which is a minimally invasive procedure, the surgeon accesses and removes the fibroids through several small abdominal incisions.\nLaparoscopic myomectomy is the gold standard for women with uterine leiomyomas, but it is difficult to remove large fibroids through the small incisions and tubes (laparoscopes). To overcome this challenge, surgeons developed a technique to break the large fibroids into smaller pieces (morcellation).\nHowever, power morcellation (using a drill-like instrument to cut up the fibroids) inside the abdomen may lead to the scattering of the tissues. These tissues may be benign (such as fibroids or endometriosis), but they may also be tissues from an undiagnosed cancer. A strategy to optimise women's safety is to cut up larger fibroids with a scalpel or scissors, inside a bag."
}
] |
query-laysum
|
6023
|
[
{
"role": "user",
"content": "Abstract: Background\nOn the American continent, cutaneous and mucocutaneous leishmaniasis (CL and MCL) are diseases associated with infection by several species of Leishmania parasites. Pentavalent antimonials remain the first-choice treatment. There are alternative interventions, but reviewing their effectiveness and safety is important as availability is limited. This is an update of a Cochrane Review first published in 2009.\nObjectives\nTo assess the effects of interventions for all immuno-competent people who have American cutaneous and mucocutaneous leishmaniasis (ACML).\nSearch methods\nWe updated our database searches of the Cochrane Skin Group Specialised Register, CENTRAL, MEDLINE, Embase, LILACS and CINAHL to August 2019. We searched five trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) assessing either single or combination treatments for ACML in immuno-competent people, diagnosed by clinical presentation and Leishmania infection confirmed by smear, culture, histology, or polymerase chain reaction on a biopsy specimen. The comparators were either no treatment, placebo only, or another active compound.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Our key outcomes were the percentage of participants 'cured' at least three months after the end of treatment, adverse effects, and recurrence. We used GRADE to assess evidence certainty for each outcome.\nMain results\nWe included 75 studies (37 were new), totalling 6533 randomised participants with ATL. The studies were mainly conducted in Central and South America at regional hospitals, local healthcare clinics, and research centres. More male participants were included (mean age: roughly 28.9 years (SD: 7.0)). The most common confirmed species were L. braziliensis, L. panamensis, and L. mexicana. The most assessed interventions and comparators were non-antimonial systemics (particularly oral miltefosine) and antimonials (particularly meglumine antimoniate (MA), which was also a common intervention), respectively.\nThree studies included moderate-to-severe cases of mucosal leishmaniasis but none included cases with diffuse cutaneous or disseminated CL, considered the severe cutaneous form. Lesions were mainly ulcerative and located in the extremities and limbs. The follow-up (FU) period ranged from 28 days to 7 years. All studies had high or unclear risk of bias in at least one domain (especially performance bias). None of the studies reported the degree of functional or aesthetic impairment, scarring, or quality of life.\nCompared to placebo, at one-year FU, intramuscular (IM) MA given for 20 days to treat L. braziliensis and L. panamensis infections in ACML may increase the likelihood of complete cure (risk ratio (RR) 4.23, 95% confidence interval (CI) 0.84 to 21.38; 2 RCTs, 157 participants; moderate-certainty evidence), but may also make little to no difference, since the 95% CI includes the possibility of both increased and reduced healing (cure rates), and IMMA probably increases severe adverse effects such as myalgias and arthralgias (RR 1.51, 95% CI 1.17 to 1.96; 1 RCT, 134 participants; moderate-certainty evidence). IMMA may make little to no difference to the recurrence risk, but the 95% CI includes the possibility of both increased and reduced risk (RR 1.79, 95% CI 0.17 to 19.26; 1 RCT, 127 participants; low-certainty evidence).\nCompared to placebo, at six-month FU, oral miltefosine given for 28 days to treat L. mexicana, L. 
panamensis and L. braziliensis infections in American cutaneous leishmaniasis (ACL) probably improves the likelihood of complete cure (RR 2.25, 95% CI 1.42 to 3.38), and probably increases nausea rates (RR 3.96, 95% CI 1.49 to 10.48) and vomiting (RR 6.92, 95% CI 2.68 to 17.86) (moderate-certainty evidence). Oral miltefosine may make little to no difference to the recurrence risk (RR 2.97, 95% CI 0.37 to 23.89; low-certainty evidence), but the 95% CI includes the possibility of both increased and reduced risk (all based on 1 RCT, 133 participants).\nCompared to IMMA, at 6 to 12 months FU, oral miltefosine given for 28 days to treat L. braziliensis, L. panamensis, L. guyanensis and L. amazonensis infections in ACML may make little to no difference to the likelihood of complete cure (RR 1.05, 95% CI 0.90 to 1.23; 7 RCTs, 676 participants; low-certainty evidence). Based on moderate-certainty evidence (3 RCTs, 464 participants), miltefosine probably increases nausea rates (RR 2.45, 95% CI 1.72 to 3.49) and vomiting (RR 4.76, 95% CI 1.82 to 12.46) compared to IMMA. Recurrence risk was not reported.\nFor the rest of the key comparisons, recurrence risk was not reported, and risk of adverse events could not be estimated.\nCompared to IMMA, at 6 to 12 months FU, oral azithromycin given for 20 to 28 days to treat L. braziliensis infections in ACML probably reduces the likelihood of complete cure (RR 0.51, 95% CI 0.34 to 0.76; 2 RCTs, 93 participants; moderate-certainty evidence).\nCompared to intravenous MA (IVMA) and placebo, at 12 month FU, adding topical imiquimod to IVMA, given for 20 days to treat L. braziliensis, L. guyanensis and L. peruviana infections in ACL probably makes little to no difference to the likelihood of complete cure (RR 1.30, 95% CI 0.95 to 1.80; 1 RCT, 80 participants; moderate-certainty evidence).\nCompared to MA, at 6 months FU, one session of local thermotherapy to treat L. panamensis and L. braziliensis infections in ACL reduces the likelihood of complete cure (RR 0.80, 95% CI 0.68 to 0.95; 1 RCT, 292 participants; high-certainty evidence).\nCompared to IMMA and placebo, at 26 weeks FU, adding oral pentoxifylline to IMMA to treat CL (species not stated) probably makes little to no difference to the likelihood of complete cure (RR 0.86, 95% CI 0.63 to 1.18; 1 RCT, 70 participants; moderate-certainty evidence).\nAuthors' conclusions\nEvidence certainty was mostly moderate or low, due to methodological shortcomings, which precluded conclusive results. Overall, both IMMA and oral miltefosine probably result in an increase in cure rates, and nausea and vomiting are probably more common with miltefosine than with IMMA.\nFuture trials should investigate interventions for mucosal leishmaniasis and evaluate recurrence rates of cutaneous leishmaniasis and its progression to mucosal disease.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 75 studies on 6533 people (approximately 75% male; average age: 29 years).\n- One study investigated children (2 to 12 years).\n- Most studies (67) involved people with cutaneous leishmaniasis.\n- Eight studies investigated people with mucosal or mucocutaneous leishmaniasis.\n- The parasite Leishmania braziliensis caused the disease in 52 studies.\n- Studies were conducted at regional hospitals, local clinics, and research centres.\n- Studies lasted between 28 days and seven years.\n- Most studies reported their funding source: the US army funded eight studies, industry funded 10, and institutional grants funded 33 (five of these also reported industry funding).\nTreatments were mainly compared to a placebo (fake treatment) or meglumine antimoniate (an antimonial).\nHere we report this review's main results. We were only able to report the risk of recurrence and side effects for the comparisons of meglumine antimoniate (MA) or miltefosine versus placebo and miltefosine versus MA."
}
] |
query-laysum
|
6024
|
[
{
"role": "user",
"content": "Abstract: Background\nPain management is a high priority for patients with rheumatoid arthritis (RA). Despite deficiencies in research data, neuromodulators have gained widespread clinical acceptance as adjuvants in the management of patients with chronic musculoskeletal pain.\nObjectives\nThe aim of this review was to determine the efficacy and safety of neuromodulators in pain management in patients with RA. Neuromodulators included in this review were anticonvulsants (gabapentin, pregabalin, phenytoin, sodium valproate, lamotrigine, carbamazepine, levetiracetam, oxcarbazepine, tiagabine and topiramate), ketamine, bupropion, methylphenidate, nefopam, capsaicin and the cannabinoids.\nSearch methods\nWe performed a computer-assisted search of the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, 4th quarter), MEDLINE (1950 to week 1 November 2010), EMBASE (Week 44, 2010) and PsycINFO (1806 to week 2 November 2010). We also searched the 2008 and 2009 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) conference abstracts and performed a handsearch of reference lists of articles.\nSelection criteria\nWe included randomised controlled trials which compared any neuromodulator to another therapy (active or placebo, including non-pharmacological therapies) in adult patients with RA that had at least one clinically relevant outcome measure.\nData collection and analysis\nTwo blinded review authors independently extracted data and assessed the risk of bias in the trials. Meta-analyses were used to examine the efficacy of a neuromodulator on pain, depression and function as well as their safety.\nMain results\nFour trials with high risk of bias were included in this review. Two trials evaluated oral nefopam (52 participants) and one trial each evaluated topical capsaicin (31 participants) and oromucosal cannabis (58 participants).\nThe pooled analyses identified a significant reduction in pain levels favouring nefopam over placebo (weighted mean difference (WMD) -21.16, 95% CI -35.61 to -6.71; number needed to treat (NNT) 2, 95% CI 1.4 to 9.5) after two weeks. There were insufficient data to assess withdrawals due to adverse events. Nefopam was associated with significantly more adverse events (RR 4.11, 95% CI 1.58 to 10.69; NNTH 9, 95% CI 2 to 367), which were predominantly nausea and sweating.\nIn a mixed population trial, qualitative analysis of patients with RA showed a significantly greater reduction in pain favouring topical capsaicin over placebo at one and two weeks (MD -23.80, 95% CI -44.81 to -2.79; NNT 3, 95% CI 2 to 47; MD -34.40, 95% CI -54.66 to -14.14; NNT 2, 95% CI 1.4 to 6 respectively). No separate safety data were available for patients with RA, however 44% of patients developed burning at the site of application and 2% withdrew because of this.\nOne small, low quality trial assessed oromucosal cannabis against placebo and found a small, significant difference favouring cannabis in the verbal rating score 'pain at present' (MD -0.72, 95% CI -1.31 to -0.13) after five weeks. Patients receiving cannabis were significantly more likely to suffer an adverse event (risk ratio (RR) 1.82, 95% CI 1.10 to 3.00; NNTH 3, 95% CI 3 to 13). These were most commonly dizziness (26%), dry mouth (13%) and light headedness (10%).\nAuthors' conclusions\nThere is currently weak evidence that oral nefopam, topical capsaicin and oromucosal cannabis are all superior to placebo in reducing pain in patients with RA. 
However, each agent is associated with a significant side effect profile. The confidence in our estimates is not strong given the difficulties with blinding, the small numbers of participants evaluated and the lack of adverse event data. In some patients, however, even a small degree of pain relief may be considered worthwhile. Until further research is available, given the relatively mild nature of the adverse events, capsaicin could be considered as an add-on therapy for patients with persistent local pain and inadequate response or intolerance to other treatments. Oral nefopam and oromucosal cannabis have more significant side effect profiles, however, and the potential harms seem to outweigh any modest benefit achieved.\n\nGiven the provided abstract, please respond to the following query: \"The review shows that in people with rheumatoid arthritis\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- Nefopam, topical capsaicin and oromucosal cannabis may improve pain levels\n- Oromucosal cannabis may slightly improve sleep\n- No trials were found that evaluated whether neuromodulators affect functional status, quality of life, withdrawals due to inadequate analgesia or depression\nWe also do not have precise information about serious side effects and complications. Possible side effects of nefopam that were found in the trials include nausea, dry mouth, sweating and feeling tired. However, rare complications have also been reported and include convulsions and cardiac arrhythmias.\nCommon side effects of topical capsaicin therapy include local skin irritation and burning. More serious allergic reactions are rare but have been reported.\nPossible side effects of oromucosal cannabis found in this review were dizziness, fatigue and loss of balance. Readers should be aware that, although not seen in our review, rarer complications including psychosis and suicidal thoughts have also been reported."
}
] |
query-laysum
|
6025
|
[
{
"role": "user",
"content": "Abstract: Background\nStroke is one of the leading causes of long-lasting disability and mortality and its global burden has increased in the past two decades. Several therapies have been proposed for the recovery from, and treatment of, ischemic stroke. One of them is citicoline. This review assessed the benefits and harms of citicoline for treating patients with acute ischemic stroke.\nObjectives\nTo assess the clinical benefits and harms of citicoline compared with placebo or any other control for treating people with acute ischemic stroke.\nSearch methods\nWe searched in the Cochrane Stroke Group Trials Register, CENTRAL, MEDLINE Ovid, Embase Ovid, LILACS until 29 January 2020. We searched the World Health Organization Clinical Trials Search Portal and ClinicalTrials.gov. Additionally, we also reviewed reference lists of the retrieved publications and review articles, and searched the websites of the US Food and Drug Administration (FDA) and European Medicines Agency (EMA).\nSelection criteria\nWe included randomized controlled trials (RCTs) in any setting including participants with acute ischemic stroke. Trials were eligible for inclusion if they compared citicoline versus placebo or no intervention.\nData collection and analysis\nWe selected RCTs, assessed the risk of bias in seven domains, and extracted data by duplicate. Our primary outcomes of interest were all-cause mortality and the degree of disability or dependence in daily activities at 90 days. We estimated risk ratios (RRs) for dichotomous outcomes. We measured statistical heterogeneity using the I² statistic. We conducted our analyses using the fixed-effect and random-effects model meta-analyses. We assessed the overall quality of evidence for six pre-specified outcomes using the GRADE approach.\nMain results\nWe identified 10 RCTs including 4281 participants. In all these trials, citicoline was given either orally, intravenously, or a combination of both compared with placebo or standard care therapy. Citicoline doses ranged between 500 mg and 2000 mg per day. We assessed all the included trials as having high risk of bias. Drug companies sponsored six trials.\nA pooled analysis of eight trials indicates there may be little or no difference in all-cause mortality comparing citicoline with placebo (17.3% versus 18.5%; RR 0.94, 95% CI 0.83 to 1.07; I² = 0%; low-quality evidence due to risk of bias). Four trials showed that citicoline may not increase the proportion of patients with a moderate or lower degree of disability or dependence compared with placebo, according to the Rankin Scale (21.72% versus 19.23%; RR 1.11, 95% CI 0.97 to 1.26; I² = 1%; low-quality evidence due to risk of bias).\nMeta-analysis of three trials indicates there may be little or no difference in serious cardiovascular adverse events comparing citicoline with placebo (8.83% versus 7.77%; RR 1.04, 95% CI 0.84 to 1.29; I² = 0%; low-quality evidence due to risk of bias). Overall, either serious or non-serious adverse events – central nervous system, gastrointestinal, musculoskeletal, etc. – were poorly reported and harms may have been underestimated.\nFour trials suggested that citicoline results in no difference in functional recovery, according to the Barthel Index, compared with placebo (32.78% versus 30.70%; RR 1.03, 95% CI 0.94 to 1.13; I² = 24%; low-quality evidence due to risk of bias). 
Citicoline may not increase the proportion of patients with a minor impairment (according to ≤ 1 scores in the National Institutes of Health Stroke Scale) (5 trials, 24.31% versus 22.44%; RR 1.08, 95% CI 0.96 to 1.21; I² = 27%, low-quality evidence due to risk of bias). None of the included trials reported data on quality of life.\nA pre-planned Trial Sequential Analysis suggested that no more trials may be needed for the primary outcomes.\nAuthors' conclusions\nThis review assessed the clinical benefits and harms of citicoline compared with placebo or any other standard treatment for people with acute ischemic stroke. The findings of the review suggest there may be little to no difference between citicoline and its controls regarding all-cause mortality, disability or dependence in daily activities, functional recovery, neurological function and severe adverse events, based on low-certainty evidence. None of the included trials assessed quality of life and the safety profile of citicoline remains unknown. The available evidence is of low quality due to either limitations in the design or execution of the trials.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 10 relevant trials that included 4281 participants; six trials were carried out in multiple centers and two were international. We considered most trials to be at high risk of bias and they included small numbers of participants. This raises the risk of over-estimating benefits and under-estimating harms. Trials tested citicoline given either by mouth or by injection. Citicoline did not seem to influence death or disability in daily activities, severe side-effects, functional recovery, or neurological recovery."
}
] |
query-laysum
|
6026
|
[
{
"role": "user",
"content": "Abstract: Background\nIn epidemics of highly infectious diseases, such as Ebola, severe acute respiratory syndrome (SARS), or coronavirus (COVID-19), healthcare workers (HCW) are at much greater risk of infection than the general population, due to their contact with patients' contaminated body fluids. Personal protective equipment (PPE) can reduce the risk by covering exposed body parts. It is unclear which type of PPE protects best, what is the best way to put PPE on (i.e. donning) or to remove PPE (i.e. doffing), and how to train HCWs to use PPE as instructed.\nObjectives\nTo evaluate which type of full-body PPE and which method of donning or doffing PPE have the least risk of contamination or infection for HCW, and which training methods increase compliance with PPE protocols.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and CINAHL to 20 March 2020.\nSelection criteria\nWe included all controlled studies that evaluated the effect of full-body PPE used by HCW exposed to highly infectious diseases, on the risk of infection, contamination, or noncompliance with protocols. We also included studies that compared the effect of various ways of donning or doffing PPE, and the effects of training on the same outcomes.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and assessed the risk of bias in included trials. We conducted random-effects meta-analyses were appropriate.\nMain results\nEarlier versions of this review were published in 2016 and 2019. In this update, we included 24 studies with 2278 participants, of which 14 were randomised controlled trials (RCT), one was a quasi-RCT and nine had a non-randomised design.\nEight studies compared types of PPE. Six studies evaluated adapted PPE. Eight studies compared donning and doffing processes and three studies evaluated types of training. Eighteen studies used simulated exposure with fluorescent markers or harmless microbes. In simulation studies, median contamination rates were 25% for the intervention and 67% for the control groups.\nEvidence for all outcomes is of very low certainty unless otherwise stated because it is based on one or two studies, the indirectness of the evidence in simulation studies and because of risk of bias.\nTypes of PPE\nThe use of a powered, air-purifying respirator with coverall may protect against the risk of contamination better than a N95 mask and gown (risk ratio (RR) 0.27, 95% confidence interval (CI) 0.17 to 0.43) but was more difficult to don (non-compliance: RR 7.5, 95% CI 1.81 to 31.1). In one RCT (59 participants) coveralls were more difficult to doff than isolation gowns (very low-certainty evidence). Gowns may protect better against contamination than aprons (small patches: mean difference (MD) −10.28, 95% CI −14.77 to −5.79). PPE made of more breathable material may lead to a similar number of spots on the trunk (MD 1.60, 95% CI −0.15 to 3.35) compared to more water-repellent material but may have greater user satisfaction (MD −0.46, 95% CI −0.84 to −0.08, scale of 1 to 5). 
According to three studies that tested more recently introduced full-body PPE ensembles, there may be no difference in contamination.\nModified PPE versus standard PPE\nThe following modifications to PPE design may lead to less contamination compared to standard PPE: sealed gown and glove combination (RR 0.27, 95% CI 0.09 to 0.78), a better fitting gown around the neck, wrists and hands (RR 0.08, 95% CI 0.01 to 0.55), a better cover of the gown-wrist interface (RR 0.45, 95% CI 0.26 to 0.78, low-certainty evidence), added tabs to grab to facilitate doffing of masks (RR 0.33, 95% CI 0.14 to 0.80) or gloves (RR 0.22, 95% CI 0.15 to 0.31).\nDonning and doffing\nUsing Centers for Disease Control and Prevention (CDC) recommendations for doffing may lead to less contamination compared to no guidance (small patches: MD −5.44, 95% CI −7.43 to −3.45). One-step removal of gloves and gown may lead to less bacterial contamination (RR 0.20, 95% CI 0.05 to 0.77) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28) than separate removal. Double-gloving may lead to less viral or bacterial contamination compared to single gloving (RR 0.34, 95% CI 0.17 to 0.66) but not to less fluorescent contamination (RR 0.98, 95% CI 0.75 to 1.28). Additional spoken instruction may lead to fewer errors in doffing (MD −0.9, 95% CI −1.4 to −0.4) and to fewer contamination spots (MD −5, 95% CI −8.08 to −1.92). Extra sanitation of gloves before doffing with quaternary ammonium or bleach may decrease contamination, but not alcohol-based hand rub.\nTraining\nThe use of additional computer simulation may lead to fewer errors in doffing (MD −1.2, 95% CI −1.6 to −0.7). A video lecture on donning PPE may lead to better skills scores (MD 30.70, 95% CI 20.14 to 41.26) than a traditional lecture. Face-to-face instruction may reduce noncompliance with doffing guidance more (odds ratio 0.45, 95% CI 0.21 to 0.98) than providing folders or videos only.\nAuthors' conclusions\nWe found low- to very low-certainty evidence that covering more parts of the body leads to better protection but usually comes at the cost of more difficult donning or doffing and less user comfort. More breathable types of PPE may lead to similar contamination but may have greater user satisfaction. Modifications to PPE design, such as tabs to grab, may decrease the risk of contamination. For donning and doffing procedures, following CDC doffing guidance, a one-step glove and gown removal, double-gloving, spoken instructions during doffing, and using glove disinfection may reduce contamination and increase compliance. Face-to-face training in PPE use may reduce errors more than folder-based training.\nWe still need RCTs of training with long-term follow-up. We need simulation studies with more participants to find out which combinations of PPE and which doffing procedure protects best. Consensus on simulation of exposure and assessment of outcome is urgently needed. We also need more real-life evidence. Therefore, the use of PPE of HCW exposed to highly infectious diseases should be registered and the HCW should be prospectively followed for their risk of infection.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 24 relevant studies with 2278 participants that evaluated types of PPE, modified PPE, procedures for putting on and removing PPE, and types of training. Eighteen of the studies did not assess healthcare workers who were treating infected patients but simulated the effect of exposure to infection using fluorescent markers or harmless viruses or bacteria. Most of the studies were small, and only one or two studies addressed each of our questions."
}
] |
query-laysum
|
6027
|
[
{
"role": "user",
"content": "Abstract: Background\nPreterm infants have fewer nutrient reserves at birth than full-term infants and often receive artificial formula feeds in the absence of expressed breast milk. Although it is generally agreed that feeding must be initiated slowly and advanced with much greater deliberation than in a healthy, full-term infant, the way in which feeds are introduced and advanced in preterm infants varies widely. This review focuses on whether dilute or full-strength formula is the preferable mode of introducing feeds in preterm infants for whom expressed breast milk is unavailable.\nObjectives\nTo assess the effects of dilute versus full-strength formula on the incidence of necrotising enterocolitis, feeding intolerance, weight gain, length of stay in hospital and time to achieve full calorie intake in exclusively formula-fed preterm or low birth weight infants. A secondary objective was to assess the effects of different dilution strategies.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to update the search in the Cochrane Central Register of Controlled Trials (CENTRAL 2018, Issue 9), MEDLINE via PubMed (1966 to 1 October 2018), Embase (1980 to 1 October 2018), and CINAHL (1982 to 1 October 2018).\nWe searched clinical trials' registries for ongoing or recently completed trials (clinicaltrials.gov; the World Health Organization’s International Trials Registry and Platform; and the ISRCTN Registry).\nSelection criteria\nRandomised or quasi-randomised trials comparing strengths of formula milk in exclusively formula-fed preterm or low birth weight infants. We excluded studies if infants received formula as a supplement to breast milk.\nData collection and analysis\nWe independently assessed studies for inclusion. We collected data using the standard methods of Cochrane Neonatal, with independent assessment of risk of bias and data extraction. We synthesised mean differences using a fixed-effect meta-analysis model. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included three studies involving 102 preterm or low birth weight infants in the review. The studies compared dilute (double-volume, half-strength) formula with full-strength (20 kcal/oz (~ 68 to 70 kcal/100 mL)) formula. We assessed all three studies as having unclear risk of bias due to the likely absence of blinding of study personnel and the potential for selection bias in the largest trial. Data for the primary outcome of necrotising enterocolitis were not reported in any of the studies. We could combine two of the studies (88 infants) in the meta-analysis. The evidence suggests that dilute formula with double-volume (half-strength) may lead to fewer episodes of gastric residuals per day (one study; mean difference (MD) −1.20, 95% confidence interval (CI) −2.20 to −0.20; low-certainty evidence), fewer episodes of gastric residuals per baby until attaining 100 kcal/kg (one study; MD −0.80, 95% CI −1.32 to −0.28; low-certainty evidence), fewer episodes of vomiting per day (one study; MD −0.04, 95% CI −0.07 to −0.01; low-certainty evidence) and fewer occurrences of abdominal distension greater than 2 cm (two studies; MD −0.16, 95% CI −0.19 to −0.13; low-certainty evidence). For the secondary outcomes, data suggest that infants in the dilute formula with double-volume (half-strength) group may have attained an adequate energy intake earlier than infants in the full-strength group (two studies; MD −2.26, 95% CI −2.85 to −1.67; low-certainty evidence). 
There was no evidence of a difference between groups for weight gain one week after commencement of intragastric feeds (one study; MD 0.05 kg, 95% CI −0.06 to 0.15; low-certainty evidence). Data were not reported for length of hospital stay.\nAuthors' conclusions\nThere is low-certainty evidence from three small, old trials that use of dilute formula in preterm or low birth weight formula-fed infants may lead to an important reduction in the time taken for preterm infants to attain an adequate energy intake.\nHowever, our confidence in this result is limited due to uncertainty over risk of bias and sparsity of available data. Dilute formula may reduce incidence of feeding intolerance, but the clinical significance of the reduction is uncertain. The impact on serious gastrointestinal problems, including necrotising enterocolitis, was not reported in any of the trials. Further randomised trials are needed to confirm these results.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence for this review is current as of 1 October 2018. We included three trials that compared dilute formula milk (half-strength, double-volume) with full-strength formula milk. The trials involved 102 preterm or low birth weight infants; two were conducted in the USA and one in India. The trials were small (14, 38 and 50 infants, respectively) and conducted between 25 and 30 years ago. The quality of two trials was judged to be poor due to insufficient information being provided in the trial publications, but judged to be moderate in the third trial (of 38 infants). None of the trials assessed necrotising enterocolitis as an outcome."
}
] |
query-laysum
|
6028
|
[
{
"role": "user",
"content": "Abstract: Background\nCombined modality treatment consisting of chemotherapy followed by localised radiotherapy is the standard treatment for patients with early stage Hodgkin lymphoma (HL). However, due to long- term adverse effects such as secondary malignancies the role of radiotherapy has been questioned recently and some clinical study groups advocate chemotherapy only for this indication.\nObjectives\nTo assess the effects of chemotherapy alone compared to chemotherapy plus radiotherapy in adults with early stage HL .\nSearch methods\nFor the or i ginal version of this review, we searched MEDLINE, Embase and CENTRAL as well as conference proceedings (American Society of Hematology, American Society of Clinical Oncology and International Symposium of Hodgkin Lymphoma) from January 1980 to November 2010 for randomised controlled trials (RCTs) comparing chemotherapy alone versus chemotherapy regimens plus radiotherapy. For the updated review we searched MEDLINE, CENTRAL and conference proceedings to December 2016.\nSelection criteria\nWe included RCTs comparing chemotherapy alone with chemotherapy plus radiotherapy in patients with early stage HL. We excluded trials with more than 20% of patients in advanced stage. As the value of radiotherapy in addition to chemotherapy is still not clear, we also compared to more cycles of chemotherapy in the control arm. In this updated review, we also included a second comparison evaluating trials with varying numbers of cycles of chemotherapy between intervention and control arms, same chemotherapy regimen in both arms assumed. We excluded trials evaluating children only, therefore only trials involving adults are included in this updated review.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of trials. We contacted study authors to obtain missing information. As effect measures we used hazard ratios (HR) for overall survival (OS) and progression-free survival (PFS) and risk ratios (RR) for response rates. Since not all trials reported PFS according to our definitions, we evaluated all similar outcomes (e.g. event-free survival) as PFS/tumour control.\nMain results\nOur search led to 5518 potentially relevant references. From these, we included seven RCTs in the analyses involving 2564 patients. In contrast to the first version of this review including five trials, we excluded trials randomising children. As a result, we excluded one trial from the former analyses and we identified three new trials.\nFive trials with 1388 patients compared the combination of chemotherapy alone and chemotherapy plus radiotherapy, with the same number of chemotherapy cycles in both arms. The addition of radiotherapy to chemotherapy has probably little or no difference on OS (HR 0.48; 95% confidence interval (CI) 0.22 to 1.06; P = 0.07, moderate- quality evidence), however two included trials had potential other high risk of bias due to a high number of patients not receiving planned radiotherapy. After excluding these trials in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved OS compared to chemotherapy alone (HR 0.31; 95% CI 0.19 to 0.52; P <0.00001, moderate- quality evidence). In contrast to chemotherapy alone the use of chemotherapy and radiotherapy improved PFS (HR 0.42; 95% CI 0.25 to 0.72; P = 0.001; moderate- quality evidence). 
Regarding infection-related mortality (RR 0.33; 95% CI 0.01 to 8.06; P = 0.5; low-quality evidence), second cancer-related mortality (RR 0.53; 95% CI 0.07 to 4.29; P = 0.55; low-quality evidence) and cardiac disease-related mortality (RR 2.94; 95% CI 0.31 to 27.55; P = 0.35; low-quality evidence), there is no evidence for a difference between the use of chemotherapy alone and chemotherapy plus radiotherapy. For complete response rate (CRR) (RR 1.08; 95% CI 0.93 to 1.25; P = 0.33; low-quality evidence), there is also no evidence for a difference between treatment groups.\nTwo trials with 1176 patients compared chemotherapy alone with chemotherapy plus radiotherapy, with different numbers of chemotherapy cycles in the two arms. OS was reported in one trial only; the use of chemotherapy alone (more chemotherapy cycles) may improve OS compared to chemotherapy plus radiotherapy (HR 2.12; 95% CI 1.03 to 4.37; P = 0.04; low-quality evidence). This trial was also at potential high risk of other bias due to a high number of patients not receiving the planned therapy. There is no evidence for a difference between chemotherapy alone and chemotherapy plus radiotherapy regarding PFS (HR 0.42; 95% CI 0.14 to 1.24; P = 0.12; low-quality evidence). After excluding the trial with patients not receiving the planned therapy in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved PFS compared to chemotherapy alone (HR 0.24; 95% CI 0.070 to 0.88; P = 0.03, based on one trial). For infection-related mortality (RR 6.90; 95% CI 0.36 to 132.34; P = 0.2; low-quality evidence), second cancer-related mortality (RR 2.22; 95% CI 0.7 to 7.03; P = 0.18; low-quality evidence) and cardiac disease-related mortality (RR 0.99; 95% CI 0.14 to 6.90; P = 0.99; low-quality evidence), there is no evidence for a difference between the use of chemotherapy alone and chemotherapy plus radiotherapy. CRR was not reported.\nAuthors' conclusions\nThis systematic review compared the effects of chemotherapy alone and chemotherapy plus radiotherapy in adults with early stage HL.\nFor the comparison with the same number of chemotherapy cycles in both arms, we found moderate-quality evidence that PFS is superior in patients receiving chemotherapy plus radiotherapy compared with those receiving chemotherapy alone. The addition of radiotherapy to chemotherapy probably makes little or no difference to OS. The sensitivity analysis without the trials at potential high risk of other bias showed that chemotherapy plus radiotherapy improves OS compared to chemotherapy alone.\nFor the comparison with different numbers of chemotherapy cycles between the arms, no conclusions about OS and PFS are possible because of the low quality of the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For the comparison of chemotherapy alone and chemotherapy plus radiotherapy with the same number of chemotherapy cycles in both arms, this systematic review found no evidence for a difference regarding OS between the interventions, however, two included trials had potential other high risk of bias due to a high number of patients not receiving radiotherapy as planned beforehand. After excluding these trials in a further analysis, OS was superior in adults receiving chemotherapy plus radiotherapy than in those receiving chemotherapy alone. PFS was also superior in adults receiving chemotherapy plus radiotherapy. Most trials reported adverse events (AEs), but in different ways. Because of insufficient comparable data we focused on adverse events considered of particular interest. For infection- related mortality, second cancer- related mortality and cardiac disease- related mortality, there was no evidence for a difference between treatment groups. For complete response rate (CRR) there was no evidence for a difference between treatment groups either.\nFor the comparison of chemotherapy alone and chemotherapy plus radiotherapy with different numbers of chemotherapy cycles in the arms, OS was reported in one trial only. The use of chemotherapy alone may improve OS compared to chemotherapy plus radiotherapy. There was no evidence for a difference between treatment groups regarding PFS. After excluding one trial with patients not receiving the planned therapy the results showed that chemotherapy plus radiotherapy improved PFS. For infection- related mortality, second cancer- related mortality and cardiac disease- related mortality, there is no evidence for a difference between treatment groups. CR was not reported."
}
] |
query-laysum
|
6029
|
[
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis is the most common life-limiting autosomal recessive genetic disorder in white populations. Distal intestinal obstruction syndrome (DIOS) is an important morbidity in cystic fibrosis. It is the result of the accumulation of viscid faecal material within the bowel which combines with thick, sticky mucus produced in the intestines of people with cystic fibrosis. The intestine may be completely blocked (complete DIOS) or only partially blocked (incomplete DIOS). Once a diagnosis of DIOS has been made, the goal of therapy is to relieve the acute complete or incomplete faecal obstruction and ultimately prevent the need for surgical intervention.\nObjectives\nThis review aimed to evaluate the effectiveness and safety of different treatment regimens for the treatment of DIOS (complete and incomplete) in children and adults with cystic fibrosis.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. Date of search: 09 September 2021.\nWe also searched online trial registries. Date of last search: 12 October 2021.\nSelection criteria\nRandomised controlled trials, quasi-randomised controlled trials (including cross-over trials (to be judged on an individual basis)) comparing the use of laxative agents or surgery for treating DIOS in children, young people and adults with cystic fibrosis to each other, placebo or no intervention.\nData collection and analysis\nTwo authors independently screened papers, extracted trial details and assessed for risk of bias. The authors assessed the quality of evidence using GRADE.\nMain results\nThere was one trial with 20 participants (16 females) included in the review. The mean age of participants was 13.1 years. The trial was a double-blinded, randomised cross-over trial which had a duration of 12 months in total and compared high-dose and low-dose pancreatic enzyme therapy. As only the abstract of the trial was available, the overall risk of bias was judged to be unclear. The trial did not address either of our primary outcomes (time until resolution of DIOS and treatment failure rate), but reported episodes of acute DIOS, presence of abdominal mass and abdominal pain. There were no numerical data available for these outcomes, but the authors stated that there was no difference between treatment with high-dose or low-dose pancreatic enzymes. The overall certainty of the evidence was found to be very low.\nAuthors' conclusions\nThere is a clear lack of evidence for the treatment of DIOS in people with cystic fibrosis. The included abstract did not address our primary outcome measures and did not provide numerical data for the two secondary outcomes it did address. Therefore, we cannot justify the use of high-dose pancreatic enzymes for treating DIOS, nor can we comment on the efficacy and safety of other laxative agents. From our findings, it is clear that more randomised controlled trials need to be conducted in this area.\n\nGiven the provided abstract, please respond to the following query: \"Trial characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included one trial with 20 people with cystic fibrosis who were aged between 7.1 and 23.2 years of age (average 13.1 years old). The 12-month trial compared a high dose of pancreatic enzymes with a low dose of pancreatic enzymes for treating chronic DIOS."
}
] |
query-laysum
|
6030
|
[
{
"role": "user",
"content": "Abstract: Background\nSpinal muscular atrophy (SMA) is caused by a homozygous deletion of the survival motor neuron 1 (SMN1) gene on chromosome 5, or a heterozygous deletion in combination with a point mutation in the second SMN1 allele. This results in degeneration of anterior horn cells, which leads to progressive muscle weakness. By definition, children with SMA type I are never able to sit without support and usually die or become ventilator dependent before the age of two years. There have until very recently been no drug treatments to influence the course of SMA. We undertook this updated review to evaluate new evidence on emerging treatments for SMA type I. The review was first published in 2009 and previously updated in 2011.\nObjectives\nTo assess the efficacy and safety of any drug therapy designed to slow or arrest progression of spinal muscular atrophy (SMA) type I.\nSearch methods\nWe searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, and ISI Web of Science conference proceedings in October 2018. We also searched two trials registries to identify unpublished trials (October 2018).\nSelection criteria\nWe sought all randomised controlled trials (RCTs) or quasi-RCTs that examined the efficacy of drug treatment for SMA type I. Included participants had to fulfil clinical criteria and have a genetically confirmed deletion or mutation of the SMN1 gene (5q11.2-13.2).\nThe primary outcome measure was age at death or full-time ventilation. Secondary outcome measures were acquisition of motor milestones, i.e. head control, rolling, sitting or standing, motor milestone response on disability scores within one year after the onset of treatment, and adverse events and serious adverse events attributable to treatment during the trial period.\nTreatment strategies involving SMN1 gene replacement with viral vectors are out of the scope of this review.\nData collection and analysis\nWe followed standard Cochrane methodology.\nMain results\nWe identified two RCTs: one trial of intrathecal nusinersen in comparison to a sham (control) procedure in 121 randomised infants with SMA type I, which was newly included at this update, and one small trial comparing riluzole treatment to placebo in 10 children with SMA type I.\nThe RCT of intrathecally-injected nusinersen was stopped early for efficacy (based on a predefined Hammersmith Infant Neurological Examination-Section 2 (HINE-2) response). At the interim analyses after 183 days of treatment, 41% (21/51) of nusinersen-treated infants showed a predefined improvement on HINE-2, compared to 0% (0/27) of participants in the control group. This trial was largely at low risk of bias.\nFinal analyses (ranging from 6 months to 13 months of treatment), showed that fewer participants died or required full-time ventilation (defined as more than 16 hours daily for 21 days or more) in the nusinersen-treated group than the control group (hazard ratio (HR) 0.53, 95% confidence interval (CI) 0.32 to 0.89; N = 121; a 47% lower risk; moderate-certainty evidence). 
A proportion of infants in the nusinersen group and none of 37 infants in the control group achieved motor milestones: 37/73 nusinersen-treated infants (51%) achieved a motor milestone response on HINE-2 (risk ratio (RR) 38.51, 95% CI 2.43 to 610.14; N = 110; moderate-certainty evidence); 16/73 achieved head control (RR 16.95, 95% CI 1.04 to 274.84; moderate-certainty evidence); 6/73 achieved independent sitting (RR 6.68, 95% CI 0.39 to 115.38; moderate-certainty evidence); 7/73 achieved rolling over (RR 7.70, 95% CI 0.45 to 131.29); and 1/73 achieved standing (RR 1.54, 95% CI 0.06 to 36.92; moderate-certainty evidence). Seventy-one per cent of nusinersen-treated infants versus 3% of infants in the control group were responders on the Children's Hospital of Philadelphia Infant Test of Neuromuscular Disorders (CHOP INTEND) measure of motor disability (RR 26.36, 95% CI 3.79 to 183.18; N = 110; moderate-certainty evidence).\nAdverse events and serious adverse events occurred in the majority of infants but were no more frequent in the nusinersen-treated group than the control group (RR 0.99, 95% CI 0.92 to 1.05 and RR 0.70, 95% CI 0.55 to 0.89, respectively; N = 121; moderate-certainty evidence).\nIn the riluzole trial, three of seven children treated with riluzole were still alive at the ages of 30, 48, and 64 months, whereas all three children in the placebo group died. None of the children in the riluzole or placebo group developed the ability to sit, which was the only milestone reported. There were no adverse effects. The certainty of the evidence for all measured outcomes from this study was very low, because the study was too small to detect or rule out an effect, and had serious limitations, including baseline differences. This trial was stopped prematurely because the pharmaceutical company withdrew funding.\nVarious trials and studies investigating treatment strategies other than nusinersen, such as SMN2 augmentation by small molecules, are ongoing.\nAuthors' conclusions\nBased on the very limited evidence currently available regarding drug treatments for SMA type I, intrathecal nusinersen probably prolongs ventilation-free and overall survival in infants with SMA type I. It is also probable that a greater proportion of infants treated with nusinersen than with a sham procedure achieve motor milestones and can be classed as responders to treatment on clinical assessments (HINE-2 and CHOP INTEND). The proportion of children experiencing adverse events and serious adverse events is no higher with nusinersen treatment than with a sham procedure, based on evidence of moderate certainty. It is uncertain whether riluzole has any effect in patients with SMA type I, based on the limited available evidence. Future trials could provide more high-certainty, longer-term evidence to confirm this result, or focus on comparing new treatments to nusinersen or evaluating them as an add-on therapy to nusinersen.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Nusinersen probably increases ventilation-free and overall survival in children with SMA type I. Nusinersen may improve the proportion of infants achieving motor milestones. Adverse events and serious adverse events are probably not more common with nusinersen injection than with a sham procedure.\nIt is uncertain whether riluzole has any effect in SMA type I, based on the limited available evidence."
}
] |
query-laysum
|
6031
|
[
{
"role": "user",
"content": "Abstract: Background\nAcute limb ischaemia (ALI), the sudden and significant reduction of blood flow to the limb, is considered a vascular emergency. In the general population, the incidence is estimated as 14 per 100,000. Prognosis depends on the time it takes to diagnose the condition and begin appropriate treatment. Standard initial interventional treatments include conventional open surgery and endovascular interventions such as catheter-directed thrombolysis (CDT). Percutaneous interventions, such as percutaneous thrombectomy (PT, including mechanical thrombectomy or pharmomechanical thrombectomy) and ultrasound-accelerated thrombolysis (USAT), are also performed as alternative endovascular techniques. The proposed advantages of PT and USAT include reduced time to revascularisation and when combined with catheter-directed thrombolysis, a reduction in dose of thrombolytic agents and infusion time. The benefits of PT or USAT versus open surgery or thrombolysis alone are still uncertain. In this review, we compared PT or USAT against standard treatment for ALI, in an attempt to determine if any technique is comparatively safer and more effective.\nObjectives\nTo assess the safety and effectiveness of percutaneous thrombectomy or ultrasound-accelerated thrombolysis for the initial management of acute limb ischaemia in adults.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, the World Health Organization (WHO) International Clinical Trials Registry Platform, and ClinicalTrials.gov to 3 March 2021. We searched reference lists of relevant studies and papers.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared PT (any modality, including mechanical thrombectomy (aspiration, rheolysis, rotation) or pharmomechanical thrombectomy) or USAT with open surgery, thrombolysis alone, no treatment, or another PT modality for the treatment of ALI.\nData collection and analysis\nTwo review authors independently selected the studies, assessed risk of bias, extracted data, performed data analysis, and assessed the certainty of evidence according to GRADE. Outcomes of interest were primary patency, amputation rate, major bleeding, clinical success, secondary patency, and adverse effects.\nMain results\nWe included one RCT in this review. This study had a total of 60 participants and compared USAT with standard treatment (CDT). The study included 32 participants in the CDT group and 28 participants in the USAT group.\nWe found no evidence of a difference between USAT and CDT alone for the following evaluated outcomes: amputation rate (risk ratio (RR) 1.14, 95% confidence interval (CI) 0.17 to 7.59); major bleeding (RR 1.71, 95% CI 0.31 to 9.53); clinical success (RR 1.00, 95% CI 0.94 to 1.07); and adverse effects (RR 5.69, 95% CI 0.28 to 113.72). We rated the certainty of the evidence as very low for these outcomes. We downgraded the certainty of the evidence for amputation rate, major bleeding, clinical success, and adverse effects by two levels due to serious limitations in the design (there was a high risk of bias in critical domains) and by two further levels due to imprecision (a small number of participants and only one study included). The study authors reported 30-day patency, but did not report primary and secondary patency separately. The patency rate in the successfully lysed participants was 71% (15/21) in the USAT group and 82% (22/27) in the CDT group. 
The study authors did not directly report secondary patency, which is patency after secondary procedures, but they did report on secondary procedures. Secondary procedures were subdivided into embolectomy and bypass grafting. Embolectomy was performed on 14% (4/28) of participants in the USAT group versus 3% (1/32) of participants in the CDT group. Bypass grafting was performed on 4% (1/28) of participants in the USAT group versus 0% in the CDT group. As we did not have access to the specific participant data, it was not possible to assess these outcomes further.\nWe did not identify studies comparing the other planned interventions.\nAuthors' conclusions\nThere is insufficient evidence to assess the safety and effectiveness of USAT versus CDT alone for ALI for our evaluated outcomes: amputation rate, major bleeding, clinical success, and adverse effects. Primary and secondary patency were not reported separately. There was no RCT evidence for PT. Limitations of this systematic review derive from the single included study, small sample size, short clinical follow-up period, and high risk of bias in critical domains. For this reason, the applicability of the results is limited. There is a need for high-quality studies to compare PT or USAT against open surgery, thrombolysis alone, no treatment, or other PT modalities for ALI. Future trials should assess outcomes, such as primary patency, amputation rate, major bleeding, clinical success, secondary patency, and adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have very little confidence in the evidence because it is possible that people in the study were aware of what treatment they received. The study did not provide information about important effects of the treatments (blood flow), and the study involved only a small number of people."
}
] |
query-laysum
|
6032
|
[
{
"role": "user",
"content": "Abstract: Background\nNuts contain a number of nutritional attributes which may be cardioprotective. A number of epidemiological studies have shown that nut consumption may have a beneficial effect on people who have cardiovascular disease (CVD) risk factors. However, results from randomised controlled trials (RCTs) are less consistent.\nObjectives\nTo determine the effectiveness of nut consumption for the primary prevention of CVD.\nSearch methods\nWe searched the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science Core Collection, CINAHL, Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment Database (HTA) and Health Economics Evaluations Database (HEED) up to 30 July 2015. We searched trial registers and reference lists of reviews for further studies. We did not apply any language restrictions.\nSelection criteria\nWe included RCTs of dietary advice to increase nut consumption or provision of nuts to increase consumption lasting at least three months and including healthy adults or adults at moderate and high risk of CVD. The comparison group was no intervention or minimal intervention. The outcomes of interest were CVD clinical events and CVD risk factors.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risk of bias in included trials.\nMain results\nWe included five trials (435 participants randomised) and one ongoing trial. One study is awaiting classification. All trials examined the provision of nuts to increase consumption rather than dietary advice. None of the included trials reported on the primary outcomes, CVD clinical events, but trials were small and short term. All five trials reported on CVD risk factors. Four of these trials provided data in a useable format for meta-analyses, but heterogeneity precluded meta-analysis for most of the analyses. Overall trials were judged to be at unclear risk of bias.\nThere were variable and inconsistent effects of nut consumption on CVD risk factors (lipid levels and blood pressure). Three trials monitored adverse events. One trial reported an allergic reaction to nuts and three trials reported no significant weight gain with increased nut consumption. None of the included trials reported on other secondary outcomes, occurrence of type 2 diabetes as a major risk factor for CVD, health-related quality of life and costs.\nAuthors' conclusions\nCurrently there is a lack of evidence for the effects of nut consumption on CVD clinical events in primary prevention and very limited evidence for the effects on CVD risk factors. No conclusions can be drawn and further high quality longer term and adequately powered trials are needed to answer the review question.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane review aims to answer the question whether or not eating nuts can prevent cardiovascular disease."
}
] |
query-laysum
|
6033
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is increasing evidence that propofol is efficacious and safe for procedural sedation (PS) in the emergency department (ED) setting. However, propofol has a narrow therapeutic window and lacks of a reversal agent. The aim of this review was to cohere the evidence base regarding the efficacy and safety profile of propofol when used in the ED setting for PS.\nObjectives\nTo identify and evaluate all randomized controlled trials (RCTs) comparing propofol with alternative drugs (benzodiazepines, barbiturates, etomidate and ketamine) used in the ED setting for PS.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 9), MEDLINE (1950 to September week 2 2013) and EMBASE (1980 to week 2 2013). We searched the Current Controlled Trials metaRegister of Clinical Trials (compiled by Current Science) (September 2013). We checked the reference lists of trials and contacted trial authors. We imposed no language restriction. We re-ran the search in February 2015. We will deal with the one study awaiting classification when we update the review.\nSelection criteria\nRCTs comparing propofol to alternative drugs (benzodiazepines, barbiturates, etomidate and ketamine) used in the ED setting for PS in participants of all ages.\nData collection and analysis\nTwo authors independently performed data extraction. Two authors performed trial quality assessment. We used mean difference (MD), odds ratio (OR) and 95% confidence intervals (CI) to measure effect sizes. Two authors independently assessed and rated the methodological quality of each trial using The Cochrane Collaboration tool for assessing risk of bias.\nMain results\nTen studies (813 participants) met the inclusion criteria. Two studies only included participants 18 years and younger; six studies only included participants 18 years and older; one study included participants between 16 and 65 years of age and one study included only adults but did not specify the age range. Eight of the included studies had a high risk of bias. The included studies were clinically heterogeneous. We undertook no meta-analysis.\nThe primary outcome measures of this review were: adverse effects (as defined by the study authors) and participant satisfaction (as defined by the study authors). In one study comparing propofol/fentanyl with ketamine/midazolam, delayed adverse reactions (nightmares and behavioural change) were noted in 10% of the ketamine/midazolam group and none in the propofol/fentanyl group. Seven individual studies reported no evidence of a difference in adverse effects between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions. Three individual studies reported no evidence of a difference in pain at the injection site between intravenous propofol and alternative interventions. Four individual studies reported no evidence of a difference in participant satisfaction between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions (ketamine, etomidate, midazolam). All the studies employed propofol without the use of an adjunctive analgesic and all, except one, were small (fewer than 100 participants) studies. The quality of evidence for the adverse effects and participant satisfaction outcomes was very low.\nNine included studies (eight comparisons) reported all the secondary outcome measures of the review except mortality. 
It was not possible to pool the results of the included studies for any of the secondary outcome measures because the comparator interventions were different and the measures were reported in different ways. Seven individual studies reported no evidence of a difference in the incidence of hypoxia between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions.\nAuthors' conclusions\nNo firm conclusions can be drawn concerning the effects of administering intravenous propofol, with or without an adjunctive analgesic agent, compared with alternative interventions, on adverse effects (including pain at the injection site) and participant satisfaction in participants undergoing PS in the ED setting. The review was limited because no two included studies employed the same comparator interventions, and because the numbers of participants in eight of the included studies were small (fewer than 100 participants).\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was overall very low."
}
] |
query-laysum
|
6034
|
[
{
"role": "user",
"content": "Abstract: Background\nStillbirth affects 2.6 million pregnancies worldwide each year. Whilst the majority of cases occur in low- and middle-income countries, stillbirth remains an important clinical issue for high-income countries (HICs) - with both the UK and the USA reporting rates above the mean for HICs. In HICs, the most frequently reported association with stillbirth is placental dysfunction. Placental dysfunction may be evident clinically as fetal growth restriction (FGR) and small-for-dates infants. It can be caused by placental abruption or hypertensive disorders of pregnancy and many other disorders and factors\nPlacental abnormalities are noted in 11% to 65% of stillbirths. Identification of FGR is difficult in utero. Small-for-gestational age (SGA), as assessed after birth, is the most commonly used surrogate measure for this outcome. The degree of SGA is associated with the likelihood of FGR; 30% of infants with a birthweight < 10th centile are thought to be FGR, while 70% of infants with a birthweight < 3rd centile are thought to be FGR. Critically, SGA is the most significant antenatal risk factor for a stillborn infant. Correct identification of SGA infants is associated with a reduction in the perinatal mortality rate. However, currently used tests, such as measurement of symphysis-fundal height, have a low reported sensitivity and specificity for the identification of SGA infants.\nObjectives\nThe primary objective was to assess and compare the diagnostic accuracy of ultrasound assessment of fetal growth by estimated fetal weight (EFW) and placental biomarkers alone and in any combination used after 24 weeks of pregnancy in the identification of placental dysfunction as evidenced by either stillbirth, or birth of a SGA infant. Secondary objectives were to investigate the effect of clinical and methodological factors on test performance.\nSearch methods\nWe developed full search strategies with no language or date restrictions. The following sources were searched: MEDLINE, MEDLINE In Process and Embase via Ovid, Cochrane (Wiley) CENTRAL, Science Citation Index (Web of Science), CINAHL (EBSCO) with search strategies adapted for each database as required; ISRCTN Registry, UK Clinical Trials Gateway, WHO International Clinical Trials Portal and ClinicalTrials.gov for ongoing studies; specialist abstract and conference proceeding resources (British Library’s ZETOC and Web of Science Conference Proceedings Citation Index). Search last conducted in Ocober 2016.\nSelection criteria\nWe included studies of pregnant women of any age with a gestation of at least 24 weeks if relevant outcomes of pregnancy (live birth/stillbirth; SGA infant) were assessed. Studies were included irrespective of whether pregnant women were deemed to be low or high risk for complications or were of mixed populations (low and high risk). Pregnancies complicated by fetal abnormalities and multi-fetal pregnancies were excluded as they have a higher risk of stillbirth from non-placental causes. With regard to biochemical tests, we included assays performed using any technique and at any threshold used to determine test positivity.\nData collection and analysis\nWe extracted the numbers of true positive, false positive, false negative, and true negative test results from each study. We assessed risk of bias and applicability using the QUADAS-2 tool. 
Meta-analyses were performed using the hierarchical summary ROC model to estimate and compare test accuracy.\nMain results\nWe included 91 studies that evaluated seven tests — blood tests for human placental lactogen (hPL), oestriol, placental growth factor (PlGF) and uric acid, ultrasound EFW and placental grading and urinary oestriol — in a total of 175,426 pregnant women, in whom 15,471 pregnancies ended in the birth of a small baby and 740 ended in stillbirth. The quality of included studies was variable, with most domains at low risk of bias, although 59% of studies were deemed to be at unclear risk of bias for the reference standard domain. Fifty-three per cent of studies were of high concern for applicability due to inclusion of only high- or low-risk women.\nUsing all available data for SGA (86 studies; 159,490 pregnancies involving 15,471 SGA infants), there was evidence of a difference in accuracy (P < 0.0001) between the seven tests for detecting pregnancies that are SGA at birth. Ultrasound EFW was the most accurate test for detecting SGA at birth with a diagnostic odds ratio (DOR) of 21.3 (95% CI 13.1 to 34.6); hPL was the most accurate biochemical test with a DOR of 4.78 (95% CI 3.21 to 7.13). In a hypothetical cohort of 1000 pregnant women, at the median specificity of 0.88 and median prevalence of 19%, EFW, hPL, oestriol, urinary oestriol, uric acid, PlGF and placental grading will miss 50 (95% CI 32 to 68), 116 (97 to 133), 124 (108 to 137), 127 (95 to 152), 139 (118 to 154), 144 (118 to 161), and 144 (122 to 161) SGA infants, respectively. For the detection of pregnancies ending in stillbirth (21 studies; 100,687 pregnancies involving 740 stillbirths), in an indirect comparison of the four biochemical tests, PlGF was the most accurate test with a DOR of 49.2 (95% CI 12.7 to 191). In a hypothetical cohort of 1000 pregnant women, at the median specificity of 0.78 and median prevalence of 1.7%, PlGF, hPL, urinary oestriol and uric acid will miss 2 (95% CI 0 to 4), 4 (2 to 8), 6 (6 to 7) and 8 (3 to 13) stillbirths, respectively. No studies assessed the accuracy of ultrasound EFW for detection of pregnancy ending in stillbirth.\nAuthors' conclusions\nBiochemical markers of placental dysfunction used alone have insufficient accuracy to identify pregnancies ending in SGA or stillbirth. Studies combining ultrasound and placental biomarkers are needed to determine whether this approach improves diagnostic accuracy over the use of ultrasound estimation of fetal size or biochemical markers of placental dysfunction used alone. Many of the studies included in this review were carried out between 1974 and 2016. Studies of placental substances were mostly carried out before 1991 and after 2013; earlier studies may not reflect developments in test technology.\n\nGiven the provided abstract, please respond to the following query: \"What we did\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies in October 2016 and identified and total of 24,059 studies - with 91 of those studies providing us with information that we could include in this review. We looked at ultrasound scanning and six different tests of placental substances, including proteins and hormones. These studies involved 175,426 women in total of which 15,471 pregnancies ended in the birth of a small baby and 740 pregnancies which ended in stillbirth."
}
] |
query-laysum
|
6035
|
[
{
"role": "user",
"content": "Abstract: Background\nAcute lower respiratory tract infections (LRTI) range from acute bronchitis and acute exacerbations of chronic bronchitis to pneumonia. Approximately five million people die from acute respiratory tract infections annually. Among these, pneumonia represents the most frequent cause of mortality, hospitalisation and medical consultation. Azithromycin is a macrolide antibiotic, structurally modified from erythromycin and noted for its activity against some gram-negative organisms associated with respiratory tract infections, particularly Haemophilus influenzae (H. influenzae).\nObjectives\nTo compare the effectiveness of azithromycin to amoxycillin or amoxycillin/clavulanic acid (amoxyclav) in the treatment of LRTI, in terms of clinical failure, incidence of adverse events and microbial eradication.\nSearch methods\nWe searched CENTRAL (2014, Issue 10), MEDLINE (January 1966 to October week 4, 2014) and EMBASE (January 1974 to November 2014).\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs, comparing azithromycin to amoxycillin or amoxycillin/clavulanic acid in participants with clinical evidence of an acute LRTI, such as acute bronchitis, pneumonia and acute exacerbation of chronic bronchitis.\nData collection and analysis\nThe review authors independently assessed all potential studies identified from the searches for methodological quality. We extracted and analysed relevant data separately. We resolved discrepancies through discussion. We initially pooled all types of acute LRTI in the meta-analyses. We investigated the heterogeneity of results using the forest plot and Chi2 test. We also used the index of the I2 statistic to measure inconsistent results among trials. We conducted subgroup and sensitivity analyses.\nMain results\nWe included 16 trials involving 2648 participants. We were able to analyse 15 of the trials with 2496 participants. The pooled analysis of all the trials showed that there was no significant difference in the incidence of clinical failure on about days 10 to 14 between the two groups (risk ratio (RR), random-effects 1.09; 95% confidence interval (CI) 0.64 to 1.85). A subgroup analysis in trials with acute bronchitis participants showed significantly lower clinical failure in the azithromycin group compared to amoxycillin or amoxyclav (RR random-effects 0.63; 95% CI 0.45 to 0.88). A sensitivity analysis showed a non-significant reduction in clinical failure in azithromycin-treated participants (RR 0.55; 95% CI 0.25 to 1.21) in three adequately concealed studies, compared to RR 1.32; 95% CI 0.70 to 2.49 in 12 studies with inadequate concealment. Twelve trials reported the incidence of microbial eradication and there was no significant difference between the two groups (RR 0.95; 95% CI 0.87 to 1.03). The reduction of adverse events in the azithromycin group was RR 0.76 (95% CI 0.57 to 1.00).\nAuthors' conclusions\nThere is unclear evidence that azithromycin is superior to amoxycillin or amoxyclav in treating acute LRTI. In patients with acute bronchitis of a suspected bacterial cause, azithromycin tends to be more effective in terms of lower incidence of treatment failure and adverse events than amoxycillin or amoxyclav. However, most studies were of unclear methodological quality and had small sample sizes; future trials of high methodological quality and adequate sizes are needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall the quality of the evidence for the main outcome is low as only three of 15 included trials showed adequate allocation concealment. Hence, currently, there is insufficient evidence to show conclusively that azithromycin is superior to amoxycillin or amoxyclav in treating acute LRTI."
}
] |
query-laysum
|
6036
|
[
{
"role": "user",
"content": "Abstract: Background\nAnti-neutrophilic cytoplasmic antibodies (ANCA)-associated vasculitis (AAV) are a group of rare auto-inflammatory diseases that affects mainly small vessels. AAV includes: granulomatosis with polyangiitis (GPA), microscopic polyangiitis (MPA) and eosinophilic granulomatosis with polyangiitis (EGPA). Anti-cytokine targeted therapy uses biological agents capable of specifically targeting and neutralising cytokine mediators of the inflammatory response.\nObjectives\nTo assess the benefits and harms of anti-cytokine targeted therapy for adults with AAV.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (2019, Issue 7), MEDLINE and Embase up to 16 August 2019. We also examined reference lists of articles, clinical trial registries, websites of regulatory agencies and contacted manufacturers.\nSelection criteria\nRandomised controlled trials (RCTs) or controlled clinical trials of targeted anti-cytokine therapy in adults (18 years or older) with AAV compared with placebo, standard therapy or another modality and anti-cytokine therapy of different type or dose.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included four RCTs with a total of 440 participants (mean age 48 to 56 years). We analysed the studies in three groups: 1) mepolizumab (300 mg; three separate injections every four weeks for 52 weeks) versus placebo in participants with relapsing or refractory EGPA; 2) belimumab (10 mg/kg on days 0, 14, 28 and every 28 days thereafter until 12 months after the last participant was randomised) or etanercept (25 mg twice a week) with standard therapy (median 25 months) versus placebo with standard therapy (median 19 months) in participants with GPA/MPA; and 3) infliximab (3 mg/kg on days 1 and 14, before the response assessment on day 42) versus rituximab (0.375g/m2 on days 1, 8, 15 and 22) in participants with refractory GPA for up to 12 months. None of the studies were assessed as low risk of bias in all domains: one study did not report randomisation or blinding methods clearly. Three studies were at high risk and one study was at unclear risk of bias for selective outcome reporting.\nOne trial with 136 participants with relapsing or refractory EGPA compared mepolizumab with placebo during 52 weeks of follow-up and observed one death in the mepolizumab group (1/68, 1.5%) and none in the placebo group (0/68, 0%) (Peto odds ratio (OR) 7.39, 95% confidence interval (CI) 0.15 to 372.38; low-certainty evidence). Low-certainty evidence suggests that more participants in the mepolizumab group had ≥ 24 weeks of accrued remission over 52 weeks compared to placebo (27.9% versus 2.9%; risk ratio (RR) 9.5, 95% CI 2.30 to 39.21), and durable remission within the first 24 weeks sustained until week 52 (19.1% mepolizumab versus 1.5% placebo; RR 13.0, 95% CI 1.75 to 96.63; number needed to treat for an additional beneficial outcome (NNTB) 6, 95% Cl 4 to 13). Mepolizumab probably decreases risk of relapse (55.8% versus 82.4%; RR 0.68, 95% CI 0.53 to 0.86; NNTB 4, 95% CI 3 to 9; moderate-certainty evidence). There was low-certainty evidence regarding similar frequency of adverse events (AEs): total AEs (96.9% versus 94.1%; RR 1.03, 95% CI 0.96 to 1.11), serious AEs (17.7% versus 26.5%; RR 0.67, 95% CI 0.35 to 1.28) and withdrawals due to AEs (2.9% versus 1.5%; RR 2.00, 95% CI 0.19 to 21.54). 
Disease flares were not measured.\nBased on two trials with different follow-up periods (mean of 27 months for the etanercept study; up to four years for the belimumab study) including people with GPA (n = 263) and a small group of participants with MPA (n = 22) analysed together, we found low-certainty evidence suggesting that adding an active drug (etanercept or belimumab) to standard therapy does not increase or reduce mortality (3.4% versus 1.4%; Peto OR 2.45, 95% CI 0.55 to 10.97). Etanercept may have little or no effect on remission (92.3% versus 89.5%; RR 0.97, 95% CI 0.89 to 1.07), durable remission (70% versus 75.3%; RR 0.93, 95% CI 0.77 to 1.11; low-certainty evidence) and disease flares (56% versus 57.1%; RR 0.98, 95% CI 0.76 to 1.27; moderate-certainty evidence). Low-certainty evidence suggests that belimumab does not increase or reduce major relapse (1.9% versus 0%; RR 2.94, 95% CI 0.12 to 70.67) or any AE (92.5% versus 82.7%; RR 1.12, 95% CI 0.97 to 1.29). Low-certainty evidence suggests a similar frequency of serious or severe AEs (47.6% versus 47.6%; RR 1.00, 95% CI 0.80 to 1.27), but more frequent withdrawals due to AEs in the active drug group (11.2%) compared to the placebo group (4.2%; RR 2.66, 95% CI 1.07 to 6.59).\nOne trial involving 17 participants with refractory GPA compared infliximab with rituximab added to steroids and cytotoxic agents for 12 months. One participant died in each group (Peto OR 0.88, 95% CI 0.05 to 15.51; 11% versus 12.5%). We have very low-certainty evidence for remission (22% versus 50%, RR 0.44, 95% CI 0.11 to 1.81), durable remission (11% versus 50%, RR 0.22, 95% CI 0.03 to 1.60), any severe AE (22.3% versus 12.5%; RR 1.78, 95% CI 0.2 to 16.1) and withdrawals due to AEs (0% versus 0%; RR 2.70, 95% CI 0.13 to 58.24). Disease flare/relapse and the frequency of any AE were not reported.\nAuthors' conclusions\nWe found four studies but concerns about risk of bias and small sample sizes preclude firm conclusions.\nWe found moderate-certainty evidence that in patients with relapsing or refractory EGPA, mepolizumab compared to placebo probably decreases disease relapse and low-certainty evidence that mepolizumab may increase the probability of accruing at least 24 weeks of disease remission. There were similar frequencies of total and serious AEs in both groups, but the study was too small to reliably assess these outcomes. Mepolizumab may result in little to no difference in mortality. However, there were very few events.\nIn participants with GPA (and a small subgroup of participants with MPA), etanercept or belimumab may increase the probability of withdrawal due to AEs and may have little to no impact on serious AEs. Etanercept may have little or no impact on durable remission and probably does not reduce disease flare.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to August 2019."
}
] |
query-laysum
|
6037
|
[
{
"role": "user",
"content": "Abstract: Background\nCOVID-19 infection poses a serious risk to patients and – due to its contagious nature – to those healthcare workers (HCWs) treating them. The risks of transmission of infection are greater when a patient is undergoing an aerosol-generating procedure (AGP). Not all those with COVID-19 infection are symptomatic, or suspected of harbouring the infection. If a patient who is not known to have or suspected of having COVID-19 infection is to undergo an AGP, it would nonetheless be sensible to minimise the risk to those HCWs treating them.\nIf the mouth and nose of an individual undergoing an AGP are irrigated with antimicrobial solutions, this may be a simple and safe method of reducing the risk of any covert infection being passed to HCWs through droplet transmission or direct contact. Alternatively, the use of antimicrobial solutions by the HCW may decrease the chance of them acquiring COVID-19 infection. However, the use of such antimicrobial solutions may be associated with harms related to the toxicity of the solutions themselves or alterations in the natural microbial flora of the mouth or nose.\nObjectives\nTo assess the benefits and harms of antimicrobial mouthwashes and nasal sprays administered to HCWs and/or patients when undertaking AGPs on patients without suspected or confirmed COVID-19 infection.\nSearch methods\nInformation Specialists from Cochrane ENT and Cochrane Oral Health searched the Central Register of Controlled Trials (CENTRAL 2020, Issue 6); Ovid MEDLINE; Ovid Embase and additional sources for published and unpublished trials. The date of the search was 1 June 2020.\nSelection criteria\nThis is a question that urgently requires evidence, however at the present time we did not anticipate finding many completed RCTs. We therefore planned to include the following types of studies: randomised controlled trials (RCTs); quasi-RCTs; non-randomised controlled trials; prospective cohort studies; retrospective cohort studies; cross-sectional studies; controlled before-and-after studies. We set no minimum duration for the studies.\nWe sought studies comparing any antimicrobial mouthwash and/or nasal spray (alone or in combination) at any concentration, delivered to the patient or HCW before and/or after an AGP.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Our primary outcomes were: 1) incidence of symptomatic or test-positive COVID-19 infection in HCWs or patients; 2) significant adverse event: anosmia (or disturbance in sense of smell). Our secondary outcomes were: 3) COVID-19 viral content of aerosol (when present); 4) change in COVID-19 viral load at site(s) of irrigation; 5) other adverse events: changes in microbiome in oral cavity, nasal cavity, oro- or nasopharynx; 6) other adverse events: allergy, irritation/burning of nasal, oral or oropharyngeal mucosa (e.g. erosions, ulcers, bleeding), long-term staining of mucous membranes or teeth, accidental ingestion. We planned to use GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe found no completed studies to include in this review.\nAuthors' conclusions\nWe identified no studies for inclusion in this review, nor any ongoing studies. The absence of completed studies is not surprising given the relatively recent emergence of COVID-19 infection. 
However, we are disappointed that this important clinical question is not being addressed by ongoing studies.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We last searched for evidence on 1 June 2020. This review covered research that was available up to that date, but did not consider any evidence that may have been produced since then."
}
] |
query-laysum
|
6038
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review first published in 2014 and last updated in 2020.\nFor nearly 30% of people with epilepsy, current treatments do not control seizures. Stiripentol is an antiepileptic drug (AED) that was developed in France and was approved by the European Medicines Agency (EMA) in 2007 as an adjunctive therapy with valproate and clobazam for the treatment of Dravet syndrome.\nObjectives\nTo evaluate the efficacy and tolerability of stiripentol as add-on treatment for people with drug-resistant focal epilepsy who are taking AEDs.\nSearch methods\nFor the latest update, we searched the Cochrane Register of Studies (CRS Web) and MEDLINE on 28 March 2022. We contacted the manufacturer of stiripentol and epilepsy experts to identify published, unpublished and ongoing trials.\nSelection criteria\nRandomised controlled trials of add-on stiripentol in people with drug-resistant focal epilepsy.\nData collection and analysis\nReview authors independently selected trials for inclusion and extracted data. We investigated outcomes including 50% or greater reduction in seizure frequency, seizure freedom, adverse effects, treatment withdrawal and changes in quality of life.\nMain results\nOn the basis of our selection criteria, we included no new studies in the present review update. We included only one study from the original review (32 children with focal epilepsy). This study adopted a responder-enriched design and found no clear evidence of a reduction of 50% or more in seizure frequency (risk ratio (RR) 1.51, 95% confidence interval (CI) 0.81 to 2.82; low-certainty evidence) and no clear evidence of seizure freedom (RR 1.18, 95% CI 0.31 to 4.43; low-certainty evidence) when comparing add-on stiripentol with placebo.\nStiripentol led to a greater risk of adverse effects considered as a whole (RR 2.65, 95% CI 1.08 to 6.47; low-certainty evidence). When we considered specific adverse effects, CIs were very wide and showed the possibility of substantial increases and small reductions in risks of neurological adverse effects (RR 2.65, 95% CI 0.88 to 8.01; low-certainty evidence). Researchers noted no clear reduction in the risk of study withdrawal (RR 0.66, 95% CI 0.30 to 1.47; low-certainty evidence), which was high in both groups (53.3% in placebo group and 35.3% in stiripentol group; low-certainty evidence).\nThe external validity of this study was limited because only responders to stiripentol (i.e. participants experiencing a decrease in seizure frequency of 50% or greater during an open prerandomisation phase compared with baseline) were included in the randomised, add-on, placebo-controlled, double-blind phase. Furthermore, carry-over and withdrawal effects probably influenced outcomes related to seizure frequency. Very limited information derived from the only included study shows that adverse effects considered as a whole may occur more often with add-on stiripentol than with add-on placebo.\nAuthors' conclusions\nWe have found no new studies since the last version of this review was published. Hence, we have made no changes to the conclusions as presented in previous versions. We can draw no conclusions to support the use of stiripentol as add-on treatment for drug-resistant focal epilepsy. Additional large, randomised, well-conducted trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "On the basis of our review criteria, we included only one study in the review. In the included study, 67 children were given a dummy medicine (placebo) in addition to their usual treatment for one month, and were then given add-on stiripentol (without the placebo) for four months. The 32 children who responded to stiripentol (who had half the number of seizures or fewer when taking stiripentol compared with during the first month) were included in the next part of the study. Seventeen of these children continued add-on stiripentol, while the other 15 received add-on placebo. This stage of the study was randomised and double-blind (the treatment was allocated at random, and neither the children nor the doctors knew who was receiving what treatment).\nAfter two months, the study authors found no clear differences between the two treatment groups in terms of seizure reduction (the number of children with half the number of seizures or fewer) or seizure freedom (the number of children who had no seizures). However, the children taking stiripentol were more likely to have harmful side effects than those taking the placebo. The results of the study may not apply to a more general population because it only included children who responded to stiripentol."
}
] |
query-laysum
|
6039
|
[
{
"role": "user",
"content": "Abstract: Background\nMany employers and other stakeholders believe that health examinations of job applicants prevent occupational diseases and sickness absence. This is an update of the original Cochrane review (Mahmud 2010).\nObjectives\nTo evaluate the effectiveness of pre-employment examinations of job applicants in preventing occupational injury, disease and sick leave compared to no intervention or alternative interventions.\nSearch methods\nWe searched CENTRAL (the Cochrane Library), MEDLINE, EMBASE, CINAHL, PsycINFO and PEDro (up to 31 March 2015). We did not impose any restrictions on date, language or publication type.\nSelection criteria\nWe included randomised controlled trials (RCTs), controlled before-after (CBA) studies, and interrupted time-series (ITS) studies of health examinations to prevent occupational diseases and injuries in job applicants in comparison to no intervention or alternative interventions.\nData collection and analysis\nAll five review authors independently selected studies from the updated search for inclusion. We retrieved two new studies with the updated search from 1 April 2008 to 31 March 2015, resulting in a total of eleven studies.\nMain results\nWe included two RCTs, seven CBA studies and two ITS studies. Nine studies with 7820 participants evaluated the screening process of pre-employment examinations as a whole, and two studies with 2164 participants evaluated the measures to mitigate the risks found following the screening process. The studies were too heterogeneous for statistical pooling of results. We rated the quality of the evidence for all outcomes as very low quality. The two new CBA studies both used historical controls and both had a high risk of bias.\nOf those studies that evaluated the screening process, there is very low quality evidence based on one RCT that a general examination for light duty work may not reduce the risk for sick leave (mean difference (MD) -0.09, 95% confidence interval (CI) -0.47 to 0.29). For army recruits, there is very low quality evidence based on one CBA study that there is a positive effect on fitness for duty after 12 months follow-up (odds ratio (OR) 0.40, 95% CI 0.19 to 0.85).\nWe found inconsistent evidence of an effect of job-focused pre-employment examinations on the risk of musculoskeletal injuries in comparison with general or no pre-employment examination based on one RCT with high risk of bias, and four CBA studies. There is very low quality evidence based on one ITS study that incorporation of a bronchial challenge test may decrease occupational asthma (trend change -2.6, 95% CI -3.6 to -1.5) compared to a general pre-employment examination with lung function tests.\nPre-employment examinations may also result in a rejection of the applicant for the new job. 
In six of seven studies, the rates of rejecting job applicants increased because of the studied examinations, on average from 2% to 35%; in one study they did not.\nThere is very low quality evidence based on two CBA studies that risk mitigation among applicants considered not fit for work at the pre-employment examination may result in a similar risk of work-related musculoskeletal injury during follow-up compared to workers considered fit for work at the health examination.\nAuthors' conclusions\nThere is very low quality evidence that a general examination for light duty work may not reduce the risk for sick leave, but may have a positive effect on fitness for duty for army recruits after 12 months follow-up.\nThere is inconsistent evidence of an effect of job-focused pre-employment examinations on the risk of musculoskeletal injuries in comparison with general or no pre-employment examination. There is very low quality evidence that incorporation of a bronchial challenge test may decrease occupational asthma compared to a general pre-employment examination with lung function tests. Pre-employment examinations may result in an increased rate of rejection of job applicants, as found in six out of seven studies.\nRisk mitigation based on the result of pre-employment examinations may be effective in reducing an increased risk for occupational injuries based on very low quality evidence. This evidence supports the current policy to restrict pre-employment examinations to only job-specific examinations. Better quality evaluation studies on pre-employment examinations are necessary, including the evaluation of the benefits of risk mitigation, given the effect on health and on the financial situation for those employees who do not pass the pre-employment examination.\n\nGiven the provided abstract, please respond to the following query: \"How has this been studied?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We conducted a systematic search for studies that had been published up to 31 March 2015. We found eleven studies, including 7820 people that evaluated the whole process of health examinations, including rejection of applicants with higher risks of occupational disease, injury or sick leave."
}
] |
query-laysum
|
6040
|
[
{
"role": "user",
"content": "Abstract: Background\nLow back pain is a common presentation across different healthcare settings. Clinicians need to confidently be able to screen and identify people presenting with low back pain with a high suspicion of serious or specific pathology (e.g. vertebral fracture). Patients identified with an increased likelihood of having a serious pathology will likely require additional investigations and specific treatment. Guidelines recommend a thorough history and clinical assessment to screen for serious pathology as a cause of low back pain. However, the diagnostic accuracy of recommended red flags (e.g. older age, trauma, corticosteroid use) remains unclear, particularly those used to screen for vertebral fracture.\nObjectives\nTo assess the diagnostic accuracy of red flags used to screen for vertebral fracture in people presenting with low back pain. Where possible, we reported results of red flags separately for different types of vertebral fracture (i.e. acute osteoporotic vertebral compression fracture, vertebral traumatic fracture, vertebral stress fracture, unspecified vertebral fracture).\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 26 July 2022.\nSelection criteria\nWe considered primary diagnostic studies if they compared results of history taking or physical examination (or both) findings (index test) with a reference standard test (e.g. X-ray, magnetic resonance imaging (MRI), computed tomography (CT), single-photon emission computerised tomography (SPECT)) for the identification of vertebral fracture in people presenting with low back pain. We included index tests that were presented individually or as part of a combination of tests.\nData collection and analysis\nTwo review authors independently extracted data for diagnostic two-by-two tables from the publications or reconstructed them using information from relevant parameters to calculate sensitivity, specificity, and positive (+LR) and negative (−LR) likelihood ratios with 95% confidence intervals (CIs). We extracted aspects of study design, characteristics of the population, index test, reference standard, and type of vertebral fracture. Meta-analysis was not possible due to heterogeneity of studies and index tests, therefore the analysis was descriptive. We calculated sensitivity, specificity, and LRs for each test and used these as an indication of clinical usefulness. Two review authors independently conducted risk of bias and applicability assessment using the QUADAS-2 tool.\nMain results\nThis review is an update of a previous Cochrane Review of red flags to screen for vertebral fracture in people with low back pain. We included 14 studies in this review, six based in primary care, five in secondary care, and three in tertiary care. Four studies reported on 'osteoporotic vertebral fractures', two studies reported on 'vertebral compression fracture', one study reported on 'osteoporotic and traumatic vertebral fracture', two studies reported on 'vertebral stress fracture', and five studies reported on 'unspecified vertebral fracture'. Risk of bias was only rated as low in one study for the domains reference standard and flow and timing. The domain patient selection had three studies and the domain index test had six studies rated at low risk of bias. Meta-analysis was not possible due to heterogeneity of the data. 
Results from single studies suggest only a small number of the red flags investigated may be informative.\nIn the primary healthcare setting, results from single studies suggest 'trauma' demonstrated informative +LRs (range: 1.93 to 12.85) for 'unspecified vertebral fracture' and 'osteoporotic vertebral fracture' (+LR: 6.42, 95% CI 2.94 to 14.02). Results from single studies suggest 'older age' demonstrated informative +LRs for studies in primary care for 'unspecified vertebral fracture' (older age greater than 70 years: 11.19, 95% CI 5.33 to 23.51). Results from single studies suggest 'corticosteroid use' may be an informative red flag in primary care for 'unspecified vertebral fracture' (+LR range: 3.97, 95% CI 0.20 to 79.15 to 48.50, 95% CI 11.48 to 204.98) and 'osteoporotic vertebral fracture' (+LR: 2.46, 95% CI 1.13 to 5.34); however, diagnostic values varied and CIs were imprecise. Results from a single study suggest red flags as part of a combination of index tests such as 'older age and female gender' in primary care demonstrated informative +LRs for 'unspecified vertebral fracture' (16.17, 95% CI 4.47 to 58.43).\nIn the secondary healthcare setting, results from a single study suggest 'trauma' demonstrated informative +LRs for 'unspecified vertebral fracture' (+LR: 2.18, 95% CI 1.86 to 2.54) and 'older age' demonstrated informative +LRs for 'osteoporotic vertebral fracture' (older age greater than 75 years: 2.51, 95% CI 1.48 to 4.27). Results from a single study suggest red flags as part of a combination of index tests such as 'older age and trauma' in secondary care demonstrated informative +LRs for 'unspecified vertebral fracture' (+LR: 4.35, 95% CI 2.92 to 6.48). Results from a single study suggest when '4 of 5 tests' were positive in secondary care, they demonstrated informative +LRs for 'osteoporotic vertebral fracture' (+LR: 9.62, 95% CI 5.88 to 15.73).\nIn the tertiary care setting, results from a single study suggest 'presence of contusion/abrasion' was informative for 'vertebral compression fracture' (+LR: 31.09, 95% CI 18.25 to 52.96).\nAuthors' conclusions\nThe available evidence suggests that only a few red flags are potentially useful in guiding clinical decisions to further investigate people suspected to have a vertebral fracture. Most red flags were not useful as screening tools to identify vertebral fracture in people with low back pain. In primary care, 'older age' was informative for 'unspecified vertebral fracture', and 'trauma' and 'corticosteroid use' were both informative for 'unspecified vertebral fracture' and 'osteoporotic vertebral fracture'. In secondary care, 'older age' was informative for 'osteoporotic vertebral fracture' and 'trauma' was informative for 'unspecified vertebral fracture'. In tertiary care, 'presence of contusion/abrasion' was informative for 'vertebral compression fracture'. Combinations of red flags were also informative and may be more useful than individual tests alone. Unfortunately, the challenge to provide clear guidance on which red flags should be used routinely in clinical practice remains. Further research with primary studies is needed to improve and consolidate our current recommendations for screening for vertebral fractures to guide clinical care.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We updated a previous Cochrane Review. We searched for studies that investigated the accuracy of red flags across different healthcare settings. We included studies that compared results of history taking and physical examination (or both) (known as index tests or red flags) with different types of imaging (known as the reference standard) to identify spinal fractures in people with low back pain. We also included studies if they reported on the results of red flags separately for the different types of spinal fractures."
}
] |
query-laysum
|
6041
|
[
{
"role": "user",
"content": "Abstract: Background\nBreast milk is well recognised as the best food source for infants. The impact of antenatal breastfeeding (BF) education on the duration of BF has not been evaluated.\nObjectives\nTo assess the effectiveness of antenatal breastfeeding (BF) education for increasing BF initiation and duration.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register on 1 March 2016, CENTRAL (The Cochrane Library, 2016, Issue 3), MEDLINE (1966 to 1 March 2016) and Scopus (January 1985 to 1 March 2016). We contacted experts and searched reference lists of retrieved articles.\nSelection criteria\nAll identified published, unpublished and ongoing randomised controlled trials (RCTs) assessing the effect of formal antenatal BF education or comparing two different methods of formal antenatal BF education, on the duration of BF. We included RCTs that only included antenatal interventions and excluded those that combined antenatal and intrapartum or postpartum BF education components. Cluster-randomised trials were included in this review. Quasi-randomised trials were not eligible for inclusion.\nData collection and analysis\nWe assessed all potential studies identified as a result of the search strategy. Two review authors extracted data from each included study using the agreed form and assessed risk of bias. We resolved discrepancies through discussion. We assessed the quality of the evidence using the GRADE approach.\nMain results\nThis review update includes 24 studies (10,056 women). Twenty studies (9789 women) contribute data to analyses. Most studies took place in high-income countries such as the USA, UK, Canada and Australia. In the first five comparisons, we display the included trials according to type of intervention without pooling data. For the 'Summary of findings' we pooled data for a summary effect.\nFive included studies were cluster-randomised trials: all of these adjusted data and reported adjustments as odds ratios (OR). We have analysed the data using the generic inverse variance method and presented results as odds ratios, because we were unable to derive a cluster-adjusted risk ratio from the published cluster-trial. We acknowledge that the use of odds ratio prevents the pooling of these cluster trials in our main analyses.\nOne method of BF education with standard (routine) care\nThere were no group differences for duration of any BF in days or weeks. There was no evidence that interventions improved the proportion of women with any BF or exclusive BF at three or six months. Single trials of different interventions were unable to show that education improved initiation of BF, apart from one small trial at high risk of attrition bias. Many trial results marginally favoured the intervention but had wide confidence intervals crossing the line of no effect. BF complications such as mastitis and other BF problems were similar in treatment arms in single trials reporting these outcomes.\nMultiple methods of BF education versus standard care\nFor all trials included in this comparison we have presented the cluster-adjusted odds ratios as reported in trial publications. One three-arm study found the intervention of BF booklet plus video plus Lactation Consultant versus standard care improved the proportion of women exclusively BF at three months (OR 2.60, 95% CI 1.25 to 5.40; women = 159) and marginally at six months (OR 2.40, 95% CI 1.00 to 5.76; women = 175). 
For the same trial, an intervention arm without a lactation consultant but with the BF booklet and video did not have the same effect on proportion of women exclusively BF at three months (OR 1.80, 95% CI 0.80 to 4.05; women = 159) or six months (OR 0.90, 95% CI 0.30 to 2.70; women = 184). One study compared monthly BF sessions and weekly cell phone message versus standard care and reported improvements in the proportion of women exclusively BF at both three and six months (three months OR 1.80, 95% CI 1.10 to 2.95; women = 390; six months OR 2.40, 95% CI 1.40 to 4.11; women = 390). One study found monthly BF sessions and weekly cell phone messages improved initiation of BF over standard care (OR 2.61, 95% CI 1.61 to 4.24; women = 380).\nBF education session versus standard care, pooled analyses for 'Summary of findings' (SoF)\nThis comparison does not include cluster-randomised trials reporting adjusted odds ratios. We did not downgrade any evidence for trials' lack of blinding; no trial had adequate blinding of staff and participants. The SoF table presents risk ratios for all outcomes analysed. For proportion of women exclusively BF there is no evidence that antenatal BF education improved BF at three months (RR 1.06, 95% CI 0.90 to 1.25; women = 822; studies = 3; moderate quality evidence) or at six months (RR 1.07, 95% CI 0.87 to 1.30; women = 2161; studies = 4; moderate quality evidence). For proportion of women with any BF there were no group differences in BF at three (average RR 0.98, 95% CI 0.82 to 1.18; women = 654; studies = 2; I² = 60%; low-quality evidence) or six months (average RR 1.05, 95% CI 0.90 to 1.23; women = 1636; studies = 4; I² = 61%; high-quality evidence). There was no evidence that antenatal BF education could improve initiation of BF (average RR 1.01, 95% CI 0.94 to 1.09; women = 3505; studies = 8; I² = 69%; high-quality evidence). Where we downgraded evidence this was due to small sample size or wide confidence intervals crossing the line of no effect, or both.\nThere was insufficient data for subgroup analysis of mother's occupation or education.\nAuthors' conclusions\nThere was no conclusive evidence supporting any antenatal BF education for improving initiation of BF, proportion of women giving any BF or exclusively BF at three or six months or the duration of BF. There is an urgent need to conduct a high-quality, randomised controlled study to evaluate the effectiveness and adverse effects of antenatal BF education, especially in low- and middle-income countries. Evidence in this review is primarily relevant to high-income settings.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 24 studies with 10,056 women in the review, and 20 studies involving 9789 women contributed data to the analyses. Most studies took place in high-income countries including the USA, Canada, UK and Australia. Peer counselling, lactation consultation and formal BF education during pregnancy do not appear to improve uptake of BF or duration. However, some larger trials in different settings (one in Nigeria and one in Singapore) had some evidence that education may help."
}
] |
query-laysum
|
6042
|
[
{
"role": "user",
"content": "Abstract: Background\nHip fractures are a major healthcare problem, presenting a challenge and burden to individuals and healthcare systems. The number of hip fractures globally is rising. The majority of extracapsular hip fractures are treated surgically.\nObjectives\nTo assess the relative effects (benefits and harms) of all surgical treatments used in the management of extracapsular hip fractures in older adults, using a network meta-analysis of randomised trials, and to generate a hierarchy of interventions according to their outcomes.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Web of Science and five other databases in July 2020.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing different treatments for fragility extracapsular hip fractures in older adults. We included internal and external fixation, arthroplasties and non-operative treatment. We excluded studies of hip fractures with specific pathologies other than osteoporosis or resulting from high-energy trauma.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion. One review author completed data extraction which was checked by a second review author. We collected data for three outcomes at different time points: mortality and health-related quality of life (HRQoL) - both reported within 4 months, at 12 months and after 24 months of surgery, and unplanned return to theatre (at end of study follow-up).\nWe performed a network meta-analysis (NMA) with Stata software, using frequentist methods, and calculated the differences between treatments using risk ratios (RRs) and standardised mean differences (SMDs) and their corresponding 95% confidence intervals (CIs). We also performed direct comparisons using the same codes.\nMain results\nWe included 184 studies (160 RCTs and 24 quasi-RCTs) with 26,073 participants with 26,086 extracapsular hip fractures in the review. The mean age in most studies ranged from 60 to 93 years, and 69% were women.\nAfter discussion with clinical experts, we selected nine nodes that represented the best balance between clinical plausibility and efficiency of the networks: fixed angle plate (dynamic and static), cephalomedullary nail (short and long), condylocephalic nail, external fixation, hemiarthroplasty, total hip arthroplasty (THA) and non-operative treatment. Seventy-three studies (with 11,126 participants) with data for at least two of these treatments contributed to the NMA.\nWe selected the dynamic fixed angle plate as a reference treatment against which other treatments were compared. This was a common treatment in the networks, providing a clinically appropriate comparison.\nWe downgraded the certainty of the evidence for serious and very serious risks of bias, and because some of the estimates included the possibility of transitivity owing to the proportion of stable and unstable fractures between treatment comparisons. We also downgraded if we noted evidence of inconsistency in direct or indirect estimates from which the network estimate was derived. Most estimates included the possibility of benefits and harms, and we downgraded the evidence for these treatments for imprecision.\nOverall, 20.2% of participants who received the reference treatment had died by 12 months after surgery. We noted no evidence of any differences in mortality at this time point between the treatments compared. Effect estimates of all treatments included plausible benefits as well as harms. 
Short cephalomedullary nails had the narrowest confidence interval (CI), with 7 fewer deaths (26 fewer to 15 more) per 1000 participants, compared to the reference treatment (risk ratio (RR) 0.97, 95% CI 0.87 to 1.07). THA had the widest CI, with 62 fewer deaths (177 fewer to 610 more) per 1000 participants, compared to the reference treatment (RR 0.69, 95% CI 0.12 to 4.03). The certainty of the evidence for all treatments was low to very low. Although we ranked the treatments, this ranking should be interpreted cautiously because of the imprecision in all the network estimates for these treatments.\nOverall, 4.3% of participants who received the reference treatment had unplanned return to theatre. Compared to this treatment, we found very low-certainty evidence that 58 more participants (14 to 137 more) per 1000 participants returned to theatre if they were treated with a static fixed angle plate (RR 2.48, 95% CI 1.36 to 4.50), and 91 more participants (37 to 182 more) per 1000 participants returned to theatre if treated with a condylocephalic nail (RR 3.33, 95% CI 1.95 to 5.68). We also found that these treatments were ranked as having the highest probability of unplanned return to theatre. In the remaining treatments, we noted no evidence of any differences in unplanned return to theatre, with effect estimates including benefits as well as harms. The certainty of the evidence for these other treatments ranged from low to very low.\nWe did not use GRADE to assess the certainty of the evidence for early mortality, but our findings were similar to those for 12-month mortality, with no evidence of any differences in treatments when compared to dynamic fixed angle plate. Very few studies reported HRQoL and we were unable to build networks from these studies and perform network meta-analysis.\nAuthors' conclusions\nAcross the networks, we found that there was considerable variability in the ranking of each treatment such that there was no one outstanding, or subset of outstanding, superior treatments. However, static implants such as condylocephalic nails and static fixed angle plates did yield a higher risk of unplanned return to theatre. We had insufficient evidence to determine the effects of any treatments on HRQoL, and this review includes data for only two outcomes. More detailed pairwise comparisons of some of the included treatments are reported in other Cochrane Reviews in this series. Short cephalomedullary nails versus dynamic fixed angle plates contributed the most evidence to each network, and our findings indicate that there may be no difference between these treatments. These data included people with both stable and unstable extracapsular fractures. At this time, there are too few studies to draw any conclusions regarding the benefits or harms of arthroplasty or external fixation for extracapsular fracture in older adults.\nFuture research could focus on the benefits and harms of arthroplasty interventions compared with internal fixation using a dynamic implant.\n\nGiven the provided abstract, please respond to the following query: \"Hip fractures in older people\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A hip fractures is a break at the top of the leg bone. There are two types of hip fractures; in this review, we included people with a break just outside the hip joint. The other type of hip fracture is a break just below the ball and socket joint - we reviewed these fractures in another review. Both types of broken hip are common in older adults whose bones may be fragile because of a condition called osteoporosis."
}
] |
query-laysum
|
6043
|
[
{
"role": "user",
"content": "Abstract: Background\nOperations on structures in the chest (usually the lungs) involve cutting between the ribs (thoracotomy). Severe post-thoracotomy pain can result from pleural (lung lining) and muscular damage, costovertebral joint (ribcage) disruption and intercostal nerve (nerves that run along the ribs) damage during surgery. Poor pain relief after surgery can impede recovery and increase the risks of developing complications such as lung collapse, chest infections and blood clots due to ineffective breathing and clearing of secretions. Effective management of acute pain following thoracotomy may prevent these complications and reduce the likelihood of developing chronic pain. A multi-modal approach to analgesia is widely employed by thoracic anaesthetists using a combination of regional anaesthetic blockade and systemic analgesia, with both non-opioid and opioid medications and local anaesthesia blockade.\nThere is some evidence that blocking the nerves as they emerge from the spinal column (paravertebral block, PVB) may be associated with a lower risk of major complications in thoracic surgery but the majority of thoracic anaesthetists still prefer to use a thoracic epidural blockade (TEB) as analgesia for their patients undergoing thoracotomy. In order to bring about a change in practice, anaesthetists need a review that evaluates the risk of all major complications associated with thoracic epidural and paravertebral block in thoracotomy.\nObjectives\nTo compare the two regional techniques of TEB and PVB in adults undergoing elective thoracotomy with respect to:\n1. analgesic efficacy;\n2. the incidence of major complications (including mortality);\n3. the incidence of minor complications;\n4. length of hospital stay;\n5. cost effectiveness.\nSearch methods\nWe searched for studies in the Cochrane Central Register of Controlled Trials (CENTRAL 2013, Issue 9); MEDLINE via Ovid (1966 to 16 October 2013); EMBASE via Ovid (1980 to 16 October 2013); CINAHL via EBSCO host (1982 to 16 October 2013); and reference lists of retrieved studies. We handsearched the Journal of Cardiothoracic Surgery and Journal of Cardiothoracic and Vascular Anesthesia (16 October 2013). We reran the search on 31st January 2015. We found one additional study which is awaiting classification and will be addressed when we update the review.\nSelection criteria\nWe included all randomized controlled trials (RCTs) comparing PVB with TEB in thoracotomy, including upper gastrointestinal surgery.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors (JY and SG) independently assessed the studies for inclusion and then extracted data as eligible for inclusion in qualitative and quantitative synthesis (meta-analysis).\nMain results\nWe included 14 studies with a total of 698 participants undergoing thoracotomy. There are two studies awaiting classification. The studies demonstrated high heterogeneity in insertion and use of both regional techniques, reflecting real-world differences in the anaesthesia techniques. Overall, the included studies have a moderate to high potential for bias, lacking details of randomization, group allocation concealment or arrangements to blind participants or outcome assessors. There was low to very low-quality evidence that showed no significant difference in 30-day mortality (2 studies, 125 participants. 
risk ratio (RR) 1.28, 95% confidence interval (CI) 0.39 to 4.23, P value = 0.68) and major complications (cardiovascular: 2 studies, 114 participants. Hypotension RR 0.30, 95% CI 0.01 to 6.62, P value = 0.45; arrhythmias RR 0.36, 95% CI 0.04 to 3.29, P value = 0.36; myocardial infarction RR 3.19, 95% CI 0.13 to 76.42, P value = 0.47); respiratory: 5 studies, 280 participants. RR 0.62, 95% CI 0.26 to 1.52, P value = 0.30). There was moderate-quality evidence that showed comparable analgesic efficacy across all time points both at rest and after coughing or physiotherapy (14 studies, 698 participants). There was moderate-quality evidence that showed PVB had a better minor complication profile than TEB including hypotension (8 studies, 445 participants. RR 0.16, 95% CI 0.07 to 0.38, P value < 0.0001), nausea and vomiting (6 studies, 345 participants. RR 0.48, 95% CI 0.30 to 0.75, P value = 0.001), pruritus (5 studies, 249 participants. RR 0.29, 95% CI 0.14 to 0.59, P value = 0.0005) and urinary retention (5 studies, 258 participants. RR 0.22, 95% CI 0.11 to 0.46, P value < 0.0001). There was insufficient data on chronic pain (six or 12 months). There was no difference found in length of hospital stay (3 studies, 124 participants). We found no studies that reported costs.\nAuthors' conclusions\nParavertebral blockade reduced the risks of developing minor complications compared to thoracic epidural blockade. Paravertebral blockade was as effective as thoracic epidural blockade in controlling acute pain. There was a lack of evidence in other outcomes. There was no difference in 30-day mortality, major complications, or length of hospital stay. There was insufficient data on chronic pain and costs. Results from this review should be interpreted with caution due to the heterogeneity of the included studies and the lack of reliable evidence. Future studies in this area need well-conducted, adequately-powered RCTs that focus not only on acute pain but also on major complications, chronic pain, length of stay and costs.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Operations on structures in the chest (usually the lungs) involve cutting between the ribs (thoracotomy) resulting in severe pain. Poor pain relief post-surgery can slow down recovery and increase risks of developing complications. Effective management of acute pain following thoracotomy may prevent these complications and reduce the likelihood of developing long-term pain. We wanted to discover whether blocking the nerves as they emerge from the spinal column (paravertebral block, (PVB)) was better or worse than using central neuraxial nerve block (thoracic epidural block, (TEB)).\nThis evidence is current to 16th October 2013. We reran the search on 31st January 2015. We found one additional study which is awaiting classification and which we will include when we update the review."
}
] |
query-laysum
|
6044
|
[
{
"role": "user",
"content": "Abstract: Background\nSpinal cord stimulation (SCS) is a surgical intervention used to treat persistent low back pain. SCS is thought to modulate pain by sending electrical signals via implanted electrodes into the spinal cord. The long term benefits and harms of SCS for people with low back pain are uncertain.\nObjectives\nTo assess the effects, including benefits and harms, of SCS for people with low back pain.\nSearch methods\nOn 10 June 2022, we searched CENTRAL, MEDLINE, Embase, and one other database for published trials. We also searched three clinical trials registers for ongoing trials.\nSelection criteria\nWe included all randomised controlled trials and cross-over trials comparing SCS with placebo or no treatment for low back pain. The primary comparison was SCS versus placebo, at the longest time point measured in the trials. Major outcomes were mean low back pain intensity, function, health-related quality of life, global assessment of efficacy, withdrawals due to adverse events, adverse events, and serious adverse events. Our primary time point was long-term follow-up (≥ 12 months).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 13 studies with 699 participants: 55% of participants were female; mean age ranged from 47 to 59 years; and all participants had chronic low back pain with mean duration of symptoms ranging from five to 12 years. Ten cross-over trials compared SCS with placebo. Three parallel-group trials assessed the addition of SCS to medical management.\nMost studies were at risk of performance and detection bias from inadequate blinding and selective reporting bias. The placebo-controlled trials had other important biases, including lack of accounting for period and carryover effects. Two of the three parallel trials assessing SCS as an addition to medical management were at risk of attrition bias, and all three had substantial cross-over to the SCS group for time points beyond six months. In the parallel-group trials, we considered the lack of placebo control to be an important source of bias.\nNone of our included studies evaluated the impact of SCS on mean low back pain intensity in the long term (≥ 12 months). The studies most often assessed outcomes in the immediate term (less than one month). At six months, the only available evidence was from a single cross-over trial (50 participants). There was moderate-certainty evidence that SCS probably does not improve back or leg pain, function, or quality of life compared with placebo. Pain was 61 points (on a 0- to 100-point scale, 0 = no pain) at six months with placebo, and 4 points better (8.2 points better to 0.2 points worse) with SCS. Function was 35.4 points (on a 0- to 100-point scale, 0 = no disability or best function) at six months with placebo, and 1.3 points better (3.9 points better to 1.3 points worse) with SCS. Health-related quality of life was 0.44 points out of 1 (0 to 1 index, 0 = worst quality of life) at six months with placebo, and 0.04 points better (0.16 points better to 0.08 points worse) with SCS. In that same study, nine participants (18%) experienced adverse events and four (8%) required revision surgery. Serious adverse events with SCS included infections, neurological damage, and lead migration requiring repeated surgery. 
We could not provide effect estimates of the relative risks as events were not reported for the placebo period.\nIn parallel trials assessing SCS as an addition to medical management, it is uncertain whether, in the medium or long term, SCS can reduce low back pain, leg pain, or health-related quality of life, or if it increases the number of people reporting a 50% improvement or better, because the certainty of the evidence was very low. Low-certainty evidence suggests that adding SCS to medical management may slightly improve function and slightly reduce opioid use. In the medium term, mean function (0- to 100-point scale; lower is better) was 16.2 points better with the addition of SCS to medical management compared with medical management alone (95% confidence interval (CI) 19.4 points better to 13.0 points better; I² = 95%; 3 studies, 430 participants; low-certainty evidence). The number of participants reporting opioid medicine use was 15% lower with the addition of SCS to medical management (95% CI 27% lower to 0% lower; I² = 0%; 2 studies, 290 participants; low-certainty evidence). Adverse events with SCS were poorly reported but included infection and lead migration. One study found that, at 24 months, 13 of 42 people (31%) receiving SCS required revision surgery. It is uncertain to what extent the addition of SCS to medical management increases the risk of withdrawals due to adverse events, adverse events, or serious adverse events, because the certainty of the evidence was very low.\nAuthors' conclusions\nData in this review do not support the use of SCS to manage low back pain outside a clinical trial. Current evidence suggests SCS probably does not have sustained clinical benefits that would outweigh the costs and risks of this surgical intervention.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Low back pain is a leading cause of disability around the world. Spinal cord stimulation, a surgical treatment involving implantation of a device that applies electric impulses to the spinal cord, has been suggested to improve pain in people with long-term low back pain. This study aimed to review evidence regarding the benefits and harms of this procedure for people with low back pain."
}
] |
query-laysum
|
6045
|
[
{
"role": "user",
"content": "Abstract: Background\nPeriodontitis is a highly prevalent, chronic inflammation that causes damage to the soft tissues and bones supporting the teeth. Conventional treatment is quadrant scaling and root planing (the second step of periodontal therapy), which comprises scaling and root planing of teeth in one quadrant of the mouth at a time, with the four different sessions separated by at least one week. Alternative protocols for anti-infective periodontal therapy have been introduced to help enhance treatment outcomes: full-mouth scaling (subgingival instrumentation of all quadrants within 24 hours), or full-mouth disinfection (subgingival instrumentation of all quadrants in 24 hours plus adjunctive antiseptic). We use the older term 'scaling and root planing' (SRP) interchangeably with the newer term 'subgingival instrumentation' in this iteration of the review, which updates one originally published in 2008 and first updated in 2015.\nObjectives\nTo evaluate the clinical effects of full-mouth scaling or full-mouth disinfection (within 24 hours) for the treatment of periodontitis compared to conventional quadrant subgingival instrumentation (over a series of visits at least one week apart) and to evaluate whether there was a difference in clinical effects between full-mouth disinfection and full-mouth scaling.\nSearch methods\nAn information specialist searched five databases up to 17 June 2021 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) lasting at least three months that evaluated full-mouth scaling and root planing within 24 hours, with or without adjunctive use of an antiseptic, compared to conventional quadrant SRP (control). Participants had a clinical diagnosis of (chronic) periodontitis according to the International Classification of Periodontal Diseases from 1999. A new periodontitis classification was launched in 2018; however, we used the 1999 classification for inclusion or exclusion of studies, as most studies used it. We excluded studies of people with systemic disorders, taking antibiotics or with the older diagnosis of 'aggressive periodontitis'.\nData collection and analysis\nSeveral review authors independently conducted data extraction and risk of bias assessment (based on randomisation method, allocation concealment, examiner blinding and completeness of follow-up). Our primary outcomes were tooth loss and change in probing pocket depth (PPD); secondary outcomes were change in probing attachment (i.e. clinical attachment level (CAL)), bleeding on probing (BOP), adverse events and pocket closure (the number/proportion of sites with PPD of 4 mm or less after treatment). We followed Cochrane's methodological guidelines for data extraction and analysis.\nMain results\nWe included 20 RCTs, with 944 participants, in this updated review. No studies assessed the primary outcome tooth loss. Thirteen trials compared full-mouth scaling and root planing within 24 hours without the use of antiseptic (FMS) versus control, 13 trials compared full-mouth scaling and root planing within 24 hours with adjunctive use of an antiseptic (FMD) versus control, and six trials compared FMS with FMD.\nOf the 13 trials comparing FMS versus control, we assessed three at high risk of bias, six at low risk of bias and four at unclear risk of bias. We assessed our certainty about the evidence as low or very low for the outcomes in this comparison. 
There was no evidence for a benefit for FMS over control for change in PPD, gain in CAL or reduction in BOP at six to eight months (PPD: mean difference (MD) 0.03 mm, 95% confidence interval (CI) –0.14 to 0.20; 5 trials, 148 participants; CAL: MD 0.10 mm, 95% CI –0.05 to 0.26; 5 trials, 148 participants; BOP: MD 2.64%, 95% CI –8.81 to 14.09; 3 trials, 80 participants). There was evidence of heterogeneity for BOP (I² = 50%), but none for PPD and CAL.\nOf the 13 trials comparing FMD versus control, we judged four at high risk of bias, one at low risk of bias and eight at unclear risk of bias. At six to eight months, there was no evidence for a benefit for FMD over control for change in PPD or CAL (PPD: MD 0.11 mm, 95% CI –0.04 to 0.27; 6 trials, 224 participants; low-certainty evidence; CAL: MD 0.07 mm, 95% CI –0.11 to 0.24; 6 trials, 224 participants; low-certainty evidence). The analyses found no evidence of a benefit for FMD over control for BOP (very low-certainty evidence). There was no evidence of heterogeneity for PPD or CAL, but considerable evidence of heterogeneity for BOP, attributed to one study. There were no consistent differences in these outcomes between intervention and control (low- to very low-certainty evidence).\nOf the six trials comparing FMS and FMD, we judged two trials at high risk of bias, one at low risk of bias and three as unclear. At six to eight months, there was no evidence of a benefit of FMD over FMS for change in PPD or gain in CAL (PPD: MD –0.11 mm, 95% CI –0.30 to 0.07; P = 0.22; 4 trials, 112 participants; low-certainty evidence; CAL: MD –0.05 mm, 95% CI –0.23 to 0.13; P = 0.58; 4 trials, 112 participants; low-certainty evidence). There was no evidence of a difference between FMS and FMD for BOP at any time point (P = 0.98; 2 trials, 22 participants; low- to very low-certainty evidence). There was evidence of heterogeneity for BOP (I² = 52%), but not for PPD or CAL.\nThirteen studies predefined adverse events as an outcome; three reported an event after FMD or FMS. The most important harm identified was an increase in body temperature.\nWe assessed the certainty of the evidence for most comparisons and outcomes as low because of design limitations leading to risk of bias, and the small number of trials and participants, leading to imprecision in the effect estimates.\nAuthors' conclusions\nThe inclusion of nine new RCTs in this updated review has not changed the conclusions of the previous version of the review. There is still no clear evidence that FMS or FMD approaches provide additional clinical benefit compared to conventional mechanical treatment for adult periodontitis. In practice, the decision to select one approach to non-surgical periodontal therapy over another should include patient preference and the convenience of the treatment schedule.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review, produced within Cochrane Oral Health, is the second update of one we originally published in 2008. It evaluates the effectiveness of full-mouth treatments within 24 hours (FMS and FMD) compared to conventional treatment over a number of weeks, and whether there is a difference between FMS and FMD. The evidence is current up to June 2021."
}
] |
query-laysum
|
6046
|
[
{
"role": "user",
"content": "Abstract: Background\nVenous thromboembolism (VTE) is a collective term for two conditions: deep vein thrombosis (DVT) and pulmonary embolism (PE). A proportion of people with VTE have no underlying or immediately predisposing risk factors and the VTE is referred to as unprovoked. Unprovoked VTE can often be the first clinical manifestation of an underlying malignancy. This has raised the question of whether people with an unprovoked VTE should be investigated for an underlying cancer. Treatment for VTE is different in cancer and non-cancer patients and a correct diagnosis would ensure that people received the optimal treatment for VTE to prevent recurrence and further morbidity. Furthermore, an appropriate cancer diagnosis at an earlier stage could avoid the risk of cancer progression and lead to improvements in cancer-related mortality and morbidity. This is the third update of the review first published in 2015.\nObjectives\nTo determine whether testing for undiagnosed cancer in people with a first episode of unprovoked VTE (DVT of the lower limb or PE) is effective in reducing cancer- or VTE-related mortality and morbidity and to determine which tests for cancer are best at identifying treatable cancers early.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase and CINAHL databases and World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 5 May 2021. We also undertook reference checking to identify additional studies.\nSelection criteria\nRandomised and quasi-randomised trials in which people with an unprovoked VTE were allocated to receive specific tests for identifying cancer or clinically indicated tests only were eligible for inclusion.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias and extracted data. We assessed the certainty of the evidence using GRADE criteria. We resolved any disagreements by discussion. The main outcomes of interest were all-cause mortality, cancer-related mortality and VTE-related mortality.\nMain results\nNo new studies were identified for this 2021 update. In total, four studies with 1644 participants are included. Two studies assessed the effect of extensive tests including computed tomography (CT) scanning versus tests at the physician's discretion, while the other two studies assessed the effect of standard testing plus positron emission tomography (PET)/CT scanning versus standard testing alone. For extensive tests including CT versus tests at the physician's discretion, the certainty of the evidence, as assessed according to GRADE, was low due to risk of bias (early termination of the studies). When comparing standard testing plus PET/CT scanning versus standard testing alone, the certainty of evidence was moderate due to a risk of detection bias. The certainty of the evidence was downgraded further as detection bias was present in one study with a low number of events.\nWhen comparing extensive tests including CT versus tests at the physician's discretion, pooled analysis on two studies showed that testing for cancer was consistent with either benefit or no benefit on cancer-related mortality (odds ratio (OR) 0.49, 95% confidence interval (CI) 0.15 to 1.67; 396 participants; 2 studies; low-certainty evidence). 
One study (201 participants) showed that, overall, malignancies were less advanced at diagnosis in extensively tested participants than in participants in the control group. In total, 9/13 participants diagnosed with cancer in the extensively tested group had a T1 or T2 stage malignancy compared to 2/10 participants diagnosed with cancer in the control group (OR 5.00, 95% CI 1.05 to 23.76; low-certainty evidence). There was no clear difference in detection of advanced stages between extensive tests versus tests at the physician's discretion: one participant in the extensively tested group had stage T3 compared with four participants in the control group (OR 0.25, 95% CI 0.03 to 2.28; low-certainty evidence). In addition, extensively tested participants were diagnosed earlier than the control group (mean time from diagnosis of VTE to cancer diagnosis: 1 month with extensive tests versus 11.6 months with tests at the physician's discretion). Extensive testing did not increase the frequency of an underlying cancer diagnosis (OR 1.32, 95% CI 0.59 to 2.93; 396 participants; 2 studies; low-certainty evidence). Neither study measured all-cause mortality, VTE-related morbidity and mortality, complications of anticoagulation, adverse effects of cancer tests, participant satisfaction or quality of life.\nWhen comparing standard testing plus PET/CT screening versus standard testing alone, standard testing plus PET/CT screening was consistent with either benefit or no benefit on all-cause mortality (OR 1.22, 95% CI 0.49 to 3.04; 1248 participants; 2 studies; moderate-certainty evidence), cancer-related mortality (OR 0.55, 95% CI 0.20 to 1.52; 1248 participants; 2 studies; moderate-certainty evidence) or VTE-related morbidity (OR 1.02, 95% CI 0.48 to 2.17; 854 participants; 1 study; moderate-certainty evidence). Regarding stage of cancer, there was no clear difference for detection of early (OR 1.78, 95% CI 0.51 to 6.17; 394 participants; 1 study; low-certainty evidence) or advanced (OR 1.00, 95% CI 0.14 to 7.17; 394 participants; 1 study; low-certainty evidence) stages of cancer. There was also no clear difference in the frequency of an underlying cancer diagnosis (OR 1.71, 95% CI 0.91 to 3.20; 1248 participants; 2 studies; moderate-certainty evidence). Time to cancer diagnosis was 4.2 months in the standard testing group and 4.0 months in the standard testing plus PET/CT group (P = 0.88). Neither study measured VTE-related mortality, complications of anticoagulation, adverse effects of cancer tests, participant satisfaction or quality of life.\nAuthors' conclusions\nSpecific testing for cancer in people with unprovoked VTE may lead to earlier diagnosis of cancer at an earlier stage of the disease. However, there is currently insufficient evidence to draw definitive conclusions concerning the effectiveness of testing for undiagnosed cancer in people with a first episode of unprovoked VTE (DVT or PE) in reducing cancer- or VTE-related morbidity and mortality. The results could be consistent with either benefit or no benefit. Further good-quality large-scale randomised controlled trials are required before firm conclusions can be made.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies with 1644 participants. Two studies compared extensive cancer tests with tests carried out at the physician's discretion and two studies compared cancer tests plus scanning with cancer tests alone. Combining the results of the two studies showed that extensive testing had no effect on the number of cancer-related deaths. Additionally, extensive testing did not identify more people with cancer. However, extensive testing did identify cancers at an earlier stage (approximately 10 months earlier) and cancers were less advanced in people in the extensive testing group than in people in the group with tests carried out at the physician's discretion. Neither study looked at the number of deaths due to any cause, deaths and illness associated with VTE, side effects of cancer tests, side effects of VTE treatment or participant satisfaction. Two studies that compared tests plus scanning with tests alone showed that adding computed tomography scanning had little or no effect on the number of deaths, cancer-related deaths, illness associated with VTE; nor did it identify more people with cancer, or show a clear difference in time to diagnosis or stages of cancer diagnosed. Neither study looked at deaths associated with VTE, side effects of cancer tests, side effects of VTE treatment, participant satisfaction or quality of life."
}
] |
query-laysum
|
6047
|
[
{
"role": "user",
"content": "Abstract: Background\nFemoro-popliteal bypass is implemented to save limbs that might otherwise require amputation, in patients with ischaemic rest pain or tissue loss; and to improve walking distance in patients with severe life-limiting claudication. Contemporary practice involves grafts using autologous vein, polytetrafluoroethylene (PTFE) or Dacron as a bypass conduit. This is the second update of a Cochrane review first published in 1999 and last updated in 2010.\nObjectives\nTo assess the effects of bypass graft type in the treatment of stenosis or occlusion of the femoro-popliteal arterial segment, for above- and below-knee femoro-popliteal bypass grafts.\nSearch methods\nFor this update, the Cochrane Vascular Information Specialist searched the Vascular Specialised Register (13 March 2017) and CENTRAL (2017, Issue 2). Trial registries were also searched.\nSelection criteria\nWe included randomised trials comparing at least two different types of femoro-popliteal grafts for arterial reconstruction in patients with femoro-popliteal ischaemia. Randomised controlled trials comparing bypass grafting to angioplasty or to other interventions were not included.\nData collection and analysis\nBoth review authors (GKA and CPT) independently screened studies, extracted data, assessed trials for risk of bias and graded the quality of the evidence using GRADE criteria.\nMain results\nWe included nineteen randomised controlled trials, with a total of 3123 patients (2547 above-knee, 576 below-knee bypass surgery). In total, nine graft types were compared (autologous vein, polytetrafluoroethylene (PTFE) with and without vein cuff, human umbilical vein (HUV), polyurethane (PUR), Dacron and heparin bonded Dacron (HBD); FUSION BIOLINE and Dacron with external support). Studies differed in which graft types they compared and follow-up ranged from six months to 10 years.\nAbove-knee bypass\nFor above-knee bypass, there was moderate-quality evidence that autologous vein grafts improve primary patency compared to prosthetic grafts by 60 months (Peto odds ratio (OR) 0.47, 95% confidence interval (CI) 0.28 to 0.80; 3 studies, 269 limbs; P = 0.005). We found low-quality evidence to suggest that this benefit translated to improved secondary patency by 60 months (Peto OR 0.41, 95% CI 0.22 to 0.74; 2 studies, 176 limbs; P = 0.003).\nWe found no clear difference between Dacron and PTFE graft types for primary patency by 60 months (Peto OR 1.67, 95% CI 0.96 to 2.90; 2 studies, 247 limbs; low-quality evidence). We found low-quality evidence that Dacron grafts improved secondary patency over PTFE by 24 months (Peto OR 1.54, 95% CI 1.04 to 2.28; 2 studies, 528 limbs; P = 0.03), an effect which continued to 60 months in the single trial reporting this timepoint (Peto OR 2.43, 95% CI 1.31 to 4.53; 167 limbs; P = 0.005).\nExternally supported prosthetic grafts had inferior primary patency at 24 months when compared to unsupported prosthetic grafts (Peto OR 2.08, 95% CI 1.29 to 3.35; 2 studies, 270 limbs; P = 0.003). Secondary patency was similarly affected in the single trial reporting this outcome (Peto OR 2.25, 95% CI 1.24 to 4.07; 236 limbs; P = 0.008). No data were available for 60 months follow-up.\nHUV showed benefits in primary patency over PTFE at 24 months (Peto OR 4.80, 95% CI 1.76 to 13.06; 82 limbs; P = 0.002). This benefit was still seen at 60 months (Peto OR 3.75, 95% CI 1.46 to 9.62; 69 limbs; P = 0.006), but this was only compared in one trial. 
Results were similar for secondary patency at 24 months (Peto OR 4.01, 95% CI 1.44 to 11.17; 93 limbs) and at 60 months (Peto OR 3.87, 95% CI 1.65 to 9.05; 93 limbs).\nWe found HBD to be superior to PTFE for primary patency at 60 months for above-knee bypass, but these results were based on a single trial (Peto OR 0.38, 95% CI 0.20 to 0.72; 146 limbs; very low-quality evidence). There was no difference in primary patency between HBD and HUV for above-knee bypass in the one small study which reported this outcome.\nWe found only one small trial studying PUR and it showed very poor primary and secondary patency rates which were inferior to Dacron at all time points.\nBelow-knee bypass\nFor bypass below the knee, we found no graft type to be superior to any other in terms of primary patency, though one trial showed improved secondary patency of HUV over PTFE at all time points to 24 months (Peto OR 3.40, 95% CI 1.45 to 7.97; 88 limbs; P = 0.005).\nTwo studies compared PTFE alone to PTFE with vein cuff; very low-quality evidence indicates no effect on either primary or secondary patency at 24 months (Peto OR 1.08, 95% CI 0.58 to 2.01; 182 limbs; 2 studies; P = 0.80 and Peto OR 1.22, 95% CI 0.67 to 2.23; 181 limbs; 2 studies; P = 0.51, respectively).\nLimited data were available for limb survival, and those studies reporting on this outcome showed no clear difference between graft types. Antiplatelet and anticoagulant protocols varied extensively between trials, and in some cases within trials.\nThe overall quality of the evidence ranged from very low to moderate. Issues which affected the quality of the evidence included differences in the design of the trials, and differences in the types of grafts they compared. These differences meant we were often only able to combine and analyse small numbers of participants and this resulted in uncertainty over the true effects of the graft type used.\nAuthors' conclusions\nThere was moderate-quality evidence of improved long-term (60 months) primary patency for autologous vein grafts when compared to prosthetic materials for above-knee bypasses. In the long term (two to five years) there was low-quality evidence that Dacron confers a small secondary patency benefit over PTFE for above-knee bypass. Only very low-quality data exist on below-knee bypasses, so we are uncertain which graft type is best. Further randomised data are needed to ascertain whether this information translates into an improvement in limb survival.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of the evidence ranged from very low to moderate. Issues which affected the quality of the evidence included differences in the design of the trials, and differences in the types of grafts they compared. These differences meant we were often only able to combine and analyse small numbers of participants and this resulted in uncertainty over the true effects of the graft type used."
}
] |
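The graft-patency comparisons above are reported as Peto odds ratios. As a rough illustration of the statistic, the sketch below derives a Peto OR and its 95% CI from a single 2×2 table; the counts are hypothetical, not taken from any of the trials.

```python
import math

def peto_odds_ratio(a, n1, c, n2):
    """Peto one-step odds ratio for a single 2x2 table.

    a -- events in the treatment arm (of n1 participants)
    c -- events in the control arm (of n2 participants)
    Returns (OR, 95% CI lower bound, 95% CI upper bound).
    """
    N = n1 + n2
    m1 = a + c                       # total events
    m2 = N - m1                      # total non-events
    expected = n1 * m1 / N           # expected treatment-arm events under H0
    variance = (n1 * n2 * m1 * m2) / (N ** 2 * (N - 1))  # hypergeometric
    log_or = (a - expected) / variance
    halfwidth = 1.96 / math.sqrt(variance)
    return (math.exp(log_or),
            math.exp(log_or - halfwidth),
            math.exp(log_or + halfwidth))

# Hypothetical counts: 20/100 graft failures in one arm vs 35/100 in the other
print(peto_odds_ratio(20, 100, 35, 100))  # OR < 1 favours the first arm
```

A meta-analysis then sums the (observed − expected) terms and variances across trials before exponentiating, rather than pooling per-trial ORs directly.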
query-laysum
|
6048
|
[
{
"role": "user",
"content": "Abstract: Background\nA large number of people are employed in sedentary occupations. Physical inactivity and excessive sitting at workplaces have been linked to increased risk of cardiovascular disease, obesity, and all-cause mortality.\nObjectives\nTo evaluate the effectiveness of workplace interventions to reduce sitting at work compared to no intervention or alternative interventions.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, OSH UPDATE, PsycINFO, ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) search portal up to 9 August 2017. We also screened reference lists of articles and contacted authors to find more studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), cross-over RCTs, cluster-randomised controlled trials (cluster-RCTs), and quasi-RCTs of interventions to reduce sitting at work. For changes of workplace arrangements, we also included controlled before-and-after studies. The primary outcome was time spent sitting at work per day, either self-reported or measured using devices such as an accelerometer-inclinometer and duration and number of sitting bouts lasting 30 minutes or more. We considered energy expenditure, total time spent sitting (including sitting at and outside work), time spent standing at work, work productivity and adverse events as secondary outcomes.\nData collection and analysis\nTwo review authors independently screened titles, abstracts and full-text articles for study eligibility. Two review authors independently extracted data and assessed risk of bias. We contacted authors for additional data where required.\nMain results\nWe found 34 studies — including two cross-over RCTs, 17 RCTs, seven cluster-RCTs, and eight controlled before-and-after studies — with a total of 3,397 participants, all from high-income countries. The studies evaluated physical workplace changes (16 studies), workplace policy changes (four studies), information and counselling (11 studies), and multi-component interventions (four studies). One study included both physical workplace changes and information and counselling components. We did not find any studies that specifically investigated the effects of standing meetings or walking meetings on sitting time.\nPhysical workplace changes\nInterventions using sit-stand desks, either alone or in combination with information and counselling, reduced sitting time at work on average by 100 minutes per workday at short-term follow-up (up to three months) compared to sit-desks (95% confidence interval (CI) −116 to −84, 10 studies, low-quality evidence). The pooled effect of two studies showed sit-stand desks reduced sitting time at medium-term follow-up (3 to 12 months) by an average of 57 minutes per day (95% CI −99 to −15) compared to sit-desks. Total sitting time (including sitting at and outside work) also decreased with sit-stand desks compared to sit-desks (mean difference (MD) −82 minutes/day, 95% CI −124 to −39, two studies) as did the duration of sitting bouts lasting 30 minutes or more (MD −53 minutes/day, 95% CI −79 to −26, two studies, very low-quality evidence).\nWe found no significant difference between the effects of standing desks and sit-stand desks on reducing sitting at work. 
Active workstations, such as treadmill desks or cycling desks, had unclear or inconsistent effects on sitting time.\nWorkplace policy changes\nWe found no significant effects of implementing walking strategies on workplace sitting time at short-term (MD −15 minutes per day, 95% CI −50 to 19, low-quality evidence, one study) and medium-term (MD −17 minutes/day, 95% CI −61 to 28, one study) follow-up. Short breaks (one to two minutes every half hour) reduced time spent sitting at work on average by 40 minutes per day (95% CI −66 to −15, one study, low-quality evidence) compared to long breaks (two 15-minute breaks per workday) at short-term follow-up.\nInformation and counselling\nProviding information, feedback, counselling, or all of these resulted in no significant change in time spent sitting at work at short-term follow-up (MD −19 minutes per day, 95% CI −57 to 19, two studies, low-quality evidence). However, the reduction was significant at medium-term follow-up (MD −28 minutes per day, 95% CI −51 to −5, two studies, low-quality evidence).\nComputer prompts combined with information resulted in no significant change in sitting time at work at short-term follow-up (MD −14 minutes per day, 95% CI −39 to 10, three studies, low-quality evidence), but at medium-term follow-up they produced a significant reduction (MD −55 minutes per day, 95% CI −96 to −14, one study). Furthermore, computer prompting resulted in a significant decrease in the average number (MD −1.1, 95% CI −1.9 to −0.3, one study) and duration (MD −74 minutes per day, 95% CI −124 to −24, one study) of sitting bouts lasting 30 minutes or more.\nComputer prompts with instruction to stand reduced sitting at work on average by 14 minutes per day (95% CI 10 to 19, one study) more than computer prompts with instruction to walk at least 100 steps at short-term follow-up.\nWe found no significant reduction in workplace sitting time at medium-term follow-up following mindfulness training (MD −23 minutes per day, 95% CI −63 to 17, one study, low-quality evidence). Similarly, a single study reported no change in sitting time at work following provision of either highly personalised or contextualised information or less personalised or contextualised information. One study found no significant effects of activity trackers on sitting time at work.\nMulti-component interventions\nCombining multiple interventions had significant but heterogeneous effects on sitting time at work (573 participants, three studies, very low-quality evidence) and on time spent in prolonged sitting bouts (two studies, very low-quality evidence) at short-term follow-up.\nAuthors' conclusions\nAt present there is low-quality evidence that the use of sit-stand desks reduces workplace sitting at short-term and medium-term follow-ups. However, there is no evidence on their effects on sitting over longer follow-up periods. Effects of other types of interventions, including workplace policy changes, provision of information and counselling, and multi-component interventions, are mostly inconsistent. The quality of evidence is low to very low for most interventions, mainly because of limitations in study protocols and small sample sizes. There is a need for larger cluster-RCTs with longer-term follow-ups to determine the effectiveness of different types of interventions to reduce sitting time at work.\n\nGiven the provided abstract, please respond to the following query: \"Effect of active workstations\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Treadmill desks combined with counselling seem to reduce sitting time at work, while the available evidence is insufficient to conclude whether cycling desks combined with the provision of information reduce sitting at work more than the provision of information alone."
}
] |
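The pooled sit-stand desk effects above (for example, −57 minutes per day from two studies) come from inverse-variance weighting of per-study mean differences. A minimal fixed-effect sketch of that pooling step, using made-up numbers rather than the review's data:

```python
import math

def pool_fixed_effect(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences.

    mds -- per-study mean differences (e.g. minutes of sitting per day)
    ses -- matching standard errors
    Returns (pooled MD, 95% CI lower bound, 95% CI upper bound).
    """
    weights = [1.0 / se ** 2 for se in ses]   # precision weights
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = 1.0 / math.sqrt(sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical: two studies reporting -60 and -50 minutes/day
print(pool_fixed_effect([-60.0, -50.0], [15.0, 12.0]))
```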
query-laysum
|
6049
|
[
{
"role": "user",
"content": "Abstract: Background\nImmobilization of the lower limb is a risk factor for venous thromboembolism (VTE). Low molecular weight heparins (LMWHs) are anticoagulants, which might be used in adult patients with lower-limb immobilization to prevent deep venous thrombosis (DVT) and its complications. This is an update of the review first published in 2008.\nObjectives\nTo assess the effectiveness of low molecular weight heparin for the prevention of venous thromboembolism in patients with lower-limb immobilization in an ambulatory setting.\nSearch methods\nFor this update, the Cochrane Vascular Information Specialist searched the Specialised Register, CENTRAL, and three trials registers (April 2017).\nSelection criteria\nRandomized controlled trials (RCTs) and controlled clinical trials (CCTs) that described thromboprophylaxis by means of LMWH compared with no prophylaxis or placebo in adult patients with lower-limb immobilization. Immobilization was by means of a plaster cast or brace.\nData collection and analysis\nTwo review authors independently selected trials, assessed risk of bias and extracted data. The review authors contacted the trial authors for additional information if required. Statistical analysis was carried out using Review Manager 5.\nMain results\nWe included eight RCTs that fulfilled our criteria, with a total of 3680 participants. The quality of evidence, according GRADE, varied by outcome and ranged from low to moderate. We found an incidence of DVT ranging from 4.3% to 40% in patients who had a leg injury that had been immobilized in a plaster cast or a brace for at least one week, and who received no prophylaxis, or placebo. This number was significantly lower in patients who received daily subcutaneous injections of LMWH during immobilization, with event rates ranging from 0% to 37% (odds ratio (OR) 0.45, 95% confidence interval (CI) 0.33 to 0.61; with minimal evidence of heterogeneity: I² = 26%, P = 0.23; seven studies; 1676 participants, moderate-quality evidence). Comparable results were seen in the following groups of participants: patients with below-knee casts, conservatively treated patients (non-operated patients), operated patients, patients with fractures, patients with soft-tissue injuries, and patients with distal or proximal thrombosis. No clear differences were found between the LMWH and control groups for pulmonary embolism (OR 0.50, 95% CI 0.17 to 1.47; with no evidence of heterogeneity: I² = 0%, P = 0.56; five studies, 2517 participants; low-quality evidence). The studies also showed less symptomatic VTE in the LMWH groups compared with the control groups (OR 0.40, 95% CI 0.21 to 0.76; with minimal evidence of heterogeneity: I² = 16%, P = 0.31; six studies; 2924 participants; low-quality evidence). One death was reported in the included studies, but no deaths due to pulmonary embolism were reported. Complications of major adverse events were rare, with minor bleeding the main adverse events reported.\nAuthors' conclusions\nModerate-quality evidence showed that the use of LMWH in outpatients reduced DVT when immobilization of the lower limb was required, when compared with no prophylaxis or placebo. The quality of the evidence was reduced to moderate because of risk of selection and attrition bias in the included studies. Low-quality evidence showed no clear differences in PE between the LMWH and control groups, but less symptomatic VTE in the LMWH groups. 
The quality of the evidence was downgraded due to risk of bias and imprecision.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Venous thromboembolism is a condition where a blood clot forms in the deep veins (DVT), most commonly of the leg. The concern is that it can travel up to block the arteries in the lungs (pulmonary embolism). In adult patients, immobilization of the lower limb with a plaster cast or brace is a risk factor for DVT and pulmonary embolism. To prevent this complication, preventive treatment with anticoagulants (medication that thins the blood) is often used, most commonly, low molecular weight heparin (LMWH). However, there is no agreement on this in existing national guidelines. Therefore, we searched the literature for trials on this topic, in order to assess the evidence."
}
] |
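Effect sizes such as the OR 0.45 (95% CI 0.33 to 0.61) above are built from 2×2 event counts. A minimal sketch of an unadjusted odds ratio with a log-scale (Woolf) confidence interval, using hypothetical counts rather than the trials' data:

```python
import math

def odds_ratio_ci(a, n1, c, n2):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI.

    a/n1 -- events/total in the treated arm
    c/n2 -- events/total in the control arm
    """
    b, d = n1 - a, n2 - c            # non-events in each arm
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - 1.96 * se_log)
    upper = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lower, upper

# Hypothetical arms: 30/800 DVTs with prophylaxis vs 60/800 without
print(odds_ratio_ci(30, 800, 60, 800))
```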
query-laysum
|
6050
|
[
{
"role": "user",
"content": "Abstract: Background\nAlcoholic hepatitis is a form of alcoholic liver disease characterised by steatosis, necroinflammation, fibrosis, and complications to the liver. Typically, alcoholic hepatitis presents in people between 40 and 50 years of age. Alcoholic hepatitis can be resolved if people abstain from drinking, but the risk of death will depend on the severity of the liver damage and abstinence from alcohol. Glucocorticosteroids have been studied extensively in randomised clinical trials to assess their benefits and harms. However, the results have been contradictory.\nObjectives\nTo assess the benefits and harms of glucocorticosteroids in people with alcoholic hepatitis.\nSearch methods\nWe identified trials through electronic searches in Cochrane Hepato-Biliary's (CHB) Controlled Trials Register, CENTRAL, MEDLINE, Embase, LILACS, and Science Citation Index Expanded. We looked for ongoing or unpublished trials in clinical trials registers and pharmaceutical company sources. We also scanned reference lists of the studies retrieved. The last search was 18 January 2019.\nSelection criteria\nRandomised clinical trials assessing glucocorticosteroids versus placebo or no intervention in people with alcoholic hepatitis, irrespective of year, language of publication, or format. We considered trials with adults diagnosed with alcoholic hepatitis, which could have been established through clinical or biochemical diagnostic criteria or both. We defined alcoholic hepatitis as mild (Maddrey's score less than 32) and severe (Maddrey's score 32 or more). We allowed cointerventions in the trial groups, provided they were similar.\nData collection and analysis\nWe followed Cochrane methodology, performing the meta-analyses using Review Manager 5. We presented the results of dichotomous outcomes as risk ratios (RR) and of continuous outcomes as mean difference (MD), with 95% confidence intervals (CI). We used both the fixed-effect and the random-effects models for meta-analyses. Whenever there were significant discrepancies in the results, we reported the more conservative point estimate of the two. We considered a P value of 0.01 or less, two-tailed, as statistically significant if the required information size was reached for our three primary outcomes (all-cause mortality, health-related quality of life, and serious adverse events during treatment) and our post hoc decision to include analyses of mortality at more time points. We presented heterogeneity using the I² statistic. If trialists used intention-to-treat analysis to deal with missing data, we used these data in our primary analysis; otherwise, we used the available data. We assessed the bias risk of the trials using bias risk domains and the certainty of the evidence using GRADE.\nMain results\nSixteen trials fulfilled our inclusion criteria. All trials but one were at overall high risk of bias. Fifteen trials (one of which was an abstract) provided data for analysis (927 participants received glucocorticosteroids and 934 participants received placebo or no intervention). Glucocorticosteroids were administered orally or parenterally for a median 28 days (range 3 days to 12 weeks). The participants were between 25 and 70 years old, had different stages of alcoholic liver disease, and 65% were men. Follow-up, when reported, was up to the moment of discharge from the hospital, until they died (median of 63 days), or for at least one year. 
There was no evidence of effect of glucocorticosteroids on all-cause mortality up to three months following randomisation (random-effects RR 0.90, 95% CI 0.70 to 1.15; participants = 1861; trials = 15; very low-certainty evidence) or on health-related quality of life up to three months, measured with the European Quality of Life – 5 Dimensions – 3 Levels scale (MD –0.04 points, 95% CI –0.11 to 0.03; participants = 377; trial = 1; low-certainty evidence). There was no evidence of effect on the occurrence of serious adverse events during treatment (random-effects RR 1.05, 95% CI 0.85 to 1.29; participants = 1861; trials = 15; very low-certainty evidence), liver-related mortality up to three months following randomisation (random-effects RR 0.89, 95% CI 0.69 to 1.14; participants = 1861; trials = 15; very low-certainty evidence), number of participants with any complications up to three months following randomisation (random-effects RR 1.04, 95% CI 0.86 to 1.27; participants = 1861; very low-certainty evidence), and number of participants with non-serious adverse events up to three months' follow-up after end of treatment (random-effects RR 1.99, 95% CI 0.72 to 5.48; participants = 160; trials = 4; very low-certainty evidence). Based on the information that we collected from the published trial reports, only one of the trials seems not to be industry-funded, and the remaining 15 trials did not report clearly whether they were partly or completely funded by the industry.\nAuthors' conclusions\nWe are very uncertain about the effect estimate of no difference between glucocorticosteroids and placebo or no intervention on all-cause mortality and serious adverse events during treatment because the certainty of evidence was very low, and low for health-related quality of life. Due to inadequate reporting, we cannot exclude increases in adverse events. As the CIs were wide, we cannot rule out significant benefits or harms of glucocorticosteroids. Therefore, we need placebo-controlled randomised clinical trials, designed according to the SPIRIT guidelines and reported according to the CONSORT guidelines. Future trials ought to report depersonalised individual participant data, so that proper individual participant data meta-analyses of the effects of glucocorticosteroids in subgroups can be conducted.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The date of the last search was 18 January 2019."
}
] |
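The abstract above repeatedly cites random-effects risk ratios alongside I² statistics. A compact DerSimonian-Laird sketch showing where the between-study variance (tau²) and I² come from; the inputs are hypothetical log risk ratios, not the trials' data:

```python
import math

def dersimonian_laird(effects, ses):
    """DerSimonian-Laird random-effects pooling of log effect sizes.

    effects -- per-study log risk ratios
    ses     -- their standard errors
    Returns (pooled log RR, 95% CI half-width, tau^2, I^2 as a percentage).
    """
    w = [1.0 / s ** 2 for s in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    w_star = [1.0 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.96 / math.sqrt(sum(w_star)), tau2, i2

# Hypothetical log RRs from three mortality trials
print(dersimonian_laird([-0.15, 0.05, -0.30], [0.10, 0.12, 0.20]))
```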
query-laysum
|
6051
|
[
{
"role": "user",
"content": "Abstract: Background\nTrial monitoring is an important component of good clinical practice to ensure the safety and rights of study participants, confidentiality of personal information, and quality of data. However, the effectiveness of various existing monitoring approaches is unclear. Information to guide the choice of monitoring methods in clinical intervention studies may help trialists, support units, and monitors to effectively adjust their approaches to current knowledge and evidence.\nObjectives\nTo evaluate the advantages and disadvantages of different monitoring strategies (including risk-based strategies and others) for clinical intervention studies examined in prospective comparative studies of monitoring interventions.\nSearch methods\nWe systematically searched CENTRAL, PubMed, and Embase via Elsevier for relevant published literature up to March 2021. We searched the online 'Studies within A Trial' (SWAT) repository, grey literature, and trial registries for ongoing or unpublished studies.\nSelection criteria\nWe included randomized or non-randomized prospective, empirical evaluation studies of different monitoring strategies in one or more clinical intervention studies. We applied no restrictions for language or date of publication.\nData collection and analysis\nWe extracted data on the evaluated monitoring methods, countries involved, study population, study setting, randomization method, and numbers and proportions in each intervention group. Our primary outcome was critical and major monitoring findings in prospective intervention studies. Monitoring findings were classified according to different error domains (e.g. major eligibility violations) and the primary outcome measure was a composite of these domains. Secondary outcomes were individual error domains, participant recruitment and follow-up, and resource use. If we identified more than one study for a comparison and outcome definitions were similar across identified studies, we quantitatively summarized effects in a meta-analysis using a random-effects model. Otherwise, we qualitatively summarized the results of eligible studies stratified by different comparisons of monitoring strategies. We used the GRADE approach to assess the certainty of the evidence for different groups of comparisons.\nMain results\nWe identified eight eligible studies, which we grouped into five comparisons.\n1. Risk-based versus extensive on-site monitoring: based on two large studies, we found moderate certainty of evidence for the combined primary outcome of major or critical findings that risk-based monitoring is not inferior to extensive on-site monitoring. Although the risk ratio was close to 'no difference' (1.03 with a 95% confidence interval [CI] of 0.81 to 1.33, below 1.0 in favor of the risk-based strategy), the high imprecision in one study and the small number of eligible studies resulted in a wide CI of the summary estimate. Low certainty of evidence suggested that monitoring strategies with extensive on-site monitoring were associated with considerably higher resource use and costs (up to a factor of 3.4). Data on recruitment or retention of trial participants were not available.\n2. Central monitoring with triggered on-site visits versus regular on-site visits: combining the results of two eligible studies yielded low certainty of evidence with a risk ratio of 1.83 (95% CI 0.51 to 6.55) in favor of triggered monitoring intervention. Data on recruitment, retention, and resource use were not available.\n3. 
Central statistical monitoring and local monitoring performed by site staff with annual on-site visits versus central statistical monitoring and local monitoring only: based on one study, there was moderate certainty of evidence that a small number of major and critical findings were missed with the central monitoring approach without on-site visits: 3.8% of participants in the group without on-site visits and 6.4% in the group with on-site visits had a major or critical monitoring finding (odds ratio 1.7, 95% CI 1.1 to 2.7; P = 0.03). The absolute number of monitoring findings was very low, probably because defined major and critical findings were very study specific and central monitoring was present in both intervention groups. Very low certainty of evidence did not suggest a relevant effect on participant retention, and very low-quality evidence indicated an extra cost for on-site visits of USD 2,035,392. There were no data on recruitment.\n4. Traditional 100% source data verification (SDV) versus targeted or remote SDV: the two studies assessing targeted and remote SDV reported findings only related to source documents. Compared to the final database obtained using the full SDV monitoring process, only a small proportion of remaining errors on overall data were identified using the targeted SDV process in the MONITORING study (absolute difference 1.47%, 95% CI 1.41% to 1.53%). Targeted SDV was effective in the verification of source documents but increased the workload on data management. The other included study was a pilot study which compared traditional on-site SDV versus remote SDV and found little difference in monitoring findings and the ability to locate data values despite marked differences in remote access in two clinical trial networks. There were no data on recruitment or retention.\n5. Systematic on-site initiation visit versus on-site initiation visit upon request: very low certainty of evidence suggested no difference in retention and recruitment between the two approaches. There were no data on critical and major findings or on resource use.\nAuthors' conclusions\nThe evidence base is limited in terms of quantity and quality. Ideally, for each of the five identified comparisons, more prospective, comparative monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are necessary to draw more reliable conclusions. However, the results suggesting risk-based, targeted, and mainly central monitoring as an efficient strategy are promising. The development of reliable triggers for on-site visits is ongoing; different triggers might be used in different settings. More evidence on risk indicators that identify sites with problems or the prognostic value of triggers is needed to further optimize central monitoring strategies. In particular, approaches with an initial assessment of trial-specific risks that need to be closely monitored centrally during trial conduct with triggered on-site visits should be evaluated in future research.\n\nGiven the provided abstract, please respond to the following query: \"Our question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence on the effects of new monitoring strategies on monitoring findings, participant recruitment, participant follow-up, and resource use in clinical trials. We also summarized the different components of tested strategies and qualitative evidence from process evaluations."
}
] |
query-laysum
|
6052
|
[
{
"role": "user",
"content": "Abstract: Background\nBelimumab, the first biologic approved for the treatment of systemic lupus erythematosus (SLE), has been shown to reduce autoantibody levels in people with SLE and help control disease activity.\nObjectives\nTo assess the benefits and harms of belimumab (alone or in combination) in systematic lupus erythematosus.\nSearch methods\nAn Information Specialist carried out the searches of CENTRAL, MEDLINE, Embase, CINAHL, Web of Science, the World Health Organization (WHO) International Clinical Trials Registry Platform, and clinicaltrials.gov from inception to 25 September 2019. There were no language or date restrictions.\nSelection criteria\nWe included randomized controlled trials (RCTs) or controlled clinical trials (CCTs) of belimumab (alone or in combination) compared to placebo/control treatment (immunosuppressive drugs, such as azathioprine, cyclosporine, mycophenolate mofetil or another biologic), in adults with SLE.\nData collection and analysis\nWe used standard methodologic procedures expected by Cochrane.\nMain results\nSix RCTs (2917 participants) qualified for quantitative analyses. All included studies were multicenter, international or US-based. The age range of the included participants was 22 to 80 years; most were women; and study duration ranged from 84 days to 76 weeks. The risk of bias was generally low except for attrition bias, which was high in 67% of studies.\nCompared to placebo, more participants on belimumab 10 mg/kg (Food and Drug Administration (FDA)-approved dose) showed at least a 4-point improvement (reduction) in Safety of Estrogen in Lupus National Assessment (SELENA) - Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) score, a validated SLE disease activity index: (risk ratio (RR) 1.33, 95% confidence interval (CI) 1.22 to 1.45; 829/1589 in belimumab group and 424/1077 in placebo; I2= 0%; 4 RCTs; high-certainty evidence).\nChange in health-related quality of life (HRQOL), assessed by Short Form-36 Physical Component Summary score improvement (range 0 to 100), showed there was probably little or no difference between groups (mean difference 1.6 points, 95% CI 0.30 to 2.90; 401 in belimumab group and 400 in placebo; I2= 0%; 2 RCTs; moderate-certainty evidence). The belimumab 10 mg/kg group showed greater improvement in glucocorticoid dose, with a higher proportion of participants reducing their dose by at least 50% compared to placebo (RR 1.59, 95% CI 1.17 to 2.15; 81/269 in belimumab group and 52/268 in placebo; I2= 0%; 2 RCTs; high-certainty evidence).\nThe proportion of participants experiencing harm may not differ meaningfully between the belimumab 10 mg/kg and placebo groups: one or more serious adverse event (RR 0.87, 95% CI: 0.68 to 1.11; 238/1700 in belimumab group and 199/1190 in placebo; I2= 48%; 5 RCTs; low-certainty evidence; ); one or more serious infection (RR 1.01, 95% CI: 0.66 to 1.54; 44/1230 in belimumab group and 40/955 in placebo; I2= 0%; 4 RCTs; moderate-certainty evidence); and withdrawals due to adverse events (RR 0.82, 95% CI: 0.63 to 1.07; 113/1700 in belimumab group and 94/1190 in placebo; I2= 0%; 5 RCTs; moderate-certainty evidence). Mortality was rare, and may not differ between belimumab 10 mg/kg and placebo (Peto odds ratio 1.15, 95% CI 0.41 to 3.25; 9/1714 in belimumab group and 6/1203 in placebo; I2= 4%; 6 RCTs; low-certainty evidence).\nAuthors' conclusions\nThe six studies that provided evidence for benefits and harms of belimumab were well-designed, high-quality RCTs. 
At the FDA-approved dose of 10 mg/kg, based on moderate to high-certainty data, belimumab was probably associated with a clinically meaningful efficacy benefit compared to placebo in participants with SLE at 52 weeks. Evidence related to harms is inconclusive and mostly of moderate to low-certainty evidence. More data are needed for the longer-term efficacy of belimumab.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found six studies that involved a total of 2917 people (mostly women) aged between 22 and 80 years. People were followed for a minimum of 84 days and a maximum of 76 weeks. All studies compared belimumab against placebo.\nIntensity of disease\nBelimumab is better than placebo at reducing SLE intensity (4 studies, 2666 people). SLE became less intense in 52% of people treated with belimumab, compared to 39% of people treated with placebo.\nHealth-related quality of life\nThe evidence suggests that there is probably little to no difference between belimumab and placebo for changes in health-related quality of life (2 studies, 801 people). The quality of life scores reported by people treated with belimumab were only 1.6 points higher on average than those of people treated with placebo (scale: 0 to 100, higher scores = better quality of life).\nNeed for glucocorticoids (anti-inflammation medicines)\nBelimumab is more likely than placebo to reduce the amount of glucocorticoids required. Glucocorticoid doses were lowered by at least 50% in 30% of people treated with belimumab, compared to 19% of people treated with placebo (2 studies, 537 people).\nSerious adverse events\nThe evidence suggests that there may be little to no difference in the numbers of serious adverse events attributable to belimumab or placebo (5 studies, 2890 people). We have little confidence in this finding, because it is based on relatively small numbers of adverse events and studies reported conflicting findings.\nSerious infection\nThe evidence suggests that there is probably little to no difference in the numbers of serious infections attributable to belimumab or placebo (4 studies, 2185 people). We are only moderately confident about this finding, because it is based on small numbers of infections.\nAdverse events causing people to withdraw from studies\nThe evidence suggests that there is probably little to no difference between belimumab and placebo in the numbers of adverse events that caused people to withdraw from studies (5 studies, 2890 people). We are only moderately confident about this finding, because it is based on relatively small numbers of adverse events.\nDeath\nFew deaths (less than 1%) were reported in association with either belimumab or placebo. This suggests that there may be little to no difference in numbers of deaths between the two treatments, though, because death occurs rarely, it is difficult to be confident about this finding."
}
] |
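The "52% versus 39%" in the lay summary above, and the abstract's RR 1.33, can be reproduced directly from the quoted counts (829/1589 belimumab versus 424/1077 placebo). A short check of that arithmetic:

```python
def risk_ratio(a, n1, c, n2):
    """Per-arm risks and their ratio from raw counts."""
    r1, r2 = a / n1, c / n2
    return r1, r2, r1 / r2

# Counts quoted in the abstract: 829/1589 belimumab vs 424/1077 placebo
r1, r2, rr = risk_ratio(829, 1589, 424, 1077)
print(f"belimumab {r1:.0%}, placebo {r2:.0%}, RR {rr:.2f}")
# -> belimumab 52%, placebo 39%, RR 1.33
```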
query-laysum
|
6053
|
[
{
"role": "user",
"content": "Abstract: Background\nLong-acting bronchodilators, comprising long-acting beta2-agonists (LABA) and long-acting anti-muscarinic agents (LAMA, principally tiotropium), are commonly used for managing persistent symptoms of chronic obstructive pulmonary disease (COPD). Combining these treatments, which have different mechanisms of action, may be more effective than the individual components. However, the benefits and risks of combining tiotropium and LABAs for the treatment of COPD are unclear.\nObjectives\nTo compare the relative effects on markers of quality of life, exacerbations, symptoms, lung function and serious adverse events in people with COPD randomised to LABA plus tiotropium versus tiotropium alone; or LABA plus tiotropium versus LABA alone.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register of trials and ClinicalTrials.gov up to July 2015.\nSelection criteria\nWe included parallel-group, randomised controlled trials of three months or longer comparing treatment with tiotropium in addition to LABA against tiotropium or LABA alone for people with COPD.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and then extracted data on trial quality and the outcome results. We contacted study authors for additional information. We collected information on adverse effects from the trials.\nMain results\nThis review included 10 trials on 10,894 participants, mostly recruiting participants with moderate or severe COPD. All of the trials compared tiotropium in addition to LABA to tiotropium alone, and four trials additionally compared LAMA plus LABA with LABA alone. Four studies used the LABA olodaterol, three used indacaterol, two used formoterol, and one used salmeterol.\nCompared to tiotropium alone, treatment with tiotropium plus LABA resulted in a slightly larger improvement in mean health-related quality of life (St George's Respiratory Questionnaire (SGRQ) (mean difference (MD) -1.34, 95% confidence interval (CI) -1.87 to -0.80; 6709 participants; 5 studies). The MD was smaller than the four units that is considered clinically important, but a responder analysis indicated that 7% more participants receiving tiotropium plus LABA had a noticeable benefit (greater than four units) from treatment in comparison to tiotropium alone. In the control arm in one study, which was tiotropium alone, the SGRQ improved by falling 4.5 units from baseline and with tiotropium plus LABA the improvement was a fall of a further 1.3 units (on average). Most of the data came from studies using olodaterol. High withdrawal rates in the trials increased the uncertainty in this result, and the GRADE assessment for this outcome was therefore moderate. There were no significant differences in the other primary outcomes (hospital admission or mortality).\nThe secondary outcome of pre-bronchodilator forced expiratory volume in one second (FEV1) showed a small mean increase with the addition of LABA over the control arm (MD 0.06, 95% CI 0.05 to 0.07; 9573 participants; 10 studies), which showed a change from baseline ranging from 0.03 L to 0.13 L with tiotropium alone. None of the other secondary outcomes (exacerbations, symptom scores, serious adverse events, and withdrawals) showed any statistically significant differences between the groups. 
There was moderate heterogeneity for both exacerbations and withdrawals.\nThis review included data on four LABAs: two administered twice daily (salmeterol, formoterol) and two once daily (indacaterol, olodaterol). The results were largely from studies of olodaterol and there was insufficient information to assess whether the other LABAs were equivalent to olodaterol or each other.\nComparing LABA plus tiotropium treatment with LABA alone, there was a small but significant improvement in SGRQ (MD -1.25, 95% CI -2.14 to -0.37; 3378 participants; 4 studies). The data came mostly from studies using olodaterol and, although the difference was smaller than four units, this still represented an additional 10 people per 100 treated who had a clinically important improvement. There was also an improvement in FEV1 (MD 0.07, 95% CI 0.06 to 0.09; 3513 participants; 4 studies), and in addition an improvement in exacerbation rates (odds ratio (OR) 0.80, 95% CI 0.69 to 0.93; 3514 participants; 3 studies).\nAuthors' conclusions\nThe results from this review indicated a small mean improvement in health-related quality of life and FEV1 for participants on a combination of tiotropium and LABA compared to either agent alone, and this translated into a small increase in the number of responders on combination treatment. In addition, adding tiotropium to LABA reduced exacerbations, although adding LABA to tiotropium did not. Hospital admission and mortality were not altered by adding LABA to tiotropium, although there may not be enough data. While it is possible that this is affected by higher attrition in the tiotropium group, one would expect that participants withdrawn from the study would have had less favourable outcomes; this means that the expected direction of attrition bias would be to reduce the estimated benefit of the combination treatment. The results were largely from studies of olodaterol and there was insufficient information to assess whether the other LABAs were equivalent to olodaterol or each other.\n\nGiven the provided abstract, please respond to the following query: \"How did we answer the question?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Two people looked for published and unpublished research in several databases and websites to find relevant studies comparing tiotropium plus LABA with either (tiotropium or a LABA) on its own in adults with COPD. We analysed the results available up to July 2015 in this systematic review."
}
] |
query-laysum
|
6054
|
[
{
"role": "user",
"content": "Abstract: Background\nDespite the high prevalence of apathy in Alzheimer’s disease (AD), and its harmful effects, there are currently no therapies proven to treat this symptom. Recently, a number of pharmacological therapies have been investigated as potential treatments for apathy in AD.\nObjectives\nObjective 1: To assess the safety and efficacy of pharmacotherapies for the treatment of apathy in Alzheimer's disease (AD).\n\nObjective 2: To assess the effect on apathy of pharmacotherapies investigated for other primary outcomes in the treatment of AD.\nSearch methods\nWe searched the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (ALOIS), MEDLINE, Embase, CINAHL, PsycINFO, LILACS, ClinicalTrials.gov and the World Health Organization (WHO) portal, ICTRP on 17 May 2017.\nSelection criteria\nEligible studies were double-blind, randomized, placebo-controlled trials (RCTs) investigating apathy as a primary or secondary outcome in people with AD.\nData collection and analysis\nThree review authors extracted data. We assessed the risks of bias of included studies using Cochrane methods, and the overall quality of evidence for each outcome using GRADE methods. We calculated mean difference (MD), standardized mean difference (SMD) or risk ratio (RR) with 95% confidence intervals on an intention-to-treat basis for all relevant outcome measures.\nMain results\nWe included 21 studies involving a total of 6384 participants in the quantitative analyses. Risk of bias is very low to moderate. All studies reported appropriate methods of randomization and blinding. Most studies reported appropriate methods of allocation concealment. Four studies, three with methylphenidate and one with modafinil, had a primary aim of improving apathy. In these studies, all participants had clinically significant apathy at baseline. Methylphenidate may improve apathy compared to placebo. This finding was present when apathy was assessed using the apathy evaluation scale (AES), which was used by all three studies investigating methylphenidate: MD -4.99, 95% CI -9.55 to -0.43, n = 145, 3 studies, low quality of evidence, but not when assessed with the neuropsychiatric inventory (NPI)-apathy subscale, which was used by two of the three studies investigating methylphenidate: MD -0.08, 95% CI -3.85 to 3.69, n = 85, 2 studies, low quality of evidence. As well as having potential benefits for apathy, methylphenidate probably also slightly improves cognition (MD 1.98, 95% CI 1.06 to 2.91, n = 145, 3 studies, moderate quality of evidence), and probably improves instrumental activities of daily living (MD 2.30, 95% CI 0.74 to 3.86, P = 0.004, n = 60, 1 study, moderate quality of evidence), compared to placebo. There may be no difference between methylphenidate and placebo in the risk of developing an adverse event: RR 1.28, 95% CI 0.67 to 2.42, n = 145, 3 studies, low quality of evidence. There was insufficient evidence from one very small study of modafinil to determine the effect of modafinil on apathy assessed with the FrSBe-apathy subscale: MD 0.27, 95% CI -3.51 to 4.05, n = 22, 1 study, low quality of evidence. In all other included studies, apathy was a secondary outcome and participants were not selected on the basis of clinically significant apathy at baseline. We considered the evidence on apathy from these studies to be indirect and associated with publication bias. 
There was low or very low quality of evidence on cholinesterase inhibitors (ChEIs) (six studies), ChEI discontinuation (one study), antipsychotics (two studies), antipsychotic discontinuation (one study), antidepressants (two studies), mibampator (one study), valproate (three studies) and semagacestat (one study).\nAuthors' conclusions\nMethylphenidate may demonstrate a benefit for apathy and may have slight benefits for cognition and functional performance in people with AD, but this finding is associated with low-quality evidence. Our meta-analysis is limited by the small number of studies within each drug class, risk of bias, publication bias, imprecision and inconsistency between studies. Additional studies should be encouraged targeting people with AD with clinically significant apathy which investigate apathy as a primary outcome measure, and which have a longer duration and a larger sample size. This could increase the quality of evidence for methylphenidate, and may confirm whether or not it is an effective pharmacotherapy for apathy in AD.\n\nGiven the provided abstract, please respond to the following query: \"What we did\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for randomized controlled trials (RCTs) up to May 2017 which had compared any medicine with a placebo (dummy pill) and measured the effect on apathy in people with Alzheimer's disease. We were only interested in trials in which it was decided randomly whether the people taking part got the drug of interest or the placebo; this was to make sure that the comparison was as fair as possible."
}
] |
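Some effects in the abstract above are standardized mean differences (SMDs), used because the trials measured apathy on different scales. A minimal sketch of Hedges' g, one common SMD variant, computed from summary statistics; the numbers below are hypothetical:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) from summary statistics."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd             # Cohen's d
    j = 1 - 3.0 / (4 * (n1 + n2) - 9)     # small-sample correction factor
    return j * d

# Hypothetical apathy-scale means (lower score = less apathy)
print(hedges_g(38.0, 8.0, 30, 43.0, 9.0, 31))
```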
query-laysum
|
6055
|
[
{
"role": "user",
"content": "Abstract: Background\nCerebral palsy is the most common physical disability in childhood. Mechanically assisted walking training can be provided with or without body weight support to enable children with cerebral palsy to perform repetitive practice of complex gait cycles. It is important to examine the effects of mechanically assisted walking training to identify evidence-based treatments to improve walking performance.\nObjectives\nTo assess the effects of mechanically assisted walking training compared to control for walking, participation, and quality of life in children with cerebral palsy 3 to 18 years of age.\nSearch methods\nIn January 2020, we searched CENTRAL, MEDLINE, Embase, six other databases, and two trials registers. We handsearched conference abstracts and checked reference lists of included studies.\nSelection criteria\nRandomized controlled trials (RCTs) or quasi-RCTs, including cross-over trials, comparing any type of mechanically assisted walking training (with or without body weight support) with no walking training or the same dose of overground walking training in children with cerebral palsy (classified as Gross Motor Function Classification System [GMFCS] Levels I to IV) 3 to 18 years of age.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nThis review includes 17 studies with 451 participants (GMFCS Levels I to IV; mean age range 4 to 14 years) from outpatient settings. The duration of the intervention period (4 to 12 weeks) ranged widely, as did intensity of training in terms of both length (15 minutes to 40 minutes) and frequency (two to five times a week) of sessions. Six studies were funded by grants, three had no funding support, and eight did not report information on funding. Due to the nature of the intervention, all studies were at high risk of performance bias.\nMechanically assisted walking training without body weight support versus no walking training\nFour studies (100 participants) assessed this comparison. Compared to no walking, mechanically assisted walking training without body weight support increased walking speed (mean difference [MD] 0.05 meter per second [m/s] [change scores], 95% confidence interval [CI] 0.03 to 0.07; 1 study, 10 participants; moderate-quality evidence) as measured by the Biodex Gait Trainer 2™ (Biodex, Shirley, NY, USA) and improved gross motor function (standardized MD [SMD] 1.30 [postintervention scores], 95% CI 0.49 to 2.11; 2 studies, 60 participants; low-quality evidence) postintervention. One study (30 participants) reported no adverse events (low-quality evidence). No study measured participation or quality of life.\nMechanically assisted walking training without body weight support versus the same dose of overground walking training\nTwo studies (55 participants) assessed this comparison. Compared to the same dose of overground walking, mechanically assisted walking training without body weight support increased walking speed (MD 0.25 m/s [change or postintervention scores], 95% CI 0.13 to 0.37; 2 studies, 55 participants; moderate-quality evidence) as assessed by the 6-minute walk test or Vicon gait analysis. 
It also improved gross motor function (MD 11.90% [change scores], 95% CI 2.98 to 20.82; 1 study, 35 participants; moderate-quality evidence) as assessed by the Gross Motor Function Measure (GMFM) and participation (MD 8.20 [change scores], 95% CI 5.69 to 10.71; 1 study, 35 participants; moderate-quality evidence) as assessed by the Pediatric Evaluation of Disability Inventory (scored from 0 to 59), compared to the same dose of overground walking training. No study measured adverse events or quality of life.\nMechanically assisted walking training with body weight support versus no walking training\nEight studies (210 participants) assessed this comparison. Compared to no walking training, mechanically assisted walking training with body weight support increased walking speed (MD 0.07 m/s [change and postintervention scores], 95% CI 0.06 to 0.08; 7 studies, 161 participants; moderate-quality evidence) as assessed by the 10-meter or 8-meter walk test. There were no differences between groups in gross motor function (MD 1.09% [change and postintervention scores], 95% CI -0.57 to 2.75; 3 studies, 58 participants; low-quality evidence) as assessed by the GMFM; participation (SMD 0.33 [change scores], 95% CI -0.27 to 0.93; 2 studies, 44 participants; low-quality evidence); and quality of life (MD 9.50% [change scores], 95% CI -4.03 to 23.03; 1 study, 26 participants; low-quality evidence) as assessed by the Pediatric Quality of Life Cerebral Palsy Module (scored 0 [bad] to 100 [good]). Three studies (56 participants) reported no adverse events (low-quality evidence).\nMechanically assisted walking training with body weight support versus the same dose of overground walking training\nThree studies (86 participants) assessed this comparison. There were no differences between groups in walking speed (MD -0.02 m/s [change and postintervention scores], 95% CI -0.08 to 0.04; 3 studies, 78 participants; low-quality evidence) as assessed by the 10-meter or 5-minute walk test; gross motor function (MD -0.73% [postintervention scores], 95% CI -14.38 to 12.92; 2 studies, 52 participants; low-quality evidence) as assessed by the GMFM; and participation (MD -4.74 [change scores], 95% CI -11.89 to 2.41; 1 study, 26 participants; moderate-quality evidence) as assessed by the School Function Assessment (scored from 19 to 76). No study measured adverse events or quality of life.\nAuthors' conclusions\nCompared with no walking, mechanically assisted walking training probably results in small increases in walking speed (with or without body weight support) and may improve gross motor function (with body weight support). Compared with the same dose of overground walking, mechanically assisted walking training with body weight support may result in little to no difference in walking speed and gross motor function, although two studies found that mechanically assisted walking training without body weight support is probably more effective than the same dose of overground walking training for walking speed and gross motor function. Not many studies reported adverse events, although those that did appeared to show no differences between groups. The results are largely not clinically significant, sample sizes are small, and risk of bias and intensity of intervention vary across studies, making it hard to draw robust conclusions. 
Mechanically assisted walking training is a means to undertake high-intensity, repetitive, task-specific training and may be useful for children with poor concentration.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What is the effect of mechanically assisted walking compared to no walking or to the same amount of overground walking on walking, participation, and quality of life in children with cerebral palsy 3 to 18 years of age."
}
] |
query-laysum
|
6056
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the Cochrane review \"Pharmacologic treatment for memory disorder in multiple sclerosis\" (first published in The Cochrane Library 2011, Issue 10).\nMultiple sclerosis (MS) is a chronic immune-mediated, inflammatory, demyelinating, neurodegenerative disorder of the central nervous system (CNS) and can cause both neurological and neuropsychological disability. Both demyelination and axonal and neuronal loss are believed to contribute to MS-related cognitive impairment. Memory disorder is one of the most frequent cognitive dysfunctions and presents a considerable burden to people with MS and to society due to the negative impact on function. A number of pharmacological agents have been evaluated in many existing randomised controlled trials for their efficacy on memory disorder in people with MS but the results were not consistent.\nObjectives\nTo assess the absolute and comparative efficacy, tolerability and safety of pharmacological treatments for memory disorder in adults with MS.\nSearch methods\nWe searched the Cochrane Multiple Sclerosis and Rare Diseases of the Central Nervous System Group Trials Register (24 July 2013), PsycINFO (January 1980 to 26 June 2013) and CBMdisc (1978 to 24 June 2013), and checked reference lists of identified articles, searched some relevant journals manually, registers of clinical trials and published abstracts of conference proceedings.\nSelection criteria\nAll double-blind, randomised controlled parallel trials on pharmacological treatment versus placebo or one or more pharmacological treatments in adults with MS who had at least mild memory impairment (at 0.5 standard deviations below age- and sex-based normative data on a validated memory scale). We placed no restrictions regarding dose, route of administration and frequency; however, we only included trials with an administration duration of 12 weeks or greater.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We discussed disagreements and resolved them by consensus among review authors. We contacted principal investigators of included studies for additional data or confirmation.\nMain results\nWe included seven randomised controlled trials (RCTs) involving 625 people mostly with relapsing-remitting, secondary-progressive and primary-progressive MS, evaluating the absolute efficacy of donepezil, ginkgo biloba, memantine and rivastigmine versus placebo in improving memory performance with diverse assessment scales. Overall, clinical and methodological heterogeneities existed across these studies. Moreover, most of them had methodological limitations on non-specific selections of targeted sample, non-matched variables at baseline or incomplete outcome data (high attrition bias). Only the two studies on donepezil had clinical and methodological homogeneity and relatively low risks for bias. One RCT evaluating estriol versus placebo is currently ongoing.\nWe could not carry out a meta-analysis due to the heterogeneities across studies and the high attrition bias. A subgroup analysis for donepezil versus placebo showed no treatment effects on total recall on the Selective Reminding Test (mean difference (MD) 1.68; 95% confidence interval (CI) -2.21 to 5.58), total correct scores on the 10/36 Spatial Recall Test (MD -0.93; 95% CI -3.18 to 1.32), the Symbol Digit Modalities Test (MD -1.27; 95% CI -3.15 to 0.61) and the Paced Auditory Serial Addition Test (2+3 sec) (MD 2.23; 95% CI -1.87 to 6.33). 
Concerning safety, the main adverse events were diarrhoea (risk ratio (RR) 3.88; 95% CI 1.66 to 9.05), nausea (RR 1.71; 95% CI 0.93 to 3.18) and abnormal dreams (RR 2.91; 95% CI 1.38 to 6.14). However, the results of both studies were subject to serious imprecision, resulting from small sample sizes and low statistical power (below 80%), which contributed to the moderate quality of the evidence. No serious adverse events were attributed to the treatments in any experimental group.\nAuthors' conclusions\nWe found no convincing evidence to support the efficacy of pharmacological symptomatic treatment for MS-associated memory disorder because most of the available RCTs were of limited quality. Whether pharmacological treatment is effective for memory disorder in patients with MS remains inconclusive. However, there is moderate-quality evidence that donepezil 10 mg daily was not effective in improving memory in MS patients with mild memory impairment, but was well tolerated. Adverse events such as nausea, diarrhoea and abnormal dreams were not frequent but were associated with treatment. Ginkgo biloba, memantine and rivastigmine were safe and well tolerated and no serious adverse effects were reported. Future large-scale RCTs with higher methodological quality are needed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Memory disorder is one of the most frequent cognitive impairments (where people have difficulty with thinking, learning and memory) in people with multiple sclerosis (MS), affecting an estimated 40% to 60% of people and represents a considerable burden to those people. Data on the use of agents such as donepezil, ginkgo biloba, memantine and rivastigmine, either by mouth (orally) and by other means (parenterally), seemed to provide promising results. The authors of this review tried to assess the extent by which these treatments produced beneficial results (efficacy), whether they had side effects (tolerability) and whether they were safe in adults with all the types of MS."
}
] |
query-laysum
|
6057
|
[
{
"role": "user",
"content": "Abstract: Background\nLong-acting reversible contraception (LARC), including intrauterine devices (IUDs) and contraceptive implants, are highly effective, reversible methods of contraception. Providing LARC methods during the postpartum period is important to support contraceptive choice, and to prevent unintended pregnancy and short interpregnancy intervals. Delaying offering contraception to postpartum people until the first comprehensive postpartum visit, traditionally at around six weeks postpartum, may put some postpartum people at risk of unintended pregnancy, either due to loss to follow-up or because of initiation of sexual intercourse prior to receiving contraception. Therefore, immediate provision of highly effective contraception, prior to discharge from hospital, has the potential to improve contraceptive use and prevent unintended pregnancies and short interpregnancy intervals.\nObjectives\nTo compare the initiation rate, utilization rates (at six months and 12 months after delivery), effectiveness, and adverse effects of immediate versus delayed postpartum insertion of implants and IUDs for contraception.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and POPLINE for eligible studies up to December 2020. We examined review articles and contacted investigators. We checked registers of ongoing clinical trials, citation lists of included studies, key textbooks, grey literature, and previous systematic reviews for potentially relevant studies.\nSelection criteria\nWe sought randomized controlled trials (RCTs) that compared immediate postpartum versus delayed insertion of contraceptive implant and IUDs for contraception.\nData collection and analysis\nTwo review authors (JS, SK) independently screened titles and abstracts of the search results, and assessed the full-text articles of potentially relevant studies for inclusion. They extracted data from the included studies, assessed risk of bias, compared results, and resolved disagreements by consulting a third review author (PL, SA or PP). We contacted investigators for additional data, where possible. We computed the Mantel-Haenszel or inverse variance risk ratio (RR) with 95% confidence interval (CI) for binary outcomes and the mean difference (MD) with 95% CI for continuous variables.\nMain results\nIn this updated review, 16 studies met the inclusion criteria; five were studies of contraceptive implants (715 participants) and 11 were studies of IUDs (1894 participants). We identified 12 ongoing studies. 
We applied GRADE judgements to our results; the overall certainty of the evidence for each outcome ranged from moderate to very low, with the main limitations being risk of bias, inconsistency, and imprecision.\nContraceptive implants\nImmediate insertion probably improves the initiation rate for contraceptive implants compared with delayed insertion (RR 1.48, 95% CI 1.11 to 1.98; 5 studies, 715 participants; I2 = 95%; moderate-certainty evidence).\nWe are uncertain if there was a difference between the two groups for the utilization rate of contraceptive implants at six months after delivery (RR 1.16, 95% CI 0.90 to 1.50; 3 studies, 330 participants; I2 = 89%; very low-certainty evidence) or at 12 months after insertion (RR 0.98, 95% CI 0.93 to 1.04; 2 studies, 164 participants; I2 = 0%; very low-certainty evidence).\nPeople who received an immediate postpartum contraceptive implant insertion may have had a higher mean number of days of prolonged vaginal bleeding within six weeks postpartum (mean difference (MD) 2.98 days, 95% CI -2.71 to 8.66; 2 studies, 420 participants; I2 = 91%; low-certainty evidence) and a higher rate of other adverse effects in the first six weeks after birth (RR 2.06, 95% CI 1.38 to 3.06; 1 study, 215 participants; low-certainty evidence) than those who received a delayed postpartum insertion. We are uncertain if there was a difference between the two groups for prolonged bleeding at six months after delivery (RR 1.19, 95% CI 0.29 to 4.94; 2 studies, 252 participants; I2 = 0%; very low-certainty evidence).\nThere may be little or no difference between the two groups for rates of unintended pregnancy at six months (RR 0.20, 95% CI 0.01 to 4.08; 1 study, 205 participants; low-certainty evidence). We are uncertain whether there was a difference in rates of unintended pregnancy at 12 months postpartum (RR 1.82, 95% CI 0.38 to 8.71; 1 study, 64 participants; very low-certainty evidence). There may be little or no difference between the two groups for any breastfeeding rates at six months (RR 0.97, 95% CI 0.92 to 1.01; 2 studies, 225 participants; I2 = 48%; low-certainty evidence).\nIUDs\nImmediate insertion of IUDs probably improves the initiation rate compared with delayed insertion, regardless of type of IUD (RR 1.27, 95% CI 1.07 to 1.51; 10 studies, 1894 participants; I2 = 98%; moderate-certainty evidence). However, people who received an immediate postpartum IUD insertion may have had a higher expulsion rate at six months after delivery (RR 4.55, 95% CI 2.52 to 8.19; 8 studies, 1206 participants; I2 = 31%; low-certainty evidence) than those who received a delayed postpartum insertion.\nWe are uncertain if there was a difference between the two groups in the utilization of IUDs at six months after insertion (RR 1.02, 95% CI 0.65 to 1.62; 6 studies, 971 participants; I2 = 96%; very low-certainty evidence) or at 12 months after insertion (RR 0.86, 95% CI 0.5 to 1.47; 3 studies, 796 participants; I2 = 92%; very low-certainty evidence).\nImmediate IUD insertion may reduce unintended pregnancy at 12 months (RR 0.26, 95% CI 0.17 to 0.41; 1 study, 1000 participants; low-certainty evidence). 
We are uncertain whether there was a difference in any breastfeeding rates at six months in people receiving progestin-releasing IUDs (RR 0.90, 95% CI 0.63 to 1.30; 5 studies, 435 participants; I2 = 54%; very low-certainty evidence).\nAuthors' conclusions\nEvidence from this updated review indicates that immediate postpartum insertion improves the initiation rate of both contraceptive implants and IUDs by the first postpartum visit compared to delayed insertion. However, it is not clear whether there are differences in utilization rates at six and 12 months postpartum. We are uncertain whether there is any difference in the unintended pregnancy rate at 12 months. Provision of progestin-releasing implants and IUDs immediately postpartum may have little or no negative impact on breastfeeding. However, the expulsion rate of IUDs and the prolonged vaginal bleeding associated with immediate implants appear to be higher.\n\nGiven the provided abstract, please respond to the following query: \"What are contraceptive implants or intrauterine devices?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Contraceptive implant and intrauterine devices (IUDs) are highly effective methods of birth control that are reversible and safe for use shortly after childbirth. Implants are inserted into the upper arm, and IUDs into the womb (uterus) by doctors or nurses. People who use these methods of contraception use either an implant or an IUD.\nAppropriate spacing of pregnancies is good for the health of both pregnant people and newborn babies. Normally, contraception is provided at the first comprehensive health visit after childbirth (usually around six weeks after giving birth). However, some people have sex before this visit, or do not attend it, which increases the risk of unplanned pregnancy. Insertion of a contraceptive implant or IUD within days of childbirth, before discharge from hospital, is convenient for patients and medical staff. The person having the implant or IUD is known not to be pregnant, and this practice may increase the number of people who are able to use these contraception methods."
}
] |
query-laysum
|
6058
|
[
{
"role": "user",
"content": "Abstract: Background\nLead exposure is a serious health hazard, especially for children. It is associated with physical, cognitive and neurobehavioural impairment in children. There are many potential sources of lead in the environment, therefore trials have tested many household interventions to prevent or reduce lead exposure. This is an update of a previously published review.\nObjectives\nTo assess the effects of household interventions intended to prevent or reduce further lead exposure in children on improvements in cognitive and neurobehavioural development, reductions in blood lead levels and reductions in household dust lead levels.\nSearch methods\nIn March 2020, we updated our searches of CENTRAL, MEDLINE, Embase, 10 other databases and ClinicalTrials.gov. We also searched Google Scholar, checked the reference lists of relevant studies and contacted experts to identify unpublished studies.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs of household educational or environmental interventions, or combinations of interventions to prevent lead exposure in children (from birth to 18 years of age), where investigators reported at least one standardised outcome measure.\nData collection and analysis\nTwo authors independently reviewed all eligible studies for inclusion, assessed risk of bias and extracted data. We contacted trialists to obtain missing information. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 17 studies (three new to this update), involving 3282 children: 16 RCTs (involving 3204 children) and one quasi-RCT (involving 78 children). Children in all studies were under six years of age. Fifteen studies took place in urban areas of North America, one in Australia and one in China. Most studies were in areas with low socioeconomic status. Girls and boys were equally represented in those studies reporting this information. The duration of the intervention ranged from three months to 24 months in 15 studies, while two studies performed interventions on a single occasion. Follow-up periods ranged from three months to eight years. Three RCTs were at low risk of bias in all assessed domains. The other 14 studies were at unclear or high risk of bias; for example, we considered two RCTs and one quasi-RCT at high risk of selection bias and six RCTs at high risk of attrition bias. National or international research grants or governments funded 15 studies, while the other two did not report their funding sources.\nEducation interventions versus no intervention\nNone of the included studies in this comparison assessed effects on cognitive or neurobehavioural outcomes, or adverse events. All studies reported data on blood lead level outcomes.\nEducational interventions showed there was probably no evidence of a difference in reducing blood lead levels (continuous: mean difference (MD) –0.03, 95% confidence interval (CI) –0.13 to 0.07; I² = 0%; 5 studies, 815 participants; moderate-certainty evidence; log-transformed data), or in reducing floor dust levels (MD –0.07, 95% CI –0.37 to 0.24; I² = 0%; 2 studies, 318 participants; moderate-certainty evidence).\nEnvironmental interventions versus no intervention\nDust control: one study in this comparison reported data on cognitive and neurobehavioural outcomes, and on adverse events in children. The study showed numerically there may be better neurobehavioural outcomes in children of the intervention group. 
However, differences were small and the CI included both a beneficial and non-beneficial effect of the environmental intervention (e.g. mental development (Bayley Scales of Infant Development-II): MD 0.1, 95% CI –2.1 to 2.4; 1 study, 302 participants; low-certainty evidence). The same study did not observe any adverse events related to the intervention during the eight-year follow-up, but observed two children with adverse events in the control group (1 study, 355 participants; very low-certainty evidence).\nMeta-analysis also found no evidence of effectiveness on blood lead levels (continuous: MD –0.02, 95% CI –0.09 to 0.06; I² = 0%; 4 studies, 565 participants; moderate-certainty evidence; log-transformed data). We could not pool the data regarding floor dust levels, but studies reported that there may be no evidence of a difference between the groups (very low-certainty evidence).\nSoil abatement: the two studies assessing this environmental intervention only reported on the outcome of 'blood lead level'. One study showed a small effect on blood lead level reduction, while the other study showed no effect. Therefore, we deem the current evidence insufficient to draw conclusions about the effectiveness of soil abatement (very low-certainty evidence).\nCombination of educational and environmental interventions versus standard education\nStudies in this comparison only reported on blood lead levels and dust lead levels. We could not pool the studies in a meta-analysis due to substantial differences between the studies. Since the studies reported inconsistent results, the evidence is currently insufficient to clarify whether a combination of interventions reduces blood lead levels and floor dust levels (very low-certainty evidence).\nAuthors' conclusions\nBased on available evidence, household educational interventions and environmental interventions (namely dust control measures) show no evidence of a difference in reducing blood lead levels in children as a population health measure. The evidence of the effects of environmental interventions on cognitive and neurobehavioural outcomes and adverse events is uncertain too.\nFurther trials are required to establish the most effective intervention for reducing or even preventing further lead exposure. Key elements of these trials should include strategies to reduce multiple sources of lead exposure simultaneously using empirical dust clearance levels. It is also necessary for trials to be carried out in low- and middle-income countries and in differing socioeconomic groups in high-income countries.\n\nGiven the provided abstract, please respond to the following query: \"Environmental interventions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "one study comparing dust control measures with no intervention showed little to no difference in cognitive and neurobehavioural outcomes between the groups after three years to eight years. The same study assessed harms and found none associated with the intervention, but observed two children with side effects in the control group. All included studies in this comparison found that dust control did not lead to more or less reduced blood lead levels of young children (moderate-quality evidence) or floor dust levels than no intervention (very low-quality evidence). Two studies assessed the effect of soil abatement and did not allow any conclusions about its effectiveness (very low-quality evidence)."
}
] |
query-laysum
|
6059
|
[
{
"role": "user",
"content": "Abstract: Background\nEndocrine therapy is effective at preventing or treating breast cancer. Some forms of endocrine therapy have been shown to reduce mammographic density. Reduced mammographic density for women receiving endocrine therapy could be used to estimate the chance of breast cancer returning or developing breast cancer in the first instance (a prognostic biomarker). In addition, changes in mammographic density might be able to predict how well a woman responds to endocrine therapy (a predictive biomarker). The role of breast density as a prognostic or predictive biomarker could help improve the management of breast cancer.\nObjectives\nTo assess the evidence that a reduction in mammographic density following endocrine therapy for breast cancer prevention in women without previous breast cancer, or for treatment in women with early-stage hormone receptor-positive breast cancer, is a prognostic or predictive biomarker.\nSearch methods\nWe searched the Cochrane Breast Cancer Group Specialised Register, CENTRAL, MEDLINE, Embase, and two trials registers on 3 August 2020 along with reference checking, bibliographic searching, and contact with study authors to obtain further data.\nSelection criteria\nWe included randomised, cohort and case-control studies of adult women with or without breast cancer receiving endocrine therapy. Endocrine therapy agents included were selective oestrogen receptor modulators and aromatase inhibitors. We required breast density before start of endocrine therapy and at follow-up. We included studies published in English.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently extracted data and assessed risk of bias using adapted Quality in Prognostic Studies (QUIPS) and Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tools. We used the GRADE approach to evaluate the certainty of the evidence. We did not perform a quantitative meta-analysis due to substantial heterogeneity across studies.\nMain results\nEight studies met our inclusion criteria, of which seven provided data on outcomes listed in the protocol (5786 women). There was substantial heterogeneity across studies in design, sample size (349 to 1066 women), participant characteristics, follow-up (5 to 14 years), and endocrine therapy agent. There were five breast density measures and six density change definitions. All studies had at least one domain as at moderate or high risk of bias. Common concerns were whether the study sample reflected the review target population, and likely post hoc definitions of breast density change.\nMost studies on prognosis for women receiving endocrine therapy reported a reduced risk associated with breast density reduction. Across endpoints, settings, and agents, risk ratio point estimates (most likely value) were between 0.1 and 1.5, but with substantial uncertainty. There was greatest consistency in the direction and magnitude of the effect for tamoxifen (across endpoints and settings, risk ratio point estimates were between 0.3 and 0.7). The findings are summarised as follows.\nPrognostic biomarker findings:\nTreatment\nBreast cancer mortality\nTwo studies of 823 women on tamoxifen (172 breast cancer deaths) reported risk ratio point estimates of ~0.4 and ~0.5 associated with a density reduction. 
The certainty of the evidence was low.\nRecurrence\nTwo studies of 1956 women on tamoxifen reported risk ratio point estimates of ~0.4 and ~0.7 associated with a density reduction. There was risk of bias in methodology for design and analysis of the studies and considerable uncertainty over the size of the effect.\nOne study of 175 women receiving an aromatase inhibitor reported a risk ratio point estimate of ~0.1 associated with a density reduction. There was considerable uncertainty about the effect size and a moderate or high risk of bias in all domains.\nOne study of 284 women receiving exemestane or tamoxifen as part of a randomised controlled trial reported risk ratio point estimates of ~1.5 (loco-regional recurrence) and ~1.3 (distant recurrence) associated with a density reduction. There was risk of bias in reporting and study confounding, and uncertainty over the size of the effects.\nThe certainty of the evidence for all recurrence endpoints was very low.\nIncidence of a second primary breast cancer\nTwo studies of 451 women on exemestane, tamoxifen, or unknown endocrine therapy reported risk ratio point estimates of ~0.5 and ~0.6 associated with a density reduction. There was risk of bias in reporting and study confounding, and uncertainty over the effect size. The certainty of the evidence was very low.\nWe were unable to find data regarding the remaining nine outcomes prespecified in the review protocol.\nPrevention\nIncidence of invasive breast cancer and ductal carcinoma in situ (DCIS)\nOne study of 507 women without breast cancer who were receiving preventive tamoxifen as part of a randomised controlled trial (51 subsequent breast cancers) reported a risk ratio point estimate of ~0.3 associated with a density reduction. The certainty of the evidence was low.\nPredictive biomarker findings:\nOne study of a subset of 1065 women from a randomised controlled trial assessed how much the effect of endocrine therapy could be explained by breast density declines in those receiving endocrine therapy. This study evaluated the prevention of invasive breast cancer and DCIS. We found some evidence to support the hypothesis, with a risk ratio interaction point estimate of ~0.5. However, the 95% confidence interval included unity, and data were based on 51 women with subsequent breast cancer in the tamoxifen group. The certainty of the evidence was low.\nAuthors' conclusions\nThere is low-/very low-certainty evidence to support the hypothesis that breast density change following endocrine therapy is a prognostic biomarker for treatment or prevention. Studies suggested a potentially large effect size with tamoxifen, but the evidence was limited. There was less evidence that breast density change following tamoxifen preventive therapy is a predictive biomarker than a prognostic biomarker. Evidence for breast density change as a prognostic treatment biomarker was stronger for tamoxifen than for aromatase inhibitors. There were no studies reporting mammographic density change following endocrine therapy as a predictive biomarker in the treatment setting, nor aromatase inhibitor therapy as a prognostic or predictive biomarker in the preventive setting. Further research is warranted to assess mammographic density as a biomarker for all classes of endocrine therapy and review endpoints.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for previously published studies. We assessed whether a reduction in breast density after receiving endocrine therapy was associated with better outcomes. For women without breast cancer, this focused on whether those with decreased breast density were less likely to develop breast cancer. For women with breast cancer, this included whether those with greater decreases in breast density were less likely to die from breast cancer."
}
] |
query-laysum
|
6060
|
[
{
"role": "user",
"content": "Abstract: Background\nMidwives are primary providers of care for childbearing women around the world. However, there is a lack of synthesised information to establish whether there are differences in morbidity and mortality, effectiveness and psychosocial outcomes between midwife-led continuity models and other models of care.\nObjectives\nTo compare midwife-led continuity models of care with other models of care for childbearing women and their infants.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (25 January 2016) and reference lists of retrieved studies.\nSelection criteria\nAll published and unpublished trials in which pregnant women are randomly allocated to midwife-led continuity models of care or other models of care during pregnancy and birth.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. The quality of the evidence was assessed using the GRADE approach.\nMain results\nWe included 15 trials involving 17,674 women. We assessed the quality of the trial evidence for all primary outcomes (i.e. regional analgesia (epidural/spinal), caesarean birth, instrumental vaginal birth (forceps/vacuum), spontaneous vaginal birth, intact perineum, preterm birth (less than 37 weeks) and all fetal loss before and after 24 weeks plus neonatal death using the GRADE methodology: all primary outcomes were graded as of high quality.\nFor the primary outcomes, women who had midwife-led continuity models of care were less likely to experience regional analgesia (average risk ratio (RR) 0.85, 95% confidence interval (CI) 0.78 to 0.92; participants = 17,674; studies = 14; high quality), instrumental vaginal birth (average RR 0.90, 95% CI 0.83 to 0.97; participants = 17,501; studies = 13; high quality), preterm birth less than 37 weeks (average RR 0.76, 95% CI 0.64 to 0.91; participants = 13,238; studies = eight; high quality) and less all fetal loss before and after 24 weeks plus neonatal death (average RR 0.84, 95% CI 0.71 to 0.99; participants = 17,561; studies = 13; high quality evidence). Women who had midwife-led continuity models of care were more likely to experience spontaneous vaginal birth (average RR 1.05, 95% CI 1.03 to 1.07; participants = 16,687; studies = 12; high quality). There were no differences between groups for caesarean births or intact perineum.\nFor the secondary outcomes, women who had midwife-led continuity models of care were less likely to experience amniotomy (average RR 0.80, 95% CI 0.66 to 0.98; participants = 3253; studies = four), episiotomy (average RR 0.84, 95% CI 0.77 to 0.92; participants = 17,674; studies = 14) and fetal loss less than 24 weeks and neonatal death (average RR 0.81, 95% CI 0.67 to 0.98; participants = 15,645; studies = 11). Women who had midwife-led continuity models of care were more likely to experience no intrapartum analgesia/anaesthesia (average RR 1.21, 95% CI 1.06 to 1.37; participants = 10,499; studies = seven), have a longer mean length of labour (hours) (mean difference (MD) 0.50, 95% CI 0.27 to 0.74; participants = 3328; studies = three) and more likely to be attended at birth by a known midwife (average RR 7.04, 95% CI 4.48 to 11.08; participants = 6917; studies = seven). 
There were no differences between groups for fetal loss equal to/after 24 weeks and neonatal death, induction of labour, antenatal hospitalisation, antepartum haemorrhage, augmentation/artificial oxytocin during labour, opiate analgesia, perineal laceration requiring suturing, postpartum haemorrhage, breastfeeding initiation, low birthweight infant, five-minute Apgar score less than or equal to seven, neonatal convulsions, admission of infant to special care or neonatal intensive care unit(s) or in mean length of neonatal hospital stay (days).\nDue to a lack of consistency in measuring women's satisfaction and assessing the cost of various maternity models, these outcomes were reported narratively. The majority of included studies reported a higher rate of maternal satisfaction in midwife-led continuity models of care. Similarly, there was a trend towards a cost-saving effect for midwife-led continuity care compared to other care models.\nAuthors' conclusions\nThis review suggests that women who received midwife-led continuity models of care were less likely to experience intervention and more likely to be satisfied with their care with at least comparable adverse outcomes for women or their infants than women who received other models of care.\nFurther research is needed to explore findings of fewer preterm births and fewer fetal deaths less than 24 weeks, and all fetal loss/neonatal death associated with midwife-led continuity models of care.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Midwife-led continuity models provide care from the same midwife or team of midwives during the pregnancy, birth and the early parenting period, and many women value this. These midwives also involve other care-providers if they are needed. Obstetrician-led or family doctor-led models are not usually able to provide the same midwife/wives throughout. We need to know if the midwife-led continuity model is safe, and if it brings benefits to mothers and babies."
}
] |
query-laysum
|
6061
|
[
{
"role": "user",
"content": "Abstract: Background\nThe detection and diagnosis of caries at the earliest opportunity is fundamental to the preservation of tooth tissue and maintenance of oral health. Radiographs have traditionally been used to supplement the conventional visual-tactile clinical examination. Accurate, timely detection and diagnosis of early signs of disease could afford patients the opportunity of less invasive treatment with less destruction of tooth tissue, reduce the need for treatment with aerosol-generating procedures, and potentially result in a reduced cost of care to the patient and to healthcare services.\nObjectives\nTo determine the diagnostic accuracy of different dental imaging methods to inform the detection and diagnosis of non-cavitated enamel only coronal dental caries.\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 31 December 2018); Embase Ovid (1980 to 31 December 2018); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 31 December 2018); and the World Health Organization International Clinical Trials Registry Platform (to 31 December 2018). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared a dental imaging method with a reference standard (histology, excavation, enhanced visual examination), studies that evaluated the diagnostic accuracy of single index tests, and studies that directly compared two or more index tests. Studies reporting at both the patient or tooth surface level were included. In vitro and in vivo studies were eligible for inclusion. Studies that explicitly recruited participants with more advanced lesions that were obviously into dentine or frankly cavitated were excluded. We also excluded studies that artificially created carious lesions and those that used an index test during the excavation of dental caries to ascertain the optimum depth of excavation.\nData collection and analysis\nTwo review authors extracted data independently and in duplicate using a standardised data extraction form and quality assessment based on QUADAS-2 specific to the clinical context. Estimates of diagnostic accuracy were determined using the bivariate hierarchical method to produce summary points of sensitivity and specificity with 95% confidence regions. Comparative accuracy of different radiograph methods was conducted based on indirect and direct comparisons between methods. Potential sources of heterogeneity were pre-specified and explored visually and more formally through meta-regression.\nMain results\nWe included 104 datasets from 77 studies reporting a total of 15,518 tooth sites or surfaces. The most frequently reported imaging methods were analogue radiographs (55 datasets from 51 studies) and digital radiographs (42 datasets from 40 studies) followed by cone beam computed tomography (CBCT) (7 datasets from 7 studies). Only 17 studies were of an in vivo study design, carried out in a clinical setting. No studies were considered to be at low risk of bias across all four domains but 16 studies were judged to have low concern for applicability across all domains. 
The patient selection domain had the largest number of studies judged to be at high risk of bias (43 studies); the index test, reference standard, and flow and timing domains were judged to be at high risk of bias in 30, 12, and 7 studies respectively.\nStudies were synthesised using a hierarchical bivariate method for meta-analysis. There was substantial variability in the results of the individual studies, with sensitivities that ranged from 0 to 0.96 and specificities from 0 to 1.00. For all imaging methods the estimated summary sensitivity and specificity point was 0.47 (95% confidence interval (CI) 0.40 to 0.53) and 0.88 (95% CI 0.84 to 0.92), respectively. In a cohort of 1000 tooth surfaces with a prevalence of enamel caries of 63%, this would result in 337 tooth surfaces being classified as disease free when enamel caries was truly present (false negatives), and 43 tooth surfaces being classified as diseased in the absence of enamel caries (false positives). Meta-regression indicated that measures of accuracy differed according to the imaging method (Chi2(4) = 32.44, P < 0.001), with the highest sensitivity observed for CBCT, and the highest specificity observed for analogue radiographs. None of the specified potential sources of heterogeneity were able to explain the variability in results. No studies included restored teeth in their sample or reported the inclusion of sealants.\nWe rated the certainty of the evidence as low for sensitivity and specificity and downgraded two levels in total for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the in vitro studies, and the observed inconsistency of the results.\nAuthors' conclusions\nThe design and conduct of studies to determine the diagnostic accuracy of methods to detect and diagnose caries in situ are particularly challenging. Low-certainty evidence suggests that imaging for the detection or diagnosis of early caries may have poor sensitivity but acceptable specificity, resulting in a relatively high number of false-negative results with the potential for early disease to progress. If early caries is left untreated, the opportunity to provide professional or self-care practices to arrest or reverse the lesions will be missed. The specificity of lesion detection is, however, relatively high, and one could argue that initiation of non-invasive management (such as the use of topical fluoride) is probably of low risk.\nCBCT showed superior sensitivity to analogue or digital radiographs but has very limited applicability to the general dental practitioner. However, given the high radiation dose, and the potential for caries-like artefacts from existing restorations, its use cannot be justified in routine caries detection. Nonetheless, if early incidental carious lesions are detected in CBCT scans taken for other purposes, these should be reported. CBCT has the potential to be used as a reference standard in diagnostic studies of this type.\nDespite the robust methodology applied in this comprehensive review, the results should be interpreted with some caution due to shortcomings in the design and execution of many of the included studies. Future research should evaluate the comparative accuracy of different methods, be undertaken in a clinical setting, and focus on minimising bias arising from the use of imperfect reference standards in clinical studies.\n\nGiven the provided abstract, please respond to the following query: \"What are the implications of this review?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Low-certainty evidence suggests that imaging for the detection or diagnosis of early tooth decay may result in a relatively high proportion of false-negative results, with the potential for early disease to become more advanced. If left untreated, the opportunity to provide professional or self-care practices to arrest or reverse early tooth decay will be missed."
}
] |
query-laysum
|
6062
|
[
{
"role": "user",
"content": "Abstract: Background\nAnimal-derived surfactants have been shown to have several advantages over the first generation synthetic surfactants and are the most commonly used surfactant preparations. The animal-derived surfactants in clinical use are minced or lavaged and modified or purified from bovine or porcine lungs. It is unclear whether significant differences in clinical outcome exist among the available bovine (modified minced or lavage) and porcine (minced or lavage) surfactant extracts.\nObjectives\nTo compare the effect of administration of different animal-derived surfactant extracts on the risk of mortality, chronic lung disease, and other morbidities associated with prematurity in preterm infants at risk for or having respiratory distress syndrome (RDS).\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 7), MEDLINE via PubMed (1966 to July 31, 2015), EMBASE (1980 to July 31, 2015), and CINAHL (1982 to July 31, 2015). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials.\nSelection criteria\nRandomized or quasi-randomized controlled trials that compared the effect of animal-derived surfactant extract treatment administered to preterm infants at risk for or having RDS to prevent complications of prematurity and mortality.\nData collection and analysis\nData regarding clinical outcomes were excerpted from the reports of the clinical trials by the review authors. Subgroup analyses were performed based on gestational age, surfactant dosing and schedule, treatment severity and treatment strategy. Data analysis was performed in accordance with the standards of the Cochrane Neonatal Review Group.\nMain results\nSixteen randomized controlled trials were included in the analysis.\nBovine lung lavage surfactant extract to modified bovine minced lung surfactant extract: Seven treatment studies and two prevention studies compared bovine lung lavage surfactant extract to modified bovine minced lung surfactant extract. The meta-analysis did not demonstrate any significant differences in death or chronic lung disease in the prevention trials (typical RR 1.02, 95% CI 0.89 to 1.17; typical RD 0.01, 95% CI −0.05 to 0.06; 2 studies and 1123 infants; high quality evidence) or treatment trials (typical RR 0.95, 95% CI 0.86 to 1.06; typical RD −0.02 , 95% CI −0.06 to 0.02; 3 studies and 2009 infants; high quality evidence)\nModified bovine minced lung surfactant extract compared with porcine minced lung surfactant extract: Nine treatment studies compared modified bovine minced lung surfactant extract to porcine minced lung surfactant extract. 
Meta-analysis of these trials demonstrated a significant increase in the risk of mortality prior to hospital discharge (typical RR 1.44, 95% CI 1.04 to 2.00; typical RD 0.05, 95% CI 0.01 to 0.10; NNTH 20, 95% CI 10 to 100; 9 studies and 901 infants; moderate quality evidence), death or oxygen requirement at 36 weeks' postmenstrual age (typical RR 1.30, 95% CI 1.04 to 1.64; typical RD 0.11, 95% CI 0.02 to 0.20; NNTH 9, 95% CI 5 to 50; 3 studies and 448 infants; moderate quality evidence), receiving more than one dose of surfactant (typical RR 1.57, 95% CI 1.29 to 1.92; typical RD 0.14, 95% CI 0.08 to 0.20; NNTH 7, 95% CI 5 to 13; 6 studies and 786 infants), and patent ductus arteriosus (PDA) requiring treatment (typical RR 1.86, 95% CI 1.28 to 2.70; typical RD 0.28, 95% CI 0.13 to 0.43; NNTH 4, 95% CI 2 to 8; 3 studies and 137 infants) in infants treated with modified bovine minced lung surfactant extract compared with porcine minced lung surfactant extract. In the subgroup analysis based on initial dose of surfactant, the improvement in mortality prior to discharge (typical RR 1.62, 95% CI 1.11 to 2.38; typical RD 0.06, 95% CI 0.01 to 0.11; NNTH 16, 95% CI 9 to 100) and in the risk of death or oxygen requirement at 36 weeks' postmenstrual age (typical RR 1.39, 95% CI 1.08 to 1.79; typical RD 0.13, 95% CI 0.03 to 0.23; NNTH 7, 95% CI 4 to 33) was limited to the higher initial dose of porcine minced lung surfactant (> 100 mg/kg).\nOther comparisons: No difference in outcome was noted between bovine lung lavage surfactant extract and porcine minced lung surfactant extract. There were no studies comparing bovine lung lavage surfactant extract versus porcine lung lavage surfactant, or porcine minced lung surfactant extract versus porcine lung lavage surfactant.\nAuthors' conclusions\nSignificant differences in clinical outcome were noted in the comparison trials of modified minced lung surfactant extract (beractant) compared with porcine minced lung surfactant extract (poractant alfa), including a significant increase in the risk of mortality prior to discharge, death or oxygen requirement at 36 weeks' postmenstrual age, PDA requiring treatment and \"receiving > 1 dose of surfactant\" in infants treated with modified bovine minced lung surfactant extract compared with porcine minced lung surfactant extract. The difference in these outcomes was limited to studies using a higher initial dose of porcine minced lung surfactant extract. It is uncertain whether the observed differences are from differences in dose or from source of extraction (porcine vs. bovine) because of the lack of dose-equivalent comparison groups with appropriate sample size. No differences in clinical outcomes were observed in comparative trials between bovine lung lavage surfactant and modified bovine minced lung surfactants.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found improvement in risk of death before hospital discharge and the risk of combined outcome of death or oxygen requirement at four weeks before due date of birth with the use of higher dose of porcine surfactant (a surfactant product derived from pig's lung) compared with a minced bovine surfactant (surfactant obtained from minced cow's lung). Based on the available evidence, we cannot state with certainty whether the improvement in outcome with high dose porcine-derived surfactant is from dose effect (higher phospholipid levels with use of higher initial dose) or from the effect of source of surfactant extraction (pig vs. cow). Comparison trials of surfactants obtained from bovine sources that used different extraction methods or modifications showed no difference in clinical outcomes."
}
] |
query-laysum
|
6063
|
[
{
"role": "user",
"content": "Abstract: Background\nOcular discomfort is the leading cause of permanent discontinuation of soft contact lens (SCL) wear. Silicone hydrogel and hydrogel materials are the two major categories of SCLs, with silicone hydrogel materials being newer and more breathable than hydrogel materials. Whether comfort is associated with SCL material is controversial despite numerous studies. Similarly, the difference between these materials in terms of safety outcomes (e.g. frequency of microbial keratitis) is unclear.\nObjectives\nTo evaluate the comparative effectiveness and safety of silicone hydrogel compared with hydrogel SCLs on self-reported comfort, dry eye test results, and adverse events in SCL-wearing adults 18 years of age or older.\nSearch methods\nThe Cochrane Eyes and Vision Information Specialist searched the electronic databases for randomized controlled trials (RCTs). There were no restrictions on language or date of publication. We searched the Cochrane Central Register of Controlled Trials (CENTRAL, including the Cochrane Eyes and Vision Trials Register; 2022, Issue 6), MEDLINE Ovid, Embase.com, PubMed, LILACS (Latin American and Caribbean Health Science Information database), ClinicalTrials.gov, and World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We also searched the reference lists of identified studies, review articles, and guidelines for information about relevant studies that may not have been identified by our search strategy. Furthermore, we contacted investigators regarding ongoing trials. The most recent database search was conducted on 24 June 2022.\nSelection criteria\nOur search selection criteria included RCTs, quasi-RCTs, and cross-over RCTs.\nData collection and analysis\nWe applied standard Cochrane methodology.\nMain results\nWe included seven parallel-group RCTs conducted in the USA, the UK, Australia, Germany, India, and Turkey. A total of 1371 participants were randomized. The duration of SCL wear ranged from one to 52 weeks.\nStudy characteristics and risk of bias\nThe median number of participants per trial was 120 (interquartile range: 51 to 314), and the average age ranged from 20.7 to 33.0 years. Women represented the majority of participants (range 55% to 74.9%; 5 RCTs). Collectively, the included trials compared eight different silicone hydrogel SCLs with three different hydrogel SCLs. Five trials compared daily disposable SCLs, and two compared extended wear SCLs (worn for seven days and six nights). New SCL wearers were enrolled in three trials. Two trials included both new and established SCL wearers, and two trials did not report participants' history of SCL use. Five trials were sponsored by industry. We judged the overall risk of bias to be 'high' or 'some concerns' for the safety and efficacy outcomes.\nFindings\nOne trial reported Ocular Surface Disease Index (OSDI) results, with the evidence being very uncertain about the effects of SCL material on OSDI scores (mean difference −1.20, 95% confidence interval [CI] −10.49 to 8.09; 1 RCT, 47 participants; very low certainty evidence). Three trials reported visual analog scale comfort score results, with no clear difference in comfort between materials, but the evidence was of very low certainty; trial results could not be combined because the three trials reported results at different time points. 
The evidence is very uncertain about the effect of SCL material on discontinuation of contact lens wear (risk ratio [RR] 0.64, 95% CI 0.11 to 3.74; 1 RCT, 248 participants). None of the included trials reported Contact Lens Dry Eye Questionnaire (CLDEQ-8) or Standard Patient Evaluation of Eye Dryness (SPEED) scores.\nThere was no evidence of a clinically meaningful difference (> 0.5 unit) between daily disposable silicone hydrogel and hydrogel SCLs in corneal staining, conjunctival staining, or conjunctival redness (very low certainty evidence).\nAdverse events\nVery low certainty evidence from two trials comparing daily disposable SCLs suggested no evidence of a difference between lens materials in the risk of vision-threatening adverse events at one to four weeks (RR 0.68, 95% CI 0.08 to 5.51; 2 RCTs, 368 participants). Two trials comparing extended wear SCLs indicated that hydrogel SCLs may have a 2.03 times lower risk of adverse events at 52 weeks compared with silicone hydrogel SCLs (RR 2.03, 95% CI 1.38 to 2.99; 815 participants), but the certainty of evidence was very low.\nAuthors' conclusions\nThe overall evidence for a difference between all included silicone hydrogel and hydrogel SCLs was of very low certainty, with most trials at high overall risk of bias. The majority of studies did not assess comfort using a validated instrument. There was insufficient evidence to support recommending one SCL material over the other. For extended wear, hydrogel SCLs may have a lower risk of adverse events at 52 weeks compared to silicone hydrogel SCLs. Future well-designed trials are needed to generate high certainty evidence to further clarify differences in SCL material comfort and safety.\n\nGiven the provided abstract, please respond to the following query: \"What are silicone hydrogel and hydrogel soft contact lenses?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Contact lenses are often used as an alternative to eyeglasses for vision correction. Unlike eyeglasses, contact lenses are placed directly onto the surface of the eye. SCLs are made from flexible plastics, which are typically made from either silicone hydrogel or hydrogel plastic materials. These materials are porous, with tiny spaces or holes that allow liquid and air to pass through to the surface of the eye. Silicone hydrogel materials are more breathable and have less water content compared with hydrogel SCLs."
}
] |