Schema: dataset — stringclasses (1 value) · id — stringlengths (1 to 5) · messages — listlengths (2 to 2)
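The schema line above describes a chat-style record layout: each row pairs the dataset name and a short string id with a two-message conversation (a user prompt containing the abstract plus a lay-summary query, and the assistant's plain-language answer). Below is a minimal sketch of how such rows could be iterated, assuming the rows are stored as one JSON object per line (JSONL); the file name and field access are illustrative assumptions, not part of this dump.

```python
import json

def iter_records(path="query-laysum.jsonl"):
    # Assumed JSONL export: one record per line. The file name is a
    # placeholder, not a confirmed artifact of this dataset.
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Each record carries the dataset name, a short string id, and
            # exactly two messages: the user prompt (abstract + query) and
            # the assistant's plain-language answer.
            yield record["id"], record["messages"]

if __name__ == "__main__":
    for record_id, messages in iter_records():
        user_prompt = next(m["content"] for m in messages if m["role"] == "user")
        lay_answer = next(m["content"] for m in messages if m["role"] == "assistant")
        print(record_id, lay_answer[:80])
```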
query-laysum
23350
[ { "role": "user", "content": "Abstract: Background\nPeople living in 'humanitarian settings' in low- and middle-income countries (LMICs) are exposed to a constellation of physical and psychological stressors that make them vulnerable to developing mental disorders. A range of psychological and social interventions have been implemented with the aim to prevent the onset of mental disorders and/or lower psychological distress in populations at risk, and it is not known whether interventions are effective.\nObjectives\nTo compare the efficacy and acceptability of psychological and social interventions versus control conditions (wait list, treatment as usual, attention placebo, psychological placebo, or no treatment) aimed at preventing the onset of non-psychotic mental disorders in people living in LMICs affected by humanitarian crises.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Controlled Trials Register (CCMD-CTR), the Cochrane Drugs and Alcohol Review Group (CDAG) Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (OVID), Embase (OVID), PsycINFO (OVID), and ProQuest PILOTS database with results incorporated from searches to February 2020. We also searched the World Health Organization's (WHO) International Clinical Trials Registry Platform and ClinicalTrials.gov to identify unpublished or ongoing studies. We checked the reference lists of relevant studies and reviews.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing psychological and social interventions versus control conditions to prevent the onset of mental disorders in adults and children living in LMICs affected by humanitarian crises. We excluded studies that enrolled participants based on a positive diagnosis of mental disorder (or based on a proxy of scoring above a cut-off score on a screening measure).\nData collection and analysis\nWe calculated standardised mean differences for continuous outcomes and risk ratios for dichotomous data, using a random-effects model. We analysed data at endpoint (zero to four weeks after therapy) and at medium term (one to four months after intervention). No data were available at long term (six months or longer). We used GRADE to assess the quality of evidence.\nMain results\nIn the present review we included seven RCTs with a total of 2398 participants, coming from both children/adolescents (five RCTs), and adults (two RCTs). Together, the seven RCTs compared six different psychosocial interventions against a control comparator (waiting list in all studies). All the interventions were delivered by paraprofessionals and, with the exception of one study, delivered at a group level.\nNone of the included studies provided data on the efficacy of interventions to prevent the onset of mental disorders (incidence). For the primary outcome of acceptability, there may be no evidence of a difference between psychological and social interventions and control at endpoint for children and adolescents (RR 0.93, 95% CI 0.78 to 1.10; 5 studies, 1372 participants; low-quality evidence) or adults (RR 0.96, 95% CI 0.61 to 1.50; 2 studies, 767 participants; very low quality evidence). 
No information on adverse events related to the interventions was available.\nFor children's and adolescents' secondary outcomes of prevention interventions, there may be no evidence of a difference between psychological and social intervention groups and control groups for reducing PTSD symptoms (standardised mean difference (SMD) −0.16, 95% CI −0.50 to 0.18; 3 studies, 590 participants; very low quality evidence), depressive symptoms (SMD −0.01, 95% CI −0.29 to 0.31; 4 RCTs, 746 participants; very low quality evidence) and anxiety symptoms (SMD 0.11, 95% CI −0.09 to 0.31; 3 studies, 632 participants; very low quality evidence) at study endpoint.\nIn adults' secondary outcomes of prevention interventions, psychological counselling may be effective for reducing depressive symptoms (MD −7.50, 95% CI −9.19 to −5.81; 1 study, 258 participants; very low quality evidence) and anxiety symptoms (MD −6.10, 95% CI −7.57 to −4.63; 1 study, 258 participants; very low quality evidence) at endpoint. No data were available for PTSD symptoms in the adult population.\nOwing to the small number of RCTs included in the present review, it was not possible to carry out neither sensitivity nor subgroup analyses.\nAuthors' conclusions\nOf the seven prevention studies included in this review, none assessed whether prevention interventions reduced the incidence of mental disorders and there may be no evidence for any differences in acceptability. Additionally, for both child and adolescent populations and adult populations, a very small number of RCTs with low quality evidence on the review's secondary outcomes (changes in symptomatology at endpoint) did not suggest any beneficial effect for the studied prevention interventions.\nConfidence in the findings is hampered by the scarcity of prevention studies eligible for inclusion in the review, by risk of bias in the studies, and by substantial levels of heterogeneity. Moreover, it is possible that random error had a role in distorting results, and that a more thorough picture of the efficacy of prevention interventions will be provided by future studies. For this reason, prevention studies are urgently needed to assess the impact of interventions on the incidence of mental disorders in children and adults, with extended periods of follow-up.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We did not find any randomized evidence whether psychosocial interventions can stop mental health disorders developing in people living through humanitarian crises in low- to middle-income countries. We did not find enough reliable evidence about the benefits of these interventions in reducing mental health symptoms. Larger, well-conducted studies are needed to give more reliable evidence about the short- and long-term effects of psychosocial interventions to prevent mental disorders in people living in low- and middle-income countries affected by humanitarian crises." } ]
query-laysum
23351
[ { "role": "user", "content": "Abstract: Background\nThe primary manifestation of coronavirus disease 2019 (COVID-19) is respiratory insufficiency that can also be related to diffuse pulmonary microthrombosis and thromboembolic events, such as pulmonary embolism, deep vein thrombosis, or arterial thrombosis. People with COVID-19 who develop thromboembolism have a worse prognosis.\nAnticoagulants such as heparinoids (heparins or pentasaccharides), vitamin K antagonists and direct anticoagulants are used for the prevention and treatment of venous or arterial thromboembolism. Besides their anticoagulant properties, heparinoids have an additional anti-inflammatory potential. However, the benefit of anticoagulants for people with COVID-19 is still under debate.\nObjectives\nTo assess the benefits and harms of anticoagulants versus active comparator, placebo or no intervention in people hospitalised with COVID-19.\nSearch methods\nWe searched the CENTRAL, MEDLINE, Embase, LILACS and IBECS databases, the Cochrane COVID-19 Study Register and medRxiv preprint database from their inception to 14 April 2021. We also checked the reference lists of any relevant systematic reviews identified, and contacted specialists in the field for additional references to trials.\nSelection criteria\nEligible studies were randomised controlled trials (RCTs), quasi-RCTs, cluster-RCTs and cohort studies that compared prophylactic anticoagulants versus active comparator, placebo or no intervention for the management of people hospitalised with COVID-19. We excluded studies without a comparator group and with a retrospective design (all previously included studies) as we were able to include better study designs. Primary outcomes were all-cause mortality and necessity for additional respiratory support. Secondary outcomes were mortality related to COVID-19, deep vein thrombosis, pulmonary embolism, major bleeding, adverse events, length of hospital stay and quality of life.\nData collection and analysis\nWe used standard Cochrane methodological procedures. We used Cochrane RoB 1 to assess the risk of bias for RCTs, ROBINS-I to assess risk of bias for non-randomised studies (NRS) and GRADE to assess the certainty of evidence. We meta-analysed data when appropriate.\nMain results\nWe included seven studies (16,185 participants) with participants hospitalised with COVID-19, in either intensive care units, hospital wards or emergency departments. Studies were from Brazil (2), Iran (1), Italy (1), and the USA (1), and two involved more than country. The mean age of participants was 55 to 68 years and the follow-up period ranged from 15 to 90 days. The studies assessed the effects of heparinoids, direct anticoagulants or vitamin K antagonists, and reported sparse data or did not report some of our outcomes of interest: necessity for additional respiratory support, mortality related to COVID-19, and quality of life.\nHigher-dose versus lower-dose anticoagulants (4 RCTs, 4647 participants)\nHigher-dose anticoagulants result in little or no difference in all-cause mortality (risk ratio (RR) 1.03, 95% CI 0.92 to 1.16, 4489 participants; 4 RCTs) and increase minor bleeding (RR 3.28, 95% CI 1.75 to 6.14, 1196 participants; 3 RCTs) compared to lower-dose anticoagulants up to 30 days (high-certainty evidence). 
Higher-dose anticoagulants probably reduce pulmonary embolism (RR 0.46, 95% CI 0.31 to 0.70, 4360 participants; 4 RCTs), and slightly increase major bleeding (RR 1.78, 95% CI 1.13 to 2.80, 4400 participants; 4 RCTs) compared to lower-dose anticoagulants up to 30 days (moderate-certainty evidence). Higher-dose anticoagulants may result in little or no difference in deep vein thrombosis (RR 1.08, 95% CI 0.57 to 2.03, 3422 participants; 4 RCTs), stroke (RR 0.91, 95% CI 0.40 to 2.03, 4349 participants; 3 RCTs), major adverse limb events (RR 0.33, 95% CI 0.01 to 7.99, 1176 participants; 2 RCTs), myocardial infarction (RR 0.86, 95% CI 0.48 to 1.55, 4349 participants; 3 RCTs), atrial fibrillation (RR 0.35, 95% CI 0.07 to 1.70, 562 participants; 1 study), or thrombocytopenia (RR 0.94, 95% CI 0.71 to 1.24, 2789 participants; 2 RCTs) compared to lower-dose anticoagulants up to 30 days (low-certainty evidence). It is unclear whether higher-dose anticoagulants have any effect on necessity for additional respiratory support, mortality related to COVID-19, and quality of life (very low-certainty evidence or no data).\nAnticoagulants versus no treatment (3 prospective NRS, 11,538 participants)\nAnticoagulants may reduce all-cause mortality but the evidence is very uncertain due to two study results being at critical and serious risk of bias (RR 0.64, 95% CI 0.55 to 0.74, 8395 participants; 3 NRS; very low-certainty evidence). It is uncertain if anticoagulants have any effect on necessity for additional respiratory support, mortality related to COVID-19, deep vein thrombosis, pulmonary embolism, major bleeding, stroke, myocardial infarction and quality of life (very low-certainty evidence or no data).\nOngoing studies\nWe found 62 ongoing studies in hospital settings (60 RCTs, 35,470 participants; 2 prospective NRS, 120 participants) in 20 different countries. Thirty-five ongoing studies plan to report mortality and 26 plan to report necessity for additional respiratory support. We expect 58 studies to be completed in December 2021, and four in July 2022. From 60 RCTs, 28 are comparing different doses of anticoagulants, 24 are comparing anticoagulants versus no anticoagulants, seven are comparing different types of anticoagulants, and one did not report detail of the comparator group.\nAuthors' conclusions\nWhen compared to a lower-dose regimen, higher-dose anticoagulants result in little to no difference in all-cause mortality and increase minor bleeding in people hospitalised with COVID-19 up to 30 days. Higher-dose anticoagulants possibly reduce pulmonary embolism, slightly increase major bleeding, may result in little to no difference in hospitalisation time, and may result in little to no difference in deep vein thrombosis, stroke, major adverse limb events, myocardial infarction, atrial fibrillation, or thrombocytopenia.\nCompared with no treatment, anticoagulants may reduce all-cause mortality but the evidence comes from non-randomised studies and is very uncertain. 
It is unclear whether anticoagulants have any effect on the remaining outcomes compared to no anticoagulants (very low-certainty evidence or no data).\nAlthough we are very confident that new RCTs will not change the effects of different doses of anticoagulants on mortality and minor bleeding, high-quality RCTs are still needed, mainly for the other primary outcome (necessity for additional respiratory support), the comparison with no anticoagulation, when comparing the types of anticoagulants and giving anticoagulants for a prolonged period of time.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The evidence is up to date to 14 April 2021." } ]
query-laysum
23352
[ { "role": "user", "content": "Abstract: Background\nSitting can be viewed as a therapeutic intervention and an important part of a person's recovery process; but the risk of ulceration must be mitigated. Interventions for ulcer prevention in those at risk from prolonged sitting include the use of specialist cushions and surfaces, especially for wheelchair users. Whilst there is interest in the effects of different pressure redistributing cushions for wheelchairs, the benefits of pressure redistributing static chairs, compared with standard chairs, for pressure ulcer development in at-risk people are not clear.\nObjectives\nTo assess the effects of pressure redistributing static chairs on the prevention of pressure ulcers in health, rehabilitation and social care settings, and places of residence in which people may spend their day.\nSearch methods\nIn June 2021 we searched the following electronic databases to identify reports of relevant randomised clinical trials: the Cochrane Wounds Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE, Ovid Embase and EBSCO CINAHL Plus (Cumulative Index to Nursing and Allied Health Literature). We also searched clinical trials registers for ongoing and unpublished studies, and reference lists of relevant systematic reviews, meta-analyses and health technology reports. There were no restrictions by language, date of publication or study setting.\nSelection criteria\nWe sought to include published or unpublished randomised controlled trials that assessed pressure redistributing static chairs in the prevention or management of pressure ulcers.\nData collection and analysis\nTwo review authors independently performed study selection. We planned that two review authors would also assess the risk of bias, extract study data and assess the certainty of evidence according to GRADE methodology.\nMain results\nWe did not identify any studies that met the review eligibility criteria, nor any registered studies investigating the role of pressure redistributing static chairs in the prevention or management of pressure ulcers.\nAuthors' conclusions\nCurrently, there is no randomised evidence that supports or refutes the role of pressure redistributing static chairs in the prevention or management of pressure ulcers. This is a priority area and there is a need to explore this intervention with rigorous and robust research.\n\nGiven the provided abstract, please respond to the following query: \"What are pressure ulcers?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Pressure ulcers are injuries to the skin and underlying tissue that can be caused by prolonged pressure. Sitting can be an important part of a person's recovery process, but sitting for long periods can increase the risk of developing pressure ulcers." } ]
query-laysum
23353
[ { "role": "user", "content": "Abstract: Background\nThere is increasing evidence that propofol is efficacious and safe for procedural sedation (PS) in the emergency department (ED) setting. However, propofol has a narrow therapeutic window and lacks of a reversal agent. The aim of this review was to cohere the evidence base regarding the efficacy and safety profile of propofol when used in the ED setting for PS.\nObjectives\nTo identify and evaluate all randomized controlled trials (RCTs) comparing propofol with alternative drugs (benzodiazepines, barbiturates, etomidate and ketamine) used in the ED setting for PS.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 9), MEDLINE (1950 to September week 2 2013) and EMBASE (1980 to week 2 2013). We searched the Current Controlled Trials metaRegister of Clinical Trials (compiled by Current Science) (September 2013). We checked the reference lists of trials and contacted trial authors. We imposed no language restriction. We re-ran the search in February 2015. We will deal with the one study awaiting classification when we update the review.\nSelection criteria\nRCTs comparing propofol to alternative drugs (benzodiazepines, barbiturates, etomidate and ketamine) used in the ED setting for PS in participants of all ages.\nData collection and analysis\nTwo authors independently performed data extraction. Two authors performed trial quality assessment. We used mean difference (MD), odds ratio (OR) and 95% confidence intervals (CI) to measure effect sizes. Two authors independently assessed and rated the methodological quality of each trial using The Cochrane Collaboration tool for assessing risk of bias.\nMain results\nTen studies (813 participants) met the inclusion criteria. Two studies only included participants 18 years and younger; six studies only included participants 18 years and older; one study included participants between 16 and 65 years of age and one study included only adults but did not specify the age range. Eight of the included studies had a high risk of bias. The included studies were clinically heterogeneous. We undertook no meta-analysis.\nThe primary outcome measures of this review were: adverse effects (as defined by the study authors) and participant satisfaction (as defined by the study authors). In one study comparing propofol/fentanyl with ketamine/midazolam, delayed adverse reactions (nightmares and behavioural change) were noted in 10% of the ketamine/midazolam group and none in the propofol/fentanyl group. Seven individual studies reported no evidence of a difference in adverse effects between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions. Three individual studies reported no evidence of a difference in pain at the injection site between intravenous propofol and alternative interventions. Four individual studies reported no evidence of a difference in participant satisfaction between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions (ketamine, etomidate, midazolam). All the studies employed propofol without the use of an adjunctive analgesic and all, except one, were small (fewer than 100 participants) studies. The quality of evidence for the adverse effects and participant satisfaction outcomes was very low.\nNine included studies (eight comparisons) reported all the secondary outcome measures of the review except mortality. 
It was not possible to pool the results of the included studies for any of the secondary outcome measures because the comparator interventions were different and the measures were reported in different ways. Seven individual studies reported no evidence of difference in incidence of hypoxia between intravenous propofol, with and without adjunctive analgesic agents, and alternative interventions.\nAuthors' conclusions\nNo firm conclusions can be drawn concerning the comparative effects of administering intravenous propofol, with or without an adjunctive analgesic agent, with alternative interventions in participants undergoing PS in the ED setting on adverse effects (including pain at the injection site) and participant satisfaction. The review was limited because no two included studies employed the same comparator interventions, and because the number of participants in eight of the included studies were small (fewer than 100 participants).\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We reviewed the evidence regarding the use of propofol to sedate people in the ED undergoing painful medical procedures. We wanted to discover the effectiveness and safety of propofol compared with other drugs used to sedate people in the ED." } ]
query-laysum
23354
[ { "role": "user", "content": "Abstract: Background\nA dacryocystorhinostomy (DCR) procedure aims to restore drainage of tears by bypassing a blockage in the nasolacrimal duct, through the creation of a bony ostium that allows communication between the lacrimal sac and the nasal cavity. It can be performed using endonasal or external approaches. The comparative success rates of these two approaches have not yet been established and this review aims to evaluate the relevant up-to-date research.\nObjectives\nThe primary aim of this review is to compare the success rates of endonasal DCR with that of external DCR. The secondary aim is to compare the complication rates between the two procedures.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2016, Issue 8), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to 22 August 2016), Embase (January 1980 to 22 August 2016), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to 22 August 2016), Web of Science Conference Proceedings Citation Index- Science (CPCI-S) (January 1990 to 22 August 2016), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 22 August 2016. We requested or examined relevant conference proceedings for appropriate trials.\nSelection criteria\nWe included all randomised controlled trials (RCTs) comparing endonasal and external DCRs.\nData collection and analysis\nTwo review authors independently assessed studies for eligibility and extracted data on reported outcomes. We attempted to contact investigators to clarify the methodological quality of the studies. We graded the certainty of the evidence using GRADE.\nMain results\nWe included two trials in this review. One trial from Finland compared laser-assisted endonasal DCR with external DCR, and one trial from India compared mechanical endonasal DCR (using punch forceps) with external DCR. The trials were poorly reported and it was difficult to judge the extent to which bias had been avoided.\nAnatomic success was defined as the demonstration of a patent lacrimal passage on syringing, or endoscopic visualisation of fluorescein dye at the nasal opening of the anastomoses after a period of at least six months following surgery. Subjective success was defined as the resolution of symptoms of watering following surgery after a period of at least six months. Both included trials used anatomic patency demonstrated by irrigation as a measure of anatomic success. Different effects were seen in these two trials (I2 = 76%). People receiving laser-assisted endonasal DCR were less likely to have a successful operation compared with external DCR (63% versus 91%; risk ratio (RR) 0.69, 95% confidence intervals (CI) 0.52 to 0.92; 64 participants). There was little or no difference in success comparing mechanical endonasal DCR and external DCR (90% in both groups; RR 1.00, CI 0.81 to 1.23; 40 participants). We judged this evidence on success to be very low-certainty, downgrading for risk of bias, imprecision and inconsistency. The trial from Finland also assessed subjective improvement in symptoms following surgery. 
Resolution of symptoms of watering in outdoor conditions was reported by 84% of the participants in the external DCR group and 59% of those in the laser-assisted endonasal DCR group (RR 0.70, CI 0.51 to 0.97; 64 participants, low-certainty evidence).\nThere were no cases of intraoperative bleeding in any participant in the trial that compared laser-assisted endonasal DCR to external DCR. This was in contrast to the trial comparing mechanical endonasal DCR to external DCR in which 45% of participants in both groups experienced intraoperative bleeding (RR 1.00, 95% CI 0.50 to 1.98; 40 participants). We judged this evidence on intraoperative bleeding to be very low-certainty, downgrading for risk of bias, imprecision and inconsistency.\nThere were only two cases of postoperative bleeding, both in the external DCR group (RR 0.33, 95% CI 0.04 to 3.10; participants = 104; studies = 2). There were only two cases of wound infection/gaping, again both in the external DCR group (RR 0.20, CI 0.01 to 3.92; participants = 40; studies = 1). We judged this evidence on complications to be very low-certainty, downgrading one level for risk of bias and two levels for imprecision due to the very low number of cases.\nAuthors' conclusions\nThere is uncertainty as to the relative effects of endonasal and external DCR. Differences in effect seen in the two trials included in this review may be due to variations in the endonasal technique, but may also be due to other differences between the trials. Future larger RCTs are required to further assess the success and complication rates of endonasal and external DCR. Different techniques of endonasal DCR should also be assessed, as the choice of endonasal technique can influence the outcome. Strict outcome criteria should be adopted to assess functional and anatomical outcomes with a minimal follow-up of six months.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The Cochrane researchers searched for studies that had been published up to 22 August 2016." } ]
query-laysum
23355
[ { "role": "user", "content": "Abstract: Background\nNeonatal abstinence syndrome (NAS) due to opioid withdrawal may result in disruption of the mother-infant relationship, sleep-wake abnormalities, feeding difficulties, weight loss, seizures and neurodevelopmental problems.\nObjectives\nTo assess the effectiveness and safety of using an opioid for treatment of NAS due to withdrawal from opioids in newborn infants.\nSearch methods\nWe ran an updated search on 17 September 2020 in CENTRAL via Cochrane Register of Studies Web and MEDLINE via Ovid. We also searched clinical trials databases, conference proceedings and the reference lists of retrieved articles for eligible trials.\nSelection criteria\nWe included randomised controlled trials (RCTs), quasi- and cluster-RCTs which enrolled infants born to mothers with opioid dependence and who were experiencing NAS requiring treatment with an opioid.\nData collection and analysis\nThree review authors independently assessed trial eligibility and risk of bias, and independently extracted data. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included 16 trials (1110 infants) with NAS secondary to maternal opioid use in pregnancy. Seven studies at low risk of bias were included in sensitivity analysis.\nOpioid versus no treatment / usual care: a single trial (80 infants) of morphine and supportive care versus supportive care alone reported no difference in treatment failure (risk ratio (RR) 1.29, 95% confidence interval (CI) 0.41 to 4.07; very low certainty evidence). No infant had a seizure. The trial did not report mortality, neurodevelopmental disability and adverse events. Morphine increased days hospitalisation (mean difference (MD) 15.00, 95% CI 8.86 to 21.14; very low certainty evidence) and treatment (MD 12.50, 95% CI 7.52 to 17.48; very low certainty evidence), but decreased days to regain birthweight (MD -2.80, 95% CI -5.33 to -0.27) and duration (minutes) of supportive care each day (MD -197.20, 95% CI -274.15 to -120.25).\nMorphine versus methadone: there was no difference in treatment failure (RR 1.59, 95% CI 0.95 to 2.67; 2 studies, 147 infants; low certainty evidence). Seizures, neonatal or infant mortality and neurodevelopmental disability were not reported. A single study reported no difference in days hospitalisation (MD 1.40, 95% CI -3.08 to 5.88; 116 infants; low certainty evidence), whereas data from two studies found an increase in days treatment (MD 2.71, 95% CI 0.22 to 5.21; 147 infants; low certainty) for infants treated with morphine. A single study reported no difference in breastfeeding, adverse events, or out of home placement.\nMorphine versus sublingual buprenorphine: there was no difference in treatment failure (RR 0.79, 95% CI 0.36 to 1.74; 3 studies, 113 infants; very low certainty evidence). Neonatal or infant mortality and neurodevelopmental disability were not reported. There was moderate certainty evidence of an increase in days hospitalisation (MD 11.45, 95% CI 5.89 to 17.01; 3 studies, 113 infants), and days treatment (MD 12.79, 95% CI 7.57 to 18.00; 3 studies, 112 infants) for infants treated with morphine. 
A single adverse event (seizure) was reported in infants exposed to buprenorphine.\nMorphine versus diluted tincture of opium (DTO): a single study (33 infants) reported no difference in days hospitalisation, days treatment or weight gain (low certainty evidence).\nOpioid versus clonidine: a single study (31 infants) reported no infant with treatment failure in either group. This study did not report seizures, neonatal or infant mortality and neurodevelopmental disability. There was low certainty evidence for no difference in days hospitalisation or days treatment. This study did not report adverse events.\nOpioid versus diazepam: there was a reduction in treatment failure from use of an opioid (RR 0.43, 95% CI 0.23 to 0.80; 2 studies, 86 infants; low certainty evidence). Seizures, neonatal or infant mortality and neurodevelopmental disability were not reported. A single study of 34 infants comparing methadone versus diazepam reported no difference in days hospitalisation or days treatment (very low certainty evidence). Adverse events were not reported.\nOpioid versus phenobarbital: there was a reduction in treatment failure from use of an opioid (RR 0.51, 95% CI 0.35 to 0.74; 6 studies, 458 infants; moderate certainty evidence). Subgroup analysis found a reduction in treatment failure in trials titrating morphine to ≧ 0.5 mg/kg/day (RR 0.21, 95% CI 0.10 to 0.45; 3 studies, 230 infants), whereas a single study using morphine < 0.5 mg/kg/day reported no difference compared to use of phenobarbital (subgroup difference P = 0.05). Neonatal or infant mortality and neurodevelopmental disability were not reported. A single study (111 infants) of paregoric versus phenobarbital reported seven infants with seizures in the phenobarbital group, whereas no seizures were reported in two studies (170 infants) comparing morphine to phenobarbital. There was no difference in days hospitalisation or days treatment. A single study (96 infants) reported no adverse events in either group.\nOpioid versus chlorpromazine: there was a reduction in treatment failure from use of morphine versus chlorpromazine (RR 0.08, 95% CI 0.01 to 0.62; 1 study, 90 infants; moderate certainty evidence). No seizures were reported in either group. There was low certainty evidence for no difference in days treatment. This trial reported no adverse events in either group.\nNone of the included studies reported time to control of NAS. Data for duration and severity of NAS were limited, and we were unable to use these data in quantitative synthesis.\nAuthors' conclusions\nCompared to supportive care alone, the addition of an opioid may increase duration of hospitalisation and treatment, but may reduce days to regain birthweight and the duration of supportive care each day. Use of an opioid may reduce treatment failure compared to phenobarbital, diazepam or chlorpromazine. Use of an opioid may have little or no effect on duration of hospitalisation or treatment compared to use of phenobarbital, diazepam or chlorpromazine. The type of opioid used may have little or no effect on the treatment failure rate. Use of buprenorphine probably reduces duration of hospitalisation and treatment compared to morphine, but there are no data for time to control NAS with buprenorphine, and insufficient evidence to determine safety. There is insufficient evidence to determine the effectiveness and safety of clonidine.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The evidence in this Cochrane Review is current to September 2020." } ]
query-laysum
23356
[ { "role": "user", "content": "Abstract: Background\nThere is a large volume of health information available, and, if applied in clinical practice, may contribute to effective patient care. Despite an abundance of information, sub-optimal care is common. Many factors influence practitioners' use of health information, and format (electronic or other) may be one such factor.\nObjectives\nTo assess the effects of interventions aimed at improving or increasing healthcare practitioners' use of electronic health information (EHI) on professional practice and patient outcomes.\nSearch methods\nWe searched The Cochrane Library (Wiley), MEDLINE (Ovid), EMBASE (Ovid), CINAHL (EBSCO), and LISA (EBSCO) up to November 2013. We contacted researchers in the field and scanned reference lists of relevant articles.\nSelection criteria\nWe included studies that evaluated the effects of interventions to improve or increase the use of EHI by healthcare practitioners on professional practice and patient outcomes. We defined EHI as information accessed on a computer. We defined 'use' as logging into EHI. We considered any healthcare practitioner involved in patient care. We included randomized, non-randomized, and cluster randomized controlled trials (RCTs, NRCTs, CRCTs), controlled clinical trials (CCTs), interrupted time series (ITS), and controlled before-and-after studies (CBAs).The comparisons were: electronic versus printed health information; EHI on different electronic devices (e.g. desktop, laptop or tablet computers, etc.; cell / mobile phones); EHI via different user interfaces; EHI provided with or without an educational or training component; and EHI compared to no other type or source of information.\nData collection and analysis\nTwo review authors independently extracted data and assessed the risk of bias for each study. We used GRADE to assess the quality of the included studies. We reassessed previously excluded studies following our decision to define logins to EHI as a measure of professional behavior. We reported results in natural units. When possible, we calculated and reported median effect size (odds ratio (OR), interquartile ranges (IQR)). Due to high heterogeneity across studies, meta-analysis was not feasible.\nMain results\nWe included two RCTs and four CRCTs involving 352 physicians, 48 residents, and 135 allied health practitioners. Overall risk of bias was low as was quality of the evidence. One comparison was supported by three studies and three comparisons were supported by single studies, but outcomes across the three studies were highly heterogeneous. We found no studies to support EHI versus no alternative. Given these factors, it was not possible to determine the relative effectiveness of interventions. All studies reported practitioner use of EHI, two reported on compliance with electronic practice guidelines, and none reported on patient outcomes.\nOne trial (139 participants) measured guideline adherence for an electronic versus printed guideline, but reported no difference between groups (median OR 0.85, IQR 0.74 to 1.08). One small cross-over trial (10 participants) reported increased use of clinical guidelines when provided with a mobile versus stationary, desktop computer (mean use per shift: intervention group (IG) 3.6, standard deviation (SD) 1.7 vs. control group (CG) 2.0 (SD 1.9), P value = 0.033). 
One cross-over trial (203 participants) reported that using a customized versus a generic interface had little impact on practitioners' use of EHI (mean difference in adjusted end-of-study rate: 0.77 logins/month/user, 95% confidence interval (CI) CI 0.43 to 1.11). Three trials included education or training and reported increased use of EHI by practitioners following training.\nAuthors' conclusions\nThis review provided no evidence that the use of EHI translates into improved clinical practice or patient outcomes, though it does suggest that when practitioners are provided with EHI and education or training, the use of EHI increases. We have defined use as the activity of logging into an EHI resource, but based on our findings use does not automatically translate to the application of EHI in practice. While using EHI may be an important component of evidence-based medicine, alone it is insufficient to improve patient care or clinical practices. For EHI to be applied in patient care, it will be necessary to understand why practitioners' are reluctant to apply EHI when treating people, and to determine the most effective way(s) to reduce this reluctance.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The results of this review showed that when provided with a combination of EHI and training, practitioners used the information more often. Two studies measured doctors' use of electronic treatment guidelines, but showed that the electronic aspect of the guidelines did not mean that doctors followed the guidelines. This review provided no information on whether more frequent use of EHI translated into improved clinical practice or whether patients were better off when doctors or nurses used health information when treating them." } ]
query-laysum
23357
[ { "role": "user", "content": "Abstract: Background\nThis is an update of a review that was first published in 2002. Female sterilisation is the most popular contraceptive method worldwide. Several techniques exist for interrupting the patency of fallopian tubes, including cutting and tying the tubes, damaging the tube using electric current, applying clips or silicone rubber rings, and blocking the tubes with chemicals or tubal inserts.\nObjectives\nTo compare the different tubal occlusion techniques in terms of major and minor morbidity, failure rates (pregnancies), technical failures and difficulties, and women's and surgeons' satisfaction.\nSearch methods\nFor the original review published in 2002 we searched MEDLINE and the Cochrane Central Register of Controlled Trials (CENTRAL). For this 2015 update, we searched POPLINE, LILACS, PubMed and CENTRAL on 23 July 2015. We used the related articles feature of PubMed and searched reference lists of newly identified trials.\nSelection criteria\nAll randomized controlled trials (RCTs) comparing different techniques for tubal sterilisation, irrespective of the route of fallopian tube access or the method of anaesthesia.\nData collection and analysis\nFor the original review, two review authors independently selected studies, extracted data and assessed risk of bias. For this update, data extraction was performed by one author (TL) and checked by another (RK). We grouped trials according to the type of comparison evaluated. Results are reported as odds ratios (OR) or mean differences (MD) using fixed-effect methods, unless heterogeneity was high, in which case we used random-effects methods.\nMain results\nWe included 19 RCTs involving 13,209 women. Most studies concerned interval sterilisation; three RCTs involving 1632 women, concerned postpartum sterilisation. Comparisons included tubal rings versus clips (six RCTs, 4232 women); partial salpingectomy versus electrocoagulation (three RCTs, 2019 women); tubal rings versus electrocoagulation (two RCTs, 599 women); partial salpingectomy versus clips (four RCTs, 3627 women); clips versus electrocoagulation (two RCTs, 206 women); and Hulka versus Filshie clips (two RCTs, 2326 women). RCTs of clips versus electrocoagulation contributed no data to the review.\nOne year after sterilisation, failure rates were low (< 5/1000) for all methods.There were no deaths reported with any method, and major morbidity related to the occlusion technique was rare.\nMinor morbidity was higher with the tubal ring than the clip (Peto OR 2.15, 95% CI 1.22 to 3.78; participants = 842; studies = 2; I² = 0%; high-quality evidence), as were technical failures (Peto OR 3.93, 95% CI 2.43 to 6.35; participants = 3476; studies = 3; I² = 0%; high-quality evidence).\nMajor morbidity was significantly higher with the modified Pomeroy technique than electrocoagulation (Peto OR 2.87, 95% CI 1.13 to 7.25; participants = 1905; studies = 2; I² = 0%; low-quality evidence), as was postoperative pain (Peto OR 3.85, 95% CI 2.91 to 5.10; participants = 1905; studies = 2; I² = 0%; moderate-quality evidence).\nWhen tubal rings were compared with electrocoagulation, postoperative pain was reported significantly more frequently for tubal rings (OR 3.40, 95% CI 1.17 to 9.84; participants = 596; studies = 2; I² = 87%; low-quality evidence).\nWhen partial salpingectomy was compared with clips, there were no major morbidity events in either group (participants = 2198, studies = 1). 
The frequency of minor morbidity was low and not significantly different between groups (Peto OR 7.39, 95% CI 0.46 to 119.01; participants = 193; studies = 1, low-quality evidence). Although technical failure occurred more frequently with clips (Peto OR 0.18, 95% CI 0.08 to 0.40; participants = 2198; studies = 1; moderate-quality evidence); operative time was shorter with clips than partial salpingectomy (MD 4.26 minutes, 95% CI 3.65 to 4.86; participants = 2223; studies = 2; I² = 0%; high-quality evidence).\nWe found little evidence concerning women's or surgeon's satisfaction. No RCTs compared tubal microinserts (hysteroscopic sterilisation) or chemical inserts (quinacrine) to other methods.\nAuthors' conclusions\nTubal sterilisation by partial salpingectomy, electrocoagulation, or using clips or rings, is a safe and effective method of contraception. Failure rates at 12 months post-sterilisation and major morbidity are rare outcomes with any of these techniques. Minor complications and technical failures appear to be more common with rings than clips. Electrocoagulation may be associated with less postoperative pain than the modified Pomeroy or tubal ring methods. Further research should include RCTs (for effectiveness) and controlled observational studies (for adverse effects) on sterilisation by minimally-invasive methods, i.e. tubal inserts and quinacrine.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This is an update of a Cochrane Review that was first published in 2002 and previously updated in 2011.\nTubal sterilisation prevents pregnancy by stopping the woman's unfertilised eggs from passing through the fallopian tubes to be fertilised by sperm. Techniques to close the tubes include cutting and tying them (partial salpingectomy), blocking them mechanically by using clips or rings, or by applying electric current (electrocoagulation) to damage and block them, and blocking them by using chemicals or tubal inserts (inserted via the mouth of the womb) that cause tubal scarring." } ]
query-laysum
23358
[ { "role": "user", "content": "Abstract: Background\nLumbar puncture (LP) is a common invasive procedure, most frequently performed to diagnose infection. Physicians perform LP in newborn infants with the help of an assistant using a strict aseptic technique; it is important to monitor the infant during all the steps of the procedure. Without adequate analgesia, LP can cause considerable pain and discomfort. As newborns have increased sensitivity to pain, it is crucial to adequately manage the procedural pain of LP in this population.\nObjectives\nTo assess the benefits and harms, including pain, discomfort, and success rate, of any pharmacological intervention during lumbar puncture in newborn infants, compared to placebo, no intervention, non-pharmacological interventions, or other pharmacological interventions.\nSearch methods\nWe searched CENTRAL, PubMed, Embase, and three trial registries in December 2022. We also screened the reference lists of included studies and related systematic reviews for studies not identified by the database searches.\nSelection criteria\nWe included randomized controlled trials (RCTs) and quasi-RCTs comparing drugs used for pain management, sedation, or both, during LP. We considered the following drugs suitable for inclusion.\n• Topical anesthetics (e.g. eutectic mixture of local anesthetics [EMLA], lidocaine)\n• Opioids (e.g. morphine, fentanyl)\n• Alpha-2 agonists (e.g. clonidine, dexmedetomidine)\n• N-Methyl-D-aspartate (NMDA) receptor antagonists (e.g. ketamine)\n• Other analgesics (e.g. paracetamol)\n• Sedatives (e.g. benzodiazepines such as midazolam)\nData collection and analysis\nWe used standard Cochrane methods. We used the fixed-effect model with risk ratio (RR) for dichotomous data and mean difference (MD) or standardized mean difference (SMD) for continuous data, with their 95% confidence intervals (CIs). Our main outcomes were successful LP on first attempt, total number of LP attempts, episodes of bradycardia, pain assessed with validated scales, episodes of desaturation, number of episodes of apnea, and number of infants with one or more episodes of apnea. We used the GRADE approach to evaluate the certainty of the evidence.\nMain results\nWe included three studies (two RCTs and one quasi-RCT) that enrolled 206 newborns. One study included only term infants. All studies assessed topical treatment versus placebo or no intervention. The topical anesthetics were lidocaine 4%, lidocaine 1%, and EMLA. 
We identified no completed studies on opioids, non-steroidal anti-inflammatory drugs, alpha-2 agonists, NMDA receptor antagonists, other analgesics, sedatives, or head-to-head comparisons (drug A versus drug B).\nBased on very low-certainty evidence from one quasi-RCT of 100 LPs in 76 infants, we are unsure if topical anesthetics (lidocaine), compared to no anesthesia, has an effect on the following outcomes.\n• Successful LP on first attempt (first-attempts success in 48% of LPs in the lidocaine group and 42% of LPs in the control group)\n• Number of attempts per LP (mean 1.9 attempts, [standard error of the mean 0.2] in the lidocaine group, and mean 2.1 attempts [standard error of the mean 2.1] in the control group)\n• Episodes of bradycardia (0% of LPs in the lidocaine group and 4% of LPs in the control group)\n• Episodes of desaturation (0% of LPs in the lidocaine group and 8% of LPs in the control group)\n• Occurrence of apnea (RR 3.24, 95% CI 0.14 to 77.79; risk difference [RD] 0.02, 95% CI −0.03 to 0.08).\nTopical anesthetics compared to placebo may reduce pain assessed with the Neonatal Facial Coding System (NFCS) score (SMD −1.00 standard deviation (SD), 95% CI −1.47 to −0.53; I² = 98%; 2 RCTs, 112 infants; low-certainty evidence). No studies in this comparison reported total number of episodes of apnea.\nWe identified three ongoing studies, which will assess the effects of EMLA, lidocaine, and fentanyl. Three studies are awaiting classification.\nAuthors' conclusions\nThe evidence is very uncertain about the effect of topical anesthetics (lidocaine) compared to no anesthesia on successful lumbar puncture on first attempt, the number of attempts per lumbar puncture, episodes of bradycardia, episodes of desaturation, and occurrence of apnea. Compared to placebo, topical anesthetics (lidocaine or EMLA) may reduce pain assessed with the NFCS score. One ongoing study will assess the effects of systemic treatment.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched for studies that looked at using certain medicines compared with using other medicines or no medicines in newborn infants undergoing spinal tap. We compared and summarized the results of the studies and rated our confidence in the evidence, based on factors such as study methods and sizes." } ]
query-laysum
23359
[ { "role": "user", "content": "Abstract: Background\nPrinted educational materials are widely used dissemination strategies to improve the quality of healthcare professionals' practice and patient health outcomes. Traditionally they are presented in paper formats such as monographs, publication in peer-reviewed journals and clinical guidelines. This is the fourth update of the review.\nObjectives\nTo assess the effect of printed educational materials (PEMs) on the practice of healthcare professionals and patient health outcomes.\nTo explore the influence of some of the characteristics of the printed educational materials (e.g. source, content, format) on their effect on healthcare professionals' practice and patient health outcomes.\nSearch methods\nWe searched MEDLINE, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), HealthStar, CINAHL, ERIC, CAB Abstracts, Global Health, and EPOC Register from their inception to 6 February 2019. We checked the reference lists of all included studies and relevant systematic reviews.\nSelection criteria\nWe included randomised trials (RTs), controlled before-after studies (CBAs) and interrupted time series studies (ITSs) that evaluated the impact of PEMs on healthcare professionals' practice or patient health outcomes. We included three types of comparisons: (1) PEM versus no intervention, (2) PEM versus single intervention, (3) multifaceted intervention where PEM is included versus multifaceted intervention without PEM. Any objective measure of professional practice (e.g. prescriptions for a particular drug), or patient health outcomes (e.g. blood pressure) were included.\nData collection and analysis\nTwo reviewers undertook data extraction independently. Disagreements were resolved by discussion. For analyses, we grouped the included studies according to study design, type of outcome and type of comparison. For controlled trials, we reported the median effect size for each outcome within each study, the median effect size across outcomes for each study and the median of these effect sizes across studies. Where data were available, we re-analysed the ITS studies by converting all data to a monthly basis and estimating the effect size from the change in the slope of the regression line between before and after implementation of the PEM. We reported median changes in slope for each outcome, for each study, and then across studies. We standardised all changes in slopes by their standard error, allowing comparisons and combination of different outcomes. We categorised each PEM according to potential effects modifiers related to the source of the PEMs, the channel used for their delivery, their content, and their format. We assessed the risks of bias of all the included studies.\nMain results\nWe included 84 studies: 32 RTs, two CBAs and 50 ITS studies. Of the 32 RTs, 19 were cluster RTs that used various units of randomisation, such as practices, health centres, towns, or areas.\nThe majority of the included studies (82/84) compared the effectiveness of PEMs to no intervention. Based on the RTs that provided moderate-certainty evidence, we found that PEMs distributed to healthcare professionals probably improve their practice, as measured with dichotomous variables, compared to no intervention (median absolute risk difference (ARD): 0.04; interquartile range (IQR): 0.01 to 0.09; 3,963 healthcare professionals randomised within 3073 units). 
We could not confirm this finding using the evidence gathered from continuous variables (standardised mean difference (SMD): 0.11; IQR: -0.16 to 0.52; 1631 healthcare professionals randomised within 1373 units ), from the ITS studies (standardised median change in slope = 0.69; 35 studies), or from the CBA study because the certainty of this evidence was very low. We also found, based on RTs that provided moderate-certainty evidence, that PEMs distributed to healthcare professionals probably make little or no difference to patient health as measured using dichotomous variables, compared to no intervention (ARD: 0.02; IQR: -0.005 to 0.09; 935,015 patients randomised within 959 units). The evidence gathered from continuous variables (SMD: 0.05; IQR: -0.12 to 0.09; 6,737 patients randomised within 594 units) or from ITS study results (standardised median change in slope = 1.12; 8 studies) do not strengthen these findings because the certainty of this evidence was very low.\nTwo studies (a randomised trial and a CBA) compared a paper-based version to a computerised version of the same PEM. From the RT that provided evidence of low certainty, we found that PEM in computerised versions may make little or no difference to professionals' practice compared to PEM in printed versions (ARD: -0.02; IQR: -0.03 to 0.00; 139 healthcare professionals randomised individually). This finding was not strengthened by the CBA study that provided very low certainty evidence (SMD: 0.44; 32 healthcare professionals).\nThe data gathered did not allow us to conclude which PEM characteristics influenced their effectiveness.\nThe methodological quality of the included studies was variable. Half of the included RTs were at risk of selection bias. Most of the ITS studies were conducted retrospectively, without prespecifying the expected effect of the intervention, or acknowledging the presence of a secular trend.\nAuthors' conclusions\nThe results of this review suggest that, when used alone and compared to no intervention, PEMs may slightly improve healthcare professionals' practice outcomes and patient health outcomes. The effectiveness of PEMs compared to other interventions, or of PEMs as part of a multifaceted intervention, is uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The results of this review indicate that printed educational materials probably improve the practice of healthcare professionals and probably make little or no difference to patient health. The results also suggest that computerised versions may make little or no difference to healthcare professionals' practice compared to printed versions of the same printed educational material. Further research with rigorous methodology is likely to have an important impact on our confidence in these estimates of effect, and may change the estimate." } ]
query-laysum
23360
[ { "role": "user", "content": "Abstract: Background\nThe prevalence and incidence of pain and skeletal complications of metastatic bone disease such as pathologic fractures, spinal cord compression and hypercalcemia is high and an important contributor to morbidity, poor performance status and decreased quality of life. Moreover, pathologic fractures are associated with increased risk of death in people with disseminated malignancies. Therefore, prevention of pain and fractures are important goals in men with prostate cancer at risk for skeletal complications.\nObjectives\nTo assess the effects of bisphosphonates in men with bone metastases from prostate cancer.\nSearch methods\nWe identified studies by electronic search of bibliographic databases including the Cochrane Controlled Trials Register and MEDLINE on 13 July 2017 and trial registries. We handsearched the Proceedings of American Society of Clinical Oncology (to July 2017) and reference lists of all eligible trials identified. This is an update of a review last published in 2006.\nSelection criteria\nWe included randomized controlled studies comparing the effectiveness of bisphosphonates in men with bone metastases from prostate cancer.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of trials. We defined the proportion of participants with pain response as the primary end point; secondary outcomes were skeletal-related events, mortality, quality of life, adverse events, analgesic consumption and disease progression. We assessed the quality of the evidence for the main outcomes using the GRADE approach.\nMain results\nWe included 18 trials reporting on 4843 participants comparing the effect of bisphosphonate administration to control regimens.\nPrimary outcome: there was no clear difference in the proportion of participants with pain response (RR 1.15, 95% CI 0.93 to 1.43; P = 0.20; I2 = 0%; 3 trials; 876 participants; low quality evidence). In absolute terms, bisphosphonates resulted in a pain response in 40 more participants per 1000 (19 fewer to 114 more).\nSecondary outcomes: bisphosphonates probably reduced the incidence of skeletal-related events in participants with prostate cancer metastatic to bone (RR 0.87, 95% CI 0.81 to 0.94; P = 0.27; I2 = 19%; 9 trials; 3153 participants; moderate quality evidence). In absolute terms, bisphosphonates resulted in 58 fewer SREs per 1000 (85 fewer to 27 fewer).\nWe found no clinically relevant differences in mortality (RR 0.97, 95% CI 0.91 to 1.04; P = 0.43; I2 = 1%; 9 trials; 2450 participants; moderate quality evidence). In absolute terms, bisphosphonates resulted in 16 fewer deaths per 1000 (47 fewer to 21 more).\nOutcome definition of quality of life and the measurement tools varied greatly across trials and we were unable to extract any quantitative data for meta-analysis.\nBisphosphonates probably increased the number of participants affected by nausea (RR 1.19, 95% CI 1.00 to 1.41; P = 0.05; I2 = 0%; 9 trials; 3008 participants; moderate quality evidence). In absolute terms, bisphosphonates resulted in seven more cases of nausea per 1000 (0 fewer to 14 more). Bisphosphonates probably increased the number of renal adverse events (RR 1.65, 95% CI 1.11 to 2.46; P = 0.01; I2 = 0%; 7 trials; 1794 participants; moderate quality evidence). In absolute terms, bisphosphonates resulted in 22 more renal adverse events per 1000 (4 more to 50 more). 
We found no clear difference in the number of participants with osteonecrosis of the jaw between groups (RR 1.92, 95% CI 0.75 to 4.90; P = 0.17; I2 = 0%; 5 trials; 1626 participants; very low quality evidence). In absolute terms, bisphosphonates resulted in seven more cases with osteonecrosis of the jaw per 1000 (2 fewer to 29 more). We observed no clinically relevant difference in the proportion of participants with decreased analgesic consumption (RR 1.19, 95% CI 0.87 to 1.63; P = 0.28; I2 = 37%; 4 trials; 416 participants). Statistical analysis revealed that bisphosphonates probably reduced the number of participants with disease progression (RR 0.94, 95% CI 0.90 to 0.98; P = 0.006; I2 = 0%; 7 trials; 2115 participants; moderate quality evidence). In absolute terms, bisphosphonates resulted in 36 fewer cases of disease progression per 1000 (71 fewer to 7 fewer).\nFindings of our predefined subgroup and sensitivity analyses were no different from those of the primary analyses.\nAuthors' conclusions\nBased on low quality evidence, there may be no clinically relevant difference in the proportion of men with pain response between bisphosphonates and control regimens in men with bone metastases from prostate cancer. Bisphosphonates probably decrease the number of skeletal-related events and disease progression. These benefits need to be weighed against the increased risk of renal impairment and nausea in men receiving bisphosphonates. Future studies should explicitly evaluate patient important outcomes such as quality of life and pain by using standardized and comparable assessment tools.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched medical databases to 13 July 2017. Two review authors independently screened, summarized and analyzed the findings. This led to the inclusion of 18 clinical trials." } ]
query-laysum
23361
[ { "role": "user", "content": "Abstract: Background\nHepatitis C virus (HCV) infection is common in chronic kidney disease (CKD) patients on dialysis, causes chronic liver disease, may increase the risk of death, and impacts kidney transplant outcomes. Direct-acting antivirals have replaced interferons because of better efficacy and tolerability. This is an update of a review first published in 2015.\nObjectives\nWe aimed to look at the benefits and harms of interventions for HCV in CKD patients on dialysis: death, disease relapse, treatment response/discontinuation, time to recovery, quality of life (QoL), cost-effectiveness, and adverse events. We aimed to study comparisons of available interventions, compared with placebo, control, with each other and with newer treatments.\nSearch methods\nWe searched the Cochrane Kidney and Transplant's Specialised Register to 23 February 2023 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE and EMBASE, handsearching conference proceedings, and searching the International Clinical Trials Register Portal (ICTRP) and ClinicalTrials.gov.\nSelection criteria\nRandomised controlled trials (RCTs), quasi-RCTs, first period of randomised cross-over studies on interventions for HCV in CKD on dialysis were considered.\nData collection and analysis\nSummary estimates of effect were obtained using a random-effects model, and results were expressed as risk ratios (RR) and their 95% confidence intervals (CI). Confidence in the evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nThree studies were included in this update, therefore 13 studies (997 randomised participants) met our inclusion criteria. Overall, the risk of bias was judged low in seven studies, unclear in four, low to unclear in one, and high in one study. 
Interventions included standard interferon, pegylated (PEG) interferon, standard or PEG interferon plus ribavirin; direct-acting antivirals, and direct-acting antivirals plus PEG interferon plus ribavirin.\nCompared to placebo or control, standard interferon may make little or no difference to death (5 studies, 134 participants: RR 0.89, 95% CI 0.06 to 13.23) or relapse (low certainty evidence), probably improves end-of-treatment response (ETR) (5 studies, 132 participants: RR 8.62, 95% CI 3.03 to 24.55; I² = 0%) (moderate certainty evidence), and probably makes little or no difference to sustained virological response (SVR) (4 studies, 98 participants: RR 3.25, 95% CI 0.81 to 13.07; I² = 53%), treatment discontinuation (4 studies, 116 participants: RR 4.59, 95% CI 0.49 to 42.69; I² = 63%), and adverse events (5 studies, 143 participants: RR 3.56, 95% CI 0.98 to 13.01; I² = 25%) (moderate certainty evidence).\nIn low certainty evidence, PEG interferon (1 study, 50 participants) may improve ETR (RR 1.53, 95% CI 1.09 to 2.15) but may make little or no difference to death (RR 0.33, 95% CI 0.01 to 7.81), SVR (RR 2.40, 95% CI 0.99 to 5.81), treatment discontinuation (RR 0.11, 95% CI 0.01 to 1.96), adverse events (RR 0.11, 95% CI 0.01 to 1.96) and relapses (21/38 relapsed) (RR 0.72, 95% CI 0.41 to 1.25) compared to standard interferon.\nIn moderate certainty evidence, high-dose PEG interferon (alpha-2a and alpha-2b) may make little or no difference to death (2 studies, 97 participants: RR 4.30, 95% CI 0.76 to 24.33; I² = 0%), ETR (RR 1.42, 95% CI 0.51 to 3.90; I² = 20%), SVR (RR 1.19, 95% CI 0.68 to 2.07; I² = 0%), treatment discontinuation (RR 1.20, 95% CI 0.63 to 2.28; I² = 0%) or adverse events (RR 1.05, 95% CI 0.61 to 1.83; I² = 27%) compared to low-dose PEG interferon. High-dose PEG interferon may make little or no difference to relapses (1 study, 43 participants: RR 1.11, 95% CI 0.45 to 2.77; low certainty evidence). There were no significant subgroup differences.\nStandard interferon plus ribavirin may lead to higher treatment discontinuation (1 study, 52 participants: RR 2.97, 95% CI 1.19 to 7.36; low certainty evidence) compared to standard interferon alone.\nIn low certainty evidence, PEG interferon plus ribavirin (1 study, 377 participants) may improve SVR (RR 1.80, 95% CI 1.46 to 2.21), reduce relapses (RR 0.33, 95% CI 0.23 to 0.48), slightly increase the number with adverse events (RR 1.10, 95% CI 1.01 to 1.19), and may make little or no difference to ETR (RR 1.01, 95% CI 0.94 to 1.09) compared to PEG interferon alone. The evidence is very uncertain about the effect of PEG interferon plus ribavirin on treatment discontinuation (RR 1.71, 95% CI 0.69 to 4.24) compared to PEG interferon alone.\nOne study reported grazoprevir plus elbasvir improved ETR (173 participants: RR 174.99, 95% CI 11.03 to 2775.78; low certainty evidence) compared to placebo.\nIt is uncertain whether telaprevir plus ribavirin (high versus low initial dose) plus PEG interferon for 24 versus 48 weeks (1 study, 35 participants) improves ETR (RR 1.02, 95% CI 0.67 to 1.56) or SVR (RR 1.02, 95% CI 0.67 to 1.56) because the certainty of the evidence is very low.\nData on QoL, cost-effectiveness, cardiovascular outcomes and peritoneal dialysis were not available.\nAuthors' conclusions\nIn dialysis patients with HCV infection grazoprevir plus elbasvir probably improves ETR. There is no difference in ETR or SVR for combinations of telaprevir, ribavirin and PEG interferon given for different durations and doses. 
Though no longer in use, PEG interferon was more effective than standard interferon for ETR but not SVR. Increasing doses of PEG interferon did not improve responses. The addition of ribavirin to PEG interferon may result in fewer relapses, higher SVR, and higher numbers with adverse events.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Since the publication of our previous review in 2015, newer medicines (direct-acting antivirals) for the treatment of HCV infection have become available, therefore, we have now updated the evidence to include the efficacy of direct-acting antivirals. This update searched for new evidence from randomised controlled studies for the treatment of HCV in dialysis patients." } ]
query-laysum
23362
[ { "role": "user", "content": "Abstract: Background\nPostoperative nausea and vomiting (PONV) are common complications following surgery and anaesthesia. Antiemetic drugs are only partially effective in preventing PONV. An alternative approach is to stimulate the PC6 acupoint on the wrist. This is an update of a Cochrane review first published in 2004, updated in 2009 and now in 2015.\nObjectives\nTo determine the effectiveness and safety of PC6 acupoint stimulation with or without antiemetic drug versus sham or antiemetic drug for the prevention of PONV in people undergoing surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library, Issue 12, 2014), MEDLINE (January 2008 to December 2014), EMBASE (January 2008 to December 2014), ISI Web of Science (January 2008 to December 2014), World Health Organization Clinical Trials Registry, ClinicalTrials.gov, and reference lists of articles to identify additional studies. We applied no language restrictions.\nSelection criteria\nAll randomized trials of techniques that stimulated the PC6 acupoint compared with sham treatment or drug therapy, or combined PC6 acupoint and drug therapy compared to drug therapy, for the prevention of PONV. Interventions used in these trials included acupuncture, electro-acupuncture, transcutaneous electrical acupoint stimulation, transcutaneous nerve stimulation, laser stimulation, capsicum plaster, acu-stimulation device, and acupressure in people undergoing surgery. Primary outcomes were the incidences of nausea and vomiting after surgery. Secondary outcomes were the need for rescue antiemetic therapy and adverse effects.\nData collection and analysis\nTwo review authors independently extracted the data and assessed the risk of bias domains for each trial. We used a random-effects model and reported risk ratio (RR) with associated 95% confidence interval (95% CI). We used trial sequential analyses to help provide information on when we had reached firm evidence in cumulative meta-analyses of the primary outcomes, based on a 30% risk ratio reduction in PONV.\nMain results\nWe included 59 trials involving 7667 participants. We rated two trials at low risk of bias in all domains (selection, attrition, reporting, blinding and other). We rated 25 trials at high risk in one or more risk-of-bias domains. Compared with sham treatment, PC6 acupoint stimulation significantly reduced the incidence of nausea (RR 0.68, 95% CI 0.60 to 0.77; 40 trials, 4742 participants), vomiting (RR 0.60, 95% CI 0.51 to 0.71; 45 trials, 5147 participants) and the need for rescue antiemetics (RR 0.64, 95% CI 0.55 to 0.73; 39 trials, 4622 participants). As heterogeneity among trials was substantial and there were study limitations, we rated the quality of evidence as low. Using trial sequential analysis, the required information size and boundary for benefit were reached for both primary outcomes.\nPC6 acupoint stimulation was compared with six different types of antiemetic drugs (metoclopramide, cyclizine, prochlorperazine, droperidol, ondansetron and dexamethasone). There was no difference between PC6 acupoint stimulation and antiemetic drugs in the incidence of nausea (RR 0.91, 95% CI 0.75 to 1.10; 14 trials, 1332 participants), vomiting (RR 0.93, 95% CI 0.74 to 1.17; 19 trials, 1708 participants), or the need for rescue antiemetics (RR 0.87, 95% CI 0.65 to 1.16; 9 trials, 895 participants). We rated the quality of evidence as moderate, due to the study limitations.
Using trial sequential analyses, the futility boundary was crossed before the required information size was surpassed for both primary outcomes.\nCompared to antiemetic drugs, the combination of PC6 acupoint stimulation and antiemetic therapy reduced the incidence of vomiting (RR 0.56, 95% CI 0.35 to 0.91; 9 trials, 687 participants) but not nausea (RR 0.79, 95% CI 0.55 to 1.13; 8 trials, 642 participants). We rated the quality of evidence as very low, due to substantial heterogeneity among trials, study limitations and imprecision. Using trial sequential analysis, none of the boundaries for benefit, harm or futility were crossed for PONV. The need for rescue antiemetic was lower in the combination PC6 acupoint stimulation and antiemetic group than the antiemetic group (RR 0.61, 95% CI 0.44 to 0.86; 5 trials, 419 participants).\nThe side effects associated with PC6 acupoint stimulation were minor, transient and self-limiting (e.g. skin irritation, blistering, redness and pain) in 14 trials. Publication bias was not apparent in the contour-enhanced funnel plots.\nAuthors' conclusions\nThere is low-quality evidence supporting the use of PC6 acupoint stimulation over sham. Compared to the last update in 2009, no further sham comparison trials are needed. We found that there is moderate-quality evidence showing no difference between PC6 acupoint stimulation and antiemetic drugs to prevent PONV. Further PC6 acupoint stimulation versus antiemetic trials are futile in showing a significant difference, which is a new finding in this update. There is inconclusive evidence supporting the use of a combined strategy of PC6 acupoint stimulation and antiemetic drug over drug prophylaxis, and further high-quality trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Does a review of the evidence support the use of wrist PC6 acupuncture point stimulation (PC6 acupoint) as effective in reducing nausea and vomiting after surgery (PONV), compared to sham (dummy acupoint stimulation) or antiemetics (drugs that relieve nausea and vomiting) in people undergoing surgery? This review updates the evidence published in 2009, and is current to December 2014." } ]
query-laysum
23363
[ { "role": "user", "content": "Abstract: Background\nPulmonary arteriovenous malformations are abnormal direct connections between the pulmonary artery and pulmonary vein which result in a right-to-left shunt. They are associated with substantial morbidity and mortality mainly from the effects of paradoxical emboli. Potential complications include stroke, cerebral abscess, pulmonary haemorrhage and hypoxaemia. Embolisation is an endovascular intervention based on the occlusion of the feeding arteries of the pulmonary arteriovenous malformations, thus eliminating the abnormal right-to-left shunting. This is an update of a previously published review.\nObjectives\nTo determine the efficacy and safety of embolisation in patients with pulmonary arteriovenous malformations, including a comparison with surgical resection and different embolisation devices.\nSearch methods\nWe searched the Cystic Fibrosis and Genetic Disorders Group's Trials Register; date of last search: 10 April 2017.\nWe also searched the following databases: the Australian New Zealand Clinical Trials Registry; ClinicalTrials.gov; International Standard Randomised Controlled Trial Number Register; International Clinical Trials Registry Platform Search Portal (last searched 27 August 2017).\nWe checked cross-references and searched references from review articles.\nSelection criteria\nTrials in which individuals with pulmonary arteriovenous malformations were randomly allocated to embolisation compared to no treatment, surgical resection or embolisation using a different embolisation device.\nData collection and analysis\nStudies identified for potential inclusion were independently assessed for eligibility by two authors, with excluded studies further checked by a third author. No trials were identified for inclusion in the review and hence no analysis was performed.\nMain results\nThere were no randomised controlled trials included in the review; one ongoing trial has been identified which may be eligible for inclusion in the future.\nAuthors' conclusions\nThere is no evidence from randomised controlled trials for embolisation of pulmonary arteriovenous malformations. However, randomised controlled trials are not always feasible on ethical grounds. Accumulated data from observational studies suggest that embolisation is a safe procedure which reduces morbidity and mortality. A standardised approach to reporting with long-term follow-up through registry studies can help to strengthen the evidence for embolisation in the absence of randomised controlled trials.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Pulmonary arteriovenous malformations are abnormal connections between arteries and veins in the lung which can cause serious complications such as stroke, brain abscess, bleeding in the lung and poor oxygenation. Embolisation is the mainstream treatment for pulmonary arteriovenous malformations. During the procedure, balloon or coil embolisation devices (or both combined) are used to block the feeding artery or arteries to the malformation. These malformations can often be small, multiple and widely spread out, so not all of them are suitable for embolisation treatment.\nThis is an update of a previously published review." } ]
query-laysum
23364
[ { "role": "user", "content": "Abstract: Background\nCrohn’s disease (CD) is a chronic inflammatory disorder that can involve any part of the gastrointestinal tract. 5-Aminosalicylates (5-ASAs) are locally acting, anti-inflammatory compounds that reduce inflammation of the colonic mucosa with release profiles that vary among various commercially available formulations. This updated Cochrane review summarizes current evidence on the use of 5-ASA formulations for maintenance of surgically-induced remission in CD.\nObjectives\nTo assess the efficacy and safety of 5-ASA agents for the maintenance of surgically-induced remission in CD.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, the Cochrane IBD Group Specialized Register from inception to 16 July 2018. We also searched references, conference abstracts, and trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) that included participants with CD in remission following surgery and compared 5-ASAs to no treatment, placebo or any other active intervention with duration of at least three months were considered for inclusion.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. The primary outcome was clinical relapse. Secondary outcomes included endoscopic recurrence, radiologic and surgical relapse, adverse events, serious adverse events and withdrawal due to adverse events.\nMain results\nFourteen RCTs (1867 participants) were included in the review. Participants (15 to 70 years) were recruited from gastroenterology hospitals and medical clinics in Europe and North America and followed up between 3 and 72 months. The risk of bias was assessed as 'low' in one study, 'unclear' in seven and as 'high' in six.\nAt 12 months, 36% (20/55) of participants in the 5-ASA group experienced clinical relapse compared to 51% (28/55) in the no treatment control group (RR 0.71, 95% CI 0.46 to 1.10; low certainty evidence). Moderate certainty evidence suggests that 5-ASAs are more effective for preventing clinical relapse than placebo. During a follow-up period of 12 to 72 months, 36% (131/361) of 5-ASA participants relapsed compared to 43% (160/369) of placebo participants (RR 0.83, 95% CI 0.72 to 0.96; I² = 0%; moderate certainty evidence). At 12 months, 17% (17/101) of the 4 g/day mesalamine group relapsed compared to 26% (27/105) of the 2.4 g/day group (RR 0.65, 95% CI 0.38 to 1.13; moderate certainty evidence). There was no evidence of a difference in clinical relapse rates when 5-ASA compounds were compared to purine antimetabolites. At 24 months, 61% (103/170) of mesalamine participants relapsed compared to 67% (119/177) of azathioprine participants (RR 0.90, 95% CI 0.76 to 1.07; I² = 28%; low certainty evidence). During 24 months, 50% (9/18) of 5-ASA participants had clinical relapse compared to 13% (2/16) of adalimumab participants (RR 4.0, 95% CI 1.01 to 15.84; low certainty evidence). The effects of sulphasalazine compared to placebo on clinical relapse rate is uncertain. After 18 to 36 months, 66% (95/143) of participants treated with sulphasalazine relapsed compared to 71% (110/155) in the placebo group (RR 0.88, 95% CI 0.56 to 1.38; I² = 38%; low certainty evidence).\nThe effect of 5-ASA drugs on safety was uncertain. During 24 months follow-up, 4% (2/55) of 5-ASA participants experienced adverse events compared to none (0/55) in the no treatment control group (RR 5.00, 95% CI 0.25 to 101.81; very low certainty evidence). 
An equal proportion of 5-ASA participants (10%; 23/241) and placebo (9%; 20/225) groups experienced an adverse event during a follow-up of 3 to 72 months (RR 1.07, 95% CI 0.60 to 1.91; I² = 0%; low certainty evidence). Adverse event rates were similar in the 5-ASA and purine analogues groups. However, serious adverse events and withdrawals due to adverse events were more common in participants who received purine analogues than 5-ASA. At 52 weeks to 24 months, 52% (107/207) of 5-ASA participants had an adverse event compared to 47% (102/218) of purine analogue participants (RR 1.11, 95% CI 0.97 to 1.27, I² = 0%; low certainty evidence). Four per cent (6/152) of 5-ASA participants had a serious adverse event compared to 17% (27/159) of purine analogue participants (RR 0.30, 95% CI 0.11 to 0.80; very low certainty evidence). Eight per cent (17/207) of 5-ASA participants withdrew due to an adverse event compared to 19% (42/218) of purine analogue participants (RR 0.48, 95% CI 0.28 to 0.83; low certainty evidence). Adverse event rates were similar in high and low dose mesalamine participants. After 12 months, 2% (2/101) of 4 g/day mesalamine participants had an adverse event compared to 2% (2/105) of 2.4 g/day participants (RR 1.04, 95% CI 0.15 to 7.24; low certainty evidence). The proportion of participants who experienced adverse events over a 24 month follow-up in the mesalamine group was 78% (14/18) compared to 69% (11/16) of adalimumab participants (RR 1.13, 95% CI 0.75 to 1.71; very low certainty evidence). None (0/32) of the sulphasalazine participants had an adverse event at 18 months follow-up compared to 3% (1/34) of the placebo group (RR 0.35, 95% CI 0.01 to 8.38; very low certainty evidence). Commonly reported adverse events in the included studies were diarrhoea, nausea, increased liver function tests, pancreatitis, and abdominal pain.\nAuthors' conclusions\n5-ASA preparations are superior to placebo for the maintenance of surgically-induced clinical remission in patients with CD (moderate certainty). The number needed to treat to prevent one relapse was 13 patients. The evidence for endoscopic remission is uncertain. The sulphasalazine class of 5-ASA agents failed to demonstrate superiority against placebo, 5-ASAs failed to demonstrate superiority compared to no treatment (very low and low certainty). The efficacy of two different doses of the same 5-ASA and the efficacy of 5-ASA compared to purine antimetabolites (azathioprine or 6-mercaptopurine) in maintaining surgically-induced remission of CD remains unclear. However, purine analogues lead to more serious adverse events and discontinuation due to adverse events. There is a low certainty that 5-ASA is inferior for maintaining surgically-induced remission of CD compared to biologics (anti TNF-ɑ). 5-ASA formulations appear to be safe with no difference in the occurrence of adverse events or withdrawal when compared with placebo, no treatment or biologics.\n\nGiven the provided abstract, please respond to the following query: \"What is Crohn's disease?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Crohn's is a chronic disease of the gut. Crohn's changes from periods when sufferers have symptoms (relapse) to periods when the symptoms disappear (remission). Symptoms include abdominal pain, diarrhoea and weight loss. People with Crohn's disease may undergo surgery to remove diseased parts of their gut. 
However, their symptoms may return after a short time. Different drugs can be given to maintain remission; however, there are concerns about possible side effects. 5-ASA drugs reduce inflammation (pain and swelling) in the gut. We researched whether 5-ASA can maintain remission in people with Crohn's after the diseased part of their gut has been removed." } ]
query-laysum
23365
[ { "role": "user", "content": "Abstract: Background\nNon-absorbable disaccharides (lactulose and lactitol) are recommended as first-line treatment for hepatic encephalopathy. The previous (second) version of this review included 10 randomised clinical trials (RCTs) evaluating non-absorbable disaccharides versus placebo/no intervention and eight RCTs evaluating lactulose versus lactitol for people with cirrhosis and hepatic encephalopathy. The review found no evidence to either support or refute the use of the non-absorbable disaccharides and no differences between lactulose versus lactitol.\nObjectives\nTo assess the beneficial and harmful effects of i) non-absorbable disaccharides versus placebo/no intervention and ii) lactulose versus lactitol in people with cirrhosis and hepatic encephalopathy.\nSearch methods\nWe carried out electronic searches of the Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 10), MEDLINE, EMBASE, and Science Citation Index Expanded to 19 October 2015; manual searches of meetings and conference proceedings; checks of bibliographies; and correspondence with investigators and pharmaceutical companies.\nSelection criteria\nWe included RCTs, irrespective of publication status, language, or blinding.\nData collection and analysis\nTwo review authors, working independently, retrieved data from published reports and correspondence with investigators. The primary outcomes were mortality, hepatic encephalopathy, and serious adverse events. We presented the results of meta-analyses as risk ratios (RR) and mean differences (MD) with 95% confidence intervals (CI). We assessed the quality of the evidence using 'Grading of Recommendations Assessment Development and Evaluation' (GRADE) and bias control using the Cochrane Hepato-Biliary Group domains. Our analyses included regression analyses of publication bias and other small study effects, Trial Sequential Analyses to detect type 1 and type 2 errors, and subgroup and sensitivity analyses.\nMain results\nWe included 38 RCTs with a total of 1828 participants. Eight RCTs had a low risk of bias in the assessment of mortality. All trials had a high risk of bias in the assessment of the remaining outcomes. Random-effects meta-analysis showed a beneficial effect of non-absorbable disaccharides versus placebo/no intervention on mortality when including all RCTs with extractable data (RR 0.59, 95% CI 0.40 to 0.87; 1487 participants; 24 RCTs; I2 = 0%; moderate quality evidence) and in the eight RCTs with a low risk of bias (RR 0.63, 95% CI 0.41 to 0.97; 705 participants). The Trial Sequential Analysis with the relative risk reduction (RRR) reduced to 30% confirmed the findings when including all RCTs, but not when including only RCTs with a low risk of bias or when we reduced the RRR to 22%. Compared with placebo/no intervention, the non-absorbable disaccharides were associated with beneficial effects on hepatic encephalopathy (RR 0.58, 95% CI 0.50 to 0.69; 1415 participants; 22 RCTs; I2 = 32%; moderate quality evidence). Additional analyses showed that non-absorbable disaccharides can help to reduce serious adverse events associated with the underlying liver disease including liver failure, hepatorenal syndrome, and variceal bleeding (RR 0.47, 95% CI 0.36 to 0.60; 1487 participants; 24 RCTs; I2 = 0%; moderate quality evidence). We confirmed the results in Trial Sequential Analysis. 
Tests for subgroup differences showed no statistical differences between RCTs evaluating prevention, overt, or minimal hepatic encephalopathy. The evaluation of secondary outcomes showed a potential beneficial effect of the non-absorbable disaccharides on quality of life, but we were not able to include the data in an overall meta-analysis (very low quality evidence). Non-absorbable disaccharides were associated with non-serious (mainly gastrointestinal) adverse events (very low quality evidence). None of the RCTs comparing lactulose versus lactitol evaluated quality of life. The review found no differences between lactulose and lactitol for the remaining outcomes (very low quality evidence).\nAuthors' conclusions\nThis review includes a large number of RCTs evaluating the prevention or treatment of hepatic encephalopathy. The analyses found evidence that non-absorbable disaccharides may be associated with a beneficial effect on clinically relevant outcomes compared with placebo/no intervention.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The search date was October 2015." } ]
query-laysum
23366
[ { "role": "user", "content": "Abstract: Background\nPresbyopia occurs when the lens of the eyes loses its elasticity leading to loss of accommodation. The lens may also progress to develop cataract, affecting visual acuity and contrast sensitivity. One option of care for individuals with presbyopia and cataract is the use of multifocal or extended depth of focus intraocular lens (IOL) after cataract surgery. Although trifocal and bifocal IOLs are designed to restore three and two focal points respectively, trifocal lens may be preferable because it restores near, intermediate, and far vision, and may also provide a greater range of useful vision and allow for greater spectacle independence in individuals with presbyopia.\nObjectives\nTo assess the effectiveness and safety of implantation with trifocal versus bifocal IOLs during cataract surgery among people with presbyopia.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2022, Issue 3); Ovid MEDLINE; Embase.com; PubMed; ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 31 March 2022.\nSelection criteria\nWe included randomized controlled trials that compared trifocal and bifocal IOLs among participants 30 years of age or older with presbyopia undergoing cataract surgery.\nData collection and analysis\nWe used standard Cochrane methodology and graded the certainty of the body of evidence according to the GRADE classification.\nMain results\nWe identified seven studies conducted in Europe and Turkey with a total of 331 participants. All included studies assessed visual acuity using a logarithm of the minimum angle of resolution (LogMAR chart). Of them, six (86%) studies assessed uncorrected distance visual acuity (the primary outcome of this review). Some studies also examined our secondary outcomes including uncorrected near, intermediate, and best-corrected distance visual acuity, as well as contrast sensitivity.\nStudy characteristics\nAll participants had bilateral cataracts with no pre-existing ocular pathologies or ocular surgery. Participants' mean age ranged from 55 to 74 years. Three studies reported on gender of participants, and they were mostly women. We assessed all of the included studies as being at unclear risk of bias for most domains. Two studies received financial support from manufacturers of lenses evaluated in this review, and at least one author of another study reported receiving payments for delivering lectures with lens manufacturers.\nFindings\nAll studies compared trifocal versus bifocal IOL implantation on visual acuity outcomes measured on a LogMAR scale. At one year, trifocal IOL showed no evidence of effect on uncorrected distance visual acuity (mean difference (MD) 0.00, 95% confidence interval (CI) −0.04 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence) and uncorrected near visual acuity (MD 0.01, 95% CI −0.04 to 0.06; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). 
Trifocal IOL implantation may improve uncorrected intermediate visual acuity at one year (MD −0.16, 95% CI −0.22 to −0.10; I2 = 0%; 2 studies, 107 participants; low-certainty evidence), but showed no evidence of effect on best-corrected distance visual acuity at one year (MD 0.00, 95% CI −0.03 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). No study reported on contrast sensitivity or quality of life at one-year follow-up. Data from one study at three months suggest that contrast sensitivity did not differ between groups under photopic conditions, but may be worse in the trifocal group in one of the four frequencies under mesopic conditions (MD −0.19, 95% CI −0.33 to −0.05; 1 study; I2 = 0%, 25 participants; low-certainty evidence). One study examined vision-related quality of life using the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25) at six months, and suggested no evidence of a difference between trifocal and bifocal IOLs (MD 1.41, 95% CI −1.78 to 4.60; 1 study, 40 participants; low-certainty evidence).\nAdverse events\nAdverse events reporting varied among studies. Of five studies reporting information on adverse events, two studies observed no intraoperative and postoperative complications or no posterior capsular opacification at six months. One study reported that glare and halos were similar to the preoperative measurements. One study reported that 4 (20%) and 10 (50%) participants had glare complaints at 6 months in trifocal and bifocal group, respectively (risk ratio 0.40, 95% CI 0.15 to 1.07; 40 participants). One study reported that four eyes (11.4%) in the bifocal group and three eyes (7.5%) in the trifocal group developed significant posterior capsular opacification requiring YAG capsulotomy at one year. The certainty of the evidence for adverse events was low.\nAuthors' conclusions\nWe found low-certainty of evidence that compared with bifocal IOL, implantation of trifocal IOL may improve uncorrected intermediate visual acuity at one year. However, there was no evidence of a difference between trifocal and bifocal IOL for uncorrected distance visual acuity, uncorrected near visual acuity, and best-corrected visual acuity at one year. Future research should include the comparison of both trifocal IOL and specific bifocal IOLs that correct intermediate visual acuity to evaluate important outcomes such as contrast sensitivity, quality of life, and vision-related adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"What is presbyopia, and how is it treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Presbyopia is an age-related condition of the lens of the eye that causes a gradual loss of the ability to focus on nearby objects. Further age-related lens changes may lead to loss of clarity of the lens (cataract) causing loss of visual acuity and contrast sensitivity. Lenses with three or two regions (trifocal or bifocal, respectively) are a new technology intended to decrease dependence on eye glasses use after cataract surgery." } ]
query-laysum
23367
[ { "role": "user", "content": "Abstract: Background\nPain is the hallmark of sickle cell disease (SCD) and it can be severe, frequent and unpredictable. Although nociceptive pain is more common, at times, people with SCD may have neuropathic pain. The latter can occur due to peripheral or central nerve injury. This review is focused on identifying treatment of only painful sensory neuropathy in people with SCD.\nObjectives\nTo determine the effectiveness and safety of any pharmacological or non-pharmacological therapies for treating neuropathic pain in people with SCD.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. We also searched trial registries, the reference lists of relevant articles and reviews and contacted experts in the field.\nDate of last search: 31 January 2019.\nSelection criteria\nRandomised controlled trials (RCTs) (parallel or cross-over in design), quasi-RCTs of pharmacological or non-pharmacological therapies for treating neuropathic pain in people with SCD compared to placebo or another intervention in any category (i.e. pharmacological or non-pharmacological).\nData collection and analysis\nTwo review authors independently assessed all trials identified by the searches and extracted relevant data. Two authors independently assessed the risk of bias in the selected trials using the Cochrane risk of bias tool. Two review authors independently rated the quality of the evidence for each outcome using the GRADE guidelines.\nMain results\nOne RCT of 22 participants with SCD, conducted in the USA was included in this review. Participants were randomly assigned to either pregabalin (n = 11) or placebo (n = 11). Oral pregabalin was administered at an initial dose of 75 mg twice daily. The drug was titrated at increments of 75 mg to a maximum of 600 mg daily or decreased by 75 mg per day if necessary, based on clinical presentation and pain level. Neuropathic pain was assessed using self-reports on the Leeds Assessment of Neuropathic Symptoms and Signs (S-LANNS) scale and the Neuropathic Pain Symptom Inventory (NPSI), where higher scores were indicative of more pain. Outcomes included self-reported pain, quality of life and withdrawal due to adverse effects measured at baseline and monthly for three months post-intervention. The overall risk of bias was low with a high risk of bias due to attrition.\nIn relation to this reviews primary outcomes, for self-reported neuropathic pain relief, given the paucity of data, we are very uncertain whether there is a difference between the pregabalin and placebo groups at the end of three months as measured by the S-LANSS scale, mean difference (MD) -2.00 (95% confidence interval (CI) -9.18 to 5.18), or the NPSI scale, MD -11.10 (95% CI -33.97 to 11.77) (very low-quality evidence). There was no report of 'Patient Global Impression of Change' in the included trial.\nAlthough the mean quality of life scores (Short Form-36) at three months showed small increases in seven of the eight domains post-intervention in the pregabalin group as compared to the placebo group, this was very low-quality evidence and we are very uncertain whether pregabalin increases quality of life. 
Neither of our pre-defined outcomes of 'time to improvement of symptoms' or 'changes in sleep quality' was measured in the included trial.\nWhile treatment-related adverse effects appeared higher in the pregabalin group than in the placebo group at three months, this was very low-quality evidence and we are very uncertain whether there is a difference, RR 1.33 (95% CI 0.39 to 4.62) (very low-quality evidence). There was one withdrawal for adverse effects in the pregabalin group while three people withdrew or dropped out from the placebo group due to adverse effects and complications and hospitalisation related to SCD.\nAuthors' conclusions\nThe included trial provided very low-quality evidence. Self-reported pain relief was greater in the pregabalin group compared to the placebo control group but only using the S-LANSS scale, and we are very unsure whether there is a difference. While the pregabalin group tended to have improved quality of life over the duration of the trial, this was very low-quality evidence and we are uncertain whether there is a difference. Adverse effects and withdrawals were similar across the treatment and placebo control groups in the trial. There are both insufficient trials addressing this review question and insufficient outcomes addressed in the single included RCT. Therefore, there is still a significant gap in evidence on interventions for neuropathic pain in people with SCD.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The evidence in this review was assessed as very-low quality. The single trial in this review lost more than 15% of participants to follow-up over three months. We downgraded the level of evidence due to lack of clarity in how the participants were assigned to the pregabalin or the placebo group and due to the small number of events and participants in the trial." } ]
query-laysum
23368
[ { "role": "user", "content": "Abstract: Background\nCongenital adrenal hyperplasia (CAH) is an autosomal recessive condition which leads to glucocorticoid deficiency and is the most common cause of adrenal insufficiency in children. In over 90% of cases, 21-hydroxylase enzyme deficiency is found which is caused by mutations in the 21-hydroxylase gene. Managing individuals with CAH due to 21-hydroxylase deficiency involves replacing glucocorticoids with oral glucocorticoids (including prednisolone and hydrocortisone), suppressing adrenocorticotrophic hormones and replacing mineralocorticoids to prevent salt wasting. During childhood, the main aims of treatment are to prevent adrenal crises and to achieve normal stature, optimal adult height and to undergo normal puberty. In adults, treatment aims to prevent adrenal crises, ensure normal fertility and to avoid the long-term consequences of glucocorticoid use. Current glucocorticoid treatment regimens can not optimally replicate the normal physiological cortisol level and over-treatment or under-treatment is often reported.\nObjectives\nTo compare and determine the efficacy and safety of different glucocorticoid replacement regimens in the treatment of CAH due to 21-hydroxylase deficiency in children and adults.\nSearch methods\nWe searched the Cochrane Inborn Errors of Metabolism Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. We also searched the reference lists of relevant articles and reviews, and trial registries (ClinicalTrials.gov and WHO ICTRP).\nDate of last search of trials register: 24 June 2019.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs comparing different glucocorticoid replacement regimens for treating CAH due to 21-hydroxylase deficiency in children and adults.\nData collection and analysis\nThe authors independently extracted and analysed the data from different interventions. They undertook the comparisons separately and used GRADE to assess the quality of the evidence.\nMain results\nSearches identified 1729 records with 43 records subject to further examination. After screening, we included five RCTs (six references) with a total of 101 participants and identified a further six ongoing RCTs. The number of participants in each trial varied from six to 44, with participants' ages ranging from 3.6 months to 21 years. Four trials were of cross-over design and one was of parallel design. Duration of treatment ranged from two weeks to six months per treatment arm with an overall follow-up between six and 12 months for all trials. Overall, we judged the quality of the trials to be at moderate to high risk of bias; with lack of methodological detail leading to unclear or high risk of bias judgements across many of the domains.\nAll trials employed an oral glucocorticoid replacement therapy, but with different daily schedules and dose levels. Three trials compared different dose schedules of hydrocortisone (HC), one three-arm trial compared HC to prednisolone (PD) and dexamethasone (DXA) and one trial compared HC with fludrocortisone to PD with fludrocortisone. 
Due to the heterogeneity of the trials and the limited amount of evidence, we were unable to perform any meta-analyses.\nNo trials reported on quality of life, prevention of adrenal crisis, presence of osteopenia, presence of testicular or ovarian adrenal rest tumours, subfertility or final adult height.\nFive trials (101 participants) reported androgen normalisation but using different measurements (very low-quality evidence for all measurements). Five trials reported 17 hydroxyprogesterone (17 OHP) levels, four trials reported androstenedione, three trials reported testosterone and one trial reported dehydroepiandrosterone sulphate (DHEAS). After four weeks, results from one trial (15 participants) showed a high morning dose of HC or a high evening dose made little or no difference in 17 OHP, testosterone, androstenedione and DHEAS. One trial (27 participants) found that HC and DXA treatment suppressed 17 OHP and androstenedione more than PD treatment after six weeks and a further trial (eight participants) reported no difference in 17 OHP between the five different dosing schedules of HC at between four and six weeks. One trial (44 participants) comparing HC and PD found no differences in the values of 17 OHP, androstenedione and testosterone at one year. One trial (26 participants) of HC versus HC plus fludrocortisone found that at six months 17 OHP and androstenedione levels were more suppressed on HC alone, but there were no differences noted in testosterone levels.\nWhile no trials reported on absolute final adult height, we reported some surrogate markers. Three trials reported on growth and bone maturation and two trials reported on height velocity. One trial found height velocity was reduced at six months in 26 participants given once daily HC 25 mg/m²/day compared to once daily HC 15 mg/m²/day (both groups also received fludrocortisone 0.1 mg/day), but as the quality of the evidence was very low we are unsure whether the variation in HC dose caused the difference. There were no differences noted in growth hormone or IGF1 levels. The results from another trial (44 participants) indicate no difference in growth velocity between HC and PD at one year (very low-quality evidence), but this trial did report that once daily PD treatment may lead to better control of bone maturation compared to HC in prepubertal children and that the absolute change in bone age/chronological age ratio was higher in the HC group compared to the PD group.\nAuthors' conclusions\nThere are currently limited trials comparing the efficacy and safety of different glucocorticoid replacement regimens for treating 21-hydroxylase deficiency CAH in children and adults and we were unable to draw any firm conclusions based on the evidence that was presented in the included trials.\nNo trials included long-term outcomes such as quality of life, prevention of adrenal crisis, presence of osteopenia, presence of testicular or ovarian adrenal rest tumours, subfertility and final adult height. There were no trials examining a modified-release formulation of HC or use of 24-hour circadian continuous subcutaneous infusion of hydrocortisone. As a consequence, uncertainty remains about the most effective form of glucocorticoid replacement therapy in CAH for children and adults.\nFuture trials should include both children and adults with CAH. A longer duration of follow-up is required to monitor biochemical and clinical outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Many trials had a high or unclear overall risk of bias. There were problems with the quality of the evidence, which was judged to be very low for all outcomes we considered across all the trials. This was because the trials were small and, if they compared a second treatment after the first, they did not leave enough time for the effects of the first treatment to clear. Also, people taking part had previously been treated with different glucocorticoids." } ]
query-laysum
23369
[ { "role": "user", "content": "Abstract: Background\nCombined oral contraceptives (COCs) have been associated with an increased risk of arterial thrombosis, i.e. myocardial infarction or ischemic stroke. However, as these diseases are rare in young women and as many types of combined oral contraception exist, the magnitude of the risk and the effect of different hormonal contents of COC preparations remain unclear.\nObjectives\nTo estimate the risk of myocardial infarction or ischemic stroke in users compared with non-users of different types, doses and generations of combined oral contraception.\nSearch methods\nWe searched electronic databases (MEDLINE (1966 to July 08, 2015), EMBASE (1980 to July 08, 2015), Popline (1970 to July 08, 2015) and LILACS (1985 to July 08, 2015)) for eligible studies, without language restrictions.\nSelection criteria\nWe included observational studies that recruited women in the reproductive age group (18 to 50 years) and compared the risk of myocardial infarction or ischemic stroke between users and non-users of COCs.\nData collection and analysis\nTwo review authors independently selected relevant studies and extracted data. We pooled relative risks (combined odds ratios and one incidence rate ratio) and 95% confidence intervals (CIs) for myocardial infarction or ischemic stroke in users versus non-users of COCs. We combined the outcomes of myocardial infarction and ischemic stroke and also analysed these outcomes separately. Analyses were stratified according to estrogen dose and progestagen type.\nMain results\nIn total, we identified 1298 publications through the search strategy. We included 28 publications reporting on 24 studies. COC users were at increased risk of myocardial infarction or ischemic stroke compared with non-users: relative risk (RR) 1.6 (95% CI 1.3-1.9). These RRs were similar for myocardial infarction (1.6, 95% CI 1.2 to 2.1) and ischemic stroke (1.7, 95% CI 1.5 to 1.9). The risks did not vary clearly according to the generation of progestagen or according to progestagen type. When we stratified preparations according to estrogen dose, the risk of myocardial infarction or ischemic stroke seemed to increase with higher doses of estrogen.\nAuthors' conclusions\nThis meta-analysis showed that the risk of myocardial infarction or ischemic stroke was 1.6-fold increased in women using COCs. The risk was highest for pills with > 50 microgram estrogen. When combined with the results of studies on the risk of venous thrombosis in COC users, it seems that the COC pill containing levonorgestrel and 30 µg of estrogen is the safest oral form of combined oral hormonal contraception.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Our results showed that the overall risk of arterial thrombosis was 1.6-fold increased in women using oral contraceptive pills compared with women who did not use oral contraceptive pills. The risk did not vary clearly according to progestagen type. However, we found that the risk of arterial thrombosis seemed to be twice as high in women taking pills with higher doses of estrogen. Also, the risk of other side effects of oral contraceptive pills (such as a blood clot in a vein, i.e. venous thrombosis) should be considered before any type of oral contraceptive pill is prescribed.
It is likely that the COC pill containing levonorgestrel and 30 µg of estrogen is the safest oral form of combined oral hormonal contraception." } ]
query-laysum
23370
[ { "role": "user", "content": "Abstract: Background\nWomen with a diagnosis of breast cancer may experience short- and long-term disease and treatment-related adverse physiological and psychosocial outcomes. These outcomes can negatively impact prognosis, health-related quality of life (HRQoL), and psychosocial and physical function. Physical activity may help to improve prognosis and may alleviate the adverse effects of adjuvant therapy.\nObjectives\nTo assess effects of physical activity interventions after adjuvant therapy for women with breast cancer.\nSearch methods\nWe searched the Cochrane Breast Cancer Group (CBCG) Specialised Registry, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Physiotherapy Evidence Database (PEDro), SPORTDiscus, PsycINFO, ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform, on 18 September 2015. We also searched OpenGrey and Healthcare Management Information Consortium databases.\nSelection criteria\nWe searched for randomised and quasi-randomised trials comparing physical activity interventions versus control (e.g. usual or standard care, no physical activity, no exercise, attention control, placebo) after adjuvant therapy (i.e. after completion of chemotherapy and/or radiation therapy, but not hormone therapy) in women with breast cancer.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias, and extracted data. We contacted trial authors to ask for additional information when needed. We calculated an overall effect size with 95% confidence intervals (CIs) for each outcome and used GRADE to assess the quality of evidence for the most important outcomes.\nMain results\nWe included 63 trials that randomised 5761 women to a physical activity intervention (n = 3239) or to a control (n = 2524). The duration of interventions ranged from 4 to 24 months, with most lasting 8 or 12 weeks (37 studies). Twenty-eight studies included aerobic exercise only, 21 involved aerobic exercise and resistance training, and seven used resistance training only. Thirty studies described the comparison group as usual or standard care, no intervention, or control. One-fifth of studies reported at least 20% intervention attrition and the average physical activity adherence was approximately 77%.\nNo data were available on effects of physical activity on breast cancer-related and all-cause mortality, or on breast cancer recurrence. 
Analysis of immediately postintervention follow-up values and change from baseline to end of intervention scores revealed that physical activity interventions resulted in significant small-to-moderate improvements in HRQoL (standardised mean difference (SMD) 0.39, 95% CI 0.21 to 0.57, 22 studies, 1996 women; SMD 0.78, 95% CI 0.39 to 1.17, 14 studies, 1459 women, respectively; low-quality evidence), emotional function (SMD 0.21, 95% CI 0.10 to 0.32, 26 studies, 2102 women, moderate-quality evidence; SMD 0.31, 95% CI 0.09 to 0.53, 15 studies, 1579 women, respectively; low-quality evidence), perceived physical function (SMD 0.33, 95% CI 0.18 to 0.49, 25 studies, 2129 women; SMD 0.60, 95% CI 0.23 to 0.97, 13 studies, 1433 women, respectively; moderate-quality evidence), anxiety (SMD -0.57, 95% CI -0.95 to -0.19, 7 studies, 326 women; SMD -0.37, 95% CI -0.63 to -0.12, 4 studies, 235 women, respectively; low-quality evidence), and cardiorespiratory fitness (SMD 0.44, 95% CI 0.30 to 0.58, 23 studies, 1265 women, moderate-quality evidence; SMD 0.83, 95% CI 0.40 to 1.27, 9 studies, 863 women, respectively; very low-quality evidence).\nInvestigators reported few minor adverse events.\nSmall improvements in physical activity interventions were sustained for three months or longer postintervention in fatigue (SMD -0.43, 95% CI -0.60 to -0.26; SMD -0.47, 95% CI -0.84 to -0.11, respectively), cardiorespiratory fitness (SMD 0.36, 95% CI 0.03 to 0.69; SMD 0.42, 95% CI 0.05 to 0.79, respectively), and self-reported physical activity (SMD 0.44, 95% CI 0.17 to 0.72; SMD 0.51, 95% CI 0.08 to 0.93, respectively) for both follow-up values and change from baseline scores.\nHowever, evidence of heterogeneity across trials was due to variation in intervention components (i.e. mode, frequency, intensity, duration of intervention and sessions) and measures used to assess outcomes. All trials reviewed were at high risk of performance bias, and most were also at high risk of detection, attrition, and selection bias. In light of the aforementioned issues, we determined that the evidence was of very low, low, or moderate quality.\nAuthors' conclusions\nNo conclusions regarding breast cancer-related and all-cause mortality or breast cancer recurrence were possible. However, physical activity interventions may have small-to-moderate beneficial effects on HRQoL, and on emotional or perceived physical and social function, anxiety, cardiorespiratory fitness, and self-reported and objectively measured physical activity. The positive results reported in the current review must be interpreted cautiously owing to very low-to-moderate quality of evidence, heterogeneity of interventions and outcome measures, imprecision of some estimates, and risk of bias in many trials. Future studies with low risk of bias are required to determine the optimal combination of physical activity modes, frequencies, intensities, and durations needed to improve specific outcomes among women who have undergone adjuvant therapy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "What effects do physical activity (PA) interventions have on women with breast cancer who have completed cancer treatment?" } ]
query-laysum
23371
[ { "role": "user", "content": "Abstract: Background\nPeople living in 'humanitarian settings' in low- and middle-income countries (LMICs) are exposed to a constellation of physical and psychological stressors that make them vulnerable to developing mental disorders. A range of psychological and social interventions have been implemented with the aim to prevent the onset of mental disorders and/or lower psychological distress in populations at risk, and it is not known whether interventions are effective.\nObjectives\nTo compare the efficacy and acceptability of psychological and social interventions versus control conditions (wait list, treatment as usual, attention placebo, psychological placebo, or no treatment) aimed at preventing the onset of non-psychotic mental disorders in people living in LMICs affected by humanitarian crises.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Controlled Trials Register (CCMD-CTR), the Cochrane Drugs and Alcohol Review Group (CDAG) Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (OVID), Embase (OVID), PsycINFO (OVID), and ProQuest PILOTS database with results incorporated from searches to February 2020. We also searched the World Health Organization's (WHO) International Clinical Trials Registry Platform and ClinicalTrials.gov to identify unpublished or ongoing studies. We checked the reference lists of relevant studies and reviews.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing psychological and social interventions versus control conditions to prevent the onset of mental disorders in adults and children living in LMICs affected by humanitarian crises. We excluded studies that enrolled participants based on a positive diagnosis of mental disorder (or based on a proxy of scoring above a cut-off score on a screening measure).\nData collection and analysis\nWe calculated standardised mean differences for continuous outcomes and risk ratios for dichotomous data, using a random-effects model. We analysed data at endpoint (zero to four weeks after therapy) and at medium term (one to four months after intervention). No data were available at long term (six months or longer). We used GRADE to assess the quality of evidence.\nMain results\nIn the present review we included seven RCTs with a total of 2398 participants, coming from both children/adolescents (five RCTs), and adults (two RCTs). Together, the seven RCTs compared six different psychosocial interventions against a control comparator (waiting list in all studies). All the interventions were delivered by paraprofessionals and, with the exception of one study, delivered at a group level.\nNone of the included studies provided data on the efficacy of interventions to prevent the onset of mental disorders (incidence). For the primary outcome of acceptability, there may be no evidence of a difference between psychological and social interventions and control at endpoint for children and adolescents (RR 0.93, 95% CI 0.78 to 1.10; 5 studies, 1372 participants; low-quality evidence) or adults (RR 0.96, 95% CI 0.61 to 1.50; 2 studies, 767 participants; very low quality evidence). 
No information on adverse events related to the interventions was available.\nFor children's and adolescents' secondary outcomes of prevention interventions, there may be no evidence of a difference between psychological and social intervention groups and control groups for reducing PTSD symptoms (standardised mean difference (SMD) −0.16, 95% CI −0.50 to 0.18; 3 studies, 590 participants; very low quality evidence), depressive symptoms (SMD −0.01, 95% CI −0.29 to 0.31; 4 RCTs, 746 participants; very low quality evidence) and anxiety symptoms (SMD 0.11, 95% CI −0.09 to 0.31; 3 studies, 632 participants; very low quality evidence) at study endpoint.\nIn adults' secondary outcomes of prevention interventions, psychological counselling may be effective for reducing depressive symptoms (MD −7.50, 95% CI −9.19 to −5.81; 1 study, 258 participants; very low quality evidence) and anxiety symptoms (MD −6.10, 95% CI −7.57 to −4.63; 1 study, 258 participants; very low quality evidence) at endpoint. No data were available for PTSD symptoms in the adult population.\nOwing to the small number of RCTs included in the present review, it was not possible to carry out neither sensitivity nor subgroup analyses.\nAuthors' conclusions\nOf the seven prevention studies included in this review, none assessed whether prevention interventions reduced the incidence of mental disorders and there may be no evidence for any differences in acceptability. Additionally, for both child and adolescent populations and adult populations, a very small number of RCTs with low quality evidence on the review's secondary outcomes (changes in symptomatology at endpoint) did not suggest any beneficial effect for the studied prevention interventions.\nConfidence in the findings is hampered by the scarcity of prevention studies eligible for inclusion in the review, by risk of bias in the studies, and by substantial levels of heterogeneity. Moreover, it is possible that random error had a role in distorting results, and that a more thorough picture of the efficacy of prevention interventions will be provided by future studies. For this reason, prevention studies are urgently needed to assess the impact of interventions on the incidence of mental disorders in children and adults, with extended periods of follow-up.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched for studies that looked at the preventive effects of psychosocial interventions on people's mental health in low- and middle-income countries affected by humanitarian crises. We looked for randomised controlled studies, in which the interventions people received were decided at random. This type of study usually gives the most reliable evidence about the effects of an intervention." } ]
query-laysum
23372
[ { "role": "user", "content": "Abstract: Background\nCrohn's disease (CD) is a chronic relapsing inflammatory condition and maintenance of remission is a major issue as many patients fail to achieve remission with medical management and require surgical interventions. Purine analogues such as azathioprine (AZA) and 6-mercaptopurine (6-MP) have been used to maintain surgically-induced remission in CD, but the effectiveness, tolerability and safety of these agents remains controversial.\nObjectives\nTo assess the efficacy and safety of purine analogues (AZA and 6-MP) for maintenance of surgically-induced remission in CD.\nSearch methods\nWe searched PubMed, MEDLINE, Embase, CENTRAL, and the Cochrane IBD Group Specialized Register from inception to 26 July 2018 (and from inception to 31 July 2019). In addition, we searched reference lists of all included studies and relevant reviews, conference proceedings and trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) with a duration of at least three months that enrolled adults and children with surgically-induced remission of CD and compared AZA or 6-MP to no treatment, placebo or any other active intervention were considered for inclusion.\nData collection and analysis\nTwo authors independently assessed trial eligibility, extracted data, assessed the risk of bias and assessed the certainty of the evidence using GRADE. The primary outcome was clinical relapse. Secondary outcomes included endoscopic relapse, radiologic and surgical relapse, adverse events (AEs), serious adverse events (SAEs), withdrawal due to AEs and health-related quality of life.\nMain results\nTen RCTs with a total of 928 participants were included. Study participants were adults recruited from university clinics and gastroenterology hospitals who received interventions post-surgery for a duration between 12 to 36 months. Most study participants were recruited less than three months after surgery in all except one study where participants were recruited between 6 to 24 months post-surgery. One study was rated as low risk of bias, six studies were rated high risk of bias and three were rated unclear risk of bias.\nThere was moderate certainty evidence that purine analogues are more efficient for preventing clinical relapse than placebo. At 12 to 36 months, 51% (109/215) of AZA/6-MP participants relapsed compared to 64% (124/193) of placebo participants (RR 0.79; 95% CI 0.67 to 0.92; 408 participants; 3 studies; I² = 0%; moderate certainty evidence). The certainty of the evidence regarding the efficacy of AZA or 6-MP for maintaining postoperative clinical remission compared to 5-ASA compounds was low. At 12 to 24 months , 64% (113/177) of purine analogue participants relapsed compared to 59% (101/170) of 5-ASA participants (RR 1.05; 95% CI 0.89 to 1.24; 347 participants; 4 studies; I² = 8%; low certainty evidence). The certainty of evidence that purine analogues are inferior for preventing postsurgical clinical relapse compared to tumour necrosis factor alpha agents (anti-TNF-α) was very low. At 12 to 24 months, 43% (29/67) of AZA participants relapsed compared to 14% (10/72) of anti-TNF-α participants (RR 2.89; 95% CI 1.50 to 5.57; 139 participants; 3 studies; I² = 0%; very low certainty evidence).\nThe effect of purine analogues compounds on AEs compared to placebo or any active treatment was uncertain, as the quality of evidence ranged from very low to low. 
After 12 to 24 months, 14% (12/87) of purine analogue participants experienced an AE compared to 10% (8/81) of placebo participants (RR 1.36; 95% CI 0.57 to 3.27; 168 participants; 2 studies; I² = 0%; low certainty evidence). The effect of purine analogues on AEs compared to 5-ASA agents was uncertain. After 12 to 24 months, 41% (73/176) of purine analogue participants had an AE compared to 47% (81/171) of 5-ASA participants (RR 0.89; 95% CI 0.74 to 1.07; 346 participants; 4 studies; I² = 15%; low certainty evidence). The effect of purine analogues on AEs in comparison to anti-TNF-α agents was uncertain. At 12 to 24 months, 57% (32/56) of AZA participants had an AE compared to 51% (31/61) of anti-TNF-α participants (RR 1.13; 95% CI 0.83 to 1.53; 117 participants; 2 studies; I² = 0%; low certainty evidence). Purine analogue participants were more likely than 5-ASA participants to have a SAE (RR 3.39, 95% CI 1.26 to 9.13, 311 participants; 3 studies; I² = 9%; very low certainty evidence), or to withdraw due to an AE (RR 2.21, 95% CI 1.28 to 3.81; 425 participants; 5 studies; I² = 0%; low certainty evidence). Commonly reported AEs across all studies included leucopenia, arthralgia, abdominal pain or severe epigastric intolerance, elevated liver enzymes, nausea and vomiting, pancreatitis, anaemia, nasopharyngitis and flatulence.\nAuthors' conclusions\nModerate certainty evidence suggests that AZA and 6-MP may be superior to placebo for maintenance of surgically-induced remission in participants with CD. There was no clear difference in the number of clinical relapses when purine analogues were compared with 5-ASA agents, however this is based on low certainty evidence. There was very low certainty evidence that AZA and 6-MP are more likely to result in more serious adverse events (SAEs) and withdrawals due to an AE (low certainty) when compared to 5-ASA agents. Very low certainty evidence suggests that purine analogues may be inferior to anti-TNF-α agents, however, no firm conclusions can be drawn. Further research investigating the efficacy and safety of AZA and 6-MP in comparison to other active medications in surgically-induced remission of CD is warranted.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The review authors found 10 relevant studies with a total of 928 participants, conducted across several European countries, Israel and the US. The studies included people with Crohn's disease over 16 years of age who had undergone surgery and were free from symptoms. These studies compared purine analogues with placebo (e.g. a sugar pill), or oral 5-aminosalicylic acid (5-ASA) formulations or with anti-tumour necrosis factor-alpha (anti-TNF-α) drugs. 5-ASA and anti-TNF-α drugs are used to reduce inflammation (pain and swelling) in the gut.\nOne study was high quality, while six studies were of lower quality and three studies did not report enough information to make a judgement on quality. Purine analogues are probably better than placebo for maintaining surgically-induced remission of Crohn's disease (moderate certainty evidence). The analysis of studies that compared purine antimetabolites to 5-ASA medications found no difference in the number of people who remained in remission. 
However, more people who received purine analogues experienced serious side effects or discontinued treatment due to side effects than those who received 5-ASA (very low and low certainty evidence). The analysis of studies that compared purine analogues to anti-TNF-α drugs showed that purine analogues were less effective for maintaining remission of Crohn's disease after surgery. However, the overall certainty of evidence was very low. Well-designed studies are needed to better understand the benefits and harms of purine analogues compared with anti-TNF-α agents and other active drugs used for Crohn's disease. Due to sparse data and inconsistent reporting across all studies, the effect of purine analogues on side effects compared with placebo, 5-ASA or biologics was uncertain. Commonly reported side effects across the studies included leucopenia (a reduction in the number of white cells in the blood), pancreatitis (inflamed pancreas), arthralgia (joint pain), abdominal pain or severe epigastric intolerance, elevated liver enzymes, nausea and vomiting, anaemia (low number of red blood cells), nasopharyngitis (common cold) and flatulence (intestinal gas)." } ]
query-laysum
23373
[ { "role": "user", "content": "Abstract: Background\nPremenstrual syndrome (PMS) is a common problem. Premenstrual dysphoric disorder (PMDD) is a severe form of premenstrual syndrome. Combined oral contraceptives (COC), which provide both progestin and oestrogen, have been examined for their ability to relieve premenstrual symptoms. A combined oral contraceptive containing drospirenone and a low oestrogen dose has been approved for treating PMDD in women who choose combined oral contraceptives for contraception.\nObjectives\nTo evaluate the effectiveness and safety of COCs containing drospirenone in women with PMS.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group trial register, CENTRAL (now containing output from two trials registers and CINAHL), MEDLINE, Embase, PsycINFO, LILACS, Google Scholar, and Epistemonikos on 29 June 2022. We checked included studies' reference lists and contacted study authors and experts in the field to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCT) that compared COCs containing drospirenone with placebo or with another COC for treatment of women with PMS.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. The primary review outcomes were effects on premenstrual symptoms that were prospectively recorded, and withdrawal due to adverse events. Secondary outcomes included effects on mood, adverse events, and response rate to study medications.\nMain results\nWe included five RCTs (858 women analysed, most diagnosed with PMDD). The evidence was very low to moderate quality; the main limitations were serious risk of bias due to poor reporting of study methods, and serious inconsistency and imprecision.\nCOCs containing drospirenone and ethinylestradiol (EE) versus placebo\nCOCs containing drospirenone and EE may improve overall premenstrual symptoms (standardised mean difference (SMD) -0.41, 95% confidence interval (CI) -0.59 to -0.24; 2 RCTs, N = 514; I2 = 64%; low-quality evidence); and functional impairment due to premenstrual symptoms in terms of productivity (mean difference (MD) -0.31, 95% CI -0.55 to -0.08; 2 RCTs, N = 432; I2 = 47%; low-quality evidence), social activities (MD -0.29, 95% CI -0.54 to -0.04; 2 RCTs, N = 432; I2 = 53%; low-quality evidence), and relationships (MD -0.30, 95% CI -0.54 to -0.06; 2 RCTs, N = 432; I2 = 45%; low-quality evidence). The effects from COCs containing drospirenone may be small to moderate.\nCOCs containing drospirenone and EE may increase withdrawal from trials due to adverse effects (odds ratio (OR) 3.41, 95% CI 2.01 to 5.78; 4 RCT, N = 776; I2 = 0%; low-quality evidence). This suggests that if you assume the risk of withdrawal due to adverse effects from placebo is 3%, the risk from drospirenone plus EE will be between 6% and 16%.\nWe are uncertain of the effect of drospirenone plus EE on premenstrual mood symptoms, when measured by validated tools that were not developed to assess premenstrual symptoms.\nCOCs containing drospirenone may lead to more adverse effects in total (OR 2.31, 95% CI 1.71 to 3.11; 3 RCT, N = 739; I2 = 0%; low-quality evidence). This suggests that if you assume the risk of having adverse effects from placebo is 28%, the risk from drospirenone plus EE will be between 40% and 54%. It probably leads to more breast pain, and may lead to more nausea, intermenstrual bleeding, and menstrual disorder. Its effect on nervousness, headache, asthenia, and pain is uncertain. 
There was no report of any rare but serious adverse effects, such as venous thromboembolism in any of the included studies.\nCOCs containing drospirenone may improve response rate (OR 1.65, 95% CI 1.13 to 2.40; 1 RCT, N = 449; I2 not applicable; low-quality evidence). This suggests that if you assume the response rate from placebo is 36%, the risk from drospirenone plus EE will be between 39% and 58%.\nWe did not identify any studies that compared COCs containing drospirenone with other COCs.\nAuthors' conclusions\nCOCs containing drospirenone and EE may improve premenstrual symptoms that result in functional impairments in women with PMDD. The placebo also had a significant effect. COCs containing drospirenone and EE may lead to more adverse effects compared to placebo. We do not know whether it works after three cycles, helps women with less severe symptoms, or is better than other combined oral contraceptives that contain a different progestogen.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "After three months, women on the drospirenone with low oestrogen pill had less severe premenstrual symptoms than the group taking the dummy pill. Women on the drospirenone with low oestrogen pill said they could do more, and had more social activities and friends. However, women on the drospirenone with low oestrogen pill were more likely to pull out of the trials due to side effects. They reported more side effects overall; specifically, nausea, bleeding between periods, and breast pain, which are all common side effects with birth control pills. We do not know if the drospirenone with low oestrogen pill works longer than three cycles, helps women with less severe symptoms, or is better than other birth control pills.\nThe beneficial effects from the drospirenone with low oestrogen pill on premenstrual symptoms were small to moderate, although there was significant benefit from the dummy pill too. The evidence suggests that if the chance of pulling out of the trials due to side effects while on the dummy pill is assumed to be 3%, the chance while on the drospirenone with low oestrogen pill would be between 6% and 16%. It also suggests that if the risk of having side effects from the dummy pill is assumed to be 28%, the risk from the drospirenone with low oestrogen pill would be between 40% and 54%." } ]
query-laysum
23374
[ { "role": "user", "content": "Abstract: Background\nRadical cystectomy (RC) is the primary surgical treatment for muscle-invasive urothelial carcinoma of the bladder. This major operation is typically associated with an extended hospital stay, a prolonged recovery period and potentially major complications. Nutritional interventions are beneficial in some people with other types of cancer and may be of value in this setting too.\nObjectives\nTo assess the effects of perioperative nutrition in people undergoing radical cystectomy for the treatment of bladder cancer.\nSearch methods\nWe performed a comprehensive search using multiple databases (Evidence Based Medicine Reviews, MEDLINE, Embase, AMED, CINAHL), trials registries, other sources of grey literature, and conference proceedings published up to 22 February 2019, with no restrictions on the language or status of publication.\nSelection criteria\nWe included parallel-group randomised controlled trials (RCTs) of adults undergoing RC for bladder cancer. The intervention was any perioperative nutrition support.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, and assessed risk of bias and the quality of evidence using GRADE. Primary outcomes were postoperative complications at 90 days and length of hospital stay. The secondary outcome was mortality up to 90 days after surgery. When 90-day outcome data were not available, we reported 30-day data.\nMain results\nThe search identified eight trials including 500 participants. Six trials were conducted in the USA and two in Europe.\n1. Parenteral nutrition (PN) versus oral nutrition: based on one study with 157 participants, PN may increase postoperative complications within 30 days (risk ratio (RR) 1.40, 95% confidence interval (CI) 1.07 to 1.82; low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection, performance and selective reporting bias) and serious imprecision. This corresponds to 198 more complications per 1000 participants (95% CI 35 more to 405 more). Length of hospital stay may be similar (mean difference (MD) 0.5 days higher, CI not reported; low-quality evidence).\n2. Immuno-enhancing nutrition versus standard nutrition: based on one study including 29 participants, immuno-enhancing nutrition may reduce 90-day postoperative complications (RR 0.31, 95% CI 0.08 to 1.23; low-quality evidence). These findings correspond to 322 fewer complications per 1000 participants (95% CI 429 fewer to 107 more). Length of hospital stay may be similar (MD 0.20 days, 95% CI 1.69 lower to 2.09 higher; low-quality evidence). We downgraded the quality of evidence of both outcomes for very serious imprecision.\n3. Preoperative oral nutritional support versus normal diet: based on one study including 28 participants, we are very uncertain if preoperative oral supplements reduces postoperative complications. We downgraded quality for serious study limitations (unclear risk of selection, performance, attrition and selective reporting bias) and very serious imprecision. The study did not report on length of hospital stay.\n4. Early postoperative feeding versus standard postoperative management: based on one study with 102 participants, early postoperative feeding may increase postoperative complications (very low-quality evidence) but we are very uncertain of this finding. 
We downgraded the quality of evidence for serious study limitations (unclear risk of selection and performance bias) and very serious imprecision. Length of hospital stay may be similar (MD 0.95 days less, CI not reported; low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection and performance bias) and serious imprecision.\n5. Amino acid with dextrose versus dextrose: based on two studies with 104 participants, we are very uncertain whether amino acids reduce postoperative complications (very low-quality evidence). We are also very uncertain whether length of hospital stay is similar (very low-quality evidence). We downgraded the quality of evidence for both outcomes for serious study limitations (unclear and high risk of selection bias; unclear risk of performance, detection and selective reporting bias), serious indirectness related to the patient population and very serious imprecision.\n6. Branch chain amino acids versus dextrose only: based on one study including 19 participants, we are very uncertain whether complication rates are similar (very low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection, performance, detection, attrition and selective reporting bias), serious indirectness related to the patient population and very serious imprecision. The study did not report on length of hospital stay.\n7. Perioperative oral nutritional supplements versus oral multivitamin and mineral supplement: based on one study with 61 participants, oral supplements compared to a multivitamin and mineral supplement may slightly decrease postoperative complications (low-quality evidence). These findings correspond to 135 fewer occurrences per 1000 participants (95% CI 256 fewer to 65 more). Length of hospital stay may be similar (low-quality evidence). We downgraded the quality of evidence of both outcomes for study limitations and imprecision.\nAuthors' conclusions\nBased on few, small and dated studies, with serious methodological limitations, we found limited evidence for a benefit of perioperative nutrition interventions. We rated the quality of evidence as low or very low, which underscores the urgent need for high-quality research studies to better inform nutritional support interventions for people undergoing surgery for bladder cancer.\n\nGiven the provided abstract, please respond to the following query: \"4. Early postoperative feeding versus standard care\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "based on one study that included 102 people, early postoperative feeding may increase postoperative complications after surgery, but we are very uncertain of this finding. Length of hospital stay may be similar." } ]
query-laysum
23375
[ { "role": "user", "content": "Abstract: Background\nConstruction workers are frequently exposed to various types of injury-inducing hazards. There are a number of injury prevention interventions, yet their effectiveness is uncertain.\nObjectives\nTo assess the effects of interventions for preventing injuries in construction workers.\nSearch methods\nWe searched the Cochrane Injuries Group's specialised register, CENTRAL (issue 3), MEDLINE, Embase and PsycINFO up to April 2017. The searches were not restricted by language or publication status. We also handsearched the reference lists of relevant papers and reviews.\nSelection criteria\nRandomised controlled trials, controlled before-after (CBA) studies and interrupted time-series (ITS) of all types of interventions for preventing fatal and non-fatal injuries among workers at construction sites.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and assessed their risk of bias. For ITS studies, we re-analysed the studies and used an initial effect, measured as the change in injury rate in the year after the intervention, as well as a sustained effect, measured as the change in time trend before and after the intervention.\nMain results\nSeventeen studies (14 ITS and 3 CBA studies) met the inclusion criteria in this updated version of the review. The ITS studies evaluated the effects of: introducing or changing regulations that laid down safety and health requirements for the construction sites (nine studies), a safety campaign (two studies), a drug-free workplace programme (one study), a training programme (one study), and safety inspections (one study) on fatal and non-fatal occupational injuries. One CBA study evaluated the introduction of occupational health services such as risk assessment and health surveillance, one evaluated a training programme and one evaluated the effect of a subsidy for upgrading to safer scaffoldings. The overall risk of bias of most of the included studies was high, as it was uncertain for the ITS studies whether the intervention was independent from other changes and thus could be regarded as the main reason of change in the outcome. Therefore, we rated the quality of the evidence as very low for all comparisons.\nCompulsory interventions\nRegulatory interventions at national or branch level may or may not have an initial effect (effect size (ES) of −0.33; 95% confidence interval (CI) −2.08 to 1.41) and may or may not have a sustained effect (ES −0.03; 95% CI −0.30 to 0.24) on fatal and non-fatal injuries (9 ITS studies) due to highly inconsistent results (I² = 98%). 
Inspections may or may not have an effect on non-fatal injuries (ES 0.07; 95% CI −2.83 to 2.97; 1 ITS study).\nEducational interventions\nSafety training interventions may result in no significant reduction of non-fatal injuries (1 ITS study and 1 CBA study).\nInformational interventions\nWe found no studies that had evaluated informational interventions alone such as campaigns for risk communication.\nPersuasive interventions\nWe found no studies that had evaluated persuasive interventions alone such as peer feedback on workplace actions to increase acceptance of safe working methods.\nFacilitative interventions\nMonetary subsidies to companies may lead to a greater decrease in non-fatal injuries from falls to a lower level than no subsidies (risk ratio (RR) at follow-up: 0.93; 95% CI 0.30 to 2.91 from RR 3.89 at baseline; 1 CBA study).\nMultifaceted interventions\nA safety campaign intervention may result in an initial (ES −1.82; 95% CI −2.90 to −0.74) and sustained (ES −1.30; 95% CI −1.79 to −0.81) decrease in injuries at the company level (1 ITS study), but not at the regional level (1 ITS study). A multifaceted drug-free workplace programme at the company level may reduce non-fatal injuries in the year following implementation by −7.6 per 100 person-years (95% CI −11.2 to −4.0) and in the years thereafter by −2.0 per 100 person-years (95% CI −3.5 to −0.5) (1 ITS study). Introducing occupational health services may result in no decrease in fatal or non-fatal injuries (one CBA study).\nAuthors' conclusions\nThe vast majority of interventions to adopt safety measures recommended by standard texts on safety, consultants and safety courses have not been adequately evaluated. There is very low-quality evidence that introducing regulations as such may or may not result in a decrease in fatal and non-fatal injuries. There is also very low-quality evidence that regionally oriented safety campaigns, training, inspections or the introduction of occupational health services may not reduce non-fatal injuries in construction companies. There is very low-quality evidence that company-oriented safety interventions such as a multifaceted safety campaign, a multifaceted drug workplace programme and subsidies for replacement of scaffoldings may reduce non-fatal injuries among construction workers. More studies, preferably cluster-randomised controlled trials, are needed to evaluate different strategies to increase the employers' and workers' adherence to the safety measures prescribed by regulation.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Introducing regulations alone may or may not be effective for preventing non-fatal and fatal injuries in construction workers. Regionally oriented interventions such as a safety campaigns, training, inspections or occupational health services may not be effective for reducing non-fatal injuries in construction workers. However, a multifaceted safety campaign and a multifaceted drug-free workplace programme at the company level, along with subsidies for replacement of scaffoldings, may be effective in reducing non-fatal injuries.\nAdditional strategies are needed to increase the employers' and workers' adherence to the safety measures that are prescribed by regulation." } ]
query-laysum
23376
[ { "role": "user", "content": "Abstract: Background\nSeveral comparative randomised controlled trials (RCTs) have been performed including combinations of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors since the publication of a Cochrane Review on targeted therapy for metastatic renal cell carcinoma (mRCC) in 2008. This review represents an update of that original review.\nObjectives\nTo assess the effects of targeted therapies for clear cell mRCC in patients naïve to systemic therapy.\nSearch methods\nWe performed a comprehensive search with no restrictions on language or publication status. The date of the latest search was 18 June 2020.\nSelection criteria\nWe included randomised controlled trials, recruiting patients with clear cell mRCC naïve to previous systemic treatment. The index intervention was any TKI-based targeted therapy.\nData collection and analysis\nTwo review authors independently assessed the included studies and extracted data for the primary outcomes: progression-free survival (PFS), overall survival (OS) and serious adverse events (SAEs); and the secondary outcomes: health-related quality of life (QoL), response rate and minor adverse events (AEs). We performed statistical analyses using a random-effects model and rated the certainty of evidence according to the GRADE approach.\nMain results\nWe included 18 RCTs reporting on 11,590 participants randomised across 18 comparisons. This abstract focuses on the primary outcomes of select comparisons.\n1. Pazopanib versus sunitinib\nPazopanib may result in little to no difference in PFS as compared to sunitinib (hazard ratio (HR) 1.05, 95% confidence interval (CI) 0.90 to 1.23; 1 study, 1110 participants; low-certainty evidence). Based on the control event risk of 420 per 1000 in this trial at 12 months, this corresponds to 18 fewer participants experiencing PFS (95% CI 76 fewer to 38 more) per 1000 participants. Pazopanib may result in little to no difference in OS compared to sunitinib (HR 0.92, 95% CI 0.80 to 1.06; 1 study, 1110 participants; low-certainty evidence). Based on the control event risk of 550 per 1000 in this trial at 12 months, this corresponds to 27 more OSs (95% CI 19 fewer to 70 more) per 1000 participants. Pazopanib may result in little to no difference in SAEs as compared to sunitinib (risk ratio (RR) 1.01, 95% CI 0.94 to 1.09; 1 study, 1102 participants; low-certainty evidence). Based on the control event risk of 734 per 1000 in this trial, this corresponds to 7 more participants experiencing SAEs (95% CI 44 fewer to 66 more) per 1000 participants.\n2. Sunitinib versus avelumab and axitinib\nSunitinib probably reduces PFS as compared to avelumab plus axitinib (HR 1.45, 95% CI 1.17 to 1.80; 1 study, 886 participants; moderate-certainty evidence). Based on the control event risk of 550 per 1000 in this trial at 12 months, this corresponds to 130 fewer participants experiencing PFS (95% CI 209 fewer to 53 fewer) per 1000 participants. Sunitinib may result in little to no difference in OS (HR 1.28, 95% CI 0.92 to 1.79; 1 study, 886 participants; low-certainty evidence). Based on the control event risk of 890 per 1000 in this trial at 12 months, this would result in 29 fewer OSs (95% CI 78 fewer to 8 more) per 1000 participants. Sunitinib may result in little to no difference in SAEs (RR 1.01, 95% CI 0.93 to 1.10; 1 study, 873 participants; low-certainty evidence). 
Based on the control event risk of 705 per 1000 in this trial, this corresponds to 7 more SAEs (95% CI 49 fewer to 71 more) per 1000 participants.\n3. Sunitinib versus pembrolizumab and axitinib\nSunitinib probably reduces PFS as compared to pembrolizumab plus axitinib (HR 1.45, 95% CI 1.19 to 1.76; 1 study, 861 participants; moderate-certainty evidence). Based on the control event risk of 590 per 1000 in this trial at 12 months, this corresponds to 125 fewer participants experiencing PFS (95% CI 195 fewer to 56 fewer) per 1000 participants. Sunitinib probably reduces OS (HR 1.90, 95% CI 1.36 to 2.65; 1 study, 861 participants; moderate-certainty evidence). Based on the control event risk of 880 per 1000 in this trial at 12 months, this would result in 96 fewer OSs (95% CI 167 fewer to 40 fewer) per 1000 participants. Sunitinib may reduce SAEs as compared to pembrolizumab plus axitinib (RR 0.90, 95% CI 0.81 to 1.02; 1 study, 854 participants; low-certainty evidence) although the CI includes the possibility of no effect. Based on the control event risk of 604 per 1000 in this trial, this corresponds to 60 fewer SAEs (95% CI 115 fewer to 12 more) per 1000 participants.\n4. Sunitinib versus nivolumab and ipilimumab\nSunitinib may reduce PFS as compared to nivolumab plus ipilimumab (HR 1.30, 95% CI 1.11 to 1.52; 1 study, 847 participants; low-certainty evidence). Based on the control event risk of 280 per 1000 in this trial at 30 months' follow-up, this corresponds to 89 fewer PFSs (95% CI 136 fewer to 37 fewer) per 1000 participants. Sunitinib reduces OS (HR 1.52, 95% CI 1.23 to 1.89; 1 study, 847 participants; high-certainty evidence). Based on the control event risk 600 per 1000 in this trial at 30 months, this would result in 140 fewer OSs (95% CI 219 fewer to 67 fewer) per 1000 participants. Sunitinib probably increases SAEs (RR 1.37, 95% CI 1.22 to 1.53; 1 study, 1082 participants; moderate-certainty evidence). Based on the control event risk of 457 per 1000 in this trial, this corresponds to 169 more SAEs (95% CI 101 more to 242 more) per 1000 participants.\nAuthors' conclusions\nBased on the low to high certainty of evidence, several combinations of immune checkpoint inhibitors appear to be superior to single-agent targeted therapy in terms of PFS and OS, and with a favourable AE profile. Some single-agent targeted therapies demonstrated a similar or improved oncological outcome compared to others; minor differences were observed for AE within this group. The certainty of evidence was variable ranging from high to very low and all comparisons were based on single trials.\n\nGiven the provided abstract, please respond to the following query: \"3. Sunitinib versus pembrolizumab and axitinib (targeted agent versus immunotherapy + targeted agent)\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Sunitinib probably results in more progression and death but may slightly reduce serious unwanted effects compared to pembrolizumab and axitinib." } ]
query-laysum
23377
[ { "role": "user", "content": "Abstract: Background\nVitamin A deficiency (VAD) is a major public health problem in low- and middle-income countries, affecting 190 million children under five years of age and leading to many adverse health consequences, including death. Based on prior evidence and a previous version of this review, the World Health Organization has continued to recommend vitamin A supplementation (VAS) for children aged 6 to 59 months. The last version of this review was published in 2017, and this is an updated version of that review.\nObjectives\nTo assess the effects of vitamin A supplementation (VAS) for preventing morbidity and mortality in children aged six months to five years.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, six other databases, and two trials registers up to March 2021. We also checked reference lists and contacted relevant organisations and researchers to identify additional studies.\nSelection criteria\nRandomised controlled trials (RCTs) and cluster-RCTs evaluating the effect of synthetic VAS in children aged six months to five years living in the community. We excluded studies involving children in hospital and children with disease or infection. We also excluded studies evaluating the effects of food fortification, consumption of vitamin A rich foods, or beta-carotene supplementation.\nData collection and analysis\nFor this update, two review authors independently assessed studies for inclusion resolving discrepancies by discussion. We performed meta-analyses for outcomes, including all-cause and cause-specific mortality, disease, vision, and side effects. We used the GRADE approach to assess the quality of the evidence.\nMain results\nThe updated search identified no new RCTs.\nWe identified 47 studies, involving approximately 1,223,856 children. Studies were set in 19 countries: 30 (63%) in Asia, 16 of these in India; 8 (17%) in Africa; 7 (15%) in Latin America, and 2 (4%) in Australia. About one-third of the studies were in urban/periurban settings, and half were in rural settings; the remaining studies did not clearly report settings. Most studies included equal numbers of girls and boys and lasted about one year. The mean age of the children was about 33 months. The included studies were at variable overall risk of bias; however, evidence for the primary outcome was at low risk of bias.\nA meta-analysis for all-cause mortality included 19 trials (1,202,382 children). At longest follow-up, there was a 12% observed reduction in the risk of all-cause mortality for VAS compared with control using a fixed-effect model (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.83 to 0.93; high-certainty evidence).\nNine trials reported mortality due to diarrhoea and showed a 12% overall reduction for VAS (RR 0.88, 95% CI 0.79 to 0.98; 1,098,538 children; high-certainty evidence). There was no evidence of a difference for VAS on mortality due to measles (RR 0.88, 95% CI 0.69 to 1.11; 6 studies, 1,088,261 children; low-certainty evidence), respiratory disease (RR 0.98, 95% CI 0.86 to 1.12; 9 studies, 1,098,538 children; low-certainty evidence), and meningitis. 
VAS reduced the incidence of diarrhoea (RR 0.85, 95% CI 0.82 to 0.87; 15 studies, 77,946 children; low-certainty evidence), measles (RR 0.50, 95% CI 0.37 to 0.67; 6 studies, 19,566 children; moderate-certainty evidence), Bitot's spots (RR 0.42, 95% CI 0.33 to 0.53; 5 studies, 1,063,278 children; moderate-certainty evidence), night blindness (RR 0.32, 95% CI 0.21 to 0.50; 2 studies, 22,972 children; moderate-certainty evidence), and VAD (RR 0.71, 95% CI 0.65 to 0.78; 4 studies, 2262 children, moderate-certainty evidence). However, there was no evidence of a difference on incidence of respiratory disease (RR 0.99, 95% CI 0.92 to 1.06; 11 studies, 27,540 children; low-certainty evidence) or hospitalisations due to diarrhoea or pneumonia. There was an increased risk of vomiting within the first 48 hours of VAS (RR 1.97, 95% CI 1.44 to 2.69; 4 studies, 10,541 children; moderate-certainty evidence).\nAuthors' conclusions\nThis update identified no new eligible studies and the conclusions remain the same. VAS is associated with a clinically meaningful reduction in morbidity and mortality in children. Further placebo-controlled trials of VAS in children between six months and five years of age would not change the conclusions of this review, although studies that compare different doses and delivery mechanisms are needed. In populations with documented VAD, it would be unethical to conduct placebo-controlled trials.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The update identified no new studies. The review includes 47 RCTs representing 1,223,856 children. Studies took place in 19 countries: 30 (63%) in Asia, 16 of which were in India; 8 (17%) in Africa; 7 (15%) in Latin America, and 2 (4%) in Australia. About one-third of the studies were in urban/periurban settings, and half were in rural settings; the remaining studies did not clearly report settings. The average age of the children was about 33 months. Most studies included equal numbers of boys and girls and lasted about one year. The quality of the included studies was variable; however, it was unlikely that death rates were influenced by potential errors in the conduct of the studies." } ]
query-laysum
23378
[ { "role": "user", "content": "Abstract: Background\nNicotine replacement therapy (NRT) aims to replace nicotine from cigarettes. This helps to reduce cravings and withdrawal symptoms, and ease the transition from cigarette smoking to complete abstinence. Although there is high-certainty evidence that NRT is effective for achieving long-term smoking abstinence, it is unclear whether different forms, doses, durations of treatment or timing of use impacts its effects.\nObjectives\nTo determine the effectiveness and safety of different forms, deliveries, doses, durations and schedules of NRT, for achieving long-term smoking cessation.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group trials register for papers mentioning NRT in the title, abstract or keywords, most recently in April 2022.\nSelection criteria\nWe included randomised trials in people motivated to quit, comparing one type of NRT use with another. We excluded studies that did not assess cessation as an outcome, with follow-up of fewer than six months, and with additional intervention components not matched between arms. Separate reviews cover studies comparing NRT to control, or to other pharmacotherapies.\nData collection and analysis\nWe followed standard Cochrane methods. We measured smoking abstinence after at least six months, using the most rigorous definition available. We extracted data on cardiac adverse events (AEs), serious adverse events (SAEs) and study withdrawals due to treatment.\nMain results\nWe identified 68 completed studies with 43,327 participants, five of which are new to this update. Most completed studies recruited adults either from the community or from healthcare clinics. We judged 28 of the 68 studies to be at high risk of bias. Restricting the analysis only to those studies at low or unclear risk of bias did not significantly alter results for any comparisons apart from the preloading comparison, which tested the effect of using NRT prior to quit day whilst still smoking.\nThere is high-certainty evidence that combination NRT (fast-acting form plus patch) results in higher long-term quit rates than single form (risk ratio (RR) 1.27, 95% confidence interval (CI) 1.17 to 1.37; I2 = 12%; 16 studies, 12,169 participants). Moderate-certainty evidence, limited by imprecision, indicates that 42/44 mg patches are as effective as 21/22 mg (24-hour) patches (RR 1.09, 95% CI 0.93 to 1.29; I2 = 38%; 5 studies, 1655 participants), and that 21 mg patches are more effective than 14 mg (24-hour) patches (RR 1.48, 95% CI 1.06 to 2.08; 1 study, 537 participants). Moderate-certainty evidence, again limited by imprecision, also suggests a benefit of 25 mg over 15 mg (16-hour) patches, but the lower limit of the CI encompassed no difference (RR 1.19, 95% CI 1.00 to 1.41; I2 = 0%; 3 studies, 3446 participants).\nNine studies tested the effect of using NRT prior to quit day (preloading) in comparison to using it from quit day onward. 
There was moderate-certainty evidence, limited by risk of bias, of a favourable effect of preloading on abstinence (RR 1.25, 95% CI 1.08 to 1.44; I2 = 0%; 9 studies, 4395 participants).\nHigh-certainty evidence from eight studies suggests that using either a form of fast-acting NRT or a nicotine patch results in similar long-term quit rates (RR 0.90, 95% CI 0.77 to 1.05; I2 = 0%; 8 studies, 3319 participants).\nWe found no clear evidence of an effect of duration of nicotine patch use (low-certainty evidence); duration of combination NRT use (low- and very low-certainty evidence); or fast-acting NRT type (very low-certainty evidence).\nCardiac AEs, SAEs and withdrawals due to treatment were all measured variably and infrequently across studies, resulting in low- or very low-certainty evidence for all comparisons. Most comparisons found no clear evidence of an effect on these outcomes, and rates were low overall. More withdrawals due to treatment were reported in people using nasal spray compared to patches in one study (RR 3.47, 95% CI 1.15 to 10.46; 1 study, 922 participants; very low-certainty evidence) and in people using 42/44 mg patches in comparison to 21/22 mg patches across two studies (RR 4.99, 95% CI 1.60 to 15.50; I2 = 0%; 2 studies, 544 participants; low-certainty evidence).\nAuthors' conclusions\nThere is high-certainty evidence that using combination NRT versus single-form NRT and 4 mg versus 2 mg nicotine gum can result in an increase in the chances of successfully stopping smoking. Due to imprecision, evidence was of moderate certainty for patch dose comparisons. There is some indication that the lower-dose nicotine patches and gum may be less effective than higher-dose products. Using a fast-acting form of NRT, such as gum or lozenge, resulted in similar quit rates to nicotine patches. There is moderate-certainty evidence that using NRT before quitting may improve quit rates versus using it from quit date only; however, further research is needed to ensure the robustness of this finding. Evidence for the comparative safety and tolerability of different types of NRT use is limited. New studies should ensure that AEs, SAEs and withdrawals due to treatment are reported.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "NRT can be taken in many different forms, in different doses and for varying amounts of time. Some people start using NRT before they quit, while other people wait until quit day. This review looks at the different forms, doses, durations and schedules of NRT used to help people quit smoking, so we can better understand which of these work best to help people quit smoking for six months or longer. We also wanted to find out if any of these treatments were associated with cardiac (heart-related) or serious unwanted effects, and if anyone stopped participating in a study due to the NRT treatment they were advised to use." } ]
query-laysum
23379
[ { "role": "user", "content": "Abstract: Background\nWork-related musculoskeletal disorders are a group of musculoskeletal disorders that comprise one of the most common disorders related to occupational sick leave worldwide. Musculoskeletal disorders accounted for 21% to 28% of work absenteeism days in 2017/2018 in the Netherlands, Germany and the UK. There are several interventions that may be effective in tackling the high prevalence of work-related musculoskeletal disorders among workers, such as physical, cognitive and organisational interventions. In this review, we will focus on work breaks as a measure of primary prevention, which are a type of organisational intervention.\nObjectives\nTo compare the effectiveness of different work-break schedules for preventing work-related musculoskeletal symptoms and disorders in healthy workers, when compared to conventional or alternate work-break schedules.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, PsycINFO, SCOPUS, Web of Science, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform, to April/May 2019. In addition, we searched references of the included studies and of relevant literature reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) of work-break interventions for preventing work-related musculoskeletal symptoms and disorders among workers. The studies were eligible for inclusion when intervening on work-break frequency, duration and/or type, compared to conventional or an alternate work-break intervention. We included only those studies in which the investigated population included healthy, adult workers, who were free of musculoskeletal complaints during study enrolment, without restrictions to sex or occupation. The primary outcomes were newly diagnosed musculoskeletal disorders, self-reported musculoskeletal pain, discomfort or fatigue, and productivity or work performance. We considered workload changes as secondary outcomes.\nData collection and analysis\nTwo review authors independently screened titles, abstracts and full texts for study eligibility, extracted data and assessed risk of bias. We contacted authors for additional study data where required. We performed meta-analyses, where possible, and we assessed the overall quality of the evidence for each outcome of each comparison using the five GRADE considerations.\nMain results\nWe included six studies (373 workers), four parallel RCTs, one cross-over RCT, and one combined parallel plus cross-over RCT. At least 295 of the employees were female and at least 39 male; for the remaining 39 employees, the sex was not specified in the study trial. The studies investigated different work-break frequencies (five studies) and different work-break types (two studies). None of the studies investigated different work-break durations. We judged all studies to have a high risk of bias. The quality of the evidence for the primary outcomes of self-reported musculoskeletal pain, discomfort and fatigue was low; the quality of the evidence for the primary outcomes of productivity and work performance was very low. The studies were executed in Europe or Northern America, with none from low- to middle-income countries. 
One study could not be included in the data analyses, because no detailed results have been reported.\nChanges in the frequency of work breaks\nThere is low-quality evidence that additional work breaks may not have a considerable effect on musculoskeletal pain, discomfort or fatigue, when compared with no additional work breaks (standardised mean difference (SMD) -0.08; 95% CI -0.35 to 0.18; three studies; 225 participants). Additional breaks may not have a positive effect on productivity or work performance, when compared with no additional work breaks (SMD -0.07; 95% CI -0.33 to 0.19; three studies; 225 participants; very low-quality evidence).\nWe found low-quality evidence that additional work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue (MD 1.80 on a 100-mm VAS scale; 95% CI -41.07 to 64.37; one study; 15 participants), when compared to work breaks as needed (i.e. microbreaks taken at own discretion). There is very low-quality evidence that additional work breaks may have a positive effect on productivity or work performance, when compared to work breaks as needed (MD 542.5 number of words typed per 3-hour recording session; 95% CI 177.22 to 907.78; one study; 15 participants).\nAdditional higher frequency work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue (MD 11.65 on a 100-mm VAS scale; 95% CI -41.07 to 64.37; one study; 10 participants; low-quality evidence), when compared to additional lower frequency work breaks. We found very low-quality evidence that additional higher frequency work breaks may not have a considerable effect on productivity or work performance (MD -83.00 number of words typed per 3-hour recording session; 95% CI -305.27 to 139.27; one study; 10 participants), when compared to additional lower frequency work breaks.\nChanges in the duration of work breaks\nNo trials were identified that assessed the effect of different durations of work breaks.\nChanges in the type of work break\nWe found low-quality evidence that active breaks may not have a considerable positive effect on participant-reported musculoskeletal pain, discomfort and fatigue (MD -0.17 on a 1-7 NRS scale; 95% CI -0.71 to 0.37; one study; 153 participants), when compared to passive work breaks.\nRelaxation work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue, when compared to physical work breaks (MD 0.20 on a 1-7 NRS scale; 95% CI -0.43 to 0.82; one study; 97 participants; low-quality evidence).\nAuthors' conclusions\nWe found low-quality evidence that different work-break frequencies may have no effect on participant-reported musculoskeletal pain, discomfort and fatigue. For productivity and work performance, evidence was of very low-quality that different work-break frequencies may have a positive effect. For different types of break, there may be no effect on participant-reported musculoskeletal pain, discomfort and fatigue according to low-quality evidence. Further high-quality studies are needed to determine the effectiveness of frequency, duration and type of work-break interventions among workers, if possible, with much higher sample sizes than the studies included in the current review. 
Furthermore, work-break interventions should be reconsidered, taking into account worker populations other than office workers, and taking into account the possibility of combining work-break interventions with other interventions such as ergonomic training or counselling, which may possibly have an effect on musculoskeletal outcomes and work performance.\n\nGiven the provided abstract, please respond to the following query: \"Aim\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We wanted to find out whether different frequencies, durations and types of work breaks can prevent work-related musculoskeletal symptoms and disorders among healthy workers. We considered workers as healthy when they were free of musculoskeletal complaints at study enrolment." } ]
query-laysum
23380
[ { "role": "user", "content": "Abstract: Background\nApproximately 800 million women and children have anaemia, a condition thought to cause almost 9% of the global burden of years lived with disability. Around half this burden could be amenable to interventions that involve the provision of iron. Maize (corn) is one of the world's most important cereal grains and is cultivated across most of the globe. Several programmes around the world have fortified maize flour and other maize-derived foodstuffs with iron and other vitamins and minerals to combat anaemia and iron deficiency.\nObjectives\nTo assess the effects of iron fortification of maize flour, corn meal and fortified maize flour products for anaemia and iron status in the general population.\nSearch methods\nWe searched the following international and regional sources in December 2017 and January 2018: Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; MEDLINE (R) In Process; Embase; Web of Science (both the Social Science Citation Index and the Science Citation Index); CINAHL Ebsco; POPLINE; AGRICOLA (agricola.nal.usda.gov); BIOSIS (ISI); Bibliomap and TRoPHI; IBECS; Scielo; Global Index Medicus - AFRO (includes African Index Medicus); EMRO (includes Index Medicus for the Eastern Mediterranean Region); LILACS; PAHO (Pan American Health Library); WHOLIS (WHO Library); WPRO (includes Western Pacific Region Index Medicus); IMSEAR, Index Medicus for the South-East Asian Region; IndMED, Indian medical journals; and the Native Health Research Database. We searched clinicaltrials.gov and the International Clinical Trials Registry Platform (ICTRP) for any ongoing or planned studies on 17 January 2018 and contacted authors of such studies to obtain further information or eligible data if available.\nFor assistance in identifying ongoing or unpublished studies, we also contacted relevant international organisations and agencies working in food fortification on 9 August 2016.\nSelection criteria\nWe included cluster- or individually randomised controlled trials and observational studies. Interventions included (central/industrial) fortification of maize flour or corn meal with iron alone or with other vitamins and minerals and provided to individuals over 2 years of age (including pregnant and lactating women) from any country.\nData collection and analysis\nTwo review authors independently assessed the eligibility of studies for inclusion, extracted data from included studies and assessed the risk of bias of the included studies. Trial designs with a comparison group were included to assess the effects of interventions. Trial designs without a control or comparison group (uncontrolled before-and-after studies) were included for completeness but were not considered in assessments of the overall effectiveness of interventions or used to draw conclusions regarding the effects of interventions in the review.\nMain results\nOur search yielded 4529 records. After initial screening of titles and abstracts, we reviewed the full text of 75 studies (80 records). We included 5 studies and excluded 70. All the included studies assessed the effects of providing maize products fortified with iron plus other vitamins and minerals versus unfortified maize flour. No studies compared this intervention to no intervention or looked at the relative effect of flour and products fortified with iron alone (without other vitamins and minerals). 
Three were randomised trials involving 2610 participants, and two were uncontrolled before-and-after studies involving 849 participants.\nOnly three studies contributed data for the meta-analysis and included children aged 2 to 11.9 years and women. Compared to unfortified maize flour, it is uncertain whether fortifying maize flour or corn meal with iron and other vitamins and minerals has any effect on anaemia (risk ratio (RR) 0.90, 95% confidence interval (CI) 0.58 to 1.40; 2 studies; 1027 participants; very low-certainty evidence), or on the risk of iron deficiency (RR 0.75, 95% CI 0.49 to 1.15; 2 studies; 1102 participants; very low-certainty evidence), haemoglobin concentration (mean difference (MD) 1.25 g/L, 95% CI −2.36 to 4.86 g/L; 3 studies; 1144 participants; very low-certainty evidence) or ferritin concentrations (MD 0.48 µg/L, 95% CI −0.37 to 1.33 µg/L; 1 study; 584 participants; very low-certainty evidence).\nNone of the studies reported on any adverse effects. We judged the certainty of the evidence to be very low based on GRADE, so we are uncertain whether the results reflect the true effect of the intervention. We downgraded evidence due to high risk of selection bias and unclear risk of performance bias in one of two included studies, high heterogeneity and wide CIs crossing the line of no effect for anaemia prevalence and haemoglobin concentration.\nAuthors' conclusions\nIt is uncertain whether fortifying maize flour with iron and other vitamins and minerals reduces the risk of anaemia or iron deficiency in children aged over 2 years or in adults. Moreover, the evidence is too uncertain to conclude whether iron-fortified maize flour, corn meal or fortified maize flour products have any effect on reducing the risk of anaemia or on improving haemoglobin concentration in the population.\nWe are uncertain whether fortification of maize flour with iron reduces anaemia among the general population, as the certainty of the evidence is very low. No studies reported on any adverse effects.\nPublic organisations funded three of the five included studies, while the private sector gave grants to universities to perform the other two. The presence of industry funding for some of these trials did not appear to positively influence results from these studies.\nThe reduced number of studies, including only two age groups (children and women of reproductive age), as well as the limited number of comparisons (only one out of the four planned) constitute the main limitations of this review.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Approximately 496 million non-pregnant women, 32 million pregnant women, and 273 million children were thought to be anaemic as of 2011. Iron deficiency is considered to be the single most prevalent nutrient deficiency worldwide, but at least half this burden is considered responsive to interventions that involve providing people with iron. Fortification means adding vitamins and minerals to foods to increase their nutritional value. In public health, fortifying staple foods is considered one way of reducing micronutrient deficiencies without changing usual and culturally acceptable diets.\nMaize (corn) is one of the world's most important cereal grains. 
In sub-Saharan Africa, some parts of Southeast Asia and Latin America, where iron deficiency is endemic, maize is a dietary staple for more than 200 million people. Fortification of maize flour with iron (and in some cases, other nutrients) is mandatory in Brazil, Costa Rica, El Salvador, Kenya, Mexico, Nigeria, Rwanda, South Africa, Tanzania, Uganda, the USA and Venezuela." } ]
query-laysum
23381
[ { "role": "user", "content": "Abstract: Background\nTeachers and school staff should be competent in managing asthma in schools. Demonstrated low levels of asthma knowledge mean that staff may not know how best to protect a child with asthma in their care, or may fail to take appropriate action in the event of a serious attack. Education about asthma could help to improve this knowledge and lead to better asthma outcomes for children.\nObjectives\nTo assess the effectiveness and safety of asthma education programmes for school staff, and to identify content and attributes underpinning them.\nSearch methods\nWe conducted the most recent searches on 29 November 2016.\nSelection criteria\nWe included randomised controlled trials comparing an intervention to educate school staff about asthma versus a control group. We included studies reported as full text, those published as abstract only and unpublished data.\nData collection and analysis\nAt least two review authors screened the searches, extracted outcome data and intervention characteristics from included studies and assessed risk of bias. Primary outcomes for the quantitative synthesis were emergency department (ED) or hospital visits, mortality and asthma control; we graded the main results and presented evidence in a 'Summary of findings' table. We planned a qualitative synthesis of intervention characteristics, but study authors were unable to provide the necessary information.\nWe analysed dichotomous data as odds ratios, and continuous data as mean differences or standardised mean differences, all with a random-effects model. We assessed clinical, methodological and statistical heterogeneity when performing meta-analyses, and we narratively described skewed data.\nMain results\nFive cluster-RCTs of 111 schools met the review eligibility criteria. Investigators measured outcomes in participating staff and often in children or parents, most often at between 1 and 12 months.\nAll interventions were educational programmes but duration, content and delivery varied; some involved elements of training for pupils or primary care providers. We noted risk of selection, performance, detection and attrition biases, although to a differing extent across studies and outcomes.\nQuanitative and qualitative analyses were limited. Only one study reported visits to the ED or hospital and provided data that were too skewed for analysis. No studies reported any deaths or adverse events. Studies did not report asthma control consistently, but results showed no difference between groups on the paediatric asthma quality of life questionnaire (mean difference (MD) 0.14, 95% confidence interval (CI) -0.03 to 0.31; 1005 participants; we downgraded the quality of evidence to low for risk of bias and indirectness). Data for symptom days, night-time awakenings, restricted activities of daily living and school absences were skewed or could not be analysed; some mean scores were better in the trained group, but most differences between groups were small and did not persist to 24 months.\nSchools that received asthma education were more adherent to asthma policies, and staff were better prepared; more schools that had received staff asthma training had written asthma policies compared with control schools, more intervention schools showed improvement in measures taken to prevent or manage exercise-induced asthma attacks and more staff at intervention schools reported that they felt able to administer salbutamol via a spacer. 
However, the quality of the evidence was low; results show imbalances at baseline, and confidence in the evidence was limited by risk of bias and imprecision. Staff knowledge was higher in groups that had received asthma education, although results were inconsistent and difficult to interpret owing to differences between scales (low quality).\nAvailable information about the interventions was insufficient for review authors to conduct a meaningful qualitative synthesis of the content that led to a successful intervention, or of the resources required to replicate results accurately.\nAuthors' conclusions\nAsthma education for school staff increases asthma knowledge and preparedness, but studies vary and all available evidence is of low quality. Studies have not yet captured whether this improvement in knowledge has led to appreciable benefits over the short term or the longer term for the safety and health of children with asthma in school. Randomised evidence does not contribute to our knowledge of content or attributes of interventions that lead to the best outcomes, or of resources required for successful implementation.\nComplete reporting of the content and resources of educational interventions is essential for assessment of their effectiveness and feasibility for implementation. This applies to both randomised and non-randomised studies, although the latter may be better placed to observe important clinical outcomes such as exacerbations and mortality in the longer term.\n\nGiven the provided abstract, please respond to the following query: \"Background to the question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Teachers and school staff need to know how to manage asthma in schools. If they have little knowledge of asthma, staff may not know how best to protect a child with asthma, or may fail to act in the event of a serious attack. We aimed to assess the benefits and possible harms of asthma education for school staff, and to explore how this education can best be delivered." } ]
query-laysum
23382
[ { "role": "user", "content": "Abstract: Background\nCerebral palsy (CP) is a heterogeneous group of non-progressive disorders of posture or movement, caused by a lesion of the developing brain. Osteoporosis is common in children with cerebral palsy, particularly in children with reduced gross motor function, and leads to an increased risk of fractures. Gross motor function in children with CP can be categorised using a tool called the Gross Motor Function Classification System (GMFCS). Bisphosphonate increases bone mineral density (BMD) and reduces fracture rates. Bisphosphonate is used widely in the treatment of adult osteoporosis. However, the use of bisphosphonate in children with CP remains controversial, due to a paucity of evidence and a lack of recent trials examining the efficacy and safety of bisphosphonate use in this population.\nObjectives\nTo examine the efficacy and safety of bisphosphonate therapy in the treatment of low BMD or secondary osteoporosis (or both) in children with cerebral palsy (GMFCS Levels III to V) who are under 18 years of age.\nSearch methods\nIn September 2020, we searched CENTRAL, MEDLINE, Embase, six other databases, and two trial registers for relevant studies. We also searched the reference lists of relevant systematic reviews, trials, and case studies identified by the search, and contacted the authors of relevant studies in an attempt to identify unpublished literature.\nSelection criteria\nAll relevant randomised controlled trials (RCTs), and quasi-RCTs, comparing at least one bisphosphonate (given at any dose, orally or intravenously) with placebo or no drug, for the treatment of low BMD or osteoporosis in children up to 18 years old, with cerebral palsy (GMFCS Levels III to V).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We were unable to conduct any meta-analyses due to insufficient data, and therefore provide a narrative assessment of the results.\nMain results\nWe found two relevant RCTs (34 participants). Both studies included participants with non-ambulatory CP or CP and osteoporosis. Participants in both studies were similar in severity of CP, age distribution, and sex distribution. The two trials used different bisphosphonate medications and different intervention durations, but further comparison of the interventions was not possible due to a lack of published data from one trial.\nOne trial received funding and support from research, academic, and hospital foundations, with pharmaceutical companies providing components of the calcium and vitamin supplement; the other trial did not report sources of funding. We judged one study at an overall high risk of bias; the other as overall unclear risk of bias.\nPrimary outcome. Compared to placebo or no treatment, both studies provided very low certainty evidence of improved BMD at least four months post-intervention in children treated with bisphosphonate. Only one study (12 participants) provided sufficient detail to assess a measure of the effect, and reported an improvement at six months post-intervention in lumbar spine z-score (mean difference (MD) 18%, 95% confidence interval (CI) 6.57 to 29.43; very low certainty evidence).\nSecondary outcomes. 
Very low certainty evidence from one study found that bisphosphonate reduced serum N-telopeptides (NTX) more than placebo; the other study reported that both bisphosphonate plus alfacalcidol and alfacalcidol alone reduced NTX, but did not compare groups.\nOne study reported inconclusive results between groups for serum bone-specific alkaline phosphatase (BAP). The other study reported that both bisphosphonate plus alfacalcidol and alfacalcidol alone reduced BAP, but did not compare groups.\nNeither study reported data for the effect of bisphosphonate treatment on changes in volumetric BMD in the distal radius or tibia, changes in fracture frequency, bone pain, or quality of life. One study reported that two participants had febrile events noted during their first dosing schedule, but no further adverse events were reported in either relevant study.\nAuthors' conclusions\nBased on the available evidence, there is very low certainty evidence that bisphosphonate treatment may improve bone health in children with cerebral palsy. We could only include one study with 14 participants in the assessment of the effect size; therefore, the precision of the effect estimate is low. We could only evaluate one planned primary outcome, as there was insufficient detail reported in the relevant studies.\nFurther research from RCTs on the effect and safety of bisphosphonate to improve bone health in children with cerebral palsy is required. These studies should clarify the optimal standard treatment regarding weight-bearing exercises, vitamin D and calcium supplementation, and should include fracture frequency as a primary outcome.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We examined the available evidence for the use of bisphosphonate treatment in children with CP up to September 2020. We found two trials that compared the use of bisphosphonate treatment with placebo or no treatment to improve bone strength in children with CP. These two trials included a total of 34 participants with CP of similar levels of severity. Participants in both studies were 16 years of age or under, and there were equal numbers of boys and girls in each trial. Both trials included children who were unable to walk. The two trials used different types of bisphosphonate treatments, given for six months in one study and 12 months in the other. Further comparison of these treatments was not possible due to a lack of published information in one trial.\nOne trial was supported by research, academic and hospital foundations, with pharmaceutical companies donating components of the calcium and vitamin supplements. The other study did not report their funding sources." } ]
query-laysum
23383
[ { "role": "user", "content": "Abstract: Background\nPressure ulcers (also known as pressure sores, decubitus ulcers or bedsores) are localised injuries to the skin or underlying tissue, or both. Pressure ulcers are a disabling consequence of immobility. Electrical stimulation (ES) is widely used for the treatment of pressure ulcers. However, it is not clear whether ES is effective.\nObjectives\nTo determine the effects (benefits and harms) of electrical stimulation (ES) for treating pressure ulcers.\nSearch methods\nIn July 2019 we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. We did not impose any restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included published and unpublished randomised controlled trials (RCTs) comparing ES (plus standard care) with sham/no ES (plus standard care) for treating pressure ulcers.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, extracted data, and assessed risk of bias. We assessed the certainty of evidence using GRADE.\nMain results\nWe included 20 studies with 913 participants. The mean age of participants ranged from 26 to 83 years; 50% were male. ES was administered for a median (interquartile range (IQR)) duration of five (4 to 8) hours per week. The chronicity of the pressure ulcers was variable, ranging from a mean of four days to more than 12 months. Most of the pressure ulcers were on the sacral and coccygeal region (30%), and most were stage III (45%). Half the studies were at risk of performance and detection bias, and 25% were at risk of attrition and selective reporting bias. Overall, the GRADE assessment of the certainty of evidence for outcomes was moderate to very low. Nineteen studies were conducted in four different settings, including rehabilitation and geriatric hospitals, medical centres, a residential care centre, and a community-based centre.\nES probably increases the proportion of pressure ulcers healed compared with no ES (risk ratio (RR) 1.99, 95% confidence interval (CI) 1.39 to 2.85; I2 = 0%; 11 studies, 501 participants (512 pressure ulcers)). We downgraded the evidence to moderate certainty due to risk of bias.\nIt is uncertain whether ES decreases pressure ulcer severity on a composite measure compared with no ES (mean difference (MD) -2.43, 95% CI -6.14 to 1.28; 1 study, 15 participants (15 pressure ulcers) and whether ES decreases the surface area of pressure ulcers when compared with no ES (12 studies; 494 participants (505 pressure ulcers)). Data for the surface area of pressure ulcers were not pooled because there was considerable statistical heterogeneity between studies (I2 = 96%) but the point estimates for the MD of each study ranged from -0.90 cm2 to 10.37 cm2. We downgraded the evidence to very low certainty due to risk of bias, inconsistency and imprecision.\nIt is uncertain whether ES decreases the time to complete healing of pressure ulcers compared with no ES (hazard ratio (HR) 1.06, 95% CI 0.47 to 2.41; I2 = 0%; 2 studies, 55 participants (55 pressure ulcers)). 
We downgraded the evidence to very low certainty due to risk of bias, indirectness and imprecision.\nES may be associated with an excess of, or difference in, adverse events (13 studies; 586 participants (602 pressure ulcers)). Data for adverse events were not pooled but the types of reported adverse events included skin redness, itchy skin, dizziness and delusions, deterioration of the pressure ulcer, limb amputation, and occasionally death. We downgraded the evidence to low certainty due to risk of selection and attrition bias and imprecision.\nES probably increases the rate of pressure ulcer healing compared with no ES (MD 4.59% per week, 95% CI 3.49 to 5.69; I2 = 25%; 12 studies, 561 participants (613 pressure ulcers)). We downgraded the evidence to moderate certainty due to risk of bias. We did not find any studies that looked at quality of life, depression, or consumers' perception of treatment effectiveness.\nAuthors' conclusions\nES probably increases the proportion of pressure ulcers healed and the rate of pressure ulcer healing (moderate certainty evidence), but its effect on time to complete healing is uncertain compared with no ES (very low certainty evidence). It is also uncertain whether ES decreases the surface area of pressure ulcers. The evidence to date is insufficient to support the widespread use of ES for pressure ulcers outside of research. Future research needs to focus on large-scale trials to determine the effect of ES on all key outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The aim of this review was to find out whether electrical stimulation (ES; an electrical current applied to the skin) can help heal pressure ulcers. We collected and analysed all relevant studies (randomised controlled trials) to answer this question and found 20 relevant studies." } ]
query-laysum
23384
[ { "role": "user", "content": "Abstract: Background\nCardiovascular diseases (CVD) are a major cause of disability and the leading cause of death worldwide. To reduce mortality and morbidity, prevention strategies such as following an optimal diet are crucial. In recent years, low-gluten and gluten-free diets have gained strong popularity in the general population. However, study results on the benefits of a gluten-reduced or gluten-free diet are conflicting, and it is unclear whether a gluten-reduced diet has an effect on the primary prevention of CVD.\nObjectives\nTo determine the effects of a gluten-reduced or gluten-free diet for the primary prevention of CVD in the general population.\nSearch methods\nWe systematically searched CENTRAL, MEDLINE, Embase, CINAHL and Web of Science up to June 2021 without language restrictions or restrictions regarding publication status. Additionally, we searched ClinicalTrials.gov for ongoing or unpublished trials and checked reference lists of included studies as well as relevant systematic reviews for additional studies.\nSelection criteria\nWe planned to include randomised controlled trials (RCTs) and non-randomised studies of interventions (NRSIs), such as prospective cohort studies, comparing a low-gluten or gluten-free diet or providing advice to decrease gluten consumption with no intervention, diet as usual, or a reference gluten-intake category. The population of interest comprised adults from the general population, including those at increased risk for CVD (primary prevention). We excluded cluster-RCTs, case-control studies, studies focusing on participants with a previous myocardial infarction and/or stroke, participants who have undergone a revascularisation procedure as well as participants with angina or angiographically-defined coronary heart disease, with a confirmed diagnosis of coeliac disease or with type 1 diabetes.\nData collection and analysis\nTwo review authors independently assessed eligibility of studies in a two-step procedure following Cochrane methods. Risk of bias (RoB) was assessed using the Cochrane risk of bias tool (RoB2) and the 'Risk Of Bias In Non-randomised Studies — of Interventions' (ROBINS-I) tool, and the certainty of evidence was rated using the GRADE approach.\nMain results\nOne RCT and three NRSIs (with an observational design reporting data on four cohorts: Health Professionals Follow-up Study (HPFS), Nurses' Health Study (NHS-I), NHS-II, UK Biobank) met the inclusion criteria. The RCT was conducted in Italy (60 participants, mean age 41 ± 12.1 years), two NRSIs (three cohorts, HPFS, NHS-I, NHS II) were conducted across the USA (269,282 health professionals aged 24 to 75 years) and one NRSI (Biobank cohort) was conducted across the UK (159,265 participants aged 49 to 62 years). Two NRSIs reported that the lowest gluten intake ranged between 0.0 g/day and 3.4 g/day and the highest gluten intake between 6.2 g/day and 38.4 g/day. The NRSI reporting data from the UK Biobank referred to a median gluten intake of 8.5 g/day with an interquartile range from 5.1 g/day to 12.4 g/day without providing low- and high-intake categories.\nCardiovascular mortality\nFrom a total of 269,282 participants, 3364 (1.3%) died due to cardiovascular events during 26 years of follow-up. 
Low-certainty evidence may show no association between gluten intake and cardiovascular mortality (adjusted hazard ratio (HR) for low- versus high-gluten intake 1.00, 95% confidence interval (CI) 0.95 to 1.06; 2 NRSIs (3 cohorts)).\nAll-cause mortality\nFrom a total of 159,265 participants, 6259 (3.9%) died during 11.1 years of follow-up. Very low-certainty evidence suggested that it is unclear whether gluten intake is associated with all-cause mortality (adjusted HR for low vs high gluten intake 1.00, 95% CI 0.99 to 1.01; 1 NRSI (1 cohort)).\nMyocardial infarction\nFrom a total of 110,017 participants, 4243 (3.9%) participants developed non-fatal myocardial infarction within 26 years. Low-certainty evidence suggested that gluten intake may not be associated with the development of non-fatal myocardial infarction (adjusted HR for low versus high gluten intake 0.99, 95% CI 0.89 to 1.10; 1 NRSI (2 cohorts)). Lowering gluten intake by 5 g/day also showed no association on the primary prevention of non-fatal and fatal myocardial infarction (composite endpoint) in linear dose-response meta-analyses (adjusted HR 1.02, 95% CI 0.98 to 1.06; 1 NRSI (2 cohorts)).\nCoronary risk factors\nType 2 diabetes\nFrom a total of 202,114 participants, 15,947 (8.0%) developed type 2 diabetes after a follow-up between 22 and 28 years. There was low-certainty evidence that a lower compared with a higher gluten intake may be associated with a slightly increased risk to develop type 2 diabetes (adjusted HR 1.14, 95% CI 1.07 to 1.22; 1 NRSI (3 cohorts)). Furthermore, lowering gluten intake by 5 g/day may be associated with a slightly increased risk to develop type 2 diabetes in linear dose-response meta-analyses (adjusted HR 1.12, 95% CI 1.08 to 1.16; 1 NRSI (3 cohorts)).\nBlood pressure, low-density lipoprotein level, body mass index (BMI)\nAfter six months of follow-up, very low-certainty evidence suggested that it is unclear whether gluten intake affects systolic blood pressure (mean difference (MD) -6.9, 95% CI -17.1 to 3.3 mmHg). There was also no difference between the interventions for diastolic blood pressure (MD -0.8, 95% CI -5.9 to 4.3 mmHg), low-density lipoprotein levels (MD -0.1, 95% CI -0.5 to 0.3 mmol/L) and BMI (MD -0.1, 95% CI -3.3 to 3.1 kg/m²).\nNo study reported data on adverse events or on other outcomes. Funding sources did not appear to have distorted the results in any of the studies.\nAuthors' conclusions\nVery low-certainty evidence suggested that it is unclear whether gluten intake is associated with all-cause mortality. Our findings also indicate that low-certainty evidence may show little or no association between gluten intake and cardiovascular mortality and non-fatal myocardial infarction. Low-certainty evidence suggested that a lower compared with a higher gluten intake may be associated with a slightly increased risk to develop type 2 diabetes — a major cardiovascular risk factor. For other cardiovascular risk factors it is unclear whether there is a difference between a gluten-free and normal diet. Given the limited findings from this review predominantly based on observational studies, no recommendations for practice can be made.\n\nGiven the provided abstract, please respond to the following query: \"Background\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Cardiovascular diseases are disorders of the heart and blood vessels, including acute coronary and cerebrovascular events (such as myocardial infarction and stroke). Approximately 50% of disabilities resulting from cardiovascular disease are associated with a suboptimal diet. In particular, dietary gluten (a protein found in certain cereal grain) has been linked to a wide range of adverse health outcomes. For example, different gastrointestinal symptoms (such as malabsorption or diarrhoea) often occur in people affected by gluten-related diseases and the only effective treatment for these populations is an often life-long gluten-reduced or gluten-free diet. Gluten-reduced and gluten-free diets have also gained a strong popularity in the general population. The evidence on the benefits and harms of a gluten-reduced or gluten-free diet in the general population is, however, contradictory: While gluten avoidance may be associated with disease prevention, there are also concerns that a gluten-free or gluten-restricted diet may be suboptimal (due to a reduced consumption of whole-grain, a dietary key component)." } ]
query-laysum
23385
[ { "role": "user", "content": "Abstract: Background\nStress and a sedentary lifestyle are major determinants of cardiovascular disease (CVD). As tai chi involves exercise and can help in stress reduction, it may be effective in the primary prevention of CVD.\nObjectives\nTo determine the effectiveness of tai chi for the primary prevention of CVD.\nSearch methods\nWe searched the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 11, 2013); MEDLINE (Ovid) (1946 to November week 3, 2013); EMBASE Classic + EMBASE (Ovid) (1947 to 6 December 2013); Web of Science (Thomson Reuters) (1970 to 6 December 2013); PsycINFO (Ovid) (1806 to December week 1, 2013); Database of Abstracts of Reviews of Effects (DARE); Health Technology Assessment Database and Health Economics Evaluations Database (Issue 4, 2013). We also searched the Allied and complementary Medicine Database (AMED) and OpenGrey (inception to October 2012) and several Asian databases. We searched trial registers and reference lists of reviews for further studies. We applied no language restrictions.\nSelection criteria\nRandomised controlled trials of tai chi lasting at least three months involving healthy adults or adults at high risk of CVD. The comparison group was no intervention or minimal intervention. The outcomes of interest were CVD clinical events and CVD risk factors. We excluded trials involving multifactorial lifestyle interventions or focusing on weight loss to avoid confounding.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risk of bias.\nMain results\nWe identified 13 small trials (1520 participants randomised) and three ongoing trials. All studies had at least one domain with unclear risk of bias, and some studies were at high risk of bias for allocation concealment (one study) and selective reporting (two studies). Duration and style of tai chi differed between trials. Seven studies recruited 903 healthy participants, the other studies recruited people with borderline hypertension or hypertension, elderly people at high risk of falling, and people with hypertension with liver and kidney yin deficiency syndrome.\nNo studies reported on cardiovascular mortality, all-cause mortality or non-fatal events as most studies were short term (all studies had follow-up of one year or less). There was also considerable heterogeneity between studies, which meant that it was not possible to combine studies statistically for cardiovascular risk (I2 statistic for systolic blood pressure (SBP) was 96%, for diastolic blood pressure (DBP) 96%, for total cholesterol 96%, low-density lipoprotein-cholesterol (LDL-C) 95%, high-density lipoprotein-cholesterol (HDL-C) 98%, triglycerides 75%). Nine trials measured blood pressure, six individual trials found reductions in SBP (reductions ranged from -22.0 mmHg (95% confidence interval (CI) -26.3 to -17.7) to -11.5 mmHg (95% CI -21.5 to -1.46)), two trials found no clear evidence of a difference (however, CIs were wide and an increase or decrease in SBP cannot be ruled out), and one trial found an increase in SBP with tai chi (increase 5.2 mmHg, 95% CI 3.73 to 6.67). A similar pattern was seen for DBP: three trials found a reduction in DBP (reductions ranged from -12.2 mmHg (95% CI -15.8 to -8.7) to -4.43 mmHg (95% CI -7.14 to -1.72)) and three trials found no clear evidence of a difference, however again with wide CIs. 
Three trials reported lipid levels and two found reductions in total cholesterol, LDL-C and triglycerides (total cholesterol reductions ranged from -1.30 mmol/L (95% CI -1.57 to -1.03) to -0.50 mmol/L (95% CI -0.74 to -0.26): LDL-C reductions ranged from -0.76 mmol/L (95% CI -0.93 to -0.59) to -0.59 mmol/L (95% CI -0.80 to -0.38): triglyceride reductions ranged from -0.46 mmol/L (95% CI -0.62 to -0.30) to -0.37 mmol/L (95% CI -0.67 to -0.07)) and increased HDL-C with the intervention (HDL-C increases ranged from 0.61 mmol/L (95% CI 0.51 to 0.71) to 0.16 mmol/L (95% CI 0.02 to 0.30)), while the third study found no clear evidence of a difference between groups on lipid levels. Quality of life was measured in one trial: tai chi improved quality of life at three months. None of the included trials reported on adverse events, costs or occurrence of type 2 diabetes.\nAuthors' conclusions\nThere are currently no long-term trials examining tai chi for the primary prevention of CVD. Due to the limited evidence available currently no conclusions can be drawn as to the effectiveness of tai chi on CVD risk factors. There was some suggestion of beneficial effects of tai chi on CVD risk factors but this was not consistent across all studies. There was considerable heterogeneity between the studies included in this review and studies were small and at some risk of bias. Results of the ongoing trials will add to the evidence base but additional longer-term, high-quality trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We found 13 trials, none of them were large enough or of long enough duration to examine the effects of tai chi on reducing cardiovascular deaths or non-fatal endpoints. There were variations in the duration and style of tai chi and the follow-up of the interventions ranged from three to 12 months. Due to the small number of short-term studies and the variability between them, we were unable to determine conclusively whether or not tai chi was beneficial at reducing cardiovascular risk in healthy adults and adults at increased risk of CVD, although beneficial effects for CVD risk factors were seen in some studies. None of the included studies reported on adverse events. Longer-term, high-quality trials are needed in order to determine the effectiveness of tai chi for CVD prevention." } ]
query-laysum
23386
[ { "role": "user", "content": "Abstract: Background\nInadequate pain management after surgery increases the risk of postoperative complications and may predispose for chronic postsurgical pain. Perioperative ketamine may enhance conventional analgesics in the acute postoperative setting.\nObjectives\nTo evaluate the efficacy and safety of perioperative intravenous ketamine in adult patients when used for the treatment or prevention of acute pain following general anaesthesia.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase to July 2018 and three trials registers (metaRegister of controlled trials, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP)) together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nWe sought randomised, double-blind, controlled trials of adults undergoing surgery under general anaesthesia and being treated with perioperative intravenous ketamine. Studies compared ketamine with placebo, or compared ketamine plus a basic analgesic, such as morphine or non-steroidal anti-inflammatory drug (NSAID), with a basic analgesic alone.\nData collection and analysis\nTwo review authors searched for studies, extracted efficacy and adverse event data, examined issues of study quality and potential bias, and performed analyses. Primary outcomes were opioid consumption and pain intensity at rest and during movement at 24 and 48 hours postoperatively. Secondary outcomes were time to first analgesic request, assessment of postoperative hyperalgesia, central nervous system (CNS) adverse effects, and postoperative nausea and vomiting. We assessed the evidence using GRADE and created a 'Summary of findings' table.\nMain results\nWe included 130 studies with 8341 participants. Ketamine was given to 4588 participants and 3753 participants served as controls. Types of surgery included ear, nose or throat surgery, wisdom tooth extraction, thoracotomy, lumbar fusion surgery, microdiscectomy, hip joint replacement surgery, knee joint replacement surgery, anterior cruciate ligament repair, knee arthroscopy, mastectomy, haemorrhoidectomy, abdominal surgery, radical prostatectomy, thyroid surgery, elective caesarean section, and laparoscopic surgery. Racemic ketamine bolus doses were predominantly 0.25 mg to 1 mg, and infusions 2 to 5 µg/kg/minute; 10 studies used only S-ketamine and one only R-ketamine. Risk of bias was generally low or uncertain, except for study size; most had fewer than 50 participants per treatment arm, resulting in high heterogeneity, as expected, for most analyses. We did not stratify the main analysis by type of surgery or any other factor, such as dose or timing of ketamine administration, and used a non-stratified analysis.\nPerioperative intravenous ketamine reduced postoperative opioid consumption over 24 hours by 8 mg morphine equivalents (95% CI 6 to 9; 19% from 42 mg consumed by participants given placebo, moderate-quality evidence; 65 studies, 4004 participants). 
Over 48 hours, opioid consumption was 13 mg lower (95% CI 10 to 15; 19% from 67 mg with placebo, moderate-quality evidence; 37 studies, 2449 participants).\nPerioperative intravenous ketamine reduced pain at rest at 24 hours by 5/100 mm on a visual analogue scale (95% CI 4 to 7; 19% lower from 26/100 mm with placebo, high-quality evidence; 82 studies, 5004 participants), and at 48 hours by 5/100 mm (95% CI 3 to 7; 22% lower from 23/100 mm, high-quality evidence; 49 studies, 2962 participants). Pain during movement was reduced at 24 hours (6/100 mm, 14% lower from 42/100 mm, moderate-quality evidence; 29 studies, 1806 participants), and 48 hours (6/100 mm, 16% lower from 37 mm, low-quality evidence; 23 studies, 1353 participants).\nResults for primary outcomes were consistent when analysed by pain at rest or on movement, operation type, and timing of administration, or sensitivity to study size and pain intensity. No analysis by dose was possible. There was no difference when nitrous oxide was used. We downgraded the quality of the evidence once if numbers of participants were large but small-study effects were present, or twice if numbers were small and small-study effects likely but testing not possible.\nKetamine increased the time for the first postoperative analgesic request by 54 minutes (95% CI 37 to 71 minutes), from a mean of 39 minutes with placebo (moderate-quality evidence; 31 studies, 1678 participants). Ketamine reduced the area of postoperative hyperalgesia by 7 cm² (95% CI −11.9 to −2.2), compared with placebo (very low-quality evidence; 7 studies, 333 participants). We downgraded the quality of evidence because of small-study effects or because the number of participants was below 400.\nCNS adverse events occurred in 52 studies, while 53 studies reported the absence of CNS adverse events. Overall, 187/3614 (5%) participants receiving ketamine and 122/2924 (4%) receiving control treatment experienced an adverse event (RR 1.2, 95% CI 0.95 to 1.4; high-quality evidence; 105 studies, 6538 participants). Ketamine reduced postoperative nausea and vomiting from 27% with placebo to 23% with ketamine (RR 0.88, 95% CI 0.81 to 0.96); the number needed to treat to prevent one episode of postoperative nausea and vomiting with perioperative intravenous ketamine administration was 24 (95% CI 16 to 54; high-quality evidence; 95 studies, 5965 participants).\nAuthors' conclusions\nPerioperative intravenous ketamine probably reduces postoperative analgesic consumption and pain intensity. Results were consistent in different operation types or timing of ketamine administration, with larger and smaller studies, and by higher and lower pain intensity. CNS adverse events were little different with ketamine or control. Perioperative intravenous ketamine probably reduces postoperative nausea and vomiting by a small extent, of arguable clinical relevance.\n\nGiven the provided abstract, please respond to the following query: \"Bottom line\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Ketamine injected into a vein at the time of operation reduces pain, nausea and vomiting, and use of opioid (morphine-like) painkillers after operation." } ]
query-laysum
23387
[ { "role": "user", "content": "Abstract: Background\nRheumatoid arthritis is an inflammatory polyarthritis that frequently affects the hands and wrists. Hand exercises are prescribed to improve mobility and strength, and thereby hand function.\nObjectives\nTo determine the benefits and harms of hand exercise in adults with rheumatoid arthritis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library), MEDLINE, Embase, CINAHL, AMED, Physiotherapy Evidence Database (PEDro), OTseeker, Web of Science, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) up to July 2017.\nSelection criteria\nWe considered all randomised or quasi-randomised controlled trials that compared hand exercise with any non-exercise therapy.\nData collection and analysis\nWe used standard methodological procedures as outlined by the Cochrane Musculoskeletal Group.\nMain results\nWe included seven studies involving 841 people (aged 20 to 94 years) in the review. Most studies used validated diagnostic criteria and involved home programmes.\nVery low-quality evidence (due to risk of bias and imprecision) from one study indicated uncertainty about whether exercise improves hand function in the short term (< 3 months). On a 0 to 80 points hand function test (higher scores mean better function), the exercise group (n = 11) scored 76.1 points and control group (n = 13) scored 75 points.\nModerate-quality evidence (due to risk of bias) from one study indicated that exercise compared to usual care probably slightly improves hand function (mean difference (MD) 4.5, 95% confidence interval (CI) 1.58 to 7.42; n = 449) in the medium term (3 to 11 months) and in the long term (12 months or beyond) (MD 4.3, 95% CI 0.86 to 7.74; n = 438). The absolute change on a 0-to-100 hand function scale (higher scores mean better function) and number needed to treat for an additional beneficial outcome (NNTB) were 5% (95% CI 2% to 7%); 8 (95% CI 5 to 20) and 4% (95% CI 1% to 8%); 9 (95% CI 6 to 27), respectively. A 4% to 5% improvement indicates a minimal clinical benefit.\nVery low-quality evidence (due to risk of bias and imprecision) from two studies indicated uncertainty about whether exercise compared to no treatment improved pain (MD -27.98, 95% CI -48.93 to -7.03; n = 124) in the short term. The absolute change on a 0-to-100-millimetre scale (higher scores mean more pain) was -28% (95% CI -49% to -7%) and NNTB 2 (95% CI 2 to 11).\nModerate-quality evidence (due to risk of bias) from one study indicated that there is probably little or no difference between exercise and usual care on pain in the medium (MD -2.8, 95% CI - 6.96 to 1.36; n = 445) and long term (MD -3.7, 95% CI -8.1 to 0.7; n = 437). On a 0-to-100 scale, the absolute changes were -3% (95% CI -7% to 2%) and -4% (95% CI -8% to 1%), respectively.\nVery low-quality evidence (due to risk of bias and imprecision) from three studies (n = 141) indicated uncertainty about whether exercise compared to no treatment improved grip strength in the short term. 
The standardised mean difference for the left hand was 0.44 (95% CI 0.11 to 0.78), re-expressed as 3.5 kg (95% CI 0.87 to 6.1); and for the right hand 0.46 (95% CI 0.13 to 0.8), re-expressed as 4 kg (95% CI 1.13 to 7).\nHigh-quality evidence from one study showed that exercise compared to usual care has little or no benefit on mean grip strength (in kg) of both hands in the medium term (MD 1.4, 95% CI -0.27 to 3.07; n = 400), relative change 11% (95% CI -2% to 13%); and in the long term (MD 1.2, 95% CI -0.62 to 3.02; n = 355), relative change 9% (95% CI -5% to 23%).\nVery low-quality evidence (due to risk of bias and imprecision) from two studies (n = 120) indicated uncertainty about whether exercise compared to no treatment improved pinch strength (in kg) in the short term. The MD and relative change for the left and right hands were 0.51 (95% CI 0.13 to 0.9) and 44% (95% CI 11% to 78%); and 0.82 (95% CI 0.43 to 1.21) and 68% (95% CI 36% to 101%).\nHigh-quality evidence from one study showed that exercise compared to usual care has little or no benefit on mean pinch strength of both hands in the medium (MD 0.3, 95% CI -0.14 to 0.74; n = 396) and long term (MD 0.4, 95% CI -0.08 to 0.88; n = 351). The relative changes were 8% (95% CI -4% to 19%) and 10% (95% CI -2% to 22%).\nNo study evaluated the American College of Rheumatology 50 criteria.\nModerate-quality evidence (due to risk of bias) from one study indicated that people who also received exercise with strategies for adherence were probably more adherent than those who received routine care alone in the medium term (risk ratio 1.31, 95% CI 1.15 to 1.48; n = 438) and NNTB 6 (95% CI 4 to 10). In the long term, the risk ratio was 1.09 (95% CI 0.93 to 1.28; n = 422).\nModerate-quality evidence (due to risk of bias) from one study (n = 246) indicated no adverse events with exercising. The other six studies did not report adverse events.\nAuthors' conclusions\nIt is uncertain whether exercise improves hand function or pain in the short term. It probably slightly improves function but has little or no difference on pain in the medium and long term. It is uncertain whether exercise improves grip and pinch strength in the short term, and probably has little or no difference in the medium and long term. The ACR50 response is unknown. People who received exercise with adherence strategies were probably more adherent in the medium term than those who did not receive exercise, but with little or no difference in the long term. Hand exercise probably does not lead to adverse events. Future research should consider hand and wrist function as their primary outcome, describe exercise following the TIDieR guidelines, and evaluate behavioural strategies.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "On a 0-to-80-points hand function test (higher scores mean better function), people who did hand exercises rated their function 1.1 points higher in the short term (less than 3 months). People who did not exercise rated their function at 75 points.\nOn a 0-to-100 scale (higher scores mean better function), compared with people who did not exercise, people who exercised rated their hand function 5 points higher in the medium term (3 to 11 months) and 4 points higher in the long term (12 months or beyond). 
People who did not exercise rated their function at 52.1 points.\nOn a 0-to-100-millimetre pain scale (lower scores mean less pain), people who exercised rated their pain 28mm lower in the short term. People who did not exercise rated their pain at 68mm. On a 0-to-100-point scale (lower scores mean less pain), people who exercised rated their pain 3 points lower in the medium and 4 points lower in the long term. People who did not exercise rated their pain at 51.4 points.\nPeople who exercised had 3% and 4% improvement in the left and right-hand grip strength in the short term. People who did not exercise measured 14.3 kg and 15.6 kg, respectively. People who exercised had 1% improvement in the average grip strength of both hands in both medium and long term. People who did not exercise measured 13.2 kg.\nPeople who exercised had 4% and 6% improvement in the left and right hand pinch strength in the short term. People who did not exercise measured 1.2 kg and 1.2 kg, respectively. People who exercised had 2% and 3% improvement in the average pinch strength of both hands in the medium and long term. People who did not exercise measured 4 kg.\nIt must be noted that although there were apparently better scores in hand function, pain, and grip and pinch strength with hand exercise, these improvements may not equate to useful clinical benefits.\nNo studies evaluated the American College of Rheumatology 50 criteria (a measure that indicates greater than or equal to 50% improvement in patient's symptoms with treatment received).\nCompared to participants who received usual care, those who also received hand exercises with strategies to stick with the programme were 19% and 5% more likely to adhere to their programme in less than a year and beyond one year.\nOne study reported no adverse effects due to exercising. The remaining studies did not measure this outcome." } ]
query-laysum
23388
[ { "role": "user", "content": "Abstract: Background\nPain in the neonate is associated with acute behavioural and physiological changes. Cumulative pain is associated with morbidities, including adverse neurodevelopmental outcomes. Studies have shown a reduction in changes in physiological parameters and pain score measurements following pre-emptive analgesic administration in neonates experiencing pain or stress. Non-pharmacological measures (such as holding, swaddling and breastfeeding) and pharmacological measures (such as acetaminophen, sucrose and opioids) have been used for analgesia. This is an update of a review first published in 2006 and updated in 2012.\nObjectives\nThe primary objective was to evaluate the effectiveness of breastfeeding or supplemental breast milk in reducing procedural pain in neonates. The secondary objective was to conduct subgroup analyses based on the type of control intervention, gestational age and the amount of supplemental breast milk given.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and trial registries (ICTRP, ISRCTN and clinicaltrials.gov) in August 2022; searches were limited from 2011 forwards. We checked the reference lists of included studies and relevant systematic reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) or quasi-RCTs of breastfeeding or supplemental breast milk versus no treatment/other measures in neonates. We included both term (≥ 37 completed weeks postmenstrual age) and preterm infants (< 37 completed weeks' postmenstrual age) up to a maximum of 44 weeks' postmenstrual age. The study must have reported on either physiological markers of pain or validated pain scores.\nData collection and analysis\nWe assessed the methodological quality of the trials using the information provided in the studies and by personal communication with the authors. We extracted data on relevant outcomes, estimated the effect size and reported this as a mean difference (MD). We used the GRADE approach to assess the certainty of evidence.\nMain results\nOf the 66 included studies, 36 evaluated breastfeeding, 29 evaluated supplemental breast milk and one study compared them against each other. The procedures conducted in the studies were: heel lance (39), venipuncture (11), intramuscular vaccination (nine), eye examination for retinopathy of prematurity (four), suctioning (four) and adhesive tape removal as procedure (one). We noted marked heterogeneity in the control interventions and pain assessment measures amongst the studies. Since many studies included multiple arms with breastfeeding/supplemental breast milk as the main comparator, we were not able to synthesise all interventions together. Individual interventions are compared to breastfeeding/supplemental breast milk and reported. 
The numbers of studies/participants presented with the findings are not taken from pooled analyses (as is usual in Cochrane Reviews), but are the overall totals in each comparison.\nOverall, the included studies were at low risk of bias except for masking of intervention and outcome assessment, where nearly one-third of studies were at high risk of bias.\nBreastfeeding versus control\nBreastfeeding may reduce the increase in heart rate compared to holding by mother, skin-to-skin contact, bottle feeding mother's milk, moderate concentration of sucrose/glucose (20% to 33%) with skin-to-skin contact (low-certainty evidence, 8 studies, 784 participants).\nBreastfeeding likely reduces the duration of crying compared to no intervention, lying on table, rocking, heel warming, holding by mother, skin-to-skin contact, bottle feeding mother's milk and moderate concentration of glucose (moderate-certainty evidence, 16 studies, 1866 participants).\nBreastfeeding may reduce percentage time crying compared to holding by mother, skin-to-skin contact, bottle feeding mother's milk, moderate concentration sucrose and moderate concentration of sucrose with skin-to-skin contact (low-certainty evidence, 4 studies, 359 participants).\nBreastfeeding likely reduces the Neonatal Infant Pain Scale (NIPS) score compared to no intervention, holding by mother, heel warming, music, EMLA cream, moderate glucose concentration, swaddling, swaddling and holding (moderate-certainty evidence, 12 studies, 1432 participants).\nBreastfeeding may reduce the Neonatal Facial Coding System (NFCS) score compared to no intervention, holding, pacifier and moderate concentration of glucose (low-certainty evidence, 2 studies, 235 participants).\nBreastfeeding may reduce the Douleur Aigue Nouveau-né (DAN) score compared to positioning, holding or placebo (low-certainty evidence, 4 studies, 709 participants).\nIn the majority of the other comparisons there was little or no difference between the breastfeeding and control group in any of the outcome measures.\nSupplemental breast milk versus control\nSupplemental breast milk may reduce the increase in heart rate compared to water or no intervention (low-certainty evidence, 5 studies, 336 participants).\nSupplemental breast milk likely reduces the duration of crying compared to positioning, massage or placebo (moderate-certainty evidence, 11 studies, 1283 participants).\nSupplemental breast milk results in little or no difference in percentage time crying compared to placebo or glycine (low-certainty evidence, 1 study, 70 participants).\nSupplemental breast milk results in little or no difference in NIPS score compared to no intervention, pacifier, moderate concentration of sucrose, eye drops, gentle touch and verbal comfort, and breast milk odour and verbal comfort (low-certainty evidence, 3 studies, 291 participants).\nSupplemental breast milk may reduce NFCS score compared to glycine (overall low-certainty evidence, 1 study, 40 participants). DAN scores were lower when compared to massage and water; no different when compared to no intervention, EMLA and moderate concentration of sucrose; and higher when compared to rocking or pacifier (low-certainty evidence, 2 studies, 224 participants).\nDue to the high number of comparator interventions, other measures of pain were assessed in a very small number of studies in both comparisons, rendering the evidence of low certainty.\nThe majority of studies did not report on adverse events, considering the benign nature of the intervention. 
Those that reported on adverse events identified none in any participants.\nSubgroup analyses were not conducted due to the small number of studies.\nAuthors' conclusions\nModerate-/low-certainty evidence suggests that breastfeeding or supplemental breast milk may reduce pain in neonates undergoing painful procedures compared to no intervention/positioning/holding or placebo or non-pharmacological interventions. Low-certainty evidence suggests that moderate concentration (20% to 33%) glucose/sucrose may lead to little or no difference in reducing pain compared to breastfeeding. The effectiveness of breast milk for painful procedures should be studied in the preterm population, as there are currently a limited number of studies that have assessed its effectiveness in this population.\n\nGiven the provided abstract, please respond to the following query: \"Study funding sources\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The studies included in the review were not externally funded according to information given in the reports." } ]
query-laysum
23389
[ { "role": "user", "content": "Abstract: Background\nCancer cachexia is a multifactorial syndrome characterised by an ongoing loss of skeletal muscle mass, with or without a loss of fat mass, leading to progressive functional impairment. Physical exercise may attenuate cancer cachexia and its impact on patient function. This is the first update of an original Cochrane Review published in Issue 11, 2014, which found no studies to include.\nObjectives\nTo determine the effectiveness, acceptability and safety of exercise, compared with usual care, no treatment or active control, for cancer cachexia in adults.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and eight other databases to March 2020. We searched for ongoing studies in trial registries, checked reference lists and contacted experts to seek relevant studies.\nSelection criteria\nWe sought randomised controlled trials in adults with cancer cachexia, that compared a programme of exercise alone or in combination with another intervention, with usual care, no treatment or an active control group.\nData collection and analysis\nTwo review authors independently assessed titles and abstracts for relevance and extracted data on study design, participants, interventions and outcomes from potentially relevant articles. We used standard methodological procedures expected by Cochrane. Our primary outcome was lean body mass and secondary outcomes were adherence to exercise programme, adverse events, muscle strength and endurance, exercise capacity, fatigue and health-related quality of life. We assessed the certainty of evidence using GRADE and included two Summary of findings tables.\nMain results\nWe included four new studies in this update which overall randomised 178 adults with a mean age of 58 (standard deviation (SD) 8.2) years. Study sample size ranged from 20 to 60 participants and in three studies the proportion of men ranged from 52% to 82% (the fourth study was only available in abstract form). Three studies were from Europe: one in the UK and Norway; one in Belgium and one in Germany. The remaining study was in Canada. The types of primary cancer were head and neck (two studies), lung and pancreas (one study), and mixed (one study).\nWe found two comparisons: exercise alone (strength-based exercise) compared to usual care (one study; 20 participants); and exercise (strength-based exercise/endurance exercise) as a component of a multimodal intervention (pharmacological, nutritional or educational (or a combination) interventions) compared with usual care (three studies, 158 participants). Studies had unclear and high risk of bias for most domains.\nExercise plus usual care compared with usual care\nWe found one study (20 participants). There was no clear evidence of a difference for lean body mass (8 weeks: MD 6.40 kg, 95% CI –2.30 to 15.10; very low-certainty evidence).\nFor our secondary outcomes, all participants adhered to the exercise programme and no participant reported any adverse event during the study. There were no data for muscle strength and endurance, or maximal and submaximal exercise capacity. 
There was no clear evidence of a difference for either fatigue (4 to 20 scale, lower score was better) (8 weeks: MD –0.10, 95% CI –4.00 to 3.80; very low-certainty evidence) or health-related quality of life (0 to 104 scale, higher score was better) (8 weeks: MD 4.90, 95% CI –15.10 to 24.90; very low-certainty evidence).\nMultimodal intervention (exercise plus other interventions) plus usual care compared with usual care\nWe found three studies but outcome data were only available for two studies. There was no clear evidence of a difference for lean body mass (6 weeks: MD 7.89 kg, 95% CI –9.57 to 25.35; 1 study, 44 participants; very low-certainty evidence; 12 weeks: MD –2.00, 95% CI –8.00 to 4.00; one study, 60 participants; very low-certainty evidence).\nFor our secondary outcomes, there were no data reported on adherence to the exercise programme, endurance, or maximal exercise capacity. In one study (44 participants) there was no clear evidence of a difference for adverse events (patient episode report) (6 weeks: risk ratio (RR) 1.18, 95% CI 0.67 to 2.07; very low-certainty evidence). Another study assessed adverse events but reported no data and the third study did not assess this outcome. There was no clear evidence of a difference in muscle strength (6 weeks: MD 3.80 kg, 95% CI –2.87 to 10.47; 1 study, 44 participants; very low-certainty evidence; 12 weeks MD –5.00 kg, 95% CI –14.00 to 4.00; 1 study, 60 participants; very low-certainty evidence), submaximal exercise capacity (6 weeks: MD –16.10 m walked, 95% CI –76.53 to 44.33; 1 study, 44 participants; very low-certainty evidence; 12 weeks: MD –62.60 m walked, 95% CI –145.87 to 20.67; 1 study, 60 participants; very low-certainty evidence), fatigue (0 to 10 scale, lower score better) (6 weeks: MD 0.12, 95% CI –1.00 to 1.24; 1 study, 44 participants; very low-certainty evidence) or health-related quality of life (0 to 104 scale, higher score better) (12 weeks: MD –2.20, 95% CI –13.99 to 9.59; 1 study, 60 participants; very low-certainty evidence).\nAuthors' conclusions\nThe previous review identified no studies. For this update, our conclusions have changed with the inclusion of four studies. However, we are uncertain of the effectiveness, acceptability and safety of exercise for adults with cancer cachexia. Further high-quality randomised controlled trials are still required to test exercise alone or as part of a multimodal intervention to improve people's well-being throughout all phases of cancer care. We assessed the certainty of the body of evidence as very low, downgraded due to serious study limitations, imprecision and indirectness. We have very little confidence in the results and the true effect is likely to be substantially different from these. The findings of at least three more studies (one awaiting classification and two ongoing) are expected in the next review update.\n\nGiven the provided abstract, please respond to the following query: \"Key message\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We do not know if exercise is helpful or safe for people with cancer who experience loss of appetite and weight loss. This is because too few robust studies have tested exercise with this group of patients. We need researchers to conduct rigorous and better designed studies in this area in future to help patients and clinicians decide if exercise could be beneficial." } ]
query-laysum
23390
[ { "role": "user", "content": "Abstract: Background\nFollowing surgery, surgical wounds can be closed using a variety of devices including sutures (subcuticular or transdermal), staples and tissue adhesives. Subcuticular sutures are intradermal stitches (placed immediately below the epidermal layer). The increased availability of synthetic absorbable filaments (stitches which are absorbed by the body and do not have to be removed) has led to an increased use of subcuticular sutures. However, in non-obstetric surgery, there is still controversy about whether subcuticular sutures increase the incidence of wound complications.\nObjectives\nTo examine the efficacy and acceptability of subcuticular sutures for skin closure in non-obstetric surgery.\nSearch methods\nIn March 2019, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nAll randomised controlled trials which compared subcuticular sutures with any other methods for skin closure in non-obstetric surgery were included in the review.\nData collection and analysis\nTwo review authors independently identified the trials, extracted data and carried out risk of bias and GRADE assessment of the certainty of the evidence.\nMain results\nWe included 66 studies (7487 participants); 11 included trials had more than two arms. Most trials had poorly-reported methodology, meaning that it is unclear whether they were at high risk of bias. Most trials compared subcuticular sutures with transdermal sutures, skin staples or tissue adhesives. Most outcomes prespecified in the review protocol were reported. The certainty of evidence varied from high to very low in the comparisons of subcuticular sutures with transdermal sutures or staples and tissue adhesives; the certainty of the evidence for the comparison with surgical tapes and zippers was low to very low. Most evidence was downgraded for imprecision or risk of bias.\nAlthough the majority of studies enrolled people who underwent CDC class 1 (clean) surgeries, two-thirds of participants were enrolled in studies which included CDC class 2 to 4 surgeries, such as appendectomies and gastrointestinal surgeries. Most participants were adults in a hospital setting.\nSubcuticular sutures versus transdermal sutures\nThere may be little difference in the incidence of SSI (risk ratio (RR) 1.10; 95% confidence interval (CI) 0.80 to 1.52; 3107 participants; low-certainty evidence).\nIt is uncertain whether subcuticular sutures reduce wound complications (RR 0.83; 95% CI 0.40 to 1.71; 1489 participants; very low-certainty evidence). Subcuticular sutures probably improve patient satisfaction (score from 1 to 10) (at 30 days; MD 1.60, 95% CI 1.32 to 1.88; 290 participants; moderate-certainty evidence). 
Wound closure time is probably longer when subcuticular sutures are used (MD 5.81 minutes; 95% CI 5.13 to 6.49 minutes; 585 participants; moderate-certainty evidence).\nSubcuticular sutures versus skin staples\nThere is moderate-certainty evidence that, when compared with skin staples, subcuticular sutures probably have little effect on SSI (RR 0.81, 95% CI 0.64 to 1.01; 4163 participants); but probably decrease the incidence of wound complications (RR 0.79, 95% CI 0.64 to 0.98; 2973 participants). Subcuticular sutures are associated with slightly higher patient satisfaction (score from 1 to 5) (MD 0.20, 95% CI 0.10 to 0.30; 1232 participants; high-certainty evidence). Wound closure time may also be longer compared with staples (MD 0.30 to 5.50 minutes; 1384 participants; low-certainty evidence).\nSubcuticular sutures versus tissue adhesives, surgical tapes and zippers\nThere is moderate-certainty evidence showing no clear difference in the incidence of SSI between participants treated with subcuticular sutures and those treated with tissue adhesives (RR 0.77, 95% CI 0.41 to 1.45; 869 participants). There is also no clear difference in the incidence of wound complications (RR 0.62, 95% CI 0.35 to 1.11; 1058 participants; low-certainty evidence). Subcuticular sutures may also achieve lower patient satisfaction ratings (score from 1 to 10) (MD -2.05, 95% CI -3.05 to -1.05; 131 participants) (low-certainty evidence). In terms of SSI incidence, the evidence is uncertain when subcuticular sutures are compared with surgical tapes (RR 1.31, 95% CI 0.40 to 4.27; 354 participants; very low-certainty evidence) or surgical zippers (RR 0.80, 95% CI 0.08 to 8.48; 424 participants; very low-certainty evidence). There may be little difference in the incidence of wound complications between participants treated with subcuticular sutures and those treated with surgical tapes (RR 0.90, 95% CI 0.61 to 1.34; 492 participants; low-certainty evidence). It is uncertain whether subcuticular sutures reduce the risk of wound complications compared with surgical zippers (RR 0.55, 95% CI 0.15 to 2.04; 424 participants; very low-certainty evidence). It is also uncertain whether it takes longer to close a wound with subcuticular sutures compared with tissue adhesives (MD -0.34 to 10.39 minutes; 895 participants), surgical tapes (MD 0.74 to 6.36 minutes; 169 participants) or zippers (MD 4.38 to 8.25 minutes; 424 participants) (very low-certainty evidence). No study reported results for patient satisfaction compared with surgical tapes or zippers.\nAuthors' conclusions\nThere is no clear difference in the incidence of SSI for subcuticular sutures in comparison with any other skin closure methods. Subcuticular sutures probably reduce wound complications compared with staples, and probably improve patient satisfaction compared with transdermal sutures or staples. However, tissue adhesives may improve patient satisfaction compared with subcuticular sutures, and transdermal sutures and skin staples may be quicker to apply than subcuticular sutures. The quality of the evidence ranged from high to very low; evidence for almost all comparisons was subject to some limitations. There seems to be no need for additional new trials to explore the comparison with staples because there are high-quality studies with large sample sizes and some ongoing studies. 
However, there is a need for studies exploring the comparisons with transdermal sutures, tissue adhesives, tapes and zippers, with high-quality studies and large sample sizes, including long-term assessments.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Surgeons have various options for closing surgical wounds at the end of an operation. Skin closure can be carried out with stitches (sutures) that go under the skin, stitches that go over the skin, staples (clips), tissue adhesives (glue), tapes or other devices. Sutures can be absorbable (the stitches dissolve into the body as part of the healing process and do not need removing) or non-absorbable (the stitches need removing once the wound has healed).\nSurgical site infections are a common problem after surgery and can cause a range of problems for patients. Surgical wounds can also cause unsightly scars if they do not heal correctly. We wanted to find out how stitches that go under the skin compare with other methods of closing surgical wounds in terms of infection, scarring, patient satisfaction, cost, pain, length of hospital stay and quality of life." } ]
query-laysum
23391
[ { "role": "user", "content": "Abstract: Background\nAbout 5% of school children have a specific learning disorder, defined as unexpected failure to acquire adequate abilities in reading, writing or mathematics that is not a result of reduced intellectual ability, inadequate teaching or social deprivation. Of these events, 80% are reading disorders. Polyunsaturated fatty acids (PUFAs), in particular, omega-3 and omega-6 fatty acids, which normally are abundant in the brain and in the retina, are important for learning. Some children with specific learning disorders have been found to be deficient in these PUFAs, and it is argued that supplementation of PUFAs may help these children improve their learning abilities.\nObjectives\n1. To assess effects on learning outcomes of supplementation of polyunsaturated fatty acids (PUFAs) for children with specific learning disorders.\n2. To determine whether adverse effects of supplementation of PUFAs are reported in these children.\nSearch methods\nIn November 2015, we searched CENTRAL, Ovid MEDLINE, Embase, PsycINFO, 10 other databases and two trials registers. We also searched the reference lists of relevant articles.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs comparing PUFAs with placebo or no treatment in children younger than 18 years with specific learning disabilities, as diagnosed in accordance with the fifth (or earlier) edition of theDiagnostic and Statistical Manual of Mental Disorders (DSM-5), or the 10th (or earlier) revision of the International Classification of Diseases (ICD-10) or equivalent criteria. We included children with coexisting developmental disorders such as attention deficit hyperactivity disorder (ADHD) or autism.\nData collection and analysis\nTwo review authors (MLT and KHT) independently screened the titles and abstracts of articles identified by the search and eliminated all studies that did not meet the inclusion criteria. We contacted study authors to ask for missing information and clarification, when needed. We used the GRADE approach to assess the quality of evidence.\nMain results\nTwo small studies involving 116 children, mainly boys between 10 and 18 years of age, met the inclusion criteria. One study was conducted in a school setting, the other at a specialised clinic. Both studies used three months of a combination of omega-3 and omega-6 supplements as the intervention compared with placebo. Although both studies had generally low risk of bias, we judged the risk of reporting bias as unclear in one study, and as high in the other study. In addition, one of the studies was funded by industry and reported active company involvement in the study.\nNone of the studies reported data on the primary outcomes of reading, writing, spelling and mathematics scores, as assessed by standardised tests.\nEvidence of low quality indicates that supplementation of PUFAs did not increase the risk of gastrointestinal disturbances (risk ratio 1.43, 95% confidence interval 0.25 to 8.15; two studies, 116 children). Investigators reported no other adverse effects.\nBoth studies reported attention deficit hyperactivity disorder (ADHD)-related behaviour outcomes. We were unable to combine the results in a meta-analysis because one study reported findings as a continuous outcome, and the other as a dichotomous outcome. 
No other secondary outcomes were reported.\nWe excluded one study because it used a cointervention (carnosine), and five other studies because they did not provide a robust diagnosis of a specific learning disorder. We identified one ongoing study and found three studies awaiting classification.\nAuthors' conclusions\nEvidence is insufficient to permit any conclusions about the effect of PUFAs on the learning abilities of children with specific learning disorders. Well-designed RCTs with clearly defined populations of children with specific learning disorders who have been diagnosed by standardised diagnostic criteria are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "None of the included studies reported effects of PUFAs on reading, writing, spelling or mathematical abilities of children.\nEvidence of low quality (because studies included few participants and showed evidence of bias) suggests that using PUFAs did not increase the risk of minor disturbances to the digestive system. Included studies reported no other types of adverse effects.\nBoth studies reported on ADHD-related behaviour. However, the format of available data did not allow us to readily combine them or to reach any conclusions. Included studies reported no other secondary outcomes." } ]
query-laysum
23392
[ { "role": "user", "content": "Abstract: Background\nCalcium channel blockers (CCBs) are used to manage hypertension which is highly prevalent among people with chronic kidney disease (CKD). The treatment for hypertension is particularly challenging in people undergoing dialysis.\nObjectives\nTo assess the benefits and harms of calcium channel blockers in patients with chronic kidney disease requiring dialysis.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies to 27 April 2020 through contact with the Information Specialist using search terms relevant to this review. Studies in the Specialised Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nAll randomised controlled trials (RCTs) and quasi-RCTs that compared any type of CCB with other CCB, different doses of the same CCB, other antihypertensives, control or placebo were included. The minimum study duration was 12 weeks.\nData collection and analysis\nTwo authors independently assessed study quality and extracted data. Statistical analyses were performed using a random-effects model and results expressed as risk ratio (RR), risk difference (RD) or mean difference (MD) with 95% confidence intervals (CI).\nMain results\nThis review included 13 studies (24 reports) randomising 1459 participants treated with long-term haemodialysis. Nine studies were included in the meta-analysis (622 participants). No studies were performed in children or in those undergoing peritoneal dialysis. Overall, risk of bias was assessed as unclear to high across most domains.\nRandom sequence generation and allocation concealment were at low risk of bias in eight and one studies, respectively. Two studies reported low risk methods for blinding of participants and investigators, and outcome assessment was blinded in 10 studies. Three studies were at low risk of attrition bias, eight studies were at low risk of selective reporting bias, and five studies were at low risk of other potential sources of bias. Overall, the certainty of the evidence was low to very low for all outcomes. No events were reported for cardiovascular death in any of the comparisons. Other side effects were rarely reported and studies were not designed to measure costs.\nFive studies (451 randomised adults) compared dihydropyridine CCBs to placebo or no treatment. Dihydropyridine CCBs may decrease predialysis systolic (1 study, 39 participants: MD -27.00 mmHg, 95% CI -43.33 to -10.67; low certainty evidence) and diastolic blood pressure level (2 studies, 76 participants; MD -13.56 mmHg, 95% CI -19.65 to -7.48; I2 = 0%, low certainty evidence) compared to placebo or no treatment. Dihydropyridine CCBs may make little or no difference to occurrence of intradialytic hypotension (2 studies, 287 participants; RR 0.54, 95% CI 0.25 to 1.15; I2 = 0%, low certainty evidence) compared to placebo or no treatment. Other side effects were not reported.\nEight studies (1037 randomised adults) compared dihydropyridine CCBs to other antihypertensives. Dihydropyridine CCBs may make little or no difference to predialysis systolic (4 studies, 180 participants: MD 2.44 mmHg, 95% CI -3.74 to 8.62; I2 = 0%, low certainty evidence) and diastolic blood pressure (4 studies, 180 participants: MD 1.49 mmHg, 95% CI -2.23 to 5.21; I2 = 0%, low certainty evidence) compared to other antihypertensives. 
There was no evidence of a difference in the occurrence of intradialytic hypotension (1 study, 92 participants: RR 2.88, 95% CI 0.12 to 68.79; very low certainty evidence) between dihydropyridine CCBs and other antihypertensives. Other side effects were not reported.\nDihydropyridine CCB may make little or no difference to predialysis systolic (1 study, 40 participants: MD -4 mmHg, 95% CI -11.99 to 3.99; low certainty evidence) and diastolic blood pressure (1 study, 40 participants: MD -3.00 mmHg, 95% CI -7.06 to 1.06; low certainty evidence) compared to non-dihydropyridine CCB. There was no evidence of a difference in other side effects (1 study, 40 participants: RR 0.13, 95% CI 0.01 to 2.36; very low certainty evidence) between dihydropyridine CCB and non-dihydropyridine CCB. Intradialytic hypotension was not reported.\nAuthors' conclusions\nThe benefits of CCBs over other antihypertensives on predialysis blood pressure levels and intradialytic hypotension among people with CKD who required haemodialysis were uncertain. Effects of CCBs on other side effects and cardiovascular death also remain uncertain. Dihydropyridine CCBs may decrease predialysis systolic and diastolic blood pressure level compared to placebo or no treatment. No studies were identified in children or peritoneal dialysis. Available studies have not been designed to measure the effects on costs. The shortcomings of the studies were that they recruited very few participants, had few events, had very short follow-up periods, some outcomes were not reported, and the reporting of outcomes such as changes in blood pressure was not done uniformly across studies.\nWell-designed RCTs, conducted in both adults and children with CKD requiring both haemodialysis and peritoneal dialysis, evaluating both dihydropyridine and non-dihydropyridine CCBs against other antihypertensives are required. Future research should be focused on outcomes relevant to patients (including death and cardiovascular disease), blood pressure changes, risk of side effects and healthcare costs to assist decision-making in clinical practice.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "People with long-term kidney disease or chronic kidney disease (CKD) often develop high blood pressure (hypertension), and those with advanced CKD need dialysis when their kidneys are no longer able to function. Treatment for hypertension is often challenging for people with advanced CKD undergoing dialysis. Several medications are used to treat high blood pressure including calcium channel blockers (CCBs).\nWe wanted to find out whether the use of CCBs in people with CKD undergoing haemodialysis or peritoneal dialysis had any added benefits over other medications used to treat hypertension or placebo (no active treatment) in lowering the blood pressure, risk of death and undesired effects." } ]
query-laysum
23393
[ { "role": "user", "content": "Abstract: Background\nChronic venous insufficiency (CVI) is a condition in which veins are unable to transport blood unidirectionally towards the heart. CVI usually occurs in the lower limbs. It might result in considerable discomfort, with symptoms such as pain, itchiness and tiredness in the legs. Patients with CVI may also experience swelling and ulcers. Phlebotonics are a class of drugs often used to treat CVI. This is the second update of a review first published in 2005.\nObjectives\nTo assess the efficacy and safety of phlebotonics administered orally or topically for treatment of signs and symptoms of lower extremity CVI.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL databases and the World Health Organization International Clinical Trials Registry Platform and Clinicaltrials.gov trials register up to 12 November 2019. We searched the reference lists of the articles retrieved by electronic searches for additional citations. We also contacted authors of unpublished studies.\nSelection criteria\nWe included randomised, double-blind, placebo-controlled trials (RCTs) assessing the efficacy of phlebotonics (rutosides, hidrosmine, diosmine, calcium dobesilate, chromocarbe, Centella asiatica, disodium flavodate, French maritime pine bark extract, grape seed extract and aminaftone) in patients with CVI at any stage of the disease.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of included RCTs. We estimated the effects of treatment by using risk ratios (RRs), mean differences (MDs) and standardized mean differences (SMDs), according to the outcome assessed. We calculated 95% confidence intervals (CIs) and percentage of heterogeneity (I2). Outcomes of interest were oedema, quality of life (QoL), assessment of CVI and adverse events. We used GRADE criteria to assess the certainty of the evidence.\nMain results\nWe identified three new studies for this update. In total, 69 RCTs of oral phlebotonics were included, but only 56 studies (7690 participants, mean age 50 years) provided quantifiable data for the efficacy analysis. These studies used different phlebotonics (28 on rutosides, 11 on hidrosmine and diosmine, 10 on calcium dobesilate, two on Centella asiatica, two on aminaftone, two on French maritime pine bark extract and one on grape seed extract). No studies evaluating topical phlebotonics, chromocarbe, naftazone or disodium flavodate fulfilled the inclusion criteria.\nModerate-certainty evidence suggests that phlebotonics probably reduce oedema slightly in the lower legs, compared with placebo (RR 0.70, 95% CI 0.63 to 0.78; 13 studies; 1245 participants); and probably reduce ankle circumference (MD -4.27 mm, 95% CI -5.61 to -2.93 mm; 15 studies; 2010 participants). Moderate-certainty evidence shows that phlebotonics probably make little or no difference in QoL compared with placebo (SMD -0.06, 95% CI -0.22 to 0.10; five studies; 1639 participants); and similarly, may have little or no effect on ulcer healing (RR 0.94, 95% CI 0.79 to 1.13; six studies; 461 participants; low-certainty evidence). Thirty-seven studies reported on adverse events. Pooled data suggest that phlebotonics probably increase adverse events slightly, compared to placebo (RR 1.14, 95% CI 1.02 to 1.27; 37 studies; 5789 participants; moderate-certainty evidence). 
Gastrointestinal disorders were the most frequently reported adverse events. We downgraded our certainty in the evidence from 'high' to 'moderate' because of risk of bias concerns, and further to 'low' because of imprecision.\nAuthors' conclusions\nThere is moderate-certainty evidence that phlebotonics probably reduce oedema slightly, compared to placebo; moderate-certainty evidence of little or no difference in QoL; and low-certainty evidence that these drugs do not influence ulcer healing. Moderate-certainty evidence suggests that phlebotonics are probably associated with a higher risk of adverse events than placebo. Studies included in this systematic review provided only short-term safety data; therefore, the medium- and long-term safety of phlebotonics could not be estimated. Findings for specific groups of phlebotonics are limited due to small study numbers and heterogeneous results. Additional high-quality RCTs focusing on clinically important outcomes are needed to improve the evidence base.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The evidence in this Cochrane Review is current to November 2019." } ]
query-laysum
23394
[ { "role": "user", "content": "Abstract: Background\nEffective oral hygiene measures carried out on a regular basis are vital to maintain good oral health. One-to-one oral hygiene advice (OHA) within the dental setting is often provided as a means to motivate individuals and to help achieve improved levels of oral health. However, it is unclear if one-to-one OHA in a dental setting is effective in improving oral health and what method(s) might be most effective and efficient.\nObjectives\nTo assess the effects of one-to-one OHA, provided by a member of the dental team within the dental setting, on patients' oral health, hygiene, behaviour, and attitudes compared to no advice or advice in a different format.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 10 November 2017); the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 10) in the Cochrane Library (searched 10 November 2017); MEDLINE Ovid (1946 to 10 November 2017); and Embase Ovid (1980 to 10 November 2017). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were also searched for ongoing trials (10 November 2017). No restrictions were placed on the language or date of publication when searching the electronic databases. Reference lists of relevant articles and previously published systematic reviews were handsearched. The authors of eligible trials were contacted, where feasible, to identify any unpublished work.\nSelection criteria\nWe included randomised controlled trials assessing the effects of one-to-one OHA delivered by a dental care professional in a dental care setting with a minimum of 8 weeks follow-up. We included healthy participants or participants who had a well-defined medical condition.\nData collection and analysis\nAt least two review authors carried out selection of studies, data extraction and risk of bias independently and in duplicate. Consensus was achieved by discussion, or involvement of a third review author if required.\nMain results\nNineteen studies met the criteria for inclusion in the review with data available for a total of 4232 participants. The included studies reported a wide variety of interventions, study populations, clinical outcomes and outcome measures. There was substantial clinical heterogeneity amongst the studies and it was not deemed appropriate to pool data in a meta-analysis. We summarised data by categorising similar interventions into comparison groups.\nComparison 1: Any form of one-to-one OHA versus no OHA\nFour studies compared any form of one-to-one OHA versus no OHA.\nTwo studies reported the outcome of gingivitis. Although one small study had contradictory results at 3 months and 6 months, the other study showed very low-quality evidence of a benefit for OHA at all time points (very low-quality evidence).\nThe same two studies reported the outcome of plaque. There was low-quality evidence that these interventions showed a benefit for OHA in plaque reduction at all time points.\nTwo studies reported the outcome of dental caries at 6 months and 12 months respectively. 
There was very low-quality evidence of a benefit for OHA at 12 months.\nComparison 2: Personalised one-to-one OHA versus routine one-to-one OHA\nFour studies compared personalised OHA versus routine OHA.\nThere was little evidence available that any of these interventions demonstrated a difference on the outcomes of gingivitis, plaque or dental caries (very low quality).\nComparison 3: Self-management versus professional OHA\nFive trials compared some form of self-management with some form of professional OHA.\nThere was little evidence available that any of these interventions demonstrated a difference on the outcomes of gingivitis or plaque (very low quality). None of the studies measured dental caries.\nComparison 4: Enhanced one-to-one OHA versus one-to-one OHA\nSeven trials compared some form of enhanced OHA with some form of routine OHA.\nThere was little evidence available that any of these interventions demonstrated a difference on the outcomes of gingivitis, plaque or dental caries (very low quality).\nAuthors' conclusions\nThere was insufficient high-quality evidence to recommend any specific one-to-one OHA method as being effective in improving oral health or being more effective than any other method. Further high-quality randomised controlled trials are required to determine the most effective, efficient method of one-to-one OHA for oral health maintenance and improvement. The design of such trials should be cognisant of the limitations of the available evidence presented in this Cochrane Review.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We judged the quality of the evidence to be very low due to problems with the design of the studies." } ]
query-laysum
23395
[ { "role": "user", "content": "Abstract: Background\nCleft lip and palate is one of the most common birth defects and can cause difficulties with feeding, speech and hearing, as well as psychosocial problems. Treatment of orofacial clefts is prolonged; it typically commences after birth and lasts until the child reaches adulthood or even into adulthood. Residual deformities, functional disturbances, or both, are frequently seen in adults with a repaired cleft. Conventional orthognathic surgery, such as Le Fort I osteotomy, is often performed for the correction of maxillary hypoplasia. An alternative intervention is distraction osteogenesis, which achieves bone lengthening by gradual mechanical distraction. This review is an update of the original version that was published in 2016.\nObjectives\nTo provide evidence regarding the effects and long-term results of maxillary distraction osteogenesis compared to orthognathic surgery for the treatment of hypoplastic maxilla in people with cleft lip and palate.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health’s Trials Register (to 15 May 2018), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2018, Issue 4), MEDLINE Ovid (1946 to 15 May 2018), Embase Ovid (1980 to 15 May 2018), and LILACS BIREME Virtual Health Library (Latin American and Caribbean Health Science Information database; from 1982 to 15 May 2018). The US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing maxillary distraction osteogenesis to conventional Le Fort I osteotomy for the correction of cleft lip and palate maxillary hypoplasia in non-syndromic cleft patients aged 15 years or older.\nData collection and analysis\nTwo review authors assessed studies for eligibility. Two review authors independently extracted data and assessed the risk of bias in the included studies. We contacted trial authors for clarification or missing information whenever possible. All standard methodological procedures expected by Cochrane were used.\nMain results\nWe found six publications involving a total of 47 participants requiring maxillary advancement of 4 mm to 10 mm. All of them related to a single trial performed between 2002 and 2008 at the University of Hong Kong, but not all of the publications reported outcomes from all 47 participants. The study compared maxillary distraction osteogenesis with orthognathic surgery, and included participants from 13 to 45 years of age.\nResults and conclusions should be interpreted with caution given the fact that this was a single trial at high risk of bias, with a small sample size.\nThe main outcomes assessed were hard and soft tissue changes, skeletal relapse, effects on speech and velopharyngeal function, psychological status, and clinical morbidities.\nBoth interventions produced notable hard and soft tissue improvements. 
Nevertheless, the distraction group demonstrated a greater maxillary advancement, evaluated as the advancement of Subspinale A-point: a mean difference of 4.40 mm (95% CI 0.24 to 8.56) was recorded two years postoperatively.\nHorizontal relapse of the maxilla was significantly less in the distraction osteogenesis group five years after surgery. A total forward movement of A-point of 2.27 mm was noted for the distraction group, whereas a backward movement of 2.53 mm was recorded for the osteotomy group (mean difference 4.8 mm, 95% CI 0.41 to 9.19).\nNo statistically significant differences could be detected between the groups in speech outcomes, when evaluated through resonance (hypernasality) at 17 months postoperatively (RR 0.11, 95% CI 0.01 to 1.85) and nasal emissions at 17 months postoperatively (RR 3.00, 95% CI 0.14 to 66.53), or in velopharyngeal function at the same time point (RR 1.28, 95% CI 0.65 to 2.52).\nMaxillary distraction initially lowered social self-esteem at least until the distractors were removed, at three months postoperatively, compared to the osteotomy group, but this improved over time and the distraction group had higher satisfaction with life in the long term (two years after surgery) (MD 2.95, 95% CI 0.14 to 5.76).\nAdverse effects, in terms of clinical morbidities, included mainly occlusal relapse and mucosal infection, with the frequency being similar between groups (3/15 participants in the distraction osteogenesis group and 3/14 participants in the osteotomy group). There was no severe harm to any participant.\nAuthors' conclusions\nThis review found only one small randomised controlled trial concerning the effectiveness of distraction osteogenesis compared to conventional orthognathic surgery. The available evidence is of very low quality, which indicates that further research is likely to change the estimate of the effect. Based on measured outcomes, distraction osteogenesis may produce more satisfactory results; however, further prospective research comprising assessment of a larger sample size with participants with different facial characteristics is required to confirm possible true differences between interventions.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This review, produced through Cochrane Oral Health, examines the benefits and risks of distraction osteogenesis for advancing the upper jaw compared to conventional orthognathic surgery in adolescents and adults." } ]
query-laysum
23396
[ { "role": "user", "content": "Abstract: Background\nSurgery is the preferred treatment for resectable oesophageal cancers, and can be performed in different ways. Transhiatal oesophagectomy (oesophagectomy without thoracotomy, with a cervical anastomosis) is one way to resect oesophageal cancers. It can be performed laparoscopically or by open method. With other organs, laparoscopic surgery has been shown to reduce complications and length of hospital stay compared to open surgery. However, concerns remain about the safety of laparoscopic transhiatal oesophagectomy in terms of post-operative complications and oncological clearance compared with open transhiatal oesophagectomy.\nObjectives\nTo assess the benefits and harms of laparoscopic versus open oesophagectomy for people with oesophageal cancer undergoing transhiatal oesophagectomy.\nSearch methods\nWe electronically searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded, and trials registers until August 2015. We also searched the references of included trials to identify further trials.\nSelection criteria\nWe considered randomised controlled trials and non-randomised studies comparing laparoscopic with open transhiatal oesophagectomy in patients with resectable oesophageal cancer, regardless of language, blinding, or publication status for the review.\nData collection and analysis\nThree review authors independently identified trials, assessed risk of bias and extracted data. We calculated the risk ratio (RR) or hazard ratio (HR) with 95% confidence intervals (CI), using both fixed-effect and random-effects models, with RevMan 5, based on intention-to-treat analyses.\nMain results\nWe found no randomised controlled trials on this topic. We included six non-randomised studies (five retrospective) that compared laparoscopic versus open transhiatal oesophagectomy (334 patients: laparoscopic = 154 patients; open = 180 patients); five studies (326 patients: laparoscopic = 151 patients; open = 175 patients) provided information for one or more outcomes. Most studies included a mixture of adenocarcinoma and squamous cell carcinoma and different stages of oesophageal cancer, without metastases. 
All the studies were at unclear or high risk of bias; the overall quality of evidence was very low for all the outcomes.\nThe differences between laparoscopic and open transhiatal oesophagectomy were imprecise for short-term mortality (laparoscopic = 0/151 (adjusted proportion based on meta-analysis estimate: 0.5%) versus open = 2/175 (1.1%); RR 0.44; 95% CI 0.05 to 4.09; participants = 326; studies = 5; I² = 0%); long-term mortality (HR 0.97; 95% CI 0.81 to 1.16; participants = 193; studies = 2; I² = 0%); anastomotic stenosis (laparoscopic = 4/36 (11.1%) versus open = 3/37 (8.1%); RR 1.37; 95% CI 0.33 to 5.70; participants = 73; studies = 1); short-term recurrence (laparoscopic = 1/16 (6.3%) versus open = 0/4 (0%); RR 0.88; 95% CI 0.04 to 18.47; participants = 20; studies = 1); long-term recurrence (HR 1.00; 95% CI 0.84 to 1.18; participants = 173; studies = 2); proportion of people who required blood transfusion (laparoscopic = 0/36 (0%) versus open = 6/37 (16.2%); RR 0.08; 95% CI 0.00 to 1.35; participants = 73; studies = 1); proportion of people with positive resection margins (laparoscopic = 15/102 (15.8%) versus open = 27/111 (24.3%); RR 0.65; 95% CI 0.37 to 1.12; participants = 213; studies = 3; I² = 0%); and the number of lymph nodes harvested during surgery (median difference between the groups varied from 12 less to 3 more lymph nodes in the laparoscopic compared to the open group; participants = 326; studies = 5).\nThe proportion of patients with serious adverse events was lower in the laparoscopic group (10/99, (10.3%) compared to the open group = 24/114 (21.1%); RR 0.49; 95% CI 0.24 to 0.99; participants = 213; studies = 3; I² = 0%); as it was for adverse events in the laparoscopic group = 37/99 (39.9%) versus the open group = 71/114 (62.3%); RR 0.64; 95% CI 0.48 to 0.86; participants = 213; studies = 3; I² = 0%); and the median lengths of hospital stay were significantly less in the laparoscopic group than the open group (three days less in all three studies that reported this outcome; number of participants = 266). There was lack of clarity as to whether the median difference in the quantity of blood transfused was statistically significant favouring laparoscopic oesophagectomy in the only study that reported this information. None of the studies reported post-operative dysphagia, health-related quality of life, time-to-return to normal activity (return to pre-operative mobility without caregiver support), or time-to-return to work.\nAuthors' conclusions\nThere are currently no randomised controlled trials comparing laparoscopic with open transhiatal oesophagectomy for patients with oesophageal cancers. In observational studies, laparoscopic transhiatal oesophagectomy is associated with fewer overall complications and shorter hospital stays than open transhiatal oesophagectomy. However, this association is unlikely to be causal. There is currently no information to determine a causal association in the differences between the two surgical approaches. Randomised controlled trials comparing laparoscopic transhiatal oesophagectomy with other methods of oesophagectomy are required to determine the optimal method of oesophagectomy.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Randomised controlled trials are the best types of studies to find out whether one treatment is better than another since it ensures that similar types of people receive the new and the old treatment. But we did not find randomised controlled trials; we identified six relevant non-randomised studies with a total of 334 patients, which compared laparoscopic and open surgeries. Since one of the studies did not provide usable results, five studies, with 326 patients, provided information for this review; laparoscopic surgery = 151 patients and open surgery = 175 patients. In four of these studies, historical information was collected from hospital records. In one study, new information was collected. In general, new information is considered to be more reliable than information from hospital records." } ]
query-laysum
23397
[ { "role": "user", "content": "Abstract: Background\nInfants born preterm are at increased risk of early hypernatraemia (above-normal blood sodium levels) and late hyponatraemia (below-normal blood sodium levels). There are concerns that imbalances of sodium intake may impact neonatal morbidities, growth and developmental outcomes.\nObjectives\nTo determine the effects of higher versus lower sodium supplementation in preterm infants.\nSearch methods\nWe searched CENTRAL in February 2023; and MEDLINE, Embase and trials registries in March and April 2022. We checked reference lists of included studies and systematic reviews where subject matter related to the intervention or population examined in this review. We compared early (< 7 days following birth), late (≥ 7 days following birth), and early and late sodium supplementation, separately.\nSelection criteria\nWe included randomised, quasi-randomised or cluster-randomised controlled trials that compared nutritional supplementation that included higher versus lower sodium supplementation in parenteral or enteral intake, or both. Eligible participants were preterm infants born before 37 weeks' gestational age or with a birth weight less than 2500 grams, or both. We excluded studies that had prespecified differential water intakes between groups.\nData collection and analysis\nTwo review authors independently assessed eligibility and risk of bias, and extracted data. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe included nine studies in total. However, we were unable to extract data from one study (20 infants); some studies contributed to more than one comparison. Eight studies (241 infants) were available for quantitative meta-analysis. Four studies (103 infants) compared early higher versus lower sodium intake, and four studies (138 infants) compared late higher versus lower sodium intake. Two studies (103 infants) compared intermediate sodium supplementation (≥ 3 mmol/kg/day to < 5 mmol/kg/day) versus no supplementation, and two studies (52 infants) compared higher sodium supplementation (≥ 5 mmol/kg/day) versus no supplementation. We assessed only two studies (63 infants) as low risk of bias.\nEarly (less than seven days following birth) higher versus lower sodium intake\nEarly higher versus lower sodium intake may not affect mortality (risk ratio (RR) 1.02, 95% confidence interval (CI) 0.38 to 2.72; I2 = 0%; 3 studies, 83 infants; low-certainty evidence). Neurodevelopmental follow-up was not reported. Early higher versus lower sodium intake may lead to a similar incidence of hyponatraemia < 130 mmol/L (RR 0.68, 95% CI 0.40 to 1.13; I2 = 0%; 3 studies, 83 infants; low-certainty evidence) but an increased incidence of hypernatraemia ≥ 150 mmol/L (RR 1.62, 95% CI 1.00 to 2.65; I2 = 0%; 4 studies, 103 infants; risk difference (RD) 0.17, 95% CI 0.01 to 0.34; number needed to treat for an additional harmful outcome 6, 95% CI 3 to 100; low-certainty evidence). Postnatal growth failure was not reported. The evidence is uncertain for an effect on necrotising enterocolitis (RR 4.60, 95% CI 0.23 to 90.84; 1 study, 46 infants; very low-certainty evidence). Chronic lung disease at 36 weeks was not reported.\nLate (seven days or more following birth) higher versus lower sodium intake\nLate higher versus lower sodium intake may not affect mortality (RR 0.13, 95% CI 0.01 to 2.20; 1 study, 49 infants; very low-certainty evidence). Neurodevelopmental follow-up was not reported. 
Late higher versus lower sodium intake may reduce the incidence of hyponatraemia < 130 mmol/L (RR 0.13, 95% CI 0.03 to 0.50; I2 = 0%; 2 studies, 69 infants; RD −0.42, 95% CI −0.59 to −0.24; number needed to treat for an additional beneficial outcome 2, 95% CI 2 to 4; low-certainty evidence). The evidence is uncertain for an effect on hypernatraemia ≥ 150 mmol/L (RR 7.88, 95% CI 0.43 to 144.81; I2 = 0%; 2 studies, 69 infants; very low-certainty evidence). A single small study reported that later higher versus lower sodium intake may reduce the incidence of postnatal growth failure (RR 0.25, 95% CI 0.09 to 0.69; 1 study; 29 infants; low-certainty evidence). The evidence is uncertain for an effect on necrotising enterocolitis (RR 0.07, 95% CI 0.00 to 1.25; 1 study, 49 infants; very low-certainty evidence) and chronic lung disease (RR 2.03, 95% CI 0.80 to 5.20; 1 study, 49 infants; very low-certainty evidence).\nEarly and late (day 1 to 28 after birth) higher versus lower sodium intake for preterm infants\nEarly and late higher versus lower sodium intake may not have an effect on hypernatraemia ≥ 150 mmol/L (RR 2.50, 95% CI 0.63 to 10.00; 1 study, 20 infants; very low-certainty evidence). No other outcomes were reported.\nAuthors' conclusions\nEarly (< 7 days following birth) higher sodium supplementation may result in an increased incidence of hypernatraemia and may result in a similar incidence of hyponatraemia compared to lower supplementation. We are uncertain if there are any effects on mortality or neonatal morbidity. Growth and longer-term development outcomes were largely unreported in trials of early sodium supplementation.\nLate (≥ 7 days following birth) higher sodium supplementation may reduce the incidence of hyponatraemia. We are uncertain if late higher intake affects the incidence of hypernatraemia compared to lower supplementation. Late higher sodium intake may reduce postnatal growth failure. We are uncertain if late higher sodium intake affects mortality, other neonatal morbidities or longer-term development.\nWe are uncertain if early and late higher versus lower sodium supplementation affects outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We found 4 studies with 103 babies that compared early (less than 7 days after birth), 4 studies with 138 babies that compared late (7 or more days after birth), and one study with 20 babies that reported early and late higher versus lower sodium intakes." } ]
query-laysum
23398
[ { "role": "user", "content": "Abstract: Background\nVitamin D and related compounds have been used to prevent osteoporotic fractures in older people. This is the third update of a Cochrane review first published in 1996.\nObjectives\nTo determine the effects of vitamin D or related compounds, with or without calcium, for preventing fractures in post-menopausal women and older men.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (to December 2012), the Cochrane Central Register of Controlled Trials (2012, Issue 12), MEDLINE (1966 to November Week 3 2012), EMBASE (1980 to 2012 Week 50), CINAHL (1982 to December 2012), BIOSIS (1985 to 3 January 2013), Current Controlled Trials (December 2012) and reference lists of articles.\nSelection criteria\nRandomised or quasi-randomised trials that compared vitamin D or related compounds, alone or with calcium, against placebo, no intervention or calcium alone, and that reported fracture outcomes in older people. The primary outcome was hip fracture.\nData collection and analysis\nTwo authors independently assessed trial risk of selection bias and aspects of methodological quality, and extracted data. Data were pooled, where possible, using the fixed-effect model, or the random-effects model when heterogeneity between studies appeared substantial.\nMain results\nWe included 53 trials with a total of 91,791 participants. Thirty-one trials, with sample sizes ranging from 70 to 36,282 participants, examined vitamin D (including 25-hydroxy vitamin D) with or without calcium in the prevention of fractures in community, nursing home or hospital inpatient populations. Twelve of these 31 trials had participants with a mean or median age of 80 years or over.\nAnother group of 22 smaller trials examined calcitriol or alfacalcidol (1-alphahydroxyvitamin D3), mostly with participants who had established osteoporosis. These trials were carried out in the setting of institutional referral clinics or hospitals.\nIn the assessment of risk of bias for random sequence generation, 21 trials (40%) were deemed to be at low risk, 28 trials (53%) at unclear risk and four trials at high risk (8%). For allocation concealment, 22 trials were at low risk (42%), 29 trials were at unclear risk (55%) and two trials were at high risk (4%).\nThere is high quality evidence that vitamin D alone, in the formats and doses tested, is unlikely to be effective in preventing hip fracture (11 trials, 27,693 participants; risk ratio (RR) 1.12, 95% confidence intervals (CI) 0.98 to 1.29) or any new fracture (15 trials, 28,271 participants; RR 1.03, 95% CI 0.96 to 1.11).\nThere is high quality evidence that vitamin D plus calcium results in a small reduction in hip fracture risk (nine trials, 49,853 participants; RR 0.84, 95% confidence interval (CI) 0.74 to 0.96; P value 0.01). In low-risk populations (residents in the community: with an estimated eight hip fractures per 1000 per year), this equates to one fewer hip fracture per 1000 older adults per year (95% CI 0 to 2). In high risk populations (residents in institutions: with an estimated 54 hip fractures per 1000 per year), this equates to nine fewer hip fractures per 1000 older adults per year (95% CI 2 to 14).\nThere is high quality evidence that vitamin D plus calcium is associated with a statistically significant reduction in incidence of new non-vertebral fractures. 
However, there is only moderate quality evidence of an absence of a statistically significant preventive effect on clinical vertebral fractures. There is high quality evidence that vitamin D plus calcium reduces the risk of any type of fracture (10 trials, 49,976 participants; RR 0.95, 95% CI 0.90 to 0.99).\nIn terms of the results for adverse effects: mortality was not adversely affected by either vitamin D or vitamin D plus calcium supplementation (29 trials, 71,032 participants, RR 0.97, 95% CI 0.93 to 1.01). Hypercalcaemia, which was usually mild (2.6 to 2.8 mmol/L), was more common in people receiving vitamin D or an analogue, with or without calcium (21 trials, 17,124 participants, RR 2.28, 95% CI 1.57 to 3.31), especially for calcitriol (four trials, 988 participants, RR 4.41, 95% CI 2.14 to 9.09), than in people receiving placebo or control. There was also a small increased risk of gastrointestinal symptoms (15 trials, 47,761 participants, RR 1.04, 95% CI 1.00 to 1.08), especially for calcium plus vitamin D (four trials, 40,524 participants, RR 1.05, 95% CI 1.01 to 1.09), and a significant increase in renal disease (11 trials, 46,548 participants, RR 1.16, 95% CI 1.02 to 1.33). Other systematic reviews have found an increased association of myocardial infarction with supplemental calcium; and evidence of increased myocardial infarction and stroke, but decreased cancer, with supplemental calcium plus vitamin D, without an overall effect on mortality.\nAuthors' conclusions\nVitamin D alone is unlikely to prevent fractures in the doses and formulations tested so far in older people. Supplements of vitamin D and calcium may prevent hip or any type of fracture. There was a small but significant increase in gastrointestinal symptoms and renal disease associated with vitamin D and calcium. This review found that there was no increased risk of death from taking calcium and vitamin D.\n\nGiven the provided abstract, please respond to the following query: \"Why might vitamin D help?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Vitamin D is necessary for building strong bone. Older people often have low vitamin D levels because of lack of exposure to sunlight and low consumption of vitamin D in their diet. Therefore, it has been suggested that taking additional vitamin D in the form of supplements may help to reduce the risk of fractures of the hip and other bones." } ]
query-laysum
23399
[ { "role": "user", "content": "Abstract: Background\nStents are placed in the femoropopliteal arteries for numerous reasons, such as atherosclerotic disease, the need for dissection, and perforation of the arteries, and can become stenosed with the passage of time. When a stent develops a flow-limiting stenosis, this process is known as \"in-stent stenosis.\" It is thought that in-stent restenosis is caused by a process known as \"intimal hyperplasia\" rather than by the progression of atherosclerotic disease. Management of in-stent restenosis may include performing balloon angioplasty, deploying another stent within the stenosed stent to force it open, and creating a bypass to deliver blood around the stent. The role of drug-eluting technologies, such as drug-eluting balloons (DEBs), in the management of in-stent restenosis is unclear. Drug-eluting balloons might function by coating the inside of stenosed stents with cytotoxic chemicals such as paclitaxel and by inhibiting the hyperplastic processes responsible for in-stent restenosis. It is important to perform this systematic review to evaluate the efficacy of DEB because of the potential for increased expenses associated with DEBs over uncoated balloon angioplasty, also known as plain old balloon angioplasty (POBA).\nObjectives\nTo assess the safety and efficacy of DEBs compared with uncoated balloon angioplasty in people with in-stent restenosis of the femoropopliteal arteries as assessed by criteria such as amputation-free survival, vessel patency, target lesion revascularization, binary restenosis rate, and death. We define \"in-stent restenosis\" as 50% or greater narrowing of a previously stented vessel by duplex ultrasound or angiography.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to November 28, 2017. Review authors also undertook reference checking to identify additional studies.\nSelection criteria\nWe included all randomized controlled trials that compared DEBs versus uncoated balloon angioplasty for treatment of in-stent restenosis in the femoropopliteal arteries.\nData collection and analysis\nTwo review authors (AK, WA) independently selected appropriate trials and performed data extraction, assessment of trial quality, and data analysis. The senior review author (AD) adjudicated any disagreements.\nMain results\nThree trials that randomized a combined total of 263 participants met the review inclusion criteria. All three trials examined the treatment of symptomatic in-stent restenosis within the femoropopliteal arteries. These trials were carried out in Germany and Austria and used paclitaxel as the agent in the drug-eluting balloons. Two of the three trials were industry sponsored. Two companies manufactured the drug-eluting balloons (Eurocor, Bonn, Germany; Medtronic, Fridley, Minnesota, USA). The trials examined both anatomical and clinical endpoints. We noted heterogeneity in the frequency of bailout stenting deployment between studies as well as in the dosage of paclitaxel applied by the DEBs. Using GRADE assessment criteria, we determined that the certainty of evidence presented was very low for the outcomes of amputation, target lesion revascularization, binary restenosis, death, and improvement of one or more Rutherford categories. 
Most participants were followed up to 12 months, but one trial followed participants for up to 24 months.\nTrial results show no difference in the incidence of amputation between DEBs and uncoated balloon angioplasty. DEBs showed better outcomes for up to 24 months for target lesion revascularization (odds ratio (OR) 0.05, 95% confidence Interval (CI) 0.00 to 0.92 at six months; OR 0.24, 95% CI 0.08 to 0.70 at 24 months) and at six and 12 months for binary restenosis (OR 0.28, 95% CI 0.14 to 0.56 at six months; OR 0.34, 95% CI 0.15 to 0.76 at 12 months). Participants treated with DEBs also showed improvement of one or more Rutherford categories at six and 12 months (OR 1.81, 95% CI 1.02 to 3.21 at six months; OR 2.08, 95% CI 1.13 to 3.83 at 12 months). Data show no clear differences in death between DEBs and uncoated balloon angioplasty. Data were insufficient for subgroup or sensitivity analyses to be conducted.\nAuthors' conclusions\nBased on a meta-analysis of three trials with 263 participants, evidence suggests an advantage for DEBs compared with uncoated balloon angioplasty for anatomical endpoints such as target lesion revascularization (TLR) and binary restenosis, and for one clinical endpoint - improvement in Rutherford category post intervention for up to 24 months. However, the certainty of evidence for all these outcomes is very low due to the small number of included studies and participants and the high risk of bias in study design. Adequately powered and carefully constructed randomized controlled trials are needed to adequately investigate the role of drug-eluting technologies in the management of in-stent restenosis.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Results showed that DEBs were not better for participants than uncoated balloon angioplasty with regard to the need for amputation. At 24 months of follow-up, DEBs were associated with fewer target lesion revascularizations, which refers to the need to perform a procedure on a stent that had already been treated with a DEB or an uncoated balloon angioplasty for in-stent restenosis. DEBs were also found to have better binary restenosis rates, which refers to the percentage of treated stents that develop new stenosis after they have been treated with a DEB or an uncoated balloon angioplasty. Finally, more people who were treated with DEBs described improvement in their leg symptoms, as measured by a change in their Rutherford category. DEBs were not found to be better for participants than uncoated balloon angioplasty with regard to patient death." } ]
query-laysum
23400
[ { "role": "user", "content": "Abstract: Background\nUrinary incontinence can affect 40% to 60% of people admitted to hospital after a stroke, with 25% still having problems when discharged from hospital and 15% remaining incontinent after one year.\nThis is an update of a review published in 2005 and updated in 2008.\nObjectives\nTo assess the effects of interventions for treating urinary incontinence after stroke in adults at least one-month post-stroke.\nSearch methods\nWe searched the Cochrane Incontinence and Cochrane Stroke Specialised Registers (searched 30 October 2017 and 1 November 2017 respectively), which contain trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, CINAHL, ClinicalTrials.gov, WHO ICTRP and handsearched journals and conference proceedings.\nSelection criteria\nWe included randomised or quasi-randomised controlled trials.\nData collection and analysis\nTwo review authors independently undertook data extraction, risk of bias assessment and implemented GRADE.\nMain results\nWe included 20 trials (reporting 21 comparisons) with 1338 participants. Data for prespecified outcomes were not available except where reported below.\nIntervention versus no intervention/usual care\nBehavioural interventions: Low-quality evidence suggests behavioural interventions may reduce the mean number of incontinent episodes in 24 hours (mean difference (MD) –1.00, 95% confidence interval (CI) –2.74 to 0.74; 1 trial; 18 participants; P = 0.26). Further, low-quality evidence from two trials suggests that behavioural interventions may make little or no difference to quality of life (SMD -0.99, 95% CI -2.83 to 0.86; 55 participants).\nSpecialised professional input interventions: One trial of moderate-quality suggested structured assessment and management by continence nurse practitioners probably made little or no difference to the number of people continent three months after treatment (risk ratio (RR) 1.28, 95% CI 0.81 to 2.02; 121 participants; equivalent to an increase from 354 to 453 per 1000, 95% CI 287 to 715).\nComplementary therapy: Five trials assessed complementary therapy using traditional acupuncture, electroacupuncture and ginger-salt-partitioned moxibustion plus routine acupuncture. Low-quality evidence from five trials suggested that complementary therapy may increase the number of participants continent after treatment; participants in the treatment group were three times more likely to be continent (RR 2.82, 95% CI 1.57 to 5.07; 524 participants; equivalent to an increase from 193 to 544 per 1000, 95% CI 303 to 978). Adverse events were reported narratively in one study of electroacupuncture, reporting on bruising and postacupuncture abdominal pain in the intervention group.\nPhysical therapy: Two trials reporting three comparisons suggest that physical therapy using transcutaneous electrical nerve stimulation (TENS) may reduce the mean number of incontinent episodes in 24 hours (MD –4.76, 95% CI –8.10 to –1.41; 142 participants; low-quality evidence). 
One trial of TENS reporting two comparisons found that the intervention probably improves overall functional ability (MD 8.97, 95% CI 1.27 to 16.68; 81 participants; moderate-quality evidence).\nIntervention versus placebo\nPhysical therapy: One trial of physical therapy suggests TPTNS may make little or no difference to the number of participants continent after treatment (RR 0.75, 95% CI 0.19 to 3.04; 54 participants) or number of incontinent episodes (MD –1.10, 95% CI –3.99 to 1.79; 39 participants). One trial suggested improvement in the TPTNS group at 26-weeks (OR 0.04, 95% CI 0.004 to 0.41) but there was no evidence of a difference in perceived bladder condition at six weeks (OR 2.33, 95% CI 0.63 to 8.65) or 12 weeks (OR 1.22, 95% CI 0.29 to 5.17). Data from one trial provided no evidence that TPTNS made a difference to quality of life measured with the ICIQLUTSqol (MD 3.90, 95% CI –4.25 to 12.05; 30 participants). Minor adverse events, such as minor skin irritation and ankle cramping, were reported in one study.\nPharmacotherapy interventions: There was no evidence from one study that oestrogen therapy made a difference to the mean number of incontinent episodes per week in mild incontinence (paired samples, MD –1.71, 95% CI –3.51 to 0.09) or severe incontinence (paired samples, MD –6.40, 95% CI –9.47 to –3.33). One study reported no adverse events.\nSpecific intervention versus another intervention\nBehavioural interventions: One trial comparing a behavioural intervention (timed voiding) with a pharmacotherapy intervention (oxybutynin) contained no useable data.\nComplementary therapy: One trial comparing different acupuncture needles and depth of needle insertion to assess the effect on incontinence reported that, after four courses of treatment, 78.1% participants in the elongated needle group had no incontinent episodes versus 40% in the filiform needle group (57 participants). This trial was assessed as unclear or high for all types of bias apart from incomplete outcome data.\nCombined intervention versus single intervention\nOne trial compared a combined intervention (sensory motor biofeedback plus timed prompted voiding) against a single intervention (timed voiding). The combined intervention may make little or no difference to the number of participants continent after treatment (RR 0.55, 95% CI 0.06 to 5.21; 23 participants; equivalent to a decrease from 167 to 92 per 1000, 95% CI 10 to 868) or to the number of incontinent episodes (MD 2.20, 95% CI 0.12 to 4.28; 23 participants).\nSpecific intervention versus attention control\nPhysical therapy interventions: One study found TPTNS may make little or no difference to the number of participants continent after treatment compared to an attention control group undertaking stretching exercises (RR 1.33, 95% CI 0.38 to 4.72; 24 participants; equivalent to an increase from 250 to 333 per 1000, 95% CI 95 to 1000).\nAuthors' conclusions\nThere is insufficient evidence to guide continence care of adults in the rehabilitative phase after stroke. As few trials tested the same intervention, conclusions are drawn from few, usually small, trials. CIs were wide, making it difficult to ascertain if there were clinically important differences. Only four trials had adequate allocation concealment and many were limited by poor reporting, making it impossible to judge the extent to which they were prone to bias. 
More appropriately powered, multicentre trials of interventions are required to provide robust evidence for interventions to improve urinary incontinence after stroke.\n\nGiven the provided abstract, please respond to the following query: \"Authors' conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "High-quality trials comparing different treatments to usual care or no treatment involving larger numbers of participants are needed." } ]
query-laysum
23401
[ { "role": "user", "content": "Abstract: Background\nThis is an updated version of the original Cochrane review first published in 2008. Scoliosis surgery is often associated with substantial blood loss and potentially detrimental effects in children. Antifibrinolytic agents are often used to reduce perioperative blood loss. Clinical trials have evaluated their efficacy in children undergoing surgical correction of scoliosis, but no systematic review has been published. This review was first published in 2008 and was updated in 2016.\nObjectives\nTo assess the efficacy and safety of aprotinin, tranexamic acid and aminocaproic acid in reducing blood loss and transfusion requirements in children undergoing surgery for correction of idiopathic or secondary scoliosis.\nSearch methods\nWe ran the search for the previous review in June 2007. For this updated version, we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 7), MEDLINE (1946 to August week 1 2015), Embase (1947 to 2015 week 38), Latin American Caribbean Health Sciences Literature (LILACS) (1982 to 14 August 2015), Database of Abstracts of Reviews of Effects (DARE; 2015, Issue 2) and reference lists of reviews and retrieved articles for randomized controlled trials in any language. We also checked the clinical trial registry at http://www.clinicaltrials.gov on 8 October 2015.\nSelection criteria\nWe included blinded and unblinded randomized controlled trials (RCTs) that evaluated the effects of antifibrinolytics on perioperative blood loss in children 18 years of age or younger and undergoing scoliosis surgery.\nData collection and analysis\nTwo review authors independently extracted data. The primary outcome was total blood loss (intraoperative and postoperative combined). Secondary efficacy outcomes were the number of participants receiving blood transfusion (both autologous and allogeneic) or receiving allogeneic blood transfusion alone, and the total amount of blood transfused. Safety outcomes included the number of deaths, the number of participants reporting any adverse event or a serious adverse event, withdrawals due to adverse events and the number of participants experiencing a specific adverse event (i.e. renal insufficiency, hypersensitivity or thrombosis). We assessed methodological risk of bias for each included study and employed the GRADE approach to assess the overall quality of the evidence.\nMain results\nWe included three new studies (201 participants) in this updated review, for a total of nine studies (455 participants). All but one study employed placebo as the control group intervention. For the primary outcome, antifibrinolytic drugs decreased the amount of perioperative blood loss by 427 mL (95% confidence interval (CI) 251 to 603 mL), for a reduction of over 20% versus placebo. We rated the quality of evidence for our primary outcome as low on the basis of unclear risk of bias for several domains in most studies and the small total number of participants.\nFor secondary outcomes, fewer participants receiving antifibrinolytic drugs received transfusion (allogeneic or autologous) versus those receiving placebo (risk ratio (RR) 0.65, 95% CI 0.50 to 0.85, number needed to treat to prevent one additional harmful outcome (NNTp) 5; very low-quality evidence). Only two studies specifically evaluated the number of participants transfused with only allogeneic blood (risk difference (RD) -0.15, 95% CI -0.26 to -0.03, NNTp 7; very low-quality evidence). 
Antifibrinolytic drugs decreased the volume of blood transfused by 327 mL (95% CI -186 to -469 mL; low-quality evidence).\nNo study reported deaths in active or control groups. Data were insufficient to allow performance of meta-analysis for any safety outcome. No studies adequately described their methods in assessing safety. The only adverse event of note occurred in one study, when three participants in the placebo group developed postoperative deep vein thrombosis.\nAuthors' conclusions\nSince the last published version of this review (2008), we have found three new studies. Additional evidence shows that antifibrinolytics reduce the requirement for both autologous and allogeneic blood transfusion. Limited evidence of low to very low quality supports the use of antifibrinolytic drugs for reducing blood loss and decreasing the risk, and volume, of transfusion in children undergoing scoliosis surgery. Evidence is insufficient to support the use of a particular agent, although tranexamic acid may be preferred, given its widespread availability. The optimal dose regimen for any of these three agents has not been established. Although adverse events appear to occur infrequently, evidence is insufficient to confirm the safety of these agents, particularly for rare but potentially catastrophic events. No long-term safety data are available.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Antifibrinolytic drugs reduced the amount of blood lost during, or immediately after, surgery, by 427 millilitres (mL) (more than a 20% reduction in blood loss) and the amount of blood transfused during the same period by 327 mL. The number of children who received transfusions, either through blood from a donor or through a combination of blood from a donor and their own saved blood, was also significantly decreased. We assessed the quality of the evidence for all of these findings as low or very low because of the small numbers of participants, some concerns about study designs and imprecision in study findings.\nNo children included in any studies died, and very few adverse events were reported, although three children receiving placebo in one study developed a clot. However, these studies may not have adequately looked for adverse events, or study authors might not have reported them fully. Also, the number of children evaluated was too small and the time of follow-up too short to allow review authors to draw any conclusions about their safety." } ]
query-laysum
23402
[ { "role": "user", "content": "Abstract: Background\nPrevention of relapse is a major issue in the management of quiescent Crohn's disease (CD). Current therapies (e.g. methotrexate, biologics, 6-mercaptopurine and azathioprine) may be effective for maintaining remission in CD, but these drugs may cause significant adverse events. Interventions that are effective and safe for maintenance of remission in CD are desirable.\nObjectives\nThe primary objectives were to evaluate the efficacy and safety of enteral nutrition for the maintenance of remission in CD and to assess the impact of formula composition on effectiveness.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, the Cochrane IBD Group Specialized Register and clinicaltrials.gov from inception to 27 July 2018. We also searched references of retrieved studies and reviews.\nSelection criteria\nRandomised controlled trials (RCTs) including participants of any age with quiescent CD were considered for inclusion. Studies that compared enteral nutrition with no intervention, placebo or any other intervention were selected for review.\nData collection and analysis\nTwo authors independently screened studies for inclusion, extracted data and assessed methodological quality using the Cochrane risk of bias tool. The primary outcome was clinical or endoscopic relapse as defined by the primary studies. Secondary outcomes included anthropometric measures (i.e. height and weight), quality of life (QoL), adverse events, serious adverse events and withdrawal due to adverse events. We calculated the risk ratio and 95% confidence interval (CI) for dichotomous outcomes. For continuous outcomes, we calculated the mean difference and 95% CI. A random-effects model was used for the statistical analysis. We used the GRADE criteria to assess the overall certainty of the evidence supporting the primary outcome and selected secondary outcomes.\nMain results\nFour RCTs (262 adult participants) met the inclusion criteria. One study (N = 33) compared an elemental diet to a non-elemental (polymeric) diet. One study (N = 51) compared a half elemental diet to a regular free diet. Another study (N = 95) compared an elemental diet to 6-mercaptopurine (6-MP) or a no treatment control group. One study (N= 83) compared a polymeric diet to mesalamine. Two studies were rated as high risk of bias due to lack of blinding or incomplete outcome data. The other two studies were judged to have an unclear risk of bias. The studies were not pooled due to differences in control interventions and the way outcomes were assessed.\nThe effect of an elemental diet compared to a polymeric diet on remission rates or withdrawal due to adverse events is uncertain. Fifty-eight per cent (11/19) of participants in the elemental diet group relapsed at 12 months compared to 57% (8/14) of participants in the polymeric diet group (RR 1.01, 95% CI 0.56 to 1.84; very low certainty evidence). Thirty-two per cent (6/19) of participants in the elemental diet group were intolerant to the enteral nutritional formula because of taste or smell and were withdrawn from the study in the first 2 weeks compared to zero participants (0/14) in the polymeric diet group (RR 9.75, 95% CI 0.59 to 159.93; low certainty evidence). Anthropometric measures, QoL, adverse events and serious adverse events were not reported as outcomes.\nThe effect of an elemental diet (half of total daily calorie requirements) compared to a normal free diet on relapse rates is uncertain. 
Thirty-five per cent (9/26) of participants in the elemental diet group relapsed at 12 months compared to 64% (16/25) of participants in the free diet group (RR 0.54, 95% CI 0.30 to 0.99; very low certainty evidence). No adverse events were reported. This study reported no differences in weight change between the two diet groups. Height and QoL were not reported as outcomes.\nThe effect of an elemental diet compared to 6-MP on relapse rates or adverse events is uncertain. Thirty-eight per cent (12/32) of participants in the elemental diet group relapsed at 12 months compared to 23% (7/30) of participants in the 6-MP group (RR 1.61; 95% CI 0.73 to 3.53; very low certainty evidence). Three per cent (1/32) of participants in the elemental diet group had an adverse event compared to 13% (4/30) of participants in the 6-MP group (RR 0.23, 95% CI 0.03 to 1.98; low certainty evidence). Adverse events in the elemental diet group included surgery due to worsening CD. Adverse events in the 6-MP group included liver injury (n = 2), hair loss (n = 1) and surgery due to an abscess (n = 1). No serious adverse events or withdrawals due to adverse events were reported. Weight, height and QoL were not reported as outcomes\nThe effect of a polymeric diet compared to mesalamine on relapse rates and weight is uncertain. Forty-two per cent (18/43) of participants in the polymeric diet group relapsed at 6 months compared to 55% (22/40) of participants in the mesalamine group (RR 0.76; 95% CI 0.49 to 1.19; low certainty evidence). The mean difference in weight gain over the study period was 1.9 kg higher in the polymeric diet group compared to mesalamine (95% CI -4.62 to 8.42; low certainty evidence). Two participants in the polymeric diet group experienced nausea and four had diarrhoea. It is unclear if any participants in the mesalamine group had an adverse event. Height, QoL, serious adverse events and withdrawal due to adverse events were not reported as outcomes.\nAuthors' conclusions\nThe results for the outcomes assessed in this review are uncertain and no firm conclusions regarding the efficacy and safety of enteral nutrition in quiescent CD can be drawn. More research is needed to determine the efficacy and safety of using enteral nutrition as maintenance therapy in CD. Currently, there are four ongoing studies (estimated enrolment of 280 participants). This review will be updated when the results of these studies are available.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers investigate?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The researchers studied whether enteral nutrition helps to maintain remission in people with Crohn's disease. The researchers also investigated whether one type of enteral nutrition was better than another (e.g. elemental vs.non-elemental) for maintaining remission in people with Crohn's disease." } ]
query-laysum
23403
[ { "role": "user", "content": "Abstract: Background\nFetal compromise in the term pregnancy is suspected when the following clinical indicators are present: intrauterine growth restriction (IUGR), decreased fetal movement (DFM), or when investigations such as cardiotocography (CTG) and ultrasound reveal results inconsistent with standard measurements. Pathological results would necessitate the need for immediate delivery, but the management for ‘suspicious’ results remains unclear and varies widely across clinical centres. There is clinical uncertainty as to how to best manage women presenting with a suspected term compromised baby in an otherwise healthy pregnancy.\nObjectives\nTo assess, using the best available evidence, the effects of immediate delivery versus expectant management of the term suspected compromised baby on neonatal, maternal and long-term outcomes.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2015) and reference lists of retrieved studies.\nSelection criteria\nRandomised or quasi-randomised controlled trials comparing expectant management versus planned early delivery for women with a suspected compromised fetus from 37 weeks' gestation or more.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and assessed trial quality. Two review authors independently extracted data. Data were checked for accuracy. We assessed the quality of the evidence using the GRADE approach.\nMain results\nOf the 20 reports identified by the search strategy, we included three trials (546 participants: 269 to early delivery and 277 to expectant management), which met our inclusion criteria. Two of the trials compared outcomes in 492 pregnancies with IUGR of the fetus, and one in 54 pregnancies with oligohydramnios. All three trials were of reasonable quality and at low risk of bias. The level of evidence was graded moderate, low or very low, downgrading mostly for imprecision and for some indirectness. Overall, there was no difference in the primary neonatal outcomes of perinatal mortality (no deaths in either group, one trial, 459 women, evidence graded moderate), major neonatal morbidity (risk ratio (RR) 0.15, 95% confidence interval (CI) 0.01 to 2.81, one trial, 459 women, evidence graded low), or neurodevelopmental disability/impairment at two years of age (RR 2.04, 95% CI 0.62 to 6.69,one trial, 459 women, evidence graded low). There was no difference in the risk of necrotising enterocolitis (one trial, 333 infants) or meconium aspiration (one trial, 459 infants), There was also no difference in the reported primary maternal outcomes: maternal mortality (RR 3.07, 95% CI 0.13 to 74.87, one trial, 459 women, evidence graded low), and significant maternal morbidity (RR 0.92, 95% CI 0.38 to 2.22, one trial, 459 women, evidence graded low).\nThe gestational age at birth was on average 10 days earlier in women randomised to early delivery (mean difference (MD) -9.50, 95% CI -10.82 to -8.18, one trial, 459 women) and women in the early delivery group were significantly less likely to have a baby beyond 40 weeks' gestation (RR 0.10, 95% CI 0.01 to 0.67, one trial, 33 women). Significantly more infants in the planned early delivery group were admitted to intermediate care nursery (RR 1.28, 95% CI 1.02 to 1.61, two trials, 491 infants). 
There was no difference in the risk of respiratory distress syndrome, (one trial, 333 infants), Apgar score less than seven at five minutes (three trials, 546 infants), resuscitation required (one trial, 459 infants), mechanical ventilation (one trial, 337 infants), admission to neonatal intensive care unit (NICU) (RR 0.88, 95% CI 0.35 to 2.23, three trials, 545 infants, evidence graded very low), length of stay in NICU/SCN (one trial, 459 infants), and sepsis (two trials, 366 infants).\nBabies in the expectant management group were more likely to be < 2.3rd centile for birthweight (RR 0.51, 95% CI 0.36 to 0.73, two trials, 491 infants), however there was no difference in the proportion of babies with birthweight < 10th centile (RR 0.98, 95% CI 0.88 to 1.10). There was no difference in any of the reported maternal secondary outcomes including: caesarean section rates (RR 1.02, 95% CI 0.65 to 1.59, three trials, 546 women, evidence graded low), placental abruption (one trial, 459 women), pre-eclampsia (one trial, 459 women), vaginal birth (three trials 546 women), assisted vaginal birth (three trials 546 women), breastfeeding rates (one trial, 218 women), and number of weeks of breastfeeding after delivery one trial, 124 women). There was an expected increase in induction in the early delivery group (RR 2.05, 95% CI 1.78 to 2.37, one trial, 459 women).\nNo data were reported for the pre-specified secondary neonatal outcomes of the number of days of mechanical ventilation, moderate-severe hypoxic ischaemic encephalopathy or need for therapeutic hypothermia. Likewise, no data were reported for secondary maternal outcomes of postnatal infection, maternal satisfaction or views of care.\nAuthors' conclusions\nA policy for planned early delivery versus expectant management for a suspected compromised fetus at term does not demonstrate any differences in major outcomes of perinatal mortality, significant neonatal or maternal morbidity or neurodevelopmental disability. In women randomised to planned early delivery, the gestational age at birth was on average 10 days earlier, women were less likely to have a baby beyond 40 weeks' gestation, they were more likely to be induced and infants were more likely to be admitted to intermediate care nursery. There was also a significant difference in the proportion of babies with a birthweight centile < 2.3rd, however this did not translate into a reduction in morbidity. The review is informed by only one large trial and two smaller trials assessing fetuses with IUGR or oligohydramnios and therefore cannot be generalised to all term pregnancies with suspected fetal compromise. There are other indications for suspecting compromise in a fetus at or near term such as maternal perception of DFM, and ultrasound and/or CTG abnormalities. Future randomised trials need to assess effectiveness of timing of delivery for these indications.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "What care should be given to babies who are thought to be coping poorly towards the end of pregnancy (beyond 37 weeks)? A baby may be in this situation because the placenta is no longer functioning well and this means the baby may be short of nutrition or oxygen. 
We asked in this Cochrane review if it is better to induce labour or do a caesarean section (both ways of ensuring the baby is born earlier) rather than letting the pregnancy continue until labour starts by itself." } ]
query-laysum
23404
[ { "role": "user", "content": "Abstract: Background\nThis is an update of the Cochrane review published in Issue 5, 2011.\nWorldwide, cervical cancer is the fourth commonest cancer affecting women. High-risk human papillomavirus (HPV) infection is causative in 99.7% of cases. Other risk factors include smoking, multiple sexual partners, the presence of other sexually transmitted diseases and immunosuppression. Primary prevention strategies for cervical cancer focus on reducing HPV infection via vaccination and data suggest that this has the potential to prevent nearly 90% of cases in those vaccinated prior to HPV exposure. However, not all countries can afford vaccination programmes and, worryingly, uptake in many countries has been extremely poor. Secondary prevention, through screening programmes, will remain critical to reducing cervical cancer, especially in unvaccinated women or those vaccinated later in adolescence. This includes screening for the detection of pre-cancerous cells, as well as high-risk HPV.\nIn the UK, since the introduction of the Cervical Screening Programme in 1988, the associated mortality rate from cervical cancer has fallen. However, worldwide, there is great variation between countries in both coverage and uptake of screening. In some countries, national screening programmes are available whereas in others, screening is provided on an opportunistic basis. Additionally, there are differences within countries in uptake dependent on ethnic origin, age, education and socioeconomic status. Thus, understanding and incorporating these factors in screening programmes can increase the uptake of screening. This, together with vaccination, can lead to cervical cancer becoming a rare disease.\nObjectives\nTo assess the effectiveness of interventions aimed at women, to increase the uptake, including informed uptake, of cervical screening.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), Issue 6, 2020. MEDLINE, Embase and LILACS databases up to June 2020. We also searched registers of clinical trials, abstracts of scientific meetings, reference lists of included studies and contacted experts in the field.\nSelection criteria\nRandomised controlled trials (RCTs) of interventions to increase uptake/informed uptake of cervical screening.\nData collection and analysis\nTwo review authors independently extracted data and assessed risk of bias. Where possible, the data were synthesised in a meta-analysis using standard Cochrane methodology.\nMain results\nComprehensive literature searches identified 2597 records; of these, 70 met our inclusion criteria, of which 69 trials (257,899 participants) were entered into a meta-analysis. The studies assessed the effectiveness of invitational and educational interventions, lay health worker involvement, counselling and risk factor assessment. Clinical and statistical heterogeneity between trials limited statistical pooling of data.\nOverall, there was moderate-certainty evidence to suggest that invitations appear to be an effective method of increasing uptake compared to control (risk ratio (RR) 1.71, 95% confidence interval (CI) 1.49 to 1.96; 141,391 participants; 24 studies). Additional analyses, ranging from low to moderate-certainty evidence, suggested that invitations that were personalised, i.e. personal invitation, GP invitation letter or letter with a fixed appointment, appeared to be more successful. 
More specifically, there was very low-certainty evidence to support the use of GP invitation letters as compared to other authority sources' invitation letters within two RCTs, one RCT assessing 86 participants (RR 1.69 95% CI 0.75 to 3.82) and another, showing a modest benefit, included over 4000 participants (RR 1.13, 95 % CI 1.05 to 1.21). Low-certainty evidence favoured personalised invitations (telephone call, face-to-face or targeted letters) as compared to standard invitation letters (RR 1.32, 95 % CI 1.11 to 1.21; 27,663 participants; 5 studies). There was moderate-certainty evidence to support a letter with a fixed appointment to attend, as compared to a letter with an open invitation to make an appointment (RR 1.61, 95 % CI 1.48 to 1.75; 5742 participants; 5 studies).\nLow-certainty evidence supported the use of educational materials (RR 1.35, 95% CI 1.18 to 1.54; 63,415 participants; 13 studies) and lay health worker involvement (RR 2.30, 95% CI 1.44 to 3.65; 4330 participants; 11 studies). Other less widely reported interventions included counselling, risk factor assessment, access to a health promotion nurse, photo comic book, intensive recruitment and message framing. It was difficult to deduce any meaningful conclusions from these interventions due to sparse data and low-certainty evidence. However, having access to a health promotion nurse and attempts at intensive recruitment may have increased uptake.\nOne trial reported an economic outcome and randomised 3124 participants within a national screening programme to either receive the standard screening invitation, which would incur a fee, or an invitation offering screening free of charge. No difference in the uptake at 90 days was found (574/1562 intervention versus 612/1562 control, (RR 0.94, 95% CI: 0.86 to 1.03).\nThe use of HPV self-testing as an alternative to conventional screening may also be effective at increasing uptake and this will be covered in a subsequent review. Secondary outcomes, including cost data, were incompletely documented. The majority of cluster-RCTs did not account for clustering or adequately report the number of clusters in the trial in order to estimate the design effect, so we did not selectively adjust the trials. It is unlikely that reporting of these trials would impact the overall conclusions and robustness of the results. Of the meta-analyses that could be performed, there was considerable statistical heterogeneity, and this should be borne in mind when interpreting these findings. Given this and the low to moderate evidence, further research may change these findings. The risk of bias in the majority of trials was unclear, and a number of trials suffered from methodological problems and inadequate reporting. We downgraded the certainty of evidence because of an unclear or high risk of bias with regards to allocation concealment, blinding, incomplete outcome data and other biases.\nAuthors' conclusions\nThere is moderate-certainty evidence to support the use of invitation letters to increase the uptake of cervical screening. Low-certainty evidence showed lay health worker involvement amongst ethnic minority populations may increase screening coverage, and there was also support for educational interventions, but it is unclear what format is most effective. The majority of the studies were from developed countries and so the relevance of low- and middle-income countries (LMICs), is unclear. 
Overall, the low-certainty evidence that was identified makes it difficult to infer which interventions were best, with the exception of invitational interventions, where there appeared to be more reliable evidence.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The majority of the evidence was of a low to moderate certainty (quality) and further research may change these findings. For the majority of trials, the risk of bias was unclear, making it difficult to make firm assertions from their results." } ]
query-laysum
23405
[ { "role": "user", "content": "Abstract: Background\nPosttraumatic stress disorder (PTSD) is a prevalent and disabling disorder. Evidence that PTSD is characterised by specific psychobiological dysfunctions has contributed to a growing interest in the use of medication in its treatment.\nObjectives\nTo assess the effects of medication for reducing PTSD symptoms in adults with PTSD.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 11, November 2020); MEDLINE (1946-), Embase (1974-), PsycINFO (1967-) and PTSDPubs (all available years) either directly or via the Cochrane Common Mental Disorders Controlled Trials Register (CCMDCTR). We also searched international trial registers. The date of the latest search was 13 November 2020.\nSelection criteria\nAll randomised controlled trials (RCTs) of pharmacotherapy for adults with PTSD.\nData collection and analysis\nThree review authors (TW, JI, and NP) independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. We contacted investigators to obtain missing data. We stratified summary statistics by medication class, and by medication agent for all medications. We calculated dichotomous and continuous measures using a random-effects model, and assessed heterogeneity.\nMain results\nWe include 66 RCTs in the review (range: 13 days to 28 weeks; 7442 participants; age range 18 to 85 years) and 54 in the meta-analysis.\nFor the primary outcome of treatment response, we found evidence of beneficial effect for selective serotonin reuptake inhibitors (SSRIs) compared with placebo (risk ratio (RR) 0.66, 95% confidence interval (CI) 0.59 to 0.74; 8 studies, 1078 participants), which improved PTSD symptoms in 58% of SSRI participants compared with 35% of placebo participants, based on moderate-certainty evidence.\nFor this outcome we also found evidence of beneficial effect for the noradrenergic and specific serotonergic antidepressant (NaSSA) mirtazapine: (RR 0.45, 95% CI 0.22 to 0.94; 1 study, 26 participants) in 65% of people on mirtazapine compared with 22% of placebo participants, and for the tricyclic antidepressant (TCA) amitriptyline (RR 0.60, 95% CI 0.38 to 0.96; 1 study, 40 participants) in 50% of amitriptyline participants compared with 17% of placebo participants, which improved PTSD symptoms. These outcomes are based on low-certainty evidence. There was however no evidence of beneficial effect for the number of participants who improved with the antipsychotics (RR 0.51, 95% CI 0.16 to 1.67; 2 studies, 43 participants) compared to placebo, based on very low-certainty evidence.\nFor the outcome of treatment withdrawal, we found evidence of a harm for the individual SSRI agents compared with placebo (RR 1.41, 95% CI 1.07 to 1.87; 14 studies, 2399 participants). Withdrawals were also higher for the separate SSRI paroxetine group compared to the placebo group (RR 1.55, 95% CI 1.05 to 2.29; 5 studies, 1101 participants). Nonetheless, the absolute proportion of individuals dropping out from treatment due to adverse events in the SSRI groups was low (9%), based on moderate-certainty evidence. For the rest of the medications compared to placebo, we did not find evidence of harm for individuals dropping out from treatment due to adverse events.\nAuthors' conclusions\nThe findings of this review support the conclusion that SSRIs improve PTSD symptoms; they are first-line agents for the pharmacotherapy of PTSD, based on moderate-certainty evidence. 
The NaSSA mirtazapine and the TCA amitriptyline may also improve PTSD symptoms, but this is based on low-certainty evidence. In addition, we found no evidence of benefit for the number of participants who improved following treatment with antipsychotics compared to placebo, based on very low-certainty evidence. There remain important gaps in the evidence base, and a continued need for more effective agents in the management of PTSD.\n\nGiven the provided abstract, please respond to the following query: \"What question does this review aim to answer?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "- Is pharmacotherapy effective for reducing PTSD symptoms in adults with PTSD?" } ]
query-laysum
23406
[ { "role": "user", "content": "Abstract: Background\nRemdesivir is an antiviral medicine approved for the treatment of mild-to-moderate coronavirus disease 2019 (COVID-19). This led to widespread implementation, although the available evidence remains inconsistent. This update aims to fill current knowledge gaps by identifying, describing, evaluating, and synthesising all evidence from randomised controlled trials (RCTs) on the effects of remdesivir on clinical outcomes in COVID-19.\nObjectives\nTo assess the effects of remdesivir and standard care compared to standard care plus/minus placebo on clinical outcomes in patients treated for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (which comprises the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Embase, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform, and medRxiv) as well as Web of Science (Science Citation Index Expanded and Emerging Sources Citation Index) and WHO COVID-19 Global literature on coronavirus disease to identify completed and ongoing studies, without language restrictions. We conducted the searches on 31 May 2022.\nSelection criteria\nWe followed standard Cochrane methodology.\nWe included RCTs evaluating remdesivir and standard care for the treatment of SARS-CoV-2 infection compared to standard care plus/minus placebo irrespective of disease severity, gender, ethnicity, or setting.\nWe excluded studies that evaluated remdesivir for the treatment of other coronavirus diseases.\nData collection and analysis\nWe followed standard Cochrane methodology.\nTo assess risk of bias in included studies, we used the Cochrane RoB 2 tool for RCTs. We rated the certainty of evidence using the GRADE (Grading of Recommendations, Assessment, Development and Evaluation) approach for outcomes that were reported according to our prioritised categories: all-cause mortality, in-hospital mortality, clinical improvement (being alive and ready for discharge up to day 28) or worsening (new need for invasive mechanical ventilation or death up to day 28), quality of life, serious adverse events, and adverse events (any grade).\nWe differentiated between non-hospitalised individuals with asymptomatic SARS-CoV-2 infection or mild COVID-19 and hospitalised individuals with moderate to severe COVID-19.\nMain results\nWe included nine RCTs with 11,218 participants diagnosed with SARS-CoV-2 infection and a mean age of 53.6 years, of whom 5982 participants were randomised to receive remdesivir. Most participants required low-flow oxygen at baseline. Studies were mainly conducted in high- and upper-middle-income countries. 
We identified two studies that are awaiting classification and five ongoing studies.\nEffects of remdesivir in hospitalised individuals with moderate to severe COVID-19\nWith moderate-certainty evidence, remdesivir probably makes little or no difference to all-cause mortality at up to day 28 (risk ratio (RR) 0.93, 95% confidence interval (CI) 0.81 to 1.06; risk difference (RD) 8 fewer per 1000, 95% CI 21 fewer to 6 more; 4 studies, 7142 participants), day 60 (RR 0.85, 95% CI 0.69 to 1.05; RD 35 fewer per 1000, 95% CI 73 fewer to 12 more; 1 study, 1281 participants), or in-hospital mortality at up to day 150 (RR 0.93, 95% CI 0.84 to 1.03; RD 11 fewer per 1000, 95% CI 25 fewer to 5 more; 1 study, 8275 participants).\nRemdesivir probably increases the chance of clinical improvement at up to day 28 slightly (RR 1.11, 95% CI 1.06 to 1.17; RD 68 more per 1000, 95% CI 37 more to 105 more; 4 studies, 2514 participants; moderate-certainty evidence). It probably decreases the risk of clinical worsening within 28 days (hazard ratio (HR) 0.67, 95% CI 0.54 to 0.82; RD 135 fewer per 1000, 95% CI 198 fewer to 69 fewer; 2 studies, 1734 participants, moderate-certainty evidence).\nRemdesivir may make little or no difference to the rate of adverse events of any grade (RR 1.04, 95% CI 0.92 to 1.18; RD 23 more per 1000, 95% CI 46 fewer to 104 more; 4 studies, 2498 participants; low-certainty evidence), or serious adverse events (RR 0.84, 95% CI 0.65 to 1.07; RD 44 fewer per 1000, 95% CI 96 fewer to 19 more; 4 studies, 2498 participants; low-certainty evidence).\nWe considered risk of bias to be low, with some concerns for mortality and clinical course. We had some concerns for safety outcomes because participants who had died did not contribute information. Without adjustment, this leads to an uncertain amount of missing values and the potential for bias due to missing data.\nEffects of remdesivir in non-hospitalised individuals with mild COVID-19\nOne of the nine RCTs was conducted in the outpatient setting and included symptomatic people with a risk of progression. No deaths occurred within the 28 days observation period.\nWe are uncertain about clinical improvement due to very low-certainty evidence. Remdesivir probably decreases the risk of clinical worsening (hospitalisation) at up to day 28 (RR 0.28, 95% CI 0.11 to 0.75; RD 46 fewer per 1000, 95% CI 57 fewer to 16 fewer; 562 participants; moderate-certainty evidence). We did not find any data for quality of life.\nRemdesivir may decrease the rate of serious adverse events at up to 28 days (RR 0.27, 95% CI 0.10 to 0.70; RD 49 fewer per 1000, 95% CI 60 fewer to 20 fewer; 562 participants; low-certainty evidence), but it probably makes little or no difference to the risk of adverse events of any grade (RR 0.91, 95% CI 0.76 to 1.10; RD 42 fewer per 1000, 95% CI 111 fewer to 46 more; 562 participants; moderate-certainty evidence).\nWe considered risk of bias to be low for mortality, clinical improvement, and safety outcomes. We identified a high risk of bias for clinical worsening.\nAuthors' conclusions\nBased on the available evidence up to 31 May 2022, remdesivir probably has little or no effect on all-cause mortality or in-hospital mortality of individuals with moderate to severe COVID-19. The hospitalisation rate was reduced with remdesivir in one study including participants with mild to moderate COVID-19. It may be beneficial in the clinical course for both hospitalised and non-hospitalised patients, but certainty remains limited. 
The applicability of the evidence to current practice may be limited by the recruitment of participants from mostly unvaccinated populations exposed to early variants of the SARS-CoV-2 virus at the time the studies were undertaken.\nFuture studies should provide additional data on the efficacy and safety of remdesivir for defined core outcomes in COVID-19 research, especially for different population subgroups.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This is an update of the initial review and the evidence is current to 31 May 2022." } ]
query-laysum
23407
[ { "role": "user", "content": "Abstract: Background\nGlaucoma is an optic neuropathy that leads to visual field defects and vision loss. It is the second leading cause of irreversible blindness in the world. Treatment for glaucoma aims to reduce intraocular pressure (IOP) to slow or prevent further vision loss. IOP can be lowered with medications, laser, or incisional surgery. Trabeculectomy is a surgical approach which lowers IOP by shunting aqueous humor to a subconjunctival bleb. Device-modified trabeculectomy techniques are intended to improve the durability and safety of this bleb-forming surgery. Trabeculectomy-modifying devices include the Ex-PRESS, the XEN Gel Stent, the PreserFlo MicroShunt, as well as antifibrotic materials such as Ologen, amniotic membrane, expanded polytetrafluoroethylene (ePTFE) membrane, Gelfilm and others. However, the comparative effectiveness and safety of these devices are uncertain.\nObjectives\nTo evaluate the benefits and harms of different devices as adjuncts to trabeculectomy on IOP control in eyes with glaucoma compared to standard trabeculectomy.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search was August 2021.\nSelection criteria\nWe included randomized controlled trials in participants with glaucoma comparing device-modified trabeculectomy techniques with standard trabeculectomy. We included studies that used antimetabolites in either or both treatment groups.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. change in IOP and 2. mean postoperative IOP at one year. Our secondary outcomes were 3. mean change in IOP from baseline, 4. mean postoperative IOP at any time point, 5. mean best-corrected visual acuity (BCVA), 6. visual field change, 7. quality of life, 8. proportion of participants who are drop-free at one year, 9. mean number of IOP lowering medications at one year, and 10. proportion of participants with complications.\nMain results\nEight studies met our inclusion criteria, of which seven were full-length journal articles and one was a conference abstract. The eight studies included 961 participants with glaucoma, and compared two types of devices implanted during trabeculectomy versus standard trabeculectomy. Seven studies (462 eyes, 434 participants) used the Ex-PRESS, and one study (527 eyes, 527 participants) used the PreserFlo MicroShunt. No studies using the XEN Gel Stent implantation met our criteria. The studies were conducted in North America, Europe, and Africa. Planned follow-up periods ranged from six months to five years. The studies were reported poorly, which limited our ability to judge risk of bias for many domains. None of the studies explicitly masked outcome assessment. We rated seven studies at high risk of detection bias.\nLow-certainty evidence from five studies showed that using the Ex-PRESS plus trabeculectomy compared with standard trabeculectomy may be associated with a slightly lower IOP at one year (mean difference (MD) −1.76 mmHg, 95% confidence interval (CI) −2.81 to −0.70; 213 eyes). Moderate-certainty evidence from one study showed that using the PreserFlo MicroShunt may be associated with a slightly higher IOP than standard trabeculectomy at one year (MD 3.20 mmHg, 95% CI 2.29 to 4.11). 
Participants who received standard trabeculectomy may have a higher risk of hypotony compared with those who received device-modified trabeculectomy, but the evidence is uncertain (RR 0.73, 95% CI 0.46 to 1.17; I² = 38%; P = 0.14). In the subgroup of participants who received the PreserFlo MicroShunt, there was a lower risk of developing hypotony or shallow anterior chamber compared with those receiving standard trabeculectomy (RR 0.44, 95% CI 0.25 to 0.79; 526 eyes). Device-modified trabeculectomy may lead to less subsequent cataract surgery within one year (RR 0.46, 95% CI 0.27 to 0.80; I² = 0%).\nAuthors' conclusions\nUse of an Ex-PRESS plus trabeculectomy may produce greater IOP reduction at one-year follow-up than standard trabeculectomy; however, due to potential biases and imprecision in effect estimates, the certainty of evidence is low. PreserFlo MicroShunt may be inferior to standard trabeculectomy in lowering IOP. However, PreserFlo MicroShunt may prevent postoperative hypotony and bleb leakage. Overall, device-modified trabeculectomy appears associated with a lower risk of cataract surgery within five years compared with standard trabeculectomy. Due to various limitations in the design and conduct of the included studies, the applicability of this evidence synthesis to other populations or settings is uncertain. Further research is needed to determine the effectiveness and safety of other devices in subgroup populations, such as people with different types of glaucoma, of various races and ethnicity, and with different lens types (e.g. phakic, pseudophakic).\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched medical databases for well-designed clinical studies in people with glaucoma comparing device-modified trabeculectomy techniques with standard trabeculectomy." } ]
query-laysum
23408
[ { "role": "user", "content": "Abstract: Background\nTinnitus affects 10% to 15% of the adult population, with about 20% of these experiencing symptoms that negatively affect quality of life. In England alone there are an estimated ¾ million general practice consultations every year where the primary complaint is tinnitus, equating to a major burden on healthcare services. Clinical management strategies include education and advice, relaxation therapy, tinnitus retraining therapy (TRT), cognitive behavioural therapy (CBT), sound enrichment using ear-level sound generators or hearing aids, and drug therapies to manage co-morbid symptoms such as insomnia, anxiety or depression. Hearing aids, sound generators and combination devices (amplification and sound generation within one device) are a component of many tinnitus management programmes and together with information and advice are a first line of management in audiology departments for someone who has tinnitus.\nObjectives\nTo assess the effects of sound therapy (using amplification devices and/or sound generators) for tinnitus in adults.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL, via the Cochrane Register of Studies); Ovid MEDLINE; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 23 July 2018.\nSelection criteria\nRandomised controlled trials (RCTs) recruiting adults with acute or chronic subjective idiopathic tinnitus. We included studies where the intervention involved hearing aids, sound generators or combination hearing aids and compared them to waiting list control, placebo or education/information only with no device. We also included studies comparing hearing aids to sound generators, combination hearing aids to hearing aids, and combination hearing aids to sound generators.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were tinnitus symptom severity as measured as a global score on multi-item tinnitus questionnaire and significant adverse effects as indicated by an increase in self-reported tinnitus loudness. Our secondary outcomes were depressive symptoms, symptoms of generalised anxiety, health-related quality of life and adverse effects associated with wearing the device such as pain, discomfort, tenderness or skin irritation, or ear infections. We used GRADE to assess the quality of evidence for each outcome; this is indicated in italics.\nMain results\nThis review included eight studies (with a total of 590 participants). Seven studies investigated the effects of hearing aids, four combination hearing aids and three sound generators. Seven studies were parallel-group RCTs and one had a cross-over design. In general, risk of bias was unclear due to lack of detail about sequence generation and allocation concealment. There was also little or no use of blinding.\nNo data for our outcomes were available for any of our three main comparisons (comparing hearing aids, sound generators and combination devices with a waiting list control group, placebo or education/information only). 
Data for our additional comparisons (comparing these devices with each other) were also few, with limited potential for data pooling.\nHearing aid only versus sound generator device only\nOne study compared patients fitted with sound generators versus those fitted with hearing aids and found no difference between them in their effects on our primary outcome, tinnitus symptom severity measured with the Tinnitus Handicap Inventory (THI) at 3, 6 or 12 months (low-quality evidence). The use of both types of device was associated with a clinically significant reduction in tinnitus symptom severity.\nCombination hearing aid versus hearing aid only\nThree studies compared combination hearing aids with hearing aids and measured tinnitus symptom severity using the THI or Tinnitus Functional Index. When we pooled the data we found no difference between them (standardised mean difference -0.15, 95% confidence interval -0.52 to 0.22; three studies; 114 participants) (low-quality evidence). The use of both types of device was again associated with a clinically significant reduction in tinnitus symptom severity.\nAdverse effects were not assessed in any of the included studies.\nNone of the studies measured the secondary outcomes of depressive symptoms or depression, anxiety symptoms or generalised anxiety, or health-related quality of life as measured by a validated instrument, nor the newly developed core outcomes tinnitus intrusiveness, ability to ignore, concentration, quality of sleep and sense of control.\nAuthors' conclusions\nThere is no evidence to support the superiority of sound therapy for tinnitus over waiting list control, placebo or education/information with no device. There is insufficient evidence to support the superiority or inferiority of any of the sound therapy options (hearing aid, sound generator or combination hearing aid) over each other. The quality of evidence for the reported outcomes, assessed using GRADE, was low. Using a combination device, hearing aid or sound generator might result in little or no difference in tinnitus symptom severity.\nFuture research into the effectiveness of sound therapy in patients with tinnitus should use rigorous methodology. Randomisation and blinding should be of the highest quality, given the subjective nature of tinnitus and the strong likelihood of a placebo response. The CONSORT statement should be used in the design and reporting of future studies. We also recommend the use of validated, patient-centred outcome measures for research in the field of tinnitus.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Is sound therapy (using amplification devices, sound generators or both) effective for tinnitus in adults?" } ]
query-laysum
23409
[ { "role": "user", "content": "Abstract: Background\nGroup A streptococcus (GAS) accounts for 20% to 40% of cases of pharyngitis in children; the remaining cases are caused by viruses. Compared with throat culture, rapid antigen detection tests (RADTs) offer diagnosis at the point of care (within five to 10 minutes).\nObjectives\nTo determine the diagnostic accuracy of RADTs for diagnosing GAS in children with pharyngitis. To assess the relative diagnostic accuracy of the two major types of RADTs (enzyme immunoassays (EIA) and optical immunoassays (OIA)) by indirect and direct comparison.\nSearch methods\nWe searched CENTRAL, MEDLINE, EMBASE, Web of Science, CDSR, DARE, MEDION and TRIP (January 1980 to July 2015). We also conducted related citations tracking via PubMed, handsearched reference lists of included studies and relevant review articles, and screened all articles citing included studies via Google Scholar.\nSelection criteria\nWe included studies that compared RADT for GAS pharyngitis with throat culture on a blood agar plate in a microbiology laboratory in children seen in ambulatory care.\nData collection and analysis\nTwo review authors independently screened titles and abstracts for relevance, assessed full texts for inclusion, and carried out data extraction and quality assessment using the QUADAS-2 tool. We used bivariate meta-analysis to estimate summary sensitivity and specificity, and to investigate heterogeneity across studies. We compared the accuracy of EIA and OIA tests using indirect and direct evidence.\nMain results\nWe included 98 unique studies in the review (116 test evaluations; 101,121 participants). The overall methodological quality of included studies was poor, mainly because many studies were at high risk of bias regarding patient selection and the reference standard used (in 73% and 43% of test evaluations, respectively). In studies in which all participants underwent both RADT and throat culture (105 test evaluations; 58,244 participants; median prevalence of participants with GAS was 29.5%), RADT had a summary sensitivity of 85.6%; 95% confidence interval (CI) 83.3 to 87.6 and a summary specificity of 95.4%; 95% CI 94.5 to 96.2. There was substantial heterogeneity in sensitivity across studies; specificity was more stable. There was no evidence of a trade-off between sensitivity and specificity. Heterogeneity in accuracy was not explained by study-level characteristics such as whether an enrichment broth was used before plating, mean age and clinical severity of participants, and GAS prevalence. The sensitivity of EIA and OIA tests was comparable (summary sensitivity 85.4% versus 86.2%). Sensitivity analyses showed that summary estimates of sensitivity and specificity were stable in low risk of bias studies.\nAuthors' conclusions\nIn a population of 1000 children with a GAS prevalence of 30%, 43 patients with GAS will be missed. Whether or not RADT can be used as a stand-alone test to rule out GAS will depend mainly on the epidemiological context. The sensitivity of EIA and OIA tests seems comparable. RADT specificity is sufficiently high to ensure against unnecessary use of antibiotics. Based on these results, we would expect that amongst 100 children with strep throat, 86 would be correctly detected with the rapid test while 14 would be missed and not receive antibiotic treatment.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "On average, rapid tests for strep throat had a sensitivity (ability to correctly detect people with the disease) of 86% and a specificity (ability to correctly identify people who do not have the disease) of 95%. There was substantial variability in rapid test performance across studies, which was not explained by study characteristics, including methodological quality. The two types of rapid tests under evaluation seemed to have comparable sensitivity (85.4% versus 86.2% for enzyme immunoassays and optical immunoassays, respectively). Based on these results, we would expect that amongst 100 children with strep throat, 86 would be correctly detected with the rapid test while 14 would be missed and not receive antibiotic treatment. Of 100 children with non-streptococcal sore throat, 95 would be correctly classified as such with the rapid test while 5 would be misdiagnosed as having strep throat and receive unnecessary antibiotics." } ]
query-laysum
23410
[ { "role": "user", "content": "Abstract: Background\nSomatoform disorders are characterised by chronic, medically unexplained physical symptoms (MUPS). Although different medications are part of treatment routines for people with somatoform disorders in clinics and private practices, there exists no systematic review or meta-analysis on the efficacy and tolerability of these medications. We aimed to synthesise the available evidence to improve optimal treatment decisions.\nObjectives\nTo assess the effects of pharmacological interventions for somatoform disorders (specifically somatisation disorder, undifferentiated somatoform disorder, somatoform autonomic dysfunction, and pain disorder) in adults.\nSearch methods\nWe searched the Cochrane Depression, Anxiety and Neurosis Review Group's Specialised Register (CCDANCTR) (to 17 January 2014). This register includes relevant randomised controlled trials (RCTs) from The Cochrane Library (all years), MEDLINE (1950 to date), EMBASE (1974 to date), and PsycINFO (1967 to date). To identify ongoing trials, we searched ClinicalTrials.gov, Current Controlled Trials metaRegister, the World Health Organization International Clinical Trials Registry Platform, and the Chinese Clinical Trials Registry. For grey literature, we searched ProQuest Dissertation & Theses Database, OpenGrey, and BIOSIS Previews. We handsearched conference proceedings and reference lists of potentially relevant papers and systematic reviews and contacted experts in the field.\nSelection criteria\nWe selected RCTs or cluster RCTs of pharmacological interventions versus placebo, treatment as usual, another medication, or a combination of different medications for somatoform disorders in adults. We included people fulfilling standardised diagnostic criteria for somatisation disorder, undifferentiated somatoform disorder, somatoform autonomic dysfunction, or somatoform pain disorder.\nData collection and analysis\nOne review author and one research assistant independently extracted data and assessed risk of bias. Primary outcomes included the severity of MUPS on a continuous measure, and acceptability of treatment.\nMain results\nWe included 26 RCTs (33 reports), with 2159 participants, in the review. They examined the efficacy of different types of antidepressants, the combination of an antidepressant and an antipsychotic, antipsychotics alone, or natural products (NPs). The duration of the studies ranged between two and 12 weeks.\nOne meta-analysis of placebo-controlled studies showed no clear evidence of a significant difference between tricyclic antidepressants (TCAs) and placebo for the outcome severity of MUPS (SMD -0.13; 95% CI -0.39 to 0.13; 2 studies, 239 participants; I2 = 2%; low-quality evidence). For new-generation antidepressants (NGAs), there was very low-quality evidence showing they were effective in reducing the severity of MUPS (SMD -0.91; 95% CI -1.36 to -0.46; 3 studies, 243 participants; I2 = 63%). For NPs there was low-quality evidence that they were effective in reducing the severity of MUPS (SMD -0.74; 95% CI -0.97 to -0.51; 2 studies, 322 participants; I2 = 0%).\nOne meta-analysis showed no clear evidence of a difference between TCAs and NGAs for severity of MUPS (SMD -0.16; 95% CI -0.55 to 0.23; 3 studies, 177 participants; I2 = 42%; low-quality evidence). 
There was also no difference between NGAs and other NGAs for severity of MUPS (SMD -0.16; 95% CI -0.45 to 0.14; 4 studies, 182 participants; I2 = 0%).\nFinally, one meta-analysis comparing selective serotonin reuptake inhibitors (SSRIs) with a combination of SSRIs and antipsychotics showed low-quality evidence in favour of combined treatment for severity of MUPS (SMD 0.77; 95% CI 0.32 to 1.22; 2 studies, 107 participants; I2 = 23%).\nDifferences regarding the acceptability of the treatment (rate of all-cause drop-outs) were neither found between NGAs and placebo (RR 1.01, 95% CI 0.64 to 1.61; 2 studies, 163 participants; I2 = 0%; low-quality evidence) or NPs and placebo (RR 0.85, 95% CI 0.40 to 1.78; 3 studies, 506 participants; I2 = 0%; low-quality evidence); nor between TCAs and other medication (RR 1.48, 95% CI 0.59 to 3.72; 8 studies, 556 participants; I2 =14%; low-quality evidence); nor between antidepressants and the combination of an antidepressant and an antipsychotic (RR 0.80, 95% CI 0.25 to 2.52; 2 studies, 118 participants; I2 = 0%; low-quality evidence). Percental attrition rates due to adverse effects were high in all antidepressant treatments (0% to 32%), but low for NPs (0% to 1.7%).\nThe risk of bias was high in many domains across studies. Seventeen trials (65.4%) gave no information about random sequence generation and only two (7.7%) provided information about allocation concealment. Eighteen studies (69.2%) revealed a high or unclear risk in blinding participants and study personnel; 23 studies had high risk of bias relating to blinding assessors. For the comparison NGA versus placebo, there was relatively high imprecision and heterogeneity due to one outlier study. Although we identified 26 studies, each comparison only contained a few studies and small numbers of participants so the results were imprecise.\nAuthors' conclusions\nThe current review found very low-quality evidence for NGAs and low-quality evidence for NPs being effective in treating somatoform symptoms in adults when compared with placebo. There was some evidence that different classes of antidepressants did not differ in efficacy; however, this was limited and of low to very low quality. These results had serious shortcomings such as the high risk of bias, strong heterogeneity in the data, and small sample sizes. Furthermore, the significant effects of antidepressant treatment have to be balanced against the relatively high rates of adverse effects. Adverse effects produced by medication can have amplifying effects on symptom perceptions, particularly in people focusing on somatic symptoms without medical causes. We can only draw conclusions about short-term efficacy of the pharmacological interventions because no trial included follow-up assessments. For each of the comparisons where there were available data on acceptability rates (NGAs versus placebo, NPs versus placebo, TCAs versus other medication, and antidepressants versus a combination of an antidepressant and an antipsychotic), no clear differences between the intervention and comparator were found.\nFuture high-quality research should be carried out to determine the effectiveness of medications other than antidepressants, to compare antidepressants more thoroughly, and to follow-up participants over longer periods (the longest follow up was just 12 weeks). 
Another idea for future research would be to include other outcomes such as functional impairment or dysfunctional behaviours and cognitions as well as the classical outcomes such as symptom severity, depression, or anxiety.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Around 6 in 100 people are affected by long-term physical symptoms that have no clear medical cause (somatoform disorders). Symptoms can include pain, digestive problems, sexual or menstrual problems, breathing problems, and symptoms that mimic brain or nerve damage such as memory loss or sensory problems. Somatoform disorders often cause considerable distress and mean that people spend a lot of time consulting doctors and health professionals to try to find the cause of their symptoms and the correct treatment.\nGuidelines for the treatment of somatoform disorders recommend that people receive talking therapies alongside medication. In current practice many people are treated 'off label' with medications that are intended for the treatment of anxiety, depression, and other mental health problems. However, it is unclear why medications such as antidepressants help to reduce the severity of medically unexplained physical symptoms." } ]
query-laysum
23411
[ { "role": "user", "content": "Abstract: Background\nGlucocorticoids play a major role in the treatment of acute lymphoblastic leukaemia (ALL). However, supraphysiological doses can suppress the hypothalamic-pituitary-adrenal (HPA) axis. HPA axis suppression resulting in reduced cortisol response may cause an impaired stress response and an inadequate host defence against infection, which remain a cause of morbidity and death. Suppression commonly occurs in the first days after cessation of glucocorticoid therapy, but the exact duration is unclear. This review is the second update of a previously published Cochrane review.\nObjectives\nTo examine the occurrence and duration of HPA axis suppression after (each cycle of) glucocorticoid therapy for childhood ALL.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 11), MEDLINE/PubMed (from 1945 to December 2016), and Embase/Ovid (from 1980 to December 2016). In addition, we searched reference lists of relevant articles, conference proceedings (the International Society for Paediatric Oncology and the American Society of Clinical Oncology from 2005 up to and including 2016, and the American Society of Pediatric Hematology/Oncology from 2014 up to and including 2016), and ongoing trial databases (the International Standard Registered Clinical/Social Study Number (ISRCTN) register via http://www.controlled-trials.com, the National Institutes of Health (NIH) register via www.clinicaltrials.gov, and the International Clinical Trials Registry Platform (ICTRP) of the World Health Organization (WHO) via apps.who.int/trialsearch) on 27 December 2016.\nSelection criteria\nAll study designs, except case reports and patient series with fewer than 10 children, examining effects of glucocorticoid therapy for childhood ALL on HPA axis function.\nData collection and analysis\nTwo review authors independently performed study selection. One review author extracted data and assessed 'Risk of bias'; another review author checked this information.\nMain results\nWe identified 10 studies (total of 298 children; we identified two studies for this update) including two randomised controlled trials (RCTs) that assessed adrenal function. None of the included studies assessed the HPA axis at the level of the hypothalamus, the pituitary, or both. Owing to substantial differences between studies, we could not pool results. All studies had risk of bias issues. Included studies demonstrated that adrenal insufficiency occurs in nearly all children during the first days after cessation of glucocorticoid treatment for childhood ALL. Most children recovered within a few weeks, but a small number of children had ongoing adrenal insufficiency lasting up to 34 weeks.\nIncluded studies evaluated several risk factors for (prolonged) adrenal insufficiency. First, three studies including two RCTs investigated the difference between prednisone and dexamethasone in terms of occurrence and duration of adrenal insufficiency. The RCTs found no differences between prednisone and dexamethasone arms. In the other (observational) study, children who received prednisone recovered earlier than children who received dexamethasone. Second, treatment with fluconazole appeared to prolong the duration of adrenal insufficiency, which was evaluated in two studies. One of these studies reported that the effect was present only when children received fluconazole at a dose higher than 10 mg/kg/d. 
Finally, two studies evaluated the presence of infection, stress episodes, or both, as a risk factor for adrenal insufficiency. In one of these studies (an RCT), trial authors found no relationship between the presence of infection/stress and adrenal insufficiency. The other study found that increased infection was associated with prolonged duration of adrenal insufficiency.\nAuthors' conclusions\nWe concluded that adrenal insufficiency commonly occurs in the first days after cessation of glucocorticoid therapy for childhood ALL, but the exact duration is unclear. No data were available on the levels of the hypothalamus and the pituitary; therefore, we could draw no conclusions regarding these outcomes. Clinicians may consider prescribing glucocorticoid replacement therapy during periods of serious stress in the first weeks after cessation of glucocorticoid therapy for childhood ALL to reduce the risk of life-threatening complications. However, additional high-quality research is needed to inform evidence-based guidelines for glucocorticoid replacement therapy.\nSpecial attention should be paid to patients receiving fluconazole therapy, and perhaps similar antifungal drugs, as these treatments may prolong the duration of adrenal insufficiency, especially when administered at a dose higher than 10 mg/kg/d.\nFinally, it would be relevant to investigate further the relationship between present infection/stress and adrenal insufficiency in a larger, separate study specially designed for this purpose.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Adrenal insufficiency occurred in nearly all children during the first days after completion of glucocorticoid therapy. Most children recovered within a few weeks, but a small number had ongoing adrenal insufficiency lasting up to 34 weeks. Three studies looked into differences in duration of adrenal insufficiency between children who received prednisone and those who were given dexamethasone (two types of glucocorticoids). Two of these three studies found no differences. In the other study, children who received prednisone recovered earlier than those who received dexamethasone. Also, treatment with a certain antifungal drug (fluconazole) seemed to prolong the duration of adrenal insufficiency. Two studies investigated this. Finally, two studies evaluated the presence of infection/stress as a risk factor for adrenal insufficiency. One study found no relationship. The other study reported that increased infection was associated with a longer duration of adrenal insufficiency.\nMore high-quality research is needed to define the exact occurrence and duration of HPA axis suppression. Then adequate guidelines for glucocorticoid replacement therapy can be formulated." } ]
query-laysum
23412
[ { "role": "user", "content": "Abstract: Background\nHypertension is a major public health problem that increases the risk of cardiovascular and kidney diseases. Several studies have shown an inverse association between calcium intake and blood pressure, as small reductions in blood pressure have been shown to produce rapid reductions in vascular disease risk even in individuals with normal blood pressure ranges. This is the first update of the review to evaluate the effect of calcium supplementation in normotensive individuals as a preventive health measure.\nObjectives\nTo assess the efficacy and safety of calcium supplementation versus placebo or control for reducing blood pressure in normotensive people and for the prevention of primary hypertension.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomised controlled trials up to September 2020: the Cochrane Hypertension Specialised Register, CENTRAL (2020, Issue 9), Ovid MEDLINE, Ovid Embase, the WHO International Clinical Trials Registry Platform, and the US National Institutes of Health Ongoing Trials Register, ClinicalTrials.gov. We also contacted authors of relevant papers regarding further published and unpublished work. The searches had no language restrictions.\nSelection criteria\nWe selected trials that randomised normotensive people to dietary calcium interventions such as supplementation or food fortification versus placebo or control. We excluded quasi-random designs. The primary outcomes were hypertension (defined as blood pressure ≥ 140/90 mmHg) and blood pressure measures.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risks of bias. We used the GRADE approach to assess the certainty of evidence.\nMain results\nThe 2020 updated search identified four new trials. We included a total of 20 trials with 3512 participants, however we only included 18 for the meta-analysis with 3140 participants. None of the studies reported hypertension as a dichotomous outcome. The effect on systolic and diastolic blood pressure was: mean difference (MD) -1.37 mmHg, 95% confidence interval (CI) -2.08, -0.66; 3140 participants; 18 studies; I2 = 0%, high-certainty evidence; and MD -1.45, 95% CI -2.23, -0.67; 3039 participants; 17 studies; I2 = 45%, high-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for those younger than 35 years was: MD -1.86, 95% CI -3.45, -0.27; 452 participants; eight studies; I2 = 19%, moderate-certainty evidence; MD -2.50, 95% CI -4.22, -0.79; 351 participants; seven studies ; I2 = 54%, moderate-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for those 35 years or older was: MD -0.97, 95% CI -1.83, -0.10; 2688 participants; 10 studies; I2 = 0%, high-certainty evidence; MD -0.59, 95% CI -1.13, -0.06; 2688 participants; 10 studies; I2 = 0%, high-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for women was: MD -1.25, 95% CI -2.53, 0.03; 1915 participants; eight studies; I2 = 0%, high-certainty evidence; MD -1.04, 95% CI -1.86, -0.22; 1915 participants; eight studies; I2 = 4%, high-certainty evidence, respectively. 
The effect on systolic and diastolic blood pressure for men was MD -2.14, 95% CI -3.71, -0.59; 507 participants; five studies; I2 = 8%, moderate-certainty evidence; MD -1.99, 95% CI -3.25, -0.74; 507 participants; five studies; I2 = 41%, moderate-certainty evidence, respectively. The effect was consistent in both genders regardless of baseline calcium intake.\nThe effect on systolic blood pressure was: MD -0.02, 95% CI -2.23, 2.20; 302 participants; 3 studies; I2 = 0%, moderate-certainty evidence with doses less than 1000 mg; MD -1.05, 95% CI -1.91, -0.19; 2488 participants; 9 studies; I2 = 0%, high-certainty evidence with doses 1000 to 1500 mg; and MD -2.79, 95% CI -4.71, 0.86; 350 participants; 7 studies; I2 = 0%, moderate-certainty evidence with doses more than 1500 mg. The effect on diastolic blood pressure was: MD -0.41, 95% CI -2.07, 1.25; 201 participants; 2 studies; I2 = 0%, moderate-certainty evidence; MD -2.03, 95% CI -3.44, -0.62; 1017 participants; 8 studies; and MD -1.35, 95% CI -2.75, -0.05; 1821 participants; 8 studies; I2 = 51%, high-certainty evidence, respectively.\nNone of the studies reported adverse events.\nAuthors' conclusions\nAn increase in calcium intake slightly reduces both systolic and diastolic blood pressure in normotensive people, particularly in young people, suggesting a role in the prevention of hypertension. The effect across multiple prespecified subgroups and a possible dose response effect reinforce this conclusion. Even small reductions in blood pressure could have important health implications for reducing vascular disease. A 2 mmHg lower systolic blood pressure is predicted to produce about 10% lower stroke mortality and about 7% lower mortality from ischaemic heart disease.\nThere is a great need for adequately-powered clinical trials randomising young people. Subgroup analysis should involve basal calcium intake, age, sex, basal blood pressure, and body mass index. We also require assessment of side effects, optimal doses and the best strategy to improve calcium intake.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This review analysed information from 20 trials, of which 18 trials (3140 participants) provided data for the effect of the intervention. We found that an increase in calcium intake slightly reduces both systolic and diastolic blood pressure, by 1.37 mmHg and 1.45 mmHg respectively. This effect was higher with doses of calcium above 1000 mg/day. Systolic blood pressure was reduced by 1.05 mmHg with doses of calcium 1000 to 1500 mg/day and by 2.79 mmHg with doses of calcium equal to or over 1500 mg/day.\nWe noted a reduction in blood pressure in both men and women and at ages from 11 to 82 years old, but the reduction was greater among younger people. Systolic blood pressure was reduced by 1.86 mmHg among those less than 35 years and by 0.97 mmHg among those 35 years or older.\nNone of the studies reported adverse events. We need further research to determine the ideal dosage of supplementation and whether it is more effective and safer as part of the diet or as a supplement." } ]
query-laysum
23413
[ { "role": "user", "content": "Abstract: Background\nAdhesive capsulitis (frozen shoulder or painful stiff shoulder) is characterised by spontaneous onset of shoulder pain accompanied by progressive stiffness and disability. It is usually self-limiting but often has a prolonged course over two to three years.\nObjectives\nTo determine the effectiveness and safety of arthrographic distension of the glenohumeral joint in the treatment of adults with adhesive capsulitis.\nSearch methods\nWe searched the Cochrane Musculoskeletal Review Group Register, CENTRAL, MEDLINE, CINAHL, and EMBASE to November 2006, unrestricted by date or language.\nSelection criteria\nWe included randomised controlled trials and controlled clinical trials comparing arthrographic distension with placebo or other interventions.\nData collection and analysis\nTwo review authors independently assessed study quality and extracted data.\nMain results\nFive trials with 196 people were included. One three-arm trial (47 participants) compared arthrographic distension using steroid and air to distension using air alone and to steroid injection alone. One trial (46 participants) compared arthrographic distension using steroid and saline to placebo. Two trials (45 and 22 participants) compared arthrographic distension using steroid to steroid injection alone. One trial (36 participants) compared arthrographic distension using steroid and saline plus physical therapy to physical therapy alone. Trials included similar study participants, but quality and reporting of data were variable. Only one trial was at low risk of bias. No meta-analysis was performed.\nThe trial with low risk of bias demonstrated that distension with saline and steroid was better than placebo for pain (number needed to treat to benefit (NNTB) = 2), function (NNTB = 3) and range of movement at three weeks. This benefit was maintained at six and 12 weeks only for one of two scores measuring function (NNT = 3). A second trial with high risk of bias also reported that distension combined with physical therapy improved range of movement and median percent improvement in pain (but not pain score) at eight weeks compared to physical therapy alone. Three further trials, all at high risk of bias, reported conflicting, variable effects of arthrographic distension with steroid compared to distension alone, and arthrographic distension with steroid compared to intra-articular steroid injection. The trials reported a small number of minor adverse effects, mainly pain during and after the procedure.\nAuthors' conclusions\nThere is \"silver\" level evidence that arthrographic distension with saline and steroid provides short-term benefits in pain, range of movement and function in adhesive capsulitis. It is uncertain whether this is better than alternative interventions.\n\nGiven the provided abstract, please respond to the following query: \"Disability\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "One study found that at three weeks after treatment, people's disability was improved by 11 points on a scale of 0-100, possibly as few as 4 or as many as 11 points on a scale of 0-100. Another study found disability was improved by 17 points. This improvement could possibly be as low as 6 or as many as 28 points on a scale of 0-100.\nAt six weeks after treatment, people's disability was improved by 46 points on a scale of 0-500. 
This improvement could possibly be as little as 20 points or as many as 80 points on a scale of 0-500.\nAt 12 weeks after treatment, people's disability was improved by 54 points on a scale of 0-500. This improvement could be as little as 15 points or as many as 95 points on a scale of 0-500.\nThe numbers given are our best estimate. When possible, we have also presented a range because there is a 95 percent chance that the true effect of the treatment lies somewhere within that range." } ]
query-laysum
23414
[ { "role": "user", "content": "Abstract: Background\nNuts contain a number of nutritional attributes which may be cardioprotective. A number of epidemiological studies have shown that nut consumption may have a beneficial effect on people who have cardiovascular disease (CVD) risk factors. However, results from randomised controlled trials (RCTs) are less consistent.\nObjectives\nTo determine the effectiveness of nut consumption for the primary prevention of CVD.\nSearch methods\nWe searched the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science Core Collection, CINAHL, Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment Database (HTA) and Health Economics Evaluations Database (HEED) up to 30 July 2015. We searched trial registers and reference lists of reviews for further studies. We did not apply any language restrictions.\nSelection criteria\nWe included RCTs of dietary advice to increase nut consumption or provision of nuts to increase consumption lasting at least three months and including healthy adults or adults at moderate and high risk of CVD. The comparison group was no intervention or minimal intervention. The outcomes of interest were CVD clinical events and CVD risk factors.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risk of bias in included trials.\nMain results\nWe included five trials (435 participants randomised) and one ongoing trial. One study is awaiting classification. All trials examined the provision of nuts to increase consumption rather than dietary advice. None of the included trials reported on the primary outcomes, CVD clinical events, but trials were small and short term. All five trials reported on CVD risk factors. Four of these trials provided data in a useable format for meta-analyses, but heterogeneity precluded meta-analysis for most of the analyses. Overall trials were judged to be at unclear risk of bias.\nThere were variable and inconsistent effects of nut consumption on CVD risk factors (lipid levels and blood pressure). Three trials monitored adverse events. One trial reported an allergic reaction to nuts and three trials reported no significant weight gain with increased nut consumption. None of the included trials reported on other secondary outcomes, occurrence of type 2 diabetes as a major risk factor for CVD, health-related quality of life and costs.\nAuthors' conclusions\nCurrently there is a lack of evidence for the effects of nut consumption on CVD clinical events in primary prevention and very limited evidence for the effects on CVD risk factors. No conclusions can be drawn and further high quality longer term and adequately powered trials are needed to answer the review question.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Cardiovascular diseases are a group of conditions that affect the heart and blood vessels. They are a major cause of death worldwide. The food we eat may influence the risk of getting cardiovascular disease. Nuts, if consumed regularly and at relatively high doses (50 g to 100 g), are believed to reduce total cholesterol and low-density lipoprotein (LDL) cholesterol (bad cholesterol)." } ]
query-laysum
23415
[ { "role": "user", "content": "Abstract: Background\nAssociations between nursing home residents’ oral health status and quality of life, respiratory tract infections, and nutritional status have been reported. Educational interventions for nurses or residents, or both, focusing on knowledge and skills related to oral health management may have the potential to improve residents’ oral health.\nObjectives\nTo assess the effects of oral health educational interventions for nursing home staff or residents, or both, to maintain or improve the oral health of nursing home residents.\nSearch methods\nWe searched the Cochrane Oral Health Trials Register (to 18 January 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2015, Issue 12), MEDLINE Ovid (1946 to 18 January 2016), Embase Ovid (1980 to 18 January 2016), CINAHL EBSCO (1937 to 18 January 2016), and Web of Science Conference Proceedings (1990 to 18 January 2016). We searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform for ongoing trials to 18 January 2016. In addition, we searched reference lists of identified articles and contacted experts in the field. We placed no restrictions on language or date of publication when searching the electronic databases.\nSelection criteria\nRandomised controlled trials (RCTs) and cluster-RCTs comparing oral health educational programmes for nursing staff or residents, or both with usual care or any other oral healthcare intervention.\nData collection and analysis\nTwo review authors independently screened articles retrieved from the searches for relevance, extracted data from included studies, assessed risk of bias for each included study, and evaluated the overall quality of the evidence. We retrieved data about the development and evaluation processes of complex interventions on the basis of the Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare: revised guideline (CReDECI 2). We contacted authors of relevant studies for additional information.\nMain results\nWe included nine RCTs involving 3253 nursing home residents in this review; seven of these trials used cluster randomisation. The mean resident age ranged from 78 to 86 years across studies, and most participants were women (more than 66% in all studies). The proportion of residents with dental protheses ranged from 62% to 87%, and the proportion of edentulous residents ranged from 32% to 90% across studies.\nEight studies compared educational interventions with information and practical components versus (optimised) usual care, while the ninth study compared educational interventions with information only versus usual care. All interventions included educational sessions on oral health for nursing staff (five trials) or for both staff and residents (four trials), and used more than one active component. Follow-up of included studies ranged from three months to five years.\nNo study showed overall low risk of bias. Four studies had a high risk of bias, and the other five studies were at unclear risk of bias.\nNone of the trials assessed our predefined primary outcomes 'oral health' and 'oral health-related quality of life'. All trials assessed our third primary outcome, 'dental or denture plaque'. 
Meta-analyses showed no evidence of a difference between interventions and usual care for dental plaque (mean difference -0.04, 95% confidence interval (CI) -0.26 to 0.17; six trials; 437 participants; low quality evidence) or denture plaque (standardised mean difference -0.60, 95% CI -1.25 to 0.05; five trials; 816 participants; low quality evidence). None of the studies assessed adverse events of the intervention.\nAuthors' conclusions\nWe found insufficient evidence to draw robust conclusions about the effects of oral health educational interventions for nursing home staff and residents. We did not find evidence of meaningful effects of educational interventions on any measure of residents' oral health; however, the quality of the available evidence is low. More adequately powered and high-quality studies using relevant outcome measures are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We could not identify a clear benefit of training of nurses and/or residents on residents' dental health as assessed by dental and denture plaque. No study assessed oral health, oral health-related quality of life or adverse events. As education programmes were not fully described, results do not allow for clear conclusions about the effectiveness or potential harm of specific oral health education interventions in nursing homes." } ]
query-laysum
23416
[ { "role": "user", "content": "Abstract: Background\nSince the introduction of endotracheal intubation in paediatrics, uncuffed endotracheal tubes (ETTs) have been the standard of care for children under eight years old, based on the presumption that complications, particularly postoperative stridor, are higher with cuffed ETTs. The major disadvantages of uncuffed ETTs cited for this shift in procedure include the difficulty in achieving tidal volumes due to leakage around an uncuffed ETT. To seal the airway adequately, uncuffed tubes may need to be exchanged for another tube with a larger diameter, which sometimes requires several attempts before the appropriate size is found. Uncuffed tubes also allow waste anaesthetic gases to escape, contributing significantly to operating room contamination and rendering the anaesthetic procedure more expensive. Our review summarizes the available data, to provide a current perspective on the use of cuffed versus uncuffed endotracheal tubes in children of eight years old or less.\nObjectives\nTo assess the risks and benefits of cuffed versus uncuffed endotracheal tubes during general anaesthesia in children up to eight years old.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, LILACS and Google Scholar databases from their inception until March 2017. We also searched databases of ongoing trials, and checked references and citations. We imposed no restriction by language.\nSelection criteria\nWe included randomized and quasi-randomized controlled trials in which the effects of using cuffed and uncuffed tubes were investigated in children up to eight years old undergoing general anaesthesia. We excluded studies conducted solely in newborn babies.\nData collection and analysis\nWe applied standard methodological procedures, as defined in the Methodological Expectations of Cochrane Intervention Reviews (MECIR).\nMain results\nWe included three trials (2804 children), comparing cuffed with uncuffed ETTs. We rated the risks of bias in all three trials as high. Outcome data were limited. The largest trial was supported by Microcuff GmbH, who provided the cuffed tubes used. The other two trials were small, and should be interpreted with caution. Based on the GRADE approach, we rated the quality of evidence as low to very low.\nTwo trials comparing cuffed versus uncuffed ETTs found no difference between the groups for postextubation stridor (risk ratio (RR) 0.93, 95% confidence interval (CI) 0.65 to 1.33; 2734 children; quality of evidence very low). 
However, those two trials demonstrated a statistically significantly lower rate of endotracheal tube exchange in the cuffed ETT group (RR 0.07, 95% CI 0.05 to 0.10; 2734 children; quality of evidence very low).\nOne trial with 70 participants found that costs per case were lower in the cuffed ETT group (mean difference (MD) EUR 19.0 lower; 95% CI 24.23 to 13.77 lower; quality of evidence low), since the higher cost of the cuffed tubes may be offset by the savings made with anaesthetic gases.\nNo clear evidence emerged to suggest any difference between cuffed and uncuffed tubes for outcomes such as the need to treat postextubation stridor with tracheal re-intubation (RR 1.85, 95% CI 0.17 to 19.76; 115 children; 2 trials; quality of evidence very low), epinephrine (RR 0.70, 95% CI 0.38 to 1.28; 115 children; 2 trials; quality of evidence very low) or corticosteroid (RR 0.87, 95% CI 0.51 to 1.49; 102 children; 1 trial; quality of evidence very low), or need for intensive care unit (ICU) admission to treat postextubation stridor (RR 2.77, 95% CI 0.30 to 25.78; 102 children; 1 trial; quality of evidence very low).\nNone of the trials included in this review evaluated the ability to deliver appropriate tidal volume.\nAuthors' conclusions\nImplications for practice\nWe are unable to draw definitive conclusions about the comparative effects of cuffed or non-cuffed endotracheal tubes in children undergoing general anaesthesia. Our confidence is limited by risks of bias, imprecision and indirectness. The lower requirement for exchange of tubes with cuffed ETTs was very low-quality evidence, and the requirement for less medical gas used and consequent lower cost was low-quality evidence. In some cases, tracheal re-intubation is required to guarantee an open airway when adequate oxygenation is difficult after removal of the tube, for a variety of reasons including stridor, muscle weakness or obstruction. No data were available to permit evaluation of whether appropriate tidal volumes were delivered.\nImplications for research\nLarge randomized controlled trials of high methodological quality should be conducted to help clarify the risks and benefits of cuffed ETTs for children. Such trials should investigate the capacity to deliver appropriate tidal volume. Future trials should also address cost effectiveness and respiratory complications. Such studies should correlate the age of the child with the duration of intubation, and with possible complications. Studies should also be conducted in newborn babies. Future research should be conducted to compare the effects of the different types or brands of cuffed tubes used worldwide. Finally, trials should be designed to perform more accurate assessments and to diagnose the complications encountered with cuffed compared to uncuffed ETTs.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion and future research\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Several gaps remain in the information available around this question. Large, well-conducted clinical trials should clarify factors such as the ability of these tubes to provide adequate amounts of oxygen, and the respiratory complications that occur with the wide use of cuffed tubes in children." } ]
query-laysum
23417
[ { "role": "user", "content": "Abstract: Background\nFatigue is the most commonly reported symptom in people with advanced cancer. Cancer-related fatigue (CRF) is pervasive and debilitating, and can greatly impact quality of life (QoL). CRF has a highly variable clinical presentation, likely due to a complex interaction of multiple factors. Corticosteroids are commonly used to improve CRF, but the benefits are unclear and there are significant adverse effects associated with long-term use. With the increasing survival of people with metastatic cancer, the long-term effects of medications are becoming increasingly relevant. Since the impact of CRF can be immensely debilitating and can negatively affect QoL, its treatment warrants further review.\nObjectives\nTo determine the benefits and harms of corticosteroids compared with placebo or an active comparator in adults with advanced cancer and CRF.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, Science Citation Index (ISI Web of Science), LILACS, and two clinical trial registries from inception to 18 July 2022.\nSelection criteria\nWe included randomised controlled trials in adults aged ≥18 years. We included participants with advanced cancer who were suffering from CRF. We included trials that randomised participants to corticosteroids at any dose, by any route, administered for the relief of CRF; compared to placebo or an active comparator, including supportive care or non-pharmacological treatments.\nData collection and analysis\nThree review authors independently assessed titles identified by the search strategy; two review authors assessed risk of bias; and two extracted data. We extracted the primary outcome of participant-reported fatigue relief using validated scales and secondary outcomes of adverse events, serious adverse events and QoL. We calculated the risk ratio with 95% confidence intervals (CIs) between groups for dichotomous outcomes. We measured arithmetic mean and standard deviation, and reported the mean difference (MD) with 95% CI between groups for continuous outcomes. We used standardised mean difference (SMD) with 95% CIs when an outcome was measured with different instruments measuring the same construct. We used a random-effects model to meta-analyse the outcome data. We rated the certainty of the evidence using GRADE and created two summary of findings tables.\nMain results\nWe included four studies with 297 enrolled participants; data were available for only 239 participants. Three studies compared corticosteroid (equivalent ≤ 8 mg dexamethasone) to placebo. One study compared corticosteroid (dexamethasone 4 mg) to an active comparator (modafinil 100 mg). There were insufficient data to evaluate subgroups, such as dose and duration of treatment. One study had a high risk of performance and detection bias due to lack of blinding, and one study had a high risk of attrition bias. Otherwise, we assessed risks of bias as low or unclear.\nComparison 1: corticosteroids compared with placebo\nParticipant-reported fatigue relief\nThere was no clear difference between corticosteroids and placebo (SMD -0.46, 95% CI -1.07 to 0.14; 3 RCTs, 165 participants, very low-certainty evidence) for relief of fatigue at one week of the intervention. 
We downgraded the certainty of the evidence three times for study limitations due to unclear risk of bias, imprecision, and inconsistency.\nAdverse events\nThere was no clear difference in the occurrence of adverse events between groups, but the evidence is very uncertain (3 RCTs, 165 participants; very low-certainty evidence).\nSerious adverse events\nThere was no clear difference in the occurrence of serious adverse events between groups, but the evidence is very uncertain (2 RCTs, 118 participants; very low-certainty evidence).\nQuality of life\nOne study reported QoL at one week using the Edmonton Symptom Assessment System (ESAS) well-being, and found no clear difference in QoL between groups (MD -0.58, 95% CI -1.93 to 0.77). Another study measured QoL using the Quality of Life Questionnaire for Cancer Patients Treated with Anticancer Drugs (QoL-ACD), and found no clear difference between groups. There was no clear difference between groups for either study, but the evidence is very uncertain (2 RCTs, 118 participants; very low-certainty evidence).\nComparison 2: corticosteroids compared with active comparator (modafinil)\nParticipant-reported fatigue relief\nThere was improvement in fatigue from baseline to two weeks in both groups (modafinil MD 10.15, 95% CI 7.43 to 12.87; dexamethasone MD 9.21, 95% CI 6.73 to 11.69), however no clear difference between the two groups (MD -0.94, 95% CI -4.49 to 2.61; 1 RCT, 73 participants, very low-certainty evidence). We downgraded the certainty of the evidence three times for very serious study limitations and imprecision.\nAdverse events\nThere was no clear difference in the occurrence of adverse events between groups, but the evidence is very uncertain (1 RCT, 73 participants; very low-certainty evidence).\nSerious adverse events\nThere were no serious adverse events reported in either group (1 RCT, 73 participants; very low-certainty evidence).\nQuality of life\nOne study measured QoL at two weeks, using the ESAS-well-being. There was marked improvement in QoL from baseline in both groups (modafinil MD -2.43, 95% CI -2.88 to -1.98; dexamethasone MD -2.16, 95% CI -2.68 to -1.64), however no clear difference between the two groups (MD 0.27, 95% CI -0.39 to 0.93; 1 RCT, 73 participants, very low-certainty evidence).\nAuthors' conclusions\nThere is insufficient evidence to support or refute the use of systemic corticosteroids in adults with cancer and CRF.\nWe included four small studies that provided very low-certainty evidence for the efficacy of corticosteroids in the management of CRF.\nFurther high-quality randomised controlled trials with larger sample sizes are required to determine the effectiveness of corticosteroids in this setting.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We wanted to find out if corticosteroid medicines were better than inactive medicine (placebo), or other medicines to improve fatigue associated with cancer. We also wanted to find out if corticosteroids were associated with unwanted side effects when used for fatigue caused by advanced cancer." } ]
query-laysum
23418
[ { "role": "user", "content": "Abstract: Background\nHypertensive disorders in pregnancy are significant contributors to maternal and perinatal morbidity and mortality. These disorders include well-controlled chronic hypertension, gestational hypertension (pregnancy-induced hypertension) and mild pre-eclampsia. The definitive treatment for these disorders is planned early delivery and the alternative is to manage the pregnancy expectantly if severe uncontrolled hypertension is not present, with close maternal and fetal monitoring. There are benefits and risks associated with both, so it is important to establish the safest option.\nObjectives\nTo assess the benefits and risks of a policy of planned early delivery versus a policy of expectant management in pregnant women with hypertensive disorders, at or near term (from 34 weeks onwards).\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth Trials Register (12 January 2016) and reference lists of retrieved studies.\nSelection criteria\nRandomised trials of a policy of planned early delivery (by induction of labour or by caesarean section) compared with a policy of delayed delivery (\"expectant management\") for women with hypertensive disorders from 34 weeks' gestation. Cluster-randomised trials would have been eligible for inclusion in this review, but we found none.\nStudies using a quasi-randomised design are not eligible for inclusion in this review. Similarly, studies using a cross-over design are not eligible for inclusion, because they are not a suitable study design for investigating hypertensive disorders in pregnancy.\nData collection and analysis\nTwo review authors independently assessed eligibility and risks of bias. Two review authors independently extracted data. Data were checked for accuracy.\nMain results\nWe included five studies (involving 1819 women) in this review.\nThere was a lower risk of composite maternal mortality and severe morbidity for women randomised to receive planned early delivery (risk ratio (RR) 0.69, 95% confidence interval (CI) 0.57 to 0.83, two studies, 1459 women (evidence graded high)). There were no clear differences between subgroups based on our subgroup analysis by gestational age, gestational week or condition. Planned early delivery was associated with lower risk of HELLP syndrome (RR 0.40, 95% CI 0.17 to 0.93, 1628 women; three studies) and severe renal impairment (RR 0.36, 95% CI 0.14 to 0.92, 100 women, one study).\nThere was not enough information to draw any conclusions about the effects on composite infant mortality and severe morbidity. We observed a high level of heterogeneity between the two studies in this analysis (two studies, 1459 infants, I2 = 87%, Tau2 = 0.98), so we did not pool data in meta-analysis. There were no clear differences between subgroups based on our subgroup analysis by gestational age, gestational week or condition. 
Planned early delivery was associated with higher levels of respiratory distress syndrome (RR 2.24, 95% CI 1.20 to 4.18, three studies, 1511 infants), and NICU admission (RR 1.65, 95% CI 1.13 to 2.40, four studies, 1585 infants).\nThere was no clear difference between groups for caesarean section (RR 0.91, 95% CI 0.78 to 1.07, 1728 women, four studies, evidence graded moderate), or in the duration of hospital stay for the mother after delivery of the baby (mean difference (MD) -0.16 days, 95% CI -0.46 to 0.15, two studies, 925 women, evidence graded moderate) or for the baby (MD -0.20 days, 95% CI -0.57 to 0.17, one study, 756 infants, evidence graded moderate).\nTwo fairly large, well-designed trials with overall low risk of bias contributed the majority of the evidence. Other studies were at low or unclear risk of bias. No studies attempted to blind participants or clinicians to group allocation, potentially introducing bias as women and staff would have been aware of the intervention and this may have affected aspects of care and decision-making.\nThe level of evidence was graded high (composite maternal mortality and morbidity), moderate (caesarean section, duration of hospital stay after delivery for mother, and duration of hospital stay after delivery for baby) or low (composite infant mortality and morbidity). Where the evidence was downgraded, it was mostly because the confidence intervals were wide, crossing both the line of no effect and appreciable benefit or harm.\nAuthors' conclusions\nFor women suffering from hypertensive disorders of pregnancy after 34 weeks, planned early delivery is associated with less composite maternal morbidity and mortality. There is no clear difference in the composite outcome of infant mortality and severe morbidity; however, this is based on limited data (from two trials) assessing all hypertensive disorders as one group.\nFurther studies are needed to look at the different types of hypertensive diseases and the optimal timing of delivery for these conditions. These studies should also include infant and maternal morbidity and mortality outcomes, caesarean section, duration of hospital stay after delivery for mother and duration of hospital stay after delivery for baby.\nAn individual patient meta-analysis on the data currently available would provide further information on the outcomes of the different types of hypertensive disease encountered in pregnancy.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched for evidence on 12 January 2016 and found five randomised studies, involving 1819 women. Two of the studies were large, high-quality studies, in women with gestational hypertension, mild pre-eclampsia or deteriorating existing hypertension at 34 to 37 weeks (704 women) or with gestational hypertension or mild pre-eclampsia at 36 to 41 weeks (756 women). Fewer women who received planned early delivery experienced severe adverse outcomes (1459 women, high-quality evidence). There was not enough information to draw any conclusions about the effects on the number of babies born with poor health, with a high level of variability between the two studies (1459 infants, low-quality evidence). 
There was no clear difference between planned early delivery and delayed delivery for the number of caesarean sections (four studies, 1728 women, moderate-quality evidence), the duration of the mother’s hospital stay after the birth of the baby (two studies, 925 women, moderate-quality evidence), or the duration of the baby’s hospital stay (one study, 756 infants, moderate-quality evidence). More babies who were delivered early had breathing problems (respiratory distress syndrome, three studies, 1511 infants), or were admitted to the neonatal unit (four studies, 1585 infants). Fewer women who delivered early developed HELLP syndrome (three studies, 1628 women) or severe kidney problems (one study, 100 women).\nTwo studies compared women who had labour induced at 34 to 36 weeks and at 34 to 37 weeks with a comparison group who were monitored until 37 weeks, when induction was begun if labour had not started spontaneously. Three studies compared induction of labour at term or closer to term, at 37 completed weeks and at 36 to 41 weeks, with women who were monitored until 41 weeks when induction was begun if labour had not started spontaneously. Other inclusion and exclusion criteria also differed between the five studies.\nNo studies attempted to blind the women or their clinicians to which group they were in. Women and staff were aware of the intervention and this may have affected aspects of care and decision-making. Most of the evidence was of moderate quality, so we can be moderately certain about the findings." } ]
query-laysum
23419
[ { "role": "user", "content": "Abstract: Background\nCorruption is the abuse or complicity in abuse, of public or private position, power or authority to benefit oneself, a group, an organisation or others close to oneself; where the benefits may be financial, material or non-material. It is wide-spread in the health sector and represents a major problem.\nObjectives\nOur primary objective was to systematically summarise empirical evidence of the effects of strategies to reduce corruption in the health sector. Our secondary objective was to describe the range of strategies that have been tried and to guide future evaluations of promising strategies for which there is insufficient evidence.\nSearch methods\nWe searched 14 electronic databases up to January 2014, including: CENTRAL; MEDLINE; EMBASE; sociological, economic, political and other health databases; Human Resources Abstracts up to November 2010; Euroethics up to August 2015; and PubMed alerts from January 2014 to June 2016. We searched another 23 websites and online databases for grey literature up to August 2015, including the World Bank, the International Monetary Fund, the U4 Anti-Corruption Resource Centre, Transparency International, healthcare anti-fraud association websites and trial registries. We conducted citation searches in Science Citation Index and Google Scholar, and searched PubMed for related articles up to August 2015. We contacted corruption researchers in December 2015, and screened reference lists of articles up to May 2016.\nSelection criteria\nFor the primary analysis, we included randomised trials, non-randomised trials, interrupted time series studies and controlled before-after studies that evaluated the effects of an intervention to reduce corruption in the health sector. For the secondary analysis, we included case studies that clearly described an intervention to reduce corruption in the health sector, addressed either our primary or secondary objective, and stated the methods that the study authors used to collect and analyse data.\nData collection and analysis\nOne review author extracted data from the included studies and a second review author checked the extracted data against the reports of the included studies. We undertook a structured synthesis of the findings. We constructed a results table and 'Summaries of findings' tables. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the certainty of the evidence.\nMain results\nNo studies met the inclusion criteria of the primary analysis. We included nine studies that met the inclusion criteria for the secondary analysis.\nOne study found that a package of interventions coordinated by the US Department of Health and Human Services and Department of Justice recovered a large amount of money and resulted in hundreds of new cases and convictions each year (high certainty of the evidence). Another study from the USA found that establishment of an independent agency to investigate and enforce efforts against overbilling might lead to a small reduction in overbilling, but the certainty of this evidence was very low. 
A third study from India suggested that the impacts of coordinated efforts to reduce corruption through increased detection and enforcement are dependent on continued political support and that they can be limited by a dysfunctional judicial system (very low certainty of the evidence).\nOne study in South Korea and two in the USA evaluated increased efforts to investigate and punish corruption in clinics and hospitals without establishing an independent agency to coordinate these efforts. It is unclear whether these were effective because the evidence is of very low certainty.\nOne study from Kyrgyzstan suggested that increased transparency and accountability for co-payments together with reduction of incentives for demanding informal payments may reduce informal payments (low certainty of the evidence).\nOne study from Germany suggested that guidelines that prohibit hospital doctors from accepting any form of benefits from the pharmaceutical industry may improve doctors’ attitudes about the influence of pharmaceutical companies on their choice of medicines (low certainty of the evidence).\nA study in the USA, evaluated the effects of introducing a law that required pharmaceutical companies to report the gifts they gave to healthcare workers. Another study in the USA evaluated the effects of a variety of internal control mechanisms used by community health centres to stop corruption. The effects of these strategies is unclear because the evidence was of very low certainty.\nAuthors' conclusions\nThere is a paucity of evidence regarding how best to reduce corruption. Promising interventions include improvements in the detection and punishment of corruption, especially efforts that are coordinated by an independent agency. Other promising interventions include guidelines that prohibit doctors from accepting benefits from the pharmaceutical industry, internal control practices in community health centres, and increased transparency and accountability for co-payments combined with reduced incentives for informal payments. The extent to which increased transparency alone reduces corruption is uncertain. There is a need to monitor and evaluate the impacts of all interventions to reduce corruption, including their potential adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The aim of this Cochrane review is to assess the effectiveness of strategies to reduce corruption in the health sector. Cochrane researchers searched for all potentially relevant studies, and found nine studies that met their criteria." } ]
query-laysum
23420
[ { "role": "user", "content": "Abstract: Background\nFollowing surgery, surgical wounds can be closed using a variety of devices including sutures (subcuticular or transdermal), staples and tissue adhesives. Subcuticular sutures are intradermal stitches (placed immediately below the epidermal layer). The increased availability of synthetic absorbable filaments (stitches which are absorbed by the body and do not have to be removed) has led to an increased use of subcuticular sutures. However, in non-obstetric surgery, there is still controversy about whether subcuticular sutures increase the incidence of wound complications.\nObjectives\nTo examine the efficacy and acceptability of subcuticular sutures for skin closure in non-obstetric surgery.\nSearch methods\nIn March 2019, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nAll randomised controlled trials which compared subcuticular sutures with any other methods for skin closure in non-obstetric surgery were included in the review.\nData collection and analysis\nTwo review authors independently identified the trials, extracted data and carried out risk of bias and GRADE assessment of the certainty of the evidence.\nMain results\nWe included 66 studies (7487 participants); 11 included trials had more than two arms. Most trials had poorly-reported methodology, meaning that it is unclear whether they were at high risk of bias. Most trials compared subcuticular sutures with transdermal sutures, skin staples or tissue adhesives. Most outcomes prespecified in the review protocol were reported. The certainty of evidence varied from high to very low in the comparisons of subcuticular sutures with transdermal sutures or staples and tissue adhesives; the certainty of the evidence for the comparison with surgical tapes and zippers was low to very low. Most evidence was downgraded for imprecision or risk of bias.\nAlthough the majority of studies enrolled people who underwent CDC class 1 (clean) surgeries, two-thirds of participants were enrolled in studies which included CDC class 2 to 4 surgeries, such as appendectomies and gastrointestinal surgeries. Most participants were adults in a hospital setting.\nSubcuticular sutures versus transdermal sutures\nThere may be little difference in the incidence of SSI (risk ratio (RR) 1.10; 95% confidence interval (CI) 0.80 to 1.52; 3107 participants; low-certainty evidence).\nIt is uncertain whether subcuticular sutures reduce wound complications (RR 0.83; 95% CI 0.40 to 1.71; 1489 participants; very low-certainty evidence). Subcuticular sutures probably improve patient satisfaction (score from 1 to 10) (at 30 days; MD 1.60, 95% CI 1.32 to 1.88; 290 participants; moderate-certainty evidence). 
Wound closure time is probably longer when subcuticular sutures are used (MD 5.81 minutes; 95% CI 5.13 to 6.49 minutes; 585 participants; moderate-certainty evidence).\nSubcuticular sutures versus skin staples\nThere is moderate-certainty evidence that, when compared with skin staples, subcuticular sutures probably have little effect on SSI (RR 0.81, 95% CI 0.64 to 1.01; 4163 participants); but probably decrease the incidence of wound complications (RR 0.79, 95% CI 0.64 to 0.98; 2973 participants). Subcuticular sutures are associated with slightly higher patient satisfaction (score from 1 to 5) (MD 0.20, 95% CI 0.10 to 0.30; 1232 participants; high-certainty evidence). Wound closure time may also be longer compared with staples (MD 0.30 to 5.50 minutes; 1384 participants; low-certainty evidence).\nSubcuticular sutures versus tissue adhesives, surgical tapes and zippers\nThere is moderate-certainty evidence showing no clear difference in the incidence of SSI between participants treated with subcuticular sutures and those treated with tissue adhesives (RR 0.77, 95% CI 0.41 to 1.45; 869 participants). There is also no clear difference in the incidence of wound complications (RR 0.62, 95% CI 0.35 to 1.11; 1058 participants; low-certainty evidence). Subcuticular sutures may also achieve lower patient satisfaction ratings (score from 1 to 10) (MD -2.05, 95% CI -3.05 to -1.05; 131 participants) (low-certainty evidence). In terms of SSI incidence, the evidence is uncertain when subcuticular sutures are compared with surgical tapes (RR 1.31, 95% CI 0.40 to 4.27; 354 participants; very low-certainty evidence) or surgical zippers (RR 0.80, 95% CI 0.08 to 8.48; 424 participants; very low-certainty evidence). There may be little difference in the incidence of wound complications between participants treated with subcuticular sutures and those treated with surgical tapes (RR 0.90, 95% CI 0.61 to 1.34; 492 participants; low-certainty evidence). It is uncertain whether subcuticular sutures reduce the risk of wound complications compared with surgical zippers (RR 0.55, 95% CI 0.15 to 2.04; 424 participants; very low-certainty evidence). It is also uncertain whether it takes longer to close a wound with subcuticular sutures compared with tissue adhesives (MD -0.34 to 10.39 minutes; 895 participants), surgical tapes (MD 0.74 to 6.36 minutes; 169 participants) or zippers (MD 4.38 to 8.25 minutes; 424 participants) (very low-certainty evidence). No study reported results for patient satisfaction compared with surgical tapes or zippers.\nAuthors' conclusions\nThere is no clear difference in the incidence of SSI for subcuticular sutures in comparison with any other skin closure methods. Subcuticular sutures probably reduce wound complications compared with staples, and probably improve patient satisfaction compared with transdermal sutures or staples. However, tissue adhesives may improve patient satisfaction compared with subcuticular sutures, and transdermal sutures and skin staples may be quicker to apply than subcuticular sutures. The quality of the evidence ranged from high to very low; evidence for almost all comparisons was subject to some limitations. There seems to be no need for additional new trials to explore the comparison with staples because there are high-quality studies with large sample sizes and some ongoing studies. 
However, there is a need for high-quality studies with large sample sizes, including long-term assessments, exploring the comparisons with transdermal sutures, tissue adhesives, tapes and zippers.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched for studies that had been published up to March 2019." } ]
query-laysum
23421
[ { "role": "user", "content": "Abstract: Background\nNew, minimally invasive surgeries have emerged as alternatives to transurethral resection of the prostate (TURP) for the management of lower urinary tract symptoms (LUTS) in men with benign prostatic hyperplasia (BPH). Aquablation is a novel, minimally invasive, water-based therapy, combining image guidance and robotics for the removal of prostatic tissue.\nObjectives\nTo assess the effects of Aquablation for the treatment of lower urinary tract symptoms in men with benign prostatic hyperplasia.\nSearch methods\nWe performed a comprehensive search using multiple databases (the Cochrane Library, MEDLINE, Embase, Scopus, Web of Science, and LILACS), trials registries, other sources of grey literature, and conference proceedings published up to 11 February 2019, with no restrictions on the language or status of publication.\nSelection criteria\nWe included parallel-group randomised controlled trials (RCTs) and cluster-RCTs, as well as non-randomised observational prospective studies with concurrent comparison groups in which participants with BPH who underwent Aquablation.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion at each stage, and undertook data extraction and 'Risk of bias' and GRADE assessments of the certainty of the evidence. We considered review outcomes measured up to and including 12 months after randomisation as short-term and beyond 12 months as long-term.\nMain results\nWe included one RCT with 184 participants comparing Aquablation to TURP. The mean age and International Prostate Symptom Score were 65.9 years and 22.6, respectively. The mean prostate volume was 53.2 mL. We only found short-term data for all outcomes based on a single randomised trial.\nPrimary outcomes\nUp to 12 months, Aquablation likely results in a similar improvement in urologic symptom scores to TURP (mean difference (MD) −0.06, 95% confidence interval (CI) −2.51 to 2.39; participants = 174; moderate-certainty evidence). We downgraded the evidence certainty by one level due to study limitations. Aquablation may also result in similar quality of life when compared to TURP (MD 0.27, 95% CI −0.24 to 0.78; participants = 174, low-certainty evidence). We downgraded the evidence certainty by two levels due to study limitations and imprecision. Aquablation may result in little to no difference in major adverse events (risk ratio (RR) 0.84, 95% CI 0.31 to 2.26; participants = 181, very low-certainty evidence) but we are very uncertain of this finding. This would correspond to 15 fewer major adverse events per 1000 participants (95% CI 64 fewer to 116 more). We downgraded the evidence certainty by one level for study limitations and two levels for imprecision.\nSecondary outcomes\nUp to 12 months, Aquablation may result in little to no difference in retreatments (RR 1.68, 95% CI 0.18 to 15.83; participants = 181, very low-certainty evidence) but we are very uncertain of this finding. This would correspond to 10 more retreatments per 1000 participants (95% CI 13 fewer to 228 more). 
We downgraded the evidence certainty by one level due to study limitations and two levels for imprecision.\nAquablation may result in little to no difference in erectile function as measured by International Index of Erectile Function questionnaire Erectile Function domain compared to TURP (MD 2.31, 95% CI −0.63 to 5.25; participants = 64, very low-certainty evidence), and may cause slightly less ejaculatory dysfunction than TURP, as measured by Male Sexual Health Questionnaire for Ejaculatory Dysfunction (MD 2.57, 95% CI 0.60 to 4.53; participants = 121, very low-certainty evidence). However, we are very uncertain of both findings. We downgraded the evidence certainty by two levels due to study limitations and one level for imprecision for both outcomes.\nWe did not find other prospective, comparative studies comparing Aquablation to TURP or other procedures such as laser ablation, enucleation, or other minimally invasive therapies.\nAuthors' conclusions\nBased on short-term (up to 12 months) follow-up, the effect of Aquablation on urological symptoms is probably similar to that of TURP (moderate-certainty evidence). The effect on quality of life may also be similar (low-certainty evidence). We are very uncertain whether patients undergoing Aquablation are at higher or lower risk for major adverse events (very low-certainty evidence). We are very uncertain whether Aquablation may result in little to no difference in erectile function but offer a small improvement in preservation of ejaculatory function (both very low-certainty evidence). These conclusions are based on a single study of men with a prostate volume up to 80 mL in size. Longer-term data and comparisons with other modalities appear critical to a more thorough assessment of the role of Aquablation for the treatment of LUTS in men with BPH.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We looked for all studies that compared Aquablation to transurethral resection of the prostate. We included both studies in which chance decided how men were treated and studies in which men and their urologist decided. We searched for studies up to 11 February 2019." } ]
query-laysum
23422
[ { "role": "user", "content": "Abstract: Background\nTo evaluate the effects of administering chemotherapy following surgery, or following surgery plus radiotherapy (known as adjuvant chemotherapy) in patients with early stage non-small cell lung cancer (NSCLC),we performed two systematic reviews and meta-analyses of all randomised controlled trials using individual participant data. Results were first published in The Lancet in 2010.\nObjectives\nTo compare, in terms of overall survival, time to locoregional recurrence, time to distant recurrence and recurrence-free survival:\nA. Surgery versus surgery plus adjuvant chemotherapy\nB. Surgery plus radiotherapy versus surgery plus radiotherapy plus adjuvant chemotherapy\nin patients with histologically diagnosed early stage NSCLC.\n(2)To investigate whether or not predefined patient subgroups benefit more or less from cisplatin-based chemotherapy in terms of survival.\nSearch methods\nWe supplemented MEDLINE and CANCERLIT searches (1995 to December 2013) with information from trial registers, handsearching relevant meeting proceedings and by discussion with trialists and organisations.\nSelection criteria\nWe included trials of a) surgery versus surgery plus adjuvant chemotherapy; and b) surgery plus radiotherapy versus surgery plus radiotherapy plus adjuvant chemotherapy, provided that they randomised NSCLC patients using a method which precluded prior knowledge of treatment assignment.\nData collection and analysis\nWe carried out a quantitative meta-analysis using updated information from individual participants from all randomised trials. Data from all patients were sought from those responsible for the trial. We obtained updated individual participant data (IPD) on survival, and date of last follow-up, as well as details of treatment allocated, date of randomisation, age, sex, histological cell type, stage, and performance status. To avoid potential bias, we requested information for all randomised patients, including those excluded from the investigators' original analyses. We conducted all analyses on intention-to-treat on the endpoint of survival. For trials using cisplatin-based regimens, we carried out subgroup analyses by age, sex, histological cell type, tumour stage, and performance status.\nMain results\nWe identified 35 trials evaluating surgery plus adjuvant chemotherapy versus surgery alone. IPD were available for 26 of these trials and our analyses are based on 8447 participants (3323 deaths) in 34 trial comparisons. There was clear evidence of a benefit of adding chemotherapy after surgery (hazard ratio (HR)= 0.86, 95% confidence interval (CI)= 0.81 to 0.92, p< 0.0001), with an absolute increase in survival of 4% at five years.\nWe identified 15 trials evaluating surgery plus radiotherapy plus chemotherapy versus surgery plus radiotherapy alone. IPD were available for 12 of these trials and our analyses are based on 2660 participants (1909 deaths) in 13 trial comparisons. There was also evidence of a benefit of adding chemotherapy to surgery plus radiotherapy (HR= 0.88, 95% CI= 0.81 to 0.97, p= 0.009). This represents an absolute improvement in survival of 4% at five years.\nFor both meta-analyses, we found similar benefits for recurrence outcomes and there was little variation in effect according to the type of chemotherapy, other trial characteristics or patient subgroup.\nWe did not undertake analysis of the effects of adjuvant chemotherapy on quality of life and adverse events. 
Quality of life information was not routinely collected during the trials, but where toxicity was assessed and mentioned in the publications, it was thought to be manageable. We considered the risk of bias in the included trials to be low.\nAuthors' conclusions\nResults from 47 trial comparisons and 11,107 patients demonstrate the clear benefit of adjuvant chemotherapy for these patients, irrespective of whether chemotherapy was given in addition to surgery or surgery plus radiotherapy. This is the most up-to-date and complete systematic review and individual participant data (IPD) meta-analysis that has been carried out.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Non-small cell lung cancer is the most common type of lung cancer. If the tumour is early stage, not too big and has not spread to other parts of the body, doctors usually operate to remove it. At the same time, they will also remove a bit of the lung, or the entire lung that has the tumour. They may also give radiotherapy (treatment with x-rays) after the operation, aiming to kill any remaining cancer cells. They may also give chemotherapy (drug treatment) after surgery to lower the risk of the cancer coming back. This treatment is called adjuvant chemotherapy.\n\nIn 1995, we did a systematic review and meta-analysis of individual participant data looking at adjuvant chemotherapy and surgery (with or without radiotherapy). It brought together information from all patients who took part in similar trials. These trials compared what happened to people with non-small cell lung cancer who were given chemotherapy after surgery (with or without radiotherapy) with those who had surgery without chemotherapy (with or without radiotherapy). We found that it was not clear whether chemotherapy helped patients with non-small cell lung cancer live longer.\n\nSince this study was completed, many new trials have been done. Therefore, we carried out a new systematic review and meta-analysis of individual participant data that included all trials, old and new. This study aimed to find out if giving chemotherapy after surgery (with or without radiotherapy) can a) help patients live longer, b) stop the cancer coming back (recurrence), and c) stop the cancer spreading to other parts of the body (metastases).\nWe carried out two studies called meta-analyses that included patients with non-small cell lung cancer that took part in randomised controlled trials comparing:\na) surgery versus surgery plus adjuvant chemotherapy; and\nb) surgery plus radiotherapy versus surgery plus radiotherapy plus adjuvant chemotherapy.\nResults were first published in the Lancet in 2010." } ]
query-laysum
23423
[ { "role": "user", "content": "Abstract: Background\nPoor retention of participants in randomised trials can lead to missing outcome data which can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to improve retention but few have been formally evaluated.\nObjectives\nTo quantify the effect of strategies to improve retention of participants in randomised trials and to investigate if the effect varied by trial setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Scopus, PsycINFO, CINAHL, Web of Science Core Collection (SCI-expanded, SSCI, CPSI-S, CPCI-SSH and ESCI) either directly with a specified search strategy or indirectly through the ORRCA database. We also searched the SWAT repository to identify ongoing or recently completed retention trials. We did our most recent searches in January 2020.\nSelection criteria\nWe included eligible randomised or quasi-randomised trials of evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance.\nData collection and analysis\nWe extracted data on: the retention strategy being evaluated; location of study; host trial setting; method of randomisation; numbers and proportions in each intervention and comparator group. We used a risk difference (RD) and 95% confidence interval (CI) to estimate the effectiveness of the strategies to improve retention. We assessed heterogeneity between trials. We applied GRADE to determine the certainty of the evidence within each comparison.\nMain results\nWe identified 70 eligible papers that reported data from 81 retention trials. We included 69 studies with more than 100,000 participants in the final meta-analyses, of which 67 studies evaluated interventions aimed at trial participants and two evaluated interventions aimed at trial staff involved in retention. All studies were in health care and most aimed to improve postal questionnaire response. Interventions were categorised into broad comparison groups: Data collection; Participants; Sites and site staff; Central study management; and Study design.\nThese intervention groups consisted of 52 comparisons, none of which were supported by high-certainty evidence as determined by GRADE assessment. There were four comparisons presenting moderate-certainty evidence, three supporting retention (self-sampling kits, monetary reward together with reminder or prenotification and giving a pen at recruitment) and one reducing retention (inclusion of a diary with usual follow-up compared to usual follow-up alone). Of the remaining studies, 20 presented GRADE low-certainty evidence and 28 presented very low-certainty evidence.\nOur findings do provide a priority list for future replication studies, especially with regard to comparisons that currently rely on a single study.\nAuthors' conclusions\nMost of the interventions we identified aimed to improve retention in the form of postal questionnaire response. There were few evaluations of ways to improve participants returning to trial sites for trial follow-up. None of the comparisons are supported by high-certainty evidence. 
Comparisons in the review where the evidence certainty could be improved with the addition of well-done studies should be the focus for future evaluations.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We identified 70 relevant articles, which reported 81 retention studies involving more than 100,000 participants, that had investigated different ways of trying to encourage randomised trial participants to provide data and stay in the trial. We organised these into broad comparison groups but, unfortunately, we are not able to say with confidence that any of the results we found is a true effect and not caused by other factors, such as flaws with the design of the studies. As such, the effect of ways to encourage people to stay involved in trials is still not clear and more research is needed to see if these retention methods really do work." } ]
query-laysum
23424
[ { "role": "user", "content": "Abstract: Background\nCoronary heart disease is the leading cause of mortality worldwide with approximately 7.4 million deaths each year. People with established coronary heart disease have a high risk of subsequent cardiovascular events including myocardial infarction, stroke, and cardiovascular death. Antibiotics might prevent such outcomes due to their antibacterial, antiinflammatory, and antioxidative effects. However, a randomised clinical trial and several observational studies have suggested that antibiotics may increase the risk of cardiovascular events and mortality. Furthermore, several non-Cochrane Reviews, that are now outdated, have assessed the effects of antibiotics for coronary heart disease and have shown conflicting results. No previous systematic review using Cochrane methodology has assessed the effects of antibiotics for coronary heart disease.\nObjectives\nWe assessed the benefits and harms of antibiotics compared with placebo or no intervention for the secondary prevention of coronary heart disease.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, LILACS, SCI-EXPANDED, and BIOSIS in December 2019 in order to identify relevant trials. Additionally, we searched TRIP, Google Scholar, and nine trial registries in December 2019. We also contacted 11 pharmaceutical companies and searched the reference lists of included trials, previous systematic reviews, and other types of reviews.\nSelection criteria\nRandomised clinical trials assessing the effects of antibiotics versus placebo or no intervention for secondary prevention of coronary heart disease in adult participants (≥18 years). Trials were included irrespective of setting, blinding, publication status, publication year, language, and reporting of our outcomes.\nData collection and analysis\nThree review authors independently extracted data. Our primary outcomes were all-cause mortality, serious adverse event according to the International Conference on Harmonization - Good Clinical Practice (ICH-GCP), and quality of life. Our secondary outcomes were cardiovascular mortality, myocardial infarction, stroke, and sudden cardiac death. Our primary time point of interest was at maximum follow-up. Additionally, we extracted outcome data at 24±6 months follow-up. We assessed the risks of systematic errors using Cochrane 'Rosk of bias' tool. We calculated risk ratios (RRs) with 95% confidence intervals (CIs) for dichotomous outcomes. We calculated absolute risk reduction (ARR) or increase (ARI) and number needed to treat for an additional beneficial outcome (NNTB) or for an additional harmful outcome (NNTH) if the outcome result showed a beneficial or harmful effect, respectively. The certainty of the body of evidence was assessed by GRADE.\nMain results\nWe included 38 trials randomising a total of 26,638 participants (mean age 61.6 years), with 23/38 trials reporting data on 26,078 participants that could be meta-analysed. Three trials were at low risk of bias and the 35 remaining trials were at high risk of bias. 
Trials assessing the effects of macrolides (28 trials; 22,059 participants) and quinolones (two trials; 4162 participants) contributed with the vast majority of the data.\nMeta-analyses at maximum follow-up showed that antibiotics versus placebo or no intervention seemed to increase the risk of all-cause mortality (RR 1.06; 95% CI 0.99 to 1.13; P = 0.07; I2 = 0%; ARI 0.48%; NNTH 208; 25,774 participants; 20 trials; high certainty of evidence), stroke (RR 1.14; 95% CI 1.00 to 1.29; P = 0.04; I2 = 0%; ARI 0.73%; NNTH 138; 14,774 participants; 9 trials; high certainty of evidence), and probably also cardiovascular mortality (RR 1.11; 95% CI 0.98 to 1.25; P = 0.11; I2= 0%; 4674 participants; 2 trials; moderate certainty of evidence). Little to no difference was observed when assessing the risk of myocardial infarction (RR 0.95; 95% CI 0.88 to 1.03; P = 0.23; I2 = 0%; 25,523 participants; 17 trials; high certainty of evidence). No evidence of a difference was observed when assessing sudden cardiac death (RR 1.08; 95% CI 0.90 to 1.31; P = 0.41; I2 = 0%; 4520 participants; 2 trials; moderate certainty of evidence).\nMeta-analyses at 24±6 months follow-up showed that antibiotics versus placebo or no intervention increased the risk of all-cause mortality (RR 1.25; 95% CI 1.06 to 1.48; P = 0.007; I2 = 0%; ARI 1.26%; NNTH 79 (95% CI 335 to 42); 9517 participants; 6 trials; high certainty of evidence), cardiovascular mortality (RR 1.50; 95% CI 1.17 to 1.91; P = 0.001; I2 = 0%; ARI 1.12%; NNTH 89 (95% CI 261 to 49); 9044 participants; 5 trials; high certainty of evidence), and probably also sudden cardiac death (RR 1.77; 95% CI 1.28 to 2.44; P = 0.0005; I2 = 0%; ARI 1.9%; NNTH 53 (95% CI 145 to 28); 4520 participants; 2 trials; moderate certainty of evidence). No evidence of a difference was observed when assessing the risk of myocardial infarction (RR 0.95; 95% CI 0.82 to 1.11; P = 0.53; I2 = 43%; 9457 participants; 5 trials; moderate certainty of evidence) and stroke (RR 1.17; 95% CI 0.90 to 1.52; P = 0.24; I2 = 0%; 9457 participants; 5 trials; high certainty of evidence).\nMeta-analyses of trials at low risk of bias differed from the overall analyses when assessing cardiovascular mortality at maximum follow-up. For all other outcomes, meta-analyses of trials at low risk of bias did not differ from the overall analyses.\nNone of the trials specifically assessed serious adverse event according to ICH-GCP.\nNo data were found on quality of life.\nAuthors' conclusions\nOur present review indicates that antibiotics (macrolides or quinolones) for secondary prevention of coronary heart disease seem harmful when assessing the risk of all-cause mortality, cardiovascular mortality, and stroke at maximum follow-up and all-cause mortality, cardiovascular mortality, and sudden cardiac death at 24±6 months follow-up. Current evidence does, therefore, not support the clinical use of macrolides and quinolones for the secondary prevention of coronary heart disease.\nFuture trials on the safety of macrolides or quinolones for the secondary prevention in patients with coronary heart disease do not seem ethical. In general, randomised clinical trials assessing the effects of antibiotics, especially macrolides and quinolones, need longer follow-up so that late-occurring adverse events can also be assessed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched various scientific databases from their inception to December 2019 and found 38 trials where people with coronary heart disease were randomly allocated to antibiotics versus placebo or no intervention. The 38 trials included 26,638 adults with a mean age of 61.6 years. 23 out of the 38 trials reported data on 26,078 participants that could be analysed. The vast majority of the data was contributed by trials assessing the effects of macrolide antibiotics (28 trials; 22,059 participants) and quinolone antibiotics (two trials; 4162 participants), while insufficient data were contributed by trials assessing the effects of tetracycline antibiotics (eight trials; 417 participants). Three trials were at low risk of bias and the remaining trials were at high risk of bias." } ]
query-laysum
23425
[ { "role": "user", "content": "Abstract: Background\nMetformin has been proposed as possibly a safer and more effective long-term treatment than the oral contraceptive pill (OCP) in women with polycystic ovary syndrome (PCOS). It is important to directly compare the efficacy and safety of metformin versus OCP in the long-term treatment of women with PCOS. This is an update of a Cochrane Review comparing insulin sensitising agents with the OCP and only includes studies on metformin.\nObjectives\nTo assess the effectiveness and safety of metformin versus the OCP (alone or in combination) in improving clinical, hormonal, and metabolic features of PCOS.\nSearch methods\nIn August 2019 we searched the Cochrane Gynaecology and Fertility Group Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase and CINAHL, the trial registers, handsearched references of the identified articles, and contacted experts in the field to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of the use of metformin versus the OCP (alone or in combination) for women with PCOS.\nData collection and analysis\nWe used standard methods recommended by Cochrane. The primary review outcomes were the clinical parameters of hirsutism and adverse events, both severe (requiring stopping of medication), and minor. In the presence of substantial heterogeneity (I2 statistic > 50), which could be explained by pre-specified subgroup analyses on the basis of BMI, we reported the subgroups separately.\nMain results\nThis is a substantive update. We identified 38 additional studies. We included 44 RCTs (2253 women), which comprised 39 RCTs on adult women (2047 women) and five RCTs on adolescent women (206 women). Evidence quality ranged from very low to low. The main limitations were risk of bias, imprecision and inconsistency.\nMetformin versus the OCP\nIn adult women, we are uncertain of the effect of metformin compared to the OCP on hirsutism in subgroup body mass index (BMI) < 25 kg/m2 (mean difference (MD) 0.38, 95% confidence interval (CI) -0.44 to 1.19, 3 RCTs, n = 134, I2 = 50%, very low-quality evidence) and subgroup BMI > 30 kg/m2 (MD -0.38, 95% CI -1.93 to 1.17; 2 RCTs, n = 85, I2 = 34%, low-quality evidence). Metformin may be less effective in improving hirsutism compared to the OCP in the subgroup BMI 25 kg/m2 to 30 kg/m2 (MD 1.92, 95% CI 1.21 to 2.64, 5 RCTs, n = 254, I2 = 0%, low-quality evidence). Metformin may increase severe gastro-intestinal adverse events rate compared to the OCP (Peto odds ratio (OR) 6.42, 95% CI 2.98 to 13.84, 11 RCTs, n = 602, I2 = 0%, low-quality evidence). Metformin may decrease the incidence of severe other adverse events compared to the OCP (Peto OR 0.20, 95% CI 0.09 to 0.44, 8 RCTs, n = 363, I2 = 0%, low-quality evidence). There were no trials reporting on minor adverse events.\nIn adolescents, we are uncertain whether there is a difference between Metformin and the OCP, on hirsutism and adverse events.\nMetformin versus metformin combined with the OCP\nIn adult women, metformin may be less effective in improving hirsutism compared to Metformin combined with the OCP (MD 1.36, 95% CI 0.62 to 2.11, 3 RCTs, n = 135, I2= 9%, low-quality evidence). 
We are uncertain if there was a difference between metformin and metformin combined with the OCP for severe gastro-intestinal adverse events (OR 0.74, 95% CI 0.21 to 2.53, 3 RCTs, n = 171, I2 = 0%, low-quality evidence), or for severe other adverse events (OR 0.56, 95% CI 0.11 to 2.82, 2 RCTs, n = 109, I2 = 44%, low-quality evidence). There were no trials reporting on minor adverse events. In adolescents, there were no trials for this comparison.\nThe OCP versus metformin combined with the OCP\nIn adult women, the OCP may be less effective in improving hirsutism compared to metformin combined with the OCP (MD 0.54, 95% CI 0.20 to 0.89, 6 RCTs, n = 389, I2= 1%, low-quality evidence). The OCP may decrease the incidence of severe gastro-intestinal adverse events compared to metformin combined with the OCP (OR 0.20, 95% CI 0.06 to 0.72, 5 RCTs, n = 228, I2 = 0%, low-quality evidence). We are uncertain if there is a difference between the OCP and metformin combined with the OCP for severe other adverse events (OR 1.61, 95% CI 0.49 to 5.37, 4 RCTs, n = 159, I2 = 12%, low-quality evidence). The OCP may decrease the incidence of minor (gastro-intestinal) adverse events compared to metformin combined with the OCP (OR 0.06, 95% CI 0.01 to 0.44, 2 RCTs, n = 98, I2 = 0%, low-quality evidence). In adolescents, we are uncertain whether there is a difference between the OCP, compared to metformin combined with the OCP, on hirsutism or adverse events.\nAuthors' conclusions\nIn adult women with PCOS, metformin may be less effective in improving hirsutism compared to the OCP in the subgroup BMI 25 kg/m2 to 30 kg/m2 but we are uncertain if there was a difference between metformin and the OCP in subgroups BMI < 25 kg/m2 and BMI > 30kg/m2. Compared to the OCP, metformin may increase the incidence of severe gastro-intestinal adverse events and decrease the incidence of severe other adverse events with no trials reporting on minor adverse events. Either metformin alone or the OCP alone may be less effective in improving hirsutism compared to metformin combined with the OCP. We are uncertain whether there is a difference between the OCP alone and metformin alone compared to metformin combined with the OCP for severe or minor adverse events except for the OCP versus metformin combined with the OCP where the OCP may decrease the incidence of severe and minor gastro-intestinal adverse events.\nIn adolescent women with PCOS, we are uncertain whether there is a difference between any of the comparisons for hirsutism and adverse events due to either no evidence or very low-quality evidence.\nFurther large well-designed RCTs that stratify for BMI are needed to evaluate metformin versus the OCP and combinations in women with PCOS, in particular adolescent women.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The evidence was of very low to low quality. The main limitations in the evidence were poor reporting of study methods and a lack of both precision and consistency in the results." } ]
query-laysum
23426
[ { "role": "user", "content": "Abstract: Background\nSelf-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving a wide geographic coverage of the target population, dealing with sensitive topics and are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected.\nObjectives\nTo assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects.\nSearch methods\nWe searched MEDLINE, EMBASE, PsycINFO, IEEEXplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform. We also searched the grey literature in OpenGrey, Mobile Active and ProQuest Dissertation & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches up to 12 and 13 April 2015.\nSelection criteria\nWe included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaire, both validated and non-validated. We also included data offered by both healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondent's adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies that were published in 2007 or after, as devices that became available during this time are compatible with the mobile operating system (OS) framework that focuses on apps.\nData collection and analysis\nTwo review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap. They then compared their forms to reach consensus. Through an initial systematic mapping on the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). 
We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate due to high levels of clinical and methodological diversity. We reported our findings for each outcome according to the setting in which the studies were conducted.\nMain results\nWe included 14 studies (15 records) with a total of 2275 participants; although we included only 2272 participants in the final analyses as there were missing data for three participants from one included study.\nRegarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and that all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas the other study did not find a significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were only reported in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire. Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates.\nAuthors' conclusions\nOur results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Overall, there is not enough evidence to make clear recommendations about the impact that apps may have on survey questionnaire responses. Data equivalence may not be affected as long as the intended clinical application of a survey questionnaire and its intended frequency of administration is the same whether or not apps are used. Future research may need to consider how the design of the user interaction, survey questionnaire and intervention may affect data equivalence and the other outcomes evaluated in this review." } ]
query-laysum
23427
[ { "role": "user", "content": "Abstract: Background\nPeople with chronic obstructive pulmonary disease (COPD) show considerable variation in symptoms, limitations, and well-being; this often complicates medical care. A multi-disciplinary and multi-component programme that addresses different elements of care could improve quality of life (QoL) and exercise tolerance, while reducing the number of exacerbations.\nObjectives\nTo compare the effectiveness of integrated disease management (IDM) programmes versus usual care for people with chronic obstructive pulmonary disease (COPD) in terms of health-related quality of life (QoL), exercise tolerance, and exacerbation-related outcomes.\nSearch methods\nWe searched the Cochrane Airways Group Register of Trials, CENTRAL, MEDLINE, Embase, and CINAHL for potentially eligible studies. Searches were current as of September 2020.\nSelection criteria\nRandomised controlled trials (RCTs) that compared IDM programmes for COPD versus usual care were included. Interventions consisted of multi-disciplinary (two or more healthcare providers) and multi-treatment (two or more components) IDM programmes of at least three months' duration.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. If required, we contacted study authors to request additional data. We performed meta-analyses using random-effects modelling. We carried out sensitivity analyses for the quality of included studies and performed subgroup analyses based on setting, study design, dominant intervention components, and region.\nMain results\nAlong with 26 studies included in the 2013 Cochrane Review, we added 26 studies for this update, resulting in 52 studies involving 21,086 participants for inclusion in the meta-analysis. Follow-up periods ranged between 3 and 48 months and were classified as short-term (up to 6 months), medium-term (6 to 15 months), and long-term (longer than 15 months) follow-up. Studies were conducted in 19 different countries. The mean age of included participants was 67 years, and 66% were male. Participants were treated in all types of healthcare settings, including primary (n =15), secondary (n = 22), and tertiary care (n = 5), and combined primary and secondary care (n = 10). Overall, the level of certainty of evidence was moderate to high.\nWe found that IDM probably improves health-related QoL as measured by St. George's Respiratory Questionnaire (SGRQ) total score at medium-term follow-up (mean difference (MD) -3.89, 95% confidence interval (CI) -6.16 to -1.63; 18 RCTs, 4321 participants; moderate-certainty evidence). A comparable effect was observed at short-term follow-up (MD -3.78, 95% CI -6.29 to -1.28; 16 RCTs, 1788 participants). However, the common effect did not exceed the minimum clinically important difference (MCID) of 4 points. There was no significant difference between IDM and control for long-term follow-up and for generic QoL.\nIDM probably also leads to a large improvement in maximum and functional exercise capacity, as measured by six-minute walking distance (6MWD), at medium-term follow-up (MD 44.69, 95% CI 24.01 to 65.37; 13 studies, 2071 participants; moderate-certainty evidence). 
The effect exceeded the MCID of 35 metres and was even greater at short-term (MD 52.26, 95% CI 32.39 to 72.74; 17 RCTs, 1390 participants) and long-term (MD 48.83, 95% CI 16.37 to 80.49; 6 RCTs, 7288 participants) follow-up.\nThe number of participants with respiratory-related admissions was reduced from 324 per 1000 participants in the control group to 235 per 1000 participants in the IDM group (odds ratio (OR) 0.64, 95% CI 0.50 to 0.81; 15 RCTs, median follow-up 12 months, 4207 participants; high-certainty evidence). Likewise, IDM probably results in a reduction in emergency department (ED) visits (OR 0.69, 95%CI 0.50 to 0.93; 9 RCTs, median follow-up 12 months, 8791 participants; moderate-certainty evidence), a slight reduction in all-cause hospital admissions (OR 0.75, 95%CI 0.57 to 0.98; 10 RCTs, median follow-up 12 months, 9030 participants; moderate-certainty evidence), and fewer hospital days per person admitted (MD -2.27, 95% CI -3.98 to -0.56; 14 RCTs, median follow-up 12 months, 3563 participants; moderate-certainty evidence).\nStatistically significant improvement was noted on the Medical Research Council (MRC) Dyspnoea Scale at short- and medium-term follow-up but not at long-term follow-up. No differences between groups were reported for mortality, courses of antibiotics/prednisolone, dyspnoea, and depression and anxiety scores. Subgroup analysis of dominant intervention components and regions of study suggested context- and intervention-specific effects. However, some subgroup analyses were marked by considerable heterogeneity or included few studies. These results should therefore be interpreted with caution.\nAuthors' conclusions\nThis review shows that IDM probably results in improvement in disease-specific QoL, exercise capacity, hospital admissions, and hospital days per person. Future research should evaluate which combination of IDM components and which intervention duration are most effective for IDM programmes, and should consider contextual determinants of implementation and treatment effect, including process-related outcomes, long-term follow-up, and cost-effectiveness analyses.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Chronic obstructive pulmonary disease (COPD) is a chronic lung disease and is a major cause of ill health worldwide. People with COPD feel the impact of the disease in their daily life through symptoms such as breathlessness and coughing and acute worsening of symptoms in lung attacks.\nDifferent healthcare providers, such as doctors, nurses, and physiotherapists, typically provide different types of care to people with COPD (e.g. prescribe medication, guide self-management, provide education, present exercise training). Previously, people with COPD could visit one or more different healthcare providers, and these providers would work independently. The goal of an integrated disease management (IDM) programme is to include different components of care by which different healthcare providers are co-operating and collaborating to provide more efficient care of better quality." } ]
query-laysum
23428
[ { "role": "user", "content": "Abstract: Background\nStimulus deprivation amblyopia (SDA) develops due to an obstruction to the passage of light secondary to a condition such as cataract. The obstruction prevents formation of a clear image on the retina. SDA can be resistant to treatment, leading to poor visual prognosis. SDA probably constitutes less than 3% of all amblyopia cases, although precise estimates of prevalence are unknown. In high-income countries, most people present under the age of one year; in low- to middle-income countries, people are likely to be older at the time of presentation. The mainstay of treatment is correction of the obstruction (e.g., removal of the cataract) and then occlusion of the better-seeing eye, but regimens vary, can be difficult to execute, and traditionally are believed to lead to disappointing results.\nObjectives\nTo evaluate the effectiveness of occlusion therapy for SDA in an attempt to establish realistic treatment outcomes and to examine evidence of any dose–response effect and assess the effect of the duration, severity, and causative factor on the size and direction of the treatment effect.\nSearch methods\nWe searched CENTRAL (2018, Issue 12), which contains the Cochrane Eyes and Vision Trials Register; Ovid MEDLINE; Embase.com; and five other databases. We used no date or language restrictions in the electronic searches. We last searched the databases on 12 December 2018.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) and controlled clinical trials of participants with unilateral SDA with visual acuity worse than 0.2 LogMAR or equivalent. We specified no restrictions for inclusion based upon age, gender, ethnicity, comorbidities, medication use, or the number of participants.\nData collection and analysis\nWe used standard Cochrane methodology.\nMain results\nWe identified no trials that met the inclusion criteria specified in the protocol for this review.\nAuthors' conclusions\nWe found no evidence from RCTs or quasi-randomized trials on the effectiveness of any treatment for SDA. RCTs are needed in order to evaluate the safety and effectiveness of occlusion, duration of treatment, level of vision that can be realistically achieved, effects of age at onset and magnitude of visual defect, optimum occlusion regimen, and factors associated with satisfactory and unsatisfactory outcomes with the use of various interventions for SDA.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "There is a lack of evidence from randomized controlled trials (trials in which participants are randomly assigned to one treatment group or another) regarding the effects of occlusion treatment for SDA." } ]
query-laysum
23429
[ { "role": "user", "content": "Abstract: Background\nTemporary interruption of cerebral blood flow during carotid endarterectomy can be avoided by using a shunt across the clamped section of the carotid artery. The shunt may improve the outcome. This is an update of a Cochrane review originally published in 1996 and previously updated in 2002, 2009, and 2014.\nObjectives\nTo assess the effect of routine versus selective or no shunting, and to assess the best method for selective shunting on death, stroke, and other complications in people undergoing carotid endarterectomy under general anaesthesia.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched April 2021), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2021, Issue 4), MEDLINE (1966 to April 2021), Embase (1980 to April 2021), and the Science Citation Index Expanded (SCI-EXPANDED) (1980 to April 2021). We also searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform, and handsearched relevant journals, conference proceedings, and reference lists.\nSelection criteria\nRandomised and quasi-randomised trials of routine shunting compared with no shunting or selective shunting, and trials that compared different shunting policies in people undergoing carotid endarterectomy.\nData collection and analysis\nThree independent review authors performed data extraction, selection, and analysis. A pooled Peto odds ratio (OR) and 95% confidence interval (CI) were computed for all outcomes of interest. Best and worse case scenarios were also calculated in case of unavailable data. Two authors independently assessed risk of bias, and quality of evidence using GRADE.\nMain results\nNo new trials were found for this updated review. Thus, six trials involving 1270 participants are included in this latest review: three trials involving 686 participants compared routine shunting with no shunting, one trial involving 200 participants compared routine shunting with selective shunting, one trial involving 253 participants compared selective shunting with and without near-infrared refractory spectroscopy monitoring, and the other trial involving 131 participants compared shunting with a combination of electroencephalographic and carotid pressure measurement with shunting by carotid pressure measurement alone. Only three trials comparing routine shunting and no shunting were eligible for meta-analysis. Major findings of this comparison found that the routine shunting had less risk of stroke-related death within 30 days of surgery (best case) than no shunting (Peto odds ratio (OR) 0.13, 95% confidence interval (CI) 0.02 to 0.96, I2 not applicable, P = 0.05, low-quality evidence), the routine shunting group had a lower stroke rate within 24 hours of surgery (Peto odds ratio (OR) 0.15, 95% CI 0.03 to 0.78, I2 = not applicable, P = 0.02, low-quality evidence), and ipsilateral stroke within 30 days of surgery (best case) (Peto OR 0.41, 95% CI 0.18 to 0.97, I2 = 52%, P = 0.04, low-quality evidence) than the no shunting group. No difference was found between the groups in terms of postoperative neurological deficit between selective shunting with and without near-infrared refractory spectroscopy monitoring. However, this analysis was inadequately powered to reliably detect the effect. 
There was no difference between the risk of ipsilateral stroke in participants selected for shunting with the combination of electroencephalographic and carotid pressure assessment compared with pressure assessment alone, although again the data were limited.\nAuthors' conclusions\nThis review concluded that the data available were too limited to either support or refute the use of routine or selective shunting in carotid endarterectomy when performed under general anaesthesia. Large-scale randomised trials of routine shunting versus selective shunting are required. No method of monitoring in selective shunting has been shown to produce better outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "A very limited number of trials suggested that routine shunting compared to no shunting in patients undergoing carotid endarterectomy under general anaesthesia resulted in a lower rate of stroke-related death within 30 days of surgery, a lower stroke rate within 24 hours of surgery, and a reduction in ipsilateral stroke within 30 days of surgery. More trials are needed." } ]
query-laysum
23430
[ { "role": "user", "content": "Abstract: Background\nVaricoceles are associated with male subfertility; however, the mechanisms by which varicoceles affect fertility have yet to be satisfactorily explained. Several treatment options exist, including surgical or radiological treatment, however the safest and most efficient treatment remains unclear.\nObjectives\nTo evaluate the effectiveness and safety of surgical and radiological treatment of varicoceles on live birth rate, adverse events, pregnancy rate, varicocele recurrence, and quality of life amongst couples where the adult male has a varicocele, and the female partner of childbearing age has no fertility problems.\nSearch methods\nWe searched the following databases on 4 April 2020: the Cochrane Gynaecology and Fertility Group Specialised Register, CENTRAL, MEDLINE, Embase, PsycINFO, and CINAHL. We also searched the trial registries and reference lists of articles.\nSelection criteria\nWe included randomised controlled trials (RCTs) if they were relevant to the clinical question posed and compared different forms of surgical ligation, different forms of radiological treatments, surgical treatment compared to radiological treatment, or one of these aforementioned treatment forms compared to non-surgical methods, delayed treatment, or no treatment. We extracted data if the studies reported on live birth, adverse events, pregnancy, varicocele recurrence, and quality of life.\nData collection and analysis\nScreening of abstracts and full-text publications, alongside data extraction and 'Risk of bias' assessment, were done dually using the Covidence software. When we had sufficient data, we calculated random-effects (Mantel-Haenszel) meta-analyses; otherwise, we reported results narratively. We used the I2 statistic to analyse statistical heterogeneity. We planned to use funnel plots to assess publication bias in meta-analyses with at least 10 included studies. We dually rated the risk of bias of studies using the Cochrane 'Risk of bias' tool, and the certainty of evidence for each outcome using the GRADE approach.\nMain results\nWe identified 1897 citations after de-duplicating the search results. We excluded 1773 during title and abstract screening. From the 113 new full texts assessed in addition to the 10 studies (11 references) included in the previous version of this review, we included 38 new studies, resulting in a total of 48 studies (59 references) in the review providing data for 5384 participants. Two studies (three references) are ongoing studies and two studies are awaiting classification.\nTreatment versus non-surgical, non-radiological, delayed, or no treatment\nTwo studies comparing surgical or radiological treatment versus no treatment reported on live birth with differing directions of effect. As a result, we are uncertain whether surgical or radiological treatment improves live birth rates when compared to no treatment (risk ratio (RR) 2.27, 95% confidence interval (CI) 0.19 to 26.93; 2 RCTs, N = 204; I2 = 74%, very low-certainty evidence). Treatment may improve pregnancy rates compared to delayed or no treatment (RR 1.55, 95% CI 1.06 to 2.26; 13 RCTs, N = 1193; I2 = 65%, low-certainty evidence). This suggests that couples with no or delayed treatment have a 21% chance of pregnancy, whilst the pregnancy rate after surgical or radiological treatment is between 22% and 48%. 
We identified no evidence on adverse events, varicocele recurrence, or quality of life for this comparison.\nSurgical versus radiological treatment\nWe are uncertain about the effect of surgical versus radiological treatment on live birth and on the following adverse events: hydrocele formation, pain, epididymitis, haematoma, and suture granuloma. We are uncertain about the effect of surgical versus radiological treatment on pregnancy rate (RR 1.13, 95% CI 0.75 to 1.70; 5 RCTs, N = 456, low-certainty evidence) and varicocele recurrence (RR 1.31, 95% CI 0.82 to 2.08; 3 RCTs, N = 380, low-certainty evidence). We identified no evidence on quality of life for this comparison.\nSurgery versus other surgical treatment\nWe identified 19 studies comparing microscopic subinguinal surgical treatment to any other surgical treatment. Microscopic subinguinal surgical treatment probably improves pregnancy rates slightly compared to other surgical treatments (RR 1.18, 95% CI 1.02 to 1.36; 12 RCTs, N = 1473, moderate-certainty evidence). This suggests that couples with microscopic subinguinal surgical treatment have a 10% to 14% chance of pregnancy after treatment, whilst the pregnancy rate in couples after other surgical treatments is 10%. This procedure also probably reduces the risk of varicocele recurrence (RR 0.48, 95% CI 0.29, 0.79; 14 RCTs, N = 1565, moderate-certainty evidence). This suggests that 0.4% to 1.1% of men undergoing microscopic subinguinal surgical treatment experience recurrent varicocele, whilst 1.4% of men undergoing other surgical treatments do. Results for the following adverse events were inconclusive: hydrocele formation, haematoma, abdominal distension, testicular atrophy, wound infection, scrotal pain, and oedema. We identified no evidence on live birth or quality of life for this comparison.\nNine studies compared open inguinal surgical treatment to retroperitoneal surgical treatment. Due to small sample sizes and methodological limitations, we identified neither treatment type as superior or inferior to the other regarding adverse events, pregnancy rates, or varicocele recurrence. We identified no evidence on live birth or quality of life for this comparison.\nRadiological versus other radiological treatment\nOne study compared two types of radiological treatment (sclerotherapy versus embolisation) and reported 13% varicocele recurrence in both groups. Due to the broad confidence interval, no valid conclusion could be drawn (RR 1.00, 95% CI 0.16 to 6.20; 1 RCT, N = 30, very low-certainty evidence). We identified no evidence on live birth, adverse events, pregnancy, or quality of life for this comparison.\nAuthors' conclusions\nBased on the limited evidence, it remains uncertain whether any treatment (surgical or radiological) compared to no treatment in subfertile men may be of benefit on live birth rates; however, treatment may improve the chances for pregnancy. The evidence was also insufficient to determine whether surgical treatment was superior to radiological treatment. However, microscopic subinguinal surgical treatment probably improves pregnancy rates and reduces the risk of varicocele recurrence compared to other surgical treatments. High-quality, head-to-head comparative RCTs focusing on live birth rate and also assessing adverse events and quality of life are warranted.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We are uncertain whether surgical or radiological treatment improves live birth rates when compared to no treatment. Treatment may improve pregnancy rates compared to delayed or no treatment. The evidence suggests that couples with no or delayed treatment have a 21% chance of pregnancy, whilst the pregnancy rate after surgical or radiological treatment is between 22% and 48%. Data were lacking on adverse events, varicocele recurrence, and quality of life.\nWe are uncertain about the effect of surgical versus radiological treatment on live birth, pregnancy rate, varicocele recurrence, and the adverse event hydrocele formation. Data were lacking on quality of life for this comparison.\nMicroscopic subinguinal surgical treatment probably improves pregnancy rates slightly compared to other surgical treatments. This suggests that couples with microscopic subinguinal surgical treatment have a 10% to 14% chance of pregnancy after treatment, whilst the pregnancy rate in couples after other surgical treatments is 10%. This procedure also probably reduces the risk of varicocele recurrence. This suggests that 0.4% to 1.1% of men undergoing microscopic subinguinal surgical treatment experience recurrent varicocele, whilst 1.4% of men undergoing other surgical treatments do. Results on adverse events were inconclusive. Data were lacking on live birth and quality of life.\nWe are uncertain about the effects of open inguinal surgical treatment versus retroperitoneal surgical treatment on adverse events, pregnancy rates, or varicocele recurrence. Data were lacking on live birth and quality of life.\nWe are uncertain about the effects of radiological treatment (sclerotherapy versus embolisation) on varicocele recurrence. Data were lacking on live birth, adverse events, pregnancy, and quality of life." } ]
query-laysum
23431
[ { "role": "user", "content": "Abstract: Background\nChronic pelvic pain is a common and debilitating condition; its aetiology is multifactorial, involving social, psychological and biological factors. The management of chronic pelvic pain is challenging, as despite interventions involving surgery, many women remain in pain without a firm gynaecological diagnosis.\nObjectives\nTo assess the effectiveness and safety of non-surgical interventions for women with chronic pelvic pain.\nSearch methods\nWe searched the Menstrual Disorders and Subfertility Group Specialised Register. We also searched (from inception to 5 February 2014) AMED, CENTRAL, MEDLINE, EMBASE, PsycINFO, CINAHL and LILACS. We handsearched sources such as citation lists, trial registers and conference proceedings.\nSelection criteria\nRandomised controlled trials (RCTs) on non-surgical management of chronic pelvic pain were eligible for inclusion. We included studies of women with a diagnosis of pelvic congestion syndrome or adhesions but excluded those with pain known to be caused by endometriosis, primary dysmenorrhoea (period pain), active chronic pelvic inflammatory disease or irritable bowel syndrome. We considered studies of any non-surgical intervention, including lifestyle, physical, medical and psychological treatments.\nData collection and analysis\nStudy selection, quality assessment and data extraction were performed independently by two review authors. Meta-analysis was performed using the Peto odds ratio (Peto OR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes, with 95% confidence intervals (CIs). The primary outcome measure was pain relief, and secondary outcome measures were psychological outcomes, quality of life, requirement for analgesia and adverse effects. The quality of the evidence was assessed by using GRADE methods.\nMain results\nTwenty-one RCTs were identified that involved non-surgical management of chronic pelvic pain: 13 trials were included in the review, and eight were excluded. The studies included a total of 750 women—406 women in the intervention groups and 344 in the control groups. Included studies had high attrition rates, and investigators often did not blind adequately or did not clearly describe randomisation procedures.\nMedical treatment versus placebo\nProgestogen (medroxyprogesterone acetate (MPA)) was more effective than placebo at the end of treatment in terms of the number of women achieving a greater than 50% reduction in visual analogue scale (VAS) pain score immediately after treatment (Peto OR 3.00, 95% CI 1.70 to 5.31, two studies, n = 204, I2 = 22%, moderate-quality evidence). Evidence of benefit was maintained up to nine months after treatment (Peto OR 2.09, 95% CI 1.18 to 3.71, two studies, n = 204, I2 = 0%, moderate-quality evidence). Women treated with progestogen reported more adverse effects (e.g. weight gain, bloatedness) than those given placebo (high-quality evidence). The estimated effect of lofexidine on pain outcomes when compared with placebo was compatible with benefit and harm (Peto OR 0.42, 95% CI 0.11 to 1.61, one study, 39 women, low-quality evidence). 
Women in the lofexidine group reported more adverse effects (including drowsiness and dry mouth) than women given placebo (moderate-quality evidence).\nHead-to-head comparisons of medical treatments\nHead-to-head comparisons showed that women taking goserelin had greater improvement in pelvic pain score (MD 3, 95% CI 2.08 to 3.92, one study, n = 47, moderate-quality evidence) at one year than those taking progestogen. Women taking gabapentin had a lower VAS pain score than those taking amitriptyline (MD -1.50, 95% CI -2.06 to -0.94, n = 40, low-quality evidence). Study authors reported that no statistically significant difference was observed in the rate of adverse effects among women taking gabapentin compared with women given amitriptyline. The study comparing goserelin versus progestogen did not report on adverse effects.\nPsychological treatment\nWomen who underwent reassurance ultrasound scans and received counselling were more likely to report improved pain than those treated with a standard 'wait and see' policy (Peto OR 6.77, 95% CI 2.83 to 16.19, n = 90, low-quality evidence). Significantly more women who had writing therapy as a disclosure reported improvement in pain than those in the non-disclosure group (Peto OR 4.47, 95% CI 1.41 to 14.13, n = 48, very low-quality evidence). No difference between groups in pain outcomes was noted when other psychological therapies were compared with standard care or placebo (quality of evidence ranged from very low to low). Studies did not report on adverse effects.\nComplementary therapy\nDistension of painful pelvic structures was more effective for pain when compared with counselling (MD 35.8, 95% CI 23.08 to 48.52 on a zero to 100 scale, one study, n = 48, moderate-quality evidence). No difference in pain levels was observed when magnetic therapy was compared with use of a control magnet (very low-quality evidence). Studies did not report on adverse effects.\nThe results of studies examining psychological and complementary therapies could not be combined to yield meaningful results.\nAuthors' conclusions\nEvidence of moderate quality supports progestogen as an option for chronic pelvic pain, with efficacy reported during treatment. In practice, this option may be most acceptable among women unconcerned about progestogenic adverse effects (e.g. weight gain, bloatedness—the most common adverse effects). Although some evidence suggests possible benefit of goserelin when compared with progestogen, gabapentin as compared with amitriptyline, ultrasound versus 'wait and see' and writing therapy versus non-disclosure, the quality of evidence is generally low, and evidence is drawn from single studies.\nGiven the prevalence and healthcare costs associated with chronic pelvic pain in women, RCTs of other medical, lifestyle and psychological interventions are urgently required.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The review concludes that evidence shows improvement of pain in women given a high dose of progestogen (50 mg medroxyprogesterone acetate) immediately post-treatment and for up to nine months after treatment. However, progestogen was associated with adverse effects such as weight gain and bloating.
Women who underwent reassurance ultrasound scans and who received counselling were more likely to report improved pain than those whose treatment involved a 'wait and see' policy. Some evidence of benefit was seen with writing disclosure therapy and with distension of painful pelvic structures. No good evidence of benefit was noted with other interventions when compared with standard care or placebo.\nThe quality of the evidence was low or moderate for most comparisons, and in most cases evidence was derived from single small studies. Moreover, we were unable to draw meaningful conclusions on quality of life and physical and functional outcomes because of the large variation in outcome measures used by the included studies. Many interventions identified in this review involved only single studies with small sample sizes. Additional studies will be required in the future to replicate results obtained with the use of specific medical interventions." } ]
query-laysum
23432
[ { "role": "user", "content": "Abstract: Background\nGilles de la Tourette syndrome, or Tourette’s syndrome, is defined as the presence of both motor and vocal (phonic) tics for more than 12 months, that manifest before the age of 18 years, in the absence of secondary causes. Treatment of motor and phonic tics is difficult and challenging.\nObjectives\nTo determine the safety and effectiveness of botulinum toxin in treating motor and phonic tics in people with Tourette's syndrome, and to analyse the effect of botulinum toxin on premonitory urge and sensory tics.\nSearch methods\nWe searched the Cochrane Movement Disorders Group Trials Register, CENTRAL, MEDLINE, and two trials registers to 25 October 2017. We reviewed reference lists of relevant articles for additional trials.\nSelection criteria\nWe considered all randomised, controlled, double-blind studies comparing botulinum toxin to placebo or other medications for the treatment of motor and phonic tics in Tourette’s syndrome for this review. We sought both parallel group and cross-over studies of children or adults, at any dose, and for any duration.\nData collection and analysis\nWe followed standard Cochrane methods to select studies, assess risk of bias, extract and analyse data. All authors independently abstracted data onto standardized forms; disagreements were resolved by mutual discussion.\nMain results\nOnly one randomised placebo-controlled, double-blind cross-over study met our selection criteria. In this study, 20 participants with motor tics were enrolled over a three-year recruitment period; 18 (14 of whom had a diagnosis of Tourette’s syndrome) completed the study; in total, 21 focal motor tics were treated. Although we considered most bias domains to be at low risk of bias, the study recruited a small number of participants with relatively mild tics and provided limited data for our key outcomes. The effects of botulinum toxin injections on tic frequency, measured by videotape or rated subjectively, and on premonitory urge, are uncertain (very low-quality evidence). The quality of evidence for adverse events following botulinum toxin was very low. Nine people had muscle weakness following the injection, which could have led to unblinding of treatment group assignment. No data were available to evaluate whether botulinum injections led to immunoresistance to botulinum.\nAuthors' conclusions\nWe are uncertain about botulinum toxin effects in the treatment of focal motor and phonic tics in select cases, as we assessed the quality of the evidence as very low. Additional randomised controlled studies are needed to demonstrate the benefits and harms of botulinum toxin therapy for the treatment of motor and phonic tics in patients with Tourette’s syndrome.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "People with Tourette's syndrome often make repetitive and sudden movements (motor) or vocal noises (phonic) called tics. Botulinum toxin injections are used to treat motor and phonic tics in patients with Tourette’s syndrome. They are thought to weaken the mechanism in the body that makes the tics happen. There are conflicting reports regarding their effectiveness." } ]
query-laysum
23433
[ { "role": "user", "content": "Abstract: Background\nIncreasing age is associated with a natural decline in cognitive function and is the greatest risk factor for dementia. Cognitive decline and dementia are significant threats to independence and quality of life in older adults. Therefore, identifying interventions that help to maintain cognitive function in older adults or that reduce the risk of dementia is a research priority. Cognitive training uses repeated practice on standardised exercises targeting one or more cognitive domains and may be intended to improve or maintain optimal cognitive function. This review examines the effects of computerised cognitive training interventions lasting at least 12 weeks on the cognitive function of healthy adults aged 65 or older and has formed part of a wider project about modifying lifestyle to maintain cognitive function. We chose a minimum 12 weeks duration as a trade-off between adequate exposure to a sustainable intervention and feasibility in a trial setting.\nObjectives\nTo evaluate the effects of computerised cognitive training interventions lasting at least 12 weeks on cognitive function in cognitively healthy people in late life.\nSearch methods\nWe searched to 31 March 2018 in ALOIS (www.medicine.ox.ac.uk/alois), and we performed additional searches of MEDLINE, Embase, PsycINFO, CINAHL, ClinicalTrials.gov, and the WHO Portal/ICTRP (www.apps.who.int/trialsearch), to ensure that the search was as comprehensive and as up-to-date as possible to identify published, unpublished, and ongoing trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs, published or unpublished, reported in any language. Participants were cognitively healthy people, and at least 80% of the study population had to be aged 65 or older. Experimental interventions adhered to the following criteria: intervention was any form of interactive computerised cognitive intervention - including computer exercises, computer games, mobile devices, gaming console, and virtual reality - that involved repeated practice on standardised exercises of specified cognitive domain(s) for the purpose of enhancing cognitive function; the duration of the intervention was at least 12 weeks; cognitive outcomes were measured; and cognitive training interventions were compared with active or inactive control interventions.\nData collection and analysis\nWe performed preliminary screening of search results using a 'crowdsourcing' method to identify RCTs. At least two review authors working independently screened the remaining citations against inclusion criteria. At least two review authors also independently extracted data and assessed the risk of bias of included RCTs. Where appropriate, we synthesised data in random-effects meta-analyses, comparing computerised cognitive training (CCT) separately with active and inactive controls. We expressed treatment effects as standardised mean differences (SMDs) with 95% confidence intervals (CIs). We used GRADE methods to describe the overall quality of the evidence for each outcome.\nMain results\nWe identified eight RCTs with a total of 1183 participants. The duration of the interventions ranged from 12 to 26 weeks; in five trials, the duration of intervention was 12 or 13 weeks. The included studies had moderate risk of bias, and the overall quality of evidence was low or very low for all outcomes.\nWe compared CCT first against active control interventions, such as watching educational videos. 
Negative SMDs favour CCT over control. Trial results suggest slight improvement in global cognitive function at the end of the intervention period (12 weeks) (standardised mean difference (SMD) -0.31, 95% confidence interval (CI) -0.57 to -0.05; 232 participants; 2 studies; low-quality evidence). One of these trials also assessed global cognitive function 12 months after the end of the intervention; this trial provided no clear evidence of a persistent effect (SMD -0.21, 95% CI -0.66 to 0.24; 77 participants; 1 study; low-quality evidence). CCT may result in little or no difference at the end of the intervention period in episodic memory (12 to 17 weeks) (SMD 0.06, 95% CI -0.14 to 0.26; 439 participants; 4 studies; low-quality evidence) or working memory (12 to 16 weeks) (SMD -0.17, 95% CI -0.36 to 0.02; 392 participants; 3 studies; low-quality evidence). Because of the very low quality of the evidence, we are very uncertain about the effects of CCT on speed of processing and executive function.\nWe also compared CCT to inactive control (no interventions). We found no data on our primary outcome of global cognitive function. At the end of the intervention, CCT may lead to slight improvement in episodic memory (6 months) (mean difference (MD) in Rivermead Behavioural Memory Test (RBMT) -0.90 points, 95% confidence interval (CI) -1.73 to -0.07; 150 participants; 1 study; low-quality evidence) but can have little or no effect on executive function (12 weeks to 6 months) (SMD -0.08, 95% CI -0.31 to 0.15; 292 participants; 2 studies; low-quality evidence), working memory (16 weeks) (MD -0.08, 95% CI -0.43 to 0.27; 60 participants; 1 study; low-quality evidence), or verbal fluency (6 months) (MD -0.11, 95% CI -1.58 to 1.36; 150 participants; 1 study; low-quality evidence). We could not determine any effects on speed of processing because the evidence was of very low quality.\nWe found no evidence on quality of life, activities of daily living, or adverse effects in either comparison.\nAuthors' conclusions\nWe found low-quality evidence suggesting that immediately after completion of the intervention, small benefits of CCT may be seen for global cognitive function when compared with active controls, and for episodic memory when compared with an inactive control. These benefits are of uncertain clinical importance. We found no evidence that the effect on global cognitive function persisted 12 months later. Our confidence in the results was low, reflecting the overall quality of the evidence. In five of the eight trials, the duration of the intervention was just three months. The possibility that more extensive training could yield larger benefit remains to be more fully explored.\nWe found substantial literature on cognitive training, and collating all available scientific information posed problems. Duration of treatment may not be the best way to categorise interventions for inclusion. As the primary interest of older people and of guideline writers and policymakers involves sustained cognitive benefit, an alternative would be to categorise by length of follow-up after selecting studies that assess longer-term effects.\n\nGiven the provided abstract, please respond to the following query: \"What we did\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched the medical literature up to 15 March 2018 for trials that compared cognitive function of people aged 65 or older who had taken part in computerised cognitive training for a minimum of 12 weeks with cognitive function of a control group that had not done so. All participants should have been cognitively healthy at the start of the trials. For the comparison to be as fair as possible, it should have been decided randomly whether participants were in the cognitive training group or in the control group. We were primarily interested in overall measures of cognition. The choice of three months for the intervention was somewhat arbitrary, but we thought very short interventions were unlikely to have lasting effects, and we were interested in interventions that could be sustained over time." } ]
query-laysum
23434
[ { "role": "user", "content": "Abstract: Background\nComposite filling materials have been increasingly used for the restoration of posterior teeth in recent years as a tooth-coloured alternative to amalgam. As with any filling material composites have a finite life-span. Traditionally, replacement was the ideal approach to treat defective composite restorations, however, repairing composites offers an alternative more conservative approach to the tooth structure where restorations are partly still serviceable. Repairing the restoration has the potential of taking less time and may sometimes be performed without the use of local anaesthesia hence it may be less distressing for a patient when compared with replacement.\nObjectives\nTo evaluate the effects of replacing (with resin composite) versus repair (with resin composite) in the management of defective resin composite dental restorations in permanent molar and premolar teeth.\nSearch methods\nFor the identification of studies relevant to this review we searched the Cochrane Oral Health Group's Trials Register (to 24 July 2013); the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 6); MEDLINE via OVID (1946 to 24 July 2013); EMBASE via OVID (1980 to 24 July 2013); BIOSIS via Web of Knowledge (1969 to 24 July 2013); Web of Science (1945 to 24 July 2013); and OpenGrey (to 24 July 2013). Researchers, experts and organisations known to be involved in this field were contacted in order to trace unpublished or ongoing studies. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nTrials were selected if they met the following criteria: randomised controlled trial (including split-mouth studies), involving replacement and repair of resin composite restorations in adults with a defective molar restoration in a permanent molar or premolar teeth.\nData collection and analysis\nTwo review authors independently assessed titles and abstracts for each article identified by the searches in order to decide whether the article was likely to be relevant. Full papers were obtained for relevant articles and both review authors studied these. The Cochrane Collaboration statistical guidelines were to be followed for data synthesis.\nMain results\nThe search strategy retrieved 298 potentially eligible studies, after de-duplication. After examination of the titles and abstracts, full texts of potentially relevant studies were retrieved but none of the retrieved studies met the inclusion criteria of the review.\nAuthors' conclusions\nThere are no published randomised controlled trials relevant to this review question. There is therefore a need for methodologically sound randomised controlled trials that are reported according to the Consolidated Standards of Reporting Trials (CONSORT) statement (www.consort-statement.org/). Further research also needs to explore qualitatively the views of patients on repairing versus replacement and investigate themes around pain, anxiety and distress, time and costs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "No trials were found that were suitable for inclusion in this review." } ]
query-laysum
23435
[ { "role": "user", "content": "Abstract: Background\nEmbryo transfer (ET) was traditionally performed two days after oocyte retrieval; however, developments in culture media have allowed embryos to be maintained in culture for longer periods. Delaying transfer from Day two to Day three would allow for further development of the embryo and might have a positive effect on pregnancy outcomes.\nObjectives\nTo determine if there are any differences in live birth and pregnancy rates when embryo transfer is performed on day three after oocyte retrieval, compared with day two, in infertile couples undergoing treatment with in vitro fertilisation (IVF), including intracytoplasmic sperm injection (ICSI).\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register of Controlled Trials, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), Embase (Ovid), PsycINFO (Ovid) from the inception of the databases to 26th April 2016. We also searched ClinicalTrials.gov and the WHO portal for ongoing trials plus citation lists of relevant publications, review articles and included studies, as well as abstracts of appropriate scientific meetings.\nSelection criteria\nRandomised controlled trials that compared Day 3 versus Day 2 embryo transfer after oocyte retrieval during an IVF or ICSI treatment cycle in infertile couples.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We contacted study authors for additional information. The primary outcome measures were live birth rate and ongoing pregnancy rate.\nMain results\nWe included 15 studies. Fourteen studies reported data per woman (2894 women) and one study reported data per cycle (969 cycles). The quality of the evidence using the GRADE approach ranged from moderate quality to very low quality. The main reasons for downgrading evidence were poor methodological reporting, selective reporting, inconsistency and imprecision.\nLive birth per woman - Overall, there was no evidence of a difference in live birth rate between Day three and Day two embryo transfer (risk ratio (RR) 1.05, 95% confidence interval (CI) 0.89 to 1.23; three studies, n = 1200 women; I2 = 63%; very low quality evidence). The data suggest that if 32% of women who underwent a Day two embryo transfer had a live birth, then between 28% to 39% of women undergoing a Day three embryo transfer would have a live birth.\nOngoing pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for ongoing pregnancy (RR 0.98, 95% CI 0.85 to 1.12; six studies, n = 1740 women; I2 = 52%; very low quality of evidence). The data suggest that if 33% of women undergoing a Day two embryo transfer had an ongoing pregnancy then between 28% to 37% of women undergoing a Day three embryo transfer would have an ongoing pregnancy.\nClinical pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the chance of a clinical pregnancy (RR 1.08, 95% CI 0.98 to 1.19; 12 studies, n = 2461, I2 = 51%; very low quality evidence). 
The data suggest that if 39% of women undergoing Day two embryo transfer had a clinical pregnancy, then between 38% to 46% of women undergoing a Day three embryo transfer would have a clinical pregnancy.\nMultiple pregnancy per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of a multiple pregnancy (RR 1.12, 95% CI 0.86 to 1.44; eight studies, n = 1837; I2 = 0%; moderate quality evidence). The data suggest that if 11% of women undergoing Day two embryo transfer had a multiple pregnancy, then between 9% to 15% of women undergoing a Day three embryo transfer would have a multiple pregnancy.\nMiscarriage rate per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of miscarriage (RR 1.16, 95% CI 0.84 to 1.60; nine studies, n = 2153 women, I2 = 26%; moderate quality evidence). The data suggest that if 6% of women undergoing Day two embryo transfer had a miscarriage, then between 5% to 10% of women undergoing a Day three embryo transfer would have a miscarriage.\nEctopic pregnancy rate per woman - There was no evidence of a difference between Day three and Day two embryo transfer for the risk of ectopic pregnancy (RR 0.99, 95% CI 0.29 to 3.40; six studies, n = 1531 women, I2 = 0%; low quality evidence). The data suggest that if 0.7% of women undergoing Day two embryo transfer have an ectopic pregnancy, then between 0.2% to 2% of women undergoing Day three embryo transfer would have an ectopic pregnancy.\nSubgroup analysis for pregnancy outcomes did not identify any differential effect between IVF and ICSI.\nNone of the included studies prespecified complication rate (e.g. OHSS), fetal abnormality or women's evaluation of the procedure as outcomes in their studies.\nAuthors' conclusions\nTwelve of 15 studies contributed data that could be included in meta-analyses. The quality of the evidence ranged from moderate to very low. Only three of the 15 studies reported data for live birth, although the data for ongoing pregnancy and clinical pregnancy are consistent with the live birth data, suggesting no difference between Day three and Day two embryo transfer for these outcomes. There was no evidence of a difference identified between Day three and Day two embryo transfer for multiple pregnancy, miscarriage or ectopic pregnancy per woman randomised. No data were reported for complication rate, fetal abnormality or woman's evaluation of the procedure. The current evidence has not identified any evidence of differences in pregnancy outcomes between Day two and Day three embryo transfers. Any further studies comparing these timings of embryo transfer are unlikely to alter the findings and we suggest that this review no longer be updated.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Allocation concealment was poorly reported in the included studies and blinding was not possible (although we feel this is unlikely to affect pregnancy outcomes). Blinding of outcome assessors was not reported. The quality of the evidence ranged from moderate to very low. 
The main reasons for downgrading the evidence were poor reporting of study methods (risk of bias), lack of agreement between studies (inconsistency), low event rates and lack of accuracy (imprecision) for some outcomes and poor reporting of live birth outcomes (selective reporting).\nAny further studies comparing these timings of embryo transfer are unlikely to alter the findings and we do not plan to update this review again. Many of the trials included in this review have used outdated techniques that include stimulation, laboratory technology and transferring more than one embryo. We would direct the reader to the Glujovsky 2016 Cochrane review comparing Day 2/3 with day 5/6 embryo transfer." } ]
query-laysum
23436
[ { "role": "user", "content": "Abstract: Background\nRemote ischaemic conditioning (RIC) has been developed as a neuroprotective strategy to prevent and treat ischaemic stroke. It usually involves restricting blood flow to limbs and then releasing the ischaemic blood to promote a neuroprotective effect. Preclinical studies have suggested that RIC may have beneficial effects in ischaemic stroke patients and those at risk of ischaemic stroke. However, existing evidence is insufficient to demonstrate the efficacy and safety of RIC in preventing and treating ischaemic stroke.\nObjectives\nTo assess the benefits and harms of RIC for preventing ischaemic stroke and for treating people with ischaemic stroke and those at risk for ischaemic stroke.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (16 January 2018), the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 12) in the Cochrane Library (January 2018), MEDLINE Ovid (1946 to January 2018), Embase Ovid (1974 to January 2018), Web of Science Core Collection (1950 to January 2018) and three Chinese databases (January 2018). We also searched four ongoing trials registers, reference lists, and conference proceedings.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing RIC with sham RIC or medical management in people with ischaemic stroke or at risk of ischaemic stroke.\nData collection and analysis\nTwo review authors independently selected studies, assessed trial quality and risk of bias, and extracted data. We used the GRADE approach to assess the quality of the evidence.\nMain results\nWe included seven trials, involving 735 participants, in this review. We analysed the effects of RIC on preventing and treating ischaemic stroke respectively.\nWe evaluated risk of bias and judged it to be low for generation of allocation sequence in six studies and unclear in one study; unclear for allocation concealment in four studies and low in three studies; high for incomplete outcome data (attrition bias) in five studies and low in two studies; high for blinding in three studies and low in four studies; low for selective reporting; and high for other sources of bias in six studies and low in one study.\nWe included three trials (involving 371 participants) in the analysis of the effects of RIC on ischaemic stroke prevention. In people with symptomatic intracerebral artery stenosis, recurrent stroke was significantly reduced by RIC (risk ratio (RR) 0.32, 95% confidence interval (CI) 0.12 to 0.83; 2 trials, 182 participants, low-quality evidence). In people with carotid stenosis undergoing carotid stenting, there was no significant difference in the incidence of ischaemic stroke between participants treated with RIC and non-RIC (RR 0.22, 95% CI 0.01 to 4.03; 1 trial, 189 participants, low-quality evidence); however the stroke severity (assessed by infarct volume) was significantly lower in participants treated with RIC (mean difference (MD) -0.17 mL, 95% CI -0.23 to -0.11; 1 trial, 189 participants, low-quality evidence). Adverse events associated with RIC were significantly higher in participants treated with RIC (RR 10.91; 95% CI 2.01 to 59.28; 3 trials, 371 participants, low-quality evidence), but no severe adverse event was attributable to RIC treatment. 
No participants experienced death or cardiovascular events during the period of the studies, and no trial reported haemorrhagic stroke or improvement in neurological, psychological or cognitive impairment.\nWe included four trials (involving 364 participants) in the analysis of the effects of RIC on ischaemic stroke treatment. In acute ischaemic stroke, for people receiving intravenous thrombolysis, the rate of death or dependency was significantly increased by RIC treatment compared with non-RIC treatment (RR 2.34, 95% CI 1.19 to 4.61; 1 trial, 285 participants, low-quality evidence). In people with acute ischaemic stroke, there was no significant difference between RIC and non-RIC for reducing stroke severity as assessed by the National Institutes of Health Stroke Scale score and the final infarct volume (standardised mean difference (SMD) -0.24 mL, 95% CI -1.02 to 0.54; 2 trials, 175 participants, very low quality evidence). There was no significant difference between RIC and non-RIC for improving the psychological impairment (SMD -0.37 points, 95% CI -1.15 to 0.41; 1 trial, 26 participants, very low quality evidence) and the cognitive impairment (SMD -0.26 points, 95% CI -0.72 to 0.21; 3 trials, 79 participants, low-quality evidence) in people with acute ischaemic stroke and cerebral small vessel disease. No trial reported ischaemic stroke, recurrent ischaemic stroke, improvement in neurological impairment, haemorrhagic stroke, cardiovascular events, or RIC-associated adverse events.\nAuthors' conclusions\nWe found low-quality evidence that RIC may reduce the risk of recurrent stroke in participants with intracerebral artery stenosis and reduce stroke severity in participants undergoing carotid stenting, but it may increase death or dependence in participants with acute ischaemic stroke who are undergoing intravenous thrombolysis. However, there is considerable uncertainty about these conclusions because of the small number of studies and low quality of the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Stroke is the leading cause of adult disability globally, and ischaemic stroke (caused by reduced blood flow) accounts for most strokes. Nearly a quarter of individuals with ischaemic stroke will experience further events (recurrent stroke). Remote ischaemic conditioning (RIC) is a strategy to protect and prevent damage to brain tissue by improving its ability to tolerate reduced blood flow. Studies have suggested that RIC may have beneficial effects for preventing and treating ischaemic stroke." } ]
query-laysum
23437
[ { "role": "user", "content": "Abstract: Background\nBipolar disorder is a chronic mental disorder with repetitive mania/hypomania as well as depressive episodes, which eventually results in marked impairment in overall functioning and health-related quality of life.  A worldwide prevalence rate of 2.4% has been reported. The risk of suicide is higher in people with bipolar disorder than those with other mental disorders. Therefore, effective management of bipolar disorder in the maintenance period is warranted to minimize the risk of relapse or recurrence. Although lithium has been the standard treatment of bipolar disorder for many years, it is associated with adverse effects and teratogenicity. Lamotrigine is approved to be expected for prevention of recurrence for the maintenance treatment of bipolar disorder. In addition, lamotrigine is as effective as lithium. Therefore, we performed a systematic review to confirm the efficacy and safety of lamotrigine in the maintenance treatment of bipolar disorder.\nObjectives\nTo assess the efficacy and tolerability of lamotrigine in the maintenance treatment of bipolar disorder.\nSearch methods\nWe searched Ovid MEDLINE, Embase, PsycINFO, the Cochrane Common Mental Disorders Group's Specialized Register (CCMDCTR) and the Cochrane Central Register of Controlled Trials (CENTRAL) from inception to 21 May 2021. We also searched international trial registries and contacted experts in the field.\nSelection criteria\nWe included randomized controlled trials enrolling adults with bipolar disorder who were treated with lamotrigine, placebo or lithium.\nData collection and analysis\nTwo reviews authors independently checked the eligibility of studies and extracted data using a standardized form. Data extracted included study characteristics, participant characteristics, intervention details, settings, and outcome measures in the term of efficacy and tolerability. Study information were then entered into RevMan web.\nMain results\nWe included 11 studies with a total of 2314 participants in this review; 1146 were randomized to lamotrigine, 869 were randomized to placebo and, 299 to lithium.\nWe rated all studies as having an unclear risk of bias in at least one domain of Cochrane's tool for assessing risk of bias, with the most commonly observed weakness being selection bias (random sequence generation and allocation concealment). We judged five studies to be at a high risk of detection bias (blinding of outcome assessment). These potential biases pose as major threat to the validity of the included studies in this review.\nOutcomes of efficacy showed a possible advantage of lamotrigine over placebo. The estimated risk ratio (RR) for recurrence of manic symptom at one year as measured by the Young Mania Rating Scale (YMRS) was 0.67, (95% confidence interval (CI) 0.51 to 0.87; 3 studies, 663 participants; low-certainty evidence) in favor of lamotrigine. The RR of clinical worsening with the need for additional psychotropic treatment (RR 0.82, 95% CI 0.70 to 0.98; 4 studies, 756 participants) based on moderate-certainty evidence. The possible benefits of lamotrigine were also seen for the outcome of treatment withdrawal due to any reason at 6-12 months after treatment (RR 0.88, 95% CI 0.78 to 0.99; 4 studies, 700 participants; moderate-certainty evidence). 
Regarding tolerability, our analyses showed that the incidence rates of adverse effects were similar between the lamotrigine group and the placebo group (short-term effect: RR 1.07, 95% CI 0.81 to 1.42; 5 studies, 1138 participants; very low-certainty evidence; long-term effect: RR 0.97, 95% CI 0.77 to 1.23; 4 studies, 756 participants; moderate-certainty evidence).\nIn the comparison between lamotrigine and lithium, efficacy was similar between groups except for recurrence of manic episodes at one year. Recurrence of manic symptoms was higher in the lamotrigine group than in the lithium group (RR 2.13, 95% CI 1.32 to 3.44; 3 studies, 602 participants; moderate-certainty evidence). Analysis of adverse effects at 6-12 months showed that a lower proportion of participants experienced at least one adverse effect when treated with lamotrigine compared to lithium (RR 0.70, 95% CI 0.51 to 0.96; 4 studies, 691 participants; moderate-certainty evidence).\nAuthors' conclusions\nLow- to moderate-certainty evidence collectively suggests that lamotrigine may be superior to placebo as a treatment modality for bipolar disorder. In comparison to lithium, people with bipolar disorder seem to tolerate lamotrigine better in the long run; however, the demonstrated efficacy in the maintenance of bipolar disorder was similar between the two groups.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "To investigate the clinical effectiveness (benefits and harms) of lamotrigine for maintenance therapy of bipolar disorder in comparison with placebo, combination therapy or existing drugs (e.g. lithium, olanzapine)." } ]
query-laysum
23438
[ { "role": "user", "content": "Abstract: Background\nSince the 1950s antipsychotic medication has been extensively used to treat people with chronic mental illnesses such as schizophrenia. These drugs, however, have also been associated with a wide range of adverse effects, including movement disorders such as tardive dyskinesia (TD) – a problem often seen as repetitive involuntary movements around the mouth and face. Various strategies have been examined to reduce a person's cumulative exposure to antipsychotics. These strategies include dose reduction, intermittent dosing strategies such as drug holidays, and antipsychotic cessation.\nObjectives\nTo determine whether a reduction or cessation of antipsychotic drugs is associated with a reduction in TD for people with schizophrenia (or other chronic mental illnesses) who have existing TD. Our secondary objective was to determine whether the use of specific antipsychotics for similar groups of people could be a treatment for TD that was already established.\nSearch methods\nWe updated previous searches of Cochrane Schizophrenia's study-based Register of Trials including the registers of clinical trials (16 July 2015 and 26 April 2017). We searched references of all identified studies for further trial citations. We also contacted authors of trials for additional information.\nSelection criteria\nWe included reports if they assessed people with schizophrenia or other chronic mental illnesses who had established antipsychotic-induced TD, and had been randomly allocated to (a) antipsychotic maintenance versus antipsychotic cessation (placebo or no intervention), (b) antipsychotic maintenance versus antipsychotic reduction (including intermittent strategies), (c) specific antipsychotics for the treatment of TD versus placebo or no intervention, and (d) specific antipsychotics versus other antipsychotics or versus any other drugs for the treatment of TD.\nData collection and analysis\nWe independently extracted data from these trials and estimated risk ratios (RR) or mean differences (MD), with 95% confidence intervals (CI). We assumed that people who dropped out had no improvement.\nMain results\nWe included 13 RCTs with 711 participants; eight of these studies were newly included in this 2017 update. One trial is ongoing.\nThere was low-quality evidence of a clear difference on no clinically important improvement in TD favouring switch to risperidone compared with antipsychotic cessation (with placebo) (1 RCT, 42 people, RR 0.45 CI 0.23 to 0.89, low-quality evidence). Because evidence was of very low quality for antipsychotic dose reduction versus antipsychotic maintenance (2 RCTs, 17 people, RR 0.42 95% CI 0.17 to 1.04, very low-quality evidence), and for switch to a new antipsychotic versus switch to another new antipsychotic (5 comparisons, 5 RCTs, 140 people, no meta-analysis, effects for all comparisons equivocal), we are uncertain about these effects. There was low-quality evidence of a significant difference on extrapyramidal symptoms: use of antiparkinsonism medication favouring switch to quetiapine compared with switch to haloperidol (1 RCT, 45 people, RR 0.45 CI 0.21 to 0.96, low-quality evidence). 
There was no evidence of a difference for switch to risperidone or haloperidol compared with antipsychotic cessation (with placebo) (1 RCT, 48 people, RR 2.08, 95% CI 0.74 to 5.86, low-quality evidence) and switch to risperidone compared with switch to haloperidol (1 RCT, 37 people, RR 0.68, 95% CI 0.34 to 1.35, very low-quality evidence).\nTrials also reported on secondary outcomes such as other TD symptom outcomes, other adverse event outcomes, mental state, and leaving the study early, but the quality of the evidence for all these outcomes was very low due mainly to small sample sizes, very wide 95% CIs, and risk of bias. No trials reported on social confidence, social inclusion, social networks, or personalised quality of life, outcomes that we designated as being important to patients.\nAuthors' conclusions\nLimited data from small studies using antipsychotic reduction or specific antipsychotic drugs as treatments for TD did not provide any convincing evidence of the value of these approaches. There is a need for larger trials of a longer duration to fully investigate this area.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The review includes 13 trials with a total of 711 people with schizophrenia and other psychiatric diagnoses." } ]
query-laysum
23439
[ { "role": "user", "content": "Abstract: Background\nFibromyalgia is a clinically defined chronic condition of unknown etiology characterized by chronic widespread pain that often co-exists with sleep disturbances, cognitive dysfunction and fatigue. People with fibromyalgia often report high disability levels and poor quality of life. Drug therapy, for example, with serotonin and noradrenaline reuptake inhibitors (SNRIs), focuses on reducing key symptoms and improving quality of life. This review updates and extends the 2013 version of this systematic review.\nObjectives\nTo assess the efficacy, tolerability and safety of serotonin and noradrenaline reuptake inhibitors (SNRIs) compared with placebo or other active drug(s) in the treatment of fibromyalgia in adults.\nSearch methods\nFor this update we searched CENTRAL, MEDLINE, Embase, the US National Institutes of Health and the World Health Organization (WHO) International Clinical Trials Registry Platform for published and ongoing trials and examined the reference lists of reviewed articles, to 8 August 2017.\nSelection criteria\nWe selected randomized, controlled trials of any formulation of SNRIs against placebo or any other active treatment of fibromyalgia in adults.\nData collection and analysis\nThree review authors independently extracted data, examined study quality, and assessed risk of bias. For efficacy, we calculated the number needed to treat for an additional beneficial outcome (NNTB) for pain relief of 50% or greater and of 30% or greater, patient's global impression to be much or very much improved, dropout rates due to lack of efficacy, and the standardized mean differences (SMD) for fatigue, sleep problems, health-related quality of life, mean pain intensity, depression, anxiety, disability, sexual function, cognitive disturbances and tenderness. For tolerability we calculated number needed to treat for an additional harmful outcome (NNTH) for withdrawals due to adverse events and for nausea, insomnia and somnolence as specific adverse events. For safety we calculated NNTH for serious adverse events. We undertook meta-analysis using a random-effects model. We assessed the evidence using GRADE and created a 'Summary of findings' table.\nMain results\nWe added eight new studies with 1979 participants for a total of 18 included studies with 7903 participants. Seven studies investigated duloxetine and nine studies investigated milnacipran against placebo. One study compared desvenlafaxine with placebo and pregabalin. One study compared duloxetine with L-carnitine. The majority of studies were at unclear or high risk of bias in three to five domains.\nThe quality of evidence of all comparisons of desvenlafaxine, duloxetine and milnacipran versus placebo in studies with a parallel design was low due to concerns about publication bias and indirectness, and very low for serious adverse events due to concerns about publication bias, imprecision and indirectness. The quality of evidence of all comparisons of duloxetine and desvenlafaxine with other active drugs was very low due to concerns about publication bias, imprecision and indirectness.\nDuloxetine and milnacipran had no clinically relevant benefit over placebo for pain relief of 50% or greater: 1274 of 4104 (31%) on duloxetine and milnacipran reported pain relief of 50% or greater compared to 591 of 2814 (21%) participants on placebo (risk difference (RD) 0.09, 95% confidence interval (CI) 0.07 to 0.11; NNTB 11, 95% CI 9 to 14). 
Duloxetine and milnacipran had a clinically relevant benefit over placebo in patient's global impression to be much or very much improved: 888 of 1710 (52%) on duloxetine and milnacipran (RD 0.19, 95% CI 0.12 to 0.26; NNTB 5, 95% CI 4 to 8) reported to be much or very much improved compared to 354 of 1208 (29%) of participants on placebo. Duloxetine and milnacipran had a clinically relevant benefit compared to placebo for pain relief of 30% or greater. RD was 0.10; 95% CI 0.08 to 0.12; NNTB 10, 95% CI 8 to 12. Duloxetine and milnacipran had no clinically relevant benefit for fatigue (SMD -0.13, 95% CI -0.18 to -0.08; NNTB 18, 95% CI 12 to 29), compared to placebo. There were no differences between either duloxetine or milnacipran and placebo in reducing sleep problems (SMD -0.07; 95 % CI -0.15 to 0.01). Duloxetine and milnacipran had no clinically relevant benefit compared to placebo in improving health-related quality of life (SMD -0.20, 95% CI -0.25 to -0.15; NNTB 11, 95% CI 8 to 14).\nThere were 794 of 4166 (19%) participants on SNRIs who dropped out due to adverse events compared to 292 of 2863 (10%) of participants on placebo (RD 0.07, 95% CI 0.04 to 0.10; NNTH 14, 95% CI 10 to 25). There was no difference in serious adverse events between either duloxetine, milnacipran or desvenlafaxine and placebo (RD -0.00, 95% CI -0.01 to 0.00).\nThere was no difference between desvenlafaxine and placebo in efficacy, tolerability and safety in one small trial.\nThere was no difference between duloxetine and desvenlafaxine in efficacy, tolerability and safety in two trials with active comparators (L-carnitine, pregabalin).\nAuthors' conclusions\nThe update did not change the major findings of the previous review. Based on low- to very low-quality evidence, the SNRIs duloxetine and milnacipran provided no clinically relevant benefit over placebo in the frequency of pain relief of 50% or greater, but for patient's global impression to be much or very much improved and in the frequency of pain relief of 30% or greater there was a clinically relevant benefit. The SNRIs duloxetine and milnacipran provided no clinically relevant benefit over placebo in improving health-related quality of life and in reducing fatigue. Duloxetine and milnacipran did not significantly differ from placebo in reducing sleep problems. The dropout rates due to adverse events were higher for duloxetine and milnacipran than for placebo. On average, the potential benefits of duloxetine and milnacipran in fibromyalgia were outweighed by their potential harms. However, a minority of people with fibromyalgia might experience substantial symptom relief without clinically relevant adverse events with duloxetine or milnacipran.\nWe did not find placebo-controlled studies with other SNRIs than desvenlafaxine, duloxetine and milnacipran.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "In August 2017, we updated our searches for clinical trials in which SNRIs were used to treat symptoms of fibromyalgia in adults. We found eight new studies since the previous version of the review. In total, we found 18 studies with 7903 participants. The studies were four to 27 weeks long and compared the SNRIs desvenlafaxine, duloxetine and milnacipran against a fake medication (placebo). 
We rated the quality of the evidence from studies using four levels: very low, low, moderate, or high. Very low-quality evidence means that we are very uncertain about the results. High-quality evidence means that we are very confident in the results." } ]
query-laysum
23440
[ { "role": "user", "content": "Abstract: Background\nProphylactic removal of asymptomatic disease-free impacted wisdom teeth is the surgical removal of wisdom teeth in the absence of symptoms and with no evidence of local disease. Impacted wisdom teeth may be associated with pathological changes, such as pericoronitis, root resorption, gum and alveolar bone disease (periodontitis), caries and the development of cysts and tumours. When surgical removal is performed in older people, the risk of postoperative complications, pain and discomfort is increased. Other reasons to justify prophylactic removal of asymptomatic disease-free impacted third molars have included preventing late lower incisor crowding, preventing damage to adjacent structures such as the second molar or the inferior alveolar nerve, in preparation for orthognathic surgery, in preparation for radiotherapy or during procedures to treat people with trauma to the affected area. Removal of asymptomatic disease-free wisdom teeth is a common procedure, and researchers must determine whether evidence supports this practice. This review is an update of an review originally published in 2005 and previously updated in 2012 and 2016.\nObjectives\nTo evaluate the effects of removal compared with retention (conservative management) of asymptomatic disease-free impacted wisdom teeth in adolescents and adults.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health’s Trials Register (to 10 May 2019), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2019, Issue 4), MEDLINE Ovid (1946 to 10 May 2019), and Embase Ovid (1980 to 10 May 2019). The US National Institutes of Health Trials Registry (ClinicalTrials.gov)and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. .\nSelection criteria\nWe included randomised controlled trials (RCTs), with no restriction on length of follow-up, comparing removal (or absence) with retention (or presence) of asymptomatic disease-free impacted wisdom teeth in adolescents or adults. We also considered quasi-RCTs and prospective cohort studies for inclusion if investigators measured outcomes with follow-up of five years or longer.\nData collection and analysis\nEight review authors screened search results and assessed the eligibility of studies for inclusion according to the review inclusion criteria. Eight review authors independently and in duplicate conducted the risk of bias assessments. 
When information was unclear, we contacted the study authors for additional information.\nMain results\nThis review update includes the same two studies that were identified in our previous version of the review: one RCT with a parallel-group design, which was conducted in a dental hospital setting in the United Kingdom, and one prospective cohort study, which was conducted in the private sector in the USA.\nPrimary outcome\nNo eligible studies in this review reported the effects of removal compared with retention of asymptomatic disease-free impacted wisdom teeth on health-related quality of life\nSecondary outcomes\nWe found only low- to very low-certainty evidence of the effects of removal compared with retention of asymptomatic disease-free impacted wisdom teeth for a limited number of secondary outcome measures.\nOne prospective cohort study, reporting data from a subgroup of 416 healthy male participants, aged 24 to 84 years, compared the effects of the absence (previous removal or agenesis) against the presence of asymptomatic disease-free impacted wisdom teeth on periodontitis and caries associated with the distal aspect of the adjacent second molar during a follow-up period of three to over 25 years. Very low-certainty evidence suggests that the presence of asymptomatic disease-free impacted wisdom teeth may be associated with increased risk of periodontitis affecting the adjacent second molar in the long term. In the same study, which is at serious risk of bias, there is insufficient evidence to demonstrate a difference in caries risk associated with the presence or absence of impacted wisdom teeth.\nOne RCT with 164 randomised and 77 analysed adolescent participants compared the effect of extraction with retention of asymptomatic disease-free impacted wisdom teeth on dimensional changes in the dental arch after five years. Participants (55% female) had previously undergone orthodontic treatment and had 'crowded' wisdom teeth. No evidence from this study, which was at high risk of bias, was found to suggest that removal of asymptomatic disease-free impacted wisdom teeth has a clinically significant effect on dimensional changes in the dental arch.\nThe included studies did not measure any of our other secondary outcomes: costs, other adverse events associated with retention of asymptomatic disease-free impacted wisdom teeth (pericoronitis, root resorption, cyst formation, tumour formation, inflammation/infection) and adverse effects associated with their removal (alveolar osteitis/postoperative infection, nerve injury, damage to adjacent teeth during surgery, bleeding, osteonecrosis related to medication/radiotherapy, inflammation/infection).\nAuthors' conclusions\nInsufficient evidence is available to determine whether asymptomatic disease-free impacted wisdom teeth should be removed or retained. Although retention of asymptomatic disease-free impacted wisdom teeth may be associated with increased risk of periodontitis affecting adjacent second molars in the long term, the evidence is very low certainty. Well-designed RCTs investigating long-term and rare effects of retention and removal of asymptomatic disease-free impacted wisdom teeth, in a representative group of individuals, are unlikely to be feasible. In their continuing absence, high quality, long-term prospective cohort studies may provide valuable evidence in the future. 
Given the current lack of available evidence, patient values should be considered and clinical expertise used to guide shared decision-making with people who have asymptomatic disease-free impacted wisdom teeth. If the decision is made to retain these teeth, clinical assessment at regular intervals to prevent undesirable outcomes is advisable.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "The available evidence is insufficient to tell us whether or not asymptomatic disease-free impacted wisdom teeth should be removed.\nThe included studies did not measure health-related quality of life, costs or side effects of taking teeth out.\nOne study (the cohort study), which was at serious risk of bias, found that keeping asymptomatic disease-free impacted wisdom teeth in the mouth may increase the risk of gum infection (periodontitis) affecting the adjacent second molar in the long term, but this evidence was very uncertain. In the same study, the evidence was insufficient to draw any conclusions about the effect on the risk of caries in the adjacent second molar.\nThe other study (the RCT) was also at high risk of bias. It measured crowding of the teeth in the mouth, and found that this may not be significantly affected by whether impacted wisdom teeth are kept in the mouth or removed." } ]
query-laysum
23441
[ { "role": "user", "content": "Abstract: Background\nEssential tremor is one of the most common movement disorders. Treatment primarily consists of pharmacological agents. While primidone and propranolol are well-established treatments in clinical practice, they may be ineffective in 25% to 55% of patients and can produce serious adverse events in a large percentage of them. For these reasons, it is worth evaluating the treatment alternatives for essential tremor. Some specialists have suggested that pregabalin could be a potentially useful agent, but there is uncertainty about its efficacy and safety.\nObjectives\nTo assess the effects of pregabalin versus placebo or other treatment for essential tremor in adults.\nSearch methods\nWe performed a systematic search without language restrictions to identify all relevant trials up to December 2015. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, NICE, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We handsearched grey literature and examined the reference lists of identified studies and reviews.\nSelection criteria\nWe included all randomised controlled trials (RCTs) of pregabalin versus placebo or any other treatments. We included studies in which the diagnosis of ET was made according to accepted and validated diagnostic criteria. We excluded studies conducted in patients presenting secondary forms of tremor or reporting only neurophysiological parameters to assess outcomes.\nData collection and analysis\nTwo reviewers independently collected and extracted data using a data collection form. We assessed the risk of bias of the body of evidence, and we used inverse variance methods to analyse continuous outcomes and measurement scales. We compared the mean difference between treatment groups, and we combined results for dichotomous outcomes using Mantel-Haenszel methods and risk differences We used Review Manager software for data management and analysis.\nMain results\nWe only found one study eligible for this review (22 participants). We assessed the risk of bias for most domains as unclear. We graded the overall quality of evidence as very low. Compared to placebo, patients treated with pregabalin showed no significant improvement of motor tasks on the 36-point subscale of the Fahn-Tolosa-Marin Tremor Rating Scale (TRS) (MD −2.15 points; 95% CI −9.16 to 4.86) or on the 32-point functional abilities subscale of the TRS (MD −0.66 points; 95% CI −2.90 to 1.58).The limited evidence showed no difference in study withdrawal (Mantel-Haenszel RD −0.09; 95% CI −0.48 to 0.30) and presentation of adverse events between pregabalin and placebo (Mantel-Haenszel RD 0.18; 95% CI −0.13 to 0.50).\nAuthors' conclusions\nThe effects of pregabalin for treating essential tremor are uncertain because the quality of the evidence is very low. One small study did not highlight any effect of this treatment; however, the high risk of bias and the lack of other studies on this topic limit further conclusion.\n\nGiven the provided abstract, please respond to the following query: \"Backgound\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Essential tremor is the most common movement disorder. Although benign in terms of its effect on life expectancy, it is typically progressive and potentially disabling. 
Treatment consists primarily of pharmacological agents (propranolol and primidone as first-line therapy), which could be ineffective for 25% to 55% of patients. Some specialists have suggested that pregabalin could be a potentially useful drug for treating the condition." } ]
query-laysum
23442
[ { "role": "user", "content": "Abstract: Background\nThrombolytic therapy is usually reserved for people with clinically serious or massive pulmonary embolism (PE). Evidence suggests that thrombolytic agents may dissolve blood clots more rapidly than heparin and may reduce the death rate associated with PE. However, there are still concerns about the possible risk of adverse effects of thrombolytic therapy, such as major or minor haemorrhage. This is the fourth update of the Cochrane review first published in 2006.\nObjectives\nTo assess the effects of thrombolytic therapy for acute pulmonary embolism.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 17 August 2020. We undertook reference checking to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared thrombolytic therapy followed by heparin versus heparin alone, heparin plus placebo, or surgical intervention for people with acute PE (massive/submassive). We did not include trials comparing two different thrombolytic agents or different doses of the same thrombolytic drug.\nData collection and analysis\nTwo review authors (ZZ, QH) assessed the eligibility and risk of bias of trials and extracted data. We calculated effect estimates using the odds ratio (OR) with a 95% confidence interval (CI) or the mean difference (MD) with a 95% CI. The primary outcomes of interest were death, recurrence of PE and haemorrhagic events. We assessed the certainty of the evidence using GRADE criteria.\nMain results\nWe identified three new studies for inclusion in this update. We included 21 trials in the review, with a total of 2401 participants. No studies compared thrombolytics versus surgical intervention. We were not able to include one study in the meta-analysis because it provided no extractable data. Most studies carried a high or unclear risk of bias related to randomisation and blinding.\nMeta-analysis showed that, compared to control (heparin alone or heparin plus placebo), thrombolytics plus heparin probably reduce both the odds of death (OR 0.58, 95% CI 0.38 to 0.88; 19 studies, 2319 participants; low-certainty evidence), and recurrence of PE (OR 0.54, 95% CI 0.32 to 0.91; 12 studies, 2050 participants; low-certainty evidence). Effects on mortality weakened when six studies at high risk of bias were excluded from analysis (OR 0.71, 95% CI 0.45 to 1.13; 13 studies, 2046 participants) and in the analysis of submassive PE participants (OR 0.61, 95% CI 0.37 to 1.02; 1993 participants). Effects on recurrence of PE also weakened after removing one study at high risk of bias for sensitivity analysis (OR 0.60, 95% CI 0.35 to 1.04; 11 studies, 1949 participants). We downgraded the certainty of evidence to low because of 'Risk of bias' concerns.\nMajor haemorrhagic events were probably more common in the thrombolytics group than in the control group (OR 2.84, 95% CI 1.92 to 4.20; 15 studies, 2101 participants; moderate-certainty evidence), as were minor haemorrhagic events (OR 2.97, 95% CI 1.66 to 5.30; 13 studies,1757 participants; low-certainty evidence). We downgraded the certainty of the evidence to moderate or low because of 'Risk of bias' concerns and inconsistency. 
Haemorrhagic stroke may occur more often in the thrombolytics group than in the control group (OR 7.59, 95% CI 1.38 to 41.72; 2 studies, 1091 participants).\nLimited data indicated that thrombolytics may benefit haemodynamic outcomes, perfusion lung scanning, pulmonary angiogram assessment, echocardiograms, pulmonary hypertension, coagulation parameters, composite clinical outcomes, need for escalation and survival time to a greater extent than heparin alone. However, the heterogeneity of the studies and the small number of participants involved warrant caution when interpreting results.\nThe length of hospital stay was shorter in the thrombolytics group than in the control group (mean difference (MD) −1.40 days, 95% CI −2.69 to −0.11; 5 studies, 368 participants). Haemodynamic decompensation may occur less in the thrombolytics group than in the control group (OR 0.36, 95% CI 0.20 to 0.66; 3 studies, 1157 participants). Quality of life was similar between the two treatment groups.\nNone of the included studies provided data on post-thrombotic syndrome or on cost comparison.\nAuthors' conclusions\nLow-certainty evidence suggests that thrombolytics may reduce death following acute pulmonary embolism compared with heparin (the effectiveness was mainly driven by one trial with massive PE). Thrombolytic therapy may be helpful in reducing the recurrence of pulmonary emboli but may cause more major and minor haemorrhagic events, including haemorrhagic stroke. More studies of high methodological quality are needed to assess safety and cost effectiveness of thrombolytic therapy for people with pulmonary embolism.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "A pulmonary embolus is a potentially fatal blood clot that lodges in the main artery of the lungs, straining the right side of the heart and affecting blood circulation. People with this condition are at risk of new emboli forming (recurrence). In the case of a massive pulmonary embolism, treatment to restore blood flow is urgently required. Heparin thins the blood, but newer drugs that actively break up the clots (thrombolytics) may act more quickly and may be more effective. These newer drugs include streptokinase, urokinase, and recombinant tissue-type plasminogen activator. The major complication of this treatment is bleeding." } ]
query-laysum
23443
[ { "role": "user", "content": "Abstract: Background\nCurrent recommendations for people with irritable bowel syndrome (IBS) to partake in physical activity are based on low-level evidence, do not incorporate evidence from all available randomised controlled trials (RCTs) and provide little information regarding potential adverse effects.\nObjectives\nTo assess the benefits and harms of physical activity interventions in adults diagnosed with irritable bowel syndrome and to explore possible effect moderators including type, setting and nature of physical activity interventions.\nSearch methods\nWe searched nine electronic databases including CENTRAL, MEDLINE and Embase to 5 November 2021. We handsearched reference lists and sought unpublished studies through trial registries.\nSelection criteria\nWe included RCTs involving adults (aged 18 years or older) diagnosed with IBS and conducted in any setting comparing a physical activity intervention with no intervention, usual care or wait-list control group or another physical activity intervention group and assessing a validated measure of symptoms, quality of life or bowel movement.\nData collection and analysis\nAt least two review authors independently selected studies for inclusion, extracted study data, and performed risk of bias and GRADE assessments to assess the certainty of evidence. We pooled studies that evaluated similar outcomes using a random-effects meta-analysis, and synthesised data from other studies narratively.\nMain results\nWe included 11 RCTs with data for 622 participants. Most (10/11) were set in high- or middle- to high-income countries, with five involving supervised physical activity, three unsupervised activity and three a mix of supervised and unsupervised activity. No trial was at low risk of bias. Four trials specified a minimally important difference for at least one assessed outcome measure. Data for 10 trials were obtained from published journal articles, with data for one obtained from an unpublished Masters degree thesis.\nIrritable bowel syndrome symptoms\nSix RCTs assessed the effectiveness of a physical activity intervention compared with usual care on global symptoms of IBS. Meta-analysis of five studies showed an observed improvement in reported symptoms following physical activity (standardised mean difference (SMD) –0.93, 95% confidence interval (CI) –1.44 to –0.42; 185 participants). We rated the certainty of evidence for this outcome as very low due to unclear and high risk of bias, inconsistency and imprecision from sparse data. This means physical activity may improve IBS symptoms but the evidence is very uncertain. The results of the remaining study supported the meta-analysis but were at unclear risk of bias and sample size was small.\nTwo studies assessed the effectiveness of a yoga intervention compared with a walking intervention on global IBS symptoms. Meta-analysis of these two studies found no conclusive evidence of an effect of yoga compared with walking on IBS symptoms (SMD –1.16, 95% CI –3.93 to 1.62; 124 participants). We rated the certainty of evidence as very low, meaning the evidence is very uncertain about the effect of yoga interventions compared with walking interventions on IBS symptoms.\nTwo studies assessed the effectiveness of a physical activity intervention (yoga) compared with medication. 
One reported no observed difference in global IBS symptoms, though CIs were wide, suggesting uncertainty in the observed estimates and risk of bias was high (MD –1.20, 95% CI –2.65 to 0.25; 21 participants). We excluded IBS symptom data for the remaining study as it used a non-validated method.\nOne study compared a yoga intervention with a dietary intervention and reported an observed improvement in symptoms with both interventions but neither intervention was superior to the other.\nQuality of life\nFive RCTs assessed the impact of physical activity on self-reported quality of life compared with usual care. Meta-analysis of data from four studies found no improvement in quality of life following a physical activity intervention (SMD 1.17, 95% CI –0.30 to 2.64; 134 participants; very low certainty due to risk of bias, inconsistency and imprecision). We rated the certainty of evidence as very low, meaning the evidence is very uncertain about the effect of physical activity interventions on quality-of-life outcomes in people with IBS.\nOne study assessed the impact on quality of life of a yoga intervention compared with walking and observed an improvement in the yoga group (MD 53.45, 95% CI 38.85 to 68.05; 97 participants).\nOne study reported no observed difference in quality of life between a yoga and a dietary intervention.\nAbdominal pain\nTwo trials assessed the impact of physical activity compared with usual care on reported abdominal pain. Meta-analysis found no improvement in abdominal pain with physical activity compared with usual care (SMD 0.01, 95% CI –0.48 to 0.50; 64 participants). We rated the certainty of the evidence as very low due to risk of bias and imprecision, meaning the evidence is very uncertain about the effect of physical activity interventions on abdominal pain in people with IBS.\nOne study assessing the impact of a yoga intervention compared with walking advice reported no observed differences between groups on abdominal pain.\nOne study comparing a yoga intervention with a dietary intervention found neither intervention had a more beneficial impact than the other, and neither intervention conclusively reduced abdominal pain.\nThere was insufficient evidence to adequately assess adverse effects associated with physical activity due to a lack of reporting in trials. One study reported a musculoskeletal injury in a yoga intervention group but this did not result in withdrawal from the study.\nAuthors' conclusions\nFindings from a small body of evidence suggest that physical activity comprising yoga, treadmill exercise or support to increase physical activity may improve symptoms but not quality of life or abdominal pain in people diagnosed with IBS but we have little confidence in these conclusions due to the very low certainty of evidence.\nThe numbers of reported adverse events were low and the certainty of these findings was very low for all comparisons, so no conclusions can be drawn.\nDiscussions with patients considering physical activity as part of symptom management should address the uncertainty in the evidence to ensure fully informed decisions. If deemed sufficiently important to patients and healthcare providers, higher quality research is needed to enable more certain conclusions.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\".
Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched nine electronic databases and trial registries for all randomised controlled trials involving adults (18 years or older) diagnosed with irritable bowel syndrome that compared a physical activity intervention with no physical activity intervention. We compared and summarised the results of these trials and rated our confidence in the overall evidence, based on factors such as study methods and the amount of information they provided." } ]
query-laysum
23444
[ { "role": "user", "content": "Abstract: Background\nVaricose veins are enlarged and tortuous veins, affecting up to one-third of the world's population. They can be a cause of chronic venous insufficiency, which is characterised by oedema, pigmentation, eczema, lipodermatosclerosis, atrophie blanche, and healed or active venous ulcers. Injection sclerotherapy (liquid or foam) is widely used for treatment of varicose veins aiming to transform the varicose veins into a fibrous cord. However, there is limited evidence regarding its effectiveness and safety, especially in patients with more severe disease. This is the second update of the review first published in 2002.\nObjectives\nTo assess the effectiveness and safety of injection sclerotherapy for the treatment of varicose veins.\nSearch methods\nFor this update, the Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, AMED, CINAHL, and LILACS databases, and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registries, on 20 July 2021.\nSelection criteria\nWe included all randomised controlled trials (RCTs) (including cluster-randomised trials and first phase cross-over studies) that used injection sclerotherapy for the treatment of varicose veins.\nData collection and analysis\nTwo review authors independently assessed, selected and extracted data. Disagreements were cross-checked by a third review author. We used Cochrane's Risk of bias tool to assess the risk of bias. The outcomes of interest were cosmetic appearance, complications, residual varicose veins, quality of life (QoL), persistence of symptoms, and recurrent varicose veins. We calculated risk ratios (RRs) or mean difference (MD) with 95% confidence intervals (CIs). We used the worst-case-scenario for dichotomous data imputation for intention-to-treat analyses. For continuous outcomes, we used the 'last-observation-carried-forward' for data imputation if there was balanced loss to follow-up. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 23 new RCTs for this update, bringing the total to 28 studies involving 4278 participants. The studies differed in their design, and in which sclerotherapy method, agent or concentration was used. None of the included RCTs compared sclerotherapy to no intervention or to any pharmacological therapy. The certainty of the evidence was downgraded for risk of bias, low number of studies providing information for each outcome, low number of participants, clinical differences between the study participants, and wide CIs.\nSclerotherapy versus placebo\nFoam sclerotherapy may improve cosmetic appearance as measured by IPR-V (independent photography review - visible varicose veins scores) compared to placebo (polidocanol 1%: mean difference (MD) -0.76, 95% CI -0.91 to -0.60; 2 studies, 223 participants; very low-certainty evidence); however, deep vein thrombosis (DVT) rates may be slightly increased in this intervention group (RR 5.10, 95% CI 1.30 to 20.01; 3 studies, 302 participants; very low-certainty evidence). Residual varicose vein rates may be decreased following polidocanol 1% compared to placebo (RR 0.19, 95% CI 0.13 to 0.29; 2 studies, 225 participants; very low-certainty evidence). 
Following polidocanol 1% use, there may be a possible improvement in QoL as assessed using the VEINES-QOL/Sym questionnaire (MD 12.41, 95% CI 9.56 to 15.26; 3 studies, 299 participants; very low-certainty evidence), and possible improvement in varicose vein symptoms as assessed using the Venous Clinical Severity Score (VCSS) (MD -3.25, 95% CI -3.90 to -2.60; 2 studies, 223 participants; low-certainty evidence). Recurrent varicose veins were not reported for this comparison.\nFoam sclerotherapy versus foam sclerotherapy with different concentrations\nThree individual RCTs reported no evidence of a difference in cosmetic appearance after comparing different concentrations of the intervention; data could not be pooled for two of the three studies (RR 1.11, 95% CI 0.84 to 1.47; 1 study, 80 participants; very low-certainty evidence). Similarly, there was no clear difference in rates of thromboembolic complications when comparing one foam concentration with another (RR 1.47, 95% CI 0.41 to 5.33; 3 studies, 371 participants; very low-certainty evidence). Three RCTs investigating higher concentrations of polidocanol foam indicated the rate of residual varicose veins may be slightly decreased in the polidocanol 3% foam group compared to 1% (RR 0.67, 95% CI 0.43 to 1.04; 3 studies, 371 participants; moderate-certainty evidence). No clear improvement in QoL was detected. Two RCTs reported improved VCSS scores with increasing concentrations of foam. Persistence of symptoms were not reported for this comparison. There was no clear difference in recurrent varicose vein rates (RR 0.91, 95% CI 0.62 to 1.32; 1 study, 148 participants; low-certainty evidence).\nFoam sclerotherapy versus liquid sclerotherapy\nOne RCT reported on cosmetic appearance with no evidence of a difference between foam or liquid sclerotherapy (patient satisfaction scale MD 0.2, 95% CI -0.27 to 0.67; 1 study, 126 participants; very low-certainty evidence). None of the RCTs investigated thromboembolic complications, QoL or persistence of symptoms. Six studies individually showed there may be a benefit to polidocanol 3% foam over liquid sclerotherapy in reducing residual varicose vein rate; pooling data from two studies showed a RR of 0.51, with 95% CI 0.41 to 0.65; 203 participants; very low-certainty evidence. One study reported no clear difference in recurrent varicose vein rates when comparing sodium tetradecyl sulphate (STS) foam or liquid (RR 1.10, 95% CI 0.86 to 1.42; 1 study, 286 participants; very low-certainty evidence).\nSclerotherapy versus sclerotherapy with different substances\nFour RCTs compared sclerotherapy versus sclerotherapy with any other substance. We were unable to combine the data due to heterogeneity or assess the certainty of the evidence due to insufficient data.\nAuthors' conclusions\nThere is a very low to low-certainty evidence that, compared to placebo, sclerotherapy is an effective and safe treatment for varicose veins concerning cosmetic appearance, residual varicose veins, QoL, and persistence of symptoms. Rates of DVT may be slightly increased and there were no data concerning recurrent varicose veins. There was limited or no evidence for one concentration of foam compared to another; foam compared to liquid sclerotherapy; foam compared to any other substance; or one technique compared to another. 
There is a need for high-quality trials using standardised sclerosant doses, with clearly defined core outcome sets, and measurement time points to increase the certainty of the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics and key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "We searched for relevant studies and identified 28 randomised controlled trials, involving over 4278 participants (search current to 20 July 2021). We looked at cosmetic appearance, complications, residual varicose vein rates, quality of life, and symptomatic improvement related to sclerotherapy. There were no trials assessing treatment versus no interventions, sclerotherapy at different time intervals, or sclerotherapy compared with pharmacological therapy.\nOur results showed that, compared to placebo, sclerotherapy may improve cosmetic appearance and quality of life, and may reduce residual varicose veins and persistence of symptoms but the evidence is uncertain. Rates of deep vein thrombosis may be slightly increased but the evidence is uncertain. There were no data for recurrent varicose veins. There was not enough information to conclude if one concentration of foam was better than another, or if foam was better than liquid or any other substance, or if there was any benefit of one injection technique compared to another." } ]
query-laysum
23445
[ { "role": "user", "content": "Abstract: Background\nUsing hypofractionation (fewer, larger doses of daily radiation) to treat localized prostate cancer may improve convenience and resource use. For hypofractionation to be feasible, it must be at least as effective for cancer-related outcomes and have comparable toxicity and quality of life outcomes as conventionally fractionated radiation therapy.\nObjectives\nTo assess the effects of hypofractionated external beam radiation therapy compared to conventionally fractionated external beam radiation therapy for men with clinically localized prostate cancer.\nSearch methods\nWe searched CENTRAL, MEDLINE (Ovid), Embase (Ovid) and trials registries from 1946 to 15 March 2019 with reference checking, citation searching and contact with study authors. Searches were not limited by language or publication status. We reran all searches within three months (15th March 2019) prior to publication.\nSelection criteria\nRandomized controlled comparisons which included men with clinically localized prostate adenocarcinoma where hypofractionated radiation therapy (external beam radiation therapy) to the prostate using hypofractionation (greater than 2 Gy per fraction) compared with conventionally fractionated radiation therapy to the prostate delivered using standard fractionation (1.8 Gy to 2 Gy per fraction).\nData collection and analysis\nWe used standard Cochrane methodology. Two authors independently assessed trial quality and extracted data. We used Review Manager 5 for data analysis and meta-analysis. We used the inverse variance method and random-effects model for data synthesis of time-to-event data with hazard ratios (HR) and 95% confidence intervals (CI) reported. For dichotomous data, we used the Mantel-Haenzel method and random-effects model to present risk ratios (RR) and 95% CI. We used GRADE to assess evidence quality for each outcome.\nMain results\nWe included 10 studies with 8278 men in our analysis comparing hypofractionation with conventional fractionation to treat prostate cancer.\nPrimary outcomes\nHypofractionation may result in little or no difference in prostate cancer-specific survival [PC-SS] (HR 1.00, 95% CI 0.72 to 1.39; studies = 8, participants = 7946; median follow-up 72 months; low-certainty evidence). For men in the intermediate-risk group undergoing conventional fractionation this corresponds to 976 per 1000 men alive after 6 years and 0 more (44 fewer to 18 more) alive per 1000 men undergoing hypofractionation.\nWe are uncertain about the effect of hypofractionation on late radiation therapy gastrointestinal (GI) toxicity (RR 1.10, 95% CI 0.68 to 1.78; studies = 4, participants = 3843; very low-certainty evidence).\nHypofractionation probably results in little or no difference to late radiation therapy genitourinary (GU) toxicity (RR 1.05, 95% CI 0.93 to 1.18; studies = 4, participants = 3843; moderate-certainty evidence). This corresponds to 262 per 1000 late GU radiation therapy toxicity events with conventional fractionation and 13 more (18 fewer to 47 more) per 1000 men when undergoing hypofractionation.\nSecondary outcomes\nHypofractionation results in little or no difference in overall survival (HR 0.94, 95% CI 0.83 to 1.07; 10 studies, 8243 participants; high-certainty evidence). 
For men in the intermediate-risk group undergoing conventional fractionation this corresponds to 869 per 1000 men alive after 6 years and 17 fewer (54 fewer to 17 more) participants alive per 1000 men when undergoing hypofractionation.\nHypofractionation may result in little to no difference in metastasis-free survival (HR 1.07, 95% CI 0.65 to 1.76; 5 studies, 4985 participants; low-certainty evidence). This corresponds to 981 men per 1000 men metastasis-free at 6 years when undergoing conventional fractionation and 5 more (58 fewer to 19 more) metastasis-free per 1000 when undergoing hypofractionation.\nHypofractionation likely results in a small, possibly unimportant reduction in biochemical recurrence-free survival based on Phoenix criteria (HR 0.88, 95% CI 0.68 to 1.13; studies = 5, participants = 2889; median follow-up 90 months to 108 months; moderate-certainty evidence). In men of the intermediate-risk group, this corresponds to 804 biochemical recurrence-free men per 1000 participants at six years with conventional fractionation and 42 fewer (134 fewer to 37 more) recurrence-free men per 1000 participants with hypofractionation.\nHypofractionation likely results in little to no difference to acute GU radiation therapy toxicity (RR 1.03, 95% CI 0.95 to 1.11; 4 studies, 4174 participants at 12 to 18 weeks' follow-up; moderate-certainty evidence). This corresponds to 360 episodes of toxicity per 1000 participants with conventional fractionation and 11 more (18 fewer to 40 more) per 1000 when undergoing hypofractionation.\nAuthors' conclusions\nThese findings suggest that moderate hypofractionation (up to a fraction size of 3.4 Gy) results in similar oncologic outcomes in terms of disease-specific, metastasis-free and overall survival. There appears to be little to no increase in either acute or late toxicity.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This evidence is current to 15 March 2019. The men studied were aged 64 years and over and had prostate cancer limited to the pelvis." } ]
query-laysum
23446
[ { "role": "user", "content": "Abstract: Background\nAdjuvant tamoxifen reduces the risk of breast cancer recurrence in women with oestrogen receptor-positive breast cancer. Tamoxifen also increases the risk of postmenopausal bleeding, endometrial polyps, hyperplasia, and endometrial cancer. The levonorgestrel-releasing intrauterine system (LNG-IUS) causes profound endometrial suppression. This systematic review considered the evidence that the LNG-IUS prevents the development of endometrial pathology in women taking tamoxifen as adjuvant endocrine therapy for breast cancer.\nObjectives\nTo determine the effectiveness and safety of the levonorgestrel intrauterine system (LNG-IUS) in pre- and postmenopausal women taking adjuvant tamoxifen following breast cancer for the outcomes of endometrial and uterine pathology including abnormal vaginal bleeding or spotting, and secondary breast cancer events.\nSearch methods\nWe searched the following databases on 29 June 2020; The Cochrane Gynaecology and Fertility Group specialised register, Cochrane Central Register of Controlled Trials, MEDLINE, Embase, PsycINFO and Cumulative Index to Nursing and Allied Health Literature. We searched the Cochrane Breast Cancer Group specialised register on 4 March 2020. We also searched two trials registers, checked references for relevant trials and contacted study authors and experts in the field to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of women with breast cancer on adjuvant tamoxifen that compared the effectiveness of the LNG-IUS with endometrial surveillance versus endometrial surveillance alone on the incidence of endometrial pathology.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. The primary outcome measure was endometrial pathology (including polyps, endometrial hyperplasia, or endometrial cancer), diagnosed at hysteroscopy or endometrial biopsy. Secondary outcome measures included fibroids, abnormal vaginal bleeding or spotting, breast cancer recurrence, and breast cancer-related deaths. We rated the overall certainty of evidence using GRADE methods.\nMain results\nWe included four RCTs (543 women analysed) in this review. We judged the certainty of the evidence to be moderate for all of the outcomes, due to imprecision (i.e. limited sample sizes and low event rates). In the included studies, the active treatment arm was the 20 μg/day LNG-IUS plus endometrial surveillance; the control arm was endometrial surveillance alone.\nIn tamoxifen users, the LNG-IUS probably reduces the incidence of endometrial polyps compared to the control group over both a 12-month period (Peto odds ratio (OR) 0.22, 95% confidence interval (CI) 0.08 to 0.64, I² = 0%; 2 RCTs, n = 212; moderate-certainty evidence) and over a long-term follow-up period (24 to 60 months) (Peto OR 0.22, 95% CI 0.13 to 0.39; I² = 0%; 4 RCTs, n = 417; moderate-certainty evidence). For long-term follow-up, this suggests that if the incidence of endometrial polyps following endometrial surveillance alone is assumed to be 23.5%, the incidence following LNG-IUS with endometrial surveillance would be between 3.8% and 10.7%.\nThe LNG-IUS probably slightly reduces the incidence of endometrial hyperplasia compared with controls over a long-term follow-up period (24 to 60 months) (Peto OR 0.13, 95% CI 0.03 to 0.67; I² = 0%; 4 RCTs, n = 417; moderate-certainty evidence). 
This suggests that if the chance of endometrial hyperplasia following endometrial surveillance alone is assumed to be 2.8%, the chance following LNG-IUS with endometrial surveillance would be between 0.1% and 1.9%. However, it should be noted that there were only six cases of endometrial hyperplasia.\nThere was insufficient evidence to reach a conclusion regarding the incidence of endometrial cancer in tamoxifen users, as no studies reported cases of endometrial cancer.\nAt 12 months of follow-up, the LNG-IUS probably increases abnormal vaginal bleeding or spotting compared to the control group (Peto OR 7.26, 95% CI 3.37 to 15.66; I² = 0%; 3 RCTs, n = 376; moderate-certainty evidence). This suggests that if the chance of abnormal vaginal bleeding or spotting following endometrial surveillance alone is assumed to be 1.7%, the chance following LNG-IUS with endometrial surveillance would be between 5.6% and 21.5%. By 24 months of follow-up, abnormal vaginal bleeding or spotting occurs less frequently than at 12 months of follow-up, but is still more common in the LNG-IUS group than the control group (Peto OR 2.72, 95% CI 1.04 to 7.10; I² = 0%; 2 RCTs, n = 233; moderate-certainty evidence). This suggests that if the chance of abnormal vaginal bleeding or spotting following endometrial surveillance alone is assumed to be 4.2%, the chance following LNG-IUS with endometrial surveillance would be between 4.4% and 23.9%. By 60 months of follow-up, there were no cases of abnormal vaginal bleeding or spotting in either group.\nThe numbers of events for the following outcomes were low: fibroids (n = 13), breast cancer recurrence (n = 18), and breast cancer-related deaths (n = 16). As a result, there is probably little or no difference in these outcomes between the LNG-IUS treatment group and the control group.\nAuthors' conclusions\nThe LNG-IUS probably slightly reduces the incidence of benign endometrial polyps and endometrial hyperplasia in women with breast cancer taking tamoxifen. At 12 and 24 months of follow-up, the LNG-IUS probably increases abnormal vaginal bleeding or spotting among women in the treatment group compared to those in the control. Data were lacking on whether the LNG-IUS prevents endometrial cancer in these women. There is no clear evidence from the available RCTs that the LNG-IUS affects the risk of breast cancer recurrence or breast cancer-related deaths. Larger studies are necessary to assess the effects of the LNG-IUS on the incidence of endometrial cancer, and to determine whether the LNG-IUS might have an impact on the risk of secondary breast cancer events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "This review suggests that the LNG-IUS probably slightly reduces the risk of endometrial polyps and endometrial hyperplasia over two to five years in women taking tamoxifen following breast cancer. The evidence suggests that if the incidence of endometrial polyps following endometrial surveillance alone is assumed to be 23.5%, the incidence following LNG-IUS plus endometrial surveillance would be between 3.8% and 10.7%. Evidence also suggests that if 2.8% of women who only had endometrial surveillance developed endometrial hyperplasia, the chance following LNG-IUS plus endometrial surveillance would be between 0.1% and 1.9%.\nThe LNG-IUS probably increases abnormal vaginal bleeding or spotting. 
After one year, the evidence suggests that if the incidence of abnormal vaginal bleeding or spotting following endometrial surveillance alone is assumed to be 1.7%, the incidence following LNG-IUS plus endometrial surveillance would be between 5.6% and 21.5%. After two years, if 4.2% of women who only had endometrial surveillance experienced abnormal vaginal bleeding or spotting, between 4.4% and 23.9% of women who had both surveillance and LNG-IUS would be expected to experience this. However, by five years of follow-up, no women in either group reported abnormal vaginal bleeding or spotting.\nWe found insufficient evidence to reach a conclusion regarding the effect on incidence of endometrial cancer (a cancer originating in glandular tissue), fibroids, breast cancer recurrence, or breast cancer-related death." } ]
query-laysum
23447
[ { "role": "user", "content": "Abstract: Background\nPancreatoduodenectomy is a surgical procedure used to treat diseases of the pancreatic head and, less often, the duodenum. The most common disease treated is cancer, but pancreatoduodenectomy is also used for people with traumatic lesions and chronic pancreatitis. Following pancreatoduodenectomy, the pancreatic stump must be connected with the small bowel where pancreatic juice can play its role in food digestion. Pancreatojejunostomy (PJ) and pancreatogastrostomy (PG) are surgical procedures commonly used to reconstruct the pancreatic stump after pancreatoduodenectomy. Both of these procedures have a non-negligible rate of postoperative complications. Since it is unclear which procedure is better, there are currently no international guidelines on how to reconstruct the pancreatic stump after pancreatoduodenectomy, and the choice is based on the surgeon's personal preference.\nObjectives\nTo assess the effects of pancreaticogastrostomy compared to pancreaticojejunostomy on postoperative pancreatic fistula in participants undergoing pancreaticoduodenectomy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 9), Ovid MEDLINE (1946 to 30 September 2016), Ovid Embase (1974 to 30 September 2016) and CINAHL (1982 to 30 September 2016). We also searched clinical trials registers (ClinicalTrials.gov and WHO ICTRP) and screened references of eligible articles and systematic reviews on this subject. There were no language or publication date restrictions.\nSelection criteria\nWe included all randomized controlled trials (RCTs) assessing the clinical outcomes of PJ compared to PG in people undergoing pancreatoduodenectomy.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration. We performed descriptive analyses of the included RCTs for the primary (rate of postoperative pancreatic fistula and mortality) and secondary outcomes (length of hospital stay, rate of surgical re-intervention, overall rate of surgical complications, rate of postoperative bleeding, rate of intra-abdominal abscess, quality of life, cost analysis). We used a random-effects model for all analyses. We calculated the risk ratio (RR) for dichotomous outcomes, and the mean difference (MD) for continuous outcomes (using PG as the reference) with 95% confidence intervals (CI) as a measure of variability.\nMain results\nWe included 10 RCTs that enrolled a total of 1629 participants. The characteristics of all studies matched the requirements to compare the two types of surgical reconstruction following pancreatoduodenectomy. All studies reported incidence of postoperative pancreatic fistula (the main complication) and postoperative mortality.\nOverall, the risk of bias in included studies was high; only one included study was assessed at low risk of bias.\nThere was little or no difference between PJ and PG in overall risk of postoperative pancreatic fistula (PJ 24.3%; PG 21.4%; RR 1.19, 95% CI 0.88 to 1.62; 7 studies; low-quality evidence). Inclusion of studies that clearly distinguished clinically significant pancreatic fistula resulted in us being uncertain whether PJ improved the risk of pancreatic fistula when compared with PG (19.3% versus 12.8%; RR 1.51, 95% CI 0.92 to 2.47; very low-quality evidence). 
There is probably little or no difference between PJ and PG in risk of postoperative mortality (3.9% versus 4.8%; RR 0.84, 95% CI 0.53 to 1.34; moderate-quality evidence).\nWe found low-quality evidence that PJ may differ little from PG in length of hospital stay (MD 1.04 days, 95% CI -1.18 to 3.27; 4 studies, N = 502) or risk of surgical re-intervention (11.6% versus 10.3%; RR 1.18, 95% CI 0.86 to 1.61; 7 studies, N = 1263). We found moderate-quality evidence suggesting little difference between PJ and PG in terms of risk of any surgical complication (46.5% versus 44.5%; RR 1.03, 95% CI 0.90 to 1.18; 9 studies, N = 1513). PJ may slightly reduce the risk of postoperative bleeding (9.3% versus 13.8%; RR 0.69, 95% CI 0.51 to 0.93; low-quality evidence; 8 studies, N = 1386), but may slightly increase the risk of developing intra-abdominal abscess (14.7% versus 8.0%; RR 1.77, 95% CI 1.11 to 2.81; 7 studies, N = 1121; low-quality evidence). Only one study reported quality of life (N = 320); PG may improve some quality of life parameters over PJ (low-quality evidence). No studies reported cost analysis data.\nAuthors' conclusions\nThere is no reliable evidence to support the use of pancreatojejunostomy over pancreatogastrostomy. Future large international studies may shed new light on this field of investigation.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Most studies had flaws in methodological quality, reporting or both. Overall, the quality of evidence was low." } ]
query-laysum
23448
[ { "role": "user", "content": "Abstract: Background\nAntipsychotic agents are often used to treat neuropsychiatric symptoms (NPS) in people with dementia although there is uncertainty about the effectiveness of their long-term use for this indication and concern that they may cause harm, including higher mortality. When behavioural strategies have failed and treatment with antipsychotic drugs is instituted, regular attempts to withdraw them have been recommended in guidelines. Physicians, nurses and families of older people with dementia may be reluctant to stop antipsychotics, fearing deterioration of NPS.\nThis is an update of a Cochrane Review published in 2013.\nObjectives\nTo evaluate whether withdrawal of antipsychotic agents is successful in older people with dementia and NPS in primary care or nursing home settings, to list the different strategies for withdrawal of antipsychotic agents in older participants with dementia and NPS, and to measure the effects of withdrawal of antipsychotic agents on participants' behaviour and assess safety.\nSearch methods\nWe searched the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (ALOIS), the Cochrane Library, MEDLINE, Embase, PsycINFO, CINAHL, LILACS, clinical trials registries and grey literature sources up to 11 January 2018.\nSelection criteria\nWe included all randomised, controlled trials comparing an antipsychotic withdrawal strategy to continuation of antipsychotics in people with dementia who had been treated with an antipsychotic drug for at least three months.\nData collection and analysis\nWe used standard methodological procedures according to the Cochrane Handbook for Systematic Reviews of Interventions. We rated the quality of evidence for each outcome using the GRADE approach.\nMain results\nWe included 10 studies involving 632 participants. One new trial (19 participants) was added for this update.\nOne trial was conducted in a community setting, eight in nursing homes and one in both settings. Different types of antipsychotics at varying doses were discontinued in the studies. Both abrupt and gradual withdrawal schedules were used. Reported data were predominantly from studies at low or unclear risk of bias.\nWe included nine trials with 575 randomised participants that used a proxy outcome for overall success of antipsychotic withdrawal. Pooling data was not possible due to heterogeneity of outcome measures used. Based on assessment of seven studies, discontinuation may make little or no difference to whether or not participants complete the study (low-quality evidence).\nTwo trials included only participants with psychosis, agitation or aggression who had responded to antipsychotic treatment. In these two trials, stopping antipsychotics was associated with a higher risk of leaving the study early due to symptomatic relapse or a shorter time to symptomatic relapse.\nWe found low-quality evidence that discontinuation may make little or no difference to overall NPS, measured using various scales (7 trials, 519 participants). There was some evidence from subgroup analyses in two trials that discontinuation may reduce agitation for participants with less severe NPS at baseline, but may be associated with a worsening of NPS in participants with more severe NPS at baseline.\nNone of the studies assessed withdrawal symptoms. Adverse effects of antipsychotics (such as falls) were not systematically assessed. 
Low-quality evidence showed that discontinuation may have little or no effect on adverse events (5 trials, 381 participants), quality of life (2 trials, 119 participants), or cognitive function (5 trials, 365 participants).\nThere were insufficient data to determine whether discontinuation of antipsychotics has any effect on mortality (very low-quality evidence).\nAuthors' conclusions\nThere is low-quality evidence that antipsychotics may be successfully discontinued in older people with dementia and NPS who have been taking antipsychotics for at least three months, and that discontinuation may have little or no important effect on behavioural and psychological symptoms. This is consistent with the observation that most behavioural complications of dementia are intermittent and often do not persist for longer than three months. Discontinuation may have little or no effect on overall cognitive function. Discontinuation may make no difference to adverse events and quality of life. Based on the trials in this review, we are uncertain whether discontinuation of antipsychotics leads to a decrease in mortality.\nPeople with psychosis, aggression or agitation who responded well to long-term antipsychotic drug use, or those with more severe NPS at baseline, may benefit behaviourally from continuation of antipsychotics. Discontinuation may reduce agitation for people with mild NPS at baseline. However, these conclusions are based on few studies or small subgroups and further evidence of benefits and harms associated with withdrawal of antipsychotics is required in people with dementia and mild and severe NPS.\nThe overall conclusions of the review have not changed since 2013 and the number of available trials remains low.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "People with dementia may have symptoms and behavioural problems that can be distressing and difficult for carers to manage. Such symptoms (often described as neuropsychiatric symptoms, or NPS) include anxiety, apathy, depression, psychosis (hallucinations and delusions), wandering, repeating words or sounds, shouting, and behaving in agitated or aggressive ways, or both.\nAntipsychotic drugs are often prescribed with the aim of controlling these symptoms and behaviours, although most current guidance suggests these drugs should only be used for short periods of time for the most challenging behaviours. This is largely because these drugs are thought to have risks of side effects (including some that are serious), and because many behavioural problems improve without treatment. However, many people with dementia continue to take antipsychotic drugs over long periods of time.\nThis review investigated whether it is feasible for older people with dementia and NPS to stop antipsychotic drugs which they have been taking for at least three months. This is an update of a Cochrane Review published in 2013." } ]
query-laysum
23449
[ { "role": "user", "content": "Abstract: Background\nMénière's disease or syndrome is a chronic inner ear disorder that results in sporadic attacks of vertigo, sensorineural hearing loss, aural fullness and tinnitus.\nThere is no definitive treatment for Ménière's disease and treatment options range from dietary modification through medication to surgery.\nModification of diet, including restriction of salt, caffeine and alcohol intake, is a management option that is widely recommended to patients with Ménière's as a first-line treatment. There has not previously been a systematic review of this intervention.\nObjectives\nTo assess the effects of dietary restriction of salt, caffeine and alcohol intake in patients with Ménière's disease or syndrome.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; Central Register of Controlled Trials (CENTRAL); PubMed; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 28 March 2018.\nSelection criteria\nRandomised controlled trials of dietary modification, specifically salt, caffeine and alcohol restriction or substitution (or both), compared to no modification of these agents or a placebo intervention, in adult patients with Ménière's disease or syndrome.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were control of vertigo or decrease in vertigo attacks, and adverse effects. Secondary outcomes included hearing (change in hearing loss or its progression), tinnitus (severity), perception of aural fullness, well-being and quality of life (overall changes), and other adverse effects. We planned to use GRADE to assess the quality of the evidence for each outcome.\nMain results\nWe did not identify any studies that met the inclusion criteria for the review.\nAuthors' conclusions\nThere is no evidence from randomised controlled trials to support or refute the restriction of salt, caffeine or alcohol intake in patients with Ménière's disease or syndrome.\nHigh-quality research in this field is warranted. The best evidence may come from a randomised controlled trial comparing dietary interventions (e.g. low salt versus general healthy diet advice), using rigorous methodology for patient selection, randomisation and blinding, and strictly adhering to the Bárány Society diagnostic criteria. However, this research question might be more pragmatically addressed by using information from carefully constructed patient registries that include information on dietary intake of substances of interest such as salt, caffeine and alcohol. It will be important to address the question of any possible harms or unwanted effects of dietary modification.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n" }, { "role": "assistant", "content": "Approximately 200 per 100,000 people suffer from Ménière's. The condition is called Ménière's disease if no cause can be identified and Ménière's syndrome if there is a known reason for its development. 
Patients who suffer from it experience vertigo (dizzy episodes), hearing loss, a sensation of pressure in their ears and tinnitus (ringing or buzzing in the ears).\nAt present there is no standard treatment for Ménière's disease and options can range from dietary changes, to medicines and in some cases surgery. Ménière's disease is thought to be caused by disturbance of the volume or composition of the fluid in the inner ear (called endolymph). Dietary intake of salt can affect the concentrations of electrolytes (salts and minerals that can conduct electrical impulses in the body) in the blood, which in turn may affect the composition of the endolymph. Salt intake may therefore contribute to attacks and so restriction in the diet could be used to control both the volume and composition of the endolymph. Caffeine and alcohol intake can result in constriction of blood vessels (vasoconstriction) and could result in a reduction in the blood supply to the inner ear, which may make patients' symptoms worse. Many doctors advise dietary changes as a first-line treatment as it is thought to be a relatively simple and inexpensive option. We wanted to find out whether dietary changes are effective to ensure that patients are receiving the correct advice about treatment options, and to ensure that potentially more appropriate treatment is not delayed by spending time on ineffective interventions, resulting in unpleasant symptoms and disease progression." } ]