dataset (stringclasses, 1 value) | id (stringlengths, 1 to 5) | messages (listlengths, 2 to 2)
---|---|---|
query-laysum | 1314 | [
{
"role": "user",
"content": "Abstract: Background\nDepression is one of the most common morbidities of the postnatal period. It has been associated with adverse outcomes for women, children, the wider family and society as a whole. Treatment is with psychosocial interventions or antidepressant medication, or both. The aim of this review is to evaluate the effectiveness of different antidepressants and to compare their effectiveness with placebo, treatment as usual or other forms of treatment. This is an update of a review last published in 2014.\nObjectives\nTo assess the effectiveness and safety of antidepressant drugs in comparison with any other treatment (psychological, psychosocial, or pharmacological), placebo, or treatment as usual for postnatal depression.\nSearch methods\nWe searched Cochrane Common Mental Disorders's Specialized Register, CENTRAL, MEDLINE, Embase and PsycINFO in May 2020. We also searched international trials registries and contacted experts in the field.\nSelection criteria\nWe included randomised controlled trials (RCTs) of women with depression during the first 12 months postpartum that compared antidepressant treatment (alone or in combination with another treatment) with any other treatment, placebo or treatment as usual.\nData collection and analysis\nTwo review authors independently extracted data from the study reports. We requested missing information from study authors wherever possible. We sought data to allow an intention-to-treat analysis. Where we identified sufficient comparable studies we pooled data and conducted random-effects meta-analyses.\nMain results\nWe identified 11 RCTs (1016 women), the majority of which were from English-speaking, high-income countries; two were from middle-income countries. Women were recruited from a mix of community-based, primary care, maternity and outpatient settings. Most studies used selective serotonin reuptake inhibitors (SSRIs), with treatment duration ranging from 4 to 12 weeks.\nMeta-analysis showed that there may be a benefit of SSRIs over placebo in response (55% versus 43%; pooled risk ratio (RR) 1.27, 95% confidence interval (CI) 0.97 to 1.66); remission (42% versus 27%; RR 1.54, 95% CI 0.99 to 2.41); and reduced depressive symptoms (standardised mean difference (SMD) −0.30, 95% CI −0.55 to −0.05; 4 studies, 251 women), at 5 to 12 weeks' follow-up. We were unable to conduct meta-analysis for adverse events due to variation in the reporting of this between studies. There was no evidence of a difference between acceptability of SSRI and placebo (27% versus 27%; RR 1.10, 95% CI 0.74 to 1.64; 4 studies; 233 women). The certainty of all the evidence for SSRIs was low or very low due to the small number of included studies and a number of potential sources of bias, including high rates of attrition.\nThere was insufficient evidence to assess the efficacy of SSRIs compared with other classes of antidepressants and of antidepressants compared with other pharmacological interventions, complementary medicines, psychological and psychosocial interventions or treatment as usual. A substantial proportion of women experienced adverse effects but there was no evidence of differences in the number of adverse effects between treatment groups in any of the studies. 
Data on effects on children, including breastfed infants, parenting, and the wider family were limited, although no adverse effects were noted.\nAuthors' conclusions\nThere remains limited evidence regarding the effectiveness and safety of antidepressants in the management of postnatal depression, particularly for those with more severe depression. We found low-certainty evidence that SSRI antidepressants may be more effective in treating postnatal depression than placebo as measured by response and remission rates. However, the low certainty of the evidence suggests that further research is very likely to have an important impact on our effect estimate. There is a continued imperative to better understand whether, and for whom, antidepressants or other treatments are more effective for postnatal depression, and whether some antidepressants are more effective or better tolerated than others.\nIn clinical practice, the findings of this review need to be contextualised by the extensive broader literature on antidepressants in the general population and perinatal clinical guidance, to inform an individualised risk-benefit clinical decision. Future RCTs should focus on larger samples, longer follow-up, comparisons with alternative treatment modalities and inclusion of child and parenting outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review found only a few relevant studies. There is some evidence that antidepressants may work better than a dummy pill for women with postnatal depression. There is not enough evidence comparing antidepressants to other treatments for postnatal depression. Clinicians need to consider study evidence from the general population and current clinical guidelines, along with the woman's illness history and current symptoms, to make an individualised risk-benefit treatment decision with the woman."
}
]
|
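The row above (id 1314) reports pooled risk ratios with 95% confidence intervals, for example a response rate of 55% versus 43% giving RR 1.27 (0.97 to 1.66). As a reading aid, the sketch below shows how a single-study risk ratio and its Wald-type confidence interval are computed from raw counts. The counts used here are hypothetical placeholders chosen only to give roughly 55% versus 43% response, not data from the included trials, and the review itself pooled several studies with a random-effects model rather than a single 2 × 2 table.

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio (treatment vs control) with a Wald-type 95% CI on the log scale."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Large-sample standard error of log(RR).
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical counts for illustration only (~55% vs ~43% response).
print(risk_ratio(events_t=66, n_t=120, events_c=52, n_c=121))
```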
query-laysum | 1315 | [
{
"role": "user",
"content": "Abstract: Background\nIncontinence-associated dermatitis (IAD) is one of the most common skin problems in adults who are incontinent for urine, stool, or both. In practice, products and procedures are the same for both prevention and treatment of IAD.\nObjectives\nThe objective of this review was to assess the effectiveness of various products and procedures to prevent\nand treat incontinence-associated dermatitis in adults.\nSearch methods\nWe searched the Cochrane Incontinence Group Specialised Trials Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, CINAHL, ClinicalTrials.gov, WHO ICTRP and handsearching of journals and conference proceedings (searched 28 September 2016). Additionally we searched other electronic databases: CENTRAL(2015, Issue 4), MEDLINE (January 1946 to May Week 3 2015), MEDLINE In-Process (inception to 26 May 2015), CINAHL(December 1981 to 28 May 2015), Web of Science (WoS; inception to 28 May 2015) and handsearched conference proceedings (to June 2015) and the reference lists of relevant articles, and contacted authors and experts in the field.\nSelection criteria\nWe selected randomised controlled trials (RCTs) and quasi-RCTs, performed in any healthcare setting, with included participants over 18 years of age, with or without IAD. We included trials comparing the (cost) effectiveness of topical skin care products such as skin cleansers, moisturisers, and skin protectants of different compositions and skin care procedures aiming to prevent and treat IAD.\nData collection and analysis\nTwo review authors independently screened titles, abstracts and full-texts, extracted data, and assessed the risk of bias of the included trials.\nMain results\nWe included 13 trials with 1316 participants in a qualitative synthesis. Participants were incontinent for urine, stool, or both, and were residents in a nursing home or were hospitalised.\nEleven trials had a small sample size and short follow-up periods. .The overall risk of bias in the included studies was high. The data were not suitable for meta-analysis due to heterogeneity in participant population, skin care products, skin care procedures, outcomes, and measurement tools.\nNine trials compared different topical skin care products, including a combination of products. Two trials tested a structured skin care procedure. One trial compared topical skin care products alongside frequencies of application. One trial compared frequencies of application of topical skin care products.\nWe found evidence in two trials, being of low and moderate quality, that soap and water performed poorly in the prevention and treatment of IAD (primary outcomes of this review). The first trial indicated that the use of a skin cleanser might be more effective than the use of soap and water (risk ratio (RR) 0.39, 95% confidence interval (CI) 0.17 to 0.87; low quality evidence). The second trial indicated that a structured skin care procedure, being a washcloth with cleansing, moisturising, and protecting properties, might be more effective than soap and water (RR 0.31, 95% CI 0.12 to 0.79; moderate quality evidence). Findings from the other trials, all being of low to very low quality, suggest that applying a leave-on product (moisturiser, skin protectant, or a combination) might be more effective than not applying a leave-on product. 
No trial reported on the third primary outcome 'number of participants not satisfied with treatment' or on adverse effects.\nAuthors' conclusions\nLittle evidence, of very low to moderate quality, exists on the effects of interventions for preventing and treating IAD in adults. Soap and water performed poorly in the prevention and treatment of IAD. Application of leave-on products (moisturisers, skin protectants, or a combination) and avoiding soap seems to be more effective than withholding these products. The performance of leave-on products depends on the combination of ingredients, the overall formulation and the usage (e.g. amount applied). High quality confirmatory trials using standardised, and comparable prevention and treatment regimens in different settings/regions are required. Furthermore, to increase the comparability of trial results, we recommend the development of a core outcome set, including validated measurement tools. The evidence in this review is current up to 28 September 2016.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Incontinence-associated dermatitis (IAD) is an inflammation of the skin due to contact with urine or stool. IAD occurs in people with involuntary loss of urine or stool (incontinence). The main symptom of IAD is skin redness. In addition, bullae, skin lesions, and skin infection may occur. IAD affects one to five in ten incontinent adults and is a risk factor for pressure-related skin problems. To prevent and treat IAD, skin cleansing and skin care products are recommended. Many skin care products and procedures are available. The skin care products can be divided into cleansers, moisturisers, and protectants which may be combined (for example, a cleanser/moisturiser). In practice, products and procedures are the same for both prevention and treatment."
}
]
|
query-laysum | 1316 | [
{
"role": "user",
"content": "Abstract: Background\nChronic obstructive pulmonary disease (COPD) is associated with cough, sputum production or dyspnoea, and a reduction in lung function, quality of life, and life expectancy. Apart from smoking cessation, no other treatments that slow lung function decline are available. Roflumilast and cilomilast are oral phosphodiesterase-4 (PDE₄) inhibitors proposed to reduce the airway inflammation and bronchoconstriction seen in COPD. This Cochrane Review was first published in 2011, and was updated in 2017 and 2020.\nObjectives\nTo evaluate the efficacy and safety of oral PDE₄ inhibitors for management of stable COPD.\nSearch methods\nWe identified randomised controlled trials (RCTs) from the Cochrane Airways Trials Register (date of last search 9 March 2020). We found other trials at web-based clinical trials registers.\nSelection criteria\nWe included RCTs if they compared oral PDE₄ inhibitors with placebo in people with COPD. We allowed co-administration of standard COPD therapy.\nData collection and analysis\nWe used standard Cochrane methods. Two independent review authors selected trials for inclusion, extracted data, and assessed risk of bias. We resolved discrepancies by involving a third review author. We assessed our confidence in the evidence by using GRADE recommendations. Primary outcomes were change in lung function (minimally important difference (MID) = 100 mL) and quality of life (scale 0 to 100; higher score indicates more limitations).\nMain results\nWe found 42 RCTs that met the inclusion criteria and were included in the analyses for roflumilast (28 trials with 18,046 participants) or cilomilast (14 trials with 6457 participants) or tetomilast (1 trial with 84 participants), with a duration between six weeks and one year or longer. These trials included people across international study centres with moderate to very severe COPD (Global Initiative for Chronic Obstructive Lung Disease (GOLD) grades II to IV), with mean age of 64 years.\nWe judged risks of selection bias, performance bias, and attrition bias as low overall amongst the 39 published and unpublished trials.\nLung function\nTreatment with a PDE₄ inhibitor was associated with a small, clinically insignificant improvement in forced expiratory volume in one second (FEV₁) over a mean of 40 weeks compared with placebo (mean difference (MD) 49.33 mL, 95% confidence interval (CI) 44.17 to 54.49; participants = 20,815; studies = 29; moderate-certainty evidence). Forced vital capacity (FVC) and peak expiratory flow (PEF) were also improved over 40 weeks (FVC: MD 86.98 mL, 95% CI 74.65 to 99.31; participants = 22,108; studies = 17; high-certainty evidence; PEF: MD 6.54 L/min, 95% CI 3.95 to 9.13; participants = 4245; studies = 6; low-certainty evidence).\nQuality of life\nTrials reported improvements in quality of life over a mean of 33 weeks (St George's Respiratory Questionnaire (SGRQ) MD -1.06 units, 95% CI -1.68 to -0.43; participants = 7645 ; moderate-certainty evidence).\nIncidence of exacerbations\nTreatment with a PDE₄ inhibitor was associated with a reduced likelihood of COPD exacerbation over a mean of 40 weeks (odds ratio (OR) 0.78, 95% CI 0.73 to 0.84; participants = 20,382; studies = 27; high-certainty evidence), that is, for every 100 people treated with PDE₄ inhibitors, five more remained exacerbation-free during the study period compared with those given placebo (number needed to treat for an additional beneficial outcome (NNTB) 20, 95% CI 16 to 27). 
No change in COPD-related symptoms nor in exercise tolerance was found.\nAdverse events\nMore participants in the treatment groups experienced an adverse effect compared with control participants over a mean of 39 weeks (OR 1.30, 95% CI 1.22 to 1.38; participants = 21,310; studies = 30; low-certainty evidence). Participants experienced a range of gastrointestinal symptoms such as diarrhoea, nausea, vomiting, or dyspepsia. Diarrhoea was more commonly reported with PDE₄ inhibitor treatment (OR 3.20, 95% CI 2.74 to 3.50; participants = 20,623; studies = 29; high-certainty evidence), that is, for every 100 people treated with PDE₄ inhibitors, seven more suffered from diarrhoea during the study period compared with those given placebo (number needed to treat for an additional harmful outcome (NNTH) 15, 95% CI 13 to 17). The likelihood of psychiatric adverse events was higher with roflumilast 500 µg than with placebo (OR 2.13, 95% CI 1.79 to 2.54; participants = 11,168; studies = 15 (COPD pool data); moderate-certainty evidence). Roflumilast in particular was associated with weight loss during the trial period and with an increase in insomnia and depressive mood symptoms.\nParticipants treated with PDE₄ inhibitors were more likely to withdraw from trial participation; on average, 14% in the treatment groups withdrew compared with 8% in the control groups.\nMortality\nNo effect on mortality was found (OR 0.98, 95% CI 0.77 to 1.24; participants = 19,786; studies = 27; moderate-certainty evidence), although mortality was a rare event during these trials.\nAuthors' conclusions\nFor this current update, five new studies from the 2020 search contributed to existing findings but made little impact on outcomes described in earlier versions of this review.\nPDE₄ inhibitors offered a small benefit over placebo in improving lung function and reducing the likelihood of exacerbations in people with COPD; however, they had little impact on quality of life or on symptoms. Gastrointestinal adverse effects and weight loss were common, and the likelihood of psychiatric symptoms was higher, with roflumilast 500 µg.\nThe findings of this review provide cautious support for the use of PDE₄ inhibitors in COPD. In accordance with GOLD 2020 guidelines, they may have a place as add-on therapy for a subgroup of people with persistent symptoms or exacerbations despite optimal COPD management (e.g. people whose condition is not controlled by fixed-dose long-acting beta₂-agonist (LABA) and inhaled corticosteroid (ICS) combinations). More longer-term trials are needed to determine whether or not PDE₄ inhibitors modify FEV₁ decline, hospitalisation, or mortality in COPD.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "COPD is a progressive lung condition caused by damage from harmful chemicals breathed in and is predominantly seen in people who smoke tobacco. These chemicals cause inflammation and lung damage and increase mucus production in the lungs. This leads to periods of breathlessness and coughing called exacerbations (or flare-ups). Exacerbations make it harder for people to do their day-to-day tasks. Exacerbations become more frequent and severe over time. People vary in terms of how they are affected by COPD. This is related in part to the severity of the disease but also to differences in response to medicines, as well as fitness and co-existent conditions. For most people, the only way to prevent further lung damage is to stop smoking.\nMedicines prescribed to manage COPD generally aim to improve symptoms, reduce exacerbations, or both. In early stages, taking bronchodilators makes breathing easier by relaxing muscles in the lungs and widening airways, allowing more air to move freely into and out of the lungs.\nSome long-acting agents may reduce exacerbations. For example, steroid inhalers reduce inflammation in the lungs and thus modestly reduce the number of exacerbations.\nPhosphodiesterase-4 (PDE₄) inhibitors are a relatively new class of medicines marketed to improve COPD. They have both bronchodilator and anti-inflammatory effects. Two currently available medicines - roflumilast and cilomilast - are taken as a tablet. We collated and analysed results of existing trials to define the benefits and risks of PDE₄ inhibitors in COPD."
}
]
|
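The row above (id 1316) turns a pooled odds ratio for exacerbations (OR 0.78) into an absolute statement: roughly five more people per 100 remaining exacerbation-free, an NNTB of about 20. That translation needs an assumed control-group event rate, which the abstract does not state explicitly; the sketch below shows the standard arithmetic, with the control risk marked as an assumption (a value of about 30% reproduces an NNTB near 20).

```python
def nnt_from_or(odds_ratio, control_risk):
    """Convert an odds ratio into an absolute risk reduction and NNT,
    given an assumed control-group event risk (risk of an exacerbation here)."""
    control_odds = control_risk / (1 - control_risk)
    treated_odds = odds_ratio * control_odds
    treated_risk = treated_odds / (1 + treated_odds)
    arr = control_risk - treated_risk        # absolute risk reduction
    return arr, 1 / arr                      # (risk difference, number needed to treat)

# Assumption for illustration: about 30% of control participants exacerbate.
arr, nnt = nnt_from_or(odds_ratio=0.78, control_risk=0.30)
print(f"{arr * 100:.1f} fewer exacerbations per 100 treated; NNTB ≈ {nnt:.0f}")
```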
query-laysum | 1317 | [
{
"role": "user",
"content": "Abstract: Background\nA permanent upper (maxillary) canine tooth that grows into the roof of the mouth and frequently does not appear (erupt) is called a palatally displaced canine (PDC). The reported prevalence of PDC in the population varies between 1% and 3%. Management of the unerupted PDC can be lengthy, involving surgery to uncover the tooth and prolonged orthodontic (brace) treatment to straighten it; therefore, various procedures have been suggested to encourage a PDC to erupt without the need for surgical intervention.\nObjectives\nTo assess the efficacy, safety and cost-effectiveness of any interceptive procedure to promote the eruption of a PDC compared to no treatment or other interceptive procedures in young people aged 9 to 14 years old.\nSearch methods\nAn information specialist searched four bibliographic databases up to 3 February 2021 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCT) involving at least 80% of children aged between 9 and 14 years, who were diagnosed with an upper PDC and undergoing an intervention to enable the successful eruption of the unerupted PDC, which was compared with an untreated control group or another intervention.\nData collection and analysis\nTwo review authors, independently and in duplicate, examined titles, keywords, abstracts, full articles, extracted data and assessed risk of bias using the Cochrane Risk of Bias 1 tool (RoB1). The primary outcome was summarised with risk ratios (RR) and 95% confidence intervals (CI). We reported an intention-to-treat (ITT) analysis when data were available and a modified intention-to-treat (mITT) analysis if not. We also undertook several sensitivity analyses. We used summary of findings tables to present the main findings and our assessment of the certainty of the evidence.\nMain results\nWe included four studies, involving 199 randomised participants (164 analysed), 108 girls and 91 boys, 82 of whom were diagnosed with unilateral PDC and 117 with bilateral PDC. The participants were aged between 8 and 13 years at recruitment. The certainty of the evidence was very low and future research may change our conclusions.\nOne study (randomised 67 participants, 89 teeth) found that extracting the primary canine may increase the proportion of PDCs that successfully erupt into the mouth at 12 months compared with no extraction (RR 2.87, 95% CI 0.90 to 9.23; 45 participants, 45 PDCs analysed; very low-certainty evidence), but the CI included the possibility of no difference; therefore the evidence was uncertain. There was no evidence that extraction of the primary canine reduced the number of young people with a PDC referred for surgery at 12 months (RR 0.61 (95% CI 0.29 to 1.28).\nThree studies (randomised 132 participants, 227 teeth) found no difference in the proportion of successfully erupted PDCs at 18 months with a double primary tooth extraction compared with extraction of a single primary canine (RR 0.68, 95% CI 0.35 to 1.31; 119 participants analysed, 203 PDCs; mITT; very low-certainty evidence). 
Two of these studies found no difference in the proportions referred for surgical exposure between the single and the double primary extraction groups data at 48 months (RR 0.31, 95% CI 0.06 to 1.45).\nThere are some descriptive data suggesting that the more severe the displacement of the PDC towards the midline, the lower the proportion of successfully erupted PDCs with or without intervention.\nAuthors' conclusions\nThe evidence that extraction of the primary canine in a young person aged between 9 and 14 years diagnosed with a PDC may increase the proportion of erupted PDCs, without surgical intervention, is very uncertain. There is no evidence that double extraction of primary teeth increases the proportion of erupted PDC compared with a single primary tooth extraction at 18 months or the proportion referred for surgery by 48 months. Because we have only low to very low certainty in these findings, future research is necessary to help us know for sure the best way to deal with upper permanent teeth that are not erupting as expected.\n\nGiven the provided abstract, please respond to the following query: \"What were the main findings?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies involving 199 children (195 analysed).\nThere is very weak evidence that extraction of the primary canine in children aged between 9 and 14 years may increase the probability that the PDC will successfully erupt into the mouth without the need for surgery by 12 months. There is no evidence it reduces the number of children needing surgery to correct their PDC.\nThere is no evidence that double extraction of primary teeth increases the proportion of erupted PDCs compared with a single primary tooth extraction by 18 months after treatment or that it reduces the number of children needing surgery to correct the PDC by 48 months.\nThere is some limited evidence suggesting that the severity of the displacement of the PDC towards the midline may be important in deciding whether or not to intervene. If it is very far from the midline, it may be less likely to be successful."
}
]
|
query-laysum | 1318 | [
{
"role": "user",
"content": "Abstract: Background\nGiven the significant impact epilepsy may have on the health-related quality of life (HRQOL) of individuals with epilepsy and their families, there is increasing clinical interest in evidence-based psychological treatments, aimed at enhancing psychological and seizure-related outcomes for this group.\nThis is an updated version of the original Cochrane Review published in Issue 10, 2017.\nObjectives\nTo assess the impact of psychological treatments for people with epilepsy on HRQOL outcomes.\nSearch methods\nFor this update, we searched the following databases on 12 August 2019, without language restrictions: Cochrane Register of Studies (CRS Web), which includes randomized or quasi-randomized controlled trials from the Specialized Registers of Cochrane Review Groups including Epilepsy, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid, 1946 to 09 August 2019), and PsycINFO (EBSCOhost, 1887 onwards), and from PubMed, Embase, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We screened the references from included studies and relevant reviews, and contacted researchers in the field for unpublished studies.\nSelection criteria\nWe considered randomized controlled trials (RCTs) and quasi-RCTs for this review. HRQOL was the main outcome. For the operational definition of 'psychological treatments', we included a broad range of skills-based psychological treatments and education-only interventions designed to improve HRQOL, seizure frequency and severity, as well as psychiatric and behavioral health comorbidities for adults and children with epilepsy. These psychological treatments were compared to treatment as usual (TAU), an active control group (such as social support group), or antidepressant pharmacotherapy.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 36 completed RCTs, with a total of 3526 participants. Of these studies, 27 investigated skills-based psychological interventions. The remaining nine studies were education-only interventions. Six studies investigated interventions for children and adolescents, three studies investigated interventions for adolescents and adults, and the remaining studies investigated interventions for adults. Based on satisfactory clinical and methodological homogeneity, we pooled data from 11 studies (643 participants) that used the Quality of Life in Epilepsy-31 (QOLIE-31) or other QOLIE inventories (such as QOLIE-89 or QOLIE-31-P) convertible to QOLIE-31. We found significant mean changes for the QOLIE-31 total score and six subscales (emotional well-being, energy and fatigue, overall QoL, seizure worry, medication effects, and cognitive functioning). The mean changes in the QOLIE-31 total score (mean improvement of 5.23 points, 95% CI 3.02 to 7.44; P < 0.001), and the overall QoL score (mean improvement of 5.95 points, 95% CI 3.05 to 8.85; P < 0.001) exceeded the threshold of minimally important change (MIC: total score: 4.73 points; QoL score: 5.22 points), indicating a clinically meaningful postintervention improvement in HRQOL. We downgraded the certainty of the evidence provided by the meta-analysis due to serious risks of bias in some of the included studies. 
Consequently, these results provided moderate-certainty evidence that psychological treatments for adults with epilepsy may enhance overall HRQOL.\nAuthors' conclusions\nImplications for practice: Skills-based psychological interventions improve HRQOL in adults and adolescents with epilepsy. Adjunctive use of skills-based psychological treatments for adults and adolescents with epilepsy may provide additional benefits in HRQOL when these are incorporated into patient-centered management. We judge the evidence to be of moderate certainty.\nImplications for research: Investigators should strictly adhere to the CONSORT guidelines to improve the quality of reporting on their interventions. A thorough description of intervention protocols is necessary to ensure reproducibility.\nWhen examining the effectiveness of psychological treatments for people with epilepsy, the use of standardized HRQOL inventories, such as the Quality of Life in Epilepsy Inventories (QOLIE-31, QOLIE-31-P, and QOLIE-89) would increase comparability. Unfortunately, there is a critical gap in pediatric RCTs and RCTs including people with epilepsy and intellectual disabilities.\nFinally, in order to increase the overall quality of RCT study designs, adequate randomization with allocation concealment and blinded outcome assessment should be pursued. As attrition is often high in research that requires active participation, an intention-to-treat analysis should be carried out. Treatment fidelity and treatment competence should also be assessed. These important dimensions, which are related to 'Risk of bias' assessment, should always be reported.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Epilepsy is a brain-condition in which sudden bursts of intense electrical activity happen in the brain and cause the brain's messages to get mixed up, resulting in a seizure. Seizures affect people in different ways: they may cause unusual sensations, movements or feelings, loss of awareness, falls, stiffness or jerking. Epileptic seizures can occur repeatedly and without any triggers. Seizures can happen anytime and anywhere; they can come on suddenly and can happen often.\nEpilepsy can significantly impact a person’s wellbeing and quality of life. For instance, many people with epilepsy experience depression and anxiety, memory problems, unemployment and discrimination, adverse side effects from medications, challenges to independence and worries about seizures and their consequences.\nTreatments for epilepsy typically focus on stopping or reducing the number of seizures a person has with as little side effects as possible. However, psychological therapies, usually delivered by psychologists, psychiatrists or other healthcare professionals could improve well-being in people with epilepsy."
}
]
|
query-laysum | 1319 | [
{
"role": "user",
"content": "Abstract: Background\nOlder adults are the most sedentary segment of society, often spending in excess of 8.5 hours a day sitting. Large amounts of time spent sedentary, defined as time spend sitting or in a reclining posture without spending energy, has been linked to an increased risk of chronic diseases, frailty, loss of function, disablement, social isolation, and premature death.\nObjectives\nTo evaluate the effectiveness of interventions aimed at reducing sedentary behaviour amongst older adults living independently in the community compared to control conditions involving either no intervention or interventions that do not target sedentary behaviour.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, PsycINFO, PEDro, EPPI-Centre databases (Trials Register of Promoting Health Interventions (TRoPHI) and the Obesity and Sedentary behaviour Database), WHO ICTRP, and ClinicalTrials.gov up to 18 January 2021. We also screened the reference lists of included articles and contacted authors to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster-RCTs. We included interventions purposefully designed to reduce sedentary time in older adults (aged 60 or over) living independently in the community. We included studies if some of the participants had multiple comorbidities, but excluded interventions that recruited clinical populations specifically (e.g. stroke survivors).\nData collection and analysis\nTwo review authors independently screened titles and abstracts and full-text articles to determine study eligibility. Two review authors independently extracted data and assessed risk of bias. We contacted authors for additional data where required. Any disagreements in study screening or data extraction were settled by a third review author.\nMain results\nWe included seven studies in the review, six RCTs and one cluster-RCT, with a total of 397 participants. The majority of participants were female (n = 284), white, and highly educated. All trials were conducted in high-income countries. All studies evaluated individually based behaviour change interventions using a combination of behaviour change techniques such as goal setting, education, and behaviour monitoring or feedback. Four of the seven studies also measured secondary outcomes. The main sources of bias were related to selection bias (N = 2), performance bias (N = 6), blinding of outcome assessment (N = 2), and incomplete outcome data (N = 2) and selective reporting (N=1). The overall risk of bias was judged as unclear.\nPrimary outcomes\nThe evidence suggests that interventions to change sedentary behaviour in community-dwelling older adults may reduce sedentary time (mean difference (MD) −44.91 min/day, 95% confidence interval (CI) −93.13 to 3.32; 397 participants; 7 studies; I2 = 73%; low-certainty evidence). We could not pool evidence on the effect of interventions on breaks in sedentary behaviour or time spent in specific domains such as TV time, as data from only one study were available for these outcomes.\nSecondary outcomes\nWe are uncertain whether interventions to reduce sedentary behaviour have any impact on the physical or mental health outcomes of community-dwelling older adults. 
We were able to pool change data for the following outcomes.\n• Physical function (MD 0.14 Short Physical Performance Battery (SPPB) score, 95% CI −0.38 to 0.66; higher score is favourable; 98 participants; 2 studies; I2 = 26%; low-certainty evidence).\n• Waist circumference (MD 1.14 cm, 95% CI −1.64 to 3.93; 100 participants; 2 studies; I2 = 0%; low-certainty evidence).\n• Fitness (MD -5.16 m in the 6-minute walk test, 95% CI −36.49 to 26.17; higher score is favourable; 80 participants; 2 studies; I2 = 29%; low-certainty evidence).\n• Blood pressure: systolic (MD −3.91 mmHg, 95% CI -10.95 to 3.13; 138 participants; 3 studies; I2 = 73%; very low-certainty evidence) and diastolic (MD −0.06 mmHg, 95% CI −5.72 to 5.60; 138 participants; 3 studies; I2 = 97%; very low-certainty evidence).\n• Glucose blood levels (MD 2.20 mg/dL, 95% CI −6.46 to 10.86; 100 participants; 2 studies; I2 = 0%; low-certainty evidence).\nNo data were available on cognitive function, cost-effectiveness or adverse effects.\nAuthors' conclusions\nIt is not clear whether interventions to reduce sedentary behaviour are effective at reducing sedentary time in community-dwelling older adults. We are uncertain if these interventions have any impact on the physical or mental health of community-dwelling older adults. There were few studies, and the certainty of the evidence is very low to low, mainly due to inconsistency in findings and imprecision. Future studies should consider interventions aimed at modifying the environment, policy, and social and cultural norms. Future studies should also use device-based measures of sedentary time, recruit larger samples, and gather information about quality of life, cost-effectiveness, and adverse event data.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have only low confidence in these findings, due to low sample sizes and because some studies were conducted in ways that may have introduced errors into their results. The findings also combined results from studies using self-reported measures of sedentary time together with device-based measures."
}
]
|
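The row above (id 1319) pools per-study mean differences with a random-effects model and reports I² next to each estimate (for example MD −44.91 min/day of sedentary time, I² = 73%). For readers unfamiliar with that machinery, below is a minimal DerSimonian-Laird pooling routine showing how a pooled mean difference, its confidence interval, and I² are obtained from study-level effects and standard errors. The input numbers are placeholders for illustration, not the seven trials' actual data.

```python
import math

def dersimonian_laird(effects, ses, z=1.96):
    """Random-effects (DerSimonian-Laird) pooling of study-level mean differences.
    effects: per-study mean differences; ses: their standard errors."""
    k = len(effects)
    w = [1 / se ** 2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled, i2

# Placeholder mean differences in sedentary minutes/day and their standard errors.
print(dersimonian_laird(effects=[-90, -10, -60, -25], ses=[20, 15, 25, 18]))
```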
query-laysum | 1321 | [
{
"role": "user",
"content": "Abstract: Background\nAsthma and gastro-oesophageal reflux disease (GORD) are common medical conditions that frequently co-exist. GORD has been postulated as a trigger for asthma; however, evidence remains conflicting. Proposed mechanisms by which GORD causes asthma include direct airway irritation from micro-aspiration and vagally mediated oesophagobronchial reflux. Furthermore, asthma might precipitate GORD. Thus a temporal association between the two does not establish that GORD triggers asthma.\nObjectives\nTo evaluate the effectiveness of GORD treatment in adults and children with asthma, in terms of its benefits for asthma.\nSearch methods\nThe Cochrane Airways Group Specialised Register, CENTRAL, MEDLINE, Embase, reference lists of articles, and online clinical trial databases were searched. The most recent search was conducted on 23 June 2020.\nSelection criteria\nWe included randomised controlled trials comparing treatment of GORD in adults and children with a diagnosis of both asthma and GORD versus no treatment or placebo.\nData collection and analysis\nA combination of two independent review authors extracted study data and assessed trial quality. The primary outcome of interest for this review was acute asthma exacerbation as reported by trialists.\nMain results\nThe systematic search yielded a total of 3354 citations; 23 studies (n = 2872 participants) were suitable for inclusion. Included studies reported data from participants in 25 different countries across Europe, North and South America, Asia, Australia, and the Middle East. Participants included in this review had moderate to severe asthma and a diagnosis of GORD and were predominantly adults presenting to a clinic for treatment. Only two studies assessed effects of intervention on children, and two assessed the impact of surgical intervention. The remainder were concerned with medical intervention using a variety of dosing protocols.\nThere was an uncertain reduction in the number of participants experiencing one or more moderate/severe asthma exacerbations with medical treatment for GORD (odds ratio 0.53, 95% confidence interval (CI) 0.17 to 1.63; 1168 participants, 2 studies; low-certainty evidence). None of the included studies reported data related to the other primary outcomes for this review: hospital admissions, emergency department visits, and unscheduled doctor visits.\nMedical treatment for GORD probably improved forced expiratory volume in one second (FEV₁) by a small amount (mean difference (MD) 0.10 L, 95% CI 0.05 to 0.15; 1333 participants, 7 studies; moderate-certainty evidence) as well as use of rescue medications (MD -0.71 puffs per day, 95% CI -1.20 to -0.22; 239 participants, 2 studies; moderate-certainty evidence). However, the benefit of GORD treatment for morning peak expiratory flow rate was uncertain (MD 6.02 L/min, 95% CI 0.56 to 11.47; 1262 participants, 5 studies). It is important to note that these mean improvements did not reach clinical importance. The benefit of GORD treatment for outcomes synthesised narratively including benefits of treatment for asthma symptoms, quality of life, and treatment preference was likewise uncertain. 
Data related to adverse events with intervention were generally underreported by the included studies, and those that were available indicated similar rates regardless of allocation to treatment or placebo.\nAuthors' conclusions\nEffects of GORD treatment on the primary outcomes of number of people experiencing one or more exacerbations and hospital utilisation remain uncertain. Medical treatment for GORD in people with asthma may provide small benefit for a number of secondary outcomes related to asthma management. This review determined with moderate certainty that with treatment, lung function measures improved slightly, and use of rescue medications for asthma control was reduced. Further, evidence is insufficient to assess results in children, or to compare surgery versus medical therapy.\n\nGiven the provided abstract, please respond to the following query: \"Bottom line\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Moderate-certainty evidence (as some of the included studies were poorly described) shows that with medical treatment for GORD, people with asthma may experience a small improvement in their lung function and may be able to reduce their need to use rescue medications. However, the impact of treatment for GORD on events such as asthma flare-ups, symptoms, or the need to go to the hospital or consult a doctor is uncertain. Additionally, there was not enough evidence, with only two studies reporting on each, to assess surgical treatment or the effectiveness of GORD treatment in children."
}
]
|
query-laysum | 1322 | [
{
"role": "user",
"content": "Abstract: Background\nAnorexia nervosa is a psychological condition characterised by self-starvation and fear or wait gain or other body image disturbance. The first line of treatment is specific psychological therapy; however, there is no consensus on best practice for treating people who develop severe and enduring anorexia nervosa (SEAN). Notably, there is no universal definition of SEAN.\nObjectives\nTo evaluate the benefits and harms of specific psychological therapies for severe and enduring anorexia nervosa compared with other specific therapies, non-specific therapies, no treatment/waiting list, antidepressant medication, dietary counselling alone, or treatment as usual.\nSearch methods\nWe used standard, extensive Cochrane search methods. The last search date was 22 July 2022.\nSelection criteria\nWe included parallel randomised controlled trials (RCTs) of people (any age) with anorexia nervosa of at least three years' duration. Eligible experimental interventions were any specific psychological therapy for improved physical and psychological health in anorexia nervosa, conducted in any treatment setting with no restrictions in terms of number of sessions, modality, or duration of therapy. Eligible comparator interventions included any other specific psychological therapy for anorexia nervosa, non-specific psychological therapy for mental health disorders, no treatment or waiting list, antipsychotic treatment (with or without psychological therapy), antidepressant treatment (with or without psychological therapy), dietary counselling, and treatment as usual as defined by the individual trials.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Our primary outcomes were clinical improvement (weight restoration to within the normal weight range for participant sample) and treatment non-completion. Results were presented using the GRADE appraisal tool.\nMain results\nWe found two eligible studies, but only one study provided usable data. This was a parallel-group RCT of 63 adults with SEAN who had an illness duration of at least seven years. The trial compared outpatient cognitive behaviour therapy for SEAN (CBT-SEAN) with specialist supportive clinical management for SEAN (SSCM-SE) over eight months. It is unclear if there is any difference between the effect of CBT-SEAN versus SSCM-SE on clinical improvement at 12 months (risk ratio (RR) 1.42, 95% confidence interval (CI) 0.66 to 3.05) or treatment non-completion (RR 1.72, 95% CI 0.45 to 6.59). There were no reported data on adverse effects. The trial was at high risk of performance and detection bias.\nWe rated the GRADE level of evidence as very low-certainty for both primary outcomes, downgrading for imprecision and risk of bias concerns.\nAuthors' conclusions\nThis review reports evidence from one trial that evaluated CBT-SEAN versus SSCM-SE. There was very low-certainty evidence of little or no difference in clinical improvement and treatment non-completion between the two therapies. There is a need for larger high-quality trials to determine the benefits of specific psychological therapies for people with SEAN. These should take into account the duration of illness as well as participants' previous experience with evidence-based psychological therapy for anorexia nervosa.\n\nGiven the provided abstract, please respond to the following query: \"Who will be interested in this review?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with lived experience of SEAN and those who care for them, including healthcare providers, will be most interested in this review. People more broadly affected by anorexia nervosa and other eating disorders will also be interested."
}
]
|
query-laysum | 1323 | [
{
"role": "user",
"content": "Abstract: Background\nTai Chi, a systematic callisthenic exercise first developed in ancient China, involves a series of slow and rhythmic circular motions. It emphasises use of 'mind' or concentration to control breathing and circular body motions to facilitate flow of internal energy (i.e. 'qi') within the body. Normal flow of 'qi' is believed to be essential to sustain body homeostasis, ultimately leading to longevity. The effect of Tai Chi on balance and muscle strength in the elderly population has been reported; however, the effect of Tai Chi on dyspnoea, exercise capacity, pulmonary function and psychosocial status among people with chronic obstructive pulmonary disease (COPD) remains unclear.\nObjectives\n• To explore the effectiveness of Tai Chi in reducing dyspnoea and improving exercise capacity in people with COPD.\n• To determine the influence of Tai Chi on physiological and psychosocial functions among people with COPD.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register of trials (which included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Allied and Complementary Medicine Database (AMED) and PsycINFO); handsearched respiratory journals and meeting abstracts; and searched Chinese medical databases including Wanfang Data, Chinese Medical Current Contents (CMCC), Chinese Biomedical Database (CBM), China Journal Net (CJN) and China Medical Academic Conference (CMAC), from inception to September 2015. We checked the reference lists of all primary studies and review articles for relevant additional references.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing Tai Chi (Tai Chi alone or Tai Chi in addition to another intervention) versus control (usual care or another intervention identical to that used in the Tai Chi group) in people with COPD. Two independent review authors screened and selected studies.\nData collection and analysis\nTwo independent review authors extracted data from included studies and assessed risk of bias on the basis of suggested criteria listed in the Cochrane Handbook for Systematic Reviews of Interventions. We extracted post-programme data and entered them into RevMan software (version 5.3) for data synthesis and analysis.\nMain results\nWe included a total of 984 participants from 12 studies (23 references) in this analysis. We included only those involved in Tai Chi and the control group (i.e. 811 participants) in the final analysis. Study sample size ranged from 10 to 206, and mean age ranged from 61 to 74 years. Programmes lasted for six weeks to one year. All included studies were RCTs; three studies used allocation concealment, six reported blinded outcome assessors and three studies adopted an intention-to-treat approach to statistical analysis. No adverse events were reported. Quality of evidence of the outcomes ranged from very low to moderate.\nAnalysis was split into three comparisons: (1) Tai Chi versus usual care; (2) Tai Chi and breathing exercise versus breathing exercise alone; and (3) Tai Chi and exercise versus exercise alone.\nComparison of Tai Chi versus usual care revealed that Tai Chi demonstrated a longer six-minute walk distance (mean difference (MD) 29.64 metres, 95% confidence interval (CI) 10.52 to 48.77 metres; participants = 318; I2 = 59%) and better pulmonary function (i.e. 
forced expiratory volume in one second, MD 0.11 L, 95% CI 0.02 to 0.20 L; participants = 258; I2 = 0%) in post-programme data. However, the effects of Tai Chi in reducing dyspnoea level and improving quality of life remain inconclusive. Data are currently insufficient for evaluating the impact of Tai Chi on maximal exercise capacity, balance and muscle strength in people with COPD. Comparison of Tai Chi and other interventions (i.e. breathing exercise or exercise) versus other interventions shows no superiority and no additional effects on symptom improvement nor on physical and psychosocial outcomes with Tai Chi.\nAuthors' conclusions\nNo adverse events were reported, implying that Tai Chi is safe to practise in people with COPD. Evidence of very low to moderate quality suggests better functional capacity and pulmonary function in post-programme data for Tai Chi versus usual care. When Tai Chi in addition to other interventions was compared with other interventions alone, Tai Chi did not show superiority and showed no additional effects on symptoms nor on physical and psychosocial function improvement in people with COPD. With the diverse style and number of forms being adopted in different studies, the most beneficial protocol of Tai Chi style and number of forms could not be commented upon. Hence, future studies are warranted to address these topics.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included a total of 811 participants from 12 studies in the final analysis of this review. The number of participants in each study ranged from 10 to 206, and mean age ranged from 61 to 74 years. The programme lasted for six weeks to one year. Included studies adopted different styles and numerous forms of Tai Chi. The most commonly reported form is the simplified 24-form Yang-style Tai Chi."
}
]
|
query-laysum | 1324 | [
{
"role": "user",
"content": "Abstract: Background\nXpert MTB/RIF Ultra (Xpert Ultra) and Xpert MTB/RIF are World Health Organization (WHO)-recommended rapid nucleic acid amplification tests (NAATs) widely used for simultaneous detection of Mycobacterium tuberculosis complex and rifampicin resistance in sputum. To extend our previous review on extrapulmonary tuberculosis (Kohli 2018), we performed this update to inform updated WHO policy (WHO Consolidated Guidelines (Module 3) 2020).\nObjectives\nTo estimate diagnostic accuracy of Xpert Ultra and Xpert MTB/RIF for extrapulmonary tuberculosis and rifampicin resistance in adults with presumptive extrapulmonary tuberculosis.\nSearch methods\nCochrane Infectious Diseases Group Specialized Register, MEDLINE, Embase, Science Citation Index, Web of Science, Latin American Caribbean Health Sciences Literature, Scopus, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform, the International Standard Randomized Controlled Trial Number Registry, and ProQuest, 2 August 2019 and 28 January 2020 (Xpert Ultra studies), without language restriction.\nSelection criteria\nCross-sectional and cohort studies using non-respiratory specimens. Forms of extrapulmonary tuberculosis: tuberculous meningitis and pleural, lymph node, bone or joint, genitourinary, peritoneal, pericardial, disseminated tuberculosis. Reference standards were culture and a study-defined composite reference standard (tuberculosis detection); phenotypic drug susceptibility testing and line probe assays (rifampicin resistance detection).\nData collection and analysis\nTwo review authors independently extracted data and assessed risk of bias and applicability using QUADAS-2. For tuberculosis detection, we performed separate analyses by specimen type and reference standard using the bivariate model to estimate pooled sensitivity and specificity with 95% credible intervals (CrIs). We applied a latent class meta-analysis model to three forms of extrapulmonary tuberculosis. We assessed certainty of evidence using GRADE.\nMain results\n69 studies: 67 evaluated Xpert MTB/RIF and 11 evaluated Xpert Ultra, of which nine evaluated both tests. Most studies were conducted in China, India, South Africa, and Uganda. Overall, risk of bias was low for patient selection, index test, and flow and timing domains, and low (49%) or unclear (43%) for the reference standard domain. Applicability for the patient selection domain was unclear for most studies because we were unsure of the clinical settings.\nCerebrospinal fluid\nXpert Ultra (6 studies)\nXpert Ultra pooled sensitivity and specificity (95% CrI) against culture were 89.4% (79.1 to 95.6) (89 participants; low-certainty evidence) and 91.2% (83.2 to 95.7) (386 participants; moderate-certainty evidence). Of 1000 people where 100 have tuberculous meningitis, 168 would be Xpert Ultra-positive: of these, 79 (47%) would not have tuberculosis (false-positives) and 832 would be Xpert Ultra-negative: of these, 11 (1%) would have tuberculosis (false-negatives).\nXpert MTB/RIF (30 studies)\nXpert MTB/RIF pooled sensitivity and specificity against culture were 71.1% (62.8 to 79.1) (571 participants; moderate-certainty evidence) and 96.9% (95.4 to 98.0) (2824 participants; high-certainty evidence). 
Of 1000 people where 100 have tuberculous meningitis, 99 would be Xpert MTB/RIF-positive: of these, 28 (28%) would not have tuberculosis; and 901 would be Xpert MTB/RIF-negative: of these, 29 (3%) would have tuberculosis.\nPleural fluid\nXpert Ultra (4 studies)\nXpert Ultra pooled sensitivity and specificity against culture were 75.0% (58.0 to 86.4) (158 participants; very low-certainty evidence) and 87.0% (63.1 to 97.9) (240 participants; very low-certainty evidence). Of 1000 people where 100 have pleural tuberculosis, 192 would be Xpert Ultra-positive: of these, 117 (61%) would not have tuberculosis; and 808 would be Xpert Ultra-negative: of these, 25 (3%) would have tuberculosis.\nXpert MTB/RIF (25 studies)\nXpert MTB/RIF pooled sensitivity and specificity against culture were 49.5% (39.8 to 59.9) (644 participants; low-certainty evidence) and 98.9% (97.6 to 99.7) (2421 participants; high-certainty evidence). Of 1000 people where 100 have pleural tuberculosis, 60 would be Xpert MTB/RIF-positive: of these, 10 (17%) would not have tuberculosis; and 940 would be Xpert MTB/RIF-negative: of these, 50 (5%) would have tuberculosis.\nLymph node aspirate\nXpert Ultra (1 study)\nXpert Ultra sensitivity and specificity (95% confidence interval) against composite reference standard were 70% (51 to 85) (30 participants; very low-certainty evidence) and 100% (92 to 100) (43 participants; low-certainty evidence). Of 1000 people where 100 have lymph node tuberculosis, 70 would be Xpert Ultra-positive and 0 (0%) would not have tuberculosis; 930 would be Xpert Ultra-negative and 30 (3%) would have tuberculosis.\nXpert MTB/RIF (4 studies)\nXpert MTB/RIF pooled sensitivity and specificity against composite reference standard were 81.6% (61.9 to 93.3) (377 participants; low-certainty evidence) and 96.4% (91.3 to 98.6) (302 participants; low-certainty evidence). Of 1000 people where 100 have lymph node tuberculosis, 118 would be Xpert MTB/RIF-positive and 37 (31%) would not have tuberculosis; 882 would be Xpert MTB/RIF-negative and 19 (2%) would have tuberculosis.\nIn lymph node aspirate, Xpert MTB/RIF pooled specificity against culture was 86.2% (78.0 to 92.3), lower than that against a composite reference standard. Using the latent class model, Xpert MTB/RIF pooled specificity was 99.5% (99.1 to 99.7), similar to that observed with a composite reference standard.\nRifampicin resistance\nXpert Ultra (4 studies)\nXpert Ultra pooled sensitivity and specificity were 100.0% (95.1 to 100.0), (24 participants; low-certainty evidence) and 100.0% (99.0 to 100.0) (105 participants; moderate-certainty evidence). Of 1000 people where 100 have rifampicin resistance, 100 would be Xpert Ultra-positive (resistant): of these, zero (0%) would not have rifampicin resistance; and 900 would be Xpert Ultra-negative (susceptible): of these, zero (0%) would have rifampicin resistance.\nXpert MTB/RIF (19 studies)\nXpert MTB/RIF pooled sensitivity and specificity were 96.5% (91.9 to 98.8) (148 participants; high-certainty evidence) and 99.1% (98.0 to 99.7) (822 participants; high-certainty evidence). Of 1000 people where 100 have rifampicin resistance, 105 would be Xpert MTB/RIF-positive (resistant): of these, 8 (8%) would not have rifampicin resistance; and 895 would be Xpert MTB/RIF-negative (susceptible): of these, 3 (0.3%) would have rifampicin resistance.\nAuthors' conclusions\nXpert Ultra and Xpert MTB/RIF may be helpful in diagnosing extrapulmonary tuberculosis. 
Sensitivity varies across different extrapulmonary specimens, while for most specimens specificity is high, meaning the tests rarely yield a positive result for people without tuberculosis. For tuberculous meningitis, Xpert Ultra had higher sensitivity and lower specificity than Xpert MTB/RIF against culture. Xpert Ultra and Xpert MTB/RIF had similar sensitivity and specificity for rifampicin resistance. Future research should acknowledge the concern associated with culture as a reference standard in paucibacillary specimens and consider ways to address this limitation.\n\nGiven the provided abstract, please respond to the following query: \"Why is using Xpert tests for extrapulmonary tuberculosis important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tuberculosis is one of the top 10 causes of death worldwide. Tuberculosis mainly affects the lungs (pulmonary) but may occur in other parts of the body (extrapulmonary). When people receive proper and timely treatment, tuberculosis is usually curable. One problem involved in managing tuberculosis is that the bacteria become resistant to antibiotics. Not recognizing tuberculosis early may result in delayed diagnosis and treatment and increased illness and death. An incorrect tuberculosis diagnosis may result in increased anxiety and unnecessary treatment."
}
]
|
query-laysum | 1325 | [
{
"role": "user",
"content": "Abstract: Background\nParoxysmal nocturnal hemoglobinuria (PNH) is a chronic, not malignant, disease of the hematopoietic stem cells, associated with significant morbidity and mortality. It is a rare disease with an estimated incidence of 1.3 new cases per one million individuals per year. The treatment of PNH has been largely empirical and symptomatic, with blood transfusions, anticoagulation, and supplementation with folic acid or iron. Eculizumab, a biological agent that inhibits complement cascade, was developed for preventing hemolytic anemia and severe thrombotic episodes.\nObjectives\nTo assess the clinical benefits and harms of eculizumab for treating patients with paroxysmal nocturnal hemoglobinuria (PNH).\nSearch methods\nWe conducted a comprehensive search strategy. We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2014, Issue 5), Ovid MEDLINE (from 1946 to 15 May 2014), EMBASE (from 1980 to 25 June 2014), and LILACS (from 1982 to 25 June 2014). We did not apply any language restrictions.\nSelection criteria\nWe included randomized controlled trials (RCTs) irrespective of their publication status or language. No limits were applied with respect to period of follow-up. We excluded quasi-RCTs. We included trials comparing eculizumab with placebo or best available therapy. We included any patient with a confirmed diagnosis of PNH. Primary outcome was overall survival.\nData collection and analysis\nWe independently performed a duplicate selection of eligible trials, risk of bias assessment, and data extraction. We estimated risk ratios (RRs) and 95% confidence interval (CIs) for dichotomous outcomes, and mean differences (MDs) and 95% CIs for continuous outcomes. We used a random-effects model for analysis.\nMain results\nWe identified one multicenter (34 sites) phase III RCT involving 87 participants. The trial compared eculizumab versus placebo, and was conducted in the US, Canada, Europe, and Australia with 26 weeks of follow-up. This small trial had high risk of bias in many domains (attrition and selective reporting). It was sponsored by a pharmaceutical company. No patients died during the study. By using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (scores can range from 0 to 100, with higher scores on the global health status and functioning scales indicating improvement), the trial showed improvement in health-related quality of life in patients treated with eculizumab (mean difference (MD) 19.4, 95% CI 8.25 to 30.55; P = 0.0007; low quality of evidence). By using the Functional Assessment of Chronic Illness Therapy Fatigue instrument (scores can range from 0 to 52, with higher scores indicating improvement in fatigue), the trial showed a reduction in fatigue (MD 10.4, 95% CI 9.97 to 10.83; P = 0.00001; moderate quality of evidence) in the eculizumab group compared with placebo. Eculizumab compared with placebo showed a greater proportion of patients with transfusion independence: 51% (22/43) versus 0% (0/44); risk ratio (RR) 46.02, 95% CI 2.88 to 735.53; P = 0.007; moderate quality of evidence; and withdrawal for any reason: 4.7% (2/43) versus 22.72% (10/44); RR 0.20, 95% CI 0.05 to 0.88; P = 0.03; moderate quality of evidence. 
Due to the low rate of events observed, the included trial did not show any difference between eculizumab and placebo in terms of serious adverse events: 9.3% (4/43) versus 20.4% (9/44); RR 0.45, 95% CI 0.15 to 1.37; P = 0.16; low quality of evidence. We did not observe any difference between intervention and placebo for the most frequent adverse events. One participant receiving placebo showed an episode of thrombosis. The trial did not assess overall survival, transformation to myelodysplastic syndrome and acute myelogenous leukemia, or development or recurrence of aplastic anemia on treatment.\nAuthors' conclusions\nThis review has detected an absence of evidence for eculizumab compared with placebo for treating paroxysmal nocturnal hemoglobinuria (PNH), in terms of overall survival, nonfatal thrombotic events, transformation to myelodysplastic syndrome and acute myelogenous leukemia, and development and recurrence of aplastic anemia on treatment. Current evidence indicates that compared with placebo, eculizumab increases health-related quality of life and increases transfusion independence. During the execution of the included trial, no patients died. Furthermore, the intervention seems to reduce fatigue and withdrawals for any reason. The safety profile of eculizumab is unclear. These conclusions are based on one small trial with risk of attrition and selective reporting bias.\nTherefore, prescription of eculizumab for treating patients with PNH can neither be supported nor rejected, unless new evidence from a large, high-quality trial alters this conclusion. We therefore urge the reader to interpret the trial results with much caution. Future trials on this issue should be conducted according to the SPIRIT statement and reported according to the CONSORT statement by independent investigators, and using the Foundation of Patient-Centered Outcomes Research recommendations.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified one study that included a limited number of patients comparing eculizumab with placebo. The study was published in 2006, and was conducted in the US, Canada, Europe, and Australia with 26 weeks of follow-up."
}
]
|
query-laysum | 1326 | [
{
"role": "user",
"content": "Abstract: Background\nHealth insurance has the potential to improve access to health care and protect people from the financial risks of diseases. However, health insurance coverage is often low, particularly for people most in need of protection, including children and other vulnerable populations.\nObjectives\nTo assess the effectiveness of strategies for expanding health insurance coverage in vulnerable populations.\nSearch methods\nWe searched Cochrane Central Register of Controlled Trials (CENTRAL), part of The Cochrane Library. www.thecochranelibrary.com (searched 2 November 2012), PubMed (searched 1 November 2012), EMBASE (searched 6 July 2012), Global Health (searched 6 July 2012), IBSS (searched 6 July 2012), WHO Library Database (WHOLIS) (searched 1 November 2012), IDEAS (searched 1 November 2012), ISI-Proceedings (searched 1 November 2012),OpenGrey (changed from OpenSIGLE) (searched 1 November 2012), African Index Medicus (searched 1 November 2012), BLDS (searched 1 November 2012), Econlit (searched 1 November 2012), ELDIS (searched 1 November 2012), ERIC (searched 1 November 2012), HERDIN NeON Database (searched 1 November 2012), IndMED (searched 1 November 2012), JSTOR (searched 1 November 2012), LILACS(searched 1 November 2012), NTIS (searched 1 November 2012), PAIS (searched 6 July 2012), Popline (searched 1 November 2012), ProQuest Dissertation &Theses Database (searched 1 November 2012), PsycINFO (searched 6 July 2012), SSRN (searched 1 November 2012), Thai Index Medicus (searched 1 November 2012), World Bank (searched 2 November 2012), WanFang (searched 3 November 2012), China National Knowledge Infrastructure (CHKD-CNKI) (searched 2 November 2012).\nIn addition, we searched the reference lists of included studies and carried out a citation search for the included studies via Web of Science to find other potentially relevant studies.\nSelection criteria\nRandomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after (CBA) studies and Interrupted time series (ITS) studies that evaluated the effects of strategies on increasing health insurance coverage for vulnerable populations. We defined strategies as measures to improve the enrolment of vulnerable populations into health insurance schemes. Two categories and six specified strategies were identified as the interventions.\nData collection and analysis\nAt least two review authors independently extracted data and assessed the risk of bias. We undertook a structured synthesis.\nMain results\nWe included two studies, both from the United States. People offered health insurance information and application support by community-based case managers were probably more likely to enrol their children into health insurance programmes (risk ratio (RR) 1.68, 95% confidence interval (CI) 1.44 to 1.96, moderate quality evidence) and were probably more likely to continue insuring their children (RR 2.59, 95% CI 1.95 to 3.44, moderate quality evidence). 
Of all the children who were insured, those in the intervention group may have been insured more quickly (47.3 fewer days, 95% CI 20.6 to 74.0 fewer days, low quality evidence) and parents may have been more satisfied on average (satisfaction score average difference 1.07, 95% CI 0.72 to 1.42, low quality evidence).\nIn the second study, applications were handed out in emergency departments at hospitals, compared to not handing out applications, and may have had an effect on enrolment (RR 1.5, 95% CI 1.03 to 2.18, low quality evidence).\nAuthors' conclusions\nCommunity-based case managers who provide health insurance information and application support, and who negotiate with the insurer, probably increase enrolment of children in health insurance schemes. However, the transferability of this intervention to other populations or other settings is uncertain. Handing out insurance application materials in hospital emergency departments may help increase the enrolment of children in health insurance schemes. Further studies evaluating the effectiveness of different strategies for expanding health insurance coverage in vulnerable populations are needed in different settings, with careful attention given to study design.\n\nGiven the provided abstract, please respond to the following query: \"What this research says\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Both studies in this review took place in the USA and were aimed at uninsured children. In the first study, case managers contacted the families of uninsured Latin American children, gave them information about health insurance, helped them apply, and helped them appeal when a wrong decision was made. In the second study, insurance application forms were handed out to the families of children visiting hospital emergency departments. In both studies, these families were compared to families who were not given additional information or support. The studies showed the following:"
}
]
|
query-laysum | 1327 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis is a genetic disorder in which abnormal mucus in the lungs is associated with susceptibility to persistent infection. Pulmonary exacerbations are when symptoms of infection become more severe. Antibiotics are an essential part of treatment for exacerbations and inhaled antibiotics may be used alone or in conjunction with oral antibiotics for milder exacerbations or with intravenous antibiotics for more severe infections. Inhaled antibiotics do not cause the same adverse effects as intravenous antibiotics and may prove an alternative in people with poor access to their veins. This is an update of a previously published review.\nObjectives\nTo determine if treatment of pulmonary exacerbations with inhaled antibiotics in people with cystic fibrosis improves their quality of life, reduces time off school or work, and improves their long-term lung function.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Group's Cystic Fibrosis Trials Register. Date of the last search: 7 March 2022.\nWe also searched ClinicalTrials.gov, the Australia and New Zealand Clinical Trials Registry and WHO ICTRP for relevant trials. Date of last search: 3 May 2022.\nSelection criteria\nRandomised controlled trials in people with cystic fibrosis with a pulmonary exacerbation in whom treatment with inhaled antibiotics was compared to placebo, standard treatment or another inhaled antibiotic for between one and four weeks.\nData collection and analysis\nTwo review authors independently selected eligible trials, assessed the risk of bias in each trial and extracted data. They assessed the certainty of the evidence using the GRADE criteria. Authors of the included trials were contacted for more information.\nMain results\nFive trials with 183 participants are included in the review. Two trials (77 participants) compared inhaled antibiotics alone to intravenous antibiotics alone and three trials (106 participants) compared a combination of inhaled and intravenous antibiotics to intravenous antibiotics alone. Trials were heterogenous in design and two were only available in abstract form. Risk of bias was difficult to assess in most trials but, for four out of five trials, we judged there to be a high risk from lack of blinding and an unclear risk with regards to randomisation. Results were not fully reported and only limited data were available for analysis. One trial was a cross-over design and we only included data from the first intervention arm.\nInhaled antibiotics alone versus intravenous antibiotics alone\nOnly one trial (18 participants) reported a perceived improvement in lifestyle (quality of life) in both groups (very low-certainty evidence). Neither trial reported on time off work or school. Both trials measured lung function, but there was no difference reported between treatment groups (very low-certainty evidence). With regards to our secondary outcomes, one trial (18 participants) reported no difference in the need for additional antibiotics and the second trial (59 participants) reported on the time to next exacerbation. In neither case was a difference between treatments identified (both very low-certainty evidence). 
The single trial (18 participants) measuring adverse events and sputum microbiology did not observe any in either treatment group for either outcome (very low-certainty evidence).\nInhaled antibiotics plus intravenous antibiotics versus intravenous antibiotics alone\nInhaled antibiotics plus intravenous antibiotics may make little or no difference to quality of life compared to intravenous antibiotics alone. None of the trials reported time off work or school. All three trials measured lung function, but found no difference between groups in forced expiratory volume in one second (two trials; 44 participants; very low-certainty evidence) or vital capacity (one trial; 62 participants). None of the trials reported on the need for additional antibiotics. Inhaled plus intravenous antibiotics may make little difference to the time to next exacerbation; however, one trial (28 participants) reported on hospital admissions and found no difference between groups. There is likely no difference between groups in adverse events (very low-certainty evidence) and one trial (62 participants) reported no difference in the emergence of antibiotic-resistant organisms (very low-certainty evidence).\nAuthors' conclusions\nWe identified only low- or very low-certainty evidence to judge the effectiveness of inhaled antibiotics for the treatment of pulmonary exacerbations in people with cystic fibrosis. The included trials were not sufficiently powered to achieve their goals. Hence, we are unable to demonstrate whether one treatment was superior to the other or not. Further research is needed to establish whether inhaled tobramycin may be used as an alternative to intravenous tobramycin for some pulmonary exacerbations.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Inhaled antibiotics alone versus intravenous antibiotics alone\nOne trial (18 participants) reported a perceived improvement in lifestyle in both groups, but neither trial reported on time off work or school. Both trials measured lung function, but neither reported any difference between treatment groups. One trial (18 participants) reported no difference in the need for additional antibiotics and the second trial (59 participants) reported on the length of time until the next exacerbation - there was no difference between inhaled or intravenous antibiotics for either outcome. Only one trial (18 participants) measured adverse events and sputum microbiology, but did not find any difference between treatments for either outcome.\nInhaled antibiotics plus intravenous antibiotics versus intravenous antibiotics alone\nOne trial found that there is likely no difference in quality of life between treatments. All of the trials reported lung function, but found no difference between groups. One trial showed there to be little or no difference in the time until the next exacerbation. None of the trials reported on the need for additional antibiotics; however, one trial (28 participants) found no difference in the number of hospital admissions between groups. There may be no difference between groups in adverse events and one trial (62 participants) reported no difference in the appearance of antibiotic-resistant organisms."
}
]
|
query-laysum | 1328 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with a serious mental illness are more likely to smoke more and to be more dependent smokers than the general population. This may be due to a wide range of factors that could include a common aetiology to both smoking and the illness, self medication, smoking to alleviate adverse effects of medications, boredom in the existing environment, or a combination of these factors. It is important to undertake this review to facilitate improvements in both the health and safety of people with serious mental illness who smoke, and to reduce the overall burden of costs (both financial and health) to the smoker and, eventually, to the taxpayer.\nObjectives\nTo review the effects of smoking cessation advice for people with serious mental illness.\nSearch methods\nWe searched the Cochrane Schizophrenia Group Specialized Trials Register up to 2 April 2015, which is based on regular searches of CENTRAL, BIOSIS, PubMed, MEDLINE, EMBASE, CINAHL, PsycINFO, and trial registries. We also undertook unsystematic searches of a sample of the component databases (BNI, CINHAL, EMBASE, MEDLINE, and PsycINFO), up to 2 April 2015, and searched references of all identified studies\nSelection criteria\nWe planned to include all randomised controlled trials (RCTs) that focussed on smoking cessation advice versus standard care or comparing smoking cessation advice with other more focussed methods of delivering care or information.\nData collection and analysis\nThe review authors (PK, AC, and DB) independently screened search results but did not identify any trials that fulfilled the inclusion criteria of this review.\nMain results\nWe did not identify any RCTs that evaluated advice regarding smoking cessation for people with serious mental illness. The excluded studies illustrate that randomisation of packages of care relevant to smokers with serious mental illness is possible.\nAuthors' conclusions\nPeople with serious mental illness are more likely to smoke than the general population. Yet we could not find any high quality evidence to guide the smoking cessation advice healthcare professionals pass onto service users. This is an area where trials are possible and needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Good quality Evidence relevant to smokers with serious mental illness is much needed, important. It is possible if a large, high quality trial is undertaken and gathers relevant, high quality evidence on advice to help people stop smoking.\nBen Gray, Senior Peer Researcher, McPin Foundation (http://mcpin.org/)."
}
]
|
query-laysum | 1329 | [
{
"role": "user",
"content": "Abstract: Background\nAlcohol use and misuse in young people is a major risk behaviour for mortality and morbidity. Motivational interviewing (MI) is a popular technique for addressing excessive drinking in young adults.\nObjectives\nTo assess the effects of motivational interviewing (MI) interventions for preventing alcohol misuse and alcohol-related problems in young adults.\nSearch methods\nWe identified relevant evidence from the Cochrane Central Register of Controlled Trials (CENTRAL) (2015, Issue 12), MEDLINE (January 1966 to July 2015), EMBASE (January 1988 to July 2015), and PsycINFO (1985 to July 2015). We also searched clinical trial registers and handsearched references of topic-related systematic reviews and the included studies.\nSelection criteria\nWe included randomised controlled trials in young adults up to the age of 25 years comparing MIs for prevention of alcohol misuse and alcohol-related problems with no intervention, assessment only or alternative interventions for preventing alcohol misuse and alcohol-related problems.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane.\nMain results\nWe included a total of 84 trials (22,872 participants), with 70/84 studies reporting interventions in higher risk individuals or settings. Studies with follow-up periods of at least four months were of more interest in assessing the sustainability of intervention effects and were also less susceptible to short-term reporting or publication bias. Overall, the risk of bias assessment showed that these studies provided moderate or low quality evidence.\nAt four or more months follow-up, we found effects in favour of MI for the quantity of alcohol consumed (standardised mean difference (SMD) −0.11, 95% confidence interval (CI) −0.15 to −0.06 or a reduction from 13.7 drinks/week to 12.5 drinks/week; moderate quality evidence); frequency of alcohol consumption (SMD −0.14, 95% CI −0.21 to −0.07 or a reduction in the number of days/week alcohol was consumed from 2.74 days to 2.52 days; moderate quality evidence); and peak blood alcohol concentration, or BAC (SMD −0.12, 95% CI −0.20 to 0.05, or a reduction from 0.144% to 0.131%; moderate quality evidence).\nWe found a marginal effect in favour of MI for alcohol problems (SMD −0.08, 95% CI −0.17 to 0.00 or a reduction in an alcohol problems scale score from 8.91 to 8.18; low quality evidence) and no effects for binge drinking (SMD −0.04, 95% CI −0.09 to 0.02, moderate quality evidence) or for average BAC (SMD −0.05, 95% CI −0.18 to 0.08; moderate quality evidence). We also considered other alcohol-related behavioural outcomes, and at four or more months follow-up, we found no effects on drink-driving (SMD −0.13, 95% CI −0.36 to 0.10; moderate quality of evidence) or other alcohol-related risky behaviour (SMD −0.15, 95% CI −0.31 to 0.01; moderate quality evidence).\nFurther analyses showed that there was no clear relationship between the duration of the MI intervention (in minutes) and effect size. Subgroup analyses revealed no clear subgroup effects for longer-term outcomes (four or more months) for assessment only versus alternative intervention controls; for university/college vs other settings; or for higher risk vs all/low risk participants.\nNone of the studies reported harms related to MI.\nAuthors' conclusions\nThe results of this review indicate that there are no substantive, meaningful benefits of MI interventions for preventing alcohol use, misuse or alcohol-related problems. 
Although we found some statistically significant effects, the effect sizes were too small, given the measurement scales used in the included studies, to be of relevance to policy or practice. Moreover, the statistically significant effects are not consistent for all misuse measures, and the quality of evidence is not strong, implying that any effects could be inflated by risk of bias.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, there is only low or moderate quality evidence for the effects found in this review. Many of the studies did not adequately describe how young people were allocated to the study groups or how they concealed the group allocation to participants and personnel. Study drop-outs were also an issue in many studies. These problems with study quality could result in inflated estimates of MI effects, so we cannot rule out the possibility that any slight effects observed in this review are overstated.\nThe US National Institutes of Health provided funding for half (42/84) of the studies included in this review. Twenty-nine studies provided no information about funding, and only eight papers had a clear conflict of interest statement."
}
]
|
query-laysum | 1330 | [
{
"role": "user",
"content": "Abstract: Background\nAcute respiratory failure is a common life-threatening complication of acute onset neuromuscular diseases, and may exacerbate chronic hypoventilation in patients with neuromuscular disease or chest wall disorders. Standard management includes oxygen supplementation, physiotherapy, cough assistance, and, whenever needed, antibiotics and intermittent positive pressure ventilation. Non-invasive mechanical ventilation (NIV) via nasal, buccal or full-face devices has become routine practice in many centres.\nObjectives\nThe primary objective of this review was to compare the efficacy of non-invasive ventilation with invasive ventilation in improving short-term survival in acute respiratory failure in people with neuromuscular disease and chest wall disorders. The secondary objectives were to compare the effects of NIV with those of invasive mechanical ventilation on improvement in arterial blood gas after 24 hours and lung function measurements after one month, incidence of barotrauma and ventilator-associated pneumonia, duration of mechanical ventilation, length of stay in the intensive care unit and length of hospital stay.\nSearch methods\nWe searched the following databases on 11 September 2017: the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE and Embase. We also searched conference proceedings and clinical trials registries.\nSelection criteria\nWe planned to include randomised or quasi-randomised trials with or without blinding. We planned to include trials performed in children or adults with acute onset neuromuscular diseases or chronic neuromuscular disease or chest wall disorders presenting with acute respiratory failure that compared the benefits and risks of invasive ventilation versus NIV.\nData collection and analysis\nTwo review authors reviewed searches and independently selected studies for assessment. We planned to follow standard Cochrane methodology for data collection and analysis.\nMain results\nWe did not identify any trials eligible for inclusion in the review.\nAuthors' conclusions\nAcute respiratory failure is a life-threatening complication of acute onset neuromuscular disease and of chronic neuromuscular disease and chest wall disorders. We found no randomised trials on which to elaborate evidence-based practice for the use of non-invasive versus invasive mechanical ventilation. For researchers, there is a need to design and conduct new randomised trials to compare NIV with invasive ventilation in acute neuromuscular respiratory failure. These trials should anticipate variations in treatment responses according to disease condition (acute onset versus acute exacerbation on chronic neuromuscular diseases) and according to the presence or absence of bulbar dysfunction.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date to 11 September 2017."
}
]
|
query-laysum | 1331 | [
{
"role": "user",
"content": "Abstract: Background\nPartial nephrectomy and radical nephrectomy are the relevant surgical therapy options for localised renal cell carcinoma. However, debate regarding the effects of these surgical approaches continues and it is important to identify and summarise high-quality studies to make surgical treatment recommendations.\nObjectives\nTo assess the effects of partial nephrectomy compared with radical nephrectomy for clinically localised renal cell carcinoma.\nSearch methods\nWe searched CENTRAL, MEDLINE, PubMed, Embase, Web of Science, BIOSIS, LILACS, Scopus, two trial registries and abstracts from three major conferences to 24 February 2017, together with reference lists; and contacted selected experts in the field.\nSelection criteria\nWe included a randomised controlled trial comparing partial and radical nephrectomy for participants with small renal masses.\nData collection and analysis\nOne review author screened all of the titles and abstracts; only citations that were clearly irrelevant were excluded at this stage. Next, two review authors independently assessed full-text reports, identified relevant studies, evaluated the eligibility of the studies for inclusion, assessed trial quality and extracted data. The update of the literature search was performed by two independent review authors. We used Review Manager 5 for data synthesis and data analyses.\nMain results\nWe identified one randomised controlled trial including 541 participants that compared partial nephrectomy to radical nephrectomy. The median follow-up was 9.3 years.\nBased on low quality evidence, we found that time-to-death of any cause was decreased using partial nephrectomy (HR 1.50, 95% CI 1.03 to 2.18). This corresponds to 79 more deaths (5 more to 173 more) per 1000. Also based on low quality evidence, we found no difference in serious adverse events (RR 2.04, 95% CI 0.19 to 22.34). Findings are consistent with 4 more surgery-related deaths (3 fewer to 78 more) per 1000.\nBased on low quality evidence, we found no difference in time-to-recurrence (HR 1.37, 95% CI 0.58 to 3.24). This corresponds to 12 more recurrences (14 fewer to 70 more) per 1000. Due to the nature of reporting, we were unable to analyse overall rates for immediate and long-term adverse events. We found no evidence on haemodialysis or quality of life.\nReasons for downgrading related to study limitations (lack of blinding, cross-over), imprecision and indirectness (a substantial proportion of patients were ultimately found not to have a malignant tumour). Based on the finding of a single trial, we were unable to conduct any subgroup or sensitivity analyses.\nAuthors' conclusions\nPartial nephrectomy may be associated with a decreased time-to-death of any cause. With regards to surgery-related mortality, cancer-specific survival and time-to-recurrence, partial nephrectomy appears to result in little to no difference.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Participants who had only the tumour taken out appear to be more likely to die from any cause than participants that had the tumour and the whole kidney taken out. There appeared to be little to no difference in the time until the tumour comes back or in the risk of serious complications resulting in death. We did not find any evidence as to how the groups compared when it comes to the need for haemodialysis or how their quality of life compared."
}
]
|
query-laysum | 1332 | [
{
"role": "user",
"content": "Abstract: Background\nSystemic lupus erythematosus (SLE) is a rare, chronic autoimmune inflammatory disease with a prevalence varying from 4.3 to 150 people in 100,000, or approximately five million people worldwide. Systemic manifestations frequently include internal organ involvement, a characteristic malar rash on the face, pain in joints and muscles, and profound fatigue. Exercise is purported to be beneficial for people with SLE. For this review, we focused on studies that examined all types of structured exercise as an adjunctive therapy in the management of SLE.\nObjectives\nTo evaluate the benefits and harms of structured exercise as adjunctive therapy for adults with SLE compared with usual pharmacological care, usual pharmacological care plus placebo and usual pharmacological care plus non-pharmacological care.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 30 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) of exercise as an adjunct to usual pharmacological treatment in SLE compared with placebo, usual pharmacological care alone and another non-pharmacological treatment. Major outcomes were fatigue, functional capacity, disease activity, quality of life, pain, serious adverse events, and withdrawals due to any reason, including any adverse events.\nData collection and analysis\nWe used standard Cochrane methods. Our major outcomes were 1. fatigue, 2. functional capacity, 3. disease activity, 4. quality of life, 5. pain, 6. serious adverse events, and 7. withdrawals due to any reason. Our minor outcomes were 8. responder rate, 9. aerobic fitness, 10. depression, and 11. anxiety. We used GRADE to assess certainty of evidence. The primary comparison was exercise compared with placebo.\nMain results\nWe included 13 studies (540 participants) in this review. Studies compared exercise as an adjunct to usual pharmacological care (antimalarials, immunosuppressants, and oral glucocorticoids) with usual pharmacological care plus placebo (one study); usual pharmacological care (six studies); and another non-pharmacological treatment such as relaxation therapy (seven studies). Most studies had selection bias, and all studies had performance and detection bias. We downgraded the evidence for all comparisons because of a high risk of bias and imprecision.\nExercise plus usual pharmacological care versus placebo plus usual pharmacological care\nEvidence from a single small study (17 participants) that compared whole body vibration exercise to whole body placebo vibration exercise (vibrations switched off) indicated that exercise may have little to no effect on fatigue, functional capacity, and pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). The study did not report disease activity, quality of life, and serious adverse events.\nThe study measured fatigue using the self-reported Functional Assessment of Chronic Illness Therapy – Fatigue (FACIT-Fatigue), scale 0 to 52; lower score means less fatigue. People who did not exercise rated their fatigue at 38 points and those who did exercise rated their fatigue at 33 points (mean difference (MD) 5 points lower, 95% confidence interval (CI) 13.29 lower to 3.29 higher).\nThe study measured functional capacity using the self-reported 36-item Short Form health questionnaire (SF-36) Physical Function domain, scale 0 to 100; higher score means better function. 
People who did not exercise rated their functional capacity at 70 points and those who did exercise rated their functional capacity at 67.5 points (MD 2.5 points lower, 95% CI 23.78 lower to 18.78 higher).\nThe study measured pain using the SF-36 Pain domain, scale 0 to 100; lower scores mean less pain. People who did not exercise rated their pain at 43 points and those who did exercise rated their pain at 34 points (MD 9 points lower, 95% CI 28.88 lower to 10.88 higher).\nMore participants from the exercise group (3/11, 27%) withdrew from the study than the placebo group (1/10, 10%) (risk ratio (RR) 2.73, 95% CI 0.34 to 22.16).\nExercise plus usual pharmacological care versus usual pharmacological care alone\nThe addition of exercise to usual pharmacological care may have little to no effect on fatigue, functional capacity, and disease activity (low-certainty evidence). We are uncertain whether the addition of exercise improves pain (very low-certainty evidence), or results in fewer or more withdrawals (very low-certainty evidence). Serious adverse events and quality of life were not reported.\nExercise plus usual care versus another non-pharmacological intervention such as receiving information about the disease or relaxation therapy\nCompared with education or relaxation therapy, exercise may reduce fatigue slightly (low-certainty evidence), may improve functional capacity (low-certainty evidence), probably results in little to no difference in disease activity (moderate-certainty evidence), and may result in little to no difference in pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). Quality of life and serious adverse events were not reported.\nAuthors' conclusions\nDue to low- to very low-certainty evidence, we are not confident on the benefits of exercise on fatigue, functional capacity, disease activity, and pain, compared with placebo, usual care, or advice and relaxation therapy. Harms data were not well reported.\n\nGiven the provided abstract, please respond to the following query: \"What is systemic lupus erythematosus?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "SLE (or 'lupus') is a disease in which the body's immune (defence) system mistakenly attacks healthy tissue in many parts of the body. It is a long-term disease (one that lasts longer than six weeks and is usually life-long). Often, SLE causes pain in joints and muscles, and extreme tiredness. Symptoms can improve temporarily, or worsen suddenly (flares)."
}
]
|
query-laysum | 1333 | [
{
"role": "user",
"content": "Abstract: Background\nOpportunistic fungal infections are a major cause of morbidity and mortality in neutropenic cancer patients and antifungal therapy is used both empirically and therapeutically in these patients.\nObjectives\nTo compare the benefits and harms of voriconazole with those of amphotericin B and fluconazole when used for prevention or treatment of invasive fungal infections in cancer patients with neutropenia.\nSearch methods\nCochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library (2014, Issue 1 2014), MEDLINE (to January 2014). Letters, abstracts and unpublished trials were accepted. Contact was made with trial authors and industry.\nSelection criteria\nRandomised clinical trials comparing voriconazole with amphotericin B or fluconazole.\nData collection and analysis\nData on mortality, invasive fungal infection, colonisation, use of additional (escape) antifungal therapy and adverse effects leading to discontinuation of therapy were extracted independently by two review authors.\nMain results\nThree trials were included. One trial compared voriconazole to liposomal amphotericin B as empirical treatment of fever of unknown origin (suspected fungal infection) in neutropenic cancer patients (849 patients, 58 deaths). The second trial compared voriconazole to amphotericin B deoxycholate in the treatment of confirmed and presumed invasive Aspergillus infections (391 patients, 98 deaths). The third trial compared fluconazole to voriconazole for prophylaxis of fungal infections in patients receiving allogeneic stem cell transplantation (600 patients, number of deaths not stated). In the first trial, voriconazole was significantly inferior to liposomal amphotericin B according to the trial authors' prespecified criteria. More patients died in the voriconazole group and a claimed significant reduction in the number of breakthrough fungal infections disappeared when patients arbitrarily excluded from the analysis by the trial authors were included. In the second trial, the deoxycholate preparation of amphotericin B was used without any indication of the use of premedication to counter side effects and replacement of electrolytes or use of salt water. This choice of comparator resulted in a marked difference in the duration of treatment on the trial drugs (77 days with voriconazole versus 10 days with amphotericin B) and precluded meaningful comparisons of the benefits and harms of the two drugs. The third trial failed to find a difference in fungal free survival or invasive fungal infections at 180 days when voriconazole was compared to fluconazole.\nAuthors' conclusions\nLiposomal amphotericin B is significantly more effective than voriconazole for empirical therapy of fungal infections in neutropenic cancer patients and should be preferred. For treatment of aspergillosis, there are no trials that have compared voriconazole with amphotericin B given under optimal conditions. For prophylactic fungal treatment in patients receiving allogeneic stem cell transplantation, there was no difference between voriconazole and fluconazole regarding fungal free survival or invasive fungal infections.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The first and second trial were seriously misleading. The first trial analysed the results of the study in a different way from that originally planned, which favoured the study drug voriconazole. The second study compared voriconazole to a drug (liposomal amphotericin B) that was given at substandard dose, which means the results of the study are not meaningful. The third study should have presented how many patients died but did not."
}
]
|
query-laysum | 1334 | [
{
"role": "user",
"content": "Abstract: Background\nIn-hospital growth of preterm infants remains a challenge in clinical practice. The high nutrient demands of preterm infants often lead to growth faltering. For preterm infants who cannot be fed maternal or donor breast milk or may require supplementation, preterm formulas with fat in the form of medium chain triglycerides (MCTs) or long chain triglycerides (LCTs) may be chosen to support nutrient utilization and to improve growth. MCTs are easily accessible to the preterm infant with an immature digestive system, and LCTs are beneficial for central nervous system development and visual function. Both have been incorporated into preterm formulas in varying amounts, but their effects on the preterm infant's short-term growth remain unclear. This is an update of a review originally published in 2002, then in 2007.\nObjectives\nTo determine the effects of formula containing high as opposed to low MCTs on early growth in preterm infants fed a diet consisting primarily of formula.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 8), in the Cochrane Library; Ovid MEDLINE Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily, and Ovid MEDLINE(R); MEDLINE via PubMed for the previous year; and Cumulative Index to Nursing and Allied Health Literature (CINAHL), on 16 September 2020. We also searched clinical trials databases and the reference lists of retrieved articles for randomized controlled trials (RCTs) and quasi-RCTs.\nSelection criteria\nWe included all randomized and quasi-randomized trials comparing the effects of feeding high versus low MCT formula (for a minimum of five days) on the short-term growth of preterm (< 37 weeks' gestation) infants. We defined high MCT formula as 30% or more by weight, and low MCT formula as less than 30% by weight. The infants must be on full enteral diets, and the allocated formula must be the predominant source of nutrition.\nData collection and analysis\nThe review authors assessed each study's quality and extracted data on growth parameters as well as adverse effects from included studies. All data used in analysis were continuous; therefore, mean differences with 95% confidence intervals were reported. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe identified 10 eligible trials (253 infants) and extracted relevant growth data from 7 of these trials (136 infants). These studies were found to provide evidence of very low to low certainty. Risk of bias was noted, as few studies described specific methods for random sequence generation, allocation concealment, or blinding. We found no evidence of differences in short-term growth parameters when high and low MCT formulas were compared.\nAs compared to low MCT formula, preterm infants fed high MCT formula showed little to no difference in weight gain velocity (g/kg/d) during the intervention, with a typical mean difference (MD) of -0.21 g/kg/d (95% confidence interval (CI) -1.24 to 0.83; 6 studies, 118 infants; low-certainty evidence). The analysis for weight gain (g/d) did not show evidence of differences, with an MD of 0.00 g/d (95% CI -5.93 to 5.93; 1 study, 18 infants; very low-certainty evidence), finding an average weight gain of 20 ± 5.9 versus 20 ± 6.9 g/d for high and low MCT groups, respectively. 
We found that length gain showed no difference between low and high MCT formulas, with a typical MD of 0.10 cm/week (95% CI -0.09 to 0.29; 3 studies, 61 infants; very low-certainty evidence). Head circumference gain also showed little to no difference during the intervention period, with an MD of -0.04 cm/week (95% CI -0.17 to 0.09; 3 studies, 61 infants; low-certainty evidence). Two studies reported skinfold thickness with different measurement definitions, and evidence was insufficient to determine if there was a difference (2 studies, 32 infants; very low-certainty evidence). There are conflicting data (5 studies) as to formula tolerance, with 4 studies reporting narrative results of no observed clinical difference and 1 study reporting higher incidence of signs of gastrointestinal intolerance in high MCT formula groups. There is no evidence of effect on the incidence of necrotizing enterocolitis (NEC), based on small numbers in two trials. Review authors found no studies addressing long-term growth parameters or neurodevelopmental outcomes.\nAuthors' conclusions\nWe found evidence of very low to low certainty suggesting no differences among short-term growth data for infants fed low versus high MCT formulas. Due to lack of evidence and uncertainty, neither formula type could be concluded to improve short-term growth outcomes or have fewer adverse effects. Further studies are necessary because the results from included studies are imprecise due to small numbers and do not address important long-term outcomes. Additional research should aim to clarify effects on formula tolerance and on long-term growth and neurodevelopmental outcomes, and should include larger study populations to better evaluate effect on NEC incidence.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How does high versus low medium chain triglyceride content of formula impact short-term growth of preterm infants?"
}
]
|
query-laysum | 1335 | [
{
"role": "user",
"content": "Abstract: Background\nOver 35 million people are estimated to be living with dementia in the world and the societal costs are very high. Case management is a widely used and strongly promoted complex intervention for organising and co-ordinating care at the level of the individual, with the aim of providing long-term care for people with dementia in the community as an alternative to early admission to a care home or hospital.\nObjectives\nTo evaluate the effectiveness of case management approaches to home support for people with dementia, from the perspective of the different people involved (patients, carers, and staff) compared with other forms of treatment, including ‘treatment as usual’, standard community treatment and other non-case management interventions.\nSearch methods\nWe searched the following databases up to 31 December 2013: ALOIS, the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group,The Cochrane Library, MEDLINE, EMBASE, PsycINFO, CINAHL, LILACS, Web of Science (including Science Citation Index Expanded (SCI-EXPANDED) and Social Science Citation Index), Campbell Collaboration/SORO database and the Specialised Register of the Cochrane Effective Practice and Organisation of Care Group. We updated this search in March 2014 but results have not yet been incorporated.\nSelection criteria\nWe include randomised controlled trials (RCTs) of case management interventions for people with dementia living in the community and their carers. We screened interventions to ensure that they focused on planning and co-ordination of care.\nData collection and analysis\nWe used standard methodological procedures as required by The Cochrane Collaboration. Two review authors independently extracted data and made 'Risk of bias' assessments using Cochrane criteria. For continuous outcomes, we used the mean difference (MD) or standardised mean difference (SMD) between groups along with its confidence interval (95% CI). We applied a fixed- or random-effects model as appropriate. For binary or dichotomous data, we generated the corresponding odds ratio (OR) with 95% CI. We assessed heterogeneity by the I² statistic.\nMain results\nWe include 13 RCTs involving 9615 participants with dementia in the review. Case management interventions in studies varied. We found low to moderate overall risk of bias; 69% of studies were at high risk for performance bias.\nThe case management group were significantly less likely to be institutionalised (admissions to residential or nursing homes) at six months (OR 0.82, 95% CI 0.69 to 0.98, n = 5741, 6 RCTs, I² = 0%, P = 0.02) and at 18 months (OR 0.25, 95% CI 0.10 to 0.61, n = 363, 4 RCTs, I² = 0%, P = 0.003). However, the effects at 10 - 12 months (OR 0.95, 95% CI 0.83 to 1.08, n = 5990, 9 RCTs, I² = 48%, P = 0.39) and 24 months (OR 1.03, 95% CI 0.52 to 2.03, n = 201, 2 RCTs, I² = 0%, P = 0.94) were uncertain. There was evidence from one trial of a reduction in the number of days per month in a residential home or hospital unit in the case management group at six months (MD -5.80, 95% CI -7.93 to -3.67, n = 88, 1 RCT, P < 0.0001) and at 12 months (MD -7.70, 95% CI -9.38 to -6.02, n = 88, 1 RCT, P < 0.0001). One trial reported the length of time until participants were institutionalised at 12 months and the effects were uncertain (hazard ratio (HR): 0.66, 95% CI 0.38 to 1.14, P = 0.14). 
There was no difference in the number of people admitted to hospital at six (4 RCTs, 439 participants), 12 (5 RCTs, 585 participants) and 18 months (5 RCTs, 613 participants). For mortality at 4 - 6, 12, 18 - 24 and 36 months, and for participants' or carers' quality of life at 4, 6, 12 and 18 months, there were no significant effects. There was some evidence of benefits in carer burden at six months (SMD -0.07, 95% CI -0.12 to -0.01, n = 4601, 4 RCTs, I² = 26%, P = 0.03) but the effects at 12 or 18 months were uncertain. Additionally, some evidence indicated case management was more effective at reducing behaviour disturbance at 18 months (SMD -0.35, 95% CI -0.63 to -0.07, n = 206, 2 RCTs, I² = 0%, P = 0.01) but effects were uncertain at four (2 RCTs), six (4 RCTs) or 12 months (5 RCTs).\nThe case management group showed a small significant improvement in carer depression at 18 months (SMD -0.08, 95% CI -0.16 to -0.01, n = 2888, 3 RCTs, I² = 0%, P = 0.03). Conversely, the case management group showed greater improvement in carer well-being in a single study at six months (MD -2.20, 95% CI -4.14 to -0.26, n = 65, 1 RCT, P = 0.03) but the effects were uncertain at 12 or 18 months. There was some evidence that case management reduced the total cost of services at 12 months (SMD -0.07, 95% CI -0.12 to -0.02, n = 5276, 2 RCTs, P = 0.01) and incurred lower dollar expenditure for the total three years (MD -705.00, 95% CI -1170.31 to -239.69, n = 5170, 1 RCT, P = 0.003). Data on a number of outcomes consistently indicated that the intervention group received significantly more community services.\nAuthors' conclusions\nThere is some evidence that case management is beneficial at improving some outcomes at certain time points, both in the person with dementia and in their carer. However, there was considerable heterogeneity between the interventions, outcomes measured and time points across the 13 included RCTs. There was some evidence from good-quality studies to suggest that admissions to care homes and overall healthcare costs are reduced in the medium term; however, the results at longer points of follow-up were uncertain. There was not enough evidence to clearly assess whether case management could delay institutionalisation in care homes. There were uncertain results in patient depression, functional abilities and cognition. Further work should be undertaken to investigate what components of case management are associated with improvement in outcomes. Increased consistency in measures of outcome would support future meta-analysis.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 13 randomised controlled trials (RCTs), including 9615 participants with dementia worldwide. Eleven RCTs also included carers. Studies were conducted in different countries, varied in size and healthcare systems and compared various types of case management interventions with usual care or augmented usual care."
}
]
|
query-laysum | 1336 | [
{
"role": "user",
"content": "Abstract: Background\nStrength training or aerobic exercise programmes, or both, might optimise muscle and cardiorespiratory function and prevent additional disuse atrophy and deconditioning in people with a muscle disease. This is an update of a review first published in 2004 and last updated in 2013. We undertook an update to incorporate new evidence in this active area of research.\nObjectives\nTo assess the effects (benefits and harms) of strength training and aerobic exercise training in people with a muscle disease.\nSearch methods\nWe searched Cochrane Neuromuscular's Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL in November 2018 and clinical trials registries in December 2018.\nSelection criteria\nRandomised controlled trials (RCTs), quasi-RCTs or cross-over RCTs comparing strength or aerobic exercise training, or both lasting at least six weeks, to no training in people with a well-described muscle disease diagnosis.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 14 trials of aerobic exercise, strength training, or both, with an exercise duration of eight to 52 weeks, which included 428 participants with facioscapulohumeral muscular dystrophy (FSHD), dermatomyositis, polymyositis, mitochondrial myopathy, Duchenne muscular dystrophy (DMD), or myotonic dystrophy. Risk of bias was variable, as blinding of participants was not possible, some trials did not blind outcome assessors, and some did not use an intention-to-treat analysis.\nStrength training compared to no training (3 trials)\nFor participants with FSHD (35 participants), there was low-certainty evidence of little or no effect on dynamic strength of elbow flexors (MD 1.2 kgF, 95% CI −0.2 to 2.6), on isometric strength of elbow flexors (MD 0.5 kgF, 95% CI −0.7 to 1.8), and ankle dorsiflexors (MD 0.4 kgF, 95% CI −2.4 to 3.2), and on dynamic strength of ankle dorsiflexors (MD −0.4 kgF, 95% CI −2.3 to 1.4).\nFor participants with myotonic dystrophy type 1 (35 participants), there was very low-certainty evidence of a slight improvement in isometric wrist extensor strength (MD 8.0 N, 95% CI 0.7 to 15.3) and of little or no effect on hand grip force (MD 6.0 N, 95% CI −6.7 to 18.7), pinch grip force (MD 1.0 N, 95% CI −3.3 to 5.3) and isometric wrist flexor force (MD 7.0 N, 95% CI −3.4 to 17.4).\nAerobic exercise training compared to no training (5 trials)\nFor participants with DMD there was very low-certainty evidence regarding the number of leg revolutions (MD 14.0, 95% CI −89.0 to 117.0; 23 participants) or arm revolutions (MD 34.8, 95% CI −68.2 to 137.8; 23 participants), during an assisted six-minute cycle test, and very low-certainty evidence regarding muscle strength (MD 1.7, 95% CI −1.9 to 5.3; 15 participants).\nFor participants with FSHD, there was low-certainty evidence of improvement in aerobic capacity (MD 1.1 L/min, 95% CI 0.4 to 1.8, 38 participants) and of little or no effect on knee extension strength (MD 0.1 kg, 95% CI −0.7 to 0.9, 52 participants).\nFor participants with dermatomyositis and polymyositis (14 participants), there was very low-certainty evidence regarding aerobic capacity (MD 14.6, 95% CI −1.0 to 30.2).\nCombined aerobic exercise and strength training compared to no training (6 trials)\nFor participants with juvenile dermatomyositis (26 participants) there was low-certainty evidence of an improvement in knee extensor strength on the right (MD 36.0 N, 95% CI 25.0 to 47.1) and left (MD 17 N 95% CI 0.5 
to 33.5), but low-certainty evidence of little or no effect on maximum force of hip flexors on the right (MD −9.0 N, 95% CI −22.4 to 4.4) or left (MD 6.0 N, 95% CI −6.6 to 18.6). This trial also provided low-certainty evidence of a slight decrease of aerobic capacity (MD −1.2 min, 95% CI −1.6 to 0.9).\nFor participants with dermatomyositis and polymyositis (21 participants), we found very low-certainty evidence for slight increases in muscle strength as measured by dynamic strength of knee extensors on the right (MD 2.5 kg, 95% CI 1.8 to 3.3) and on the left (MD 2.7 kg, 95% CI 2.0 to 3.4) and no clear effect in isometric muscle strength of eight different muscles (MD 1.0, 95% CI −1.1 to 3.1). There was very low-certainty evidence that there may be an increase in aerobic capacity, as measured with time to exhaustion in an incremental cycle test (17.5 min, 95% CI 8.0 to 27.0) and power performed at VO2 max (maximal oxygen uptake) (18 W, 95% CI 15.0 to 21.0).\nFor participants with mitochondrial myopathy (18 participants), we found very low-certainty evidence regarding shoulder muscle (MD −5.0 kg, 95% CI −14.7 to 4.7), pectoralis major muscle (MD 6.4 kg, 95% CI −2.9 to 15.7), and anterior arm muscle strength (MD 7.3 kg, 95% CI −2.9 to 17.5). We found very low-certainty evidence regarding aerobic capacity, as measured with mean time cycled (MD 23.7 min, 95% CI 2.6 to 44.8) and mean distance cycled until exhaustion (MD 9.7 km, 95% CI 1.5 to 17.9).\nOne trial in myotonic dystrophy type 1 (35 participants) did not provide data on muscle strength or aerobic capacity following combined training. In this trial, muscle strength deteriorated in one person and one person had worse daytime sleepiness (very low-certainty evidence).\nFor participants with FSHD (16 participants), we found very low-certainty evidence regarding muscle strength, aerobic capacity and VO2 peak; the results were very imprecise.\nMost trials reported no adverse events other than muscle soreness or joint complaints (low- to very low-certainty evidence).\nAuthors' conclusions\nThe evidence regarding strength training and aerobic exercise interventions remains uncertain. Evidence suggests that strength training alone may have little or no effect, and that aerobic exercise training alone may lead to a possible improvement in aerobic capacity, but only for participants with FSHD. For combined aerobic exercise and strength training, there may be slight increases in muscle strength and aerobic capacity for people with dermatomyositis and polymyositis, and a slight decrease in aerobic capacity and increase in muscle strength for people with juvenile dermatomyositis. More research with robust methodology and greater numbers of participants is still required.\n\nGiven the provided abstract, please respond to the following query: \"Date up to date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The most recent search for evidence was in November 2018."
}
]
|
query-laysum | 1337 | [
{
"role": "user",
"content": "Abstract: Background\nThe prevalence of gestational diabetes mellitus (GDM) is increasing, with approximately 15% of pregnant women affected worldwide, varying by country, ethnicity and diagnostic thresholds. There are associated short- and long-term health risks for women and their babies.\nObjectives\nWe aimed to summarise the evidence from Cochrane systematic reviews on the effects of interventions for preventing GDM.\nMethods\nWe searched the Cochrane Database of Systematic Reviews (6 August 2019) with key words ‘gestational diabetes’ OR ’GDM’ to identify reviews pre-specifying GDM as an outcome. We included reviews of interventions in women who were pregnant or planning a pregnancy, irrespective of their GDM risk status. Two overview authors independently assessed eligibility, extracted data and assessed quality of evidence using ROBIS and GRADE tools. We assigned interventions to categories with graphic icons to classify the effectiveness of interventions as: clear evidence of benefit or harm (GRADE moderate- or high-quality evidence with a confidence interval (CI) that did not cross the line of no effect); clear evidence of no effect or equivalence (GRADE moderate- or high-quality evidence with a narrow CI crossing the line of no effect); possible benefit or harm (low-quality evidence with a CI that did not cross the line of no effect or GRADE moderate- or high-quality evidence with a wide CI); or unknown benefit or harm (GRADE low-quality evidence with a wide CI or very low-quality evidence).\nMain results\nWe included 11 Cochrane Reviews (71 trials, 23,154 women) with data on GDM. Nine additional reviews pre-specified GDM as an outcome, but did not identify GDM data in included trials. Ten of the 11 reviews were judged to be at low risk of bias and one review at unclear risk of bias. Interventions assessed included diet, exercise, a combination of diet and exercise, dietary supplements, pharmaceuticals, and management of other health problems in pregnancy. The quality of evidence ranged from high to very low.\nDiet\nUnknown benefit or harm: there was unknown benefit or harm of dietary advice versus standard care, on the risk of GDM: risk ratio (RR) 0.60, 95% CI 0.35 to 1.04; 5 trials; 1279 women; very low-quality evidence. 
There was unknown benefit or harm of a low glycaemic index diet versus a moderate-high glycaemic index diet on the risk of GDM: RR 0.91, 95% CI 0.63 to 1.31; 4 trials; 912 women; low-quality evidence.\nExercise\nUnknown benefit or harm: there was unknown benefit or harm for exercise interventions versus standard antenatal care on the risk of GDM: RR 1.10, 95% CI 0.66 to 1.84; 3 trials; 826 women; low-quality evidence.\nDiet and exercise combined\nPossible benefit: combined diet and exercise interventions during pregnancy versus standard care possibly reduced the risk of GDM: RR 0.85, 95% CI 0.71 to 1.01; 19 trials; 6633 women; moderate-quality evidence.\nDietary supplements\nClear evidence of no effect: omega-3 fatty acid supplementation versus none in pregnancy had no effect on the risk of GDM: RR 1.02, 95% CI 0.83 to 1.26; 12 trials; 5235 women; high-quality evidence.\nPossible benefit: myo-inositol supplementation during pregnancy versus control possibly reduced the risk of GDM: RR 0.43, 95% CI 0.29 to 0.64; 3 trials; 502 women; low-quality evidence.\nPossible benefit: vitamin D supplementation versus placebo or control in pregnancy possibly reduced the risk of GDM: RR 0.51, 95% CI 0.27 to 0.97; 4 trials; 446 women; low-quality evidence.\nUnknown benefit or harm: there was unknown benefit or harm of probiotic with dietary intervention versus placebo with dietary intervention (RR 0.37, 95% CI 0.15 to 0.89; 1 trial; 114 women; very low-quality evidence), or probiotic with dietary intervention versus control (RR 0.38, 95% CI 0.16 to 0.92; 1 trial; 111 women; very low-quality evidence) on the risk of GDM. There was unknown benefit or harm of vitamin D + calcium supplementation versus placebo (RR 0.33, 95% CI 0.01 to 7.84; 1 trial; 54 women; very low-quality evidence) or vitamin D + calcium + other minerals versus calcium + other minerals (RR 0.42, 95% CI 0.10 to 1.73; 1 trial; 1298 women; very low-quality evidence) on the risk of GDM.\nPharmaceutical\nPossible benefit: metformin versus placebo given to obese pregnant women possibly reduced the risk of GDM: RR 0.85, 95% CI 0.61 to 1.19; 3 trials; 892 women; moderate-quality evidence.\nUnknown benefit or harm:eight small trials with low- to very low-quality evidence showed unknown benefit or harm for heparin, aspirin, leukocyte immunisation or IgG given to women with a previous stillbirth on the risk of GDM.\nManagement of other health issues\nClear evidence of no effect: universal versus risk based screening of pregnant women for thyroid dysfunction had no effect on the risk of GDM: RR 0.93, 95% CI 0.70 to 1.25; 1 trial; 4516 women; moderate-quality evidence.\nUnknown benefit or harm: there was unknown benefit or harm of using fractional exhaled nitrogen oxide versus a clinical algorithm to adjust asthma therapy on the risk of GDM: RR 0.74, 95% CI 0.31 to 1.77; 1 trial; 210 women; low-quality evidence. There was unknown benefit or harm of pharmacist led multidisciplinary approach to management of maternal asthma versus standard care on the risk of GDM: RR 5.00, 95% CI 0.25 to 99.82; 1 trial; 58 women; low-quality evidence.\nAuthors' conclusions\nNo interventions to prevent GDM in 11 systematic reviews were of clear benefit or harm. A combination of exercise and diet, supplementation with myo-inositol, supplementation with vitamin D and metformin were of possible benefit in reducing the risk of GDM, but further high-quality evidence is needed. 
Omega-3 fatty acid supplementation and universal screening for thyroid dysfunction did not alter the risk of GDM. There was insufficient high-quality evidence to establish the effect on the risk of GDM of diet or exercise alone, probiotics, vitamin D with calcium or other vitamins and minerals, interventions in pregnancy after a previous stillbirth, and different asthma management strategies in pregnancy. There is a lack of trials investigating the effect of interventions prior to or between pregnancies on risk of GDM.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the Cochrane Library (August 2019) and identified 11 Cochrane Reviews that assessed interventions during pregnancy and reported on GDM. The reviews had findings from 71 randomised controlled trials involving 23,154 pregnant women. Interventions included diet, exercise, a combination of diet and exercise, dietary supplements, medications, and management of other health problems. The evidence from the trials ranged from very low to high quality. We identified a further 10 reviews that may provide more information on this topic in the future."
}
]
|
query-laysum | 1339 | [
{
"role": "user",
"content": "Abstract: Background\nGliomas are the most common primary brain tumour. They are graded using the WHO classification system, with Grade II-IV astrocytomas, oligodendrogliomas and oligoastrocytomas. Low-grade gliomas (LGGs) are WHO Grade II infiltrative brain tumours that typically appear solid and non-enhancing on magnetic resonance imaging (MRI) scans. People with LGG often have little or no neurologic deficit, so may opt for a watch-and-wait-approach over surgical resection, radiotherapy or both, as surgery can result in early neurologic disability. Occasionally, high-grade gliomas (HGGs, WHO Grade III and IV) may have the same MRI appearance as LGGs. Taking a watch-and-wait approach could be detrimental for the patient if the tumour progresses quickly. Advanced imaging techniques are increasingly used in clinical practice to predict the grade of the tumour and to aid clinical decision of when to intervene surgically. One such advanced imaging technique is magnetic resonance (MR) perfusion, which detects abnormal haemodynamic changes related to increased angiogenesis and vascular permeability, or \"leakiness\" that occur with aggressive tumour histology. These are reflected by changes in cerebral blood volume (CBV) expressed as rCBV (ratio of tumoural CBV to normal appearing white matter CBV) and permeability, measured by Ktrans.\nObjectives\nTo determine the diagnostic test accuracy of MR perfusion for identifying patients with primary solid and non-enhancing LGGs (WHO Grade II) at first presentation in children and adults. In performing the quantitative analysis for this review, patients with LGGs were considered disease positive while patients with HGGs were considered disease negative.\nTo determine what clinical features and methodological features affect the accuracy of MR perfusion.\nSearch methods\nOur search strategy used two concepts: (1) glioma and the various histologies of interest, and (2) MR perfusion. We used structured search strategies appropriate for each database searched, which included: MEDLINE (Ovid SP), Embase (Ovid SP), and Web of Science Core Collection (Science Citation Index Expanded and Conference Proceedings Citation Index). The most recent search for this review was run on 9 November 2016.\nWe also identified 'grey literature' from online records of conference proceedings from the American College of Radiology, European Society of Radiology, American Society of Neuroradiology and European Society of Neuroradiology in the last 20 years.\nSelection criteria\nThe titles and abstracts from the search results were screened to obtain full-text articles for inclusion or exclusion. We contacted authors to clarify or obtain missing/unpublished data.\nWe included cross-sectional studies that performed dynamic susceptibility (DSC) or dynamic contrast-enhanced (DCE) MR perfusion or both of untreated LGGs and HGGs, and where rCBV and/or Ktrans values were reported. We selected participants with solid and non-enhancing gliomas who underwent MR perfusion within two months prior to histological confirmation. We excluded studies on participants who received radiation or chemotherapy before MR perfusion, or those without histologic confirmation.\nData collection and analysis\nTwo review authors extracted information on study characteristics and data, and assessed the methodological quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. 
We present a summary of the study characteristics and QUADAS-2 results, and rate studies as good quality when they have low risk of bias in the domains of reference standard of tissue diagnosis and flow and timing between MR perfusion and tissue diagnosis.\nIn the quantitative analysis, LGGs were considered disease positive, while HGGs were disease negative. The sensitivity refers to the proportion of LGGs detected by MR perfusion, and specificity as the proportion of detected HGGs. We constructed two-by-two tables with true positives and false negatives as the number of correctly and incorrectly diagnosed LGG, respectively, while true negatives and false positives are the number of correctly and incorrectly diagnosed HGG, respectively.\nMeta-analysis was performed on studies with two-by-two tables, with further sensitivity analysis using good quality studies. Limited data precluded regression analysis to explore heterogeneity but subgroup analysis was performed on tumour histology groups.\nMain results\nSeven studies with small sample sizes (4 to 48) met our inclusion criteria. These were mostly conducted in university hospitals and mostly recruited adult patients. All studies performed DSC MR perfusion and described heterogeneous acquisition and post-processing methods. Only one study performed DCE MR perfusion, precluding quantitative analysis.\nUsing patient-level data allowed selection of individual participants relevant to the review, with generally low risks of bias for the participant selection, reference standard and flow and timing domains. Most studies did not use a pre-specified threshold, which was considered a significant source of bias, however this did not affect quantitative analysis as we adopted a common rCBV threshold of 1.75 for the review. Concerns regarding applicability were low.\nFrom published and unpublished data, 115 participants were selected and included in the meta-analysis. Average rCBV (range) of 83 LGGs and 32 HGGs were 1.29 (0.01 to 5.10) and 1.89 (0.30 to 6.51), respectively. Using the widely accepted rCBV threshold of <1.75 to differentiate LGG from HGG, the summary sensitivity/specificity estimates were 0.83 (95% CI 0.66 to 0.93)/0.48 (95% CI 0.09 to 0.90). Sensitivity analysis using five good quality studies yielded sensitivity/specificity of 0.80 (95% CI 0.61 to 0.91)/0.67 (95% CI 0.07 to 0.98). Subgroup analysis for tumour histology showed sensitivity/specificity of 0.92 (95% CI 0.55 to 0.99)/0.42 (95% CI 0.02 to 0.95) in astrocytomas (6 studies, 55 participants) and 0.77 (95% CI 0.46 to 0.93)/0.53 (95% CI 0.14 to 0.88) in oligodendrogliomas+oligoastrocytomas (6 studies, 56 participants). Data were too sparse to investigate any differences across subgroups.\nAuthors' conclusions\nThe limited available evidence precludes reliable estimation of the performance of DSC MR perfusion-derived rCBV for the identification of grade in untreated solid and non-enhancing LGG from that of HGG. Pooled data yielded a wide range of estimates for both sensitivity (range 66% to 93% for detection of LGGs) and specificity (range 9% to 90% for detection of HGGs). Other clinical and methodological features affecting accuracy of the technique could not be determined from the limited data. 
A larger sample size of both LGG and HGG, preferably using a standardised scanning approach and with an updated reference standard incorporating molecular profiles, is required for a definite conclusion.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The analysis included results from 115 patients. The results indicate that in theory, if MR perfusion were to be used in 100 patients with brain tumours that look like LGG on standard MRI scan, of whom 72 actually have LGG, then:\n- an estimated 74 will have an MR perfusion result indicating that they have LGGs, and of these 15 will have HGGs;\n- an estimated 26 will have an MR perfusion result indicating that they have HGGs, and of these 13 will have LGGs."
}
]
|
query-laysum | 1340 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis is a multi-system disease characterised by the production of thick secretions causing recurrent pulmonary infection, often with unusual bacteria. This leads to lung destruction and eventually death through respiratory failure. There are no antibiotics in development that exert a new mode of action and many of the current antibiotics are ineffective in eradicating the bacteria once chronic infection is established. Antibiotic adjuvants - therapies that act by rendering the organism more susceptible to attack by antibiotics or the host immune system, by rendering it less virulent or killing it by other means, would be a significant therapeutic advance. This is an update of a previously published review.\nObjectives\nTo determine if antibiotic adjuvants improve clinical and microbiological outcome of pulmonary infection in people with cystic fibrosis.\nSearch methods\nWe searched the Cystic Fibrosis Trials Register which is compiled from database searches, hand searches of appropriate journals and conference proceedings.\nDate of most recent search: 16 January 2020.\nWe also searched MEDLINE (all years) on 14 February 2019 and ongoing trials registers on 06 April 2020.\nSelection criteria\nRandomised controlled trials and quasi-randomised controlled trials of a therapy exerting an antibiotic adjuvant mechanism of action compared to placebo or no therapy for people with cystic fibrosis.\nData collection and analysis\nTwo of the authors independently assessed and extracted data from identified trials.\nMain results\nWe identified 42 trials of which eight (350 participants) that examined antibiotic adjuvant therapies are included. Two further trials are ongoing and five are awaiting classification. The included trials assessed β-carotene (one trial, 24 participants), garlic (one trial, 34 participants), KB001-A (a monoclonal antibody) (two trials, 196 participants), nitric oxide (two trials, 30 participants) and zinc supplementation (two trials, 66 participants). The zinc trials recruited children only, whereas the remaining trials recruited both adults and children. Three trials were located in Europe, one in Asia and four in the USA.\nThree of the interventions measured our primary outcome of pulmonary exacerbations (β-carotene, mean difference (MD) -8.00 (95% confidence interval (CI) -18.78 to 2.78); KB001-A, risk ratio (RR) 0.25 (95% CI 0.03 to 2.40); zinc supplementation, RR 1.85 (95% CI 0.65 to 5.26). β-carotene and KB001-A may make little or no difference to the number of exacerbations experienced (low-quality evidence); whereas, given the moderate-quality evidence we found that zinc probably makes no difference to this outcome.\nRespiratory function was measured in all of the included trials. β-carotene and nitric oxide may make little or no difference to forced expiratory volume in one second (FEV1) (low-quality evidence), whilst garlic probably makes little or no difference to FEV1 (moderate-quality evidence). 
It is uncertain whether zinc or KB001-A improve FEV1 as the certainty of this evidence is very low.\nFew adverse events were seen across all of the different interventions and the adverse events that were reported were mild or not treatment-related (quality of the evidence ranged from very low to moderate).\nOne of the trials (169 participants) comparing KB001-A and placebo, reported on the time to the next course of antibiotics; results showed there is probably no difference between groups, HR 1.00 (95% CI 0.69 to 1.45) (moderate-quality evidence). Quality of life was only reported in the two KB001-A trials, which demonstrated that there may be little or no difference between KB001-A and placebo (low-quality evidence). Sputum microbiology was measured and reported for the trials of KB001-A and nitric oxide (four trials). There was very low-quality evidence of a numerical reduction in Pseudomonas aeruginosa density with KB001-A, but it was not significant. The two trials looking at the effects of nitric oxide reported significant reductions in Staphylococcus aureus and near-significant reductions in Pseudomonas aeruginosa, but the quality of this evidence is again very low.\nAuthors' conclusions\nWe could not identify an antibiotic adjuvant therapy that we could recommend for treating of lung infection in people with cystic fibrosis. The emergence of increasingly resistant bacteria makes the reliance on antibiotics alone challenging for cystic fibrosis teams. There is a need to explore alternative strategies, such as the use of adjuvant therapies. Further research is required to provide future therapeutic options.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "None of the treatments led to a longer time until the next flare up of lung disease compared to the people who received placebo. For the other measures we looked at (lung function, side effects, quality of life or number of infections) there were also no differences between the people given the adjuvant and the people given the placebo. All of the trials measured side effects of the treatment but these were mostly mild and happened in both the people receiving the treatment and those that did not.\nNone of the therapies for enhancing the actions of antibiotics which we found, showed a significant benefit for either lung function, rate of infection or quality of life. More randomised controlled trials are needed before we can recommend the routine use of any of these therapies."
}
]
|
query-laysum | 1341 | [
{
"role": "user",
"content": "Abstract: Background\nAging populations are at increased risk of postoperative complications. New methods to provide care for older people recovering from surgery may reduce surgery-related complications. Comprehensive geriatric assessment (CGA) has been shown to improve some outcomes for medical patients, such as enabling them to continue living at home, and has been proposed to have positive impacts for surgical patients. CGA is a coordinated, multidisciplinary collaboration that assesses the medical, psychosocial and functional capabilities and limitations of an older person, with the goal of establishing a treatment plan and long-term follow-up.\nObjectives\nTo assess the effectiveness of CGA interventions compared to standard care on the postoperative outcomes of older people admitted to hospital for surgical care.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL and two clinical trials registers on 13 January 2017. We also searched grey literature for additional citations.\nSelection criteria\nRandomized trials of people undergoing surgery aged 65 years and over comparing CGA with usual surgical care and reporting any of our primary (mortality and discharge to an increased level of care) or secondary (length of stay, re-admission, total cost and postoperative complication) outcomes. We excluded studies if the participants did not receive a complete CGA, did not undergo surgery, and if the study recruited participants aged less than 65 years or from a setting other than an acute care hospital.\nData collection and analysis\nTwo review authors independently screened, assessed risk of bias, extracted data and assessed certainty of evidence from identified articles. We expressed dichotomous treatment effects as risk ratio (RR) with 95% confidence intervals and continuous outcomes as mean difference (MD).\nMain results\nWe included eight randomised trials, seven recruited people recovering from a hip fracture (N = 1583) and one elective surgical oncology trial (N = 260), conducted in North America and Europe. For two trials CGA was done pre-operatively and postoperatively for the remaining. Six trials had adequate randomization, five had low risk of performance bias and four had low risk of detection bias. Blinding of participants was not possible. All eight trials had low attrition rates and seven reported all expected outcomes.\nCGA probably reduces mortality in older people with hip fracture (RR 0.85, 95% CI 0.68 to 1.05; 5 trials, 1316 participants, I² = 0%; moderate-certainty evidence). The intervention reduces discharge to an increased level of care (RR 0.71, 95% CI 0.55 to 0.92; 5 trials, 941 participants, I² = 0%; high-certainty evidence).\nLength of stay was highly heterogeneous, with mean difference between participants allocated to the intervention and the control groups ranging between -12.8 and 8.3 days. CGA probably leads to slightly reduced length of stay (4 trials, 841 participants, moderate-certainty evidence). The intervention probably makes little or no difference in re-admission rates (RR 1.00, 95% CI 0.76 to 1.32; 3 trials, 741 participants, I² = 37%; moderate-certainty evidence).\nCGA probably slightly reduces total cost (1 trial, 397 participants, moderate-certainty evidence). 
The intervention may make little or no difference for major postoperative complications (2 trials, 579 participants, low-certainty evidence) and delirium rates (RR 0.75, 95% CI 0.60 to 0.94, 3 trials, 705 participants, I² = 0%; low-certainty evidence).\nAuthors' conclusions\nThere is evidence that CGA can improve outcomes in people with hip fracture. There are not enough studies to determine when CGA is most effective in relation to surgical intervention or if CGA is effective in surgical patients presenting with conditions other than hip fracture.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We last searched for new studies on 13 January 2017."
}
]
|
query-laysum | 1342 | [
{
"role": "user",
"content": "Abstract: Background\nIn many low- and middle-income countries women are encouraged to give birth in clinics and hospitals so that they can receive care from skilled birth attendants. A skilled birth attendant (SBA) is a health worker such as a midwife, doctor, or nurse who is trained to manage normal pregnancy and childbirth. (S)he is also trained to identify, manage, and refer any health problems that arise for mother and baby. The skills, attitudes and behaviour of SBAs, and the extent to which they work in an enabling working environment, impact on the quality of care provided. If any of these factors are missing, mothers and babies are likely to receive suboptimal care.\nObjectives\nTo explore the views, experiences, and behaviours of skilled birth attendants and those who support them; to identify factors that influence the delivery of intrapartum and postnatal care in low- and middle-income countries; and to explore the extent to which these factors were reflected in intervention studies.\nSearch methods\nOur search strategies specified key and free text terms related to the perinatal period, and the health provider, and included methodological filters for qualitative evidence syntheses and for low- and middle-income countries. We searched MEDLINE, OvidSP (searched 21 November 2016), Embase, OvidSP (searched 28 November 2016), PsycINFO, OvidSP (searched 30 November 2016), POPLINE, K4Health (searched 30 November 2016), CINAHL, EBSCOhost (searched 30 November 2016), ProQuest Dissertations and Theses (searched 15 August 2013), Web of Science (searched 1 December 2016), World Health Organization Reproductive Health Library (searched 16 August 2013), and World Health Organization Global Health Library for WHO databases (searched 1 December 2016).\nSelection criteria\nWe included qualitative studies that focused on the views, experiences, and behaviours of SBAs and those who work with them as part of the team. We included studies from all levels of health care in low- and middle-income countries.\nData collection and analysis\nOne review author extracted data and assessed study quality, and another review author checked the data. We synthesised data using the best fit framework synthesis approach and assessed confidence in the evidence using the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach. We used a matrix approach to explore whether the factors identified by health workers in our synthesis as important for providing maternity care were reflected in the interventions evaluated in the studies in a related intervention review.\nMain results\nWe included 31 studies that explored the views and experiences of different types of SBAs, including doctors, midwives, nurses, auxiliary nurses and their managers. The included studies took place in Africa, Asia, and Latin America.\nOur synthesis pointed to a number of factors affecting SBAs’ provision of quality care. The following factors were based on evidence assessed as of moderate to high confidence. Skilled birth attendants reported that they were not always given sufficient training during their education or after they had begun clinical work. Also, inadequate staffing of facilities could increase the workloads of skilled birth attendants, make it difficult to provide supervision and result in mothers being offered poorer care. In addition, SBAs did not always believe that their salaries and benefits reflected their tasks and responsibilities and the personal risks they undertook. 
Together with poor living and working conditions, these issues were seen to increase stress and to negatively affect family life. Some SBAs also felt that managers lacked capacity and skills, and felt unsupported when their workplace concerns were not addressed.\nPossible causes of staff shortages in facilities included problems with hiring and assigning health workers to facilities where they were needed; lack of funding; poor management and bureaucratic systems; and low salaries. Skilled birth attendants and their managers suggested factors that could help recruit, keep, and motivate health workers, and improve the quality of care; these included good-quality housing, allowances for extra work, paid vacations, continuing education, appropriate assessments of their work, and rewards.\nSkilled birth attendants’ ability to provide quality care was also limited by a lack of equipment, supplies, and drugs; blood and the infrastructure to manage blood transfusions; electricity and water supplies; and adequate space and amenities on maternity wards. These factors were seen to reduce SBAs’ morale, increase their workload and infection risk, and make them less efficient in their work. A lack of transport sometimes made it difficult for SBAs to refer women on to higher levels of care. In addition, women’s negative perceptions of the health system could make them reluctant to accept referral.\nWe identified some other factors that also may have affected the quality of care, which were based on findings assessed as of low or very low confidence. Poor teamwork and lack of trust and collaboration between health workers appeared to negatively influence care. In contrast, good collaboration and teamwork appeared to increase skilled birth attendants’ motivation, their decision-making abilities, and the quality of care. Skilled birth attendants’ workloads and staff shortages influenced their interactions with mothers. In addition, poor communication undermined trust between skilled birth attendants and mothers.\nAuthors' conclusions\nMany factors influence the care that SBAs are able to provide to mothers during childbirth. These include access to training and supervision; staff numbers and workloads; salaries and living conditions; and access to well-equipped, well-organised healthcare facilities with water, electricity, and transport. Other factors that may play a role include the existence of teamwork and of trust, collaboration, and communication between health workers and with mothers. Skilled birth attendants reported many problems tied to all of these factors.\n\nGiven the provided abstract, please respond to the following query: \"Main findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 31 studies conducted in Africa, Asia, and Latin America. Participants were skilled birth attendants including doctors, midwives, nurses, auxiliary nurses and their managers.\nOur synthesis pointed to several factors that affected skilled birth attendants’ provision of quality care. The following factors are based on evidence assessed as of moderate to high confidence. Skilled birth attendants reported that they sometimes had insufficient training during their education or after they had begun work. Where facilities lacked staff, skilled birth attendants’ workloads could increase, it could become difficult to provide supervision, and mothers could receive poorer care. In addition, skilled birth attendants did not always believe that their salaries and benefits reflected their tasks and responsibilities and the personal risks they undertook. Together with poor living and working conditions, these issues could lead to stress and affect skilled birth attendants' family life. Some skilled birth attendants felt that managers lacked capacity and skills, and they felt unsupported when their workplace concerns were not addressed.\nPossible causes of staff shortages included problems with hiring and assigning health workers to health facilities; lack of funding; poor management and bureaucratic systems; and low salaries. Skilled birth attendants and their managers suggested factors that could help recruit, keep, and motivate health workers, and improve the quality of their work; these included good-quality housing, allowances for extra work, paid vacations, continued education, proper assessments of their work, and rewards.\nSkilled birth attendants’ ability to provide quality care was also limited by a lack of equipment, drugs, and supplies; blood and the infrastructure to manage blood transfusions; electricity and water supplies; and adequate space and amenities on maternity wards. These factors were seen to reduce skilled birth attendants’ morale, increase their workload and infection risk, and make them less efficient in their work. A lack of transport sometimes made it difficult for skilled birth attendants to refer women to higher levels of care. In addition, women’s negative perceptions of the health system could make them reluctant to accept referral.\nWe identified some other factors that also may have affected the quality of care, which were based on findings assessed as of low or very low confidence. Poor teamwork and lack of trust and collaboration between health workers appeared to negatively influence care. In contrast, good collaboration and teamwork appeared to increase skilled birth attendants’ motivation, their decision-making abilities, and the quality of care. Skilled birth attendants’ workloads and staff shortages influenced their interactions with mothers. In addition, poor communication undermined trust between skilled birth attendants and mothers."
}
]
|
query-laysum | 1343 | [
{
"role": "user",
"content": "Abstract: Background\n'Infertility' is defined as the failure to achieve pregnancy after 12 months or more of regular unprotected sexual intercourse. One in six couples experience a delay in becoming pregnant. In vitro fertilisation (IVF) is one of the assisted reproductive techniques used to enable couples to achieve a live birth. One of the processes involved in IVF is embryo culture in an incubator, where a stable environment is created and maintained. The incubators are set at approximately 37°C, which is based on the human core body temperature, although several studies have shown that this temperature may in fact be lower in the female reproductive tract and that this could be beneficial. In this review we have included randomised controlled trials which compared different temperatures of embryo culture.\nObjectives\nTo assess different temperatures of embryo culture for human assisted reproduction, which may lead to higher live birth rates.\nSearch methods\nWe searched the following databases and trial registers: the Cochrane Gynaecology and Fertility (CGF) Group Specialised Register of Controlled Trials, the Cochrane Central Register of Studies Online, MEDLINE, Embase, PsycINFO, CINAHL, clinicaltrials.gov, The World Health Organization International Trials Registry Platform search portal, DARE, Web of Knowledge, OpenGrey, LILACS database, PubMed and Google Scholar. Furthermore, we manually searched the references of relevant articles and contacted experts in the field to obtain additional data. We did not restrict the search by language or publication status. We performed the last search on 6 March 2019.\nSelection criteria\nTwo review authors independently screened the titles and abstracts of articles retrieved by the search. Full texts of potentially eligible randomised controlled trials (RCTs) were obtained and screened. We included all RCTs which compared different temperatures of embryo culture in IVF or intracytoplasmic sperm injection (ICSI), with a minimum difference in temperature between the two incubators of ≥ 0.5°C. The search process is shown in the PRISMA flow chart.\nData collection and analysis\nTwo review authors independently assessed trial eligibility and risk of bias and extracted data from the included studies; the third review author resolved any disagreements. We contacted trial authors to provide additional data. The primary review outcomes were live birth and miscarriage. Clinical pregnancy, ongoing pregnancy, multiple pregnancy and adverse events were secondary outcomes. All extracted data were dichotomous outcomes, and odds ratios (OR) were calculated with 95% confidence intervals (CIs) on an intention-to-treat basis. We assessed the overall quality of the evidence for the main comparisons using GRADE methods.\nMain results\nWe included three RCTs, with a total of 563 women, that compared incubation of embryos at 37.0°C or 37.1°C with a lower incubator temperature (37.0°C versus 36.6°C, 37.1°C versus 36.0°C, 37.0° versus 36.5°C). Live birth, miscarriage, clinical pregnancy, ongoing pregnancy and multiple pregnancy were reported. After additional information from the authors, we confirmed one study as having no adverse events; the other two studies did not report adverse events. We did not perform a meta-analysis as there were not enough studies included per outcome. Live birth was not graded since there were no data of interest available. The evidence for the primary outcome, miscarriage, was of very low quality. 
The evidence for the secondary outcomes, clinical pregnancy, ongoing pregnancy and multiple pregnancy was also of very low quality. We downgraded the evidence because of high risk of bias (for performance bias) and imprecision due to limited included studies and wide CIs.\nOnly one study reported the primary outcome, live birth (n = 52). They performed randomisation at the level of oocytes and not per woman, and used a paired design whereby two embryos, one from 36.0°C and one from 37.0°C, were transferred. The data from this study were not interpretable in a meaningful way and therefore not presented. Only one study reported miscarriage. We are uncertain whether incubation at a lower temperature decreases the miscarriage (odds ratio (OR) 0.90, 95% CI 0.52 to 1.55; 1 study, N = 412; very low-quality evidence).\nOf the two studies that reported clinical pregnancy, only one of them performed randomisation per woman. We are uncertain whether a lower temperature improves clinical pregnancy compared to 37°C for embryo incubation (OR 1.08, 95% CI 0.73 to 1.60; 1 study, N = 412; very low-quality evidence). For the outcome, ongoing pregnancy, we are uncertain if a lower temperature is better than 37°C (OR 1.10, 95% CI 0.75 to 1.62; 1 study, N = 412; very low quality-evidence). Multiple pregnancy was reported by two studies, one of which used a paired design, which made it impossible to report the data per temperature. We are uncertain if a temperature lower than 37°C reduces multiple pregnancy (OR 0.80, 95% CI 0.31 to 2.07; 1 study, N = 412; very low-quality evidence). There was insufficient evidence to make a conclusion regarding adverse events, as no studies reported data suitable for analysis.\nAuthors' conclusions\nThis review evaluated different temperatures for embryo culture during IVF. There is a lack of evidence for the majority of outcomes in this review. Based on very low-quality evidence, we are uncertain if incubating at a lower temperature than 37°C improves pregnancy outcomes. More RCTs are needed for comparing different temperatures of embryo culture which require reporting of clinical outcomes as live birth, miscarriage, clinical pregnancy and adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Only one study reported the primary outcome live birth, but due to a small sample size, randomisation on oocytes and paired design, no conclusions could be made. We are uncertain if incubating at a lower temperature than 37°C is beneficial for the following outcomes; miscarriage, clinical pregnancy, ongoing pregnancy and multiple pregnancy. Looking at clinical pregnancy, if women have a 55% chance of a clinical pregnancy with culturing embryos at 37°C, the clinical pregnancy rate using a lower temperature would be between 47% and 66%. Adverse events were mostly not reported; only one study reported no adverse events. Because the number of studies was limited, and each study reported different outcomes, more randomised controlled trials (RCTs) are needed in this field.Quality of evidence\nThe quality of the evidence (using GRADE) was very low due to high risk of bias (performance bias) and imprecision (limited amount of studies and wide confidence intervals). Based on the limited and very low-quality included studies, there is no evidence that a lower temperature may enhance live birth rates or any of the other studied outcomes."
}
]
|
query-laysum | 1344 | [
{
"role": "user",
"content": "Abstract: Background\nHepatitis B vaccine has been recommended for use in people living with HIV (PLHIV) mostly because of the similarities in routes of infection and their prevalence in the same geographic areas. PLHIV may not develop sero-protection after receiving standard hepatitis B vaccine due to their compromised immune status.\nObjectives\nTo evaluate the efficacy of hepatitis B virus vaccine in PLHIV compared to placebo or no vaccine.\nSearch methods\nWe searched 6 English language databases in July 2012, and updated the search in June 2013 and August 2014. We searched the grey literature, conference proceedings, specialised web sites, and contacted experts in the field.\nSelection criteria\nRandomised controlled trials of hepatitis B vaccine compared to placebo or no vaccine, evaluating relevant outcomes of efficacy and safety.\nData collection and analysis\nTwo review authors independently sought and extracted data on study design, participants, hepatitis B infection, hepatitis B related morbidity and mortality, anti-HBs immunogenicity and adverse effects related to vaccines from published articles or through correspondence with authors. Data were analysed qualitatively.\nMain results\nOne double-blind randomised controlled trial with 26 participants who were on antiretroviral therapy (ART), comparing hepatitis B vaccine to placebo conducted in Spain met our eligibility criteria and was included in this review. The study ran for three years and participants were followed up on a monthly basis. The study reported adequate humoral response to vaccine at 12 months and no local or systematic side effects in both intervention and control groups. This humoral response was lost when the participants stopped taking ART. The sample size of the study was small and the study was conducted in a high income setting unlike the areas of highest burden of hepatitis B and HIV co-infections.\nAuthors' conclusions\nThe evidence from this study is insufficient to support any recommendations regarding the use of hepatitis B vaccine in PLHIV. Neither does this evidence demonstrate that hepatitis B vaccine is unsafe in PLHIV. Further randomised controlled trials in high prevalence areas are required to generate evidence on the long term efficacy and safety of hepatitis B vaccine in PLHIV with and without ART. Different regimens and routes of administration should also be explored.\n\nGiven the provided abstract, please respond to the following query: \"Key Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The single study in this review showed improved immunity against hepatitis B among people living with HIV and taking antiretroviral therapy at 12 months. This immunity was lost once they stopped taking antiretroviral therapy. No side-effects were reported."
}
]
|
query-laysum | 1345 | [
{
"role": "user",
"content": "Abstract: Background\nExercise training is commonly recommended for individuals with fibromyalgia. This review examined the effects of supervised group aquatic training programs (led by an instructor). We defined aquatic training as exercising in a pool while standing at waist, chest, or shoulder depth. This review is part of the update of the 'Exercise for treating fibromyalgia syndrome' review first published in 2002, and previously updated in 2007.\nObjectives\nThe objective of this systematic review was to evaluate the benefits and harms of aquatic exercise training in adults with fibromyalgia.\nSearch methods\nWe searched The Cochrane Library 2013, Issue 2 (Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, Health Technology Assessment Database, NHS Economic Evaluation Database), MEDLINE, EMBASE, CINAHL, PEDro, Dissertation Abstracts, WHO international Clinical Trials Registry Platform, and AMED, as well as other sources (i.e., reference lists from key journals, identified articles, meta-analyses, and reviews of all types of treatment for fibromyalgia) from inception to October 2013. Using Cochrane methods, we screened citations, abstracts, and full-text articles. Subsequently, we identified aquatic exercise training studies.\nSelection criteria\nSelection criteria were: a) full-text publication of a randomized controlled trial (RCT) in adults diagnosed with fibromyalgia based on published criteria, and b) between-group data for an aquatic intervention and a control or other intervention. We excluded studies if exercise in water was less than 50% of the full intervention.\nData collection and analysis\nWe independently assessed risk of bias and extracted data (24 outcomes), of which we designated seven as major outcomes: multidimensional function, self reported physical function, pain, stiffness, muscle strength, submaximal cardiorespiratory function, withdrawal rates and adverse effects. We resolved discordance through discussion. We evaluated interventions using mean differences (MD) or standardized mean differences (SMD) and 95% confidence intervals (95% CI). Where two or more studies provided data for an outcome, we carried out meta-analysis. In addition, we set and used a 15% threshold for calculation of clinically relevant differences.\nMain results\nWe included 16 aquatic exercise training studies (N = 881; 866 women and 15 men). Nine studies compared aquatic exercise to control, five studies compared aquatic to land-based exercise, and two compared aquatic exercise to a different aquatic exercise program.\nWe rated the risk of bias related to random sequence generation (selection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), blinding of outcome assessors (detection bias), and other bias as low. We rated blinding of participants and personnel (selection and performance bias) and allocation concealment (selection bias) as low risk and unclear. The assessment of the evidence showed limitations related to imprecision, high statistical heterogeneity, and wide confidence intervals.\nAquatic versus control\nWe found statistically significant improvements (P value < 0.05) in all of the major outcomes. 
Based on a 100-point scale, multidimensional function improved by six units (MD -5.97, 95% CI -9.06 to -2.88; number needed to treat (NNT) 5, 95% CI 3 to 9), self reported physical function by four units (MD -4.35, 95% CI -7.77 to -0.94; NNT 6, 95% CI 3 to 22), pain by seven units (MD -6.59, 95% CI -10.71 to -2.48; NNT 5, 95% CI 3 to 8), and stiffness by 18 units (MD -18.34, 95% CI -35.75 to -0.93; NNT 3, 95% CI 2 to 24) more in the aquatic than the control groups. The SMD for muscle strength as measured by knee extension and hand grip was 0.63 standard deviations higher compared to the control group (SMD 0.63, 95% CI 0.20 to 1.05; NNT 4, 95% CI 3 to 12) and cardiovascular submaximal function improved by 37 meters on six-minute walk test (95% CI 4.14 to 69.92). Only two major outcomes, stiffness and muscle strength, met the 15% threshold for clinical relevance (improved by 27% and 37% respectively). Withdrawals were similar in the aquatic and control groups and adverse effects were poorly reported, with no serious adverse effects reported.\nAquatic versus land-based\nThere were no statistically significant differences between interventions for multidimensional function, self reported physical function, pain or stiffness: 0.91 units (95% CI -4.01 to 5.83), -5.85 units (95% CI -12.33 to 0.63), -0.75 units (95% CI -10.72 to 9.23), and two units (95% CI -8.88 to 1.28) respectively (all based on a 100-point scale), or in submaximal cardiorespiratory function (three seconds on a 100-meter walk test, 95% CI -1.77 to 7.77). We found a statistically significant difference between interventions for strength, favoring land–based training (2.40 kilo pascals grip strength, 95% CI 4.52 to 0.28). None of the outcomes in the aquatic versus land comparison reached clinically relevant differences of 15%. Withdrawals were similar in the aquatic and land groups and adverse effects were poorly reported, with no serious adverse effects in either group.\nAquatic versus aquatic (Ai Chi versus stretching in the water, exercise in pool water versus exercise in sea water)\nAmong the major outcomes the only statistically significant difference between interventions was for stiffness, favoring Ai Chi (1.00 on a 100-point scale, 95% CI 0.31 to 1.69).\nAuthors' conclusions\nLow to moderate quality evidence relative to control suggests that aquatic training is beneficial for improving wellness, symptoms, and fitness in adults with fibromyalgia. Very low to low quality evidence suggests that there are benefits of aquatic and land-based exercise, except in muscle strength (very low quality evidence favoring land). No serious adverse effects were reported.\n\nGiven the provided abstract, please respond to the following query: \"Background: what is fibromyalgia and what is aquatic training?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with fibromyalgia have persistent, widespread body pain and often experience symptoms such as fatigue, stiffness, depression, and difficulty sleeping.\nAquatic training is exercising in a pool while standing at waist, chest, or shoulder depth. This review examined the effects of supervised group aquatic training programs (led by an instructor)."
}
]
|
query-laysum | 1346 | [
{
"role": "user",
"content": "Abstract: Background\nTobacco smoking is the leading preventable cause of death worldwide, which makes it essential to stimulate smoking cessation. The financial cost of smoking cessation treatment can act as a barrier to those seeking support. We hypothesised that provision of financial assistance for people trying to quit smoking, or reimbursement of their care providers, could lead to an increased rate of successful quit attempts. This is an update of the original 2005 review.\nObjectives\nThe primary objective of this review was to assess the impact of reducing the costs for tobacco smokers or healthcare providers for using or providing smoking cessation treatment through healthcare financing interventions on abstinence from smoking. The secondary objectives were to examine the effects of different levels of financial support on the use or prescription of smoking cessation treatment, or both, and on the number of smokers making a quit attempt (quitting smoking for at least 24 hours). We also assessed the cost effectiveness of different financial interventions, and analysed the costs per additional quitter, or per quality-adjusted life year (QALY) gained.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialised Register in September 2016.\nSelection criteria\nWe considered randomised controlled trials (RCTs), controlled trials and interrupted time series studies involving financial benefit interventions to smokers or their healthcare providers, or both.\nData collection and analysis\nTwo reviewers independently extracted data and assessed the quality of the included studies. We calculated risk ratios (RR) for individual studies on an intention-to-treat basis and performed meta-analysis using a random-effects model.\nMain results\nIn the current update, we have added six new relevant studies, resulting in a total of 17 studies included in this review involving financial interventions directed at smokers or healthcare providers, or both.\nFull financial interventions directed at smokers had a favourable effect on abstinence at six months or longer when compared to no intervention (RR 1.77, 95% CI 1.37 to 2.28, I² = 33%, 9333 participants). There was no evidence that full coverage interventions increased smoking abstinence compared to partial coverage interventions (RR 1.02, 95% CI 0.71 to 1.48, I² = 64%, 5914 participants), but partial coverage interventions were more effective in increasing abstinence than no intervention (RR 1.27 95% CI 1.02 to 1.59, I² = 21%, 7108 participants). The economic evaluation showed costs per additional quitter ranging from USD 97 to USD 7646 for the comparison of full coverage with partial or no coverage.\nThere was no clear evidence of an effect on smoking cessation when we pooled two trials of financial incentives directed at healthcare providers (RR 1.16, CI 0.98 to 1.37, I² = 0%, 2311 participants).\nFull financial interventions increased the number of participants making a quit attempt when compared to no interventions (RR 1.11, 95% CI 1.04 to 1.17, I² = 15%, 9065 participants). 
There was insufficient evidence to show whether partial financial interventions increased quit attempts compared to no interventions (RR 1.13, 95% CI 0.98 to 1.31, I² = 88%, 6944 participants).\nFull financial interventions increased the use of smoking cessation treatment compared to no interventions with regard to various pharmacological and behavioural treatments: nicotine replacement therapy (NRT): RR 1.79, 95% CI 1.54 to 2.09, I² = 35%, 9455 participants; bupropion: RR 3.22, 95% CI 1.41 to 7.34, I² = 71%, 6321 participants; behavioural therapy: RR 1.77, 95% CI 1.19 to 2.65, I² = 75%, 9215 participants.\nThere was evidence that partial coverage compared to no coverage reported a small positive effect on the use of bupropion (RR 1.15, 95% CI 1.03 to 1.29, I² = 0%, 6765 participants). Interventions directed at healthcare providers increased the use of behavioural therapy (RR 1.69, 95% CI 1.01 to 2.86, I² = 85%, 25820 participants), but not the use of NRT and/or bupropion (RR 0.94, 95% CI 0.76 to 1.18, I² = 6%, 2311 participants).\nWe assessed the quality of the evidence for the main outcome, abstinence from smoking, as moderate. In most studies participants were not blinded to the different study arms and researchers were not blinded to the allocated interventions. Furthermore, there was not always sufficient information on attrition rates. We detected some imprecision but we judged this to be of minor consequence on the outcomes of this study.\nAuthors' conclusions\nFull financial interventions directed at smokers when compared to no financial interventions increase the proportion of smokers who attempt to quit, use smoking cessation treatments, and succeed in quitting. There was no clear and consistent evidence of an effect on smoking cessation from financial incentives directed at healthcare providers. We are only moderately confident in the effect estimate because there was some risk of bias due to a lack of blinding in participants and researchers, and insufficient information on attrition rates.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For the update of this review, we searched for studies on the effect of financial interventions on smoking cessation treatment and success in September 2016. We found six new relevant studies, resulting in a total of 17 studies.\nWe found 15 studies directed at smokers. Covering all the costs of smoking cessation treatment for smokers (free treatment) when compared to providing no financial benefits increased the number of smokers who attempted to quit (4 studies, 9065 participants), used smoking cessation treatments (7 studies, 9455 participants), and succeeded in quitting (6 studies, 9333 participants).\nWe found three studies directed at healthcare providers. The two studies that investigated the effect of a financial intervention on quit success (2311 participants) did not clearly show an increase in quit rates. Financial interventions directed at healthcare providers also did not have an effect on the use of smoking cessation medication (2 studies, 2311 participants). However, financial interventions did increase the number of smokers who used smoking cessation counselling (3 studies, 25,820 participants).\nInformation on the costs of the intervention was available for eight studies (33,488 participants). The economic evaluation of the individual studies showed that although the absolute differences in quitting were small, the costs per person successfully quitting were low or moderate."
}
]
|
query-laysum | 1347 | [
{
"role": "user",
"content": "Abstract: Background\nMany women express concern about their ability to produce enough milk, and insufficient milk is frequently cited as the reason for supplementation and early termination of breastfeeding. When addressing this concern, it is important first to consider the influence of maternal and neonatal health, infant suck, proper latch, and feeding frequency on milk production, and that steps be taken to correct or compensate for any contributing issues.\nOral galactagogues are substances that stimulate milk production. They may be pharmacological or non-pharmacological (natural). Natural galactagogues are usually botanical or other food agents. The choice between pharmacological or natural galactagogues is often influenced by familiarity and local customs. Evidence for the possible benefits and harms of galactagogues is important for making an informed decision on their use.\nObjectives\nTo assess the effect of oral galactagogues for increasing milk production in non-hospitalised breastfeeding mother-term infant pairs.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register, ClinicalTrials.gov, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), Health Research and Development Network - Phillippines (HERDIN), Natural Products Alert (Napralert), the personal reference collection of author LM, and reference lists of retrieved studies (4 November 2019).\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs (including published abstracts) comparing oral galactagogues with placebo, no treatment, or another oral galactagogue in mothers breastfeeding healthy term infants. We also included cluster-randomised trials but excluded cross-over trials.\nData collection and analysis\nWe used standard Cochrane Pregnancy and Childbirth methods for data collection and analysis. Two to four review authors independently selected the studies, assessed the risk of bias, extracted data for analysis and checked accuracy. Where necessary, we contacted the study authors for clarification.\nMain results\nForty-one RCTs involving 3005 mothers and 3006 infants from at least 17 countries met the inclusion criteria. Studies were conducted either in hospitals immediately postpartum or in the community. There was considerable variation in mothers, particularly in parity and whether or not they had lactation insufficiency. Infants' ages at commencement of the studies ranged from newborn to 6 months. The overall certainty of evidence was low to very low because of high risk of biases (mainly due to lack of blinding), substantial clinical and statistical heterogeneity, and imprecision of measurements.\nPharmacological galactagogues\nNine studies compared a pharmacological galactagogue (domperidone, metoclopramide, sulpiride, thyrotropin-releasing hormone) with placebo or no treatment.\nThe primary outcome of proportion of mothers who continued breastfeeding at 3, 4 and 6 months was not reported. Only one study (metoclopramide) reported on the outcome of infant weight, finding little or no difference (mean difference (MD) 23.0 grams, 95% confidence interval (CI) -47.71 to 93.71; 1 study, 20 participants; low-certainty evidence).\nThree studies (metoclopramide, domperidone, sulpiride) reported on milk volume, finding pharmacological galactagogues may increase milk volume (MD 63.82 mL, 95% CI 25.91 to 101.72; I² = 34%; 3 studies, 151 participants; low-certainty evidence). 
Subgroup analysis indicates there may be increased milk volume with each drug, but with varying CIs.\nThere was limited reporting of adverse effects, none of which could be meta-analysed. Where reported, they were limited to minor complaints, such as tiredness, nausea, headache and dry mouth (very low-certainty evidence). No adverse effects were reported for infants.\nNatural galactagogues\nTwenty-seven studies compared natural oral galactagogues (banana flower, fennel, fenugreek, ginger, ixbut, levant cotton, moringa, palm dates, pork knuckle, shatavari, silymarin, torbangun leaves or other natural mixtures) with placebo or no treatment.\nOne study (Mother's Milk Tea) reported breastfeeding rates at six months with a concluding statement of \"no significant difference\" (no data and no measure of significance provided, 60 participants, very low-certainty evidence).\nThree studies (fennel, fenugreek, moringa, mixed botanical tea) reported infant weight but could not be meta-analysed due to substantial clinical and statistical heterogeneity (I2 = 60%, 275 participants, very low-certainty evidence). Subgroup analysis shows we are very uncertain whether fennel or fenugreek improves infant weight, whereas moringa and mixed botanical tea may increase infant weight compared to placebo. Thirteen studies (Bu Xue Sheng Ru, Chanbao, Cui Ru, banana flower, fenugreek, ginger, moringa, fenugreek, ginger and turmeric mix, ixbut, mixed botanical tea, Sheng Ru He Ji, silymarin, Xian Tong Ru, palm dates; 962 participants) reported on milk volume, but meta-analysis was not possible due to substantial heterogeneity (I2 = 99%). The subgroup analysis for each intervention suggested either benefit or little or no difference (very low-certainty evidence). There was limited reporting of adverse effects, none of which could be meta-analysed. Where reported, they were limited to minor complaints such as mothers with urine that smelled like maple syrup and urticaria in infants (very low-certainty evidence).\nGalactagogue versus galactagogue\nEight studies (Chanbao; Bue Xue Sheng Ru, domperidone, moringa, fenugreek, palm dates, torbangun, moloco, Mu Er Wu You, Kun Yuan Tong Ru) compared one oral galactagogue with another. We were unable to perform meta-analysis because there was only one small study for each match-up, so we do not know if one galactagogue is better than another for any outcome.\nAuthors' conclusions\nDue to extremely limited, very low certainty evidence, we do not know whether galactagogues have any effect on proportion of mothers who continued breastfeeding at 3, 4 and 6 months. There is low-certainty evidence that pharmacological galactagogues may increase milk volume. There is some evidence from subgroup analyses that natural galactagogues may benefit infant weight and milk volume in mothers with healthy, term infants, but due to substantial heterogeneity of the studies, imprecision of measurements and incomplete reporting, we are very uncertain about the magnitude of the effect. We are also uncertain if one galactagogue performs better than another. With limited data on adverse effects, we are uncertain if there are any concerning adverse effects with any particular galactagogue; those reported were minor complaints.\nHigh-quality RCTs on the efficacy and safety of galactagogues are urgently needed. 
A set of core outcomes to standardise infant weight and milk volume measurement is also needed, as well as a strong basis for the dose and dosage form used.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence from randomised controlled studies up to 4 November 2019 and identified 41 eligible studies involving 3005 mothers and 3006 infants from at least 17 countries. The studies varied widely in babies' ages, types of milk boosters investigated, how long they were taken, and how outcomes were reported. Medications included sulpiride, metoclopramide, domperidone and thyrotropin-releasing hormone. Natural interventions included banana flower, fennel, fenugreek, ginger, ixbut, levant cotton, moringa, palm dates, pork knuckle, shatavari, silymarin, torbangun leaves, and a variety of natural mixtures as teas or soups."
}
]
|
query-laysum | 1348 | [
{
"role": "user",
"content": "Abstract: Background\nNeovascular age-related macular degeneration (AMD) is the leading cause of legal blindness in elderly populations of industrialised countries. Bevacizumab (Avastin®) and ranibizumab (Lucentis®) are targeted biological drugs (a monoclonal antibody) that inhibit vascular endothelial growth factor, an angiogenic cytokine that promotes vascular leakage and growth, thereby preventing its pathological angiogenesis. Ranibizumab is approved for intravitreal use to treat neovascular AMD, while bevacizumab is approved for intravenous use as a cancer therapy. However, due to the biological similarity of the two drugs, bevacizumab is widely used off-label to treat neovascular AMD.\nObjectives\nTo assess the systemic safety of intravitreal bevacizumab (brand name Avastin®; Genentech/Roche) compared with intravitreal ranibizumab (brand name Lucentis®; Novartis/Genentech) in people with neovascular AMD. Primary outcomes were death and All serious systemic adverse events (All SSAEs), the latter as a composite outcome in accordance with the International Conference on Harmonisation Good Clinical Practice. Secondary outcomes examined specific SSAEs: fatal and non-fatal myocardial infarctions, strokes, arteriothrombotic events, serious infections, and events grouped in some Medical Dictionary for Regulatory Activities System Organ Classes (MedDRA SOC). We assessed the safety at the longest available follow-up to a maximum of two years.\nSearch methods\nWe searched CENTRAL, MEDLINE, EMBASE and other online databases up to 27 March 2014. We also searched abstracts and clinical study presentations at meetings, trial registries, and contacted authors of included studies when we had questions.\nSelection criteria\nRandomised controlled trials (RCTs) directly comparing intravitreal bevacizumab (1.25 mg) and ranibizumab (0.5 mg) in people with neovascular AMD, regardless of publication status, drug dose, treatment regimen, or follow-up length, and whether the SSAEs of interest were reported in the trial report.\nData collection and analysis\nTwo authors independently selected studies and assessed the risk of bias for each study. Three authors independently extracted data.\nWe conducted random-effects meta-analyses for the primary and secondary outcomes. We planned a pre-specified analysis to explore deaths and All SSAEs at the one-year follow-up.\nMain results\nWe included data from nine studies (3665 participants), including six published (2745 participants) and three unpublished (920 participants) RCTs, none supported by industry. Three studies excluded participants at high cardiovascular risk, increasing clinical heterogeneity among studies. The studies were well designed, and we did not downgrade the quality of the evidence for any of the outcomes due to risk of bias. Although the estimated effects of bevacizumab and ranibizumab on our outcomes were similar, we downgraded the quality of the evidence due to imprecision.\nAt the maximum follow-up (one or two years), the estimated risk ratio (RR) of death with bevacizumab compared with ranibizumab was 1.10 (95% confidence interval (CI) 0.78 to 1.57, P value = 0.59; eight studies, 3338 participants; moderate quality evidence). Based on the event rates in the studies, this gives a risk of death with ranibizumab of 3.4% and with bevacizumab of 3.7% (95% CI 2.7% to 5.3%).\nFor All SSAEs, the estimated RR was 1.08 (95% CI 0.90 to 1.31, P value = 0.41; nine studies, 3665 participants; low quality evidence). 
Based on the event rates in the studies, this gives a risk of SSAEs of 22.2% with ranibizumab and with bevacizumab of 24% (95% CI 20% to 29.1%).\nFor the secondary outcomes, we could not detect any difference between bevacizumab and ranibizumab, with the exception of gastrointestinal disorders MedDRA SOC where there was a higher risk with bevacizumab (RR 1.82; 95% CI 1.04 to 3.19, P value = 0.04; six studies, 3190 participants).\nPre-specified analyses of deaths and All SSAEs at one-year follow-up did not substantially alter the findings of our review.\nFixed-effect analysis for deaths did not substantially alter the findings of our review, but fixed-effect analysis of All SSAEs showed an increased risk for bevacizumab (RR 1.12; 95% CI 1.00 to 1.26, P value = 0.04; nine studies, 3665 participants): the meta-analysis was dominated by a single study (weight = 46.9%).\nThe available evidence was sensitive to the exclusion of CATT or unpublished results. For All SSAEs, the exclusion of CATT moved the overall estimate towards no difference (RR 1.01; 95% CI 0.82 to 1.25, P value = 0.92), while the exclusion of LUCAS yielded a larger RR, with more SSAEs in the bevacizumab group, largely driven by CATT (RR 1.19; 95% CI 1.06 to 1.34, P value = 0.004). The exclusion of all unpublished studies produced a RR of 1.12 for death (95% CI 0.78 to 1.62, P value = 0.53) and a RR of 1.21 for SSAEs (95% CI 1.06 to 1.37, P value = 0.004), indicating a higher risk of SSAEs in those assigned to bevacizumab than ranibizumab.\nAuthors' conclusions\nThis systematic review of non-industry sponsored RCTs could not determine a difference between intravitreal bevacizumab and ranibizumab for deaths, All SSAEs, or specific subsets of SSAEs in the first two years of treatment, with the exception of gastrointestinal disorders. The current evidence is imprecise and might vary across levels of patient risks, but overall suggests that if a difference exists, it is likely to be small. Health policies for the utilisation of ranibizumab instead of bevacizumab as a routine intervention for neovascular AMD for reasons of systemic safety are not sustained by evidence. The main results and quality of evidence should be verified once all trials are fully published.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our review found the systemic safety of bevacizumab for neovascular AMD to be similar to that of ranibizumab, except for gastrointestinal disorders, which was part of a secondary analysis.\nIf 1000 people were treated with ranibizumab for one or two years, 34 would die. If treated instead with bevacizumab, between 27 and 53 of them would die. If 1000 people were treated with ranibizumab, 222 would experience one or more serious systemic adverse events (SSAEs). If 1000 people were treated instead with bevacizumab, between 200 and 291 would experience such an event. Deaths are likely to be unrelated to the administration of drugs."
}
]
|
query-laysum | 1349 | [
{
"role": "user",
"content": "Abstract: Background\nUnemployment is associated with decreased health which may be a reason or a consequence of becoming unemployed. Decreased health can inhibit re-employment.\nObjectives\nTo assess the effectiveness of health-improving interventions for obtaining employment in unemployed job seekers.\nSearch methods\nWe searched (3 May 2018, updated 13 August 2019) the Cochrane Central Register of Controlled Trials, MEDLINE, Scopus, PsycINFO, CINAHL, SocINDEX, OSH Update, ClinicalTrials.gov, the WHO trials portal, and also reference lists of included studies and selected reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) of the effectiveness of health-improving interventions for obtaining employment in unemployed job seekers. The primary outcome was re-employment reported as the number or percentage of participants who obtained employment. Our secondary outcomes were health and work ability.\nData collection and analysis\nTwo authors independently screened studies, extracted outcome data, and assessed risk of bias. We pooled study results with random-effect models and reported risk ratios (RRs) with 95% confidence intervals (CIs) and assessed the overall quality of the evidence for each comparison using the GRADE approach.\nMain results\nWe included 15 randomised controlled trials (16 interventions) with a total of 6397 unemployed participants. Eight studies evaluated therapeutic interventions such as cognitive behavioural therapy, physical exercise, and health-related advice and counselling and, in seven studies, interventions were combined using therapeutic methods and job-search training.\nTherapeutic interventions\nTherapeutic interventions compared to no intervention may increase employment at an average of 11 months follow-up but the evidence is very uncertain (RR = 1.41, 95% CI 1.07 to 1.87, n = 1142, 8 studies with 9 interventions, I² = 52%, very low-quality evidence). There is probably no difference in the effects of therapeutic interventions compared to no intervention on mental health (SMD 0.12, 95% CI -0.06 to 0.29, n = 530, 2 studies, low-quality evidence) and on general health (SMD 0.19, 95% CI -0.04 to 0.41, n = 318, 1 study, moderate-quality evidence).\nCombined interventions\nCombined interventions probably increase employment slightly compared to no intervention at an average of 10 months follow-up (RR 1.12, 95% CI 1.06 to 1.20, n = 4101, 6 studies, I² = 7%).\nThere were no studies that measured work-ability, adverse events, or cost-effectiveness.\nAuthors' conclusions\nInterventions combining therapeutic methods and job-search training probably have a small beneficial effect in increasing employment. Therapeutic interventions may have an effect on re-employment, but we are very uncertain. Therapeutic interventions may not improve health in unemployed job seekers. Large high-quality RCTs targeting short-term or long-term unemployed people are needed to increase the quality of the evidence. A cost-effectiveness assessment is needed of the small beneficial effects.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 15 randomised controlled trials involving 6397 participants. Interventions included therapeutic methods such as cognitive behavioural therapy, physical exercise, and health-related advice and counselling, or they were combined interventions that included therapeutics and job-search training. We used the data from these studies about the number of unemployed participants who obtained a job. We also collected data about general and mental health. There were no studies that reported work ability outcomes.\nResults showed that combined interventions (therapeutic interventions combined with job-search training) probably slightly increase the number of unemployed people who get a job compared to those who do not follow an intervention. Therapeutic interventions may increase the number of unemployed people who get a job compared to those who do not follow an intervention, but the evidence is very uncertain. Therapeutic interventions probably make no difference to mental and general health compared to no intervention."
}
]
|
query-laysum | 1350 | [
{
"role": "user",
"content": "Abstract: Background\nMelanoma has one of the fastest rising incidence rates of any cancer. It accounts for a small percentage of skin cancer cases but is responsible for the majority of skin cancer deaths. Although history-taking and visual inspection of a suspicious lesion by a clinician are usually the first in a series of ‘tests’ to diagnose skin cancer, dermoscopy has become an important tool to assist diagnosis by specialist clinicians and is increasingly used in primary care settings. Dermoscopy is a magnification technique using visible light that allows more detailed examination of the skin compared to examination by the naked eye alone. Establishing the additive value of dermoscopy over and above visual inspection alone across a range of observers and settings is critical to understanding its contribution for the diagnosis of melanoma and to future understanding of the potential role of the growing number of other high-resolution image analysis techniques.\nObjectives\nTo determine the diagnostic accuracy of dermoscopy alone, or when added to visual inspection of a skin lesion, for the detection of cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in adults. We separated studies according to whether the diagnosis was recorded face-to-face (in-person), or based on remote (image-based), assessment.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: CENTRAL; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design that evaluated dermoscopy in adults with lesions suspicious for melanoma, compared with a reference standard of either histological confirmation or clinical follow-up. Data on the accuracy of visual inspection, to allow comparisons of tests, was included only if reported in the included studies of dermoscopy.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold were missing. We estimated accuracy using hierarchical summary receiver operating characteristic (SROC),methods. Analysis of studies allowing direct comparison between tests was undertaken. To facilitate interpretation of results, we computed values of sensitivity at the point on the SROC curve with 80% fixed specificity and values of specificity with 80% fixed sensitivity. We investigated the impact of in-person test interpretation; use of a purposely developed algorithm to assist diagnosis; observer expertise; and dermoscopy training.\nMain results\nWe included a total of 104 study publications reporting on 103 study cohorts with 42,788 lesions (including 5700 cases), providing 354 datasets for dermoscopy. The risk of bias was mainly low for the index test and reference standard domains and mainly high or unclear for participant selection and participant flow. Concerns regarding the applicability of study findings were largely scored as ‘high’ concern in three of four domains assessed. 
Selective participant recruitment, lack of reproducibility of diagnostic thresholds and lack of detail on observer expertise were particularly problematic.\nThe accuracy of dermoscopy for the detection of invasive melanoma or atypical intraepidermal melanocytic variants was reported in 86 datasets; 26 for evaluations conducted in person (dermoscopy added to visual inspection), and 60 for image-based evaluations (diagnosis based on interpretation of dermoscopic images). Analyses of studies by prior testing revealed no obvious effect on accuracy; analyses were hampered by the lack of studies in primary care, lack of relevant information and the restricted inclusion of lesions selected for biopsy or excision. Accuracy was higher for in-person diagnosis compared to image-based evaluations (relative diagnostic odds ratio (RDOR) 4.6, 95% confidence interval (CI) 2.4 to 9.0; P < 0.001).\nWe compared accuracy for (a), in-person evaluations of dermoscopy (26 evaluations; 23,169 lesions and 1664 melanomas),versus visual inspection alone (13 evaluations; 6740 lesions and 459 melanomas), and for (b), image-based evaluations of dermoscopy (60 evaluations; 13,475 lesions and 2851 melanomas),versus image-based visual inspection (11 evaluations; 1740 lesions and 305 melanomas). For both comparisons, meta-analysis found dermoscopy to be more accurate than visual inspection alone, with RDORs of (a), 4.7 (95% CI 3.0 to 7.5; P < 0.001), and (b), 5.6 (95% CI 3.7 to 8.5; P < 0.001). For a), the predicted difference in sensitivity at a fixed specificity of 80% was 16% (95% CI 8% to 23%; 92% for dermoscopy + visual inspection versus 76% for visual inspection), and predicted difference in specificity at a fixed sensitivity of 80% was 20% (95% CI 7% to 33%; 95% for dermoscopy + visual inspection versus 75% for visual inspection). For b) the predicted differences in sensitivity was 34% (95% CI 24% to 46%; 81% for dermoscopy versus 47% for visual inspection), at a fixed specificity of 80%, and predicted difference in specificity was 40% (95% CI 27% to 57%; 82% for dermoscopy versus 42% for visual inspection), at a fixed sensitivity of 80%.\nUsing the median prevalence of disease in each set of studies ((a), 12% for in-person and (b), 24% for image-based), for a hypothetical population of 1000 lesions, an increase in sensitivity of (a), 16% (in-person), and (b), 34% (image-based), from using dermoscopy at a fixed specificity of 80% equates to a reduction in the number of melanomas missed of (a), 19 and (b), 81 with (a), 176 and (b), 152 false positive results. An increase in specificity of (a), 20% (in-person), and (b), 40% (image-based), at a fixed sensitivity of 80% equates to a reduction in the number of unnecessary excisions from using dermoscopy of (a), 176 and (b), 304 with (a), 24 and (b), 48 melanomas missed.\nThe use of a named or published algorithm to assist dermoscopy interpretation (as opposed to no reported algorithm or reported use of pattern analysis), had no significant impact on accuracy either for in-person (RDOR 1.4, 95% CI 0.34 to 5.6; P = 0.17), or image-based (RDOR 1.4, 95% CI 0.60 to 3.3; P = 0.22), evaluations. This result was supported by subgroup analysis according to algorithm used. We observed higher accuracy for observers reported as having high experience and for those classed as ‘expert consultants’ in comparison to those considered to have less experience in dermoscopy, particularly for image-based evaluations. 
Evidence for the effect of dermoscopy training on test accuracy was very limited but suggested associated improvements in sensitivity.\nAuthors' conclusions\nDespite the observed limitations in the evidence base, dermoscopy is a valuable tool to support the visual inspection of a suspicious skin lesion for the detection of melanoma and atypical intraepidermal melanocytic variants, particularly in referred populations and in the hands of experienced users. Data to support its use in primary care are limited, however, it may assist in triaging suspicious lesions for urgent referral when employed by suitably trained clinicians. Formal algorithms may be of most use for dermoscopy training purposes and for less expert observers, however reliable data comparing approaches using dermoscopy in person are lacking.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included 104 studies reporting data for people with lesions suspected of melanoma. The main results for the diagnosis of melanoma (including very early melanomas), are based on 86 of the studies, 26 of which provide information on the accuracy of dermoscopy added to in-person visual inspection of a skin lesion and 60 provide information based on examination of dermoscopic images without the patient being present.\nThe 26 in-person studies provide the most relevant data for the use of dermoscopy in practice and their results are summarised here. A total of 23,169 suspicious skin lesions were included in the 26 studies and 13 of them also provided information on the accuracy of visual inspection of a lesion without the use of dermoscopy. The results suggest that dermoscopy is more accurate than visual inspection on its own, both for identifying melanoma correctly and excluding things that are not melanoma.\nThe studies used different ways of deciding whether a skin lesion was a melanoma or not, which means that we cannot be exactly sure about how much better dermoscopy is compared to visual inspection alone. Instead we can give an illustrative example of the expected effect of the increase in accuracy using a group of 1000 lesions, of which 120 (12%), are melanoma. In order to see how much better dermoscopy is in identifying melanoma correctly when compared to just looking at the skin, we have to assume that both lead to the same number of lesions being falsely diagnosed as melanoma (we assumed that 176 of the 880 lesions without melanoma would have an incorrect diagnosis of melanoma). In this fixed situation, adding dermoscopy to visual inspection would correctly identify an extra 19 melanomas (110 compared with 91), that would have been missed by just looking at the skin alone. In other words, more melanomas would be correctly identified.\nIn order to see how much better dermoscopy is in deciding if a skin lesion is not a melanoma when compared to just looking at the skin, we have to assume that both lead to the same number of melanomas being correctly diagnosed (in this case we assumed that 96 out of the 120 melanomas would be correctly diagnosed). In this situation, adding in dermoscopy to visual inspection would reduce the number of lesions being wrongly diagnosed as being melanoma by 176 (a reduction from 220 in the visual inspection group to 44 lesions in the dermoscopy group). In other words, more lesions that were not melanoma would be correctly identified and fewer people would end up being sent for surgery.\nValue of visual inspection checklists and effect of observer expertise\nThere was no evidence that use of a checklist to help dermoscopy interpretation changed diagnostic accuracy. Accuracy was better (with fewer missed melanomas and fewer people having unnecessary surgery), when the diagnosis was made by people with more clinical expertise and training."
}
]
|
query-laysum | 1351 | [
{
"role": "user",
"content": "Abstract: Background\nPrimary biliary cholangitis (previously primary biliary cirrhosis) is a chronic liver disease caused by the destruction of small intra-hepatic bile ducts resulting in stasis of bile (cholestasis), liver fibrosis, and liver cirrhosis. The optimal pharmacological treatment of primary biliary cholangitis remains uncertain.\nObjectives\nTo assess the comparative benefits and harms of different pharmacological interventions in the treatment of primary biliary cholangitis through a network meta-analysis and to generate rankings of the available pharmacological interventions according to their safety and efficacy. However, it was not possible to assess whether the potential effect modifiers were similar across different comparisons. Therefore, we did not perform the network meta-analysis and instead assessed the comparative benefits and harms of different interventions using standard Cochrane methodology.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2), MEDLINE, Embase, Science Citation Index Expanded, World Health Organization International Clinical Trials Registry Platform, and randomised controlled trials registers to February 2017 to identify randomised clinical trials on pharmacological interventions for primary biliary cholangitis.\nSelection criteria\nWe included only randomised clinical trials (irrespective of language, blinding, or publication status) in participants with primary biliary cholangitis. We excluded trials which included participants who had previously undergone liver transplantation. We considered any of the various pharmacological interventions compared with each other or with placebo or no intervention.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We calculated the odds ratio (OR) and rate ratio with 95% confidence intervals (CI) using both fixed-effect and random-effects models based on available-participant analysis with Review Manager 5. We assessed risk of bias according to Cochrane, controlled risk of random errors with Trial Sequential Analysis, and assessed the quality of the evidence using GRADE.\nMain results\nWe identified 74 trials including 5902 participants that met the inclusion criteria of this review. A total of 46 trials (4274 participants) provided information for one or more outcomes. All the trials were at high risk of bias in one or more domains. Overall, all the evidence was low or very low quality. The proportion of participants with symptoms varied from 19.9% to 100% in the trials that reported this information. The proportion of participants who were antimitochondrial antibody (AMA) positive ranged from 80.8% to 100% in the trials that reported this information. It appeared that most trials included participants who had not received previous treatments or included participants regardless of the previous treatments received. The follow-up in the trials ranged from 1 to 96 months.\nThe proportion of people with mortality (maximal follow-up) was higher in the methotrexate group versus the no intervention group (OR 8.83, 95% CI 1.01 to 76.96; 60 participants; 1 trial; low quality evidence). The proportion of people with mortality (maximal follow-up) was lower in the azathioprine group versus the no intervention group (OR 0.56, 95% CI 0.32 to 0.98; 224 participants; 2 trials; I2 = 0%; low quality evidence). 
However, it has to be noted that a large proportion of participants (25%) was excluded from the trial that contributed most participants to this analysis and the results were not reliable. There was no evidence of a difference in any of the remaining comparisons. The proportion of people with serious adverse events was higher in the D-penicillamine versus no intervention group (OR 28.77, 95% CI 1.57 to 526.67; 52 participants; 1 trial; low quality evidence). The proportion of people with serious adverse events was higher in the obeticholic acid plus ursodeoxycholic acid (UDCA) group versus the UDCA group (OR 3.58, 95% CI 1.02 to 12.51; 216 participants; 1 trial; low quality evidence). There was no evidence of a difference in any of the remaining comparisons for serious adverse events (proportion) or serious adverse events (number of events). None of the trials reported health-related quality of life at any time point.\nFunding: nine trials had no special funding or were funded by hospital or charities; 31 trials were funded by pharmaceutical companies; and 34 trials provided no information on source of funding.\nAuthors' conclusions\nBased on very low quality evidence, there is currently no evidence that any intervention is beneficial for primary biliary cholangitis. However, the follow-up periods in the trials were short and there is significant uncertainty in this issue. Further well-designed randomised clinical trials are necessary. Future randomised clinical trials ought to be adequately powered; performed in people who are generally seen in the clinic rather than in highly selected participants; employ blinding; avoid post-randomisation dropouts or planned cross-overs; should have sufficient follow-up period (e.g. five or 10 years or more); and use clinically important outcomes such as mortality, health-related quality of life, cirrhosis, decompensated cirrhosis, and liver transplantation. Alternatively, very large groups of participants should be randomised to facilitate shorter trial duration.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was no reliable evidence of a decrease in deaths with any of the interventions compared with no intervention. There was no evidence of a decrease in serious complications, or in complications of any severity, with any of the treatments compared with no treatment. None of the trials reported health-related quality of life (a measure of a person's satisfaction with their life and health) at any time point.\nOverall, there is currently no evidence of benefit of any intervention in primary biliary cholangitis. There is significant uncertainty in this issue and further high-quality randomised clinical trials are required."
}
]
|
query-laysum | 1352 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the Cochrane review published in 2002.\nColorectal cancer (CRC) is a major cause of morbidity and mortality in industrialised countries. Experimental evidence has supported the hypothesis that dietary fibre may protect against the development of CRC, although epidemiologic data have been inconclusive.\nObjectives\nTo assess the effect of dietary fibre on the recurrence of colorectal adenomatous polyps in people with a known history of adenomatous polyps and on the incidence of CRC compared to placebo. Further, to identify the reported incidence of adverse effects, such as abdominal pain or diarrhoea, that resulted from the fibre intervention.\nSearch methods\nWe identified randomised controlled trials (RCTs) from Cochrane Colorectal Cancer's Specialised Register, CENTRAL, MEDLINE and Embase (search date, 4 April 2016). We also searched ClinicalTrials.gov and WHO International Trials Registry Platform on October 2016.\nSelection criteria\nWe included RCTs or quasi-RCTs. The population were those having a history of adenomatous polyps, but no previous history of CRC, and repeated visualisation of the colon/rectum after at least two-years' follow-up. Dietary fibre was the intervention. The primary outcomes were the number of participants with: 1. at least one adenoma, 2. more than one adenoma, 3. at least one adenoma greater than or equal to 1 cm, or 4. a new diagnosis of CRC. The secondary outcome was the number of adverse events.\nData collection and analysis\nTwo reviewers independently extracted data, assessed trial quality and resolved discrepancies by consensus. We used risk ratios (RR) and risk difference (RD) with 95% confidence intervals (CI) to measure the effect. If statistical significance was reached, we reported the number needed to treat for an additional beneficial outcome (NNTB) or harmful outcome (NNTH). We combined the study data using the fixed-effect model if it was clinically, methodologically, and statistically reasonable.\nMain results\nWe included seven studies, of which five studies with 4798 participants provided data for analyses in this review. The mean ages of the participants ranged from 56 to 66 years. All participants had a history of adenomas, which had been removed to achieve a polyp-free colon at baseline. The interventions were wheat bran fibre, ispaghula husk, or a comprehensive dietary intervention with high fibre whole food sources alone or in combination. The comparators were low-fibre (2 to 3 g per day), placebo, or a regular diet. The combined data showed no statistically significant difference between the intervention and control groups for the number of participants with at least one adenoma (5 RCTs, n = 3641, RR 1.04, 95% CI 0.95 to 1.13, low-quality evidence), more than one adenoma (2 RCTs, n = 2542, RR 1.06, 95% CI 0.94 to 1.20, low-quality evidence), or at least one adenoma 1 cm or greater (4 RCTs, n = 3224, RR 0.99, 95% CI 0.82 to 1.20, low-quality evidence) at three to four years. The results on the number of participants diagnosed with colorectal cancer favoured the control group over the dietary fibre group (2 RCTS, n = 2794, RR 2.70, 95% CI 1.07 to 6.85, low-quality evidence). After 8 years of comprehensive dietary intervention, no statistically significant difference was found in the number of participants with at least one recurrent adenoma (1 RCT, n = 1905, RR 0.97, 95% CI 0.78 to 1.20), or with more than one adenoma (1 RCT, n = 1905, RR 0.89, 95% CI 0.64 to 1.24). 
More participants in the ispaghula husk group had at least one recurrent adenoma than in the control group (1 RCT, n = 376, RR 1.45, 95% CI 1.01 to 2.08). Other analyses by types of fibre intervention were not statistically significant. The overall dropout rate was over 16% in these trials with no reasons given for these losses. Sensitivity analysis incorporating these missing data shows that none of the results can be considered as robust; when the large numbers of participants lost to follow-up were assumed to have had an event or not, the results changed sufficiently to alter the conclusions that we would draw. Therefore, the reliability of the findings may have been compromised by these missing data (attrition bias) and should be interpreted with caution.\nAuthors' conclusions\nThere is a lack of evidence from existing RCTs to suggest that increased dietary fibre intake will reduce the recurrence of adenomatous polyps in those with a history of adenomatous polyps within a two to eight year period. However, these results may be unreliable and should be interpreted cautiously, not only because of the high rate of loss to follow-up, but also because adenomatous polyp is a surrogate outcome for the unobserved true endpoint CRC. Longer-term trials with higher dietary fibre levels are needed to enable confident conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Search Date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 4 April 2016."
}
]
|
query-laysum | 1353 | [
{
"role": "user",
"content": "Abstract: Background\nNuts contain a number of nutritional attributes which may be cardioprotective. A number of epidemiological studies have shown that nut consumption may have a beneficial effect on people who have cardiovascular disease (CVD) risk factors. However, results from randomised controlled trials (RCTs) are less consistent.\nObjectives\nTo determine the effectiveness of nut consumption for the primary prevention of CVD.\nSearch methods\nWe searched the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science Core Collection, CINAHL, Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment Database (HTA) and Health Economics Evaluations Database (HEED) up to 30 July 2015. We searched trial registers and reference lists of reviews for further studies. We did not apply any language restrictions.\nSelection criteria\nWe included RCTs of dietary advice to increase nut consumption or provision of nuts to increase consumption lasting at least three months and including healthy adults or adults at moderate and high risk of CVD. The comparison group was no intervention or minimal intervention. The outcomes of interest were CVD clinical events and CVD risk factors.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risk of bias in included trials.\nMain results\nWe included five trials (435 participants randomised) and one ongoing trial. One study is awaiting classification. All trials examined the provision of nuts to increase consumption rather than dietary advice. None of the included trials reported on the primary outcomes, CVD clinical events, but trials were small and short term. All five trials reported on CVD risk factors. Four of these trials provided data in a useable format for meta-analyses, but heterogeneity precluded meta-analysis for most of the analyses. Overall trials were judged to be at unclear risk of bias.\nThere were variable and inconsistent effects of nut consumption on CVD risk factors (lipid levels and blood pressure). Three trials monitored adverse events. One trial reported an allergic reaction to nuts and three trials reported no significant weight gain with increased nut consumption. None of the included trials reported on other secondary outcomes, occurrence of type 2 diabetes as a major risk factor for CVD, health-related quality of life and costs.\nAuthors' conclusions\nCurrently there is a lack of evidence for the effects of nut consumption on CVD clinical events in primary prevention and very limited evidence for the effects on CVD risk factors. No conclusions can be drawn and further high quality longer term and adequately powered trials are needed to answer the review question.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included five trials (435 participants), one of which had two treatment arms. All five trials investigated the effects of eating nuts. No studies were found which investigated the effect of giving advice to eat more nuts. None of the studies reported on deaths or cardiovascular events. None of the results showed a clear effect on total cholesterol levels or blood pressure. One study reported one case of an allergic reaction to nuts. Three studies reported no significant weight gain with increased nut consumption. No other adverse events were reported."
}
]
|
query-laysum | 1354 | [
{
"role": "user",
"content": "Abstract: Background\nDental professionals are well placed to help their patients stop using tobacco products. Large proportions of the population visit the dentist regularly. In addition, the adverse effects of tobacco use on oral health provide a context that dental professionals can use to motivate a quit attempt.\nObjectives\nTo assess the effectiveness, adverse events and oral health effects of tobacco cessation interventions offered by dental professionals.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group's Specialised Register up to February 2020.\nSelection criteria\nWe included randomised and quasi-randomised clinical trials assessing tobacco cessation interventions conducted by dental professionals in the dental practice or community setting, with at least six months of follow-up.\nData collection and analysis\nTwo review authors independently reviewed abstracts for potential inclusion and extracted data from included trials. We resolved disagreements by consensus. The primary outcome was abstinence from all tobacco use (e.g. cigarettes, smokeless tobacco) at the longest follow-up, using the strictest definition of abstinence reported. Individual study effects and pooled effects were summarised as risk ratios (RR) and 95% confidence intervals (CI), using Mantel-Haenszel random-effects models to combine studies where appropriate. We assessed statistical heterogeneity with the I2 statistic. We summarised secondary outcomes narratively.\nMain results\nTwenty clinical trials involving 14,897 participants met the criteria for inclusion in this review. Sixteen studies assessed the effectiveness of interventions for tobacco-use cessation in dental clinics and four assessed this in community (school or college) settings. Five studies included only smokeless tobacco users, and the remaining studies included either smoked tobacco users only, or a combination of both smoked and smokeless tobacco users. All studies employed behavioural interventions, with four offering nicotine treatment (nicotine replacement therapy (NRT) or e-cigarettes) as part of the intervention. We judged three studies to be at low risk of bias, one to be at unclear risk of bias, and the remaining 16 studies to be at high risk of bias.\nCompared with usual care, brief advice, very brief advice, or less active treatment, we found very low-certainty evidence of benefit from behavioural support provided by dental professionals, comprising either one session (RR 1.86, 95% CI 1.01 to 3.41; I2 = 66%; four studies, n = 6328), or more than one session (RR 1.90, 95% CI 1.17 to 3.11; I2 = 61%; seven studies, n = 2639), on abstinence from tobacco use at least six months from baseline. We found moderate-certainty evidence of benefit from behavioural interventions provided by dental professionals combined with the provision of NRT or e-cigarettes, compared with no intervention, usual care, brief, or very brief advice only (RR 2.76, 95% CI 1.58 to 4.82; I2 = 0%; four studies, n = 1221). We did not detect a benefit from multiple-session behavioural support provided by dental professionals delivered in a high school or college, instead of a dental setting (RR 1.51, 95% CI 0.86 to 2.65; I2 = 83%; three studies, n = 1020; very low-certainty evidence). 
Only one study reported adverse events or oral health outcomes, making it difficult to draw any conclusions.\nAuthors' conclusions\nThere is very low-certainty evidence that quit rates increase when dental professionals offer behavioural support to promote tobacco cessation. There is moderate-certainty evidence that tobacco abstinence rates increase in cigarette smokers if dental professionals offer behavioural support combined with pharmacotherapy. Further evidence is required to be certain of the size of the benefit and whether adding pharmacological interventions is more effective than behavioural support alone. Future studies should use biochemical validation of abstinence so as to preclude the risk of detection bias. There is insufficient evidence on whether these interventions lead to adverse effects, but no reasons to suspect that these effects would be specific to interventions delivered by dental professionals. There was insufficient evidence that interventions affected oral health.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of our review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Behavioural programmes involving dental professionals and NRT or e-cigarettes probably help more people to stop smoking. On average, 74 out of 1000 people stopped compared with 27 out of 1000 people who did not receive behavioural support (evidence from four studies in 1221 people).\nSeveral sessions of behavioural programmes involving dental professionals may help people to stop using tobacco. On average, 106 out of 1000 people stopped compared with 56 out of 1000 people who did not receive behavioural support (seven studies; 2639 people).\nA single session of a behavioural programme may also help people to stop: on average, 45 out of 1000 people stopped compared with 24 out of 1000 who did not receive behavioural support (four studies; 6328 people).\nWe are uncertain about the effect of advice and support from dental professionals in settings other than a dental practice (such as in a school or college), because the studies that tested this were too small to show a reliable effect (three studies; 1020 people).\nWe are uncertain if behavioural programmes given by dental professionals had any unwanted effects, because only one study reported this information."
}
]
|
query-laysum | 1355 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of an original Cochrane review published in The Cochrane Library, 2011, Issue 1.\nVulval intraepithelial neoplasia (VIN) is a pre-malignant condition of the vulval skin. This uncommon chronic skin condition of the vulva is associated with a high risk of recurrence and the potential to progress to vulval cancer. The condition is complicated by its multicentric and multifocal nature. The incidence of this condition appears to be rising, particularly in the younger age group. There is a lack of consensus on the optimal surgical treatment method. However, the rationale for the surgical treatment of VIN has been to treat the symptoms and exclude any underlying malignancy, with the continued aim of preserving the vulval anatomy and function. Repeated treatments affect local cosmesis and cause psychosexual morbidity, thus impacting the individual's quality of life.\nObjectives\nTo evaluate the effectiveness and safety of surgical interventions in women with high-grade VIN.\nSearch methods\nWe searched the Cochrane Gynaecological Cancer Group Trials Register and the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE up to 1 September 2015. We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of included studies, and contacted experts in the field.\nSelection criteria\nRandomised controlled trials (RCTs) that compared surgical interventions in adult women diagnosed with high-grade VIN.\nData collection and analysis\nTwo review authors independently abstracted data and assessed risk of bias.\nMain results\nWe identified one RCT, including 30 women, that met our inclusion criteria; this trial reported data on carbon dioxide (CO2) laser surgery versus cavitational ultrasonic surgical aspiration (CUSA). There were no differences in the risks of disease recurrence after one year of follow-up, pain, scarring, dysuria or burning, adhesions, infection, abnormal discharge or eschar between women who underwent CO2 laser surgery and those who received CUSA. The trial lacked statistical power, due to the small number of women in each group and the low number of observed events, but was at low risk of bias.\nAuthors' conclusions\nThe included trial lacked statistical power, due to the small number of women in each group and the low number of observed events. The absence of reliable evidence regarding the effectiveness and safety of the two surgical techniques for the management of VIN therefore precludes any definitive guidance or recommendations for clinical practice.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Due to the small number of participants with high-grade VIN included in the trial there was insufficient evidence to conclude that either surgical technique is superior over the other. This review highlights the need for further high-quality, well-designed trials."
}
]
|
query-laysum | 1356 | [
{
"role": "user",
"content": "Abstract: Background\nCryptococcal disease remains one of the main causes of death in HIV-positive people who have low cluster of differentiation 4 (CD4) cell counts. Currently, the World Health Organization (WHO) recommends screening HIV-positive people with low CD4 counts for cryptococcal antigenaemia (CrAg), and treating those who are CrAg-positive. This Cochrane Review examined the effects of an approach where those with low CD4 counts received regular prophylactic antifungals, such as fluconazole.\nObjectives\nTo assess the efficacy and safety of antifungal drugs for the primary prevention of cryptococcal disease in adults and children who are HIV-positive.\nSearch methods\nWe searched the CENTRAL, MEDLINE PubMed, Embase OVID, CINAHL EBSCOHost, WHO International Clinical Trials Registry Platform (WHO ICTRP), ClinicalTrials.gov, conference proceedings for the International AIDS Society (IAS) and Conference on Retroviruses and Opportunistic Infections (CROI), and reference lists of relevant articles up to 31 August 2017.\nSelection criteria\nRandomized controlled trials of adults and children, who are HIV-positive with low CD4 counts, without a current or prior diagnosis of cryptococcal disease that compared any antifungal drug taken as primary prophylaxis to placebo or standard care.\nData collection and analysis\nTwo review authors independently assessed eligibility and risk of bias, and extracted and analysed data. The primary outcome was all-cause mortality. We summarized all outcomes using risk ratios (RR) with 95% confidence intervals (CI). Where appropriate, we pooled data in meta-analyses. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nNine trials, enrolling 5426 participants, met the inclusion criteria of this review. Six trials administered fluconazole, while three trials administered itraconazole.\nAntifungal prophylaxis may make little or no difference to all-cause mortality (RR 1.07, 95% CI 0.80 to 1.43; 6 trials, 3220 participants; low-certainty evidence). For cryptococcal specific outcomes, prophylaxis probably reduces the risk of developing cryptococcal disease (RR 0.29, 95% CI 0.17 to 0.49; 7 trials, 5000 participants; moderate-certainty evidence), and probably reduces deaths due to cryptococcal disease (RR 0.29, 95% CI 0.11 to 0.72; 5 trials, 3813 participants; moderate-certainty evidence). Fluconazole prophylaxis may make no clear difference to the risk of developing clinically resistant Candida disease (RR 0.93, 95% CI 0.56 to 1.56; 3 trials, 1198 participants; low-certainty evidence); however, there may be an increased detection of fluconazole-resistant Candida isolates from surveillance cultures (RR 1.25, 95% CI 1.00 to 1.55; 3 trials, 539 participants; low-certainty evidence). Antifungal prophylaxis was generally well-tolerated with probably no clear difference in the risk of discontinuation of antifungal prophylaxis compared with placebo (RR 1.01, 95% CI 0.91 to 1.13; 4 trials, 2317 participants; moderate-certainty evidence). Antifungal prophylaxis may also make no difference to the risk of having any adverse event (RR 1.07, 95% CI 0.88 to 1.30; 4 trials, 2317 participants; low-certainty evidence), or a serious adverse event (RR 1.08, 95% CI 0.83 to 1.41; 4 trials, 888 participants; low-certainty evidence) when compared to placebo or standard care.\nAuthors' conclusions\nAntifungal prophylaxis reduced the risk of developing and dying from cryptococcal disease. 
Therefore, where CrAg screening is not available, antifungal prophylaxis may be used in patients with low CD4 counts at diagnosis and who are at risk of developing cryptococcal disease.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that regularly taking antifungal medication prevented HIV-positive people who had low CD4 counts from developing cryptococcal disease. We also found that primary prophylaxis probably reduced the number of people dying specifically from cryptococcal disease. However it probably did not reduce the number of people dying overall."
}
]
|
query-laysum | 1357 | [
{
"role": "user",
"content": "Abstract: Background\nSick newborn and preterm infants frequently are not able to be fed enterally, necessitating parenteral fluid and nutrition. Potential benefits of higher parenteral amino acid (AA) intake for improved nitrogen balance, growth, and infant health may be outweighed by the infant's ability to utilise high intake of parenteral AA, especially in the days after birth.\nObjectives\nThe primary objective is to determine whether higher versus lower intake of parenteral AA is associated with improved growth and disability-free survival in newborn infants receiving parenteral nutrition.\nSecondary objectives include determining whether:\n• higher versus lower starting or initial intake of amino acids is associated with improved growth and disability-free survival without side effects;\n• higher versus lower intake of amino acids at maximal intake is associated with improved growth and disability-free survival without side effects; and\n• increased amino acid intake should replace non-protein energy intake (glucose and lipid), should be added to non-protein energy intake, or should be provided simultaneously with non-protein energy intake.\nWe conducted subgroup analyses to look for any differences in the effects of higher versus lower intake of amino acids according to gestational age, birth weight, age at commencement, and condition of the infant, or concomitant increases in fluid intake.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (2 June 2017), MEDLINE (1966 to 2 June 2017), Embase (1980 to 2 June 2017), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to 2 June 2017). We also searched clinical trials databases, conference proceedings, and citations of articles.\nSelection criteria\nRandomised controlled trials of higher versus lower intake of AAs as parenteral nutrition in newborn infants. Comparisons of higher intake at commencement, at maximal intake, and at both commencement and maximal intake were performed.\nData collection and analysis\nTwo review authors independently selected trials, assessed trial quality, and extracted data from included studies. We performed fixed-effect analyses and expressed treatment effects as mean difference (MD), risk ratio (RR), and risk difference (RD) with 95% confidence intervals (CIs) and assessed the quality of evidence using the GRADE approach.\nMain results\nThirty-two studies were eligible for inclusion. Six were short-term biochemical tolerance studies, one was in infants at > 35 weeks' gestation, one in term surgical newborns, and three yielding no usable data. The 21 remaining studies reported clinical outcomes in very preterm or low birth weight infants for inclusion in meta-analysis for this review.\nHigher AA intake had no effect on mortality before hospital discharge (typical RR 0.90, 95% CI 0.69 to 1.17; participants = 1407; studies = 14; I2 = 0%; quality of evidence: low). Evidence was insufficient to show an effect on neurodevelopment and suggest no reported benefit (quality of evidence: very low). Higher AA intake was associated with a reduction in postnatal growth failure (< 10th centile) at discharge (typical RR 0.74, 95% CI 0.56 to 0.97; participants = 203; studies = 3; I2 = 22%; typical RD -0.15, 95% CI -0.27 to -0.02; number needed to treat for an additional beneficial outcome (NNTB) 7, 95% CI 4 to 50; quality of evidence: very low). 
Subgroup analyses found reduced postnatal growth failure in infants that commenced on high amino acid intake (> 2 to ≤ 3 g/kg/day); that occurred with increased amino acid and non-protein caloric intake; that commenced on intake at < 24 hours' age; and that occurred with early lipid infusion.\nHigher AA intake was associated with a reduction in days needed to regain birth weight (MD -1.14, 95% CI -1.73 to -0.56; participants = 950; studies = 13; I2 = 77%). Data show varying effects on growth parameters and no consistent effects on anthropometric z-scores at any time point, as well as increased growth in head circumference at discharge (MD 0.09 cm/week, 95% CI 0.06 to 0.13; participants = 315; studies = 4; I2 = 90%; quality of evidence: very low).\nHigher AA intake was not associated with effects on days to full enteral feeds, late-onset sepsis, necrotising enterocolitis, chronic lung disease, any or severe intraventricular haemorrhage, or periventricular leukomalacia. Data show a reduction in retinopathy of prematurity (typical RR 0.44, 95% CI 0.21 to 0.93; participants = 269; studies = 4; I2 = 31%; quality of evidence: very low) but no difference in severe retinopathy of prematurity.\nHigher AA intake was associated with an increase in positive protein balance and nitrogen balance. Potential biochemical intolerances were reported, including risk of abnormal blood urea nitrogen (typical RR 2.77, 95% CI 2.13 to 3.61; participants = 688; studies = 7; I2 = 6%; typical RD 0.26, 95% CI 0.20 to 0.32; number needed to treat for an additional harmful outcome (NNTH) 4; 95% CI 3 to 5; quality of evidence: high). Higher amino acid intake in parenteral nutrition was associated with a reduction in hyperglycaemia (> 8.3 mmol/L) (typical RR 0.69, 95% CI 0.49 to 0.96; participants = 505; studies = 5; I2 = 68%), although the incidence of hyperglycaemia treated with insulin was not different.\nAuthors' conclusions\nLow-quality evidence suggests that higher AA intake in parenteral nutrition does not affect mortality. Very low-quality evidence suggests that higher AA intake reduces the incidence of postnatal growth failure. Evidence was insufficient to show an effect on neurodevelopment. Very low-quality evidence suggests that higher AA intake reduces retinopathy of prematurity but not severe retinopathy of prematurity. Higher AA intake was associated with potentially adverse biochemical effects resulting from excess amino acid load, including azotaemia. Adequately powered trials in very preterm infants are required to determine the optimal intake of AA and effects of caloric balance in parenteral nutrition on the brain and on neurodevelopment.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included 21 studies that reported clinical outcomes in very preterm or low birth weight infants. Reporting was incomplete for all outcomes. Searches for studies were conducted in June 2017."
}
]
|
query-laysum | 1358 | [
{
"role": "user",
"content": "Abstract: Background\nPatellofemoral pain syndrome (PFPS) is a common knee problem, which particularly affects adolescents and young adults. PFPS, which is characterised by retropatellar (behind the kneecap) or peripatellar (around the kneecap) pain, is often referred to as anterior knee pain. The pain mostly occurs when load is put on the knee extensor mechanism when climbing stairs, squatting, running, cycling or sitting with flexed knees. Exercise therapy is often prescribed for this condition.\nObjectives\nTo assess the effects (benefits and harms) of exercise therapy aimed at reducing knee pain and improving knee function for people with patellofemoral pain syndrome.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (May 2014), the Cochrane Central Register of Controlled Trials (2014, Issue 4), MEDLINE (1946 to May 2014), EMBASE (1980 to 2014 Week 20), PEDro (to June 2014), CINAHL (1982 to May 2014) and AMED (1985 to May 2014), trial registers (to June 2014) and conference abstracts.\nSelection criteria\nRandomised and quasi-randomised trials evaluating the effect of exercise therapy on pain, function and recovery in adolescents and adults with patellofemoral pain syndrome. We included comparisons of exercise therapy versus control (e.g. no treatment) or versus another non-surgical therapy; or of different exercises or exercise programmes.\nData collection and analysis\nTwo review authors independently selected trials based on pre-defined inclusion criteria, extracted data and assessed risk of bias. Where appropriate, we pooled data using either fixed-effect or random-effects methods. We selected the following seven outcomes for summarising the available evidence: pain during activity (short-term: ≤ 3 months); usual pain (short-term); pain during activity (long-term: > 3 months); usual pain (long-term); functional ability (short-term); functional ability (long-term); and recovery (long-term).\nMain results\nIn total, 31 heterogeneous trials including 1690 participants with patellofemoral pain are included in this review. There was considerable between-study variation in patient characteristics (e.g. activity level) and diagnostic criteria for study inclusion (e.g. minimum duration of symptoms) and exercise therapy. Eight trials, six of which were quasi-randomised, were at high risk of selection bias. We assessed most trials as being at high risk of performance bias and detection bias, which resulted from lack of blinding.\nThe included studies, some of which contributed to more than one comparison, provided evidence for the following comparisons: exercise therapy versus control (10 trials); exercise therapy versus other conservative interventions (e.g. taping; eight trials evaluating different interventions); and different exercises or exercise programmes. The latter group comprised: supervised versus home exercises (two trials); closed kinetic chain (KC) versus open KC exercises (four trials); variants of closed KC exercises (two trials making different comparisons); other comparisons of other types of KC or miscellaneous exercises (five trials evaluating different interventions); hip and knee versus knee exercises (seven trials); hip versus knee exercises (two studies); and high- versus low-intensity exercises (one study). There were no trials testing exercise medium (land versus water) or duration of exercises. 
Where available, the evidence for each of seven main outcomes for all comparisons was of very low quality, generally due to serious flaws in design and small numbers of participants. This means that we are very unsure about the estimates. The evidence for the two largest comparisons is summarised here.\nExercise versus control. Pooled data from five studies (375 participants) for pain during activity (short-term) favoured exercise therapy: mean difference (MD) -1.46, 95% confidence interval (CI) -2.39 to -0.54. The CI included the minimal clinically important difference (MCID) of 1.3 (scale 0 to 10), indicating the possibility of a clinically important reduction in pain. The same finding applied for usual pain (short-term; two studies, 41 participants), pain during activity (long-term; two studies, 180 participants) and usual pain (long-term; one study, 94 participants). Pooled data from seven studies (483 participants) for functional ability (short-term) also favoured exercise therapy; standardised mean difference (SMD) 1.10, 95% CI 0.58 to 1.63. Re-expressed in terms of the Anterior Knee Pain Score (AKPS; 0 to 100), this result (estimated MD 12.21 higher, 95% CI 6.44 to 18.09 higher) included the MCID of 10.0, indicating the possibility of a clinically important improvement in function. The same finding applied for functional ability (long-term; three studies, 274 participants). Pooled data (two studies, 166 participants) indicated that, based on the 'recovery' of 250 per 1000 in the control group, 88 more (95% CI 2 fewer to 210 more) participants per 1000 recovered in the long term (12 months) as a result of exercise therapy.\nHip plus knee versus knee exercises. Pooled data from three studies (104 participants) for pain during activity (short-term) favoured hip and knee exercise: MD -2.20, 95% CI -3.80 to -0.60; the CI included a clinically important effect. The same applied for usual pain (short-term; two studies, 46 participants). One study (49 participants) found a clinically important reduction in pain during activity (long-term) for hip and knee exercise. Although tending to favour hip and knee exercises, the evidence for functional ability (short-term; four studies, 174 participants; and long-term; two studies, 78 participants) and recovery (one study, 29 participants) did not show that either approach was superior.\nAuthors' conclusions\nThis review has found very low quality but consistent evidence that exercise therapy for PFPS may result in clinically important reduction in pain and improvement in functional ability, as well as enhancing long-term recovery. However, there is insufficient evidence to determine the best form of exercise therapy and it is unknown whether this result would apply to all people with PFPS. There is some very low quality evidence that hip plus knee exercises may be more effective in reducing pain than knee exercise alone.\nFurther randomised trials are warranted but in order to optimise research effort and engender the large multicentre randomised trials that are required to inform practice, these should be preceded by research that aims to identify priority questions and attain agreement and, where practical, standardisation regarding diagnostic criteria and measurement of outcome.\n\nGiven the provided abstract, please respond to the following query: \"Results of the search and description of studies\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature until May 2014 and found 31 relevant studies involving 1690 participants with patellofemoral pain. The studies varied a lot in the characteristics of their study populations (e.g. activity levels and duration of their symptoms) and type of exercises. We assessed most trials as being at high risk of bias because the people, often the trial participants, who assessed outcome knew what treatment group they were in.\nThe included studies, some of which contributed to more than one comparison, provided evidence for the following comparisons: exercise therapy versus control (10 trials); exercise therapy versus other conservative interventions (e.g. applying adhesive tape over the knee; eight trials evaluating different interventions); and different exercises or exercise programmes. The latter group comprised: supervised versus home exercises (two trials); foot fixed (closed kinetic chain) versus foot free (open kinetic chain) exercises (four trials); variants of closed kinetic chain exercises (two trials making different comparisons; other comparisons of other types of kinetic chain or miscellaneous exercises (five trials evaluating different interventions); hip and knee versus knee exercises (seven trials); hip versus knee exercises (two studies); and high- versus low-intensity exercises (one study). There were no trials testing the exercise medium (land versus water) or duration of exercises."
}
]
|
query-laysum | 1359 | [
{
"role": "user",
"content": "Abstract: Background\nIn patients with acute lung injury (ALI) and acute respiratory distress syndrome (ARDS), mortality remains high. These patients require mechanical ventilation, which has been associated with ventilator-induced lung injury. High levels of positive end-expiratory pressure (PEEP) could reduce this condition and improve patient survival. This is an updated version of the review first published in 2013.\nObjectives\nTo assess the benefits and harms of high versus low levels of PEEP in adults with ALI and ARDS.\nSearch methods\nFor our previous review, we searched databases from inception until 2013. For this updated review, we searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, and the Web of Science from inception until May 2020. We also searched for ongoing trials (www.trialscentral.org; www.clinicaltrial.gov; www.controlled-trials.com), and we screened the reference lists of included studies.\nSelection criteria\nWe included randomised controlled trials that compared high versus low levels of PEEP in ALI and ARDS participants who were intubated and mechanically ventilated in intensive care for at least 24 hours.\nData collection and analysis\nTwo review authors assessed risk of bias and extracted data independently. We contacted investigators to identify additional published and unpublished studies. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included four new studies (1343 participants) in this review update. In total, we included 10 studies (3851 participants). We found evidence of risk of bias in six studies, and the remaining studies fulfilled all criteria for low risk of bias. In eight studies (3703 participants), a comparison was made between high and low levels of PEEP, with the same tidal volume in both groups. In the remaining two studies (148 participants), the tidal volume was different between high- and low-level groups.\nIn the main analysis, we assessed mortality occurring before hospital discharge only in studies that compared high versus low PEEP, with the same tidal volume in both groups. Evidence suggests that high PEEP may result in little to no difference in mortality compared to low PEEP (risk ratio (RR) 0.97, 95% confidence interval (CI) 0.90 to 1.04; I² = 15%; 7 studies, 3640 participants; moderate-certainty evidence).\nIn addition, high PEEP may result in little to no difference in barotrauma (RR 1.00, 95% CI 0.64 to 1.57; I² = 63%; 9 studies, 3791 participants; low-certainty evidence). High PEEP may improve oxygenation in patients up to the first and third days of mechanical ventilation (first day: mean difference (MD) 51.03, 95% CI 35.86 to 66.20; I² = 85%; 6 studies, 2594 participants; low-certainty evidence; third day: MD 50.32, 95% CI 34.92 to 65.72; I² = 83%; 6 studies, 2309 participants; low-certainty evidence) and probably improves oxygenation up to the seventh day (MD 28.52, 95% CI 20.82 to 36.21; I² = 0%; 5 studies, 1611 participants; moderate-certainty evidence). Evidence suggests that high PEEP results in little to no difference in the number of ventilator-free days (MD 0.45, 95% CI -2.02 to 2.92; I² = 81%; 3 studies, 1654 participants; low-certainty evidence). Available data were insufficient to pool the evidence for length of stay in the intensive care unit.\nAuthors' conclusions\nModerate-certainty evidence shows that high levels compared to low levels of PEEP do not reduce mortality before hospital discharge. 
Low-certainty evidence suggests that high levels of PEEP result in little to no difference in the risk of barotrauma. Low-certainty evidence also suggests that high levels of PEEP improve oxygenation up to the first and third days of mechanical ventilation, and moderate-certainty evidence indicates that high levels of PEEP improve oxygenation up to the seventh day of mechanical ventilation. As in our previous review, we found clinical heterogeneity - mainly within participant characteristics and methods of titrating PEEP - that does not allow us to draw definitive conclusions regarding the use of high levels of PEEP in patients with ALI and ARDS. Further studies should aim to determine the appropriate method of using high levels of PEEP and the advantages and disadvantages associated with high levels of PEEP in different ARDS and ALI patient populations.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find evidence from randomised controlled trials on the benefits and harms of high versus low levels of lung positive end-expiratory pressure (PEEP). We wanted to focus on adult patients with acute lung injury (ALI) and acute respiratory distress syndrome (ARDS). These patients have low oxygen levels in the blood and therefore reduced tissue oxygenation.\nPEEP is pressure in the lungs (alveolar pressure) at the end of each breath (expiration). In mechanically ventilated patients, PEEP works against passive emptying of the lung and collapse of air sacs (alveoli). Collapse of air sacs can lead to incomplete inflation of the lung on the next breath and reduced oxygenation. PEEP is used to improve oxygenation."
}
]
|
query-laysum | 1360 | [
{
"role": "user",
"content": "Abstract: Background\nCleft lip and palate is one of the most common birth defects and can cause difficulties with feeding, speech and hearing, as well as psychosocial problems. Treatment of orofacial clefts is prolonged; it typically commences after birth and lasts until the child reaches adulthood or even into adulthood. Residual deformities, functional disturbances, or both, are frequently seen in adults with a repaired cleft. Conventional orthognathic surgery, such as Le Fort I osteotomy, is often performed for the correction of maxillary hypoplasia. An alternative intervention is distraction osteogenesis, which achieves bone lengthening by gradual mechanical distraction. This review is an update of the original version that was published in 2016.\nObjectives\nTo provide evidence regarding the effects and long-term results of maxillary distraction osteogenesis compared to orthognathic surgery for the treatment of hypoplastic maxilla in people with cleft lip and palate.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health’s Trials Register (to 15 May 2018), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2018, Issue 4), MEDLINE Ovid (1946 to 15 May 2018), Embase Ovid (1980 to 15 May 2018), and LILACS BIREME Virtual Health Library (Latin American and Caribbean Health Science Information database; from 1982 to 15 May 2018). The US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing maxillary distraction osteogenesis to conventional Le Fort I osteotomy for the correction of cleft lip and palate maxillary hypoplasia in non-syndromic cleft patients aged 15 years or older.\nData collection and analysis\nTwo review authors assessed studies for eligibility. Two review authors independently extracted data and assessed the risk of bias in the included studies. We contacted trial authors for clarification or missing information whenever possible. All standard methodological procedures expected by Cochrane were used.\nMain results\nWe found six publications involving a total of 47 participants requiring maxillary advancement of 4 mm to 10 mm. All of them related to a single trial performed between 2002 and 2008 at the University of Hong Kong, but not all of the publications reported outcomes from all 47 participants. The study compared maxillary distraction osteogenesis with orthognathic surgery, and included participants from 13 to 45 years of age.\nResults and conclusions should be interpreted with caution given the fact that this was a single trial at high risk of bias, with a small sample size.\nThe main outcomes assessed were hard and soft tissue changes, skeletal relapse, effects on speech and velopharyngeal function, psychological status, and clinical morbidities.\nBoth interventions produced notable hard and soft tissue improvements. 
Nevertheless, the distraction group demonstrated a greater maxillary advancement, evaluated as the advancement of Subspinale A-point: a mean difference of 4.40 mm (95% CI 0.24 to 8.56) was recorded two years postoperatively.\nHorizontal relapse of the maxilla was significantly less in the distraction osteogenesis group five years after surgery. A total forward movement of A-point of 2.27 mm was noted for the distraction group, whereas a backward movement of 2.53 mm was recorded for the osteotomy group (mean difference 4.8 mm, 95% CI 0.41 to 9.19).\nNo statistically significant differences could be detected between the groups in speech outcomes, when evaluated through resonance (hypernasality) at 17 months postoperatively (RR 0.11, 95% CI 0.01 to 1.85) and nasal emissions at 17 months postoperatively (RR 3.00, 95% CI 0.14 to 66.53), or in velopharyngeal function at the same time point (RR 1.28, 95% CI 0.65 to 2.52).\nMaxillary distraction initially lowered social self-esteem at least until the distractors were removed, at three months postoperatively, compared to the osteotomy group, but this improved over time and the distraction group had higher satisfaction with life in the long term (two years after surgery) (MD 2.95, 95% CI 0.14 to 5.76).\nAdverse effects, in terms of clinical morbidities, included mainly occlusal relapse and mucosal infection, with the frequency being similar between groups (3/15 participants in the distraction osteogenesis group and 3/14 participants in the osteotomy group). There was no severe harm to any participant.\nAuthors' conclusions\nThis review found only one small randomised controlled trial concerning the effectiveness of distraction osteogenesis compared to conventional orthognathic surgery. The available evidence is of very low quality, which indicates that further research is likely to change the estimate of the effect. Based on measured outcomes, distraction osteogenesis may produce more satisfactory results; however, further prospective research comprising assessment of a larger sample size with participants with different facial characteristics is required to confirm possible true differences between interventions.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence on which this review is based is up to date as of 15 May 2018. We found six relevant articles to include in this review. All are related to one single study conducted in Hong Kong between 2002 and 2008. The study involved 47 participants aged 13 to 45 years of age. It investigated the effects of the two surgical procedures on alteration of face morphology, stability of upper jaw after surgery, speech and velopharyngeal function (ability to close the gap between the soft palate and nasal cavity to produce sound), psychological status of the participants and clinical side effects."
}
]
|
query-laysum | 1361 | [
{
"role": "user",
"content": "Abstract: Background\nRapid implementation of robotic transabdominal surgery has resulted in the need for re-evaluation of the most suitable form of anaesthesia. The overall objective of anaesthesia is to minimize perioperative risk and discomfort for patients both during and after surgery. Anaesthesia for patients undergoing robotic assisted surgery is different from anaesthesia for patients undergoing open or laparoscopic surgery; new anaesthetic concerns accompany robotic assisted surgery.\nObjectives\nTo assess outcomes related to the choice of total intravenous anaesthesia (TIVA) or inhalational anaesthesia for adults undergoing transabdominal robotic assisted laparoscopic gynaecological, urological or gastroenterological surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2016 Issue 5), Ovid MEDLINE (1946 to May 2016), Embase via OvidSP (1982 to May 2016), the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost (1982 to May 2016) and the Institute for Scientific Information (ISI) Web of Science (1956 to May 2016). We also searched the International Standard Randomized Controlled Trial Number (ISRCTN) Registry and Clinical trials gov for ongoing trials (May 2016).\nSelection criteria\nWe searched for randomized controlled trials (RCTs) including adults, aged 18 years and older, of both genders, treated with transabdominal robotic assisted laparoscopic gynaecological, urological or gastroenterological surgery and focusing on outcomes of TIVA or inhalational anaesthesia.\nData collection and analysis\nWe used standard methodological procedures of Cochrane. Study findings were not suitable for meta-analysis.\nMain results\nWe included three single-centre, two-arm RCTs involving 170 participants. We found one ongoing trial. All included participants were male and were undergoing radical robotic assisted laparoscopic radical prostatectomy (RALRP). The men were between 50 and 75 years of age and met criteria for American Society of Anesthesiologists physical classification scores (ASA) I, ll and III.\nWe found evidence showing no clinically meaningful differences in postoperative pain between the two types of anaesthetics (mean difference (MD) in visual analogue scale (VAS) scores at one to six hours was -2.20 (95% confidence interval (CI) -10.62 to 6.22; P = 0.61) in a sample of 62 participants from one study. Low-quality evidence suggests that propofol reduces postoperative nausea and vomiting (PONV) over the short term (one to six hours after surgery) after RALRP compared with inhalational anaesthesia (sevoflurane, desflurane) (MD -1.70, 95% CI -2.59 to -0.81; P = 0.0002).\nWe found low-quality evidence suggesting that propofol may prevent an increase in intraocular pressure (IOP) after pneumoperitoneum and steep Trendelenburg positioning compared with sevoflurane (MD -3.90, 95% CI -6.34 to -1.46; P = 0.002) with increased IOP from baseline to 30 minutes in steep Trendelenburg. However, it is unclear whether this surrogate outcome translates directly to clinical avoidance of ocular complications during surgery. No studies addressed the secondary outcomes of adverse effects, all-cause mortality, respiratory or circulatory complications, cognitive dysfunction, length of stay or costs. 
Overall the quality of evidence was low to very low, as all studies were small, single-centre trials providing unclear descriptions of methods.\nAuthors' conclusions\nIt is unclear which anaesthetic technique is superior - TIVA or inhalational - for transabdominal robotic assisted surgery in urology, gynaecology and gastroenterology, as existing evidence is scarce, is of low quality and has been generated from exclusively male patients undergoing robotic radical prostatectomy.\nAn ongoing trial, which includes participants of both genders with a focus on quality of recovery, might have an impact on future evidence related to this topic.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was considered low, as included studies were small and provided an unclear description of methods. All participants in these studies were men who were undergoing robotic assisted laparoscopic removal of the prostate. An ongoing trial includes men and women undergoing abdominal robotic surgery, also with a focus on the quality of recovery. Additional studies are needed."
}
]
|
query-laysum | 1362 | [
{
"role": "user",
"content": "Abstract: Background\nDepressive illness is common in old age. Prevalence in the community of case level depression is around 15% and milder forms of depression are more common. It causes significant distress and disability. The number of people over the age of 60 years is expected to double by 2050 and so interventions for this often long-term and recurrent condition are increasingly important. The causes of late-life depression differ from depression in younger adults and so it is appropriate to study it separately.\nThis is an update of a Cochrane review first published in 2012.\nObjectives\nTo examine the efficacy of antidepressants and psychological therapies in preventing the relapse and recurrence of depression in older people.\nSearch methods\nWe performed a search of the Cochrane Common Mental Disorders Group's specialised register (the CCMDCTR) to 13 July 2015. The CCMDCTR includes relevant randomised controlled trials (RCTs) from the following bibliographic databases: The Cochrane Library (all years), MEDLINE (1950 to date), EMBASE (1974 to date), and PsycINFO (1967 to date). We also conducted a cited reference search on 13 July 2015 of the Web of Science for citations of primary reports of included studies.\nSelection criteria\nBoth review authors independently selected studies. We included RCTs involving people aged 60 years and over successfully treated for an episode of depression and randomised to receive continuation and maintenance treatment with antidepressants, psychological therapies, or a combination.\nData collection and analysis\nTwo review authors independently extracted data. The primary outcome for benefit was recurrence rate of depression (reaching a cut-off on any depression rating scale) at 12 months and the primary outcome for harm was drop-outs at 12 months. Secondary outcomes included relapse/recurrence rates at other time points, global impression of change, social functioning, and deaths. We performed meta-analysis using risk ratio (RR) for dichotomous outcomes and mean difference (MD) for continuous outcomes, with 95% confidence intervals (CI).\nMain results\nThis update identified no further trials. Seven studies from the previous review met the inclusion criteria (803 participants). Six compared antidepressant medication with placebo; two involved psychological therapies. There was marked heterogeneity between the studies.\nComparing antidepressants with placebo on the primary outcome for benefit, there was a statistically significant difference favouring antidepressants in reducing recurrence compared with placebo at 12 months with a GRADE rating of low for quality of evidence (three RCTs, n = 247, RR 0.67, 95% CI 0.54 to 0.82; number needed to treat for an additional beneficial outcome (NNTB) 5). Comparing antidepressants with placebo on the primary outcome for harms, there was no difference in drop-out rates at 12 months' follow-up, with a GRADE rating of low.\nThere was no significant difference between psychological treatment and antidepressant in recurrence rates at 12 months (one RCT, n = 53) or between combination treatment and antidepressant alone at 12 months.\nAuthors' conclusions\nThis updated Cochrane review supports the findings of the original 2012 review. The long-term benefits and harm of continuing antidepressant medication in the prevention of recurrence of depression in older people are not clear and no firm treatment recommendations can be made on the basis of this review. 
Continuing antidepressant medication for 12 months appears to be helpful with no increased harms; however, this was based on only three small studies, relatively few participants, use of a range of antidepressant classes, and clinically heterogeneous populations. Comparisons at other time points did not reach statistical significance.\nData on psychological therapies and combined treatments were too limited to draw any conclusions on benefits and harms.\nThe quality of the evidence used in reaching these conclusions was low and the review does not, therefore, offer clear guidance to clinicians and patients on best practice and matching interventions to particular patient characteristics.\nOf note, we identified no new studies that evaluated pharmacological or psychological interventions in the continuation and maintenance treatment of depression in older people. We are aware of studies conducted since the previous review that included both older people and adults under the age of 65 years, but these fall outside of the remit of this review. We believe that there remains a need for studies solely recruiting older people, particularly the 'older old' with comorbid medical problems. However, these studies are likely to be challenging to conduct and may not, so far, have been prioritised by funders.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Depression is a common problem among older people and causes considerable disability. Even after successful treatment, it frequently recurs.\nThe causes of depression in older people are more diverse than in younger adults and, as the number of older people is steadily increasing, it is important to study the effects of treatments specifically in older adults. Treatments commonly used are antidepressant drugs and psychological treatments (talking treatments)."
}
]
|
query-laysum | 1363 | [
{
"role": "user",
"content": "Abstract: Background\nPanic disorder is characterised by the presence of recurrent unexpected panic attacks, discrete periods of fear or anxiety that have a rapid onset and include symptoms such as racing heart, chest pain, sweating and shaking. Panic disorder is common in the general population, with a lifetime prevalence of 1% to 4%. A previous Cochrane meta-analysis suggested that psychological therapy (either alone or combined with pharmacotherapy) can be chosen as a first-line treatment for panic disorder with or without agoraphobia. However, it is not yet clear whether certain psychological therapies can be considered superior to others. In order to answer this question, in this review we performed a network meta-analysis (NMA), in which we compared eight different forms of psychological therapy and three forms of a control condition.\nObjectives\nTo assess the comparative efficacy and acceptability of different psychological therapies and different control conditions for panic disorder, with or without agoraphobia, in adults.\nSearch methods\nWe conducted the main searches in the CCDANCTR electronic databases (studies and references registers), all years to 16 March 2015. We conducted complementary searches in PubMed and trials registries. Supplementary searches included reference lists of included studies, citation indexes, personal communication to the authors of all included studies and grey literature searches in OpenSIGLE. We applied no restrictions on date, language or publication status.\nSelection criteria\nWe included all relevant randomised controlled trials (RCTs) focusing on adults with a formal diagnosis of panic disorder with or without agoraphobia. We considered the following psychological therapies: psychoeducation (PE), supportive psychotherapy (SP), physiological therapies (PT), behaviour therapy (BT), cognitive therapy (CT), cognitive behaviour therapy (CBT), third-wave CBT (3W) and psychodynamic therapies (PD). We included both individual and group formats. Therapies had to be administered face-to-face. The comparator interventions considered for this review were: no treatment (NT), wait list (WL) and attention/psychological placebo (APP). For this review we considered four short-term (ST) outcomes (ST-remission, ST-response, ST-dropouts, ST-improvement on a continuous scale) and one long-term (LT) outcome (LT-remission/response).\nData collection and analysis\nAs a first step, we conducted a systematic search of all relevant papers according to the inclusion criteria. For each outcome, we then constructed a treatment network in order to clarify the extent to which each type of therapy and each comparison had been investigated in the available literature. Then, for each available comparison, we conducted a random-effects meta-analysis. Subsequently, we performed a network meta-analysis in order to synthesise the available direct evidence with indirect evidence, and to obtain an overall effect size estimate for each possible pair of therapies in the network. Finally, we calculated a probabilistic ranking of the different psychological therapies and control conditions for each outcome.\nMain results\nWe identified 1432 references; after screening, we included 60 studies in the final qualitative analyses. Among these, 54 (including 3021 patients) were also included in the quantitative analyses. 
With respect to the analyses for the first of our primary outcomes, (short-term remission), the most studied of the included psychological therapies was CBT (32 studies), followed by BT (12 studies), PT (10 studies), CT (three studies), SP (three studies) and PD (two studies).\nThe quality of the evidence for the entire network was found to be low for all outcomes. The quality of the evidence for CBT vs NT, CBT vs SP and CBT vs PD was low to very low, depending on the outcome. The majority of the included studies were at unclear risk of bias with regard to the randomisation process. We found almost half of the included studies to be at high risk of attrition bias and detection bias. We also found selective outcome reporting bias to be present and we strongly suspected publication bias. Finally, we found almost half of the included studies to be at high risk of researcher allegiance bias.\nOverall the networks appeared to be well connected, but were generally underpowered to detect any important disagreement between direct and indirect evidence. The results showed the superiority of psychological therapies over the WL condition, although this finding was amplified by evident small study effects (SSE). The NMAs for ST-remission, ST-response and ST-improvement on a continuous scale showed well-replicated evidence in favour of CBT, as well as some sparse but relevant evidence in favour of PD and SP, over other therapies. In terms of ST-dropouts, PD and 3W showed better tolerability over other psychological therapies in the short term. In the long term, CBT and PD showed the highest level of remission/response, suggesting that the effects of these two treatments may be more stable with respect to other psychological therapies. However, all the mentioned differences among active treatments must be interpreted while taking into account that in most cases the effect sizes were small and/or results were imprecise.\nAuthors' conclusions\nThere is no high-quality, unequivocal evidence to support one psychological therapy over the others for the treatment of panic disorder with or without agoraphobia in adults. However, the results show that CBT - the most extensively studied among the included psychological therapies - was often superior to other therapies, although the effect size was small and the level of precision was often insufficient or clinically irrelevant. In the only two studies available that explored PD, this treatment showed promising results, although further research is needed in order to better explore the relative efficacy of PD with respect to CBT. Furthermore, PD appeared to be the best tolerated (in terms of ST-dropouts) among psychological treatments. Unexpectedly, we found some evidence in support of the possible viability of non-specific supportive psychotherapy for the treatment of panic disorder; however, the results concerning SP should be interpreted cautiously because of the sparsity of evidence regarding this treatment and, as in the case of PD, further research is needed to explore this issue. Behaviour therapy did not appear to be a valid alternative to CBT as a first-line treatment for patients with panic disorder with or without agoraphobia.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases up to 16 March 2015 to find all studies (specifically randomised controlled trials) of talking therapies in the treatment of panic disorder with or without agoraphobia. To be included in the review studies had to include people with a clear diagnosis of panic disorder with or without agoraphobia.\nWe included 60 studies in the review. Fifty-four of the included studies (involving 3021 participants) were used in numerical analyses. The review authors rated the overall quality of the studies as low to very low."
}
]
|
query-laysum | 1364 | [
{
"role": "user",
"content": "Abstract: Background\nDiabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision.\nObjectives\nTo assess the effects of laser photocoagulation for diabetic retinopathy compared to no treatment or deferred treatment.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2014), EMBASE (January 1980 to June 2014), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 3 June 2014.\nSelection criteria\nWe included randomised controlled trials (RCTs) where people (or eyes) with diabetic retinopathy were randomly allocated to laser photocoagulation or no treatment or deferred treatment. We excluded trials of lasers that are no longer in routine use. Our primary outcome was the proportion of people who lost 15 or more letters (3 lines) of best-corrected visual acuity (BCVA) as measured on a logMAR chart at 12 months. We also looked at longer-term follow-up of the primary outcome at two to five years. Secondary outcomes included mean best corrected distance visual acuity, severe visual loss, mean near visual acuity, progression of diabetic retinopathy, quality of life, pain, loss of driving licence, vitreous haemorrhage and retinal detachment.\nData collection and analysis\nWe used standard methods as expected by the Cochrane Collaboration. Two review authors selected studies and extracted data.\nMain results\nWe identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy. Four of the studies evaluated panretinal photocoagulation with argon laser and one study investigated selective photocoagulation of non-perfusion areas. Three studies compared laser treatment to no treatment and two studies compared laser treatment to deferred laser treatment. All studies were at risk of performance bias because the treatment and control were different and no study attempted to produce a sham treatment. 
Three studies were considered to be at risk of attrition bias.\nAt 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).\nNone of the studies reported near visual acuity or patient-relevant outcomes such as quality of life, pain, loss of driving licence or adverse effects such as retinal detachment.\nWe did not plan any subgroup analyses, but there was a difference in baseline risk in participants with non-proliferative retinopathy compared to those with proliferative retinopathy. With the small number of included studies we could not do a formal subgroup analysis comparing effect in proliferative and non-proliferative retinopathy.\nAuthors' conclusions\nThis review provides evidence that laser photocoagulation is beneficial in treating proliferative diabetic retinopathy. We judged the evidence to be moderate or low, depending on the outcome. This is partly related to reporting of trials conducted many years ago, after which panretinal photocoagulation has become the mainstay of treatment of proliferative diabetic retinopathy.\nFuture Cochrane Reviews on variations in the laser treatment protocol are planned. Future research on laser photocoagulation should investigate the combination of laser photocoagulation with newer treatments such as anti-vascular endothelial growth factors (anti-VEGFs).\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We did not find very many studies and those we found were done quite a long time ago when standards of trial conduct and reporting were lower. We judged the quality of the evidence to be low, with the exception of the results for severe visual loss, which we judged to be moderate quality evidence."
}
]
|
query-laysum | 1365 | [
{
"role": "user",
"content": "Abstract: Background\nDietary antioxidants, such as vitamin C, in the epithelial lining and lining fluids of the lung may be beneficial in the reduction of oxidative damage (Arab 2002). They may therefore be of benefit in reducing symptoms of inflammatory airway conditions such as asthma, and may also be beneficial in reducing exercise-induced bronchoconstriction, which is a well-recognised feature of asthma and is considered a marker of airways inflammation. However, the association between dietary antioxidants and asthma severity or exercise-induced bronchoconstriction is not fully understood.\nObjectives\nTo examine the effects of vitamin C supplementation on exacerbations and health-related quality of life (HRQL) in adults and children with asthma or exercise-induced bronchoconstriction compared to placebo or no vitamin C.\nSearch methods\nWe identified trials from the Cochrane Airways Group's Specialised Register (CAGR). The Register contains trial reports identified through systematic searches of a number of bibliographic databases, and handsearching of journals and meeting abstracts. We also searched trial registry websites. The searches were conducted in December 2012.\nSelection criteria\nWe included randomised controlled trials (RCTs). We included both adults and children with a diagnosis of asthma. In separate analyses we considered trials with a diagnosis of exercise-induced bronchoconstriction (or exercise-induced asthma). We included trials comparing vitamin C supplementation with placebo, or vitamin C supplementation with no supplementation. We included trials where the asthma management of both treatment and control groups provided similar background therapy. The primary focus of the review is on daily vitamin C supplementation to prevent exacerbations and improve HRQL. The short-term use of vitamin C at the time of exacerbations or for cold symptoms in people with asthma are outside the scope of this review.\nData collection and analysis\nTwo review authors independently screened the titles and abstracts of potential studies, and subsequently screened full text study reports for inclusion. We used standard methods expected by The Cochrane Collaboration.\nMain results\nA total of 11 trials with 419 participants met our inclusion criteria. In 10 studies the participants were adults and only one was in children. Reporting of study design was inadequate to determine risk of bias for most of the studies and poor availability of data for our key outcomes may indicate some selective outcome reporting. Four studies were parallel-group and the remainder were cross-over studies. Eight studies included people with asthma and three studies included 40 participants with exercise-induced asthma. Five studies reported results using single-dose regimes prior to bronchial challenges or exercise tests. There was marked heterogeneity in vitamin C dosage regimes used in the selected studies, compounding the difficulties in carrying out meaningful analyses.\nOne study on 201 adults with asthma reported no significant difference in our primary outcome, health-related quality of life (HRQL), and overall the quality of this evidence was low. There were no data available to evaluate the effects of vitamin C supplementation on our other primary outcome, exacerbations in adults. One small study reported data on asthma exacerbations in children and there were no exacerbations in either the vitamin C or placebo groups (very low quality evidence). 
In another study conducted in 41 adults, exacerbations were not defined according to our criteria and the data were not available in a format suitable for evaluation by our methods. Lung function and symptoms data were contributed by single studies. We rated the quality of this evidence as moderate, but further research is required to assess any clinical implications that may be related to the changes in these parameters. In each of these outcomes there was no significant difference between vitamin C and placebo. No adverse events at all were reported; again this is very low quality evidence.\nStudies in exercise-induced bronchoconstriction suggested some improvement in lung function measures with vitamin C supplementation, but these studies were few and very small, with limited data, and we judged the quality of the evidence to be low.\nAuthors' conclusions\nCurrently, evidence is not available to provide a robust assessment on the use of vitamin C in the management of asthma or exercise-induced bronchoconstriction. Further research is very likely to have an important impact on our confidence in the estimates of effect and is likely to change the estimates. There is no indication currently that vitamin C can be recommended as a therapeutic agent in asthma. There was some indication that vitamin C was helpful in exercise-induced breathlessness in terms of lung function and symptoms; however, as these findings were provided only by small studies, they are inconclusive. Most published studies to date are too small and inconsistent to provide guidance. Well-designed trials with good quality clinical endpoints, such as exacerbation rates and health-related quality of life scores, are required.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review considered the question of whether vitamin C may be helpful for people with asthma or exercise-induced breathlessness."
}
]
|
query-laysum | 1366 | [
{
"role": "user",
"content": "Abstract: Background\nAlveolar bone changes following tooth extraction can compromise prosthodontic rehabilitation. Alveolar ridge preservation (ARP) has been proposed to limit these changes and improve prosthodontic and aesthetic outcomes when implants are used. This is an update of the Cochrane Review first published in 2015.\nObjectives\nTo assess the clinical effects of various materials and techniques for ARP after tooth extraction compared with extraction alone or other methods of ARP, or both, in patients requiring dental implant placement following healing of extraction sockets.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 19 March 2021), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2021, Issue 2), MEDLINE Ovid (1946 to 19 March 2021), Embase Ovid (1980 to 19 March 2021), Latin American and Caribbean Health Science Information database (1982 to 19 March 2021), Web of Science Conference Proceedings (1990 to 19 March 2021), Scopus (1966 to 19 March 2021), ProQuest Dissertations and Theses (1861 to 19 March 2021), and OpenGrey (to 19 March 2021). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. A number of journals were also handsearched.\nSelection criteria\nWe included all randomised controlled trials (RCTs) on the use of ARP techniques with at least six months of follow-up. Outcome measures were: changes in the bucco-lingual/palatal width of alveolar ridge, changes in the vertical height of the alveolar ridge, complications, the need for additional augmentation prior to implant placement, aesthetic outcomes, implant failure rates, peri-implant marginal bone level changes, changes in probing depths and clinical attachment levels at teeth adjacent to the extraction site, and complications of future prosthodontic rehabilitation.\nData collection and analysis\nWe selected trials, extracted data, and assessed risk of bias in duplicate. Corresponding authors were contacted to obtain missing information. We estimated mean differences (MD) for continuous outcomes and risk ratios (RR) for dichotomous outcomes, with 95% confidence intervals (95% CI). We constructed 'Summary of findings' tables to present the main findings and assessed the certainty of the evidence using GRADE.\nMain results\nWe included 16 RCTs conducted worldwide involving a total of 524 extraction sites in 426 adult participants. We assessed four trials as at overall high risk of bias and the remaining trials at unclear risk of bias. 
Nine new trials were included in this update with six new trials in the category of comparing ARP to extraction alone and three new trials in the category of comparing different grafting materials.\nARP versus extraction: from the seven trials comparing xenografts with extraction alone, there is very low-certainty evidence of a reduction in loss of alveolar ridge width (MD -1.18 mm, 95% CI -1.82 to -0.54; P = 0.0003; 6 studies, 184 participants, 201 extraction sites), and height (MD -1.35 mm, 95% CI -2.00 to -0.70; P < 0.0001; 6 studies, 184 participants, 201 extraction sites) in favour of xenografts, but we found no evidence of a significant difference for the need for additional augmentation (RR 0.68, 95% CI 0.29 to 1.62; P = 0.39; 4 studies, 154 participants, 156 extraction sites; very low-certainty evidence) or in implant failure rate (RR 1.00, 95% CI 0.07 to 14.90; 2 studies, 70 participants/extraction sites; very low-certainty evidence). From the one trial comparing alloplasts versus extraction, there is very low-certainty evidence of a reduction in loss of alveolar ridge height (MD -3.73 mm; 95% CI -4.05 to -3.41; 1 study, 15 participants, 60 extraction sites) in favour of alloplasts. This single trial did not report any other outcomes.\nDifferent grafting materials for ARP: three trials (87 participants/extraction sites) compared allograft versus xenograft, two trials (37 participants, 55 extraction sites) compared alloplast versus xenograft, one trial (20 participants/extraction sites) compared alloplast with and without membrane, one trial (18 participants, 36 extraction sites) compared allograft with and without synthetic cell-binding peptide P-15, and one trial (30 participants/extraction sites) compared alloplast with different particle sizes. The evidence was of very low certainty for most comparisons and insufficient to determine whether there are clinically significant differences between different ARP techniques based on changes in alveolar ridge width and height, the need for additional augmentation prior to implant placement, or implant failure.\nWe found no trials which evaluated parameters relating to clinical attachment levels, specific aesthetic or prosthodontic outcomes for any of the comparisons.\nNo serious adverse events were reported with most trials indicating that the procedure was uneventful. Among the complications reported were delayed healing with partial exposure of the buccal plate at suture removal, postoperative pain and swelling, moderate glazing, redness and oedema, membrane exposure and partial loss of grafting material, and fibrous adhesions at the cervical part of previously preserved sockets, for the comparisons xenografts versus extraction, allografts versus xenografts, alloplasts versus xenografts, and alloplasts with and without membrane.\nAuthors' conclusions\nARP techniques may minimise the overall changes in residual ridge height and width six months after extraction but the evidence is very uncertain. There is lack of evidence of any differences in the need for additional augmentation at the time of implant placement, implant failure, aesthetic outcomes, or any other clinical parameters due to lack of information or long-term data. There is no evidence of any clinically significant difference between different grafting materials and barriers used for ARP. 
Further long-term RCTs that follow CONSORT guidelines (www.consort-statement.org) are necessary.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We do not know what works best to preserve jaw bone after tooth extraction. It is not clear :\n- if ARP is better than no ARP; or\n- if some ARP materials and techniques are better than others.\nThis is because the evidence currently available is not sufficiently robust.\nFuture studies that report their methods clearly and follow people over long periods will help to strengthen the evidence and draw conclusions."
}
]
|
query-laysum | 1367 | [
{
"role": "user",
"content": "Abstract: Background\nIndications for the use of negative pressure wound therapy (NPWT) are broad and include prophylaxis for surgical site infections (SSIs). Existing evidence for the effectiveness of NPWT on postoperative wounds healing by primary closure remains uncertain.\nObjectives\nTo assess the effects of NPWT for preventing SSI in wounds healing through primary closure, and to assess the cost-effectiveness of NPWT in wounds healing through primary closure.\nSearch methods\nIn January 2021, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries and references of included studies, systematic reviews and health technology reports. There were no restrictions on language, publication date or study setting.\nSelection criteria\nWe included trials if they allocated participants to treatment randomly and compared NPWT with any other type of wound dressing, or compared one type of NPWT with another.\nData collection and analysis\nAt least two review authors independently assessed trials using predetermined inclusion criteria. We carried out data extraction, assessment using the Cochrane risk of bias tool, and quality assessment according to Grading of Recommendations, Assessment, Development and Evaluations methodology. Our primary outcomes were SSI, mortality, and wound dehiscence.\nMain results\nIn this fourth update, we added 18 new randomised controlled trials (RCTs) and one new economic study, resulting in a total of 62 RCTs (13,340 included participants) and six economic studies. Studies evaluated NPWT in a wide range of surgeries, including orthopaedic, obstetric, vascular and general procedures. All studies compared NPWT with standard dressings. Most studies had unclear or high risk of bias for at least one key domain.\nPrimary outcomes\nEleven studies (6384 participants) which reported mortality were pooled. There is low-certainty evidence showing there may be a reduced risk of death after surgery for people treated with NPWT (0.84%) compared with standard dressings (1.17%) but there is uncertainty around this as confidence intervals include risk of benefits and harm; risk ratio (RR) 0.78 (95% CI 0.47 to 1.30; I2 = 0%). Fifty-four studies reported SSI; 44 studies (11,403 participants) were pooled. There is moderate-certainty evidence that NPWT probably results in fewer SSIs (8.7% of participants) than treatment with standard dressings (11.75%) after surgery; RR 0.73 (95% CI 0.63 to 0.85; I2 = 29%). Thirty studies reported wound dehiscence; 23 studies (8724 participants) were pooled. There is moderate-certainty evidence that there is probably little or no difference in dehiscence between people treated with NPWT (6.62%) and those treated with standard dressing (6.97%), although there is imprecision around the estimate that includes risk of benefit and harms; RR 0.97 (95% CI 0.82 to 1.16; I2 = 4%). Evidence was downgraded for imprecision, risk of bias, or a combination of these.\nSecondary outcomes\nThere is low-certainty evidence for the outcomes of reoperation and seroma; in each case, confidence intervals included both benefit and harm. There may be a reduced risk of reoperation favouring the standard dressing arm, but this was imprecise: RR 1.13 (95% CI 0.91 to 1.41; I2 = 2%; 18 trials; 6272 participants). 
There may be a reduced risk of seroma for people treated with NPWT but this is imprecise: the RR was 0.82 (95% CI 0.65 to 1.05; I2 = 0%; 15 trials; 5436 participants). For skin blisters, there is low-certainty evidence that people treated with NPWT may be more likely to develop skin blisters compared with those treated with standard dressing (RR 3.55; 95% CI 1.43 to 8.77; I2 = 74%; 11 trials; 5015 participants). The effect of NPWT on haematoma is uncertain (RR 0.79; 95% CI 0.48 to 1.30; I2 = 0%; 17 trials; 5909 participants; very low-certainty evidence). There is low-certainty evidence of little to no difference in reported pain between groups. Pain was measured in different ways and most studies could not be pooled; this GRADE assessment is based on all fourteen trials reporting pain; the pooled RR for the proportion of participants who experienced pain was 1.52 (95% CI 0.20 to 11.31; I2 = 34%; two studies; 632 participants).\nCost-effectiveness\nSix economic studies, based wholly or partially on trials in our review, assessed the cost-effectiveness of NPWT compared with standard care. They considered NPWT in five indications: caesarean sections in obese women; surgery for lower limb fracture; knee/hip arthroplasty; coronary artery bypass grafts; and vascular surgery with inguinal incisions. They calculated quality-adjusted life-years or an equivalent, and produced estimates of the treatments' relative cost-effectiveness. The reporting quality was good but the evidence certainty varied from moderate to very low. There is moderate-certainty evidence that NPWT in surgery for lower limb fracture was not cost-effective at any threshold of willingness-to-pay and that NPWT is probably cost-effective in obese women undergoing caesarean section. Other studies found low or very low-certainty evidence indicating that NPWT may be cost-effective for the indications assessed.\nAuthors' conclusions\nPeople with primary closure of their surgical wound and treated prophylactically with NPWT following surgery probably experience fewer SSIs than people treated with standard dressings but there is probably no difference in wound dehiscence (moderate-certainty evidence). There may be a reduced risk of death after surgery for people treated with NPWT compared with standard dressings but there is uncertainty around this as confidence intervals include risk of benefit and harm (low-certainty evidence). People treated with NPWT may experience more instances of skin blistering compared with standard dressing treatment (low-certainty evidence). There are no clear differences in other secondary outcomes where most evidence is low or very low-certainty. Assessments of cost-effectiveness of NPWT produced differing results in different indications. There is a large number of ongoing studies, the results of which may change the findings of this review. Decisions about use of NPWT should take into account surgical indication and setting and consider evidence for all outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "NPWT is a sealed wound dressing attached to a vacuum pump which sucks fluid away from the wound. This may assist with wound healing and reduce risk of infection. We wanted to find out whether NPWT was better compared with standard wound dressings (usually gauze and tape) for treating people who had had surgery and had wounds which had been closed. We were interested in complications including SSI; wound reopening (dehiscence) and death for any reason. We also looked at several other outcomes including the need for another operation, the need to be admitted to hospital again, pain, quality of life, as well as some specific types of complications (haematoma (an accumulation of blood under the skin), seroma (an accumulation of clear fluid under the skin), skin blisters).\nWe also wanted to find out whether NPWT was cost-effective for treating people who had closed surgical wounds."
}
]
|
query-laysum | 1368 | [
{
"role": "user",
"content": "Abstract: Background\nReducing the transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a global priority. Contact tracing identifies people who were recently in contact with an infected individual, in order to isolate them and reduce further transmission. Digital technology could be implemented to augment and accelerate manual contact tracing. Digital tools for contact tracing may be grouped into three areas: 1) outbreak response; 2) proximity tracing; and 3) symptom tracking. We conducted a rapid review on the effectiveness of digital solutions to contact tracing during infectious disease outbreaks.\nObjectives\nTo assess the benefits, harms, and acceptability of personal digital contact tracing solutions for identifying contacts of an identified positive case of an infectious disease.\nSearch methods\nAn information specialist searched the literature from 1 January 2000 to 5 May 2020 in CENTRAL, MEDLINE, and Embase. Additionally, we screened the Cochrane COVID-19 Study Register.\nSelection criteria\nWe included randomised controlled trials (RCTs), cluster-RCTs, quasi-RCTs, cohort studies, cross-sectional studies and modelling studies, in general populations. We preferentially included studies of contact tracing during infectious disease outbreaks (including COVID-19, Ebola, tuberculosis, severe acute respiratory syndrome virus, and Middle East respiratory syndrome) as direct evidence, but considered comparative studies of contact tracing outside an outbreak as indirect evidence.\nThe digital solutions varied but typically included software (or firmware) for users to install on their devices or to be uploaded to devices provided by governments or third parties. Control measures included traditional or manual contact tracing, self-reported diaries and surveys, interviews, other standard methods for determining close contacts, and other technologies compared to digital solutions (e.g. electronic medical records).\nData collection and analysis\nTwo review authors independently screened records and all potentially relevant full-text publications. One review author extracted data for 50% of the included studies, another extracted data for the remaining 50%; the second review author checked all the extracted data. One review author assessed quality of included studies and a second checked the assessments. Our outcomes were identification of secondary cases and close contacts, time to complete contact tracing, acceptability and accessibility issues, privacy and safety concerns, and any other ethical issue identified. Though modelling studies will predict estimates of the effects of different contact tracing solutions on outcomes of interest, cohort studies provide empirically measured estimates of the effects of different contact tracing solutions on outcomes of interest. We used GRADE-CERQual to describe certainty of evidence from qualitative data and GRADE for modelling and cohort studies.\nMain results\nWe identified six cohort studies reporting quantitative data and six modelling studies reporting simulations of digital solutions for contact tracing. Two cohort studies also provided qualitative data. Three cohort studies looked at contact tracing during an outbreak, whilst three emulated an outbreak in non-outbreak settings (schools). 
Of the six modelling studies, four evaluated digital solutions for contact tracing in simulated COVID-19 scenarios, while two simulated close contacts in non-specific outbreak settings.\nModelling studies\nTwo modelling studies provided low-certainty evidence of a reduction in secondary cases using digital contact tracing (measured as average number of secondary cases per index case - effective reproductive number (Reff)). One study estimated an 18% reduction in Reff with digital contact tracing compared to self-isolation alone, and a 35% reduction with manual contact-tracing. Another found a reduction in Reff for digital contact tracing compared to self-isolation alone (26% reduction) and a reduction in Reff for manual contact tracing compared to self-isolation alone (53% reduction). However, the certainty of evidence was reduced by unclear specifications of their models, and assumptions about the effectiveness of manual contact tracing (assumed 95% to 100% of contacts traced), and the proportion of the population who would have the app (53%).\nCohort studies\nTwo cohort studies provided very low-certainty evidence of a benefit of digital over manual contact tracing. During an Ebola outbreak, contact tracers using an app found twice as many close contacts per case on average than those using paper forms. Similarly, after a pertussis outbreak in a US hospital, researchers found that radio-frequency identification identified 45 close contacts but searches of electronic medical records found 13. The certainty of evidence was reduced by concerns about imprecision, and serious risk of bias due to the inability of contact tracing study designs to identify the true number of close contacts.\nOne cohort study provided very low-certainty evidence that an app could reduce the time to complete a set of close contacts. The certainty of evidence for this outcome was affected by imprecision and serious risk of bias. Contact tracing teams reported that digital data entry and management systems were faster to use than paper systems and possibly less prone to data loss.\nTwo studies from lower- or middle-income countries, reported that contact tracing teams found digital systems simpler to use and generally preferred them over paper systems; they saved personnel time, reportedly improved accuracy with large data sets, and were easier to transport compared with paper forms. However, personnel faced increased costs and internet access problems with digital compared to paper systems.\nDevices in the cohort studies appeared to have privacy from contacts regarding the exposed or diagnosed users. However, there were risks of privacy breaches from snoopers if linkage attacks occurred, particularly for wearable devices.\nAuthors' conclusions\nThe effectiveness of digital solutions is largely unproven as there are very few published data in real-world outbreak settings. Modelling studies provide low-certainty evidence of a reduction in secondary cases if digital contact tracing is used together with other public health measures such as self-isolation. Cohort studies provide very low-certainty evidence that digital contact tracing may produce more reliable counts of contacts and reduce time to complete contact tracing. 
Digital solutions may have equity implications for at-risk populations with poor internet access and poor access to digital technology.\nStronger primary research on the effectiveness of contact tracing technologies is needed, including research into use of digital solutions in conjunction with manual systems, as digital solutions are unlikely to be used alone in real-world settings. Future studies should consider access to and acceptability of digital solutions, and the resultant impact on equity. Studies should also make acceptability and uptake a primary research question, as privacy concerns can prevent uptake and effectiveness of these technologies.\n\nGiven the provided abstract, please respond to the following query: \"Why is this question important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The global COVID-19 pandemic highlights the importance of accurate and timely contact tracing. Contact tracing tells people that they may have been near someone with - or showing symptoms of - an infectious disease, allowing them to self-isolate and helping to stop the spread of infection. Traditionally, contact tracing begins with notification that someone has an infectious disease. They are asked to recall their contacts, going back two to three days before symptom onset. This is time-consuming and may not always give a complete picture, so digital aids could help contact tracers.\nDigital contact tracing uses technology to track and trace contacts. Individuals download an app onto their smartphones and record location and symptom information, or their devices might use location-finding technology, like Bluetooth or GPS (global positioning system). If the user is infected, the technology identifies close contacts and/or secondary infections (people to whom they passed the disease), and informs people whom they have been near. The technology identifies where the infection was passed on and its duration (the context).\nHowever, problems may occur where access to technology is limited, in low-income settings or for elderly people, for example. Also, some people see it as an invasion of privacy and are suspicious of how their data will be used.\nWe wanted to know whether digital contact tracing, compared to manual contact tracing, is effective in reducing the spread of infection, as measured by secondary infections, identifying close contacts, tracing a complete set of contacts, and identifying the context of infection."
}
]
|
query-laysum | 1369 | [
{
"role": "user",
"content": "Abstract: Background\nDiarrhoea is a major cause of death and disease, especially among young children in low-income countries. In these settings, many infectious agents associated with diarrhoea are spread through water contaminated with faeces.\nIn remote and low-income settings, source-based water quality improvement includes providing protected groundwater (springs, wells, and bore holes), or harvested rainwater as an alternative to surface sources (rivers and lakes). Point-of-use water quality improvement interventions include boiling, chlorination, flocculation, filtration, or solar disinfection, mainly conducted at home.\nObjectives\nTo assess the effectiveness of interventions to improve water quality for preventing diarrhoea.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register (11 November 2014), CENTRAL (the Cochrane Library, 7 November 2014), MEDLINE (1966 to 10 November 2014), EMBASE (1974 to 10 November 2014), and LILACS (1982 to 7 November 2014). We also handsearched relevant conference proceedings, contacted researchers and organizations working in the field, and checked references from identified studies through 11 November 2014.\nSelection criteria\nRandomized controlled trials (RCTs), quasi-RCTs, and controlled before-and-after studies (CBA) comparing interventions aimed at improving the microbiological quality of drinking water with no intervention in children and adults.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We used meta-analyses to estimate pooled measures of effect, where appropriate, and investigated potential sources of heterogeneity using subgroup analyses. We assessed the quality of evidence using the GRADE approach.\nMain results\nForty-five cluster-RCTs, two quasi-RCTs, and eight CBA studies, including over 84,000 participants, met the inclusion criteria. Most included studies were conducted in low- or middle-income countries (LMICs) (50 studies) with unimproved water sources (30 studies) and unimproved or unclear sanitation (34 studies). The primary outcome in most studies was self-reported diarrhoea, which is at high risk of bias due to the lack of blinding in over 80% of the included studies.\nSource-based water quality improvements\nThere is currently insufficient evidence to know if source-based improvements such as protected wells, communal tap stands, or chlorination/filtration of community sources consistently reduce diarrhoea (one cluster-RCT, five CBA studies, very low quality evidence). We found no studies evaluating reliable piped-in water supplies delivered to households.\nPoint-of-use water quality interventions\nOn average, distributing water disinfection products for use at the household level may reduce diarrhoea by around one quarter (Home chlorination products: RR 0.77, 95% CI 0.65 to 0.91; 14 trials, 30,746 participants, low quality evidence; flocculation and disinfection sachets: RR 0.69, 95% CI 0.58 to 0.82, four trials, 11,788 participants, moderate quality evidence). However, there was substantial heterogeneity in the size of the effect estimates between individual studies.\nPoint-of-use filtration systems probably reduce diarrhoea by around a half (RR 0.48, 95% CI 0.38 to 0.59, 18 trials, 15,582 participants, moderate quality evidence). 
Important reductions in diarrhoea episodes were shown with ceramic filters, biosand systems and LifeStraw® filters; (Ceramic: RR 0.39, 95% CI 0.28 to 0.53; eight trials, 5763 participants, moderate quality evidence; Biosand: RR 0.47, 95% CI 0.39 to 0.57; four trials, 5504 participants, moderate quality evidence; LifeStraw®: RR 0.69, 95% CI 0.51 to 0.93; three trials, 3259 participants, low quality evidence). Plumbed in filters have only been evaluated in high-income settings (RR 0.81, 95% CI 0.71 to 0.94, three trials, 1056 participants, fixed effects model).\nIn low-income settings, solar water disinfection (SODIS) by distribution of plastic bottles with instructions to leave filled bottles in direct sunlight for at least six hours before drinking probably reduces diarrhoea by around a third (RR 0.62, 95% CI 0.42 to 0.94; four trials, 3460 participants, moderate quality evidence).\nIn subgroup analyses, larger effects were seen in trials with higher adherence, and trials that provided a safe storage container. In most cases, the reduction in diarrhoea shown in the studies was evident in settings with improved and unimproved water sources and sanitation.\nAuthors' conclusions\nInterventions that address the microbial contamination of water at the point-of-use may be important interim measures to improve drinking water quality until homes can be reached with safe, reliable, piped-in water connections. The average estimates of effect for each individual point-of-use intervention generally show important effects. Comparisons between these estimates do not provide evidence of superiority of one intervention over another, as such comparisons are confounded by the study setting, design, and population.\nFurther studies assessing the effects of household connections and chlorination at the point of delivery will help improve our knowledge base. As evidence suggests effectiveness improves with adherence, studies assessing programmatic approaches to optimising coverage and long-term utilization of these interventions among vulnerable populations could also help strategies to improve health outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What the research says\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is currently insufficient evidence to know if source-based improvements in water supplies, such as protected wells and communal tap stands or treatment of communal supplies, consistently reduce diarrhoea in low-income settings (very low quality evidence). We found no trials evaluating reliable piped-in water supplies to people's homes.\nOn average, distributing disinfection products for use in the home may reduce diarrhoea by around one quarter in the case of chlorine products (low quality evidence), and around a third in the case of flocculation and disinfection sachets (moderate quality evidence).\nWater filtration at home probably reduces diarrhoea by around a half (moderate quality evidence), and effects were consistently seen with ceramic filters (moderate quality evidence), biosand systems (moderate quality evidence) and LifeStraw® filters (low quality evidence). Plumbed-in filtration has only been evaluated in high-income settings (low quality evidence).\nIn low-income settings, distributing plastic bottles with instructions to leave filled bottles in direct sunlight for at least six hours before drinking probably reduces diarrhoea by around a third (moderate quality evidence).\nResearch assessing the effects of household connections and chlorination at the point of delivery will help improve our knowledge base. Evidence indicates the more people use the various interventions for improving water quality, the larger the effects, so research into practical approaches to increase coverage and help assure long term use of them in poor groups will help improve impact."
}
]
|
query-laysum | 1370 | [
{
"role": "user",
"content": "Abstract: Background\nDepression and anxiety are the most frequent indication for which antidepressants are prescribed. Long-term antidepressant use is driving much of the internationally observed rise in antidepressant consumption. Surveys of antidepressant users suggest that 30% to 50% of long-term antidepressant prescriptions had no evidence-based indication. Unnecessary use of antidepressants puts people at risk of adverse events. However, high-certainty evidence is lacking regarding the effectiveness and safety of approaches to discontinuing long-term antidepressants.\nObjectives\nTo assess the effectiveness and safety of approaches for discontinuation versus continuation of long-term antidepressant use for depressive and anxiety disorders in adults.\nSearch methods\nWe searched all databases for randomised controlled trials (RCTs) until January 2020.\nSelection criteria\nWe included RCTs comparing approaches to discontinuation with continuation of antidepressants (or usual care) for people with depression or anxiety who are prescribed antidepressants for at least six months. Interventions included discontinuation alone (abrupt or taper), discontinuation with psychological therapy support, and discontinuation with minimal intervention. Primary outcomes were successful discontinuation rate,relapse (as defined by authors of the original study), withdrawal symptoms, and adverse events. Secondary outcomes were depressive symptoms, anxiety symptoms, quality of life, social and occupational functioning, and severity of illness.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane.\nMain results\nWe included 33 studies involving 4995 participants. Nearly all studies were conducted in a specialist mental healthcare service and included participants with recurrent depression (i.e. two or more episodes of depression prior to discontinuation). All included trials were at high risk of bias. The main limitation of the review is bias due to confounding withdrawal symptoms with symptoms of relapse of depression. Withdrawal symptoms (such as low mood, dizziness) may have an effect on almost every outcome including adverse events, quality of life, social functioning, and severity of illness.\nAbrupt discontinuation\nThirteen studies reported abrupt discontinuation of antidepressant.\nVery low-certainty evidence suggests that abrupt discontinuation without psychological support may increase risk of relapse (hazard ratio (HR) 2.09, 95% confidence interval (CI) 1.59 to 2.74; 1373 participants, 10 studies) and there is insufficient evidence of its effect on adverse events (odds ratio (OR) 1.11, 95% CI 0.62 to 1.99; 1012 participants, 7 studies; I² = 37%) compared to continuation of antidepressants, without specific assessment of withdrawal symptoms. Evidence about the effects of abrupt discontinuation on withdrawal symptoms (1 study) is very uncertain.\nNone of these studies included successful discontinuation rate as a primary endpoint.\nDiscontinuation by \"taper\"\nEighteen studies examined discontinuation by \"tapering\" (one week or longer). 
Most tapering regimens lasted four weeks or less.\nVery low-certainty evidence suggests that \"tapered\" discontinuation may lead to higher risk of relapse (HR 2.97, 95% CI 2.24 to 3.93; 1546 participants, 13 studies) with no or little difference in adverse events (OR 1.06, 95% CI 0.82 to 1.38; 1479 participants, 7 studies; I² = 0%) compared to continuation of antidepressants, without specific assessment of withdrawal symptoms. Evidence about the effects of discontinuation on withdrawal symptoms (1 study) is very uncertain.\nDiscontinuation with psychological support\nFour studies reported discontinuation with psychological support. Very low-certainty evidence suggests that initiation of preventive cognitive therapy (PCT), or MBCT, combined with \"tapering\" may result in successful discontinuation rates of 40% to 75% in the discontinuation group (690 participants, 3 studies). Data from control groups in these studies were requested but are not yet available.\nLow-certainty evidence suggests that discontinuation combined with psychological intervention may result in no or little effect on relapse (HR 0.89, 95% CI 0.66 to 1.19; 690 participants, 3 studies) compared to continuation of antidepressants. Withdrawal symptoms were not measured. Pooling data on adverse events was not possible due to insufficient information (3 studies).\nDiscontinuation with minimal intervention\nLow-certainty evidence from one study suggests that a letter to the general practitioner (GP) to review antidepressant treatment may result in no or little effect on successful discontinuation rate compared to usual care (6% versus 8%; 146 participants, 1 study) or on relapse (relapse rate 26% vs 13%; 146 participants, 1 study). No data on withdrawal symptoms nor adverse events were provided.\nNone of the studies used low-intensity psychological interventions such as online support or a changed pharmaceutical formulation that allows tapering with low doses over several months. Insufficient data were available for the majority of people taking antidepressants in the community (i.e. those with only one or no prior episode of depression), for people aged 65 years and older, and for people taking antidepressants for anxiety.\nAuthors' conclusions\nCurrently, relatively few studies have focused on approaches to discontinuation of long-term antidepressants. We cannot make any firm conclusions about effects and safety of the approaches studied to date. The true effect and safety are likely to be substantially different from the data presented due to assessment of relapse of depression that is confounded by withdrawal symptoms. All other outcomes are confounded with withdrawal symptoms. Most tapering regimens were limited to four weeks or less. In the studies with rapid tapering schemes the risk of withdrawal symptoms may be similar to studies using abrupt discontinuation which may influence the effectiveness of the interventions. Nearly all data come from people with recurrent depression.\nThere is an urgent need for trials that adequately address withdrawal confounding bias, and carefully distinguish relapse from withdrawal symptoms. Future studies should report key outcomes such as successful discontinuation rate and should include populations with one or no prior depression episodes in primary care, older people, and people taking antidepressants for anxiety and use tapering schemes longer than 4 weeks.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our search up until January 2020 found 33 studies, which included 4995 adult participants. Most people in these studies had recurrent depression (two or more episodes of depression before stopping antidepressants), and most were recruited from specialist mental healthcare services. In 13 studies, the antidepressant was stopped abruptly; in 18 studies, the antidepressant was stopped gradually over several weeks (\"tapering\"); in four studies, psychological therapy support was also offered; and in one study, stopping was prompted by a letter to the GP with guidance on tapering. Most tapering schemes lasted four weeks or less."
}
]
|
query-laysum | 1371 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is a common life-shortening genetic condition caused by a variant in the cystic fibrosis transmembrane conductance regulator (CFTR) protein. A class II CFTR variant F508del (found in up to 90% of people with CF (pwCF)) is the commonest CF-causing variant. The faulty protein is degraded before reaching the cell membrane, where it needs to be to effect transepithelial salt transport. The F508del variant lacks meaningful CFTR function and corrective therapy could benefit many pwCF. Therapies in this review include single correctors and any combination of correctors and potentiators.\nObjectives\nTo evaluate the effects of CFTR correctors (with or without potentiators) on clinically important benefits and harms in pwCF of any age with class II CFTR mutations (most commonly F508del).\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Cystic Fibrosis Trials Register, reference lists of relevant articles and online trials registries. Most recent search: 14 October 2020.\nSelection criteria\nRandomised controlled trials (RCTs) (parallel design) comparing CFTR correctors to control in pwCF with class II mutations.\nData collection and analysis\nTwo authors independently extracted data, assessed risk of bias and evidence quality (GRADE); we contacted investigators for additional data.\nMain results\nWe included 19 RCTs (2959 participants), lasting between 1 day and 24 weeks; an extension of two lumacaftor-ivacaftor studies provided additional 96-week safety data (1029 participants). We assessed eight monotherapy RCTs (344 participants) (4PBA, CPX, lumacaftor, cavosonstat and FDL169), six dual-therapy RCTs (1840 participants) (lumacaftor-ivacaftor or tezacaftor-ivacaftor) and five triple-therapy RCTs (775 participants) (elexacaftor-tezacaftor-ivacaftor or VX-659-tezacaftor-ivacaftor); below we report only the data from elexacaftor-tezacaftor-ivacaftor combination which proceeded to Phase 3 trials. In 14 RCTs participants had F508del/F508del genotypes, in three RCTs F508del/minimal function (MF) genotypes and in two RCTs both genotypes.\nRisk of bias judgements varied across different comparisons. Results from 11 RCTs may not be applicable to all pwCF due to age limits (e.g. adults only) or non-standard design (converting from monotherapy to combination therapy).\nMonotherapy\nInvestigators reported no deaths or clinically-relevant improvements in quality of life (QoL). There was insufficient evidence to determine any important effects on lung function.\nNo placebo-controlled monotherapy RCT demonstrated differences in mild, moderate or severe adverse effects (AEs); the clinical relevance of these events is difficult to assess with their variety and small number of participants (all F508del/F508del).\nDual therapy\nInvestigators reported no deaths (moderate- to high-quality evidence). QoL scores (respiratory domain) favoured both lumacaftor-ivacaftor and tezacaftor-ivacaftor therapy compared to placebo at all time points. At six months lumacaftor 600 mg or 400 mg (both once daily) plus ivacaftor improved Cystic Fibrosis Questionnaire (CFQ) scores slightly compared with placebo (mean difference (MD) 2.62 points (95% confidence interval (CI) 0.64 to 4.59); 1061 participants; high-quality evidence). A similar effect was observed for twice-daily lumacaftor (200 mg) plus ivacaftor (250 mg), but with low-quality evidence (MD 2.50 points (95% CI 0.10 to 5.10)). 
The mean increase in CFQ scores with twice-daily tezacaftor (100 mg) and ivacaftor (150 mg) was approximately five points (95% CI 3.20 to 7.00; 504 participants; moderate-quality evidence). At six months, the relative change in forced expiratory volume in one second (FEV1) % predicted improved with combination therapies compared to placebo by: 5.21% with once-daily lumacaftor-ivacaftor (95% CI 3.61% to 6.80%; 504 participants; high-quality evidence); 2.40% with twice-daily lumacaftor-ivacaftor (95% CI 0.40% to 4.40%; 204 participants; low-quality evidence); and 6.80% with tezacaftor-ivacaftor (95% CI 5.30 to 8.30%; 520 participants; moderate-quality evidence).\nMore pwCF reported early transient breathlessness with lumacaftor-ivacaftor, odds ratio 2.05 (99% CI 1.10 to 3.83; 739 participants; high-quality evidence). Over 120 weeks (initial study period and follow-up) systolic blood pressure rose by 5.1 mmHg and diastolic blood pressure by 4.1 mmHg with twice-daily 400 mg lumacaftor-ivacaftor (80 participants; high-quality evidence). The tezacaftor-ivacaftor RCTs did not report these adverse effects.\nPulmonary exacerbation rates decreased in pwCF receiving additional therapies to ivacaftor compared to placebo: lumacaftor 600 mg hazard ratio (HR) 0.70 (95% CI 0.57 to 0.87; 739 participants); lumacaftor 400 mg, HR 0.61 (95% CI 0.49 to 0.76; 740 participants); and tezacaftor, HR 0.64 (95% CI, 0.46 to 0.89; 506 participants) (moderate-quality evidence).\nTriple therapy\nThree RCTs of elexacaftor to tezacaftor-ivacaftor in pwCF (aged 12 years and older with either one or two F508del variants) reported no deaths (high-quality evidence). All other evidence was graded as moderate quality. In 403 participants with F508del/minimal function (MF) elexacaftor-tezacaftor-ivacaftor improved QoL respiratory scores (MD 20.2 points (95% CI 16.2 to 24.2)) and absolute change in FEV1 (MD 14.3% predicted (95% CI 12.7 to 15.8)) compared to placebo at 24 weeks. At four weeks in 107 F508del/F508del participants, elexacaftor-tezacaftor-ivacaftor improved QoL respiratory scores (17.4 points (95% CI 11.9 to 22.9)) and absolute change in FEV1 (MD 10.0% predicted (95% CI 7.5 to 12.5)) compared to tezacaftor-ivacaftor. There was probably little or no difference in the number or severity of AEs between elexacaftor-tezacaftor-ivacaftor and placebo or control (moderate-quality evidence). In 403 F508del/F508del participants, there was a longer time to protocol-defined pulmonary exacerbation with elexacaftor-tezacaftor-ivacaftor over 24 weeks (moderate-quality evidence).\nAuthors' conclusions\nThere is insufficient evidence that corrector monotherapy has clinically important effects in pwCF with F508del/F508del.\nBoth dual therapies (lumacaftor-ivacaftor, tezacaftor-ivacaftor) result in similar improvements in QoL and respiratory function with lower pulmonary exacerbation rates. Lumacaftor-ivacaftor was associated with an increase in early transient shortness of breath and longer-term increases in blood pressure (not observed for tezacaftor-ivacaftor). Tezacaftor-ivacaftor has a better safety profile, although data are lacking in children under 12 years. 
In this population, lumacaftor-ivacaftor had an important impact on respiratory function with no apparent immediate safety concerns; but this should be balanced against the blood pressure increase and shortness of breath seen in longer-term adult data when considering lumacaftor-ivacaftor.\nThere is high-quality evidence of clinical efficacy with probably little or no difference in AEs for triple (elexacaftor-tezacaftor-ivacaftor) therapy in pwCF with one or two F508del variants aged 12 years or older. Further RCTs are required in children (under 12 years) and those with more severe respiratory function.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of the evidence varied from low to high. There were generally few details about study design, so we could not make clear judgements on potential biases. We had fewer concerns with the larger more recent studies. In 10 studies, some results were not analysed or reported. Some findings were based on studies that were too small to show important effects and for nine studies the results may not be applicable to all pwCF due to the age (i.e. only adults or only children studied) or an unusual design (pwCF received monotherapy and then combination therapy)."
}
]
|
query-laysum | 1372 | [
{
"role": "user",
"content": "Abstract: Background\nThe World Health Organization (WHO) End TB Strategy stresses universal access to drug susceptibility testing (DST). DST determines whether Mycobacterium tuberculosis bacteria are susceptible or resistant to drugs. Xpert MTB/XDR is a rapid nucleic acid amplification test for detection of tuberculosis and drug resistance in one test suitable for use in peripheral and intermediate level laboratories. In specimens where tuberculosis is detected by Xpert MTB/XDR, Xpert MTB/XDR can also detect resistance to isoniazid, fluoroquinolones, ethionamide, and amikacin.\nObjectives\nTo assess the diagnostic accuracy of Xpert MTB/XDR for pulmonary tuberculosis in people with presumptive pulmonary tuberculosis (having signs and symptoms suggestive of tuberculosis, including cough, fever, weight loss, night sweats).\nTo assess the diagnostic accuracy of Xpert MTB/XDR for resistance to isoniazid, fluoroquinolones, ethionamide, and amikacin in people with tuberculosis detected by Xpert MTB/XDR, irrespective of rifampicin resistance (whether or not rifampicin resistance status was known) and with known rifampicin resistance.\nSearch methods\nWe searched multiple databases to 23 September 2021. We limited searches to 2015 onwards as Xpert MTB/XDR was launched in 2020.\nSelection criteria\nDiagnostic accuracy studies using sputum in adults with presumptive or confirmed pulmonary tuberculosis. Reference standards were culture (pulmonary tuberculosis detection); phenotypic DST (pDST), genotypic DST (gDST),composite (pDST and gDST) (drug resistance detection).\nData collection and analysis\nTwo review authors independently reviewed reports for eligibility and extracted data using a standardized form. For multicentre studies, we anticipated variability in the type and frequency of mutations associated with resistance to a given drug at the different centres and considered each centre as an independent study cohort for quality assessment and analysis. We assessed methodological quality with QUADAS-2, judging risk of bias separately for each target condition and reference standard. For pulmonary tuberculosis detection, owing to heterogeneity in participant characteristics and observed specificity estimates, we reported a range of sensitivity and specificity estimates and did not perform a meta-analysis. For drug resistance detection, we performed meta-analyses by reference standard using bivariate random-effects models. Using GRADE, we assessed certainty of evidence of Xpert MTB/XDR accuracy for detection of resistance to isoniazid and fluoroquinolones in people irrespective of rifampicin resistance and to ethionamide and amikacin in people with known rifampicin resistance, reflecting real-world situations. We used pDST, except for ethionamide resistance where we considered gDST a better reference standard.\nMain results\nWe included two multicentre studies from high multidrug-resistant/rifampicin-resistant tuberculosis burden countries, reporting on six independent study cohorts, involving 1228 participants for pulmonary tuberculosis detection and 1141 participants for drug resistance detection. The proportion of participants with rifampicin resistance in the two studies was 47.9% and 80.9%. For tuberculosis detection, we judged high risk of bias for patient selection owing to selective recruitment. 
For ethionamide resistance detection, we judged high risk of bias for the reference standard, both pDST and gDST, though we considered gDST a better reference standard.\nPulmonary tuberculosis detection\n- Xpert MTB/XDR sensitivity range, 98.3% (96.1 to 99.5) to 98.9% (96.2 to 99.9) and specificity range, 22.5% (14.3 to 32.6) to 100.0% (86.3 to 100.0); median prevalence of pulmonary tuberculosis 91.3%, (interquartile range, 89.3% to 91.8%), (2 studies; 1 study reported on 2 cohorts, 1228 participants; very low-certainty evidence, sensitivity and specificity).\nDrug resistance detection\nPeople irrespective of rifampicin resistance\n- Isoniazid resistance: Xpert MTB/XDR summary sensitivity and specificity (95% confidence interval (CI)) were 94.2% (87.5 to 97.4) and 98.5% (92.6 to 99.7) against pDST, (6 cohorts, 1083 participants, moderate-certainty evidence, sensitivity and specificity).\n- Fluoroquinolone resistance: Xpert MTB/XDR summary sensitivity and specificity were 93.2% (88.1 to 96.2) and 98.0% (90.8 to 99.6) against pDST, (6 cohorts, 1021 participants; high-certainty evidence, sensitivity; moderate-certainty evidence, specificity).\nPeople with known rifampicin resistance\n- Ethionamide resistance: Xpert MTB/XDR summary sensitivity and specificity were 98.0% (74.2 to 99.9) and 99.7% (83.5 to 100.0) against gDST, (4 cohorts, 434 participants; very low-certainty evidence, sensitivity and specificity).\n- Amikacin resistance: Xpert MTB/XDR summary sensitivity and specificity were 86.1% (75.0 to 92.7) and 98.9% (93.0 to 99.8) against pDST, (4 cohorts, 490 participants; low-certainty evidence, sensitivity; high-certainty evidence, specificity).\nOf 1000 people with pulmonary tuberculosis, detected as tuberculosis by Xpert MTB/XDR:\n- where 50 have isoniazid resistance, 61 would have an Xpert MTB/XDR result indicating isoniazid resistance: of these, 14/61 (23%) would not have isoniazid resistance (FP); 939 (of 1000 people) would have a result indicating the absence of isoniazid resistance: of these, 3/939 (0%) would have isoniazid resistance (FN).\n- where 50 have fluoroquinolone resistance, 66 would have an Xpert MTB/XDR result indicating fluoroquinolone resistance: of these, 19/66 (29%) would not have fluoroquinolone resistance (FP); 934 would have a result indicating the absence of fluoroquinolone resistance: of these, 3/934 (0%) would have fluoroquinolone resistance (FN).\n- where 300 have ethionamide resistance, 296 would have an Xpert MTB/XDR result indicating ethionamide resistance: of these, 2/296 (1%) would not have ethionamide resistance (FP); 704 would have a result indicating the absence of ethionamide resistance: of these, 6/704 (1%) would have ethionamide resistance (FN).\n- where 135 have amikacin resistance, 126 would have an Xpert MTB/XDR result indicating amikacin resistance: of these, 10/126 (8%) would not have amikacin resistance (FP); 874 would have a result indicating the absence of amikacin resistance: of these, 19/874 (2%) would have amikacin resistance (FN).\nAuthors' conclusions\nReview findings suggest that, in people determined by Xpert MTB/XDR to be tuberculosis-positive, Xpert MTB/XDR provides accurate results for detection of isoniazid and fluoroquinolone resistance and can assist with selection of an optimised treatment regimen. Given that Xpert MTB/XDR targets a limited number of resistance variants in specific genes, the test may perform differently in different settings. Findings in this review should be interpreted with caution. 
Sensitivity for detection of ethionamide resistance was based only on Xpert MTB/XDR detection of mutations in the inhA promoter region, a known limitation. High risk of bias limits our confidence in Xpert MTB/XDR accuracy for pulmonary tuberculosis.\nXpert MTB/XDR's impact will depend on its ability to detect tuberculosis (required for DST), prevalence of resistance to a given drug, health care infrastructure, and access to other tests.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How accurate is Xpert MTB/XDR for detecting pulmonary tuberculosis and resistance to tuberculosis drugs (i.e. isoniazid, fluoroquinolones, ethionamide, and amikacin) in adults?"
}
]
|
query-laysum | 1373 | [
{
"role": "user",
"content": "Abstract: Background\nInappropriate polypharmacy is a particular concern in older people and is associated with negative health outcomes. Choosing the best interventions to improve appropriate polypharmacy is a priority, so that many medicines may be used to achieve better clinical outcomes for patients. This is the third update of this Cochrane Review.\nObjectives\nTo assess the effects of interventions, alone or in combination, in improving the appropriate use of polypharmacy and reducing medication-related problems in older people.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and two trials registers up until 13 January 2021, together with handsearching of reference lists to identify additional studies. We ran updated searches in February 2023 and have added potentially eligible studies to 'Characteristics of studies awaiting classification'.\nSelection criteria\nFor this update, we included randomised trials only. Eligible studies described interventions affecting prescribing aimed at improving appropriate polypharmacy (four or more medicines) in people aged 65 years and older, which used a validated tool to assess prescribing appropriateness. These tools can be classified as either implicit tools (judgement-based/based on expert professional judgement) or explicit tools (criterion-based, comprising lists of drugs to be avoided in older people).\nData collection and analysis\nFour review authors independently reviewed abstracts of eligible studies, and two authors extracted data and assessed the risk of bias of the included studies. We pooled study-specific estimates, and used a random-effects model to yield summary estimates of effect and 95% confidence intervals (CIs). We assessed the overall certainty of evidence for each outcome using the GRADE approach.\nMain results\nWe identified 38 studies, which includes an additional 10 in this update. The included studies consisted of 24 randomised trials and 14 cluster-randomised trials. Thirty-six studies examined complex, multi-faceted interventions of pharmaceutical care (i.e. the responsible provision of medicines to improve patients' outcomes), in a variety of settings. Interventions were delivered by healthcare professionals such as general physicians, pharmacists, nurses and geriatricians, and most were conducted in high-income countries. Assessments using the Cochrane risk of bias tool found that there was a high and/or unclear risk of bias across a number of domains. Based on the GRADE approach, the overall certainty of evidence for each pooled outcome ranged from low to very low.\nIt is uncertain whether pharmaceutical care improves medication appropriateness (as measured by an implicit tool) (mean difference (MD) -5.66, 95% confidence interval (CI) -9.26 to -2.06; I2 = 97%; 8 studies, 947 participants; very low-certainty evidence). It is uncertain whether pharmaceutical care reduces the number of potentially inappropriate medications (PIMs) (standardised mean difference (SMD) -0.19, 95% CI -0.34 to -0.05; I2 = 67%; 9 studies, 2404 participants; very low-certainty evidence). It is uncertain whether pharmaceutical care reduces the proportion of patients with one or more PIM (risk ratio (RR) 0.81, 95% CI 0.68 to 0.98; I2 = 84%; 13 studies, 4534 participants; very low-certainty evidence). 
Pharmaceutical care may slightly reduce the number of potential prescribing omissions (PPOs) (SMD -0.48, 95% CI -1.05 to 0.09; I2 = 92%; 3 studies, 691 participants; low-certainty evidence); however, it must be noted that this effect estimate is based on only three studies, which had serious limitations in terms of risk of bias. Likewise, it is uncertain whether pharmaceutical care reduces the proportion of patients with one or more PPO (RR 0.50, 95% CI 0.27 to 0.91; I2 = 95%; 7 studies, 2765 participants; very low-certainty evidence).\nPharmaceutical care may make little or no difference to hospital admissions (data not pooled; 14 studies, 4797 participants; low-certainty evidence). Pharmaceutical care may make little or no difference to quality of life (data not pooled; 16 studies, 7458 participants; low-certainty evidence). Medication-related problems were reported in 10 studies (6740 participants) using different terms (e.g. adverse drug reactions, drug-drug interactions). No consistent intervention effect on medication-related problems was noted across studies. This also applied to studies examining adherence to medication (nine studies, 3848 participants).\nAuthors' conclusions\nIt is unclear whether interventions to improve appropriate polypharmacy resulted in clinically significant improvement. Since the last update of this review in 2018, there appears to have been an increase in the number of studies seeking to address potential prescribing omissions and more interventions being delivered by multidisciplinary teams.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Review authors searched for studies that had been published up to January 2021."
}
]
|
query-laysum | 1374 | [
{
"role": "user",
"content": "Abstract: Background\nDysmenorrhoea is a common gynaecological problem consisting of painful cramps accompanying menstruation, which in the absence of any underlying abnormality is known as primary dysmenorrhoea. Research has shown that women with dysmenorrhoea have high levels of prostaglandins, hormones known to cause cramping abdominal pain. Nonsteroidal anti-inflammatory drugs (NSAIDs) are drugs that act by blocking prostaglandin production. They inhibit the action of cyclooxygenase (COX), an enzyme responsible for the formation of prostaglandins. The COX enzyme exists in two forms, COX-1 and COX-2. Traditional NSAIDs are considered 'non-selective' because they inhibit both COX-1 and COX-2 enzymes. More selective NSAIDs that solely target COX-2 enzymes (COX-2-specific inhibitors) were launched in 1999 with the aim of reducing side effects commonly reported in association with NSAIDs, such as indigestion, headaches and drowsiness.\nObjectives\nTo determine the effectiveness and safety of NSAIDs in the treatment of primary dysmenorrhoea.\nSearch methods\nWe searched the following databases in January 2015: Cochrane Menstrual Disorders and Subfertility Group Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL, November 2014 issue), MEDLINE, EMBASE and Web of Science. We also searched clinical trials registers (ClinicalTrials.gov and ICTRP). We checked the abstracts of major scientific meetings and the reference lists of relevant articles.\nSelection criteria\nAll randomised controlled trial (RCT) comparisons of NSAIDs versus placebo, other NSAIDs or paracetamol, when used to treat primary dysmenorrhoea.\nData collection and analysis\nTwo review authors independently selected the studies, assessed their risk of bias and extracted data, calculating odds ratios (ORs) for dichotomous outcomes and mean differences for continuous outcomes, with 95% confidence intervals (CIs). We used inverse variance methods to combine data. We assessed the overall quality of the evidence using GRADE methods.\nMain results\nWe included 80 randomised controlled trials (5820 women). They compared 20 different NSAIDs (18 non-selective and two COX-2-specific) versus placebo, paracetamol or each other.\nNSAIDs versus placebo\nAmong women with primary dysmenorrhoea, NSAIDs were more effective for pain relief than placebo (OR 4.37, 95% CI 3.76 to 5.09; 35 RCTs, I2 = 53%, low quality evidence). This suggests that if 18% of women taking placebo achieve moderate or excellent pain relief, between 45% and 53% taking NSAIDs will do so.\nHowever, NSAIDs were associated with more adverse effects (overall adverse effects: OR 1.29, 95% CI 1.11 to 1.51, 25 RCTs, I2 = 0%, low quality evidence; gastrointestinal adverse effects: OR 1.58, 95% CI 1.12 to 2.23, 14 RCTs, I2 = 30%; neurological adverse effects: OR 2.74, 95% CI 1.66 to 4.53, seven RCTs, I2 = 0%, low quality evidence). The evidence suggests that if 10% of women taking placebo experience side effects, between 11% and 14% of women taking NSAIDs will do so.\nNSAIDs versus other NSAIDs\nWhen NSAIDs were compared with each other there was little evidence of the superiority of any individual NSAID for either pain relief or safety. However, the available evidence had little power to detect such differences, as most individual comparisons were based on very few small trials.\nNon-selective NSAIDs versus COX-2-specific selectors\nOnly two of the included studies utilised COX-2-specific inhibitors (etoricoxib and celecoxib). 
There was no evidence that COX-2-specific inhibitors were more effective or better tolerated for the treatment of dysmenorrhoea than traditional NSAIDs; however, data were very scanty.\nNSAIDs versus paracetamol\nNSAIDs appeared to be more effective for pain relief than paracetamol (OR 1.89, 95% CI 1.05 to 3.43, three RCTs, I2 = 0%, low quality evidence). There was no evidence of a difference with regard to adverse effects, though data were very scanty.\nMost of the studies were commercially funded (59%); a further 31% failed to state their source of funding.\nAuthors' conclusions\nNSAIDs appear to be a very effective treatment for dysmenorrhoea, though women using them need to be aware of the substantial risk of adverse effects. There is insufficient evidence to determine which (if any) individual NSAID is the safest and most effective for the treatment of dysmenorrhoea. We rated the quality of the evidence as low for most comparisons, mainly due to poor reporting of study methods.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review found that NSAIDs appear to be very effective in relieving period pain. The evidence suggests that if 18% of women taking placebo achieve moderate or excellent pain relief, between 45% and 53% taking NSAIDs will do so. NSAIDs appear to work better than paracetamol, but it is unclear whether any one NSAID is safer or more effective than others.\nNSAIDs commonly cause adverse effects (side effects), including indigestion, headaches and drowsiness. The evidence suggests that if 10% of women taking placebo experience side effects, between 11% and 14% of women taking NSAIDs will do so.\nBased on two studies that made head-to-head comparisons, there was no evidence that newer types of NSAID (known as COX-2-specific inhibitors) are more effective for the treatment of dysmenorrhoea than traditional NSAIDs (known as non-selective inhibitors), nor that there is a difference between them with regard to adverse effects."
}
]
|
query-laysum | 1375 | [
{
"role": "user",
"content": "Abstract: Background\nHerpes zoster ophthalmicus affects the eye and vision, and is caused by the reactivation of the varicella zoster virus in the distribution of the first division of the trigeminal nerve. An aggressive management of acute herpes zoster ophthalmicus with systemic antiviral medication is generally recommended as the standard first-line treatment for herpes zoster ophthalmicus infections. Both acyclovir and its prodrug valacyclovir are medications that are approved for the systemic treatment of herpes zoster. Although it is known that valacyclovir has an improved bioavailability and steadier plasma concentration, it is currently unclear as to whether this leads to better treatment results and less ocular complications.\nObjectives\nTo assess the effects of valacyclovir versus acyclovir for the systemic antiviral treatment of herpes zoster ophthalmicus in immunocompetent patients.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register; 2016, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2016), Embase (January 1980 to June 2016), Web of Science Conference Proceedings Citation Index-Science (CPCI-S; January 1990 to June 2016), BIOSIS Previews (January 1969 to June 2016), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov), and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP; www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 13 June 2016.\nSelection criteria\nWe considered all randomised controlled trials (RCTs) in which systemic valacyclovir was compared to systemic acyclovir medication for treatment of herpes zoster ophthalmicus. There were no language restrictions.\nData collection and analysis\nTwo review authors independently selected trials, evaluated the risk of bias in included trials, and extracted and analysed data. We did not conduct a meta-analysis, as only one study was included. We assessed the certainty of the evidence for the selected outcomes using the GRADE approach.\nMain results\nOne study fulfilled the inclusion criteria. In this multicentre, randomised double-masked study carried out in France, 110 immunocompetent people with herpes zoster ophthalmicus, diagnosed within 72 hours of skin eruption, were treated, with 56 participants allocated to the valacyclovir group and 54 to the acyclovir group. The study was poorly reported and we judged it to be unclear risk of bias for most domains.\nPersistent ocular lesions after 6 months were observed in 2/56 people in the valacyclovir group compared with 1/54 people in the acyclovir group (risk ratio (RR) 1.93 (95% CI 0.18 to 20.65); very low certainty evidence. Dendritic ulcer appeared in 3/56 patients treated with valacyclovir, while 1/54 suffered in the acyclovir group (RR 2.89; 95% confidence interval (CI) 0.31 to 26.96); very low certainty evidence), uveitis in 7/56 people in the valacyclovir group compared with 9/54 in the acyclovir group (RR 0.96; 95% CI 0.36 to 2.57); very low certainty evidence). Similarly, there was uncertainty as to the comparative effects of these two treatments on post-herpetic pain, and side effects (vomiting, eyelid or facial edema, disseminated zoster). 
Due to concerns about imprecision (small number of events and large confidence intervals) and study limitations, the certainty of evidence using the GRADE approach was rated as low to very low for the use of valacyclovir compared to acyclovir.\nAuthors' conclusions\nThis review included data from only one study, which had methodological limitations. As such, our results indicated uncertainty of the relative benefits and harms of valacyclovir over acyclovir in herpes zoster ophthalmicus, despite its widespread use for this condition. Further well-designed and adequately powered trials are needed. These trials should include outcomes important to patients, including compliance.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out if valacyclovir performs better than acyclovir in the treatment of a painful itchy rash caused by the chickenpox virus (herpes zoster ophthalmicus). Cochrane researchers collected and analysed all relevant studies to answer this question and found one study."
}
]
|
query-laysum | 1376 | [
{
"role": "user",
"content": "Abstract: Background\nOesophageal cancer is the sixth most common cause of cancer-related mortality in the world. Currently surgery is the recommended treatment modality when possible. However, it is unclear whether non-surgical treatment options is equivalent to oesophagectomy in terms of survival.\nObjectives\nTo assess the benefits and harms of non-surgical treatment versus oesophagectomy for people with oesophageal cancer.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Science Citation Index, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) up to 4th March 2016. We also screened reference lists of included studies.\nSelection criteria\nTwo review authors independently screened all titles and abstracts of articles obtained from the literature searches and selected references for further assessment. For these selected references, we based trial inclusion on assessment of the full-text articles.\nData collection and analysis\nTwo review authors independently extracted study data. We calculated the risk ratio (RR) with 95% confidence interval (CI) for binary outcomes, the mean difference (MD) or the standardised mean difference (SMD) with 95% CI for continuous outcomes, and the hazard ratio (HR) for time-to-event outcomes. We performed meta-analyses where it was meaningful.\nMain results\nEight trials, which included 1132 participants in total, met the inclusion criteria of this Cochrane review. These trials were at high risk of bias trials. One trial (which included five participants) did not contribute any data to this Cochrane review, and we excluded 13 participants in the remaining trials after randomisation; this left a total of 1114 participants, 510 randomised to non-surgical treatment and 604 to surgical treatment for analysis. The non-surgical treatment was definitive chemoradiotherapy in five trials and definitive radiotherapy in three trials. All participants were suitable for major surgery. Most of the data were from trials that compared chemoradiotherapy with surgery. There was no difference in long-term mortality between chemoradiotherapy and surgery (HR 0.88, 95% CI 0.76 to 1.03; 602 participants; four studies; low quality evidence). The long-term mortality was higher in radiotherapy than surgery (HR 1.39, 95% CI 1.18 to 1.64; 512 participants; three studies; very low quality evidence). There was no difference in long-term recurrence between non-surgical treatment and surgery (HR 0.96, 95% CI 0.80 to 1.16; 349 participants; two studies; low quality evidence). The difference between non-surgical and surgical treatments was imprecise for short-term mortality (RR 0.39, 95% CI 0.11 to 1.35; 689 participants; five studies; very low quality evidence), the proportion of participants with serious adverse in three months (RR 0.61, 95% CI 0.25 to 1.47; 80 participants; one study; very low quality evidence), and proportion of people with local recurrence at maximal follow-up (RR 0.89, 95% CI 0.70 to 1.12; 449 participants; three studies; very low quality evidence). The health-related quality of life was higher in non-surgical treatment between four weeks and three months after treatment (Spitzer Quality of Life Index; MD 0.93, 95% CI 0.24 to 1.62; 165 participants; one study; very low quality evidence). 
The difference between non-surgical and surgical treatments was imprecise for medium-term health-related quality of life (three months to two years after treatment) (Spitzer Quality of Life Index; MD −0.95, 95% CI −2.10 to 0.20; 62 participants; one study; very low quality of evidence). The proportion of people with dysphagia at the last follow-up visit prior to death was higher with definitive chemoradiotherapy compared to surgical treatment (RR 1.48, 95% CI 1.01 to 2.19; 139 participants; one study; very low quality evidence).\nAuthors' conclusions\nBased on low quality evidence, chemoradiotherapy appears to be at least equivalent to surgery in terms of short-term and long-term survival in people with oesophageal cancer (squamous cell carcinoma type) who are fit for surgery and are responsive to induction chemoradiotherapy. However, there is uncertainty in the comparison of definitive chemoradiotherapy versus surgery for oesophageal cancer (adenocarcinoma type) and we cannot rule out significant benefits or harms of definitive chemoradiotherapy versus surgery. Based on very low quality evidence, the proportion of people with dysphagia at the last follow-up visit prior to death was higher with definitive chemoradiotherapy compared to surgery. Based on very low quality evidence, radiotherapy results in less long-term survival than surgery in people with oesophageal cancer who are fit for surgery. However, there is a risk of bias and random errors in these results, although the risk of bias in the studies included in this systematic review is likely to be lower than in non-randomised studies.\nFurther trials at low risk of bias are necessary. Such trials need to compare endoscopic treatment with surgical treatment in early stage oesophageal cancer (carcinoma in situ and Stage Ia), and definitive chemoradiotherapy with surgical treatments in other stages of oesophageal cancer, and should measure and report patient-oriented outcomes. Early identification of responders to chemoradiotherapy and better second-line treatment for non-responders will also increase the need and acceptability of trials that compare definitive chemoradiotherapy with surgery.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of evidence was low or very low because the included studies were small and had errors in study design. As a result, there is a lot of uncertainty regarding the results."
}
]
|
query-laysum | 1377 | [
{
"role": "user",
"content": "Abstract: Background\nWhilst antipsychotics are the mainstay of treatment for schizophrenia spectrum disorders, there have been numerous attempts to identify biomarkers that can predict treatment response. One potential marker may be psychomotor abnormalities, including catatonic symptoms. Early studies suggested that catatonic symptoms predict poor treatment response, whilst anecdotal reports of rare adverse events have been invoked against antipsychotics. The efficacy and safety of antipsychotics in the treatment of this subtype of schizophrenia have rarely been studied in randomised controlled trials (RCTs).\nObjectives\nTo compare the effects of any single antipsychotic medication with another antipsychotic or with other pharmacological agents, electroconvulsive therapy (ECT), other non-pharmacological neuromodulation therapies (e.g. transcranial magnetic stimulation), or placebo for treating positive, negative, and catatonic symptoms in people who have schizophrenia spectrum disorders with catatonic symptoms.\nSearch methods\nWe searched the Cochrane Schizophrenia Group's Study-Based Register of Trials, which is based on CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, PubMed, ClinicalTrials.gov, the ISRCTN registry, and WHO ICTRP, on 19 September 2021. There were no language, date, document type, or publication status limitations for inclusion of records in the register. We also manually searched reference lists from the included studies, and contacted study authors when relevant.\nSelection criteria\nAll RCTs comparing any single antipsychotic medication with another antipsychotic or with other pharmacological agents, ECT, other non-pharmacological neuromodulation therapies, or placebo for people who have schizophrenia spectrum disorders with catatonic symptoms.\nData collection and analysis\nTwo review authors independently inspected citations, selected studies, extracted data, and appraised study quality. For binary outcomes, we planned to calculate risk ratios and their 95% confidence intervals (CI) on an intention-to-treat basis. For continuous outcomes, we planned to calculate mean differences between groups and their 95% CI. We assessed risk of bias for the included studies, and created a summary of findings table; however, we did not assess the certainty of the evidence using the GRADE approach because there was no quantitative evidence in the included study.\nMain results\nOut of 53 identified reports, one RCT including 14 hospitalised adults with schizophrenia and catatonic symptoms met the inclusion criteria of the review. The study, which was conducted in India and lasted only three weeks, compared risperidone with ECT in people who did not respond to an initial lorazepam trial.\nThere were no usable data reported on the primary efficacy outcomes of clinically important changes in positive, negative, or catatonic symptoms. Whilst both study groups improved in catatonia scores on the Bush-Francis Catatonia Rating Scale (BFCRS), the ECT group showed significantly greater improvement at week 3 endpoint (mean +/− estimated standard deviation; 0.68 +/− 4.58; N = 8) than the risperidone group (6.04 +/− 4.58; N = 6; P = 0.035 of a two-way analysis of variance (ANOVA) for repeated measures originally conducted in the trial). Similarly, both groups improved on the Positive and Negative Syndrome Scale (PANSS) scores by week 3, but ECT showed significantly greater improvement in positive symptoms scores compared with risperidone (P = 0.04). 
However, data on BFCRS scores in the ECT group appeared to be skewed, and mean PANSS scores were not reported, thereby precluding further analyses of both BFCRS and PANSS data according to the protocol.\nAlthough no cases of neuroleptic malignant syndrome were reported, extrapyramidal symptoms as a primary safety outcome were reported in three cases in the risperidone group. Conversely, headache (N = 6), memory loss (N = 4), and a prolonged seizure were reported in people receiving ECT. These adverse effects, which were assessed as specific for antipsychotics and ECT, respectively, were the only adverse effects reported in the study. However, the exact number of participants with adverse events was not clearly reported in both groups, precluding further analysis.\nOur results were based only on a single study with a very small sample size, short duration of treatment, unclear or high risk of bias due to unclear randomisation methods, possible imbalance in baseline characteristics, skewed data, and selective reporting. Data on outcomes of general functioning, global state, quality of life, and service use, as well as data on specific phenomenology and duration of catatonic symptoms, were not reported.\nAuthors' conclusions\nWe found only one small, short-term trial suggesting that risperidone may improve catatonic and positive symptoms scale scores amongst people with schizophrenia spectrum disorders and catatonic symptoms, but that ECT may result in greater improvement in the first three weeks of treatment. Due to small sample size, methodological shortcomings and brief duration of the study, as well as risk of bias, the evidence from this review is of very low quality. We are uncertain if these are true effects, limiting any conclusions that can be drawn from the evidence. No cases of neuroleptic malignant syndrome were reported, but we cannot rule out the risk of this or other rare adverse events in larger population samples.\nHigh-quality trials continue to be necessary to differentiate treatments for people with symptoms of catatonia in schizophrenia spectrum disorders. The lack of consensus on the psychopathology of catatonia remains a barrier to defining treatments for people with schizophrenia. Better understanding of the efficacy and safety of antipsychotics may clarify treatment for this unique subtype of schizophrenia.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that looked at any antipsychotic medicine compared with other antipsychotics, other pharmacological agents, electroconvulsive therapy, other brain stimulation therapies, or placebo in people with schizophrenia spectrum disorders with catatonic symptoms. We compared and summarised the results of the studies and rated our confidence in the evidence based on factors such as study methods and sizes."
}
]
|
query-laysum | 1378 | [
{
"role": "user",
"content": "Abstract: Background\nIt has been reported that people with COVID-19 and pre-existing autoantibodies against type I interferons are likely to develop an inflammatory cytokine storm responsible for severe respiratory symptoms. Since interleukin 6 (IL-6) is one of the cytokines released during this inflammatory process, IL-6 blocking agents have been used for treating people with severe COVID-19.\nObjectives\nTo update the evidence on the effectiveness and safety of IL-6 blocking agents compared to standard care alone or to a placebo for people with COVID-19.\nSearch methods\nWe searched the World Health Organization (WHO) International Clinical Trials Registry Platform, the Living OVerview of Evidence (L·OVE) platform, and the Cochrane COVID-19 Study Register to identify studies on 7 June 2022.\nSelection criteria\nWe included randomized controlled trials (RCTs) evaluating IL-6 blocking agents compared to standard care alone or to placebo for people with COVID-19, regardless of disease severity.\nData collection and analysis\nPairs of researchers independently conducted study selection, extracted data and assessed risk of bias. We assessed the certainty of evidence using the GRADE approach for all critical and important outcomes. In this update we amended our protocol to update the methods used for grading evidence by establishing minimal important differences for the critical outcomes.\nMain results\nThis update includes 22 additional trials, for a total of 32 trials including 12,160 randomized participants all hospitalized for COVID-19 disease. We identified a further 17 registered RCTs evaluating IL-6 blocking agents without results available as of 7 June 2022.\nThe mean age range varied from 56 to 75 years; 66.2% (8051/12,160) of enrolled participants were men. One-third (11/32) of included trials were placebo-controlled. Twenty-two were published in peer-reviewed journals, three were reported as preprints, two trials had results posted only on registries, and results from five trials were retrieved from another meta-analysis. Eight were funded by pharmaceutical companies.\nTwenty-six included studies were multicenter trials; four were multinational and 22 took place in single countries. Recruitment of participants occurred between February 2020 and June 2021, with a mean enrollment duration of 21 weeks (range 1 to 54 weeks). Nineteen trials (60%) had a follow-up of 60 days or more. Disease severity ranged from mild to critical disease. The proportion of participants who were intubated at study inclusion also varied from 5% to 95%. Only six trials reported vaccination status; there were no vaccinated participants included in these trials, and 17 trials were conducted before vaccination was rolled out.\nWe assessed a total of six treatments, each compared to placebo or standard care. Twenty trials assessed tocilizumab, nine assessed sarilumab, and two assessed clazakizumab. Only one trial was included for each of the other IL-6 blocking agents (siltuximab, olokizumab, and levilimab). Two trials assessed more than one treatment.\nEfficacy and safety of tocilizumab and sarilumab compared to standard care or placebo for treating COVID-19\nAt day (D) 28, tocilizumab and sarilumab probably result in little or no increase in clinical improvement (tocilizumab: risk ratio (RR) 1.05, 95% confidence interval (CI) 1.00 to 1.11; 15 RCTs, 6116 participants; moderate-certainty evidence; sarilumab: RR 0.99, 95% CI 0.94 to 1.05; 7 RCTs, 2425 participants; moderate-certainty evidence). 
For clinical improvement at ≥ D60, the certainty of evidence is very low for both tocilizumab (RR 1.10, 95% CI 0.81 to 1.48; 1 RCT, 97 participants; very low-certainty evidence) and sarilumab (RR 1.22, 95% CI 0.91 to 1.63; 2 RCTs, 239 participants; very low-certainty evidence).\nThe effect of tocilizumab on the proportion of participants with a WHO Clinical Progression Score (WHO-CPS) of level 7 or above remains uncertain at D28 (RR 0.90, 95% CI 0.72 to 1.12; 13 RCTs, 2117 participants; low-certainty evidence) and that for sarilumab very uncertain (RR 1.10, 95% CI 0.90 to 1.33; 5 RCTs, 886 participants; very low-certainty evidence).\nTocilizumab reduces all-cause mortality at D28 compared to standard care/placebo (RR 0.88, 95% CI 0.81 to 0.94; 18 RCTs, 7428 participants; high-certainty evidence). The evidence about the effect of sarilumab on this outcome is very uncertain (RR 1.06, 95% CI 0.86 to 1.30; 9 RCTs, 3305 participants; very low-certainty evidence).\nThe evidence is uncertain for all-cause mortality at ≥ D60 for tocilizumab (RR 0.91, 95% CI 0.80 to 1.04; 9 RCTs, 2775 participants; low-certainty evidence) and very uncertain for sarilumab (RR 0.95, 95% CI 0.84 to 1.07; 6 RCTs, 3379 participants; very low-certainty evidence).\nTocilizumab probably results in little to no difference in the risk of adverse events (RR 1.03, 95% CI 0.95 to 1.12; 9 RCTs, 1811 participants; moderate-certainty evidence). The evidence about adverse events for sarilumab is uncertain (RR 1.12, 95% CI 0.97 to 1.28; 4 RCTs, 860 participants; low-certainty evidence).\nThe evidence about serious adverse events is very uncertain for tocilizumab (RR 0.93, 95% CI 0.81 to 1.07; 16 RCTs; 2974 participants; very low-certainty evidence) and uncertain for sarilumab (RR 1.09, 95% CI 0.97 to 1.21; 6 RCTs; 2936 participants; low-certainty evidence).\nEfficacy and safety of clazakizumab, olokizumab, siltuximab and levilimab compared to standard care or placebo for treating COVID-19\nThe evidence about the effects of clazakizumab, olokizumab, siltuximab, and levilimab comes from only one or two studies for each blocking agent, and is uncertain or very uncertain.\nAuthors' conclusions\nIn hospitalized people with COVID-19, results show a beneficial effect of tocilizumab on all-cause mortality in the short term and probably little or no difference in the risk of adverse events compared to standard care alone or placebo. Nevertheless, both tocilizumab and sarilumab probably result in little or no increase in clinical improvement at D28.\nEvidence for an effect of sarilumab and the other IL-6 blocking agents on critical outcomes is uncertain or very uncertain. Most of the trials included in our review were done before the waves of different variants of concern and before vaccination was rolled out on a large scale.\nAn additional 17 RCTs of IL-6 blocking agents are currently registered with no results yet reported. The number of pending studies and the number of participants planned is low. Consequently, we will not publish further updates of this review.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of our review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Compared to placebo or standard treatment, treatment with tocilizumab:\n- reduces the number of people with COVID-19 dying, of any cause, around 28 days;\n- probably makes little or no difference on clinical improvement around 28 days;\n- probably results in little or no difference in unwanted effects.\nWe are uncertain about the effects of tocilizumab treatment on:\n- clinical improvement around 60 days;\n- severity of COVID-19; that is, how many patients needed a ventilator or additional organ support or died of COVID-19 around 28 days;\n- how many patients die, of any cause, around 60 days.\nCompared to placebo or standard treatment, treatment with sarilumab:\n- probably makes little or no difference in clinical improvement (defined as leaving the hospital or improvement in COVID-19 symptoms) around 28 days.\nWe are uncertain about the treatment effects and unwanted events of sarilumab, clazakizumab, olokizumab, siltuximab, and levilimab, compared to placebo or standard treatment."
}
]
|
query-laysum | 1379 | [
{
"role": "user",
"content": "Abstract: Background\nPsychological problems are common complications following stroke that can cause stroke survivors to lack the motivation to take part in activities of daily living. Motivational interviewing provides a specific way for enhancing intrinsic motivation, which may help to improve activities of daily living for stroke survivors.\nObjectives\nTo investigate the effect of motivational interviewing for improving activities of daily living after stroke.\nSearch methods\nWe searched the Cochrane Stroke Group's Trials Register (November 2014), the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 1), MEDLINE (1948 to March 2015), EMBASE (1980 to March 2015), CINAHL (1982 to March 2015), AMED (1985 to March 2015), PsycINFO (1806 to March 2015), PsycBITE (March 2015) and four Chinese databases. In an effort to identify further published, unpublished and ongoing trials, we searched ongoing trials registers and conference proceedings, checked reference lists, and contacted authors of relevant studies.\nSelection criteria\nRandomised controlled trials (RCTs) comparing motivational interviewing with no intervention, sham motivational interviewing or other psychological therapy for people with stroke were eligible.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, extracted eligible data and assessed risk of bias. Outcome measures included activities of daily living, mood and death.\nMain results\nOne study involving a total of 411 participants, which compared motivational interviewing with usual care, met our inclusion criteria. The results of this review did not show significant differences between groups receiving motivational interviewing or usual stroke care for participants who were not dependent on others for activities of daily living, nor on the death rate after three-month and 12-month follow-up, but participants receiving motivational interviewing were more likely to have a normal mood than those who received usual care at three-months and 12-months follow-up.\nAuthors' conclusions\nThere is insufficient evidence to support the use of motivational interviewing for improving activities of daily living after stroke. Further well designed RCTs are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence we found from a single study was insufficient to support the use of motivational interviewing for improving activities of daily living after stroke, but participants receiving motivational interviewing were more likely to have a normal mood than those who received usual care."
}
]
|
query-laysum | 1380 | [
{
"role": "user",
"content": "Abstract: Background\nNon-invasive ventilation (NIV) with bi-level positive airway pressure (BiPAP) is commonly used to treat patients admitted to hospital with acute hypercapnic respiratory failure (AHRF) secondary to an acute exacerbation of chronic obstructive pulmonary disease (AECOPD).\nObjectives\nTo compare the efficacy of NIV applied in conjunction with usual care versus usual care involving no mechanical ventilation alone in adults with AHRF due to AECOPD. The aim of this review is to update the evidence base with the goals of supporting clinical practice and providing recommendations for future evaluation and research.\nSearch methods\nWe identified trials from the Cochrane Airways Group Specialised Register of trials (CAGR), which is derived from systematic searches of bibliographic databases including the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Allied and Complementary Medicine Database (AMED), and PsycINFO, and through handsearching of respiratory journals and meeting abstracts. This update to the original review incorporates the results of database searches up to January 2017.\nSelection criteria\nAll randomised controlled trials that compared usual care plus NIV (BiPAP) versus usual care alone in an acute hospital setting for patients with AECOPD due to AHRF were eligible for inclusion. AHRF was defined by a mean admission pH < 7.35 and mean partial pressure of carbon dioxide (PaCO2) > 45 mmHg (6 kPa). Primary review outcomes were mortality during hospital admission and need for endotracheal intubation. Secondary outcomes included hospital length of stay, treatment intolerance, complications, changes in symptoms, and changes in arterial blood gases.\nData collection and analysis\nTwo review authors independently applied the selection criteria to determine study eligibility, performed data extraction, and determined risk of bias in accordance with Cochrane guidelines. Review authors undertook meta-analysis for data that were both clinically and statistically homogenous, and analysed data as both one overall pooled sample and according to two predefined subgroups related to exacerbation severity (admission pH between 7.35 and 7.30 vs below 7.30) and NIV treatment setting (intensive care unit-based vs ward-based). We reported results for mortality, need for endotracheal intubation, and hospital length of stay in a 'Summary of findings' table and rated their quality in accordance with GRADE criteria.\nMain results\nWe included in the review 17 randomised controlled trials involving 1264 participants. Available data indicate that mean age at recruitment was 66.8 years (range 57.7 to 70.5 years) and that most participants (65%) were male. Most studies (12/17) were at risk of performance bias, and for most (14/17), the risk of detection bias was uncertain. These risks may have affected subjective patient-reported outcome measures (e.g. dyspnoea) and secondary review outcomes, respectively.\nUse of NIV decreased the risk of mortality by 46% (risk ratio (RR) 0.54, 95% confidence interval (CI) 0.38 to 0.76; N = 12 studies; number needed to treat for an additional beneficial outcome (NNTB) 12, 95% CI 9 to 23) and decreased the risk of needing endotracheal intubation by 65% (RR 0.36, 95% CI 0.28 to 0.46; N = 17 studies; NNTB 5, 95% CI 5 to 6). We graded both outcomes as 'moderate' quality owing to uncertainty regarding risk of bias for several studies. 
Inspection of the funnel plot related to need for endotracheal intubation raised the possibility of some publication bias pertaining to this outcome. NIV use was also associated with reduced length of hospital stay (mean difference (MD) -3.39 days, 95% CI -5.93 to -0.85; N = 10 studies), reduced incidence of complications (unrelated to NIV) (RR 0.26, 95% CI 0.13 to 0.53; N = 2 studies), and improvement in pH (MD 0.05, 95% CI 0.02 to 0.07; N = 8 studies) and in partial pressure of oxygen (PaO2) (MD 7.47 mmHg, 95% CI 0.78 to 14.16 mmHg; N = 8 studies) at one hour. A trend towards improvement in PaCO2 was observed, but this finding was not statistically significant (MD -4.62 mmHg, 95% CI -11.05 to 1.80 mmHg; N = 8 studies). Post hoc analysis revealed that this lack of benefit was due to the fact that data from two studies at high risk of bias showed baseline imbalance for this outcome (worse in the NIV group than in the usual care group). Sensitivity analysis revealed that exclusion of these two studies resulted in a statistically significant positive effect of NIV on PaCO2. Treatment intolerance was significantly greater in the NIV group than in the usual care group (risk difference (RD) 0.11, 95% CI 0.04 to 0.17; N = 6 studies). Results of analysis showed a non-significant trend towards reduction in dyspnoea with NIV compared with usual care (standardised mean difference (SMD) -0.16, 95% CI -0.34 to 0.02; N = 4 studies). Subgroup analyses revealed no significant between-group differences.\nAuthors' conclusions\nData from good quality randomised controlled trials show that NIV is beneficial as a first-line intervention in conjunction with usual care for reducing the likelihood of mortality and endotracheal intubation in patients admitted with acute hypercapnic respiratory failure secondary to an acute exacerbation of chronic obstructive pulmonary disease (COPD). The magnitude of benefit for these outcomes appears similar for patients with acidosis of a mild (pH 7.30 to 7.35) versus a more severe nature (pH < 7.30), and when NIV is applied within the intensive care unit (ICU) or ward setting.\n\nGiven the provided abstract, please respond to the following query: \"How did we answer the question?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed all available evidence up to January 2017 regarding effects of NIV combined with usual care compared with usual care alone (involving no ventilation). Because up to 20% of people with COPD who have respiratory failure can die from it, we looked at the number of deaths as the primary outcome. We also looked at need for intubation and time spent in hospital."
}
]
|
query-laysum | 1381 | [
{
"role": "user",
"content": "Abstract: Background\nGrowing expenditures on prescription medicines represent a major challenge to many health systems. Cap and co-payment policies are intended as an incentive to deter unnecessary or marginal utilisation, and to reduce third-party payer expenditures by shifting parts of the financial burden from insurers to patients, thus increasing their financial responsibility for prescription medicines. Direct patient payment policies include caps (maximum numbers of prescriptions or medicines that are reimbursed), fixed co-payments (patients pay a fixed amount per prescription or medicine), co-insurance (patients pay a percentage of the price), ceilings (patients pay the full price or part of the cost up to a ceiling, after which medicines are free or are available at reduced cost) and tier co-payments (differential co-payments usually assigned to generic and brand medicines). This is the first update of the original review.\nObjectives\nTo determine the effects of cap and co-payment (cost-sharing) policies on use of medicines, healthcare utilisation, health outcomes and costs (expenditures).\nSearch methods\nFor this update, we searched the following databases and websites: The Cochrane Central Register of Controlled Trials (CENTRAL) (including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register, Cochrane Library; MEDLINE, Ovid; EMBASE, Ovid; IPSA, EBSCO; EconLit, ProQuest; Worldwide Political Science Abstracts, ProQuest; PAIS International, ProQuest; INRUD Bibliography; WHOLIS, WHO; LILACS), VHL; Global Health Library WHO; PubMed, NHL; SCOPUS; SciELO, BIREME; OpenGrey; JOLIS Library Network; OECD Library; World Bank e-Library; World Health Organization, WHO; World Bank Documents & Reports; International Clinical Trials Registry Platform (ICTRP), WHO; ClinicalTrials.gov, NIH. We searched all databases during January and February 2013, apart from SciELO, which we searched in January 2012, and ICTRP and ClinicalTrials.gov, which we searched in March 2014.\nSelection criteria\nWe defined policies in this review as laws, rules or financial or administrative orders made by governments, non-government organisations or private insurers. We included randomised controlled trials, non-randomised controlled trials, interrupted time series studies, repeated measures studies and controlled before-after studies of cap or co-payment policies for a large jurisdiction or system of care. To be included, a study had to include an objective measure of at least one of the following outcomes: medicine use, healthcare utilisation, health outcomes or costs (expenditures).\nData collection and analysis\nTwo review authors independently extracted data and assessed study limitations. We reanalysed time series data for studies with sufficient data, if appropriate analyses were not reported.\nMain results\nWe included 32 full-text articles (17 new) reporting evaluations of 39 different interventions (one study - Newhouse 1993 - comprises five papers). We excluded from this update eight controlled before-after studies included in the previous version of this review, because they included only one site in their intervention or control groups. Five papers evaluated caps, and six evaluated a cap with co-insurance and a ceiling. Six evaluated fixed co-payment, two evaluated tiered fixed co-payment, 10 evaluated a ceiling with fixed co-payment and 10 evaluated a ceiling with co-insurance. Only one evaluation was a randomised trial. 
The certainty of the evidence was found to be generally low to very low.\nIncreasing the amount of money that people pay for medicines may reduce insurers’ medicine expenditures and may reduce patients’ medicine use. This may include reductions in the use of life-sustaining medicines as well as medicines that are important in treating chronic conditions and medicines for asymptomatic conditions. These types of interventions may lead to small decreases in or uncertain effects on healthcare utilisation. We found no studies that reliably reported the effects of these types of interventions on health outcomes.\nAuthors' conclusions\nThe diversity of interventions and outcomes addressed across studies and differences in settings, populations and comparisons made it difficult to summarise results across studies. Cap and co-payment polices may reduce the use of medicines and reduce medicine expenditures for health insurers. However, they may also reduce the use of life-sustaining medicines or medicines that are important in treating chronic, including symptomatic, conditions and, consequently, could increase the use of healthcare services. Fixed co-payment with a ceiling and tiered fixed co-payment may be less likely to reduce the use of essential medicines or to increase the use of healthcare services.\n\nGiven the provided abstract, please respond to the following query: \"What are pharmaceutical payment policies?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Large quantities of healthcare funds are spent on medicines, and these amounts are increasing. Spending more on medicines could mean having less money for other healthcare or non-healthcare services. Also, misuse, overuse and underuse of appropriate medicines can lead to wasted resources and health hazards. Health insurers therefore are looking for ways of ensuring better use of medicines and controlling the costs of medicines, while still ensuring that patients get the medicines they need.\nPatient payment policies vary with respect to the medicines included, the patient groups targeted, the amount of money patients are expected to pay and the ways in which they are expected to pay. Different policies may be used alone or together and include the following.\n1. With a cap policy, patients are reimbursed for their prescription medicines up to a maximum amount, then are expected to pay costs higher than this amount.\n2. With a fixed co-payment policy, patients pay a fixed amount per medicine or prescription.\n3. With a co-insurance policy, patients pay a set percentage of the price of the prescription or medicine, rather than a fixed fee.\n4. With a ceiling policy, patients pay full cost or part of the cost up to a certain amount, then are given medicines for free or at reduced cost.\nThese policies may lead people to use fewer medicines or to choose cheaper medicines. Although they may deter people from using unnecessary medicines, these policies may cause harm."
}
]
|
query-laysum | 1382 | [
{
"role": "user",
"content": "Abstract: Background\nRobotic-assisted laparoscopic prostatectomy (RALP) is widely used to surgically treat clinically localized prostate cancer. It is typically performed using an approach (standard RALP) that mimics open retropubic prostatectomy by dissecting the so-called space of Retzius anterior to the bladder. An alternative, Retzius-sparing (or posterior approach) RALP (RS-RALP) has been described, which is reported to have better continence outcomes but may be associated with a higher risk of incomplete resection and positive surgical margins (PSM).\nObjectives\nTo assess the effects of RS-RALP compared to standard RALP for the treatment of clinically localized prostate cancer.\nSearch methods\nWe performed a comprehensive search of the Cochrane Library, MEDLINE, Embase, three other databases, trials registries, other sources of the grey literature, and conference proceedings, up to June 2020. We applied no restrictions on publication language or status.\nSelection criteria\nWe included trials where participants were randomized to RS-RALP or standard RALP for clinically localized prostate cancer.\nData collection and analysis\nTwo review authors independently classified and abstracted data from the included studies. Primary outcomes were: urinary continence recovery within one week after catheter removal, at three months after surgery, and serious adverse events. Secondary outcomes were: urinary continence recovery six and 12 months after surgery, potency recovery 12 months after surgery, positive surgical margins (PSM), biochemical recurrence-free survival (BCRFS), and urinary and sexual function quality of life. We performed statistical analyses using a random-effects model. We rated the certainty of evidence using the GRADE approach.\nMain results\nOur search identified six records of five unique randomized controlled trials, of which two were published studies, one was in press, and two were abstract proceedings. There were 571 randomized participants, of whom 502 completed the trials. Mean age of participants was 64.6 years and mean prostate-specific antigen was 6.9 ng/mL. About 54.2% of participants had cT1c disease, 38.6% had cT2a-b disease, and 7.1 % had cT2c disease.\nPrimary outcomes\nRS-RALP probably improves continence within one week after catheter removal (risk ratio (RR) 1.74, 95% confidence interval (CI) 1.41 to 2.14; I2 = 0%; studies = 4; participants = 410; moderate-certainty evidence). Assuming 335 per 1000 men undergoing standard RALP are continent at this time point, this corresponds to 248 more men per 1000 (137 more to 382 more) reporting continence recovery.\nRS-RALP may increase continence at three months after surgery compared to standard RALP (RR 1.33, 95% CI 1.06 to 1.68; I2 = 86%; studies = 5; participants = 526; low-certainty evidence). Assuming 750 per 1000 men undergoing standard RALP are continent at this time point, this corresponds to 224 more men per 1000 (41 more to 462 more) reporting continence recovery.\nWe are very uncertain about the effects of RS-RALP on serious adverse events compared to standard RALP (RR 1.40, 95% CI 0.47 to 4.17; studies = 2; participants = 230; very low-certainty evidence).\nSecondary outcomes\nThere is probably little to no difference in continence recovery at 12 months after surgery (RR 1.01, 95% CI 0.97 to 1.04; I2 = 0%; studies = 2; participants = 222; moderate-certainty evidence). 
Assuming 982 per 1000 men undergoing standard RALP are continent at this time point, this corresponds to 10 more men per 1000 (29 fewer to 39 more) reporting continence recovery.\nWe are very uncertain about the effect of RS-RALP on potency recovery 12 months after surgery (RR 0.98, 95% CI 0.54 to 1.80; studies = 1; participants = 55; very low-certainty evidence).\nRS-RALP may increase PSMs (RR 1.95, 95% CI 1.19 to 3.20; I2 = 0%; studies = 3; participants = 308; low-certainty evidence) indicating a higher risk for prostate cancer recurrence. Assuming 129 per 1000 men undergoing standard RALP have positive margins, this corresponds to 123 more men per 1000 (25 more to 284 more) with PSMs.\nWe are very uncertain about the effect of RS-RALP on BCRFS compared to standard RALP (hazard ratio (HR) 0.45, 95% CI 0.13 to 1.60; I2 = 32%; studies = 2; participants = 218; very low-certainty evidence).\nAuthors' conclusions\nFindings of this review indicate that RS-RALP may result in better continence outcomes than standard RALP up to six months after surgery. Continence outcomes at 12 months may be similar. Downsides of RS-RALP may be higher positive margin rates. We are very uncertain about the effect on BCRFS and potency outcomes. Longer-term oncologic and functional outcomes are lacking, and no preplanned subgroup analyses could be performed to explore the observed heterogeneity. Surgeons should discuss these trade-offs and the limitations of the evidence with their patients when considering this approach.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Urologists often use a robot to remove the prostate in men with prostate cancer. After surgery, men have a catheter that drains urine from their bladder. When this is removed, most men leak urine for some time. This problem is called incontinence and usually gets better within six to 12 months in most men. However, it can be very bothersome during this time."
}
]
|
query-laysum | 1383 | [
{
"role": "user",
"content": "Abstract: Background\nAn abdominal aortic aneurysm (AAA) is an abnormal dilation in the diameter of the abdominal aorta of 50% or more of the normal diameter or greater than 3 cm in total. The risk of rupture increases with the diameter of the aneurysm, particularly above a diameter of approximately 5.5 cm. Perioperative and postoperative morbidity is common following elective repair in people with AAA. Prehabilitation or preoperative exercise is the process of enhancing an individual’s functional capacity before surgery to improve postoperative outcomes. Studies have evaluated exercise interventions for people waiting for AAA repair, but the results of these studies are conflicting.\nObjectives\nTo assess the effects of exercise programmes on perioperative and postoperative morbidity and mortality associated with elective abdominal aortic aneurysm repair.\nSearch methods\nWe searched the Cochrane Vascular Specialised register, Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL (Cumulative Index to Nursing and Allied Health Literature), and Physiotherapy Evidence Database (PEDro) databases, and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 6 July 2020. We also examined the included study reports' bibliographies to identify other relevant articles.\nSelection criteria\nWe considered randomised controlled trials (RCTs) examining exercise interventions compared with usual care (no exercise; participants maintained normal physical activity) for people waiting for AAA repair.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, assessed the included studies, extracted data and resolved disagreements by discussion. We assessed the methodological quality of studies using the Cochrane risk of bias tool and collected results related to the outcomes of interest: post-AAA repair mortality; perioperative and postoperative complications; length of intensive care unit (ICU) stay; length of hospital stay; number of days on a ventilator; change in aneurysm size pre- and post-exercise; and quality of life. We used GRADE to evaluate certainty of the evidence. For dichotomous outcomes, we calculated the risk ratio (RR) with the corresponding 95% confidence interval (CI).\nMain results\nThis review identified four RCTs with a total of 232 participants with clinically diagnosed AAA deemed suitable for elective intervention, comparing prehabilitation exercise therapy with usual care (no exercise). The prehabilitation exercise therapy was supervised and hospital-based in three of the four included trials, and in the remaining trial the first session was supervised in hospital, but subsequent sessions were completed unsupervised in the participants’ homes. The dose and schedule of the prehabilitation exercise therapy varied across the trials with three to six sessions per week and a duration of one hour per session for a period of one to six weeks. The types of exercise therapy included circuit training, moderate-intensity continuous exercise and high-intensity interval training.\nAll trials were at a high risk of bias. The certainty of the evidence for each of our outcomes was low to very low. We downgraded the certainty of the evidence because of risk of bias and imprecision (small sample sizes). 
Overall, we are uncertain whether prehabilitation exercise compared to usual care (no exercise) reduces the occurrence of 30-day (or longer if reported) mortality post-AAA repair (RR 1.33, 95% CI 0.31 to 5.77; 3 trials, 192 participants; very low-certainty evidence). Compared to usual care (no exercise), prehabilitation exercise may decrease the occurrence of cardiac complications (RR 0.36, 95% CI 0.14 to 0.92; 1 trial, 124 participants; low-certainty evidence) and the occurrence of renal complications (RR 0.31, 95% CI 0.11 to 0.88; 1 trial, 124 participants; low-certainty evidence). We are uncertain whether prehabilitation exercise, compared to usual care (no exercise), decreases the occurrence of pulmonary complications (RR 0.49, 95% CI 0.26 to 0.92; 2 trials, 144 participants; very low-certainty evidence), decreases the need for re-intervention (RR 1.29, 95% CI 0.33 to 4.96; 2 trials, 144 participants; very low-certainty evidence) or decreases postoperative bleeding (RR 0.57, 95% CI 0.18 to 1.80; 1 trial, 124 participants; very low-certainty evidence). There was little or no difference between the exercise and usual care (no exercise) groups in length of ICU stay, length of hospital stay and quality of life.\nNone of the studies reported data for the number of days on a ventilator and change in aneurysm size pre- and post-exercise outcomes.\nAuthors' conclusions\nDue to very low-certainty evidence, we are uncertain whether prehabilitation exercise therapy reduces 30-day mortality, pulmonary complications, need for re-intervention or postoperative bleeding. Prehabilitation exercise therapy might slightly reduce cardiac and renal complications compared with usual care (no exercise). More RCTs of high methodological quality, with large sample sizes and long-term follow-up, are needed. Important questions should include the type and cost-effectiveness of exercise programmes, the minimum number of sessions and programme duration needed to effect clinically important benefits, and which groups of participants and types of repair benefit most.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics and key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the literature on 6 July 2020, and we found four trials that included 232 participants with AAA who were on a waiting list for AAA surgery. The trials randomly assigned participants into two groups, one with exercise before surgery and another with usual care (no exercise before surgery, participants maintained normal physical activity). The types of exercise included circuit training, moderate-intensity continuous exercise and high-intensity interval training. In three of the four trials, the participants in the exercise group were supervised by healthcare professionals in hospital when they did their exercise sessions. In the other trial, the first exercise session was supervised in hospital, and the following sessions were completed by the participants on their own in their own homes. The number and length of the exercise sessions was different in the trials. Some exercise sessions took place three times a week and some took place six times a week. In some trials participants exercised for one week and some trials' participants exercised for six weeks before their surgery.\nLimited information from a small number of trials showed that exercise before AAA surgery might slightly reduce heart and kidney complications after surgery, compared to no exercise (usual care) before AAA surgery. We are uncertain whether exercise before AAA surgery reduces death within 30 days of AAA surgery, lung complications, the need for further treatment or bleeding after surgery, compared to no exercise before AAA surgery. There was little or no difference between the exercise and usual care groups in length of intensive care unit stay, length of hospital stay and quality of life. None of the studies reported information for the number of days participants were on a ventilator and change in AAA size before and after exercise."
}
]
|
query-laysum | 1384 | [
{
"role": "user",
"content": "Abstract: Background\nStroke is a leading cause of morbidity and mortality worldwide. Antiplatelet agents are considered to be the cornerstone for secondary prevention of stroke, but the role of using multiple antiplatelet agents early after stroke or transient ischaemic attack (TIA) to improve outcomes has not been established.\nObjectives\nTo determine the effectiveness and safety of initiating, within 72 hours after an ischaemic stroke or TIA, multiple antiplatelet agents versus fewer antiplatelet agents to prevent stroke recurrence. The analysis explores the evidence for different drug combinations.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched 6 July 2020), the Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 7 of 12, 2020) (last searched 6 July 2020), MEDLINE Ovid (from 1946 to 6 July 2020), Embase (1980 to 6 July 2020), ClinicalTrials.gov, and the WHO ICTRP. We also searched the reference lists of identified studies and reviews and used the Science Citation Index Cited Reference search for forward tracking of included studies.\nSelection criteria\nWe selected all randomised controlled trials (RCTs) that compared the use of multiple versus fewer antiplatelet agents initiated within 72 hours after stroke or TIA.\nData collection and analysis\nWe extracted data from eligible studies for the primary outcomes of stroke recurrence and vascular death, and secondary outcomes of myocardial infarction; composite outcome of stroke, myocardial infarction, and vascular death; intracranial haemorrhage; extracranial haemorrhage; ischaemic stroke; death from all causes; and haemorrhagic stroke. We computed an estimate of treatment effect and performed a test for heterogeneity between trials. We analysed data on an intention-to-treat basis and assessed bias for all studies. We rated the certainty of the evidence using the GRADE approach.\nMain results\nWe included 15 RCTs with a total of 17,091 participants. Compared with fewer antiplatelet agents, multiple antiplatelet agents were associated with a significantly lower risk of stroke recurrence (5.78% versus 7.84%, risk ratio (RR) 0.73, 95% confidence interval (CI) 0.66 to 0.82; P < 0.001; moderate-certainty evidence) with no significant difference in vascular death (0.60% versus 0.66%, RR 0.98, 95% CI 0.66 to 1.45; P = 0.94; moderate-certainty evidence). There was a higher risk of intracranial haemorrhage (0.42% versus 0.21%, RR 1.92, 95% CI 1.05 to 3.50; P = 0.03; low-certainty evidence) and extracranial haemorrhage (6.38% versus 2.81%, RR 2.25, 95% CI 1.88 to 2.70; P < 0.001; high-certainty evidence) with multiple antiplatelet agents. On secondary analysis of dual versus single antiplatelet agent therapy, benefit for stroke recurrence (5.73% versus 8.06%, RR 0.71, 95% CI 0.62 to 0.80; P < 0.001; moderate-certainty evidence) was maintained as well as risk of extracranial haemorrhage (1.24% versus 0.40%, RR 3.08, 95% CI 1.74 to 5.46; P < 0.001; high-certainty evidence). 
The composite outcome of stroke, myocardial infarction, and vascular death (6.37% versus 8.77%, RR 0.72, 95% CI 0.64 to 0.82; P < 0.001; moderate-certainty evidence) and ischaemic stroke (6.30% versus 8.94%, RR 0.70, 95% CI 0.61 to 0.81; P < 0.001; high-certainty evidence) were significantly in favour of dual antiplatelet therapy, whilst the risk of intracranial haemorrhage became less significant (0.34% versus 0.21%, RR 1.53, 95% CI 0.76 to 3.06; P = 0.23; low-certainty evidence).\nAuthors' conclusions\nMultiple antiplatelet agents are more effective in reducing stroke recurrence but increase the risk of haemorrhage compared to one antiplatelet agent. The benefit in reduction of stroke recurrence seems to outweigh the harm for dual antiplatelet agents initiated in the acute setting and continued for one month. Further studies are required in different populations to establish comprehensive safety profiles and long-term outcomes to establish duration of therapy.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that multiple antiplatelet drugs reduced the risk of stroke recurrence but increased the risk of bleeding compared with fewer antiplatelet drugs. Two antiplatelet drugs appear to be more effective in preventing early stroke recurrence than a single antiplatelet drug, but there is an increased risk of side effects, especially bleeding. The benefits of dual antiplatelet drugs started immediately after a stroke seems to outweigh the risks for the first month."
}
]
|
query-laysum | 1385 | [
{
"role": "user",
"content": "Abstract: Background\nEchocardiogram is the reference standard for the diagnosis of haemodynamically significant patent ductus arteriosus (hsPDA) in preterm infants. A simple blood assay for brain natriuretic peptide (BNP) or amino-terminal pro-B-type natriuretic peptide (NT-proBNP) may be useful in the diagnosis and management of hsPDA, but a summary of the diagnostic accuracy has not been reviewed recently.\nObjectives\nPrimary objective: To determine the diagnostic accuracy of the cardiac biomarkers BNP and NT-proBNP for diagnosis of haemodynamically significant patent ductus arteriosus (hsPDA) in preterm neonates. Our secondary objectives were: to compare the accuracy of BNP and NT-proBNP; and to explore possible sources of heterogeneity among studies evaluating BNP and NT-proBNP, including type of commercial assay, chronological age of the infant at testing, gestational age at birth, whether used to initiate medical or surgical treatment, test threshold, and criteria of the reference standard (type of echocardiographic parameter used for diagnosis, clinical symptoms or physical signs if data were available).\nSearch methods\nWe searched the following databases in September 2021: MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and Web of Science. We also searched clinical trial registries and conference abstracts. We checked references of included studies and conducted cited reference searches of included studies. We did not apply any language or date restrictions to the electronic searches or use methodological filters, so as to maximise sensitivity.\nSelection criteria\nWe included prospective or retrospective, cohort or cross-sectional studies, which evaluated BNP or NT-proBNP (index tests) in preterm infants (participants) with suspected hsPDA (target condition) in comparison with echocardiogram (reference standard).\nData collection and analysis\nTwo authors independently screened title/abstracts and full-texts, resolving any inclusion disagreements through discussion or with a third reviewer. We extracted data from included studies to create 2 × 2 tables. Two independent assessors performed quality assessment using the Quality Assessment of Diagnostic-Accuracy Studies-2 (QUADAS 2) tool. We excluded studies that did not report data in sufficient detail to construct 2 × 2 tables, and where this information was not available from the primary investigators. We used bivariate and hierarchical summary receiver operating characteristic (HSROC) random-effects models for meta-analysis and generated summary receiver operating characteristic space (ROC) curves. Since both BNP and NTproBNP are continuous variables, sensitivity and specificity were reported at multiple thresholds. We dealt with the threshold effect by reporting summary ROC curves without summary points.\nMain results\nWe included 34 studies: 13 evaluated BNP and 21 evaluated NT-proBNP in the diagnosis of hsPDA. Studies varied by methodological quality, type of commercial assay, thresholds, age at testing, gestational age and whether the assay was used to initiate medical or surgical therapy. We noted some variability in the definition of hsPDA among the included studies.\nFor BNP, the summary curve is reported in the ROC space (13 studies, 768 infants, low-certainty evidence). 
The estimated specificities from the ROC curve at fixed values of sensitivities at median (83%), lower and upper quartiles (79% and 92%) were 93.6% (95% confidence interval (CI) 77.8 to 98.4), 95.5% (95% CI 83.6 to 98.9) and 81.1% (95% CI 50.6 to 94.7), respectively. Subgroup comparisons revealed differences by type of assay and better diagnostic accuracy at lower threshold cut-offs (< 250 pg/ml compared to ≥ 250 pg/ml), testing at gestational age < 30 weeks and chronological age at testing at one to three days. Data were insufficient for subgroup analysis of whether the BNP testing was indicated for medical or surgical management of PDA.\nFor NT-proBNP, the summary ROC curve is reported in the ROC space (21 studies, 1459 infants, low-certainty evidence). The estimated specificities from the ROC curve at fixed values of sensitivities at median (92%), lower and upper quartiles (85% and 94%) were 83.6% (95% CI 73.3 to 90.5), 90.6% (95% CI 83.8 to 94.7) and 79.4% (95% CI 67.5 to 87.8), respectively. Subgroup analyses by threshold (< 6000 pg/ml and ≥ 6000 pg/ml) did not reveal any differences. Subgroup analysis by mean gestational age (< 30 weeks vs 30 weeks and above) showed better accuracy with < 30 weeks, and chronological age at testing (days one to three vs over three) showed testing at days one to three had better diagnostic accuracy. Data were insufficient for subgroup analysis of whether the NTproBNP testing was indicated for medical or surgical management of PDA.\nWe performed meta-regression for BNP and NT-proBNP using the covariates: assay type, threshold, mean gestational age and chronological age; none of the covariates significantly affected summary sensitivity and specificity.\nAuthors' conclusions\nLow-certainty evidence suggests that BNP and NT-proBNP have moderate accuracy in diagnosing hsPDA and may work best as a triage test to select infants for echocardiography. The studies evaluating the diagnostic accuracy of BNP and NT-proBNP for hsPDA varied considerably by assay characteristics (assay kit and threshold) and infant characteristics (gestational and chronological age); hence, generalisability between centres is not possible. We recommend that BNP or NT-proBNP assays be locally validated for specific populations and outcomes, to initiate therapy or follow response to therapy.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified a total of 34 studies (13 studies for BNP and 21 studies for NT-proBNP) to answer this question.\nThe results of these studies suggest that if the BNP blood test was performed in a group of 100 preterm infants who may have a large problematic PDA, the BNP blood test would miss only two of these infants but may wrongly detect large PDA in 13 of the 100 infants. Similarly, if the NT-proBNP blood test was performed in a group of 100 preterm infants who may have a large problematic PDA, the NT-proBNP blood test would miss only two of these infants but may wrongly detect large PDA in 11 of the 100 infants."
}
]
|
query-laysum | 1386 | [
{
"role": "user",
"content": "Abstract: Background\nGranulosa cell tumour is a rare gynaecological tumour of the ovary with recurrences many years after initial diagnosis and treatment. Evidence-based management of granulosa cell tumour of the ovary is limited, and treatment has not been standardised. Surgery, including fertility-sparing procedures for young women, has traditionally been the standard treatment. Adjuvant treatments following surgery have been based on non-randomised trials. A combination of bleomycin, etoposide and cisplatin (BEP) has traditionally been used for treatment of advanced and/or recurrent disease that cannot be optimally managed surgically.\nObjectives\nTo evaluate the effectiveness and safety of different treatment modalities offered in current practice for the management of primary, residual and recurrent adult-onset granulosa cell tumours (GCTs) of the ovary.\nSearch methods\nWe searched the Cochrane Gynaecological Cancer Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE up to December 2013. We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of included studies.\nSelection criteria\nWe searched for randomised controlled trials (RCTs), quasi-RCTs and observational studies that examined women with adult-onset granulosa cell tumours of the ovary (primary and recurrent). For non-randomised studies, we included studies that used multivariate analysis to adjust for baseline characteristics.\nData collection and analysis\nTwo review authors independently abstracted data and assessed risk of bias. Studies were heterogeneous with respect to treatment comparisons, so data were not synthesised in meta-analyses, and methods for assessing heterogeneity were not needed. Risk of bias in included studies was assessed by using the six core items used to assess RCTs and by evaluating four additional criteria specifically addressing risk of bias in non-randomised studies.\nMain results\nFive retrospective cohort studies (535 women with a diagnosis of GCT) that used appropriate statistical methods for adjustment were included in the review.\nTwo studies, which carried out multivariate analyses that attempted to identify factors associated with better outcomes (in terms of overall survival), reported no apparent evidence of a difference in overall survival between surgical approaches, whether a participant underwent lymphadenectomy or received adjuvant chemotherapy or radiotherapy. Only percentage of survival for all participants combined was reported in two trials and was not reported at all in one study.\nOne study showed that women who received postoperative radiotherapy had lower risk of disease recurrence compared with those who underwent surgery alone (adjusted hazard ratio (HR) 0.3, 95% confidence interval (CI) 0.1 to 0.6, P value 0.04). Three studies reportedthat there was no evidence of differences in disease recurrence based on execution and type of adjuvant chemotherapy or on type of surgery or surgical approach, other than that surgical staging may be important. One study described no apparent evidence of a difference in disease recurrence between fertility-sparing surgery and conventional surgery. Recurrence-free survival was not reported in one study.\nToxicity and adverse event data were incompletely reported in the five studies. None of the five studies reported on quality of life (QoL). 
All studies were at very high risk of bias.\nAuthors' conclusions\nOne study showed a lower recurrence rate with the use of adjuvant radiotherapy, although this study was at high risk of bias and the results should be interpreted with caution. After evaluating the five small retrospective studies, we are unable to reach any firm conclusions as to the effectiveness and safety of different types and approaches of surgery, including conservative surgery, as well as adjuvant chemotherapy or radiotherapy, for management of GCTs of the ovary. The available evidence is very limited, and the review provides only low-quality evidence. Further research is very likely to have an important impact on our confidence in the estimate of effect and may alter our findings.\nIdeally, multinational RCTs are needed to answer these questions. The disease is relatively rare and generally has a good prognosis. RCTs are challenging to conduct, but three ongoing trials have been identified, demonstrating that they are feasible, although two of these studies are single-arm trials. The study that may be able to provide answers to the question of which chemotherapeutic regimen should be selected for management of sex cord stromal tumours is an ongoing, randomised, phase 2 study, led by the Gynaecological Oncology Group to compare the efficacy of carboplatin and paclitaxel versus standard BEP. These investigators are also looking into the value of inhibin A and inhibin B as predictive biomarkers. Additional trials are required to assess toxicity and QoL associated with different treatment regimens as well as the safety of conservative surgical options.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Previous studies have assessed chemotherapy (different combination regimens) with or without radiotherapy following surgery. This review aimed to examine the effects of various treatment methods, including fertility-sparing surgery, on the survival of women with GCT of the ovary."
}
]
|
query-laysum | 1387 | [
{
"role": "user",
"content": "Abstract: Background\nPhysical restraints (PR), such as bedrails and belts in chairs or beds, are commonly used for older people receiving long-term care, despite clear evidence for the lack of effectiveness and safety, and widespread recommendations that their use should be avoided. This systematic review of the efficacy and safety of interventions to prevent and reduce the use of physical restraints outside hospital settings, i.e. in care homes and the community, updates our previous review published in 2011.\nObjectives\nTo evaluate the effects of interventions to prevent and reduce the use of physical restraints for older people who require long-term care (either at home or in residential care facilities)\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group's register, MEDLINE (Ovid Sp), Embase (Ovid SP), PsycINFO (Ovid SP), CINAHL (EBSCOhost), Web of Science Core Collection (ISI Web of Science), LILACS (BIREME), ClinicalTrials.gov and the World Health Organization's meta-register, the International Clinical Trials Registry Portal, on 3 August 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and controlled clinical trials (CCTs) that investigated the effects of interventions intended to prevent or reduce the use of physical restraints in older people who require long-term care. Studies conducted in residential care institutions or in the community, including patients' homes, were eligible for inclusion. We assigned all included interventions to categories based on their mechanisms and components.\nData collection and analysis\nTwo review authors independently selected the publications for inclusion, extracted study data, and assessed the risk of bias of all included studies. Primary outcomes were the number or proportion of people with at least one physical restraint, and serious adverse events related to PR use, such as death or serious injuries. We performed meta-analyses if necessary data were available. If meta-analyses were not feasible, we reported results narratively. We used GRADE methods to describe the certainty of the evidence.\nMain results\nWe identified six new studies and included 11 studies with 19,003 participants in this review update. All studies were conducted in long-term residential care facilities. Ten studies were RCTs and one study a CCT. All studies included people with dementia. The mean age of the participants was approximately 85 years.\nFour studies investigated organisational interventions aiming to implement a least-restraint policy; six studies investigated simple educational interventions; and one study tested an intervention that provided staff with information about residents' fall risk. The control groups received usual care only in most studies although, in two studies, additional information materials about physical restraint reduction were provided.\nWe judged the risk of selection bias to be high or unclear in eight studies. Risk of reporting bias was high in one study and unclear in eight studies.\nThe organisational interventions intended to promote a least-restraint policy included a variety of components, such as education of staff, training of 'champions' of low-restraint practice, and components which aimed to facilitate a change in institutional policies and culture of care. 
We found moderate-certainty evidence that organisational interventions aimed at implementation of a least-restraint policy probably lead to a reduction in the number of residents with at least one use of PR (RR 0.86, 95% CI 0.78 to 0.94; 3849 participants, 4 studies) and a large reduction in the number of residents with at least one use of a belt for restraint (RR 0.54, 95% CI 0.40 to 0.73; 2711 participants, 3 studies). No adverse events occurred in the one study which reported this outcome. There was evidence from one study that organisational interventions probably reduce the duration of physical restraint use. We found that the interventions may have little or no effect on the number of falls or fall-related injuries (low-certainty evidence) and probably have little or no effect on the number of prescribed psychotropic medications (moderate-certainty evidence). One study found that organisational interventions result in little or no difference in quality of life (high-certainty evidence) and another study found that they may make little or no difference to agitation (low-certainty evidence).\nThe simple educational interventions were intended to increase knowledge and change staff attitudes towards PR. As well as providing education, some interventions included further components to support change, such as ward-based guidance. We found pronounced between-group baseline imbalances in PR prevalence in some of the studies, which might have occurred because of the small number of clusters in the intervention and control groups. One study did not assess bedrails, which is the most commonly used method of restraint in nursing homes. Regarding the number of residents with at least one restraint, the results were inconsistent. We found very-low certainty evidence and we are uncertain about the effects of simple educational interventions on the number of residents with PR. None of the studies assessed or reported any serious adverse events. We found moderate-certainty evidence that simple educational interventions probably result in little or no difference in restraint intensity and may have little or no effect on falls, fall-related injuries, or agitation (low-certainty evidence each). Based on very low-certainty evidence we are uncertain about the effects of simple educational interventions on the number of participants with a prescription of at least one psychotropic medication.\nOne study investigated an intervention that provided information about residents' fall risk to the nursing staff. We found low-certainty evidence that providing information about residents' fall risk may result in little or no difference in the mean number of PR or the number of falls. The study did not assess overall adverse events.\nAuthors' conclusions\nOrganisational interventions aimed to implement a least-restraint policy probably reduce the number of residents with at least one PR and probably largely reduce the number of residents with at least one belt. We are uncertain whether simple educational interventions reduce the use of physical restraints, and interventions providing information about residents' fall risk may result in little to no difference in the use of physical restraints. These results apply to long-term care institutions; we found no studies from community settings.\n\nGiven the provided abstract, please respond to the following query: \"What is the conclusion?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Organisational interventions aimed at reducing use of PR through changing policy and practice in care homes are probably effective at reducing the number of people restrained overall and especially with belts. Reducing restraints did no lead to a higher number of people with falls. We are uncertain whether simple educational interventions reduce the use of PR, and interventions providing information about residents' fall risk may have little or no effect on the use of PR. All the evidence came from studies in institutions and it may not apply to care in people's own homes."
}
]
|
query-laysum | 1388 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the Cochrane review published in 2002.\nColorectal cancer (CRC) is a major cause of morbidity and mortality in industrialised countries. Experimental evidence has supported the hypothesis that dietary fibre may protect against the development of CRC, although epidemiologic data have been inconclusive.\nObjectives\nTo assess the effect of dietary fibre on the recurrence of colorectal adenomatous polyps in people with a known history of adenomatous polyps and on the incidence of CRC compared to placebo. Further, to identify the reported incidence of adverse effects, such as abdominal pain or diarrhoea, that resulted from the fibre intervention.\nSearch methods\nWe identified randomised controlled trials (RCTs) from Cochrane Colorectal Cancer's Specialised Register, CENTRAL, MEDLINE and Embase (search date, 4 April 2016). We also searched ClinicalTrials.gov and WHO International Trials Registry Platform on October 2016.\nSelection criteria\nWe included RCTs or quasi-RCTs. The population were those having a history of adenomatous polyps, but no previous history of CRC, and repeated visualisation of the colon/rectum after at least two-years' follow-up. Dietary fibre was the intervention. The primary outcomes were the number of participants with: 1. at least one adenoma, 2. more than one adenoma, 3. at least one adenoma greater than or equal to 1 cm, or 4. a new diagnosis of CRC. The secondary outcome was the number of adverse events.\nData collection and analysis\nTwo reviewers independently extracted data, assessed trial quality and resolved discrepancies by consensus. We used risk ratios (RR) and risk difference (RD) with 95% confidence intervals (CI) to measure the effect. If statistical significance was reached, we reported the number needed to treat for an additional beneficial outcome (NNTB) or harmful outcome (NNTH). We combined the study data using the fixed-effect model if it was clinically, methodologically, and statistically reasonable.\nMain results\nWe included seven studies, of which five studies with 4798 participants provided data for analyses in this review. The mean ages of the participants ranged from 56 to 66 years. All participants had a history of adenomas, which had been removed to achieve a polyp-free colon at baseline. The interventions were wheat bran fibre, ispaghula husk, or a comprehensive dietary intervention with high fibre whole food sources alone or in combination. The comparators were low-fibre (2 to 3 g per day), placebo, or a regular diet. The combined data showed no statistically significant difference between the intervention and control groups for the number of participants with at least one adenoma (5 RCTs, n = 3641, RR 1.04, 95% CI 0.95 to 1.13, low-quality evidence), more than one adenoma (2 RCTs, n = 2542, RR 1.06, 95% CI 0.94 to 1.20, low-quality evidence), or at least one adenoma 1 cm or greater (4 RCTs, n = 3224, RR 0.99, 95% CI 0.82 to 1.20, low-quality evidence) at three to four years. The results on the number of participants diagnosed with colorectal cancer favoured the control group over the dietary fibre group (2 RCTS, n = 2794, RR 2.70, 95% CI 1.07 to 6.85, low-quality evidence). After 8 years of comprehensive dietary intervention, no statistically significant difference was found in the number of participants with at least one recurrent adenoma (1 RCT, n = 1905, RR 0.97, 95% CI 0.78 to 1.20), or with more than one adenoma (1 RCT, n = 1905, RR 0.89, 95% CI 0.64 to 1.24). 
More participants in the ispaghula husk group had at least one recurrent adenoma than in the control group (1 RCT, n = 376, RR 1.45, 95% CI 1.01 to 2.08). Other analyses by types of fibre intervention were not statistically significant. The overall dropout rate was over 16% in these trials with no reasons given for these losses. Sensitivity analysis incorporating these missing data shows that none of the results can be considered as robust; when the large numbers of participants lost to follow-up were assumed to have had an event or not, the results changed sufficiently to alter the conclusions that we would draw. Therefore, the reliability of the findings may have been compromised by these missing data (attrition bias) and should be interpreted with caution.\nAuthors' conclusions\nThere is a lack of evidence from existing RCTs to suggest that increased dietary fibre intake will reduce the recurrence of adenomatous polyps in those with a history of adenomatous polyps within a two to eight year period. However, these results may be unreliable and should be interpreted cautiously, not only because of the high rate of loss to follow-up, but also because adenomatous polyp is a surrogate outcome for the unobserved true endpoint CRC. Longer-term trials with higher dietary fibre levels are needed to enable confident conclusions.\n\nGiven the provided abstract, please respond to the following query: \"We asked\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does nutritional supplement of dietary fibre prevent recurrence of precancerous polyps and cancer in the bowel in participants with a history of polyps having been removed to achieve a polyp-free colon at baseline for the intervention."
}
]
|
query-laysum | 1389 | [
{
"role": "user",
"content": "Abstract: Background\nCoronary artery disease is a major public health problem affecting both developed and developing countries. Acute coronary syndromes include unstable angina and myocardial infarction with or without ST-segment elevation (electrocardiogram sector is higher than baseline). Ventricular arrhythmia after myocardial infarction is associated with high risk of mortality. The evidence is out of date, and considerable uncertainty remains about the effects of prophylactic use of lidocaine on all-cause mortality, in particular, in patients with suspected myocardial infarction.\nObjectives\nTo determine the clinical effectiveness and safety of prophylactic lidocaine in preventing death among people with myocardial infarction.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2015, Issue 3), MEDLINE Ovid (1946 to 13 April 2015), EMBASE (1947 to 13 April 2015) and Latin American Caribbean Health Sciences Literature (LILACS) (1986 to 13 April 2015). We also searched Web of Science (1970 to 13 April 2013) and handsearched the reference lists of included papers. We applied no language restriction in the search.\nSelection criteria\nWe included randomised controlled trials assessing the effects of prophylactic lidocaine for myocardial infarction. We considered all-cause mortality, cardiac mortality and overall survival at 30 days after myocardial infarction as primary outcomes.\nData collection and analysis\nWe performed study selection, risk of bias assessment and data extraction in duplicate. We estimated risk ratios (RRs) for dichotomous outcomes and measured statistical heterogeneity using I2. We used a random-effects model and conducted trial sequential analysis.\nMain results\nWe identified 37 randomised controlled trials involving 11,948 participants. These trials compared lidocaine versus placebo or no intervention, disopyramide, mexiletine, tocainide, propafenone, amiodarone, dimethylammonium chloride, aprindine and pirmenol. Overall, trials were underpowered and had high risk of bias. Ninety-seven per cent of trials (36/37) were conducted without an a priori sample size estimation. Ten trials were sponsored by the pharmaceutical industry. Trials were conducted in 17 countries, and intravenous intervention was the most frequent route of administration.\nIn trials involving participants with proven or non-proven acute myocardial infarction, lidocaine versus placebo or no intervention showed no significant differences regarding all-cause mortality (213/5879 (3.62%) vs 199/5848 (3.40%); RR 1.02, 95% CI 0.82 to 1.27; participants = 11727; studies = 18; I2 = 15%); low-quality evidence), cardiac mortality (69/4184 (1.65%) vs 62/4093 (1.51%); RR 1.03, 95% CI 0.70 to 1.50; participants = 8277; studies = 12; I2 = 12%; low-quality evidence) and prophylaxis of ventricular fibrillation (76/5128 (1.48%) vs 103/4987 (2.01%); RR 0.78, 95% CI 0.55 to 1.12; participants = 10115; studies = 16; I2 = 18%; low-quality evidence). In terms of sinus bradycardia, lidocaine effect is imprecise compared with effects of placebo or no intervention (55/1346 (4.08%) vs 49/1203 (4.07%); RR 1.09, 95% CI 0.66 to 1.80; participants = 2549; studies = 8; I2 = 21%; very low-quality evidence). 
In trials involving only participants with proven acute myocardial infarction, lidocaine versus placebo or no intervention showed no significant differences in all-cause mortality (148/2747 (5.39%) vs 135/2506 (5.39%); RR 1.01, 95% CI 0.79 to 1.30; participants = 5253; studies = 16; I2 = 9%; low-quality evidence). No significant differences were noted between lidocaine and any other antiarrhythmic drug in terms of all-cause mortality and ventricular fibrillation. Data on overall survival 30 days after myocardial infarction were not reported. Lidocaine compared with placebo or no intervention increased risk of asystole (35/3393 (1.03%) vs 14/3443 (0.41%); RR 2.32, 95% CI 1.26 to 4.26; participants = 6826; studies = 4; I2 = 0%; very low-quality evidence) and dizziness/drowsiness (74/1259 (5.88%) vs 16/1274 (1.26%); RR 3.85, 95% CI 2.29 to 6.47; participants = 2533; studies = 6; I2 = 0%; low-quality evidence). Overall, safety data were poorly reported and adverse events may have been underestimated. Trial sequential analyses suggest that additional trials may not be needed for reliable conclusions to be drawn regarding these outcomes.\nAuthors' conclusions\nThis Cochrane review found evidence of low quality to suggest that prophylactic lidocaine has very little or no effect on mortality or ventricular fibrillation in people with acute myocardial infarction. The safety profile is unclear. This conclusion is based on randomised controlled trials with high risk of bias. However (disregarding the risk of bias), trial sequential analysis suggests that additional trials may not be needed to disprove an intervention effect of 20% relative risk reduction. Demonstrating smaller risk reductions might require additional, larger trials.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 37 trials conducted between 1969 and 1999. The evidence is current up to April 2015. Trials were conducted in Australia, Belgium, Brazil, Canada, Denmark, France, Germany, Italy, New Zealand, Northern Ireland, Norway, Poland, Sweden, Switzerland, The Netherlands, United Kingdom and United States of America and included 11,948 participants. Trials were conducted in pre-hospital and in-hospital settings and included individuals with or without proved acute myocardial infarction. Some trials did not limit results to acute myocardial infarction only. Lidocaine was given by intravenous (bolus and/or infusion) and intramuscular (alone or in combination with intravenous dosage) routes. Overall, trials included small sample sizes and reported low numbers of events. All trials had high risk of bias. Ten trials were sponsored by the pharmaceutical industry."
}
]
|
query-laysum | 1390 | [
{
"role": "user",
"content": "Abstract: Background\nFluoxetine is a serotonin reuptake inhibitor indicated for major depression. It is also thought to affect weight control: this seems to happen through appetite changes resulting in decreased food intake and normalisation of unusual eating behaviours. However, the benefit-risk ratio of this off-label medication is unclear.\nObjectives\nTo assess the effects of fluoxetine for overweight or obese adults.\nSearch methods\nWe searched the Cochrane Library, MEDLINE, Embase, LILACS, the ICTRP Search Portal and ClinicalTrials.gov and World Health Organization (WHO) ICTRP Search Portal. The last date of the search was December 2018 for all databases, to which we applied no language restrictions .\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing the administration of fluoxetine versus placebo, other anti-obesity agents, non-pharmacological therapy or no treatment in overweight or obese adults without depression, mental illness or abnormal eating patterns.\nData collection and analysis\nTwo review authors independently screened abstracts and titles for relevance. Screening for inclusion, data extraction and risk of bias assessment was performed by one author and checked by the second. We assessed trials for the overall certainty of the evidence using the GRADE instrument. For additional information we contacted trial authors by email. We performed random-effects meta-analyses and calculated the risk ratio (RR) with 95% confidence intervals (95% CI) for dichotomous outcomes and the mean difference (MD) with 95% CI for continuous outcomes.\nMain results\nWe identified 1036 records, scrutinized 52 full-text articles and included 19 completed RCTs (one trial is awaiting assessment). A total of 2216 participants entered the trials, 1280 participants were randomly assigned to fluoxetine (60 mg/d, 40 mg/d, 20 mg/d and 10 mg/d) and 936 participants were randomly assigned to various comparison groups (placebo; the anti-obesity agents diethylpropion, fenproporex, mazindol, sibutramine, metformin, fenfluramine, dexfenfluramine, fluvoxamine, 5-hydroxy-tryptophan; no treatment; and omega-3 gel). Within the 19 RCTs there were 56 trial arms. Fifteen trials were parallel RCTs and four were cross-over RCTs. The participants in the included trials were followed up for periods between three weeks and one year. The certainty of the evidence was low or very low: the majority of trials had a high risk of bias in one or more of the risk of bias domains.\nFor our main comparison group — fluoxetine versus placebo — and across all fluoxetine dosages and durations of treatment, the MD was −2.7 kg (95% CI −4 to −1.4; P < 0.001; 10 trials, 956 participants; low-certainty evidence). The 95% prediction interval ranged between −7.1 kg and 1.7 kg. The MD in body mass index (BMI) reduction across all fluoxetine dosages compared with placebo was −1.1 kg/m² (95% CI −3.7 to 1.4; 3 trials, 97 participants; very low certainty evidence). Only nine placebo-controlled trials reported adverse events. A total of 399 out of 627 participants (63.6%) receiving fluoxetine compared with 352 out of 626 participants (56.2%) receiving placebo experienced an adverse event. Random-effects meta-analysis showed an increase in the risk of having at least one adverse event of any type in the fluoxetine groups compared with placebo (RR 1.18, 95% CI 0.99 to 1.42; P = 0.07; 9 trials, 1253 participants; low-certainty evidence). The 95% prediction interval ranged between 0.74 and 1.88. 
Following fluoxetine treatment the adverse events of dizziness, drowsiness, fatigue, insomnia and nausea were observed approximately twice as often compared to placebo. A total of 15 out of 197 participants (7.6%) receiving fluoxetine compared with 12 out of 196 participants (6.1%) receiving placebo experienced depression. The RR across all fluoxetine doses compared with placebo was 1.20 (95% CI 0.57 to 2.52; P = 0.62; 3 trials, 393 participants; very low certainty evidence). All-cause mortality, health-related quality of life and socioeconomic effects were not reported.\nThe comparisons of fluoxetine with other anti-obesity agents (3 trials, 234 participants), omega-3 gel (1 trial, 48 participants) and no treatment (1 trial, 60 participants) showed inconclusive results (very low certainty evidence).\nAuthors' conclusions\nLow-certainty evidence suggests that off-label fluoxetine may decrease weight compared with placebo. However, low-certainty evidence suggests an increase in the risk for dizziness, drowsiness, fatigue, insomnia and nausea following fluoxetine treatment.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For our main comparison group — fluoxetine compared with placebo — and for all fluoxetine doses there was a 2.7 kg weight loss in favour of fluoxetine. We are uncertain, however, if an additional study would again show a benefit for fluoxetine. A total of 399 out of 627 participants (63.6%) receiving fluoxetine compared with 352 out of 626 participants (56.2%) receiving placebo experienced a side effect. Dizziness, drowsiness, fatigue, insomnia and nausea were observed approximately twice as often after fluoxetine compared to placebo. A total of 15 out of 197 participants (7.6%) receiving fluoxetine compared with 12 out of 196 participants (6.1%) receiving placebo experienced depression. The studies did not report on death from any cause, health-related quality of life and socioeconomic effects.\nThis evidence is up to date as of December 2018."
}
]
|
query-laysum | 1391 | [
{
"role": "user",
"content": "Abstract: Background\nAround 16 million cases of whooping cough (pertussis) occur worldwide each year, mostly in low-income countries. Much of the morbidity of whooping cough in children and adults is due to the effects of the paroxysmal cough. Cough treatments proposed include corticosteroids, beta2-adrenergic agonists, pertussis-specific immunoglobulin, antihistamines and possibly leukotriene receptor antagonists (LTRAs).\nObjectives\nTo assess the effectiveness and safety of interventions to reduce the severity of paroxysmal cough in whooping cough in children and adults.\nSearch methods\nWe updated our searches of the Cochrane Central Register of Controlled Trials (CENTRAL, 2014, Issue 1), which contains the Cochrane Acute Respiratory Infections Group's Specialised Register, the Database of Abstracts of Reviews of Effects (DARE 2014, Issue 2), accessed from The Cochrane Library, MEDLINE (1950 to 30 January 2014), EMBASE (1980 to 30 January 2014), AMED (1985 to 30 January 2014), CINAHL (1980 to 30 January 2014) and LILACS (30 January 2014). We searched Current Controlled Trials to identify trials in progress.\nSelection criteria\nWe selected randomised controlled trials (RCTs) and quasi-RCTs of any intervention (excluding antibiotics and vaccines) to suppress the cough in whooping cough.\nData collection and analysis\nTwo review authors (SB, MT) independently selected trials, extracted data and assessed the quality of each trial for this review in 2009. Two review authors (SB, KW) independently reviewed additional studies identified by the updated searches in 2012 and 2014. The primary outcome was frequency of paroxysms of coughing. Secondary outcomes were frequency of vomiting, frequency of whoop, frequency of cyanosis (turning blue), development of serious complications, mortality from any cause, side effects due to medication, admission to hospital and duration of hospital stay.\nMain results\nWe included 12 trials of varying sample sizes (N = 9 to 135), mainly from high-income countries, including a total of 578 participants. Ten trials recruited children (N = 448 participants). Two trials recruited adolescents and adults (N = 130 participants). We considered only three trials to be of high methodological quality (one trial each of diphenhydramine, pertussis immunoglobulin and montelukast). Included studies did not show a statistically significant benefit for any of the interventions. Only six trials, including a total of 196 participants, reported data in sufficient detail for analysis. Diphenhydramine did not change coughing episodes; the mean difference (MD) of coughing spells per 24 hours was 1.9; 95% confidence interval (CI) -4.7 to 8.5 (N = 49 participants from one trial). One trial on pertussis immunoglobulin reported a possible mean reduction of -3.1 whoops per 24 hours (95% CI -6.2 to 0.02, N = 47 participants) but no change in hospital stay (MD -0.7 days; 95% CI -3.8 to 2.4, N = 46 participants). Dexamethasone did not show a clear decrease in length of hospital stay (MD -3.5 days; 95% CI -15.3 to 8.4, N = 11 participants from one trial) and salbutamol showed no change in coughing paroxysms per day (MD -0.2; 95% CI -4.1 to 3.7, N = 42 participants from two trials). 
Only one trial comparing pertussis immunoglobulin versus placebo (N = 47 participants) reported data on adverse events: 4.3% in the treatment group (rash) versus 5.3% in the placebo group (loose stools, pain and swelling at injection site).\nAuthors' conclusions\nThere is insufficient evidence to draw conclusions about the effectiveness of interventions for the cough in whooping cough. More high-quality trials are needed to assess the effectiveness of potential antitussive treatments in patients with whooping cough.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Six studies including 196 participants reported their results in enough detail for us to assess them. Based on these results, antihistamines (one study, 49 participants), pertussis immunoglobulin (one study, 24 participants) and salbutamol (two studies, 42 participants) did not reduce the number of coughing bouts in patients with whooping cough. Neither pertussis immunoglobulin (one study, 46 participants) nor steroids (one study, 11 participants) decreased the length of time participants spent in hospital. One study reported similar rates of side effects in participants treated with pertussis immunoglobulin (4%; rash) or placebo (5%; loose stools, pain and swelling of the skin around where the injection was given). Studies of antihistamines, salbutamol and steroids did not report any results on side effects. None of these six studies reported any results on vomiting, cyanosis, serious complications, death or admission to hospital."
}
]
|
query-laysum | 1392 | [
{
"role": "user",
"content": "Abstract: Background\nHuman African trypanosomiasis, or sleeping sickness, is a severe disease affecting people in the poorest parts of Africa. It is usually fatal without treatment. Conventional treatments require days of intravenous infusion, but a recently developed drug, fexinidazole, can be given orally. Another oral drug candidate, acoziborole, is undergoing clinical development and will be considered in subsequent editions.\nObjectives\nTo evaluate the effectiveness and safety of currently used drugs for treating second-stage Trypanosoma brucei gambiense trypanosomiasis (gambiense human African trypanosomiasis, g-HAT).\nSearch methods\nOn 14 May 2021, we searched the Cochrane Infectious Diseases Group Specialized Register, the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, Latin American and Caribbean Health Science Information database, BIOSIS, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform. We also searched reference lists of included studies, contacted researchers working in the field, and contacted relevant organizations.\nSelection criteria\nEligible studies were randomized controlled trials that included adults and children with second-stage g-HAT, treated with anti-trypanosomal drugs currently in use.\nData collection and analysis\nTwo review authors extracted data and assessed risk of bias; a third review author acted as an arbitrator if needed. The included trial only reported dichotomous outcomes, which we presented as risk ratio (RR) or risk difference (RD) with 95% confidence intervals (CI).\nMain results\nWe included one trial comparing fexinidazole to nifurtimox combined with eflornithine (NECT). This trial was conducted between October 2012 and November 2016 in the Democratic Republic of the Congo and the Central African Republic, and included 394 participants. The study reported on efficacy and safety, with up to 24 months' follow-up. We judged the study to be at low risk of bias in all domains except blinding; as the route of administration and dosing regimens differed between treatment groups, participants and personnel were not blinded, resulting in a high risk of performance bias.\nMortality with fexinidazole may be higher at 24 months compared to NECT. There were 9/264 deaths in the fexinidazole group and 2/130 deaths in the NECT group (RR 2.22, 95% CI 0.49 to 10.11; 394 participants; low-certainty evidence). None of the deaths were related to treatment.\nFexinidazole likely results in an increase in the number of people relapsing during follow-up, with 14 participants in the fexinidazole group (14/264) and none in the NECT group (0/130) relapsing at 24 months (RD 0.05, 95% CI 0.02 to 0.08; 394 participants; moderate-certainty evidence).\nWe are uncertain whether there is any difference between the drugs regarding the incidence of serious adverse events at 24 months. (31/264 with fexinidazole and 13/130 with NECT group at 24 months). Adverse events were common with both drugs (247/264 with fexinidazole versus 121/130 with NECT), with no difference between groups (RR 1.01, 95% CI 0.95 to 1.06; 394 participants; moderate-certainty evidence).\nAuthors' conclusions\nOral treatment with fexinidazole is much easier to administer than conventional treatment, but deaths and relapse appear to be more common. 
However, the advantages of an oral option are considerable, in terms of convenience, avoiding hospitalisation and multiple intravenous infusions, thus increasing adherence.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Gambiense human African trypanosomiasis (g-HAT), or sleeping sickness, is a severe disease transmitted through the bite of infected tsetse flies found in rural parts of sub-Saharan Africa. Sleeping sickness has two clinical stages. This review only examines treating the second-stage, where people develop symptoms caused by invasion of the central nervous system (CNS), resulting in changes in the nervous system. Death is inevitable without treatment. Drugs for treatment are few, often require intravenous infusion every day over several weeks, and have serious side effects. In this review we aimed to compare the effects of current drugs for gambiense sleeping sickness and we examined nifurtimox-eflornithine combination (NECT) with a new drug, fexinidazole, that can be taken orally."
}
]
|
query-laysum | 1393 | [
{
"role": "user",
"content": "Abstract: Background\nMost people admitted to hospitals worldwide require a vascular access device (VAD). Hundreds of millions of VADs are inserted annually in the USA with reports of over a billion peripheral intravenous catheters used annually worldwide. Numerous reports suggest that a team approach for the assessment, insertion, and maintenance of VADs improves clinical outcomes, the patient experience, and healthcare processes.\nObjectives\nTo compare the use of the vascular access specialist team (VAST) for VAD insertion and care to a generalist model approach for hospital or community participants requiring a VAD in terms of insertion success, device failure, and cost-effectiveness.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 1); Ovid MEDLINE (1950 to 7 February 2018); Ovid Embase (1980 to 7 February 2018); EBSCO CINAHL (1982 to 7 February 2018); Web of Science Conference Proceedings Citation Index - Science and Social Science and Humanities (1990 to 7 February 2018); and Google Scholar. We searched the following trial registries: Australian and New Zealand Clinical Trials Register (www.anzctr.org.au); ClinicalTrials.gov (www.clinicaltrials.gov); Current Controlled Trials (www.controlled-trials.com/mrct); HKU Clinical Trials Registry (www.hkclinicaltrials.com); Clinical Trials Registry - India (ctri.nic.in/Clinicaltrials/login.php); UK Clinical Trials Gateway (www.controlled-trials.com/ukctr/); and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (www.who.int/trialsearch). We searched all databases on 7 February 2018.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) that evaluated the effectiveness of VAST or specialist inserters for their impact on clinical outcomes.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane and used Covidence software to assist with file management.\nMain results\nWe retrieved 2398 citations: 30 studies were eligible for further examination of their full text, and we found one registered clinical trial in progress. No studies could be included in the analysis or review. We assigned one study as awaiting classification, as it has not been accepted for publication.\nAuthors' conclusions\nThis systematic review failed to locate relevant published RCTs to support or refute the assertion that vascular access specialist teams are superior to the generalist model. A vascular access specialist team has advanced knowledge with regard to insertion techniques, clinical care, and management of vascular access devices, whereas a generalist model comprises nurses, doctors, or other designated healthcare professionals in the healthcare facility who may have less advanced insertion techniques and who care for vascular access devices amongst other competing clinical tasks. However, this conclusion may change once the one study awaiting classification and one ongoing study are published. There is a need for good-quality RCTs to evaluate the efficacy of a vascular access specialist team approach for vascular access device insertion and care for the prevention of failure.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We describe a VAST as the grouping of healthcare personnel who have advanced knowledge and skills in the assessment, insertion, care, and management of vascular access devices, such as infusion/intravenous, intravenous therapy teams, as well as individual vascular access specialists (nurse, doctor, therapist, technician, and physician assistant).\nOur goal was to evaluate if the VAST approach is superior to a generalist approach.\nWe define a generalist model or approach as a larger groups of nurses, doctors, or other designated healthcare professionals in the healthcare facility who have less advanced skills and knowledge in inserting and managing vascular access devices.\nWe define a vascular access device (VAD) as a catheter (thin tube) inserted into veins or ports that can be implanted under the skin, allowing fluids and medicines to be delivered into veins. Catheters inserted into arteries can be used to monitor therapy. The most common VAD, the peripheral intravenous catheter (PIVC), may remain in place for many days before removal. Implanted VADs or catheters in central veins can usually remain for many weeks, months, and in some cases, particularly with ports, years. Vascular access devices are used for giving fluids (infusion therapy) and intravenous (injected into a vein) medicines, taking blood samples, and invasive monitoring, and are often crucial in providing treatment and care. The use of VADs and infusion therapy extends across almost all medical, surgical, and critical care specialties, and occurs in hospital, long-term care, and home care settings.\nThere are several risks related to the insertion of a VAD and its ongoing care that can cause the device to fail (to become no longer suitable for care). One significant postinsertion complication includes catheter-related venous thrombosis (clot formation). People with cancer or who are critically ill and may require additional medical interventions to treat thrombosis are particularly at risk for this complication. A risk of infusion-related inflammation of the vein (phlebitis or thrombophlebitis) exists for the PIVC when the cannulated vein becomes painful with other potential signs such as a red appearance at the insertion site. Infection risks such as catheter-related bloodstream infections are associated with all VADs; preventing such occurrences is a healthcare priority. Catheter-related bloodstream infections are associated with longer hospital stay, serious illness, death, and increased health service costs."
}
]
|
query-laysum | 1394 | [
{
"role": "user",
"content": "Abstract: Background\nVenous thromboembolism (VTE), which comprises deep vein thrombosis (DVT) and pulmonary embolism (PE), is the leading cause of preventable death in hospitalised people and the third most common cause of mortality in surgical patients. People undergoing bariatric surgery have the additional risk factor of being overweight. Although VTE prophylaxis in surgical patients is well established, the best way to prevent VTE in those undergoing bariatric surgery is less clear.\nObjectives\nTo evaluate the benefits and harms of pharmacological interventions (alone or in combination) on venous thromboembolism and other health outcomes in people undergoing bariatric surgery compared to the same pharmacological intervention administered at a different dose or frequency, the same pharmacological intervention or started at a different time point, another pharmacological intervention, no intervention or placebo.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 1 November 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in males and females of any age undergoing bariatric surgery comparing pharmacological interventions for VTE (alone or in combination) with the same pharmacological intervention administered at a different dose or frequency, the same pharmacological intervention started at a different time point, a different pharmacological intervention, no treatment or placebo.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. VTE and 2. major bleeding. Our secondary outcomes were 1. all-cause mortality, 2. VTE-related mortality, 3. PE, 4. DVT, 5. adverse effects and 6. quality of life. We used GRADE to assess certainty of evidence for each outcome.\nMain results\nWe included seven RCTs with 1045 participants. Data for meta-analysis were available from all participants.\nFour RCTs (597 participants) compared higher-dose heparin to standard-dose heparin: one of these studies (139 participants) used unfractionated heparin (UFH) and the other three (458 participants) used low-molecular-weight heparin (LMWH). One study compared heparin versus pentasaccharide (198 participants), and one study compared starting heparin before versus after bariatric surgery (100 participants). One study (150 participants) compared combined mechanical and pharmacological (enoxaparin) prophylaxis versus mechanical prophylaxis alone. The duration of the interventions ranged from seven to 15 days, and follow-up ranged from 10 to 180 days.\nHigher-dose heparin versus standard-dose heparin\nCompared to standard-dose heparin, higher-dose heparin may result in little or no difference in the risk of VTE (RR 0.55, 95% CI 0.05 to 5.99; 4 studies, 597 participants) or major bleeding (RR 1.19, 95% CI 0.48 to 2.96; I2 = 8%; 4 studies, 597 participants; low-certainty) in people undergoing bariatric surgery. The evidence on all-cause mortality, VTE-related mortality, PE, DVT and adverse events (thrombocytopenia) is uncertain (effect not estimable or very low-certainty evidence).\nHeparin versus pentasaccharide\nHeparin compared to a pentasaccharide after bariatric surgery may result in little or no difference in the risk of VTE (RR 0.83, 95% CI 0.19 to 3.61; 1 study, 175 participants) or DVT (RR 0.83, 95% CI 0.19 to 3.61; 1 study, 175 participants). 
The evidence on major bleeding, PE and mortality is uncertain (effect not estimable or very low-certainty evidence).\nHeparin started before versus after the surgical procedure\nStarting prophylaxis with heparin 12 hours before surgery versus after surgery may result in little or no difference in the risk of VTE (RR 0.11, 95% CI 0.01 to 2.01; 1 study, 100 participants) or DVT (RR 0.11, 95% CI 0.01 to 2.01; 1 study, 100 participants). The evidence on major bleeding, all-cause mortality and VTE-related mortality is uncertain (effect not estimable or very low-certainty evidence). We were unable to assess the effect of this intervention on PE or adverse effects, as the study did not measure these outcomes.\nCombined mechanical and pharmacological prophylaxis versus mechanical prophylaxis alone\nCombining mechanical and pharmacological prophylaxis (started 12 hours before surgery) may reduce VTE events in people undergoing bariatric surgery compared to mechanical prophylaxis alone (RR 0.05, 95% CI 0.00 to 0.89; number needed to treat for an additional beneficial outcome (NNTB) = 9; 1 study, 150 participants; low-certainty). We were unable to assess the effect of this intervention on major bleeding or mortality (effect not estimable), or on PE or adverse events (not measured).\nNo studies measured quality of life.\nAuthors' conclusions\nHigher-dose heparin may make little or no difference to venous thromboembolism or major bleeding in people undergoing bariatric surgery when compared to standard-dose heparin.\nHeparin may make little or no difference to venous thromboembolism in people undergoing bariatric surgery when compared to pentasaccharide. There are inadequate data to draw conclusions about the effects of heparin compared to pentasaccharide on major bleeding.\nStarting prophylaxis with heparin 12 hours before bariatric surgery may make little or no difference to venous thromboembolism in people undergoing bariatric surgery when compared to starting heparin after bariatric surgery. There are inadequate data to draw conclusions about the effects of heparin started before versus after surgery on major bleeding.\nCombining mechanical and pharmacological prophylaxis (started 12 hours before surgery) may reduce VTE events in people undergoing bariatric surgery when compared to mechanical prophylaxis alone. No data are available relating to major bleeding.\nThe certainty of the evidence is limited by small sample sizes, few or no events, and risk of bias concerns. Future trials must be sufficiently large to enable analysis of relevant clinical outcomes, and should standardise the time of treatment and follow-up. They should also address the effect of direct oral anticoagulants and antiplatelets, preferably grouping them according to the type of intervention.\n\nGiven the provided abstract, please respond to the following query: \"What is venous thromboembolism?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Venous thromboembolism (VTE) is a clinical condition that usually starts when a blood clot forms inside a vein. The condition includes both deep vein thrombosis (when the clot forms in a deep vein, usually in the legs) and pulmonary embolism (when the clot forms in or reaches a blood vessel in the lungs). Both situations can substantially reduce quality of life and can be life-threatening. People with obesity or who undergo surgery are more likely to experience VTE. Therefore, people undergoing a bariatric surgical procedure (aimed at reducing bodyweight by restricting food intake or absorption) are at particularly high risk. However, it is unclear whether these people should receive the same intervention to prevent VTE as people with other clinical conditions."
}
]
|
query-laysum | 1395 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is one of a series of rapid reviews that Cochrane contributors have prepared to inform the 2020 COVID-19 pandemic.\nWhen new respiratory infectious diseases become widespread, such as during the COVID-19 pandemic, healthcare workers’ adherence to infection prevention and control (IPC) guidelines becomes even more important. Strategies in these guidelines include the use of personal protective equipment (PPE) such as masks, face shields, gloves and gowns; the separation of patients with respiratory infections from others; and stricter cleaning routines. These strategies can be difficult and time-consuming to adhere to in practice. Authorities and healthcare facilities therefore need to consider how best to support healthcare workers to implement them.\nObjectives\nTo identify barriers and facilitators to healthcare workers’ adherence to IPC guidelines for respiratory infectious diseases.\nSearch methods\nWe searched OVID MEDLINE on 26 March 2020. As we searched only one database due to time constraints, we also undertook a rigorous and comprehensive scoping exercise and search of the reference lists of key papers. We did not apply any date limit or language limits.\nSelection criteria\nWe included qualitative and mixed-methods studies (with an identifiable qualitative component) that focused on the experiences and perceptions of healthcare workers towards factors that impact on their ability to adhere to IPC guidelines for respiratory infectious diseases. We included studies of any type of healthcare worker with responsibility for patient care. We included studies that focused on IPC guidelines (local, national or international) for respiratory infectious diseases in any healthcare setting. These selection criteria were framed by an understanding of the needs of health workers during the COVID-19 pandemic.\nData collection and analysis\nFour review authors independently assessed the titles, abstracts and full texts identified by the search. We used a prespecified sampling frame to sample from the eligible studies, aiming to capture diverse respiratory infectious disease types, geographical spread and data-rich studies. We extracted data using a data extraction form designed for this synthesis. We assessed methodological limitations using an adapted version of the Critical Skills Appraisal Programme (CASP) tool. We used a ‘best fit framework approach’ to analyse and synthesise the evidence. This provided upfront analytical categories, with scope for further thematic analysis. We used the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach to assess our confidence in each finding. We examined each review finding to identify factors that may influence intervention implementation and developed implications for practice.\nMain results\nWe found 36 relevant studies and sampled 20 of these studies for our analysis. Ten of these studies were from Asia, four from Africa, four from Central and North America and two from Australia. The studies explored the views and experiences of nurses, doctors and other healthcare workers when dealing with severe acute respiratory syndrome (SARS), H1N1, MERS (Middle East respiratory syndrome), tuberculosis (TB), or seasonal influenza. Most of these healthcare workers worked in hospitals; others worked in primary and community care settings.\nThe review points to several barriers and facilitators that influenced healthcare workers’ ability to adhere to IPC guidelines. 
The following factors are based on findings assessed as of moderate to high confidence.\nHealthcare workers felt unsure as to how to adhere to local guidelines when they were lengthy and ambiguous or did not reflect national or international guidelines. They could feel overwhelmed because local guidelines were constantly changing. They also described how IPC strategies led to increased workloads and fatigue, for instance because they had to use PPE and take on additional cleaning. Healthcare workers described how their responses to IPC guidelines were influenced by the level of support they felt that they received from their management team.\nClear communication about IPC guidelines was seen as vital. But healthcare workers pointed to a lack of training about the infection itself and about how to use PPE. They also thought it was a problem when training was not mandatory.\nSufficient space to isolate patients was also seen as vital. A lack of isolation rooms, anterooms and shower facilities was identified as a problem. Other important practical measures described by healthcare workers included minimising overcrowding, fast-tracking infected patients, restricting visitors, and providing easy access to handwashing facilities.\nA lack of PPE, and provision of equipment that was of poor quality, was a serious concern for healthcare workers and managers. They also pointed to the need to adjust the volume of supplies as infection outbreaks continued.\nHealthcare workers believed that they followed IPC guidance more closely when they saw its value. Some healthcare workers felt motivated to follow the guidance because of fear of infecting themselves or their families, or because they felt responsible for their patients. Some healthcare workers found it difficult to use masks and other equipment when it made patients feel isolated, frightened or stigmatised. Healthcare workers also found masks and other equipment uncomfortable to use. The workplace culture could also influence whether healthcare workers followed IPC guidelines or not.\nAcross many of the findings, healthcare workers pointed to the importance of including all staff, including cleaning staff, porters, kitchen staff and other support staff when implementing IPC guidelines.\nAuthors' conclusions\nHealthcare workers point to several factors that influence their ability and willingness to follow IPC guidelines when managing respiratory infectious diseases. These include factors tied to the guideline itself and how it is communicated, support from managers, workplace culture, training, physical space, access to and trust in personal protective equipment, and a desire to deliver good patient care. The review also highlights the importance of including all facility staff, including support staff, when implementing IPC guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Healthcare workers point to several factors that influence their ability and willingness to follow IPC guidelines when managing respiratory infectious diseases. These include factors linked to the guideline content and how it is communicated, support from managers, workplace culture, training, physical space, access to and trust in personal protective equipment (PPE), and a desire to deliver good patient care. The review also highlights the importance of including all healthcare facilities staff when implementing guidelines."
}
]
|
query-laysum | 1396 | [
{
"role": "user",
"content": "Abstract: Background\nAsthma management guidelines recommend low-dose inhaled corticosteroids (ICS) as first-line therapy for adults and adolescents with persistent asthma. The addition of anti-leukotriene agents to ICS offers a therapeutic option in cases of suboptimal control with daily ICS.\nObjectives\nTo assess the efficacy and safety of anti-leukotriene agents added to ICS compared with the same dose, an increased dose or a tapering dose of ICS (in both arms) for adults and adolescents 12 years of age and older with persistent asthma. Also, to determine whether any characteristics of participants or treatments might affect the magnitude of response.\nSearch methods\nWe identified relevant studies from the Cochrane Airways Group Specialised Register of Trials, which is derived from systematic searches of bibliographic databases including the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO, the Allied and Complementary Medicine Database (AMED), the Cumulative Index to Nursing and Allied Health Literature (CINAHL) and the trial registries clinicaltrials.gov and ICTRP from inception to August 2016.\nSelection criteria\nWe searched for randomised controlled trials (RCTs) of adults and adolescents 12 years of age and older on a maintenance dose of ICS for whom investigators added anti-leukotrienes to the ICS and compared treatment with the same dose, an increased dose or a tapering dose of ICS for at least four weeks.\nData collection and analysis\nWe used standard methods expected by Cochrane. The primary outcome was the number of participants with exacerbations requiring oral corticosteroids (except when both groups tapered the dose of ICS, in which case the primary outcome was the % reduction in ICS dose from baseline with maintained asthma control). Secondary outcomes included markers of exacerbation, lung function, asthma control, quality of life, withdrawals and adverse events.\nMain results\nWe included in the review 37 studies representing 6128 adult and adolescent participants (most with mild to moderate asthma). Investigators in these studies used three leukotriene receptor antagonists (LTRAs): montelukast (n = 24), zafirlukast (n = 11) and pranlukast (n = 2); studies lasted from four weeks to five years.\nAnti-leukotrienes and ICS versus same dose of ICS\nOf 16 eligible studies, 10 studies, representing 2364 adults and adolescents, contributed data. Anti-leukotriene agents given as adjunct therapy to ICS reduced by half the number of participants with exacerbations requiring oral corticosteroids (risk ratio (RR) 0.50, 95% confidence interval (CI) 0.29 to 0.86; 815 participants; four studies; moderate quality); this is equivalent to a number needed to treat for additional beneficial outcome (NNTB) over six to 16 weeks of 22 (95% CI 16 to 75). Only one trial including 368 participants reported mortality and serious adverse events, but events were too infrequent for researchers to draw a conclusion. Four trials reported all adverse events, and the pooled result suggested little difference between groups (RR 1.06, 95% CI 0.92 to 1.22; 1024 participants; three studies; moderate quality). 
Investigators noted between-group differences favouring the addition of anti-leukotrienes for morning peak expiratory flow rate (PEFR), forced expiratory volume in one second (FEV1), asthma symptoms and night-time awakenings, but not for reduction in β2-agonist use or evening PEFR.\nAnti-leukotrienes and ICS versus higher dose of ICS\nOf 15 eligible studies, eight studies, representing 2008 adults and adolescents, contributed data. Results showed no statistically significant difference in the number of participants with exacerbations requiring oral corticosteroids (RR 0.90, 95% CI 0.58 to 1.39; 1779 participants; four studies; moderate quality) nor in all adverse events between groups (RR 0.96, 95% CI 0.89 to 1.03; 1899 participants; six studies; low quality). Three trials reported no deaths among 834 participants. Results showed no statistically significant differences in lung function tests including morning PEFR and FEV1 nor in asthma control measures including use of rescue β2-agonists or asthma symptom scores.\nAnti-leukotrienes and ICS versus tapering dose of ICS\nSeven studies, representing 1150 adults and adolescents, evaluated the combination of anti-leukotrienes and tapering-dose of ICS compared with tapering-dose of ICS alone and contributed data. Investigators observed no statistically significant difference in % change from baseline ICS dose (mean difference (MD) -3.05, 95% CI -8.13 to 2.03; 930 participants; four studies; moderate quality), number of participants with exacerbations requiring oral corticosteroids (RR 0.46, 95% CI 0.20 to 1.04; 542 participants; five studies; low quality) or all adverse events (RR 0.95, 95% CI 0.83 to 1.08; 1100 participants; six studies; moderate quality). Serious adverse events occurred more frequently among those taking anti-leukotrienes plus tapering ICS than in those taking tapering doses of ICS alone (RR 2.44, 95% CI 1.52 to 3.92; 621 participants; two studies; moderate quality), but deaths were too infrequent for researchers to draw any conclusions about mortality. Data showed no improvement in lung function nor in asthma control measures.\nAuthors' conclusions\nFor adolescents and adults with persistent asthma, with suboptimal asthma control with daily use of ICS, the addition of anti-leukotrienes is beneficial for reducing moderate and severe asthma exacerbations and for improving lung function and asthma control compared with the same dose of ICS. We cannot be certain that the addition of anti-leukotrienes is superior, inferior or equivalent to a higher dose of ICS. Scarce available evidence does not support anti-leukotrienes as an ICS sparing agent, and use of LTRAs was not associated with increased risk of withdrawals or adverse effects, with the exception of an increase in serious adverse events when the ICS dose was tapered. Information was insufficient for assessment of mortality.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Is adding an anti-leukotriene to ICS better than using an ICS alone for adults and adolescents 12 years of age and older with persistent asthma?"
}
]
|
query-laysum | 1397 | [
{
"role": "user",
"content": "Abstract: Background\nWhen orthodontic treatment is provided with fixed appliances, it is sometimes necessary to move the upper molar teeth backwards (distalise) to create space or help to overcome anchorage requirements. This can be achieved with the use of extraoral or intraoral appliances. The most common appliance is extraoral headgear, which requires considerable patient co-operation. Further, reports of serious injuries have been published. Intraoral appliances have been developed to overcome such shortcomings. The comparative effects of extraoral and intraoral appliances have not been fully evaluated.\nObjectives\nTo assess the effects of orthodontic treatment for distalising upper first molars in children and adolescents.\nSearch methods\nWe searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 10 December 2012), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 11), MEDLINE via OVID (1946 to 10 December 2012) and EMBASE via OVID (1980 to 10 December 2012). No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nRandomised clinical trials involving the use of removable or fixed orthodontic appliances intended to distalise upper first molars in children and adolescents.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration. We performed data extraction and assessment of the risk of bias independently and in duplicate. We contacted authors to clarify the inclusion criteria of the studies.\nMain results\nTen studies, reporting data from 354 participants, were included in this review, the majority of which were carried out in a university dental hospital setting. The studies were published between 2005 and 2011 and were conducted in Europe and in Brazil. The age range of participants was from nine to 15 years, with an even distribution of males and females in seven of the studies, and a slight predominance of female patients in three of the studies. The quality of the studies was generally poor; seven studies were at an overall high risk of bias, three studies were at an unclear risk of bias, and we judged no study to be at low risk of bias.\nWe carried out random-effects meta-analyses as appropriate for the primary clinical outcomes of movement of upper first molars (mm), and loss of anterior anchorage, where there were sufficient data reported in the primary studies. Four studies, involving 159 participants, compared a distalising appliance to an untreated control. Meta-analyses were not undertaken for all primary outcomes due to incomplete reporting of all summary statistics, expected outcomes, and differences between the types of appliances. The degree and direction of molar movement and loss of anterior anchorage varied with the type of appliance. Four studies, involving 150 participants, compared a distalising appliance versus headgear. The mean molar movement for intraoral distalising appliances was -2.20 mm and -1.04 mm for headgear. There was a statistically significant difference in mean distal molar movement (mean difference (MD) -1.45 mm; 95% confidence interval (CI) -2.74 to -0.15) favouring intraoral appliances compared to headgear (four studies, high or unclear risk of bias, 150 participants analysed). 
However, a statistically significant difference in mean mesial upper incisor movement (MD 1.82 mm; 95% CI 1.39 to 2.24) and overjet (fixed-effect: MD 1.64 mm; 95% CI 1.26 to 2.02; two studies, unclear risk of bias, 70 participants analysed) favoured headgear, i.e. there was less loss of anterior anchorage with headgear. We reported direct comparisons of intraoral appliances narratively due to the variation in interventions (three studies, high or unclear risk of bias, 93 participants randomised). All appliances were reported to provide some degree of distal movement, and loss of anterior anchorage varied with the type of appliance.\nNo included studies reported on the incidence of adverse effects (harm, injury), number of attendances or rate of non-compliance.\nAuthors' conclusions\nIt is suggested that intraoral appliances are more effective than headgear in distalising upper first molars. However, this effect is counteracted by loss of anterior anchorage, which was not found to occur with headgear when compared with intraoral distalising appliances in a small number of studies. The number of trials assessing the effects of orthodontic treatment for distalisation is low, and the current evidence is of low or very low quality.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The main question addressed by this review is how effective are orthodontic appliances in moving the upper teeth backwards in children and adolescents."
}
]
|
query-laysum | 1398 | [
{
"role": "user",
"content": "Abstract: Background\nPsychostimulant misuse is a continuously growing medical and social burden. There is no evidence proving the efficacy of pharmacotherapy. Psychosocial interventions could be a valid approach to help patients in reducing or ceasing drug consumption.\nObjectives\nTo assess the effects of psychosocial interventions for psychostimulant misuse in adults.\nSearch methods\nWe searched the Cochrane Drugs and Alcohol Group Specialised Register (via CRSLive); Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; EMBASE; CINAHL; Web of Science and PsycINFO, from inception to November 2015. We also searched for ongoing and unpublished studies via ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (apps.who.int/trialsearch/).\nAll searches included non-English language literature. We handsearched references of topic-related systematic reviews and the included studies.\nSelection criteria\nWe included randomised controlled trials comparing any psychosocial intervention with no intervention, treatment as usual (TAU) or a different intervention in adults with psychostimulant misuse or dependence.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane.\nMain results\nWe included a total of 52 trials (6923 participants).\nThe psychosocial interventions considered in the studies were: cognitive behavioural therapy (19 studies), contingency management (25 studies), motivational interviewing (5 studies), interpersonal therapy (3 studies), psychodynamic therapy (1 study), 12-step facilitation (4 studies).\nWe judged most of the studies to be at unclear risk of selection bias; blinding of personnel and participants was not possible for the type of intervention, so all the studies were at high risk of performance bias with regard to subjective outcomes; the majority of studies did not specify whether the outcome assessors were blind. We did not consider it likely that the objective outcomes were influenced by lack of blinding.\nThe comparisons made were: any psychosocial intervention versus no intervention (32 studies), any psychosocial intervention versus TAU (6 studies), and one psychosocial intervention versus an alternative psychosocial intervention (13 studies). Five of included studies did not provide any useful data for inclusion in statistical synthesis.\nWe found that, when compared to no intervention, any psychosocial treatment: reduced the dropout rate (risk ratio (RR): 0.83, 95% confidence interval (CI) 0.76 to −0.91, 24 studies, 3393 participants, moderate quality evidence); increased continuous abstinence at the end of treatment (RR: 2.14, 95% CI 1.27 to −3.59, 8 studies, 1241 participants, low quality evidence); did not significantly increase continuous abstinence at the longest follow-up (RR: 2.12, 95% CI 0.77 to −5.86, 4 studies, 324 participants, low quality evidence); significantly increased the longest period of abstinence: (standardised mean difference (SMD): 0.48, 95% CI 0.34 to 0.63, 10 studies, 1354 participants, high quality evidence). However, it should be noted that the in the vast majority of the studies in this comparison the specific psychosocial treatment assessed in the experimental arm was given in add on to treatment as usual or to another specific psychosocial or pharmacological treatment which was received by both groups. So, many of the control groups in this comparison were not really untreated. 
Receiving some amount of treatment is not the same as not receiving any intervention, so we could argue that the overall effect of the experimental psychosocial treatment could be smaller if given as an add-on to TAU or to another intervention than if given to participants not receiving any intervention; this could translate to a smaller magnitude of the effect of the psychosocial intervention when it is given as an add-on.\nWhen compared to TAU, any psychosocial treatment reduced dropout rate (RR: 0.72, 95% CI 0.59 to 0.89, 6 studies, 516 participants, moderate quality evidence), did not increase continuous abstinence at the end of treatment (RR: 1.27, 95% CI 0.94 to 1.72, 2 studies, 224 participants, low quality evidence), did not increase longest period of abstinence (MD −3.15 days, 95% CI −10.35 to 4.05, 1 study, 110 participants, low quality evidence). No studies in this comparison assessed the outcome of continuous abstinence at longest follow-up.\nThere were few studies comparing two or more psychosocial interventions, with small sample sizes and considerable heterogeneity in terms of the types of interventions assessed. None reported significant results.\nNone of the studies reported harms related to psychosocial interventions.\nAuthors' conclusions\nThe addition of any psychosocial treatment to treatment as usual (usually characterised by group counselling or case management) probably reduces the dropout rate and increases the longest period of abstinence. It may increase the number of people achieving continuous abstinence at the end of treatment, although this might not be maintained at longest follow-up. The most studied and the most promising psychosocial approach to be added to treatment as usual is probably contingency management. However, the other approaches were only analysed in a few small studies, so we cannot rule out the possibility that the results were not significant because of imprecision. When compared to TAU, any psychosocial treatment may improve adherence, but it may not improve abstinence at the end of treatment or the longest period of abstinence.\nThe majority of the studies took place in the United States, and this could limit the generalisability of the findings, because the effects of psychosocial treatments could be strongly influenced by the social context and ethnicity. The results of our review do not answer the most relevant clinical question, namely which is the most effective type of psychosocial approach.\nFurther studies should directly compare contingency management with the other psychosocial approaches.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that, compared to no intervention, any psychosocial intervention probably improves treatment adherence and may increase abstinence at the end of treatment; however, people may not be able to stay clean several months after the end of treatment. Finally, we found that people undergoing specific psychosocial interventions stay clean for a longer time without using stimulants. However, the vast majority of the studies we looked at assessed a specific psychosocial treatment added to treatment as usual or compared it to another specific psychosocial or pharmacological treatment. So, control groups were not really untreated. This could have led to an underestimation of the true effect of the psychosocial interventions.\nWe found that, when compared to TAU, any psychosocial treatment probably improves adherence but may not improve abstinence at the end of treatment nor help participants to stay clean for a longer time.\nWe could not draw any conclusions on which is the most effective psychosocial treatment based on direct comparisons. Most of the studies took place in the United States, and this could limit the generalisability of the findings, because the effects of psychosocial treatments could be strongly influenced by the social context and ethnicity.\nNone of the studies reported harms related to psychosocial interventions."
}
]
|
query-laysum | 1399 | [
{
"role": "user",
"content": "Abstract: Background\nVestibular migraine is a form of migraine where one of the main features is recurrent attacks of vertigo. These episodes are often associated with other features of migraine, including headache and sensitivity to light or sound. These unpredictable and severe attacks of vertigo can lead to a considerable reduction in quality of life. The condition is estimated to affect just under 1% of the population, although many people remain undiagnosed. A number of pharmacological interventions have been used or proposed to be used as prophylaxis for this condition, to help reduce the frequency of the attacks. These are predominantly based on treatments that are in use for headache migraine, with the belief that the underlying pathophysiology of these conditions is similar.\nObjectives\nTo assess the benefits and harms of pharmacological treatments used for prophylaxis of vestibular migraine.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 23 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with definite or probable vestibular migraine comparing beta-blockers, calcium channel blockers, antiepileptics, antidepressants, diuretics, monoclonal antibodies against calcitonin gene-related peptide (or its receptor), botulinum toxin or hormonal modification with either placebo or no treatment. We excluded studies with a cross-over design, unless data from the first phase of the study could be identified.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) improvement in headache, 6) improvement in other migrainous symptoms and 7) other adverse effects. We considered outcomes reported at three time points: < 3 months, 3 to < 6 months, > 6 to 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included three studies with a total of 209 participants. One evaluated beta-blockers and the other two evaluated calcium channel blockers. We did not identify any evidence for the remaining interventions of interest.\nBeta-blockers versus placebo\nOne study (including 130 participants, 61% female) evaluated the use of 95 mg metoprolol once daily for six months, compared to placebo. The proportion of people who reported improvement in vertigo was not assessed in this study. Some data were reported on the frequency of vertigo attacks at six months and the occurrence of serious adverse effects. However, this is a single, small study and for all outcomes the certainty of evidence was low or very low. We are unable to draw meaningful conclusions from the numerical results.\nCalcium channel blockers versus no treatment\nTwo studies, which included a total of 79 participants (72% female), assessed the use of 10 mg flunarizine once daily for three months, compared to no intervention. All of the evidence for this comparison was of very low certainty. 
Most of our outcomes were only reported by a single study, therefore we were unable to conduct any meta-analysis. Some data were reported on improvement in vertigo and change in vertigo, but no information was available regarding serious adverse events. We are unable to draw meaningful conclusions from the numerical results, as these data come from single, small studies and the certainty of the evidence was very low.\nAuthors' conclusions\nThere is very limited evidence from placebo-controlled randomised trials regarding the efficacy and potential harms of pharmacological interventions for prophylaxis of vestibular migraine. We only identified evidence for two of our interventions of interest (beta-blockers and calcium channel blockers) and all evidence was of low or very low certainty. Further research is necessary to identify whether these treatments are effective at improving symptoms and whether there are any harms associated with their use.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies including adults that compared different medications to either no treatment or placebo (dummy) treatment. We used standard methods to assess the certainty of the evidence. We rated our confidence in the evidence, based on factors such as study methods, the number of participants in them and the consistency of findings across studies."
}
]
|
query-laysum | 1401 | [
{
"role": "user",
"content": "Abstract: Background\nPlasma volume expanders are used in connection to paracentesis in people with cirrhosis to prevent reduction of effective plasma volume, which may trigger deleterious effect on haemodynamic balance, and increase morbidity and mortality. Albumin is considered the standard product against which no plasma expansion or other plasma expanders, e.g. other colloids (polygeline , dextrans, hydroxyethyl starch solutions, fresh frozen plasma), intravenous infusion of ascitic fluid, crystalloids, or mannitol have been compared. However, the benefits and harms of these plasma expanders are not fully clear.\nObjectives\nTo assess the benefits and harms of any plasma volume expanders such as albumin, other colloids (polygeline, dextrans, hydroxyethyl starch solutions, fresh frozen plasma), intravenous infusion of ascitic fluid, crystalloids, or mannitol versus no plasma volume expander or versus another plasma volume expander for paracentesis in people with cirrhosis and large ascites.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, CNKI, VIP, Wanfang, Science Citation Index Expanded, and Conference Proceedings Citation Index until January 2019. Furthermore, we searched FDA, EMA, WHO (last search January 2019), www.clinicaltrials.gov/, and www.controlled-trials.com/ for ongoing trials.\nSelection criteria\nRandomised clinical trials, no matter their design or year of publication, publication status, and language, assessing the use of any type of plasma expander versus placebo, no intervention, or a different plasma expander in connection with paracentesis for ascites in people with cirrhosis. We considered quasi-randomised, retrieved with the searches for randomised clinical trials only, for reports on harms.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We calculated the risk ratio (RR) or mean difference (MD) using the fixed-effect model and the random-effects model meta-analyses, based on the intention-to-treat principle, whenever possible. If the fixed-effect and random-effects models showed different results, then we made our conclusions based on the analysis with the highest P value (the more conservative result). We assessed risks of bias of the individual trials using predefined bias risk domains. We assessed the certainty of the evidence at an outcome level, using GRADE, and constructed 'Summary of Findings' tables for seven of our review outcomes.\nMain results\nWe identified 27 randomised clinical trials for inclusion in this review (24 published as full-text articles and 3 as abstracts). Five of the trials, with 271 participants, assessed plasma expanders (albumin in four trials and ascitic fluid in one trial) versus no plasma expander. The remaining 22 trials, with 1321 participants, assessed one type of plasma expander, i.e. dextran, hydroxyethyl starch, polygeline, intravenous infusion of ascitic fluid, crystalloids, or mannitol versus another type of plasma expander, i.e. albumin in 20 of these trials and polygeline in one trial. Twenty-five trials provided data for quantitative meta-analysis. According to the Child-Pugh classification, most participants were at an intermediate to advanced stage of liver disease in the absence of hepatocellular carcinoma, recent gastrointestinal bleeding, infections, and hepatic encephalopathy. 
All trials were assessed as at overall high risk of bias. Ten trials seemed not to have been funded by industry; twelve trials were considered unclear about funding; and five trials were considered funded by industry or a for-profit institution.\nWe found no evidence of a difference in effect between plasma expansion versus no plasma expansion on mortality (RR 0.52, 95% CI 0.06 to 4.83; 248 participants; 4 trials; very low certainty); renal impairment (RR 0.32, 95% CI 0.02 to 5.88; 181 participants; 4 trials; very low certainty); other liver-related complications (RR 1.61, 95% CI 0.79 to 3.27; 248 participants; 4 trials; very low certainty); and non-serious adverse events (RR 1.04, 95% CI 0.32 to 3.40; 158 participants; 3 trials; very low certainty). Two of the trials stated that no serious adverse events occurred while the remaining trials did not report on this outcome. No trial reported data on health-related quality of life.\nWe found no evidence of a difference in effect between experimental plasma expanders versus albumin on mortality (RR 1.03, 95% CI 0.82 to 1.30; 1014 participants; 14 trials; very low certainty); serious adverse events (RR 0.89, 95% CI 0.10 to 8.30; 118 participants; 2 trials; very low certainty); renal impairment (RR 1.17, 95% CI 0.71 to 1.91; 1107 participants; 17 trials; very low certainty); other liver-related complications (RR 1.10, 95% CI 0.82 to 1.48; 1083 participants; 16 trials; very low certainty); and non-serious adverse events (RR 1.37, 95% CI 0.66 to 2.85; 977 participants; 14 trials; very low certainty). We found no data on health-related quality of life and refractory ascites.\nAuthors' conclusions\nOur systematic review and meta-analysis did not find any benefits or harms of plasma expanders versus no plasma expander or of one plasma expander such as polygeline, dextrans, hydroxyethyl starch, intravenous infusion of ascitic fluid, crystalloids, or mannitol versus albumin on primary or secondary outcomes. The data originated from few, small, mostly short-term trials at high risks of systematic errors (bias) and high risks of random errors (play of chance). GRADE assessments concluded that the evidence was of very low certainty. Therefore, we can neither demonstrate nor discard any benefit of plasma expansion versus no plasma expansion, or any differences between one plasma expander and another.\nLarger trials at low risks of bias are needed to assess the role of plasma expanders in connection with paracentesis. Such trials should be conducted according to the SPIRIT guidelines and reported according to the CONSORT guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The data came from only few, small, mostly short-term trials, at high risks of systematic errors (bias) and high risks of random errors (play of chance). Accordingly, we concluded that the certainty of evidence for each of our prespecified review outcomes was very low."
}
]
|
query-laysum | 1402 | [
{
"role": "user",
"content": "Abstract: Background\nPartial nephrectomy and radical nephrectomy are the relevant surgical therapy options for localised renal cell carcinoma. However, debate regarding the effects of these surgical approaches continues and it is important to identify and summarise high-quality studies to make surgical treatment recommendations.\nObjectives\nTo assess the effects of partial nephrectomy compared with radical nephrectomy for clinically localised renal cell carcinoma.\nSearch methods\nWe searched CENTRAL, MEDLINE, PubMed, Embase, Web of Science, BIOSIS, LILACS, Scopus, two trial registries and abstracts from three major conferences to 24 February 2017, together with reference lists; and contacted selected experts in the field.\nSelection criteria\nWe included a randomised controlled trial comparing partial and radical nephrectomy for participants with small renal masses.\nData collection and analysis\nOne review author screened all of the titles and abstracts; only citations that were clearly irrelevant were excluded at this stage. Next, two review authors independently assessed full-text reports, identified relevant studies, evaluated the eligibility of the studies for inclusion, assessed trial quality and extracted data. The update of the literature search was performed by two independent review authors. We used Review Manager 5 for data synthesis and data analyses.\nMain results\nWe identified one randomised controlled trial including 541 participants that compared partial nephrectomy to radical nephrectomy. The median follow-up was 9.3 years.\nBased on low quality evidence, we found that time-to-death of any cause was decreased using partial nephrectomy (HR 1.50, 95% CI 1.03 to 2.18). This corresponds to 79 more deaths (5 more to 173 more) per 1000. Also based on low quality evidence, we found no difference in serious adverse events (RR 2.04, 95% CI 0.19 to 22.34). Findings are consistent with 4 more surgery-related deaths (3 fewer to 78 more) per 1000.\nBased on low quality evidence, we found no difference in time-to-recurrence (HR 1.37, 95% CI 0.58 to 3.24). This corresponds to 12 more recurrences (14 fewer to 70 more) per 1000. Due to the nature of reporting, we were unable to analyse overall rates for immediate and long-term adverse events. We found no evidence on haemodialysis or quality of life.\nReasons for downgrading related to study limitations (lack of blinding, cross-over), imprecision and indirectness (a substantial proportion of patients were ultimately found not to have a malignant tumour). Based on the finding of a single trial, we were unable to conduct any subgroup or sensitivity analyses.\nAuthors' conclusions\nPartial nephrectomy may be associated with a decreased time-to-death of any cause. With regards to surgery-related mortality, cancer-specific survival and time-to-recurrence, partial nephrectomy appears to result in little to no difference.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature until 24 February 2017. We included one study with 541 study participants that randomly assigned participants with localised tumours of the kidney that were thought to be cancerous. On average, participants were followed for 9.3 years."
}
]
|
query-laysum | 1403 | [
{
"role": "user",
"content": "Abstract: Background\nUrinary catheterisation is a common procedure, with approximately 15% to 25% of all people admitted to hospital receiving short-term (14 days or less) indwelling urethral catheterisation at some point during their care. However, the use of urinary catheters is associated with an increased risk of developing urinary tract infection. Catheter-associated urinary tract infection (CAUTI) is one of the most common hospital-acquired infections. It is estimated that around 20% of hospital-acquired bacteraemias arise from the urinary tract and are associated with mortality of around 10%.\nThis is an update of a Cochrane Review first published in 2005 and last published in 2007.\nObjectives\nTo assess the effects of strategies for removing short-term (14 days or less) indwelling catheters in adults.\nSearch methods\nWe searched the Cochrane Incontinence Specialised Register, which contains trials identified from CENTRAL, MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, CINAHL, ClinicalTrials.gov, WHO ICTRP, and handsearching of journals and conference proceedings (searched 17 March 2020), and reference lists of relevant articles.\nSelection criteria\nWe included all randomised controlled trials (RCTs) and quasi-RCTs that evaluated the effectiveness of practices undertaken for the removal of short-term indwelling urethral catheters in adults for any reason in any setting.\nData collection and analysis\nTwo review authors performed abstract and full-text screening of all relevant articles. At least two review authors independently performed risk of bias assessment, data abstraction and GRADE assessment.\nMain results\nWe included 99 trials involving 12,241 participants. We judged the majority of trials to be at low or unclear risk of selection and detection bias, with a high risk of performance bias. We also deemed most trials to be at low risk of attrition and reporting bias. None of the trials reported on quality of life. The majority of participants across the trials had undergone some form of surgical procedure.\nThirteen trials involving 1506 participants compared the removal of short-term indwelling urethral catheters at one time of day (early morning removal group between 6 am to 7 am) versus another (late night removal group between 10 pm to midnight). Catheter removal late at night may slightly reduce the risk of requiring recatheterisation compared with early morning (RR 0.71, 95% CI 0.53 to 0.96; 10 RCTs, 1920 participants; low-certainty evidence). We are uncertain if there is any difference between early morning and late night removal in the risk of developing symptomatic CAUTI (RR 1.00, 95% CI 0.61 to 1.63; 1 RCT, 41 participants; very low-certainty evidence). We are uncertain whether the time of day makes a difference to the risk of dysuria (RR 2.20; 95% CI 0.70 to 6.86; 1 RCT, 170 participants; low-certainty evidence).\nSixty-eight trials involving 9247 participants compared shorter versus longer durations of catheterisation. 
Shorter durations may increase the risk of requiring recatheterisation compared with longer durations (RR 1.81, 95% CI 1.35 to 2.41; 44 trials, 5870 participants; low-certainty evidence), but probably reduce the risk of symptomatic CAUTI (RR 0.52, 95% CI 0.45 to 0.61; 41 RCTs, 5759 participants; moderate-certainty evidence) and may reduce the risk of dysuria (RR 0.42, 95% CI 0.20 to 0.88; 7 RCTs; 1398 participants; low-certainty evidence).\nSeven trials involving 714 participants compared policies of clamping catheters versus free drainage. There may be little to no difference between clamping and free drainage in terms of the risk of requiring recatheterisation (RR 0.82, 95% CI 0.55 to 1.21; 5 RCTs; 569 participants; low-certainty evidence). We are uncertain if there is any difference in the risk of symptomatic CAUTI (RR 0.99, 95% CI 0.60 to 1.63; 2 RCTs, 267 participants; very low-certainty evidence) or dysuria (RR 0.84, 95% CI 0.46 to 1.54; 1 trial, 79 participants; very low-certainty evidence).\nThree trials involving 402 participants compared the use of prophylactic alpha blockers versus no intervention or placebo. We are uncertain if prophylactic alpha blockers before catheter removal have any effect on the risk of requiring recatheterisation (RR 1.18, 95% CI 0.58 to 2.42; 2 RCTs, 184 participants; very low-certainty evidence) or risk of symptomatic CAUTI (RR 0.20, 95% CI 0.01 to 4.06; 1 trial, 94 participants; very low-certainty evidence). None of the included trials investigating prophylactic alpha blockers reported the number of participants with dysuria.\nAuthors' conclusions\nThere is some evidence to suggest the removal of indwelling urethral catheters late at night rather than early in the morning may reduce the number of people who require recatheterisation. It appears that catheter removal after shorter compared to longer durations probably reduces the risk of symptomatic CAUTI and may reduce the risk of dysuria. However, it may lead to more people requiring recatheterisation. The other evidence relating to the risk of symptomatic CAUTI and dysuria is too uncertain to allow us to draw any conclusions.\nDue to the low certainty of the majority of the evidence presented here, the results of further research are likely to change our findings and to have a further impact on clinical practice. This systematic review has highlighted the need for a standardised set of core outcomes, which should be measured and reported by all future trials comparing strategies for the removal of short-term urinary catheters. Future trials should also study the effects of short-term indwelling urethral catheter removal on non-surgical patients.\n\nGiven the provided abstract, please respond to the following query: \"Shorter compared to longer use of catheters\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People who have their catheters removed after a shorter length of time are probably less likely to develop UTIs and may be less likely to experience painful urination compared with those who have their catheters for longer. However, we also found that people may be more likely to need the catheter reinserting if they have the catheter for a shorter compared with a longer time."
}
]
|
query-laysum | 1404 | [
{
"role": "user",
"content": "Abstract: Background\nTuberculosis and malnutrition are linked in a complex relationship. Tuberculosis may cause undernutrition through increased metabolic demands and decreased intake, and nutritional deficiencies may worsen the disease, or delay recovery by depressing important immune functions. At present, there is no evidence-based nutritional guidance for adults and children being treated for tuberculosis.\nObjectives\nTo assess the effects of oral nutritional supplements in people being treated with antituberculous drug therapy for active tuberculosis.\nSearch methods\nWe searched the Cochrane Infectious Disease Group Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL; Issue 1, 2016), MEDLINE (from 1946 to 4 February 2016), EMBASE (from 1980 to 4 February 2016), LILACS (from 1982 to 4 February 2016), the metaRegister of Controlled Trials (mRCT), the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and the Indian Journal of Tuberculosis up to 4 February 2016, and checked the reference lists of all included studies.\nSelection criteria\nRandomized controlled trials that compared any oral nutritional supplement given for at least four weeks with no nutritional intervention, placebo, or dietary advice only for people being treated for active tuberculosis. The primary outcomes of interest were all-cause death, and cure at six and 12 months.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, and extracted data and assessed the risk of bias in the included trials. We presented the results as risk ratios (RR) for dichotomous variables, and mean differences (MD) for continuous variables, with 95% confidence intervals (CIs). Where appropriate, we pooled data from trials with similar interventions and outcomes. We assessed the quality of the evidence using the Grading of Recommendation Assessment, Development and Evaluation (GRADE) approach.\nMain results\nThirty-five trials, including 8283 participants, met the inclusion criteria of this review.\nMacronutrient supplementation\nSix trials assessed the provision of free food, or high-energy supplements. Only two trials measured total dietary intake, and in both trials the intervention increased calorie consumption compared to controls.\nThe available trials were too small to reliably prove or exclude clinically important benefits on mortality (RR 0.34, 95% CI 0.10 to 1.20; four trials, 567 participants, very low quality evidence), cure (RR 0.91, 95% CI 0.59 to 1.41; one trial, 102 participants, very low quality evidence), or treatment completion (data not pooled; two trials, 365 participants, very low quality evidence).\nSupplementation probably produces a modest increase in weight gain during treatment for active tuberculosis, although this was not seen consistently across all trials (data not pooled; five trials, 883 participants, moderate quality evidence). 
Two small studies provide some evidence that quality of life may also be improved but the trials were too small to have much confidence in the result (data not pooled; two trials, 134 participants, low quality evidence).\nMicronutrient supplementation\nSix trials assessed multi-micronutrient supplementation in doses up to 10 times the dietary reference intake, and 18 trials assessed single or dual micronutrient supplementation.\nRoutine multi-micronutrient supplementation may have little or no effect on mortality in HIV-negative people with tuberculosis (RR 0.86, 95% CI 0.46 to 1.6; four trials, 1219 participants, low quality evidence), or HIV-positive people who are not taking antiretroviral therapy (RR 0.92, 95% CI 0.69 to 1.23; three trials, 1429 participants, moderate quality evidence). There is insufficient evidence to know if supplementation improves cure (no trials), treatment completion (RR 0.99, 95% CI 0.95 to 1.04; one trial, 302 participants, very low quality evidence), or the proportion of people who remain sputum positive during the first eight weeks (RR 0.92, 95% CI 0.63 to 1.35; two trials, 1020 participants, very low quality evidence). However, supplementation may have little or no effect on weight gain during treatment (data not pooled; five trials, 2940 participants, low quality evidence), and no studies have assessed the effect on quality of life.\nPlasma levels of vitamin A appear to increase following initiation of tuberculosis treatment regardless of supplementation. In contrast, supplementation probably does improve plasma levels of zinc, vitamin D, vitamin E, and selenium, but this has not been shown to have clinically important benefits. Of note, despite multiple studies of vitamin D supplementation in different doses, statistically significant benefits on sputum conversion have not been demonstrated.\nAuthors' conclusions\nThere is currently insufficient research to know whether routinely providing free food, or energy supplements improves tuberculosis treatment outcomes, but it probably improves weight gain in some settings.\nAlthough blood levels of some vitamins may be low in people starting treatment for active tuberculosis, there is currently no reliable evidence that routinely supplementing above recommended daily amounts has clinical benefits.\n\nGiven the provided abstract, please respond to the following query: \"Effect of providing nutritional supplements to people being treated for tuberculosis\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We currently don't know if providing free food to tuberculosis patients, as hot meals or ration parcels, reduces death or improves cure (very low quality evidence). However, it probably does improve weight gain in some settings (moderate quality evidence), and may improve quality of life (low quality evidence).\nRoutinely providing multi-micronutrient supplements may have little or no effect on deaths in HIV-negative people with tuberculosis (low quality evidence), or HIV-positive people who are not taking anti-retroviral therapy (moderate quality evidence). We currently don't know if micronutrient supplements have any effect on tuberculosis treatment outcomes (very low quality evidence), but they may have no effect on weight gain (low quality evidence). No studies have assessed the effect on quality of life.\nPlasma levels of vitamin A appear to increase after starting tuberculosis treatment regardless of supplementation. In contrast, supplementation probably does improve plasma levels of zinc, vitamin D, vitamin E, and selenium, but this has not been shown to have clinically important benefits. Despite multiple studies of vitamin D supplementation in different doses, statistically significant benefits on sputum conversion have not been demonstrated."
}
]
|
query-laysum | 1405 | [
{
"role": "user",
"content": "Abstract: Background\nMénière's disease is a condition that causes recurrent episodes of vertigo, associated with hearing loss and tinnitus. Corticosteroids are sometimes administered directly into the middle ear to treat this condition (through the tympanic membrane). The underlying cause of Ménière's disease is unknown, as is the way in which this treatment may work. The efficacy of this intervention in preventing vertigo attacks, and their associated symptoms, is currently unclear.\nObjectives\nTo evaluate the benefits and harms of intratympanic corticosteroids versus placebo or no treatment in people with Ménière's disease.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 14 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with a diagnosis of Ménière's disease comparing intratympanic corticosteroids with either placebo or no treatment. We excluded studies with follow-up of less than three months, or with a cross-over design (unless data from the first phase of the study could be identified).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) change in hearing, 6) change in tinnitus and 7) other adverse effects (including tympanic membrane perforation). We considered outcomes reported at three time points: 3 to < 6 months, 6 to ≤ 12 months and > 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included 10 studies with a total of 952 participants. All studies used the corticosteroid dexamethasone, with doses ranging from approximately 2 mg to 12 mg.\nImprovement in vertigo\nIntratympanic corticosteroids may make little or no difference to the number of people who report an improvement in their vertigo at 6 to ≤ 12 months follow-up (intratympanic corticosteroids 96.8%, placebo 96.6%, risk ratio (RR) 1.00, 95% confidence interval (CI) 0.92 to 1.10; 2 studies; 60 participants; low-certainty evidence) or at more than 12 months follow-up (intratympanic corticosteroids 100%, placebo 96.3%; RR 1.03, 95% CI 0.87 to 1.23; 2 studies; 58 participants; low-certainty evidence). However, we note the large improvement in the placebo group for these trials, which causes challenges in interpreting these results.\nChange in vertigo\nAssessed with a global score\nOne study (44 participants) assessed the change in vertigo at 3 to < 6 months using a global score, which considered the frequency, duration and severity of vertigo. This is a single, small study and the certainty of the evidence was very low. We are unable to draw meaningful conclusions from the numerical results.\nAssessed by frequency of vertigo\nThree studies (304 participants) assessed the change in frequency of vertigo episodes at 3 to < 6 months. Intratympanic corticosteroids may slightly reduce the frequency of vertigo episodes. 
The proportion of days affected by vertigo was 0.05 lower (absolute difference -5%) in those receiving intratympanic corticosteroids (95% CI -0.07 to -0.02; 3 studies; 472 participants; low-certainty evidence). This is equivalent to a difference of approximately 1.5 days fewer per month affected by vertigo in the corticosteroid group (with the control group having vertigo on approximately 2.5 to 3.5 days per month at the end of follow-up, and those receiving corticosteroids having vertigo on approximately 1 to 2 days per month). However, this result should be interpreted with caution - we are aware of unpublished data at this time point in which corticosteroids failed to show a benefit over placebo.\nOne study also assessed the change in frequency of vertigo at 6 to ≤ 12 months and > 12 months follow-up. However, this is a single, small study and the certainty of the evidence was very low. Therefore, we are unable to draw meaningful conclusions from the numerical results.\nSerious adverse events\nFour studies reported this outcome. There may be little or no effect on the occurrence of serious adverse events with intratympanic corticosteroids, but the evidence is very uncertain (intratympanic corticosteroids 3.0%, placebo 4.4%; RR 0.64, 95% CI 0.22 to 1.85; 4 studies; 500 participants; very low-certainty evidence).\nAuthors' conclusions\nThe evidence for intratympanic corticosteroids in the treatment of Ménière's disease is uncertain. There are relatively few published RCTs, which all consider the same type of corticosteroid (dexamethasone). We also have concerns about publication bias in this area, with the identification of two large RCTs that remain unpublished. The evidence comparing intratympanic corticosteroids to placebo or no treatment is therefore all low- or very low-certainty. This means that we have very low confidence that the effects reported are accurate estimates of the true effect of these interventions. Consensus on the appropriate outcomes to measure in studies of Ménière's disease is needed (i.e. a core outcome set) in order to guide future studies in this area, and enable meta-analysis of the results. This must include appropriate consideration of the potential harms of treatment, as well as the benefits. Finally, we would also highlight the responsibility that trialists have to ensure results are available, regardless of the outcome of their study.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared intratympanic corticosteroids to either no treatment or sham (placebo) treatment."
}
]
|
query-laysum | 1406 | [
{
"role": "user",
"content": "Abstract: Background\nHip fracture is a major cause of morbidity and mortality in older people, and its impact on society is substantial. After surgery, people require rehabilitation to help them recover. Multidisciplinary rehabilitation is where rehabilitation is delivered by a multidisciplinary team, supervised by a geriatrician, rehabilitation physician or other appropriate physician. This is an update of a Cochrane Review first published in 2009.\nObjectives\nTo assess the effects of multidisciplinary rehabilitation, in either inpatient or ambulatory care settings, for older people with hip fracture.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, CENTRAL, MEDLINE and Embase (October 2020), and two trials registers (November 2019).\nSelection criteria\nWe included randomised and quasi-randomised trials of post-surgical care using multidisciplinary rehabilitation of older people (aged 65 years or over) with hip fracture. The primary outcome – 'poor outcome' – was a composite of mortality and decline in residential status at long-term (generally one year) follow-up. The other 'critical' outcomes were health-related quality of life, mortality, dependency in activities of daily living, mobility, and related pain.\nData collection and analysis\nPairs of review authors independently performed study selection, assessed risk of bias and extracted data. We pooled data where appropriate and used GRADE for assessing the certainty of evidence for each outcome.\nMain results\nThe 28 included trials involved 5351 older (mean ages ranged from 76.5 to 87 years), usually female, participants who had undergone hip fracture surgery. There was substantial clinical heterogeneity in the trial interventions and populations. Most trials had unclear or high risk of bias for one or more items, such as blinding-related performance and detection biases. We summarise the findings for three comparisons below.\nInpatient rehabilitation: multidisciplinary rehabilitation versus 'usual care'\nMultidisciplinary rehabilitation was provided primarily in an inpatient setting in 20 trials.\nMultidisciplinary rehabilitation probably results in fewer cases of 'poor outcome' (death or deterioration in residential status, generally requiring institutional care) at 6 to 12 months' follow-up (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.80 to 0.98; 13 studies, 3036 participants; moderate-certainty evidence). Based on an illustrative risk of 347 people with hip fracture with poor outcome in 1000 people followed up between 6 and 12 months, this equates to 41 (95% CI 7 to 69) fewer people with poor outcome after multidisciplinary rehabilitation. Expressed in terms of numbers needed to treat for an additional harmful outcome (NNTH), 25 patients (95% CI 15 to 100) would need to be treated to avoid one 'poor outcome'. Subgroup analysis by type of multidisciplinary rehabilitation intervention showed no evidence of subgroup differences.\nMultidisciplinary rehabilitation may result in fewer deaths in hospital but the confidence interval does not exclude a small increase in the number of deaths (RR 0.77, 95% CI 0.58 to 1.04; 11 studies, 2455 participants; low-certainty evidence). A similar finding applies at 4 to 12 months' follow-up (RR 0.91, 95% CI 0.80 to 1.05; 18 studies, 3973 participants; low-certainty evidence). 
Multidisciplinary rehabilitation may result in fewer people with poorer mobility at 6 to 12 months' follow-up (RR 0.83, 95% CI 0.71 to 0.98; 5 studies, 1085 participants; low-certainty evidence).\nDue to very low-certainty evidence, we have little confidence in the findings for marginally better quality of life after multidisciplinary rehabilitation (1 study). The same applies to the mixed findings of some or no difference from multidisciplinary rehabilitation on dependence in activities of daily living at 1 to 4 months' follow-up (measured in various ways by 11 studies), or at 6 to 12 months' follow-up (13 studies). Long-term hip-related pain was not reported.\nAmbulatory setting: supported discharge and multidisciplinary home rehabilitation versus 'usual care'\nThree trials tested this comparison in 377 people mainly living at home. Due to very low-certainty evidence, we have very little confidence in the findings of little to no between-group difference in poor outcome (death or move to a higher level of care or inability to walk) at one year (3 studies); quality of life at one year (1 study); in mortality at 4 or 12 months (2 studies); in independence in personal activities of daily living (1 study); in moving permanently to a higher level of care (2 studies) or being unable to walk (2 studies). Long-term hip-related pain was not reported.\nOne trial tested this comparison in 240 nursing home residents. There is low-certainty evidence that there may be no or minimal between-group differences at 12 months in 'poor outcome' defined as dead or unable to walk; or in mortality at 4 months or 12 months. Due to very low-certainty evidence, we have very little confidence in the findings of no between-group differences in dependency at 4 weeks or at 12 months, or in quality of life, inability to walk or pain at 12 months.\nAuthors' conclusions\nIn a hospital inpatient setting, there is moderate-certainty evidence that rehabilitation after hip fracture surgery, when delivered by a multidisciplinary team and supervised by an appropriate medical specialist, results in fewer cases of 'poor outcome' (death or deterioration in residential status). There is low-certainty evidence that multidisciplinary rehabilitation may result in fewer deaths in hospital and at 4 to 12 months; however, it may also result in slightly more. There is low-certainty evidence that multidisciplinary rehabilitation may reduce the numbers of people with poorer mobility at 12 months. No conclusions can be drawn on other outcomes, for which the evidence is of very low certainty.\nThe generally very low-certainty evidence available for supported discharge and multidisciplinary home rehabilitation means that we are very uncertain whether the findings of little or no difference for all outcomes between the intervention and usual care is true.\nGiven the prevalent clinical emphasis on early discharge, we suggest that research is best orientated towards early supported discharge and identifying the components of multidisciplinary inpatient rehabilitation to optimise patient recovery within hospital and the components of multidisciplinary rehabilitation, including social care, subsequent to hospital discharge.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 28 studies with 5351 older people who’d had hip fracture surgery. They were aged on average from 76.5 to 87 years and most were women."
}
]
|
query-laysum | 1407 | [
{
"role": "user",
"content": "Abstract: Background\nSchizophrenia is a disabling psychotic disorder characterised by positive symptoms of delusions, hallucinations, disorganised speech and behaviour; and negative symptoms such as affective flattening and lack of motivation. Cognitive behavioural therapy (CBT) is a psychological intervention that aims to change the way in which a person interprets and evaluates their experiences, helping them to identify and link feelings and patterns of thinking that underpin distress. CBT models targeting symptoms of psychosis (CBTp) have been developed for many mental health conditions including schizophrenia. CBTp has been suggested as a useful add-on therapy to medication for people with schizophrenia. While CBT for people with schizophrenia was mainly developed as an individual treatment, it is expensive and a group approach may be more cost-effective. Group CBTp can be defined as a group intervention targeting psychotic symptoms, based on the cognitive behavioural model. In group CBTp, people work collaboratively on coping with distressing hallucinations, analysing evidence for their delusions, and developing problem-solving and social skills. However, the evidence for effectiveness is far from conclusive.\nObjectives\nTo investigate efficacy and acceptability of group CBT applied to psychosis compared with standard care or other psychosocial interventions, for people with schizophrenia or schizoaffective disorder.\nSearch methods\nOn 10 February 2021, we searched the Cochrane Schizophrenia Group's Study-Based Register of Trials, which is based on CENTRAL, MEDLINE, Embase, four other databases and two trials registries. We handsearched the reference lists of relevant papers and previous systematic reviews and contacted experts in the field for supplemental data.\nSelection criteria\nWe selected randomised controlled trials allocating adults with schizophrenia to receive either group CBT for schizophrenia, compared with standard care, or any other psychosocial intervention (group or individual).\nData collection and analysis\nWe complied with Cochrane recommended standard of conduct for data screening and collection. Where possible, we calculated risk ratio (RR) and 95% confidence interval (CI) for binary data and mean difference (MD) and 95% CI for continuous data. We used a random-effects model for analyses. We assessed risk of bias for included studies and created a summary of findings table using GRADE.\nMain results\nThe review includes 24 studies (1900 participants). All studies compared group CBTp with treatments that a person with schizophrenia would normally receive in a standard mental health service (standard care) or any other psychosocial intervention (group or individual). None of the studies compared group CBTp with individual CBTp. Overall risk of bias within the trials was moderate to low.\nWe found no studies reporting data for our primary outcome of clinically important change. With regard to numbers of participants leaving the study early, group CBTp has little or no effect compared to standard care or other psychosocial interventions (RR 1.22, 95% CI 0.94 to 1.59; studies = 13, participants = 1267; I2 = 9%; low-certainty evidence). Group CBTp may have some advantage over standard care or other psychosocial interventions for overall mental state at the end of treatment for endpoint scores on the Positive and Negative Syndrome Scale (PANSS) total (MD –3.73, 95% CI –4.63 to –2.83; studies = 12, participants = 1036; I2 = 5%; low-certainty evidence). 
Group CBTp seems to have little or no effect on PANSS positive symptoms (MD –0.45, 95% CI –1.30 to 0.40; studies =8, participants = 539; I2 = 0%) and on PANSS negative symptoms scores at the end of treatment (MD –0.73, 95% CI –1.68 to 0.21; studies = 9, participants = 768; I2 = 65%). Group CBTp seems to have an advantage over standard care or other psychosocial interventions on global functioning measured by Global Assessment of Functioning (GAF; MD –3.61, 95% CI –6.37 to –0.84; studies = 5, participants = 254; I2 = 0%; moderate-certainty evidence), Personal and Social Performance Scale (PSP; MD 3.30, 95% CI 2.00 to 4.60; studies = 1, participants = 100), and Social Disability Screening Schedule (SDSS; MD –1.27, 95% CI –2.46 to –0.08; studies = 1, participants = 116). Service use data were equivocal with no real differences between treatment groups for number of participants hospitalised (RR 0.78, 95% CI 0.38 to 1.60; studies = 3, participants = 235; I2 = 34%). There was no clear difference between group CBTp and standard care or other psychosocial interventions endpoint scores on depression and quality of life outcomes, except for quality of life measured by World Health Organization Quality of Life Assessment Instrument (WHOQOL-BREF) Psychological domain subscale (MD –4.64, 95% CI –9.04 to –0.24; studies = 2, participants = 132; I2 = 77%). The studies did not report relapse or adverse effects.\nAuthors' conclusions\nGroup CBTp appears to be no better or worse than standard care or other psychosocial interventions for people with schizophrenia in terms of leaving the study early, service use and general quality of life. Group CBTp seems to be more effective than standard care or other psychosocial interventions on overall mental state and global functioning scores. These results may not be widely applicable as each study had a low sample size. Therefore, no firm conclusions concerning the efficacy of group CBTp for people with schizophrenia can currently be made. More high-quality research, reporting useable and relevant data is needed.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Studies with larger sample sizes and low risk of bias should be conducted. Longer-term outcomes need to be addressed to establish whether any effect is transient or long-lasting. Trials should improve their quality of data reporting."
}
]
|
query-laysum | 1408 | [
{
"role": "user",
"content": "Abstract: Background\nThis is the first update of a review published in 2009. Sustained moderate to severe elevations in resting blood pressure leads to a critically important clinical question: What class of drug to use first-line? This review attempted to answer that question.\nObjectives\nTo quantify the mortality and morbidity effects from different first-line antihypertensive drug classes: thiazides (low-dose and high-dose), beta-blockers, calcium channel blockers, ACE inhibitors, angiotensin II receptor blockers (ARB), and alpha-blockers, compared to placebo or no treatment.\nSecondary objectives: when different antihypertensive drug classes are used as the first-line drug, to quantify the blood pressure lowering effect and the rate of withdrawal due to adverse drug effects, compared to placebo or no treatment.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomized controlled trials up to November 2017: the Cochrane Hypertension Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (from 1946), Embase (from 1974), the World Health Organization International Clinical Trials Registry Platform, and ClinicalTrials.gov. We contacted authors of relevant papers regarding further published and unpublished work.\nSelection criteria\nRandomized trials (RCT) of at least one year duration, comparing one of six major drug classes with a placebo or no treatment, in adult patients with blood pressure over 140/90 mmHg at baseline. The majority (over 70%) of the patients in the treatment group were taking the drug class of interest after one year. We included trials with both hypertensive and normotensive patients in this review if the majority (over 70%) of patients had elevated blood pressure, or the trial separately reported outcome data on patients with elevated blood pressure.\nData collection and analysis\nThe outcomes assessed were mortality, stroke, coronary heart disease (CHD), total cardiovascular events (CVS), decrease in systolic and diastolic blood pressure, and withdrawals due to adverse drug effects. We used a fixed-effect model to to combine dichotomous outcomes across trials and calculate risk ratio (RR) with 95% confidence interval (CI). We presented blood pressure data as mean difference (MD) with 99% CI.\nMain results\nThe 2017 updated search failed to identify any new trials. The original review identified 24 trials with 28 active treatment arms, including 58,040 patients. We found no RCTs for ARBs or alpha-blockers. These results are mostly applicable to adult patients with moderate to severe primary hypertension. 
The mean age of participants was 56 years, and mean duration of follow-up was three to five years.\nHigh-quality evidence showed that first-line low-dose thiazides reduced mortality (11.0% with control versus 9.8% with treatment; RR 0.89, 95% CI 0.82 to 0.97); total CVS (12.9% with control versus 9.0% with treatment; RR 0.70, 95% CI 0.64 to 0.76), stroke (6.2% with control versus 4.2% with treatment; RR 0.68, 95% CI 0.60 to 0.77), and coronary heart disease (3.9% with control versus 2.8% with treatment; RR 0.72, 95% CI 0.61 to 0.84).\nLow- to moderate-quality evidence showed that first-line high-dose thiazides reduced stroke (1.9% with control versus 0.9% with treatment; RR 0.47, 95% CI 0.37 to 0.61) and total CVS (5.1% with control versus 3.7% with treatment; RR 0.72, 95% CI 0.63 to 0.82), but did not reduce mortality (3.1% with control versus 2.8% with treatment; RR 0.90, 95% CI 0.76 to 1.05), or coronary heart disease (2.7% with control versus 2.7% with treatment; RR 1.01, 95% CI 0.85 to 1.20).\nLow- to moderate-quality evidence showed that first-line beta-blockers did not reduce mortality (6.2% with control versus 6.0% with treatment; RR 0.96, 95% CI 0.86 to 1.07) or coronary heart disease (4.4% with control versus 3.9% with treatment; RR 0.90, 95% CI 0.78 to 1.03), but reduced stroke (3.4% with control versus 2.8% with treatment; RR 0.83, 95% CI 0.72 to 0.97) and total CVS (7.6% with control versus 6.8% with treatment; RR 0.89, 95% CI 0.81 to 0.98).\nLow- to moderate-quality evidence showed that first-line ACE inhibitors reduced mortality (13.6% with control versus 11.3% with treatment; RR 0.83, 95% CI 0.72 to 0.95), stroke (6.0% with control versus 3.9% with treatment; RR 0.65, 95% CI 0.52 to 0.82), coronary heart disease (13.5% with control versus 11.0% with treatment; RR 0.81, 95% CI 0.70 to 0.94), and total CVS (20.1% with control versus 15.3% with treatment; RR 0.76, 95% CI 0.67 to 0.85).\nLow-quality evidence showed that first-line calcium channel blockers reduced stroke (3.4% with control versus 1.9% with treatment; RR 0.58, 95% CI 0.41 to 0.84) and total CVS (8.0% with control versus 5.7% with treatment; RR 0.71, 95% CI 0.57 to 0.87), but not coronary heart disease (3.1% with control versus 2.4% with treatment; RR 0.77, 95% CI 0.55 to 1.09), or mortality (6.0% with control versus 5.1% with treatment; RR 0.86, 95% CI 0.68 to 1.09).\nThere was low-quality evidence that withdrawals due to adverse effects were increased with first-line low-dose thiazides (5.0% with control versus 11.3% with treatment; RR 2.38, 95% CI 2.06 to 2.75), high-dose thiazides (2.2% with control versus 9.8% with treatment; RR 4.48, 95% CI 3.83 to 5.24), and beta-blockers (3.1% with control versus 14.4% with treatment; RR 4.59, 95% CI 4.11 to 5.13). No data for these outcomes were available for first-line ACE inhibitors or calcium channel blockers. The blood pressure data were not used to assess the effect of the different classes of drugs as the data were heterogeneous, and the number of drugs used in the trials differed.\nAuthors' conclusions\nFirst-line low-dose thiazides reduced all morbidity and mortality outcomes in adult patients with moderate to severe primary hypertension. First-line ACE inhibitors and calcium channel blockers may be similarly effective, but the evidence was of lower quality. First-line high-dose thiazides and first-line beta-blockers were inferior to first-line low-dose thiazides.\n\nGiven the provided abstract, please respond to the following query: \".\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ".Duration of these studies ranged from three to five years. Drug classes studied included thiazide diuretics, beta-blockers, ACE inhibitors, and calcium channel blockers."
}
]
|
query-laysum | 1409 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review previously published in 2013. This review is one in a series of Cochrane Reviews investigating pair-wise monotherapy comparisons.\nEpilepsy is a common neurological condition in which abnormal electrical discharges from the brain cause recurrent unprovoked seizures. It is believed that with effective drug treatment, up to 70% of individuals with active epilepsy have the potential to become seizure-free and go into long-term remission shortly after starting drug therapy with a single antiepileptic drug in monotherapy.\nWorldwide, phenytoin is a commonly used antiepileptic drug. It is important to know how newer drugs, such as oxcarbazepine, compare with commonly used standard treatments.\nObjectives\nTo review the time to treatment failure, remission and first seizure with oxcarbazepine compared to phenytoin, when used as monotherapy in people with focal onset seizures or generalised tonic-clonic seizures (with or without other generalised seizure types).\nSearch methods\nWe searched the following databases on 20 August 2018: the Cochrane Register of Studies (CRS Web), which includes the Cochrane Epilepsy Group Specialized Register and the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid, 1946 to 20 August 2018), ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We handsearched relevant journals and contacted pharmaceutical companies, original trial investigators and experts in the field.\nSelection criteria\nWe included randomised controlled trials comparing monotherapy with either oxcarbazepine or phenytoin in children or adults with focal onset seizures or generalised onset tonic-clonic seizures.\nData collection and analysis\nThis was an individual participant data (IPD) review. Our primary outcome was time to treatment failure and our secondary outcomes were time to first seizure post-randomisation, time to six-month and 12-month remission, and incidence of adverse events. We used Cox proportional hazards regression models to obtain trial-specific estimates of hazard ratios (HRs) with 95% confidence intervals (CIs), using the generic inverse variance method to obtain the overall pooled HR and 95% CI.\nMain results\nIndividual participant data were available for 480 out of a total of 517 participants (93%), from two out of three included trials. For remission outcomes, a HR of less than one indicated an advantage for phenytoin; and for first seizure and treatment failure outcomes, a HR of less than one indicated an advantage for oxcarbazepine.\nThe results for time to treatment failure for any reason related to treatment showed a potential advantage of oxcarbazepine over phenytoin, but this was not statistically significant (pooled HR adjusted for epilepsy type: 0.78 95% CI 0.53 to 1.14, 476 participants, two trials, moderate-quality evidence). Our analysis showed that treatment failure due to adverse events occurred later on with oxcarbazepine than phenytoin (pooled HR for all participants: 0.22 (95% CI 0.10 to 0.51, 480 participants, two trials, high-quality evidence). 
Our analysis of time to treatment failure due to lack of efficacy showed no clear difference between the drugs (pooled HR for all participants: 1.17 (95% CI 0.31 to 4.35), 480 participants, two trials, moderate-quality evidence).\nWe found no clear or statistically significant differences between drugs for any of the secondary outcomes of the review: time to first seizure post-randomisation (pooled HR adjusted for epilepsy type: 0.97 95% CI 0.75 to 1.26, 468 participants, two trials, moderate-quality evidence); time to 12-month remission (pooled HR adjusted for epilepsy type 1.04 95% CI 0.77 to 1.41, 468 participants, two trials, moderate-quality evidence) and time to six-month remission (pooled HR adjusted for epilepsy type: 1.06 95% CI 0.82 to 1.36, 468 participants, two trials, moderate-quality evidence).\nThe most common adverse events reported in more than 10% of participants on either drug were somnolence (28% of total participants, with similar rates for both drugs), headache (15% of total participants, with similar rates for both drugs), dizziness (14.5% of total participants, reported by slightly more participants on phenytoin (18%) than oxcarbazepine (11%)) and gum hyperplasia (reported by substantially more participants on phenytoin (18%) than oxcarbazepine (2%)).\nThe results of this review are applicable mainly to individuals with focal onset seizures; 70% of included individuals experienced seizures of this type at baseline. The two studies included in IPD meta-analysis were generally of good methodological quality but the design of the studies may have biased the results for the secondary outcomes (time to first seizure post-randomisation, time to six-month and 12-month remission) as seizure recurrence data were not collected following treatment failure or withdrawal from the study. In addition, misclassification of epilepsy type may have impacted on results, particularly for individuals with generalised onset seizures.\nAuthors' conclusions\nHigh-quality evidence provided by this review indicates that treatment failure due to adverse events occurs significantly later with oxcarbazepine than phenytoin. For individuals with focal onset seizures, moderate-quality evidence suggests that oxcarbazepine may be superior to phenytoin in terms of treatment failure for any reason, seizure recurrence and seizure remission. Therefore, oxcarbazepine may be a preferable alternative treatment than phenytoin, particularly for individuals with focal onset seizures. The evidence in this review which relates to individuals with generalised onset seizures is of low quality and does not inform current treatment policy.\nWe recommend that future trials should be designed to the highest quality possible with regards to choice of population, classification of seizure type, duration of follow-up (including continued follow-up after failure or withdrawal of randomised treatment), choice of outcomes and analysis, and presentation of results.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review found that people taking oxcarbazepine stop taking treatment because of side effects significantly later than people taking phenytoin. Our results also showed that people with focal onset seizures taking phenytoin may stop taking treatment for any reason earlier than people with focal onset seizures taking oxcarbazepine. The results also suggest that people with focal onset seizures taking oxcarbazepine may experience a repeat seizure later, and achieve freedom from seizures earlier, than people with focal onset seizures taking phenytoin. There was no clear difference between the drugs in terms of withdrawal from the treatment, seizure recurrence and seizure remission for individuals with generalised onset seizures."
}
]
|
query-laysum | 1410 | [
{
"role": "user",
"content": "Abstract: Background\nNausea and vomiting remain a problem for children undergoing treatment for malignancies despite new antiemetic therapies. Optimising antiemetic regimens could improve quality of life by reducing nausea, vomiting, and associated clinical problems. This is an update of the original systematic review.\nObjectives\nTo assess the effectiveness and adverse events of pharmacological interventions in controlling anticipatory, acute, and delayed nausea and vomiting in children and young people (aged less than 18 years) about to receive or receiving chemotherapy.\nSearch methods\nSearches included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, LILACS, PsycINFO, conference proceedings of the American Society of Clinical Oncology, International Society of Paediatric Oncology, Multinational Association of Supportive Care in Cancer, and ISI Science and Technology Proceedings Index from incept to December 16, 2014, and trial registries from their earliest records to December 2014. We examined references of systematic reviews and contacted trialists for information on further studies. We also screened the reference lists of included studies.\nSelection criteria\nTwo review authors independently screened abstracts in order to identify randomised controlled trials (RCTs) that compared a pharmacological antiemetic, cannabinoid, or benzodiazepine with placebo or any alternative active intervention in children and young people (less than 18 years) with a diagnosis of cancer who were to receive chemotherapy.\nData collection and analysis\nTwo review authors independently extracted outcome and quality data from each RCT. When appropriate, we undertook meta-analysis.\nMain results\nWe included 34 studies that examined a range of different antiemetics, used different doses and comparators, and reported a variety of outcomes. The quality and quantity of included studies limited the exploration of heterogeneity to narrative approaches only.\nThe majority of quantitative data related to the complete control of acute vomiting (27 studies). Adverse events were reported in 29 studies and nausea outcomes in 16 studies.\nTwo studies assessed the addition of dexamethasone to 5-HT3 antagonists for complete control of vomiting (pooled risk ratio (RR) 2.03; 95% confidence interval (CI) 1.35 to 3.04). Three studies compared granisetron 20 mcg/kg with 40 mcg/kg for complete control of vomiting (pooled RR 0.93; 95% CI 0.80 to 1.07). Three studies compared granisetron with ondansetron for complete control of acute nausea (pooled RR 1.05; 95% CI 0.94 to 1.17; 2 studies), acute vomiting (pooled RR 2.26; 95% CI 2.04 to 2.51; 3 studies), delayed nausea (pooled RR 1.13; 95% CI 0.93 to 1.38; 2 studies), and delayed vomiting (pooled RR 1.13; 95% CI 0.98 to 1.29; 2 studies). No other pooled analyses were possible.\nNarrative synthesis suggests that 5-HT3 antagonists are more effective than older antiemetic agents, even when these agents are combined with a steroid. Cannabinoids are probably effective but produce frequent side effects.\nAuthors' conclusions\nOur overall knowledge of the most effective antiemetics to prevent chemotherapy-induced nausea and vomiting in childhood is incomplete. Future research should be undertaken in consultation with children, young people, and families that have experienced chemotherapy and should make use of validated, age-appropriate measures. 
This review suggests that 5-HT3 antagonists are effective in patients who are to receive emetogenic chemotherapy, with granisetron or palonosetron possibly better than ondansetron. Adding dexamethasone improves control of vomiting, although the risk-benefit profile of adjunctive steroid remains uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How effective are medications to prevent nausea and vomiting in children and young people undergoing chemotherapy?"
}
]
|
query-laysum | 1411 | [
{
"role": "user",
"content": "Abstract: Background\nAccess to combination antiretroviral therapy has turned HIV into a chronic and manageable disease for many. This increased chronicity has been mirrored by increased prevalence of health-related challenges experienced by people living with HIV (Rusch 2004). Exercise is a key strategy for people living with HIV and by rehabilitation professionals to address these disablements; however, knowledge about the effects of exercise among adults living with HIV still is emerging.\nObjectives\nTo examine the safety and effectiveness of aerobic exercise interventions on immunologic and virologic, cardiopulmonary, psychologic outcomes and strength, weight, and body composition in adults living with HIV.\nSearch methods\nSearches of MEDLINE, EMBASE, SCIENCE CITATION INDEX, CINAHL, HEALTHSTAR, PsycINFO, SPORTDISCUS and Cochrane Review Group Databases were conducted between 1980 and June 2009. Searches of published and unpublished abstracts and proceedings from major international and national HIV/AIDS conferences were conducted, as well as a handsearch of reference lists and tables of contents of relevant journals and books.\nSelection criteria\nWe included studies of randomised controlled trials (RCTs) comparing aerobic exercise interventions with no aerobic exercise interventions or another exercise or treatment modality, performed at least three times per week for at least four weeks among adults (18 years of age or older) living with HIV.\nData collection and analysis\nData on study design, participants, interventions, outcomes, and methodological quality were abstracted from included studies by two reviewers. Meta-analyses, using RevMan 5 computer software, were performed on outcomes when possible.\nMain results\nA total of 14 studies met inclusion criteria for this review and 30 meta-analyses over several updates were performed. Main results indicated that performing constant or interval aerobic exercise, or a combination of constant aerobic exercise and progressive resistive exercise for at least 20 minutes at least three times per week for at least five weeks appears to be safe and may lead to significant improvements in selected outcomes of cardiopulmonary fitness (maximum oxygen consumption), body composition (leg muscle area, percent body fat), and psychological status (depression-dejection symptoms). These findings are limited to participants who continued to exercise and for whom there were adequate follow-up data.\nAuthors' conclusions\nAerobic exercise appears to be safe and may be beneficial for adults living with HIV. These findings are limited by the small sample sizes and large withdrawal rates described in the studies. Future research would benefit from participant follow-up and intention-to-treat analysis. Further research is required to determine the optimal parameters in which aerobic exercise may be most beneficial for adults living with HIV.\n\nGiven the provided abstract, please respond to the following query: \"Performing aerobic exercise or a combination of aerobic exercise and resistive exercise for at least 20 minutes, at least three times per week for at least five weeks appears to be safe and may improve fitness, body composition, and well-being for adults living with HIV.\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Exercise is used by many people living with HIV to improve fitness, well-being, and body image. Exercise also is used as a strategy to diminish the health-related consequences of HIV and associated treatments. This review of 14 trials found that performing constant or interval aerobic exercise, or a combination of constant aerobic exercise and progressive resistive exercise, for at least 20 minutes, three times per week for at least five weeks appears to be safe and may be able to improve fitness, body composition, and well-being for adults living with HIV. More high-quality studies are needed to better evaluate the evidence on under-represented groups, such as women and older adults living with HIV, and those who discontinue their exercise programs."
}
]
|
query-laysum | 1412 | [
{
"role": "user",
"content": "Abstract: Background\nEditor's note: The anti-inflammatory drug rofecoxib (Vioxx) was withdrawn from the market at the end of September 2004 after it was shown that long-term use (greater than 18 months) could increase the risk of heart attack and stroke in a study of secondary prevention of adenoma recurrence. Further information is available at www.vioxx.com.\nRofecoxib is a selective cyclooxygenase-2 (COX-2) inhibitor previously licensed for treating acute and chronic pain; it was associated with fewer gastrointestinal adverse events than conventional NSAIDs. An earlier Cochrane review (Barden 2005) showed that rofecoxib is at least as effective as conventional non-steroidal anti-inflammatory drugs (NSAIDs) for postoperative pain.\nObjectives\nTo assess the analgesic efficacy and adverse effects of rofecoxib in single oral doses for moderate and severe postoperative pain.\nSearch methods\nWe searched Cochrane CENTRAL, MEDLINE, EMBASE and the Oxford Pain Relief Database for studies to June 2009.\nSelection criteria\nRandomised, double blind, placebo-controlled trials of single dose orally administered rofecoxib in adults with moderate to severe acute postoperative pain.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. Pain relief or pain intensity data were extracted and converted into the dichotomous outcome of number of participants with at least 50% pain relief over 4 to 6 hours, from which relative risk and number needed to treat to benefit (NNT) were calculated. Numbers of participants using rescue medication over specified time periods, and time to use of rescue medication, were sought as additional measures of efficacy. Information on adverse events and withdrawals was collected.\nMain results\nTwenty new studies and seven from the earlier review met the inclusion criteria. Twenty-four studies were in dental surgery and three in other types of surgery. In total, 2636 participants were treated with rofecoxib 50 mg, 20 with rofecoxib 500 mg, and 1251 with placebo. The NNT for at least 50% pain relief over 4 to 6 hours with rofecoxib 50 mg was 2.2 (2.0 to 2.3) in all studies combined, 1.9 (1.8 to 2.0) in dental studies, and 6.8 (4.6 to 13) in other types of surgery. The median time to use of rescue medication was 14 hours for rofecoxib 50 mg and 2 hours for placebo. Significantly fewer participants used rescue medication following rofecoxib 50 mg than with placebo. Adverse events did not differ from placebo.\nAuthors' conclusions\nRofecoxib 50 mg (two to four times the standard daily dose for chronic pain) is an effective single dose oral analgesic for acute postoperative pain in adults, with a relatively long duration of action.\n\nGiven the provided abstract, please respond to the following query: \"Editor's note: The anti-inflammatory drug rofecoxib (Vioxx) was withdrawn from the market at the end of September 2004 after it was shown that long-term use (greater than 18 months) could increase the risk of heart attack and stroke in a study of secondary prevention of adenoma recurrence. Further information is available at www.vioxx.com.\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Editor's note: The anti-inflammatory drug rofecoxib (Vioxx) was withdrawn from the market at the end of September 2004 after it was shown that long-term use (greater than 18 months) could increase the risk of heart attack and stroke in a study of secondary prevention of adenoma recurrence. Further information is available at www.vioxx.com."
}
]
|
query-laysum | 1413 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane Review published in September 2014. The most common primary brain tumours in adults are gliomas. Gliomas span a spectrum from low to high grade and are graded pathologically on a scale of one to four according to the World Health Organization (WHO) classification. High-grade glioma (HGG) carries a poor prognosis. Grade IV glioma is known as glioblastoma and carries a median survival in treated patients of about 15 months. Glioblastomas are rich in blood vessels (i.e. highly vascular) and also rich in a protein known as vascular endothelial growth factor (VEGF) that promotes new blood vessel formation (the process of angiogenesis). Anti-angiogenic agents inhibit the process of new blood vessel formation and promote regression of existing vessels. Several anti-angiogenic agents have been investigated in clinical trials, both in newly diagnosed and recurrent HGG, showing preliminary promising results. This review was undertaken to report on the benefits and harms associated with the use of anti-angiogenic agents in the treatment of HGGs.\nObjectives\nTo evaluate the efficacy and toxicity of anti-angiogenic therapy in people with high-grade glioma (HGG). The intervention can be used in two broad groups: at first diagnosis as part of 'adjuvant' therapy, or in the setting of recurrent disease.\nSearch methods\nWe conducted updated searches to identify published and unpublished randomised controlled trials (RCTs), including the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 9), MEDLINE and Embase to October 2018. We handsearched proceedings of relevant oncology conferences up to 2018. We also searched trial registries for ongoing studies.\nSelection criteria\nRCTs evaluating the use of anti-angiogenic therapy to treat HGG versus the same therapy without anti-angiogenic therapy.\nData collection and analysis\nReview authors screened the search results and reviewed the abstracts of potentially relevant articles before retrieving the full text of eligible articles.\nMain results\nAfter a comprehensive literature search, we identified 11 eligible RCTs (3743 participants), of which 7 were included in the original review (2987 participants). There was significant design heterogeneity in the included studies, especially in the response assessment criteria used. All eligible studies were restricted to glioblastomas and there were no eligible studies evaluating other HGGs. Ten studies were available as fully published peer-reviewed manuscripts, and one study was available in abstract form. The overall risk of bias in included studies was low. This risk was based upon low rates of selection bias, detection bias, attrition bias and reporting bias. The 11 studies included in this review did not show an improvement in overall survival with the addition of anti-angiogenic therapy (pooled hazard ratio (HR) of 0.95, 95% confidence interval (CI) 0.88 to 1.02; P = 0.16; 11 studies, 3743 participants; high-certainty evidence). However, pooled analysis from 10 studies (3595 participants) showed improvement in progression-free survival with the addition of anti-angiogenic therapy (HR 0.73, 95% CI 0.68 to 0.79; P < 0.00001; high-certainty evidence).\nWe carried out additional analyses of overall survival and progression-free survival according to treatment setting and for anti-angiogenic therapy combined with chemotherapy compared to chemotherapy alone. 
Pooled analysis of overall survival in either the adjuvant or recurrent setting did not show an improvement (HR 0.93, 95% CI 0.86 to 1.02; P = 0.12; 8 studies, 2833 participants; high-certainty evidence and HR 0.99, 95% CI 0.85 to 1.16; P = 0.90; 3 studies, 910 participants; moderate-certainty evidence, respectively). Pooled analysis of overall survival for anti-angiogenic therapy combined with chemotherapy compared to chemotherapy also did not clearly show an improvement (HR 0.92, 95% CI 0.85 to 1.00; P = 0.05; 11 studies, 3506 participants; low-certainty evidence). The progression-free survival in the subgroups all showed findings that demonstrated improvements in progression-free survival with the addition of anti-angiogenic therapy. Pooled analysis of progression-free survival in both the adjuvant and recurrent setting showed an improvement (HR 0.75, 95% CI 0.69 to 0.82; P < 0.00001; 8 studies, 2833 participants; high-certainty evidence and HR 0.64, 95% CI 0.54 to 0.76; P < 0.00001; 2 studies, 762 participants; moderate-certainty evidence, respectively). Pooled analysis of progression-free survival for anti-angiogenic therapy combined with chemotherapy compared to chemotherapy alone showed an improvement (HR 0.72, 95% CI 0.66 to 0.77; P < 0.00001; 10 studies, 3464 participants). Similar to trials of anti-angiogenic therapies in other solid tumours, adverse events related to this class of therapy included hypertension and proteinuria, poor wound healing, and the potential for thromboembolic events, although generally, the rate of grade 3 and 4 adverse events was low (< 14.1%) and in keeping with the literature. The impact of anti-angiogenic therapy on quality of life varied between studies.\nAuthors' conclusions\nThe use of anti-angiogenic therapy does not significantly improve overall survival in newly diagnosed people with glioblastoma. Thus, there is insufficient evidence to support the use of anti-angiogenic therapy for people with newly diagnosed glioblastoma at this time. Overall there is a lack of evidence of a survival advantage for anti-angiogenic therapy over chemotherapy in recurrent glioblastoma. When considering the combination anti-angiogenic therapy with chemotherapy compared with the same chemotherapy alone, there may possibly be a small improvement in overall survival. While there is strong evidence that bevacizumab (an anti-angiogenic drug) prolongs progression-free survival in newly diagnosed and recurrent glioblastoma, the impact of this on quality of life and net clinical benefit for patients remains unclear. Not addressed here is whether subsets of people with glioblastoma may benefit from anti-angiogenic therapies, nor their utility in other HGG histologies.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After a comprehensive search up to October 2018, we identified 11 eligible randomised clinical trials (totaling 3743 participants). All eligible studies were restricted to glioblastomas; there were no eligible studies that included other brain tumour types. The largest trials were conducted in newly diagnosed people with glioblastoma, treated with anti-angiogenic therapy. Overall, the trials included in this systematic review did not show improvement in overall survival with the use of anti-angiogenic therapy. Overall, the clinical trials in bevacizumab-treated glioblastoma did prolong the time until tumour growth (progression-free survival).Key resultsAnti-angiogenic therapy does not significantly prolong life in newly diagnosed people with glioblastoma. In recurrent glioblastoma although there is no evidence of prolonging life over chemotherapy, when anti-angiogenic therapies are used in combination with certain chemotherapy regimes there may be a small improvement in survival. Anti-angiogenic agents delay tumour progression on magnetic resonance imaging (MRI) scans but is commonly associated with side effects, such as high blood pressure and protein in the urine."
}
]
|
query-laysum | 1414 | [
{
"role": "user",
"content": "Abstract: Background\nPostoperative pancreatic fistula is a common and serious complication following pancreaticoduodenectomy. Duct-to-mucosa pancreaticojejunostomy has been used in many centers to reconstruct pancreatic digestive continuity following pancreatoduodenectomy, however, its efficacy and safety are uncertain.\nObjectives\nTo assess the benefits and harms of duct-to-mucosa pancreaticojejunostomy versus other types of pancreaticojejunostomy for the reconstruction of pancreatic digestive continuity in participants undergoing pancreaticoduodenectomy, and to compare the effects of different duct-to-mucosa pancreaticojejunostomy techniques.\nSearch methods\nWe searched the Cochrane Library (2021, Issue 1), MEDLINE (1966 to 9 January 2021), Embase (1988 to 9 January 2021), and Science Citation Index Expanded (1982 to 9 January 2021).\nSelection criteria\nWe included all randomized controlled trials (RCTs) that compared duct-to-mucosa pancreaticojejunostomy with other types of pancreaticojejunostomy (e.g. invagination pancreaticojejunostomy, binding pancreaticojejunostomy) in participants undergoing pancreaticoduodenectomy. We also included RCTs that compared different types of duct-to-mucosa pancreaticojejunostomy in participants undergoing pancreaticoduodenectomy.\nData collection and analysis\nTwo review authors independently identified the studies for inclusion, collected the data, and assessed the risk of bias. We performed the meta-analyses using Review Manager 5. We calculated the risk ratio (RR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes with 95% confidence intervals (CIs). For all analyses, we used the random-effects model. We used the Cochrane RoB 1 tool to assess the risk of bias. We used GRADE to assess the certainty of the evidence for all outcomes.\nMain results\nWe included 11 RCTs involving a total of 1696 participants in the review. One RCT was a dual-center study; the other 10 RCTs were single-center studies conducted in: China (4 studies); Japan (2 studies); USA (1 study); Egypt (1 study); Germany (1 study); India (1 study); and Italy (1 study). The mean age of participants ranged from 54 to 68 years. All RCTs were at high risk of bias.\nDuct-to-mucosa versus any other type of pancreaticojejunostomy\nWe included 10 RCTs involving 1472 participants comparing duct-to-mucosa pancreaticojejunostomy with invagination pancreaticojejunostomy: 732 participants were randomized to the duct-to-mucosa group, and 740 participants were randomized to the invagination group after pancreaticoduodenectomy. Comparing the two techniques, the evidence is very uncertain for the rate of postoperative pancreatic fistula (grade B or C; RR 1.45, 95% CI 0.64 to 3.26; 7 studies, 1122 participants; very low-certainty evidence), postoperative mortality (RR 0.77, 95% CI 0.39 to 1.49; 10 studies, 1472 participants; very low-certainty evidence), rate of surgical reintervention (RR 1.12, 95% CI 0.65 to 1.95; 10 studies, 1472 participants; very low-certainty evidence), rate of postoperative bleeding (RR 0.85, 95% CI 0.51 to 1.42; 9 studies, 1275 participants; very low-certainty evidence), overall rate of surgical complications (RR 1.12, 95% CI 0.92 to 1.36; 5 studies, 750 participants; very low-certainty evidence), and length of hospital stay (MD -0.41 days, 95% CI -1.87 to 1.04; 4 studies, 658 participants; very low-certainty evidence). 
The studies did not report adverse events or quality of life outcomes.\nOne type of duct-to-mucosa pancreaticojejunostomy versus a different type of duct-to-mucosa pancreaticojejunostomy\nWe included one RCT involving 224 participants comparing duct-to-mucosa pancreaticojejunostomy using the modified Blumgart technique with duct-to-mucosa pancreaticojejunostomy using the traditional interrupted technique: 112 participants were randomized to the modified Blumgart group, and 112 participants were randomized to the traditional interrupted group after pancreaticoduodenectomy. Comparing the two techniques, the evidence is very uncertain for the rate of postoperative pancreatic fistula (grade B or C; RR 1.51, 95% CI 0.61 to 3.75; 1 study, 210 participants; very low-certainty evidence), postoperative mortality (there were no deaths in either group; 1 study, 210 participants; very low-certainty evidence), rate of surgical reintervention (RR 1.93, 95% CI 0.18 to 20.91; 1 study, 210 participants; very low-certainty evidence), rate of postoperative bleeding (RR 2.89, 95% CI 0.12 to 70.11; 1 study, 210 participants; very low-certainty evidence), overall rate of surgical complications (RR 1.10, 95% CI 0.80 to 1.51; 1 study, 210 participants; very low-certainty evidence), and length of hospital stay (15 days versus 15 days; 1 study, 210 participants; very low-certainty evidence). The study did not report adverse events or quality of life outcomes.\nAuthors' conclusions\nThe evidence is very uncertain about the effects of duct-to-mucosa pancreaticojejunostomy compared to invagination pancreaticojejunostomy on any of the outcomes, including rate of postoperative pancreatic fistula (grade B or C), postoperative mortality, rate of surgical reintervention, rate of postoperative bleeding, overall rate of surgical complications, and length of hospital stay. The evidence is also very uncertain whether duct-to-mucosa pancreaticojejunostomy using the modified Blumgart technique is superior, equivalent or inferior to duct-to-mucosa pancreaticojejunostomy using the traditional interrupted technique. None of the studies reported adverse events or quality of life outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out whether duct-to-mucosa pancreaticojejunostomy is better than any other type of pancreaticojejunostomy to improve:\n- rate of postoperative pancreatic fistula (proportion of participants with postoperative pancreatic fistula);\n- death rate.\nWe also wanted to find out if duct-to-mucosa pancreaticojejunostomy is associated with any unwanted effects."
}
]
|
query-laysum | 1415 | [
{
"role": "user",
"content": "Abstract: Background\nUlcerative colitis is an inflammatory condition affecting the colon, with an annual incidence of approximately 10 to 20 per 100,000 people. The majority of people with ulcerative colitis can be put into remission, leaving a group who do not respond to first- or second-line therapies. There is a significant proportion of people who experience adverse effects with current therapies. Consequently, new alternatives for the treatment of ulcerative colitis are constantly being sought. Probiotics are live microbial feed supplements that may beneficially affect the host by improving intestinal microbial balance, enhancing gut barrier function and improving local immune response.\nObjectives\nTo assess the efficacy of probiotics compared with placebo or standard medical treatment (5-aminosalicylates, sulphasalazine or corticosteroids) for the induction of remission in people with active ulcerative colitis.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and two other databases on 31 October 2019. We contacted authors of relevant studies and manufacturers of probiotics regarding ongoing or unpublished trials that may be relevant to the review, and we searched ClinicalTrials.gov. We also searched references of trials for any additional trials.\nSelection criteria\nRandomised controlled trials (RCTs) investigating the effectiveness of probiotics compared to standard treatments or placebo in the induction of remission of active ulcerative colitis. We considered both adults and children, with studies reporting outcomes of clinical, endoscopic, histologic or surgical remission as defined by study authors\nData collection and analysis\nTwo review authors independently conducted data extraction and 'Risk of bias' assessment of included studies. We analysed data using Review Manager 5. We expressed dichotomous and continuous outcomes as risk ratios (RRs) and mean differences (MDs) with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the GRADE methodology.\nMain results\nIn this review, we included 14 studies (865 randomised participants) that met the inclusion criteria. Twelve of the studies looked at adult participants and two studies looked at paediatric participants with mild to moderate ulcerative colitis, the average age was between 12.5 and 47.7 years. The studies compared probiotics to placebo, probiotics to 5-ASA and a combination of probiotics plus 5-ASA compared to 5-ASA alone. Seven studies used a single probiotic strain and seven used a mixture of strains. The studies ranged from two weeks to 52 weeks. The risk of bias was high for all except two studies due to allocation concealment, blinding of participants, incomplete reports of outcome data and selective reporting. This led to GRADE ratings of the evidence ranging from moderate to very low.\nProbiotics versus placebo\nProbiotics may induce clinical remission when compared to placebo (RR 1.73, 95% CI 1.19 to 2.54; 9 studies, 594 participants; low-certainty evidence; downgraded due to imprecision and risk of bias, number needed to treat for an additional beneficial outcome (NNTB) 5). Probiotics may lead to an improvement in clinical disease scores (RR 2.29, 95% CI 1.13 to 4.63; 2 studies, 54 participants; downgraded due to risk of bias and imprecision).\nThere may be little or no difference in minor adverse events, but the evidence is of very low certainty (RR 1.04, 95% CI 0.42 to 2.59; 7 studies, 520 participants). 
Reported adverse events included abdominal bloating and discomfort. Probiotics did not lead to any serious adverse events in any of the seven studies that reported on it, however five adverse events were reported in the placebo arm of one study (RR 0.09, CI 0.01 to 1.66; 1 study, 526 participants; very low-certainty evidence; downgraded due to high risk of bias and imprecision). Probiotics may make little or no difference to withdrawals due to adverse events (RR 0.85, 95% CI 0.42 to 1.72; 4 studies, 401 participants; low-certainty evidence).\nProbiotics versus 5-ASA\nThere may be little or no difference in the induction of remission with probiotics when compared to 5-ASA (RR 0.92, 95% CI 0.73 to 1.16; 1 study, 116 participants; low-certainty evidence; downgraded due to risk of bias and imprecision). There may be little or no difference in minor adverse events, but the evidence is of very low certainty (RR 1.33, 95% CI 0.53 to 3.33; 1 study, 116 participants). Reported adverse events included abdominal pain, nausea, headache and mouth ulcers. There were no serious adverse events with probiotics, however perforated sigmoid diverticulum and respiratory failure in a patient with severe emphysema were reported in the 5-ASA arm (RR 0.21, 95% CI 0.01 to 4.22; 1 study, 116 participants; very low-certainty evidence).\nProbiotics combined with 5-ASA versus 5-ASA alone\nLow-certainty evidence from a single study shows that when combined with 5-ASA, probiotics may slightly improve the induction of remission (based on the Sunderland disease activity index) compared to 5-ASA alone (RR 1.22 CI 1.01 to 1.47; 1 study, 84 participants; low-certainty evidence; downgraded due to unclear risk of bias and imprecision). No information about adverse events was reported.\nTime to remission, histological and biochemical outcomes were sparsely reported in the studies. None of the other secondary outcomes (progression to surgery, need for additional therapy, quality of life scores, or steroid withdrawal) were reported in any of the studies.\nAuthors' conclusions\nLow-certainty evidence suggests that probiotics may induce clinical remission in active ulcerative colitis when compared to placebo. There may be little or no difference in clinical remission with probiotics alone compared to 5-ASA. There is limited evidence from a single study which failed to provide a definition of remission, that probiotics may slightly improve the induction of remission when used in combination with 5-ASA. There was no evidence to assess whether probiotics are effective in people with severe and more extensive disease, or if specific preparations are superior to others. Further targeted and appropriately designed RCTs are needed to address the gaps in the evidence base. In particular, appropriate powering of studies and the use of standardised participant groups and outcome measures in line with the wider field are needed, as well as reporting to minimise risk of bias.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out whether probiotics can induce remission in people with ulcerative colitis. We analysed data from 14 studies to answer this question."
}
]
|
query-laysum | 1416 | [
{
"role": "user",
"content": "Abstract: Background\nCannabis has a long history of medicinal use. Cannabis-based medications (cannabinoids) are based on its active element, delta-9-tetrahydrocannabinol (THC), and have been approved for medical purposes. Cannabinoids may be a useful therapeutic option for people with chemotherapy-induced nausea and vomiting that respond poorly to commonly used anti-emetic agents (anti-sickness drugs). However, unpleasant adverse effects may limit their widespread use.\nObjectives\nTo evaluate the effectiveness and tolerability of cannabis-based medications for chemotherapy-induced nausea and vomiting in adults with cancer.\nSearch methods\nWe identified studies by searching the following electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO and LILACS from inception to January 2015. We also searched reference lists of reviews and included studies. We did not restrict the search by language of publication.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared a cannabis-based medication with either placebo or with a conventional anti-emetic in adults receiving chemotherapy.\nData collection and analysis\nAt least two review authors independently conducted eligibility and risk of bias assessment, and extracted data. We grouped studies based on control groups for meta-analyses conducted using random effects. We expressed efficacy and tolerability outcomes as risk ratio (RR) with 95% confidence intervals (CI).\nMain results\nWe included 23 RCTs. Most were of cross-over design, on adults undergoing a variety of chemotherapeutic regimens ranging from moderate to high emetic potential for a variety of cancers. The majority of the studies were at risk of bias due to either lack of allocation concealment or attrition. Trials were conducted between 1975 and 1991. No trials involved comparison with newer anti-emetic drugs such as ondansetron.\nComparison with placebo\nPeople had more chance of reporting complete absence of vomiting (3 trials; 168 participants; RR 5.7; 95% CI 2.6 to 12.6; low quality evidence) and complete absence of nausea and vomiting (3 trials; 288 participants; RR 2.9; 95% CI 1.8 to 4.7; moderate quality evidence) when they received cannabinoids compared with placebo. The percentage of variability in effect estimates that was due to heterogeneity rather than chance was not important (I2 = 0% in both analyses).\nPeople had more chance of withdrawing due to an adverse event (2 trials; 276 participants; RR 6.9; 95% CI 1.96 to 24; I2 = 0%; very low quality evidence) and less chance of withdrawing due to lack of efficacy when they received cannabinoids, compared with placebo (1 trial; 228 participants; RR 0.05; 95% CI 0.0 to 0.89; low quality evidence). 
In addition, people had more chance of 'feeling high' when they received cannabinoids compared with placebo (3 trials; 137 participants; RR 31; 95% CI 6.4 to 152; I2 = 0%).\nPeople reported a preference for cannabinoids rather than placebo (2 trials; 256 participants; RR 4.8; 95% CI 1.7 to 13; low quality evidence).\nComparison with other anti-emetics\nThere was no evidence of a difference between cannabinoids and prochlorperazine in the proportion of participants reporting no nausea (5 trials; 258 participants; RR 1.5; 95% CI 0.67 to 3.2; I2 = 63%; low quality evidence), no vomiting (4 trials; 209 participants; RR 1.11; 95% CI 0.86 to 1.44; I2 = 0%; moderate quality evidence), or complete absence of nausea and vomiting (4 trials; 414 participants; RR 2.0; 95% CI 0.74 to 5.4; I2 = 60%; low quality evidence). Sensitivity analysis where the two parallel group trials were pooled after removal of the five cross-over trials showed no difference (RR 1.1; 95% CI 0.70 to 1.7) with no heterogeneity (I2 = 0%).\nPeople had more chance of withdrawing due to an adverse event (5 trials; 664 participants; RR 3.9; 95% CI 1.3 to 12; I2 = 17%; low quality evidence), due to lack of efficacy (1 trial; 42 participants; RR 3.5; 95% CI 1.4 to 8.9; very low quality evidence) and for any reason (1 trial; 42 participants; RR 3.5; 95% CI 1.4 to 8.9; low quality evidence) when they received cannabinoids compared with prochlorperazine.\nPeople had more chance of reporting dizziness (7 trials; 675 participants; RR 2.4; 95% CI 1.8 to 3.1; I2 = 12%), dysphoria (3 trials; 192 participants; RR 7.2; 95% CI 1.3 to 39; I2 = 0%), euphoria (2 trials; 280 participants; RR 18; 95% CI 2.4 to 133; I2 = 0%), 'feeling high' (4 trials; 389 participants; RR 6.2; 95% CI 3.5 to 11; I2 = 0%) and sedation (8 trials; 947 participants; RR 1.4; 95% CI 1.2 to 1.8; I2 = 31%), with significantly more participants reporting the incidence of these adverse events with cannabinoids compared with prochlorperazine.\nPeople reported a preference for cannabinoids rather than prochlorperazine (7 trials; 695 participants; RR 3.3; 95% CI 2.2 to 4.8; I2 = 51%; low quality evidence).\nIn comparisons with metoclopramide, domperidone and chlorpromazine, there was weaker evidence, based on fewer trials and participants, for higher incidence of dizziness with cannabinoids.\nTwo trials with 141 participants compared an anti-emetic drug alone with a cannabinoid added to the anti-emetic drug. There was no evidence of differences between groups; however, the majority of the analyses were based on one small trial with few events.\nQuality of the evidence\nThe trials were generally at low to moderate risk of bias in terms of how they were designed and do not reflect current chemotherapy and anti-emetic treatment regimens. Furthermore, the quality of evidence arising from meta-analyses was graded as low for the majority of the outcomes analysed, indicating that we are not very confident in our ability to say how well the medications worked. Further research is likely to have an important impact on the results.\nAuthors' conclusions\nCannabis-based medications may be useful for treating refractory chemotherapy-induced nausea and vomiting. However, methodological limitations of the trials limit our conclusions and further research reflecting current chemotherapy regimens and newer anti-emetic drugs is likely to modify these conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "As many as three-quarters of people who receive chemotherapy experience nausea (feeling sick) and vomiting (being sick), which many find distressing. While conventional anti-sickness medicines are effective, they do not work for everyone, all of the time. Therapeutic drugs based on the active ingredient of cannabis, known as THC (delta-9-tetrahydrocannabinol), have been approved for use as anti-sickness medicines in some countries."
}
]
|