dataset (stringclasses, 1 value) | id (stringlengths 1 to 5) | messages (listlengths 2 to 2)
---|---|---|
query-laysum | 5456 | [
{
"role": "user",
"content": "Abstract: Background\nBeta-blockers are an essential part of standard therapy in adult congestive heart failure and therefore, are expected to be beneficial in children. However, congestive heart failure in children differs from that in adults in terms of characteristics, aetiology, and drug clearance. Therefore, paediatric needs must be specifically investigated. This is an update of a Cochrane review previously published in 2009.\nObjectives\nTo assess the effect of beta-adrenoceptor-blockers (beta-blockers) in children with congestive heart failure.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and LILACS up to November 2015. Bibliographies of identified studies were checked. No language restrictions were applied.\nSelection criteria\nRandomised, controlled, clinical trials investigating the effect of beta-blocker therapy on paediatric congestive heart failure.\nData collection and analysis\nTwo review authors independently extracted and assessed data from the included trials.\nMain results\nWe identified four new studies for the review update; the review now includes seven studies with 420 participants. Four small studies with 20 to 30 children each, and two larger studies of 80 children each, showed an improvement of congestive heart failure with beta-blocker therapy. A larger study with 161 participants showed no evidence of benefit over placebo in a composite measure of heart failure outcomes. The included studies showed no significant difference in mortality or heart transplantation rates between the beta-blocker and control groups. No significant adverse events were reported with beta-blockers, apart from one episode of complete heart block. A meta-analysis of left ventricular ejection fraction (LVEF) and fractional shortening (LVFS) data showed a very small improvement with beta-blockers.\nHowever, there were vast differences in the age, age range, and health of the participants (aetiology and severity of heart failure; heterogeneity of diagnoses and co-morbidities); there was a range of treatments across studies (choice of beta-blocker, dosing, duration of treatment); and a lack of standardised methods and outcome measures. Therefore, the primary outcomes could not be pooled in meta-analyses.\nAuthors' conclusions\nThere is not enough evidence to support or discourage the use of beta-blockers in children with congestive heart failure, or to propose a paediatric dosing scheme. However, the sparse data available suggested that children with congestive heart failure might benefit from beta-blocker treatment. Further investigations in clearly defined populations with standardised methodology are required to establish guidelines for therapy. Pharmacokinetic investigations of beta-blockers in children are also required to provide effective dosing in future trials.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were not enough data to recommend or discourage the use of beta-blockers in children with congestive heart failure. However, the current available data suggest that children with heart failure might benefit from beta-blocker treatment. Further investigations are required to establish guidelines for therapy."
}
]
|
query-laysum | 5457 | [
{
"role": "user",
"content": "Abstract: Background\nBreakdown of the developmentally immature epidermal barrier may permit entry for micro-organisms leading to invasive infection in preterm infants. Topical emollients may improve skin integrity and barrier function and thereby prevent invasive infection, a major cause of mortality and morbidity in preterm infants.\nObjectives\nTo assess the effect of topical application of emollients (ointments, creams, or oils) on the risk of invasive infection and mortality in preterm infants.\nSearch methods\nWe searched CENTRAL via Cochrane Register of Studies (CRS) Web and MEDLINE via Ovid (updated 08 January 2021) and the reference lists of retrieved articles.\nSelection criteria\nRandomised or quasi-randomised controlled trials that assessed the effect of prophylactic application of topical emollient on the risk of invasive infection, mortality, other morbidity, and growth and development in preterm infants.\nData collection and analysis\nWe used the standard methods of Cochrane Neonatal. Two review authors separately evaluated trial quality, extracted data, and synthesised effect estimates using risk ratio (RR), risk difference (RD), and mean difference. We used the GRADE approach to assess the certainty of evidence for effects on mortality and invasive infection.\nMain results\nWe included 22 trials with a total of 5578 infant participants. The main potential sources of bias were lack of clarity on the methods used to generate random sequences and conceal allocation in half of the trials, and lack of masking of parents, caregivers, clinicians, and investigators in all of the trials.\nEight trials (2086 infants) examined the effect of topical ointments or creams. Most participants were very preterm infants cared for in healthcare facilities in high-income countries. Meta-analyses suggested that topical ointments or creams may have little or no effect on invasive infection (RR 1.13, 95% confidence interval (CI) 0.97 to 1.31; low certainty evidence) or mortality (RR 0.94, 95% CI 0.82 to 1.08; low certainty evidence).\nFifteen trials (3492 infants) assessed the effect of topical plant or vegetable oils. Most of these trials were undertaken in low- or middle-income countries and were based in healthcare facilities. One large (2249 infants) community-based trial occurred in a rural field practice in India. Meta-analyses suggested that topical oils may reduce invasive infection (RR 0.71, 95% CI 0.52 to 0.96; I² = 52%; low certainty evidence) but have little or no effect on mortality (RR 0.94, 95% CI 0.82 to 1.08, I² = 3%; low certainty evidence).\nOne trial (316 infants) that compared petroleum-based ointment versus sunflower seed oil in very preterm infants in Bangladesh showed little or no effect on invasive infection (RR 0.91, 95% CI 0.57 to 1.46; low certainty evidence), but suggested that ointment may lower mortality slightly (RR 0.82, 95% CI 0.68 to 0.98; RD -0.12, 95% CI -0.23 to -0.01; number needed to treat for an additional beneficial outcome 8, 95% CI 4 to 100; low certainty evidence). One trial (64 infants) that assessed the effect of coconut oil versus mineral oil in preterm infants with birth weight 1500 g to 2000 g in India reported no episodes of invasive infection or death in either group (very low certainty evidence).\nAuthors' conclusions\nThe level of certainty about the effects of emollient therapy on invasive infection or death in preterm infants is low. 
Since these interventions are mostly inexpensive, readily accessible, and generally acceptable, further good-quality randomised controlled trials in healthcare facilities, and in community settings in low- or middle-income countries, may be justified.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "These analyses provide low certainty evidence about the effects of emollient therapy on serious infection or death in preterm infants. Since these interventions are mostly inexpensive, readily accessible, and generally acceptable, further good-quality randomised controlled trials in healthcare facilities, and in community-settings in low- or middle-income countries, may be justified."
}
]
|
query-laysum | 5458 | [
{
"role": "user",
"content": "Abstract: Background\nOvarian cancer is the seventh most common cancer among women and the leading cause of death in women with gynaecological malignancies. Opinions differ regarding the role of ultra-radical (extensive) cytoreductive surgery in ovarian cancer treatment.\nObjectives\nTo evaluate the effectiveness and morbidity associated with ultra-radical/extensive surgery in the management of advanced-stage epithelial ovarian cancer.\nSearch methods\nWe searched CENTRAL (2021, Issue 11), MEDLINE Ovid and Embase Ovid up to November 2021. We also searched registers of clinical trials, abstracts of scientific meetings, reference lists of included studies and contacted experts in the field.\nSelection criteria\nRandomised controlled trials (RCTs) or non-randomised studies (NRS), analysed using multivariate methods, that compared ultra-radical/extensive and standard surgery in women with advanced primary epithelial ovarian cancer.\nData collection and analysis\nTwo review authors independently assessed whether potentially relevant studies met the inclusion criteria, abstracted data and assessed the risk of bias. We identified three NRS and conducted meta-analyses where possible.\nMain results\nWe identified three retrospective observational studies for inclusion in the review. Two studies included women exclusively undergoing upfront primary debulking surgery (PDS) and the other study including both PDS and interval debulking surgical (IDS) procedures. All studies were at critical risk of bias due to retrospective and non-randomised study designs.\nMeta-analysis of two studies, assessing 397 participants, found that women who underwent radical procedures, as part of PDS, may have a lower risk of mortality compared to women who underwent standard surgery (adjusted HR 0.60, 95% CI 0.43 to 0.82; I2 = 0%; very low-certainty evidence), but the evidence is very uncertain. The results were robust to a sensitivity analysis including women with more-extensive disease (carcinomatosis) (adjusted HR 0.61, 95% CI 0.44 to 0.85; I2 = 0%; n = 283, very low-certainty evidence), but the evidence is very uncertain.\nOne study reported a comparison of radical versus standard surgical procedures associated with both PDS and IDS procedures, but a multivariate analysis was only undertaken for disease-free survival (DFS) and therefore the certainty of the evidence was not assessable for overall survival (OS) and remains very low. The lack of reporting of OS meant the study was at high risk of bias for selective reporting of outcomes.\nOne study, 203 participants, found that women who underwent radical procedures as part of PDS may have a lower risk of disease progression or death compared to women who underwent standard surgery (adjusted HR 0.62, 95% CI 0.42 to 0.92; very low-certainty evidence), but the evidence is very uncertain. The results were robust to a sensitivity analysis in one study including women with carcinomatosis (adjusted HR 0.52, 95% CI 0.33 to 0.82; n = 139; very low-certainty evidence), but the evidence is very uncertain.\nA combined analysis in one study found that women who underwent radical procedures (using both PDS and IDS) may have an increased chance of disease progression or death than those who received standard surgery (adjusted HR 1.60, 95% CI 1.11 to 2.31; I2 = 0%; n = 527; very low-certainty evidence), but the evidence is very uncertain. 
In absolute and unadjusted terms, the DFS was 19.3 months in the standard surgery group, 15.8 in the PDS group and 15.9 months in the IDS group.\nAll studies were at critical risk of bias and we only identified very low-certainty evidence for all outcomes reported in the review. Perioperative mortality, adverse events and quality of life (QoL) outcomes were either not reported or inadequately reported in the included studies. Two studies reported perioperative mortality (death within 30 days of surgery), but they did not use any statistical adjustment. In total, there were only four deaths within 30 days of surgery in both studies. All were observed in the standard surgery group, but we did not report a risk ratio (RR) to avoid potentially misleading results with so few deaths and very low-certainty evidence. Similarly, one study reported postoperative morbidity, but the authors did not use any statistical adjustment. Postoperative morbidity occurred more commonly in women who received ultra-radical surgery compared to standard surgery, but the certainty of the evidence was very low.\nAuthors' conclusions\nWe found only very low-certainty evidence comparing ultra-radical surgery and standard surgery in women with advanced ovarian cancer. The evidence was limited to retrospective, NRSs and so is at critical risk of bias. The results may suggest that ultra-radical surgery could result in improved OS, but results are based on very few women who were chosen to undergo each intervention, rather than a randomised study and intention-to-treat analysis, and so the evidence is very uncertain. Results for progression/DFS were inconsistent and evidence was sparse. QoL and morbidity was incompletely or not reported in the three included studies.\nA separate prognostic review assessing residual disease as a prognostic factor in this area has been addressed elsewhere, which demonstrates the prognostic effect of macroscopic debulking to no macroscopic residual disease.\nIn order to aid existing guidelines, the role of ultra-radical surgery in the management of advanced-stage ovarian cancer could be addressed through the conduct of a sufficiently powered, RCT comparing ultra-radical and standard surgery, or well-designed NRSs, if this is not possible.\n\nGiven the provided abstract, please respond to the following query: \"Review methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the scientific literature for studies comparing ultra-radical and standard surgery for women with advanced ovarian cancer. We looked for randomised controlled trials, which are regarded as the best type of study, and for non-randomised studies that were analysed using methods that allow for differences between the groups of women receiving different types of surgery."
}
]
|
query-laysum | 5459 | [
{
"role": "user",
"content": "Abstract: Background\nFluid excess may place people undergoing surgery at risk for various complications. Hypertonic salt solution (HS) maintains intravascular volume with less intravenous fluid than isotonic salt (IS) solutions, but may increase serum sodium. This review was published in 2010 and updated in 2016.\nObjectives\nTo determine the benefits and harms of HS versus IS solutions administered for fluid resuscitation to people undergoing surgery.\nSearch methods\nIn this updated review we have searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 4, 2016); MEDLINE (January 1966 to April 2016); EMBASE (January 1980 to April 2016); LILACS (January 1982 to April 2016) and CINAHL (January 1982 to April 2016) without language restrictions. We conducted the original search on April 30th, 2007, and reran it on April 8th, 2016.\nSelection criteria\nWe have included randomized clinical trials (RCTs) comparing HS to IS in people undergoing surgery, irrespective of blinding, language, and publication status.\nData collection and analysis\nTwo independent review authors read studies that met our selection criteria. We collected study information and data using a data collection sheet with predefined parameters. We have assessed the impact of HS administration on mortality, organ failure, fluid balance, serum sodium, serum osmolarity, diuresis and physiologic measures of cardiovascular function. We have pooled the data using the mean difference (MD) for continuous outcomes. We evaluated heterogeneity between studies by I² percentage. We consider studies with an I² of 0% to 30% to have no or little heterogeneity, 30% to 60% as having moderate heterogeneity, and more than 60% as having high heterogeneity. In studies with low heterogeneity we have used a fixed-effect model, and a random-effects model for studies with moderate to high heterogeneity.\nMain results\nWe have included 18 studies with 1087 participants of whom 545 received HS compared to 542 who received IS. All participants were over 18 years of age and all trials excluded high-risk patients (ASA IV). All trials assessed haematological parameters peri-operatively and up to three days post-operatively.\nThere were three (< 1%) deaths reported in the IS group and four (< 1%) in the HS group, as assessed at 90 days in one study. There were no reports of serious adverse events. Most participants were in a positive fluid balance postoperatively (4.4 L IS and 2.5 L HS), with the excess significantly less in HS participants (MD -1.92 L, 95% confidence interval (CI) -2.61 to -1.22 L; P < 0.00001). IS participants received a mean volume of 2.4 L and HS participants received 1.49 L, significantly less fluid than IS-treated participants (MD -0.91 L, 95% CI -1.24 to -0.59 L; P < 0.00001). The maximum average serum sodium ranged between 138.5 and 159 in HS groups compared to between 136 and 143 meq/L in the IS groups. The maximum serum sodium was significantly higher in HS participants (MD 7.73, 95% CI 5.84 to 9.62; P < 0.00001), although the level remained within normal limits (136 to 146 meq/L).\nA high degree of heterogeneity appeared to be related to considerable differences in the dose of HS between studies. The quality of the evidence for the outcomes reported ranged from high to very low. 
The risk of bias for many of the studies could not be determined for performance and detection bias, criteria that we assess as likely to impact the study outcomes.\nAuthors' conclusions\nHS reduces the volume of intravenous fluid required to maintain people undergoing surgery but transiently increases serum sodium. It is not known if HS affects survival and morbidity, but this should be examined in randomized controlled trials that are designed and powered to test these outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For deaths and adverse events the trials lacked sufficient size and duration to adequately assess differences. We assessed the quality of evidence for deaths to be very low, and future studies are likely to change the result reported here.\nThe reporting of the highest amount of sodium is of moderate quality. The measuring of blood sodium during an operation is a common measurement that is unlikely to be misrepresented."
}
]
|
query-laysum | 5460 | [
{
"role": "user",
"content": "Abstract: Background\nRemote cognitive assessments are increasingly needed to assist in the detection of cognitive disorders, but the diagnostic accuracy of telephone- and video-based cognitive screening remains unclear.\nObjectives\nTo assess the test accuracy of any multidomain cognitive test delivered remotely for the diagnosis of any form of dementia.\nTo assess for potential differences in cognitive test scoring when using a remote platform, and where a remote screener was compared to the equivalent face-to-face test.\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group Specialized Register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, Web of Science, LILACS, and ClinicalTrials.gov (www.clinicaltrials.gov/) databases on 2 June 2021. We performed forward and backward searching of included citations.\nSelection criteria\nWe included cross-sectional studies, where a remote, multidomain assessment was administered alongside a clinical diagnosis of dementia or equivalent face-to-face test.\nData collection and analysis\nTwo review authors independently assessed risk of bias and extracted data; a third review author moderated disagreements. Our primary analysis was the accuracy of remote assessments against a clinical diagnosis of dementia. Where data were available, we reported test accuracy as sensitivity and specificity. We did not perform quantitative meta-analysis as there were too few studies at individual test level.\nFor those studies comparing remote versus in-person use of an equivalent screening test, if data allowed, we described correlations, reliability, differences in scores and the proportion classified as having cognitive impairment for each test.\nMain results\nThe review contains 31 studies (19 differing tests, 3075 participants), of which seven studies (six telephone, one video call, 756 participants) were relevant to our primary objective of describing test accuracy against a clinical diagnosis of dementia. All studies were at unclear or high risk of bias in at least one domain, but were low risk in applicability to the review question. Overall, sensitivity of remote tools varied with values between 26% and 100%, and specificity between 65% and 100%, with no clearly superior test.\nAcross the 24 papers comparing equivalent remote and in-person tests (14 telephone, 10 video call), agreement between tests was good, but rarely perfect (correlation coefficient range: 0.48 to 0.98).\nAuthors' conclusions\nDespite the common and increasing use of remote cognitive assessment, supporting evidence on test accuracy is limited. Available data do not allow us to suggest a preferred test. Remote testing is complex, and this is reflected in the heterogeneity seen in tests used, their application, and their analysis. More research is needed to describe accuracy of contemporary approaches to remote cognitive assessment. While data comparing remote and in-person use of a test were reassuring, thresholds and scoring rules derived from in-person testing may not be applicable when the equivalent test is adapted for remote use.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked at various memory and thinking tests. Many tests have been developed over time and they differ in their content and application, but most are based on a modification of a traditional in-person test."
}
]
|
query-laysum | 5461 | [
{
"role": "user",
"content": "Abstract: Background\nCurrent guidelines recommend that patients with acute exacerbations of chronic obstructive pulmonary disease (COPD) should be treated with systemic corticosteroid for seven to 14 days. Intermittent systemic corticosteroid use is cumulatively associated with adverse effects such as osteoporosis, hyperglycaemia and muscle weakness. Shorter treatment could reduce adverse effects.\nObjectives\nTo compare the efficacy of short-duration (seven or fewer days) and conventional longer-duration (longer than seven days) systemic corticosteroid treatment of adults with acute exacerbations of COPD.\nSearch methods\nSearches were carried out using the Cochrane Airways Group Specialised Register of Trials, MEDLINE and CENTRAL (Cochrane Central Register of Controlled Trials) and ongoing trials registers up to March 2017.\nSelection criteria\nRandomised controlled trials comparing different durations of systemic corticosteroid defined as short (i.e. seven or fewer days) or longer (i.e. longer than seven days). Other interventions—bronchodilators and antibiotics—were standardised. Studies with participants requiring assisted ventilation were excluded.\nData collection and analysis\nWe used standard methodological procedures as expected by The Cochrane Collaboration.\nMain results\nEight studies with 582 participants met the inclusion criteria, of which five studies conducted in hospitals with 519 participants (range 28 to 296) contributed to the meta-analysis. Mean ages of study participants were 65 to 73 years, the proportion of male participants varied (58% to 84%) and COPD was classified as severe or very severe. Corticosteroid treatment was given at equivalent daily doses for three to seven days for short-duration treatment and for 10 to 15 days for longer-duration treatment. Five studies administered oral prednisolone (30 mg in four, tapered in one), and two studies provided intravenous corticosteroid treatment. Studies contributing to the meta-analysis were at low risk of selection, performance, detection and attrition bias. In four studies we did not find a difference in risk of treatment failure between short-duration and longer-duration systemic corticosteroid treatment (n = 457; odds ratio (OR) 0.72, 95% confidence interval (CI) 0.36 to 1.46)), which was equivalent to 22 fewer per 1000 for short-duration treatment (95% CI 51 fewer to 34 more). No difference in risk of relapse (a new event) was observed between short-duration and longer-duration systemic corticosteroid treatment (n = 457; OR 1.04, 95% CI 0.70 to 1.56), which was equivalent to nine fewer per 1000 for short-duration treatment (95% CI 68 fewer to 100 more). Time to the next COPD exacerbation did not differ in one large study that was powered to detect non-inferiority and compared five days versus 14 days of systemic corticosteroid treatment (n = 311; hazard ratio 0.95, 95% CI 0.66 to 1.37). In five studies no difference in the likelihood of an adverse event was found between short-duration and longer-duration systemic corticosteroid treatment (n = 503; OR 0.89, 95% CI 0.46 to 1.69, or nine fewer per 1000 (95% CI 44 fewer to 51 more)). 
Length of hospital stay (n = 421; mean difference (MD) -0.61 days, 95% CI -1.51 to 0.28) and lung function at the end of treatment (n = 185; MD FEV1 -0.04 L; 95% CI -0.19 to 0.10) did not differ between short-duration and longer-duration treatment.\nAuthors' conclusions\nInformation from a new large study has increased our confidence that five days of oral corticosteroids is likely to be sufficient for treatment of adults with acute exacerbations of COPD, and this review suggests that the likelihood is low that shorter courses of systemic corticosteroids (of around five days) lead to worse outcomes than are seen with longer (10 to 14 days) courses. We graded most available evidence as moderate in quality because of imprecision; further research may have an important impact on our confidence in the estimates of effect or may change the estimates. The studies in this review did not include people with mild or moderate COPD; further studies comparing short-duration systemic corticosteroid versus conventional longer-duration systemic corticosteroid for treatment of adults with acute exacerbations of COPD are required.\n\nGiven the provided abstract, please respond to the following query: \"How did we answer the question?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked for all studies that compared oral or injected corticosteroid treatment given for seven or fewer days versus treatment given for longer than seven days in people with acute exacerbations of COPD."
}
]
|
query-laysum | 5462 | [
{
"role": "user",
"content": "Abstract: Background\nCoronary heart disease (CHD) is the single most common cause of death globally. However, with falling CHD mortality rates, an increasing number of people live with CHD and may need support to manage their symptoms and improve prognosis. Cardiac rehabilitation is a complex multifaceted intervention which aims to improve the health outcomes of people with CHD. Cardiac rehabilitation consists of three core modalities: education, exercise training and psychological support. This is an update of a Cochrane systematic review previously published in 2011, which aims to investigate the specific impact of the educational component of cardiac rehabilitation.\nObjectives\n1. To assess the effects of patient education delivered as part of cardiac rehabilitation, compared with usual care on mortality, morbidity, health-related quality of life (HRQoL) and healthcare costs in patients with CHD.\n2. To explore the potential study level predictors of the effects of patient education in patients with CHD (e.g. individual versus group intervention, timing with respect to index cardiac event).\nSearch methods\nWe updated searches from the previous Cochrane review, by searching the Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library, Issue 6, 2016), MEDLINE (Ovid), Embase (Ovid), PsycINFO (Ovid) and CINAHL (EBSCO) in June 2016. Three trials registries, previous systematic reviews and reference lists of included studies were also searched. No language restrictions were applied.\nSelection criteria\n1. Randomised controlled trials (RCTs) where the primary interventional intent was education delivered as part of cardiac rehabilitation.\n2. Studies with a minimum of six-months follow-up and published in 1990 or later.\n3. Adults with a diagnosis of CHD.\nData collection and analysis\nTwo review authors independently screened all identified references for inclusion based on the above inclusion criteria. One author extracted study characteristics from the included trials and assessed their risk of bias; a second review author checked data. Two independent reviewers extracted outcome data onto a standardised collection form. For dichotomous variables, risk ratios and 95% confidence intervals (CI) were derived for each outcome. Heterogeneity amongst included studies was explored qualitatively and quantitatively. Where appropriate and possible, results from included studies were combined for each outcome to give an overall estimate of treatment effect. Given the degree of clinical heterogeneity seen in participant selection, interventions and comparators across studies, we decided it was appropriate to pool studies using random-effects modelling. We planned to undertake subgroup analysis and stratified meta-analysis, sensitivity analysis and meta-regression to examine potential treatment effect modifiers. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to evaluate the quality of the evidence and the GRADE profiler (GRADEpro GDT) to create summary of findings tables.\nMain results\nThis updated review included a total of 22 trials which randomised 76,864 people with CHD to an education intervention or a 'no education' comparator. Nine new trials (8215 people) were included for this update. We judged most included studies as low risk of bias across most domains. 
Educational 'dose' ranged from one 40 minute face-to-face session plus a 15 minute follow-up call, to a four-week residential stay with 11 months of follow-up sessions. Control groups received usual medical care, typically consisting of referral to an outpatient cardiologist, primary care physician, or both.\nWe found no difference in effect of education-based interventions on total mortality (13 studies, 10,075 participants; 189/5187 (3.6%) versus 222/4888 (4.6%); random effects risk ratio (RR) 0.80, 95% CI 0.60 to 1.05; moderate quality evidence). Individual causes of mortality were reported rarely, and we were unable to report separate results for cardiovascular mortality or non-cardiovascular mortality. There was no evidence of a difference in effect of education-based interventions on fatal and/or non fatal myocardial infarction (MI) (2 studies, 209 participants; 7/107 (6.5%) versus 12/102 (11.8%); random effects RR 0.63, 95% CI 0.26 to 1.48; very low quality of evidence). However, there was some evidence of a reduction with education in fatal and/or non-fatal cardiovascular events (2 studies, 310 studies; 21/152 (13.8%) versus 61/158 (38.6%); random effects RR 0.36, 95% CI 0.23 to 0.56; low quality evidence). There was no evidence of a difference in effect of education on the rate of total revascularisations (3 studies, 456 participants; 5/228 (2.2%) versus 8/228 (3.5%); random effects RR 0.58, 95% CI 0.19 to 1.71; very low quality evidence) or hospitalisations (5 studies, 14,849 participants; 656/10048 (6.5%) versus 381/4801 (7.9%); random effects RR 0.93, 95% CI 0.71 to 1.21; very low quality evidence). There was no evidence of a difference between groups for all cause withdrawal (17 studies, 10,972 participants; 525/5632 (9.3%) versus 493/5340 (9.2%); random effects RR 1.04, 95% CI 0.88 to 1.22; low quality evidence). Although some health-related quality of life (HRQoL) domain scores were higher with education, there was no consistent evidence of superiority across all domains.\nAuthors' conclusions\nWe found no reduction in total mortality, in people who received education delivered as part of cardiac rehabilitation, compared to people in control groups (moderate quality evidence). There were no improvements in fatal or non fatal MI, total revascularisations or hospitalisations, with education. There was some evidence of a reduction in fatal and/or non-fatal cardiovascular events with education, but this was based on only two studies. There was also some evidence to suggest that education-based interventions may improve HRQoL. Our findings are supportive of current national and international clinical guidelines that cardiac rehabilitation for people with CHD should be comprehensive and include educational interventions together with exercise and psychological therapy. Further definitive research into education interventions for people with CHD is needed.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched up to June 2016."
}
]
|
query-laysum | 5463 | [
{
"role": "user",
"content": "Abstract: Background\nThe costs of developing new treatments and bringing them to the market are substantial. The pharmaceutical industry uses drug promotion to gain a competitive market share, and drive sale volumes and industry profitability. This involves disseminating information about new treatments to relevant targets. However, conflicts of interest can arise when profits are prioritised over patient care and its benefits. Drug promotion regulations are complex interventions that aim to prevent potential harm associated with these activities.\nObjectives\nTo assess the effects of policies that regulate drug promotion on drug utilisation, coverage or access, healthcare utilisation, patient outcomes, adverse events and costs.\nSearch methods\nWe searched Epistemonikos for related reviews and their included studies. To find primary studies we searched MEDLINE, CENTRAL, Embase, EconLit, Global Index Medicus, Virtual Health Library, INRUD Bibliography, two trial registries and two sources of grey literature. All databases and sources were searched in January 2023.\nSelection criteria\nWe planned to include studies that assessed policies regulating drug promotion to consumers, healthcare professionals or regulators and third-party payers, or any combination of these groups.In this review we defined policies as laws, rules, guidelines, codes of practice, and financial or administrative orders made by governments, non-government organisations or private insurers. One of the following outcomes had to be reported: drug utilisation, coverage or access, healthcare utilisation, patient health outcomes, any adverse effects (unintended consequences), and costs. The study had to be a randomised or non-randomised trial, an interrupted time series analysis (ITS), a repeated measures (RM) study or a controlled before-after (CBA) study.\nData collection and analysis\nAt least two review authors independently assessed eligibility for inclusion of studies. When consensus was not reached, any disagreements were discussed with a third review author. We planned to use the criteria suggested by Cochrane Effective Practice and Organisation of Care (EPOC) to assess the risk of bias of included studies. For randomised trials, non-randomised trials, and CBA studies, we planned to estimate relative effects, with 95% confidence intervals (CI). For dichotomous outcomes, we planned to report the risk ratio (RR) when possible and adjusted for baseline differences in the outcome measures. For ITS and RM, we planned to compute changes along two dimensions: change in level and change in slope. We planned to undertake a structured synthesis following EPOC guidance.\nMain results\nThe search yielded 4593 citations, and 13 studies were selected for full-text review. No study met the inclusion criteria.\nAuthors' conclusions\nWe sought to assess the effects of policies that regulate drug promotion on drug use, coverage or access, use of health services, patient outcomes, adverse events, and costs, however we did not find studies that met the review's inclusion criteria. As pharmaceutical policies that regulate drug promotion have untested effects, their impact, as well as their positive and negative influences, is currently only a matter of opinion, debate, informal or descriptive reporting. 
There is an urgent need to assess the effects of pharmaceutical policies that regulate drug promotion using well-conducted studies with high methodological rigour.\n\nGiven the provided abstract, please respond to the following query: \"What are drug marketing policies?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What are drug marketing policies?It is very expensive to develop new drugs and bring them to the market. The pharmaceutical industry therefore uses a lot of resources to promote their drugs and increase their use. They often encourage patients and consumers to use these drugs, and they encourage health professionals to prescribe them. They also promote their drugs to health insurance organisations and governments. They do this, for example, through advertisements, visits from sales representatives, free drug samples, and education awareness programmes.These strategies can spread useful information about health conditions and the drugs that are available for them, but they can also cause harm. For instance, patients, doctors and organisations can be persuaded to use or prescribe drugs that are unnecessary, expensive and sometimes harmful.Because of these problems, governments and non-governmental organisations often try to regulate how drugs are promoted. For example, in some countries, the pharmaceutical industry is not allowed to promote prescription drugs to patients and consumers. In other countries, they are only allowed to promote drugs to healthcare professionals in certain ways, such as through medical journals. In some countries, governments and organisations workalone. In other countries they collaborate with the pharmaceutical industry."
}
]
|
query-laysum | 5464 | [
{
"role": "user",
"content": "Abstract: Background\nIn school dental screening, a dental health professional visually inspects children’s oral cavities in a school setting and provides information for parents on their child's current oral health status and treatment needs. Screening at school aims to identify potential problems before symptomatic disease presentation, hence prompting preventive and therapeutic oral health care for the children. This review evaluates the effectiveness of school dental screening for improving oral health status. It is the second update of a review originally published in December 2017 and first updated in August 2019.\nObjectives\nTo assess the effectiveness of school dental screening programmes on overall oral health status and use of dental services.\nSearch methods\nAn information specialist searched four bibliographic databases up to 15 October 2021 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs; cluster- or individually randomised) that evaluated school dental screening compared with no intervention, or that compared two different types of screening.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nThe previous version of this review included seven RCTs, and our updated search identified one additional trial. Therefore, this update included eight trials (six cluster-RCTs) with 21,290 children aged 4 to 15 years. Four trials were conducted in the UK, two in India, one in the USA and one in Saudi Arabia. We rated two trials at low risk of bias, three at high risk of bias and three at unclear risk of bias.\nNo trials had long-term follow-up to ascertain the lasting effects of school dental screening. The trials assessed outcomes at 3 to 11 months of follow-up.\nNo trials reported the proportion of children with treated or untreated oral diseases other than caries. Neither did they report on cost-effectiveness or adverse events.\nFour trials evaluated traditional screening versus no screening. We performed a meta-analysis for the outcome 'dental attendance' and found an inconclusive result with high heterogeneity. The heterogeneity was partly due to study design (three cluster-RCTs and one individually randomised trial). Due to this inconsistency, and unclear risk of bias, we downgraded the evidence to very low certainty, and we are unable to draw conclusions about this comparison.\nTwo cluster-RCTs (both four-arm trials) evaluated criteria-based screening versus no screening, suggesting a possible small benefit (pooled risk ratio (RR) 1.07, 95% confidence interval (CI) 0.99 to 1.16; low-certainty evidence). There was no evidence of a difference when comparing criteria-based screening to traditional screening (RR 1.01, 95% CI 0.94 to 1.08; very low-certainty evidence).\nOne trial compared a specific (personalised) referral letter to a non-specific letter. Results favoured the specific referral letter for increasing attendance at general dentist services (RR 1.39, 95% CI 1.09 to 1.77; very low-certainty evidence) and attendance at specialist orthodontist services (RR 1.90, 95% CI 1.18 to 3.06; very low-certainty evidence).\nOne trial compared screening supplemented with motivation to screening alone. 
Dental attendance was more likely after screening supplemented with motivation (RR 3.08, 95% CI 2.57 to 3.71; very low-certainty evidence).\nOne trial compared referral to a specific dental treatment facility with advice to attend a dentist. There was no evidence of a difference in dental attendance between these two referrals (RR 0.91, 95% CI 0.34 to 2.47; very low-certainty evidence).\nOnly one trial reported the proportion of children with treated dental caries. This trial evaluated a post-screening referral letter based on the common-sense model of self-regulation (a theoretical framework that explains how people understand and respond to threats to their health), with or without a dental information guide, compared to a standard referral letter. The findings were inconclusive. Due to high risk of bias, indirectness and imprecision, we assessed the evidence as very low certainty.\nAuthors' conclusions\nThe evidence is insufficient to draw conclusions about whether there is a role for school dental screening in improving dental attendance.\nWe are uncertain whether traditional screening is better than no screening (very low-certainty evidence). Criteria-based screening may improve dental attendance when compared to no screening (low-certainty evidence). However, when compared to traditional screening, there is no evidence of a difference in dental attendance (very low-certainty evidence).\nFor children requiring treatment, personalised or specific referral letters may improve dental attendance when compared to non-specific referral letters (very low-certainty evidence). Screening supplemented with motivation (oral health education and offer of free treatment) may improve dental attendance in comparison to screening alone (very low-certainty evidence). We are uncertain whether a referral letter based on the 'common-sense model of self-regulation' is better than a standard referral letter (very low-certainty evidence) or whether specific referral to a dental treatment facility is better than a generic advice letter to visit the dentist (very low-certainty evidence).\nThe trials included in this review evaluated effects of school dental screening in the short term. None of them evaluated its effectiveness for improving oral health or addressed possible adverse effects or costs.\n\nGiven the provided abstract, please respond to the following query: \"What did this review study?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Oral diseases, especially tooth decay (called 'dental caries' by dentists), affect children worldwide. If untreated, oral health can deteriorate and negatively impact children's general well-being. It also has a financial cost for families and society as a whole.\nSchool dental screening involves a dental professional examining children's mouths and teeth at school and letting parents know about their child's oral condition and treatment needs. It aims to identify oral health concerns at an early stage and prompt parents to seek treatment where required. Whether this actually improves children's oral health is the question we wanted to answer in this review."
}
]
|
query-laysum | 5465 | [
{
"role": "user",
"content": "Abstract: Background\nThe symptoms of eczema can lead to sleeplessness and fatigue and may have a substantial impact on quality of life. Use of oral H1 antihistamines (H1 AH) as adjuvant therapy alongside topical agents is based on the idea that combining the anti-inflammatory effects of topical treatments with the blocking action of histamine on its receptors in the skin by H1 AH (to reduce the principal symptom of itch) might magnify or intensify the effect of treatment. Also, it would be unethical to compare oral H1 AH alone versus no treatment, as topical treatment is the standard management for this condition.\nObjectives\nTo assess the effects of oral H1 antihistamines as 'add-on' therapy to topical treatment in adults and children with eczema.\nSearch methods\nWe searched the following databases up to May 2018: the Cochrane Skin Group Specialised Register, CENTRAL, MEDLINE, Embase, and the GREAT database (Global Resource of EczemA Trials; from inception). We searched five trials registers and checked the reference lists of included and excluded studies for further references to relevant randomised controlled trials (RCTs). We also searched the abstracts of four conference proceedings held between 2000 and 2018.\nSelection criteria\nWe sought RCTs assessing oral H1 AH as 'add-on' therapy to topical treatment for people with eczema compared with topical treatment plus placebo or no additional treatment as add-on therapy.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Primary outcome measures were 'Mean change in patient-assessed symptoms of eczema' and 'Proportion of participants reporting adverse effects and serious adverse events'. Secondary outcomes were 'Mean change in physician-assessed clinical signs', 'Mean change in quality of life', and 'Number of eczema flares'.\nMain results\nWe included 25 studies (3285 randomised participants). Seventeen studies included 1344 adults, and eight studies included 1941 children. Most studies failed to report eczema severity at baseline, but they were conducted in secondary care settings, so it is likely that they recruited patients with more severe cases of eczema. Trial duration was between three days and 18 months. Researchers studied 13 different H1 AH treatments. We could not undertake pooling because of the high level of diversity across studies in terms of duration and dose of intervention, concomitant topical therapy, and outcome assessment. Risk of bias was generally unclear, but five studies had high risk of bias in one domain (attrition, selection, or reporting bias). Only one study measured quality of life, but these results were insufficient for statistical analysis.\nAlthough this review assessed 17 comparisons, we summarise here the results of three key comparisons in this review.\nCetirizine versus placebo\nOne study compared cetirizine 0.5 mg/kg/d against placebo over 18 months in 795 children. Study authors did not report patient-assessed symptoms of eczema separately for pruritus. Cetirizine is probably associated with fewer adverse events (mainly mild) (risk ratio (RR) 0.68, 95% confidence interval (CI) 0.46 to 1.01) and the need for slightly less additional H1 AH use as an indication of eczema flare rate (P = 0.035; no further numerical data given). Physician-assessed clinical signs (SCORing Atopic Dermatitis index (SCORAD)) were reduced in both groups, but the difference between groups was reported as non-significant (no P value given). 
Evidence for this comparison was of moderate quality.\nOne study assessed cetirizine 10 mg/d against placebo over four weeks in 84 adults. Results show no evidence of differences between groups in patient-assessed symptoms of eczema (pruritus measured as part of SCORAD; no numerical data given), numbers of adverse events (RR 1.11, 95% CI 0.50 to 2.45; mainly sedation, other skin-related problems, respiratory symptoms, or headache), or physician-assessed changes in clinical signs, amount of local rescue therapy required, or number of applications as an indicator of eczema flares (no numerical data reported). Evidence for this comparison was of low quality.\nFexofenadine versus placebo\nCompared with placebo, fexofenadine 120 mg/d taken in adults over one week (one study) probably leads to a small reduction in patient-assessed symptoms of pruritus on a scale of 0 to 8 (mean difference (MD) -0.25, 95% CI -0.43 to -0.07; n = 400) and a greater reduction in the ratio of physician-assessed pruritus area to whole body surface area (P = 0.007; no further numerical data given); however, these reductions may not be clinically meaningful. Results suggest probably little or no difference in adverse events (mostly somnolence and headache) (RR 1.05, 95% CI 0.74 to 1.50; n = 411) nor in the amount of 0.1% hydrocortisone butyrate used (co-intervention in both groups) as an indicator of eczema flare, but no numerical data were given. Evidence for this comparison was of moderate quality.\nLoratadine versus placebo\nA study of 28 adults compared loratadine 10 mg/d taken over 4 weeks versus placebo. Researchers found no evidence of differences between groups in patient-assessed pruritus, measured by a 100-point visual analogue scale (MD -2.30, 95% CI -20.27 to 15.67); reduction in physician-assessed clinical signs (SCORAD) (MD -4.10, 95% CI -13.22 to 5.02); or adverse events. Study authors reported only one side effect (folliculitis with placebo) (RR 0.25, 95% CI 0.01 to 5.76). Evidence for this comparison was of low quality. Number of eczema flares was not measured for this comparison.\nAuthors' conclusions\nBased on the main comparisons, we did not find consistent evidence that H1 AH treatments are effective as 'add-on' therapy for eczema when compared to placebo; evidence for this comparison was of low and moderate quality. However, fexofenadine probably leads to a small improvement in patient-assessed pruritus, with probably no significant difference in the amount of treatment used to prevent eczema flares. Cetirizine was no better than placebo in terms of physician-assessed clinical signs nor patient-assessed symptoms, and we found no evidence that loratadine was more beneficial than placebo, although all interventions seem safe.\nThe quality of evidence was limited because of poor study design and imprecise results. Future researchers should clearly define the condition (course and severity) and clearly report their methods, especially participant selection and randomisation; baseline characteristics; and outcomes (based on the Harmonising Outcome Measures in Eczema initiative).\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for relevant studies up to May 2018. We included 25 randomised controlled trials with 3285 participants of all ages with diagnosed eczema. Eight studies included children or adolescents, and 17 included adults. The gender of participants and the severity of symptoms often were not reported. All studies were conducted in secondary care settings, including hospital clinics, research clinics, dermatology centres, and surgery centres, meaning that participants were likely to have more severe eczema than if recruitment occurred from first point of contact settings (i.e. primary care). All but one study compared H1 antihistamine versus placebo. Researchers studied 13 different H1 antihistamines, most of which were less sedating H1 antihistamines (known as 'second-generation antihistamines'). Studies lasted between three days and 18 months. Seven trials received funding from pharmaceutical companies; they are the largest trials included in this review."
}
]
|
query-laysum | 5466 | [
{
"role": "user",
"content": "Abstract: Background\nAcute otitis media (AOM) is the most common bacterial infection among young children in the United States. There are limitations and concerns over its treatment with antibiotics and surgery and so effective preventative measures are attractive. A potential preventative measure is xylitol, a natural sugar substitute that reduces the risk of dental decay. Xylitol can reduce the adherence of Streptococcus pneumoniae (S pneumoniae) and Haemophilus influenzae (H influenzae) to nasopharyngeal cells in vitro. This is an update of a review first published in 2011.\nObjectives\nTo assess the efficacy and safety of xylitol to prevent AOM in children aged up to 12 years.\nSearch methods\nWe searched CENTRAL (to Issue 12, 2015), MEDLINE (1950 to January 2016), Embase (1974 to January 2016), CINAHL (1981 to January 2016), LILACS (1982 to January 2016), Web of Science (2011 to January 2016) and International Pharmaceutical Abstracts (2000 to January 2016).\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs of children aged 12 years or younger where xylitol supplementation was compared with placebo or no treatment to prevent AOM.\nData collection and analysis\nTwo review authors independently selected trials from search results, assessed and rated study quality and extracted relevant data for inclusion in the review. We contacted trial authors to request missing data. We noted data on any adverse events of xylitol. We extracted data on relevant outcomes and estimated the effect size by calculating risk ratio (RR), risk difference (RD) and associated 95% confidence intervals (CI).\nMain results\nWe identified five clinical trials that involved 3405 children for inclusion. For this 2016 update, we identified one new trial for inclusion. This trial was systematically reviewed but due to several sources of heterogeneity, was not included in the meta-analysis. The remaining four trials were of adequate methodological quality. In three RCTs that involved a total of 1826 healthy Finnish children attending daycare, there is moderate quality evidence that xylitol (in any form) can reduce the risk of AOM from 30% to around 22% compared with the control group (RR 0.75, 95% CI 0.65 to 0.88). Among the reasons for dropouts, there were no significant differences in abdominal discomfort and rash between the xylitol and the control groups. Xylitol was not effective in reducing AOM among healthy children during a respiratory infection (RR 1.13, 95% CI 0.83 to 1.53; moderate quality evidence) or among otitis-prone healthy children (RR 0.90, 95% CI 0.67 to 1.21; low-quality evidence).\nAuthors' conclusions\nThere is moderate quality evidence showing that the prophylactic administration of xylitol among healthy children attending daycare centres can reduce the occurrence of AOM. There is inconclusive evidence with regard to the efficacy of xylitol in preventing AOM among children with respiratory infection, or among otitis-prone children. The meta-analysis was limited because data came from a small number of studies, and most were from the same research group.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Xylitol, administered in chewing gum, lozenges or syrup, can reduce the occurrence of AOM among healthy children with no acute upper respiratory infection from 30% to 22%. There is no difference in side effects (namely, abdominal discomfort and rash). Based on these results we would expect that out of 1000 children up to 12 years of age, 299 would experience an AOM compared with between 194 and 263 children who would experience an AOM if they are provided with xylitol chewing gum. The preventive effect among healthy children with respiratory infection or among otitis-prone children is inconclusive."
}
]
|
query-laysum | 5467 | [
{
"role": "user",
"content": "Abstract: Background\nIdiopathic toe walking (ITW) is an exclusionary diagnosis given to healthy children who persist in walking on their toes after they should typically have achieved a heel-toe gait. The literature discusses conservative and surgical interventions using a variety of treatment modalities. Young children and children without a limitation in ankle dorsiflexion (the upwards movement of the foot towards the shin of the leg) are commonly treated with conservative interventions. Older children who continue toe walking and present with limitations in ankle dorsiflexion are sometimes treated with surgical procedures. This systematic review is needed to evaluate the evidence for any intervention for the treatment of ITW. The conclusions of this review may support decision making by clinicians caring for children with ITW. It may also assist families when deciding on treatment options for their children with ITW. Many of the treatments employed have financial implications for parents or healthcare services. This review also aims to highlight any deficits in the current research base.\nObjectives\nTo assess the effects of conservative and surgical interventions in children with ITW, specifically effects on gait normalisation, ankle range of motion, pain, frequency of recurrence, and any adverse effects.\nSearch methods\nOn 29 April 2019, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL Plus, and PEDro. We searched the following registers of clinical trials for ongoing and recently completed trials: the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP, apps.who.int/trialsearch), and ClinicalTrials.gov (clinicaltrials.gov). We searched conference proceedings and other grey literature in the BIOSIS databases and System for Information on Grey Literature in Europe (OpenGrey, opengrey.eu). We searched guidelines via the Turning Research Into Practice database (TRIP, tripdatabase.com) and National Guideline Clearinghouse (guideline.gov). We did not apply language restrictions.\nSelection criteria\nWe considered randomised or quasi-randomised trials for inclusion in the review if they involved participants diagnosed with ITW gait in the absence of a medical condition known to cause toe walking, or associated with toe walking. As there is no universally accepted age group for ITW, this review includes ITW at any age, who have been toe walking for more than six months, who can or cannot walk with a heel-toe gait, and who may or may not have limited dorsiflexion of the ankle joint.\nData collection and analysis\nWe used standard Cochrane methodological procedures. The primary outcome was improvement in toe walking (defined as greater than 50% of time spent heel-toe walking). Secondary outcomes were active and passive range of motion of the ankle joint, pain, recurrence of ITW after treatment, and adverse events. We assessed the certainty of the evidence using the GRADE framework.\nMain results\nFour studies, comprising 104 participants, met the inclusion criteria. One study did not report data within the appropriate follow-up timeframe and data from two studies were insufficient for analysis. The single study from which we extracted data had 47 participants and was a randomised, controlled, parallel-group trial conducted in Sweden. 
It tested the hypothesis that combined treatment with serial casting and botulinum toxin type A (BTX) was more effective than serial casting alone in reducing ITW gait.\nThis study found that more participants treated with BTX improved (defined as toe walking less than 50% of the time, as reported by parents) (risk ratio (RR) 1.21, 95% confidence interval (CI) 0.57 to 2.55; 1 trial, 46 participants; very low-certainty evidence). However, there was little or no difference between groups in passive ankle joint dorsiflexion range of movement on the right with the knee extended (mean difference (MD) -1.48º, 95% CI -4.13 to 1.16; 1 trial, 47 participants), on the right with the knee flexed (MD -0.04º, 95% CI -1.80 to 1.73; 1 trial, 46 participants), on the left with the knee flexed (MD 1.07, 95% CI -1.22 to 3.37), or on the left with the knee extended (MD 0.05, 95% CI -0.91 to 1.91). Nor was there a clear difference between the groups in recurrence of toe-walking gait (assessed via severity of toe walking (graded 1 (mild), 2 (moderate), or 3 (severe)) on gait analysis, analysed as continuous data: MD 0.34 points, 95% CI -0.09 to 0.78; 46 participants). In principle, MDs greater than zero (i.e.) positive values) would favour BTX and casting and negative values would favour casting alone. We have not reported effects as better or worse because all results were from evidence of very low certainty. We downgraded the certainty of evidence because of study limitations (outcome assessment was not blinded) and imprecision. Outcomes of pain and active range of motion were not reported in the included study.\nIn terms of adverse events, calf pain was reported twice in the casting-only group and three times in the BTX group. There were three minor skin problems in each group and one reported case of pain directly after BTX injection. The report did not state if calf pain and skin irritation were from the same or different participants. The study authors reported that adverse events did not alter treatment adherence.\nAuthors' conclusions\nThe certainty of evidence from one study, which compared serial casting with serial casting with BTX for ITW in children, was too low for conclusions to be drawn. A further three studies reported outcomes relating to BTX, footwear, exercises, and different types of orthoses as interventions, however the outcome data were too limited to assess their effects.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We were interested in the effects of treatments for ITW in children. Cochrane authors collected relevant clinical trials to answer this question and assessed the evidence."
}
]
|
query-laysum | 5468 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with Down syndrome are vulnerable to developing dementia at an earlier age than the general population. Alzheimer's disease and cognitive decline in people with Down syndrome can place a significant burden on both the person with Down syndrome and their family and carers. Various pharmacological interventions, including donepezil, galantamine, memantine and rivastigmine, appear to have some effect in treating cognitive decline in people without Down syndrome, but their effectiveness for those with Down syndrome remains unclear.\nObjectives\nTo assess the effectiveness of anti-dementia pharmacological interventions and nutritional supplements for treating cognitive decline in people with Down syndrome.\nSearch methods\nIn January 2015, we searched CENTRAL, ALOIS (the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group), Ovid MEDLINE, Embase, PsycINFO, seven other databases, and two trials registers. In addition, we checked the references of relevant reviews and studies and contacted study authors, other researchers and relevant drug manufacturers to identify additional studies.\nSelection criteria\nRandomised controlled trials (RCTs) of anti-dementia pharmacological interventions or nutritional supplements for adults (aged 18 years and older) with Down syndrome, in which treatment was administered and compared with either placebo or no treatment.\nData collection and analysis\nTwo review authors independently assessed the risk of bias of included trials and extracted the relevant data. Review authors contacted study authors to obtain missing information where necessary.\nMain results\nOnly nine studies (427 participants) met the inclusion criteria for this review. Four of these (192 participants) assessed the effectiveness of donepezil, two (139 participants) assessed memantine, one (21 participants) assessed simvastatin, one study (35 participants) assessed antioxidants, and one study (40 participants) assessed acetyl-L-carnitine.\nFive studies focused on adults aged 45 to 55 years, while the remaining four studies focused on adults aged 20 to 29 years. Seven studies were conducted in either the USA or UK, one between Norway and the UK, and one in Japan. Follow-up periods in studies ranged from four weeks to two years. The reviewers judged all included studies to be at low or unclear risk of bias.\nAnalyses indicate that for participants who received donepezil, scores in measures of cognitive functioning (standardised mean difference (SMD) 0.52, 95% confidence interval (CI) -0.27 to 1.13) and measures of behaviour (SMD 0.42, 95% CI -0.06 to 0.89) were similar to those who received placebo. However, participants who received donepezil were significantly more likely to experience an adverse event (odds ratio (OR) 0.32, 95% CI 0.16 to 0.62). The quality of this body of evidence was low. None of the included donepezil studies reported data for carer stress, institutional/home care, or death.\nFor participants who received memantine, scores in measures of cognitive functioning (SMD 0.05, 95% CI -0.43 to 0.52), behaviour (SMD -0.17, 95% CI -0.46 to 0.11), and occurrence of adverse events (OR 0.45, 95% CI 0.18 to 1.17) were similar to those who received placebo. The quality of this body of evidence was low. 
None of the included memantine studies reported data for carer stress, institutional/home care, or death.\nDue to insufficient data, it was possible to provide a narrative account only of the outcomes for simvastatin, antioxidants, and acetyl-L-carnitine. Results from one pilot study suggest that participants who received simvastatin may have shown a slight improvement in cognitive measures.\nAuthors' conclusions\nDue to the low quality of the body of evidence in this review, it is difficult to draw conclusions about the effectiveness of any pharmacological intervention for cognitive decline in people with Down syndrome.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Do adults (18 years and older) with Down syndrome benefit from treatment with medicine to prevent cognitive decline, compared with other adults with Down syndrome who receive either fake medicine (placebos) or no medicine?"
}
]
|
query-laysum | 5469 | [
{
"role": "user",
"content": "Abstract: Background\nPharmacological prophylaxis has been proven to reduce the risk of cardiovascular events in individuals with atherosclerotic occlusive arterial disease. However, the role of prophylaxis in individuals with abdominal aortic aneurysm (AAA) remains unclear. Several studies have shown that despite successful repair, those people with AAA have a poorer rate of survival than healthy controls. People with AAA have an increased prevalence of coronary heart disease and risk of cardiovascular events. Despite this association, little is known about the effectiveness of pharmacological prophylaxis in reducing cardiovascular risk in people with AAA. This is an update of a Cochrane review first published in 2014.\nObjectives\nTo determine the long-term effectiveness of antiplatelet, antihypertensive or lipid-lowering medication in reducing mortality and cardiovascular events in people with abdominal aortic aneurysm (AAA).\nSearch methods\nFor this update the Cochrane Vascular Information Specialist (CIS) searched the Cochrane Vascular Specialised Register (14 April 2016). In addition, the CIS searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2016, Issue 3) and trials registries (14 April 2016) and We also searched the reference lists of relevant articles.\nSelection criteria\nRandomised controlled trials in which people with AAA were randomly allocated to one prophylactic treatment versus another, a different regimen of the same treatment, a placebo, or no treatment were eligible for inclusion in this review. Primary outcomes included all-cause mortality and cardiovascular mortality.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, and completed quality assessment and data extraction. We resolved any disagreements by discussion. Only one study met the inclusion criteria of the review, therefore we were unable to perform meta-analysis.\nMain results\nNo new studies met the inclusion criteria for this update. We included one randomised controlled trial in the review. A subgroup of 227 participants with AAA received either metoprolol (N = 111) or placebo (N = 116). There was no clear evidence that metoprolol reduced all-cause mortality (odds ratio (OR) 0.17, 95% confidence interval (CI) 0.02 to 1.41), cardiovascular death (OR 0.20, 95% CI 0.02 to 1.76), AAA-related death (OR 1.05, 95% CI 0.06 to 16.92) or increased nonfatal cardiovascular events (OR 1.44, 95% CI 0.58 to 3.57) 30 days postoperatively. Furthermore, at six months postoperatively, estimated effects were compatible with benefit and harm for all-cause mortality (OR 0.71, 95% CI 0.26 to 1.95), cardiovascular death (OR 0.73, 95% CI 0.23 to 2.39) and nonfatal cardiovascular events (OR 1.41, 95% CI 0.59 to 3.35). Adverse drug effects were reported for the whole study population and were not available for the subgroup of participants with AAA. We considered the study to be at a generally low risk of bias. We downgraded the quality of the evidence for all outcomes to low. We downgraded the quality of evidence for imprecision as only one study with a small number of participants was available, the number of events was small and the result was consistent with benefit and harm.\nAuthors' conclusions\nDue to the limited number of included trials, there is insufficient evidence to draw any conclusions about the effectiveness of cardiovascular prophylaxis in reducing mortality and cardiovascular events in people with AAA. 
Further good-quality randomised controlled trials that examine many types of prophylaxis with long-term follow-up are required before firm conclusions can be made.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics and key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After searching for all relevant studies (until 14 April 2016), we found one study in which a subgroup of 227 people with AAA received either the beta-blocker metoprolol (medication that reduces blood pressure) or a placebo (dummy treatment). This study's results were imprecise for all causes of death and death from cardiovascular disease or nonfatal cardiovascular events at 30 days or six months after AAA repair. Side effects from the drug were reported for the whole study population and were not available for the subgroup of participants with AAA."
}
]
|
query-laysum | 5470 | [
{
"role": "user",
"content": "Abstract: Background\nChoroidal neovascularisation (CNV) is a common complication of pathological myopia. Once developed, most eyes with myopic CNV (mCNV) experience a progression to macular atrophy, which leads to irreversible vision loss. Anti-vascular endothelial growth factor (anti-VEGF) therapy is used to treat diseases characterised by neovascularisation and is increasingly used to treat mCNV.\nObjectives\nTo assess the effects of anti-vascular endothelial growth factor (anti-VEGF) therapy for choroidal neovascularisation (CNV), compared with other treatments, sham treatment or no treatment, in people with pathological myopia.\nSearch methods\nWe searched a number of electronic databases including CENTRAL and Ovid MEDLINE, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform ICTRP). We did not use any date or language restrictions in the electronic searches for trials. Electronic databases were last searched on 16 June 2016.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing anti-VEGF therapy with another treatment (e.g. photodynamic therapy (PDT) with verteporfin, laser photocoagulation, macular surgery, another anti-VEGF), sham treatment or no treatment in participants with mCNV.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two authors independently screened records, extracted data, and assessed risk of bias. We contacted trial authors for additional data. We analysed outcomes as risk ratios (RRs) or mean differences (MDs). We graded the certainty of the evidence using GRADE.\nMain results\nThe present review included six studies which provided data on the comparison between anti-VEGF with PDT, laser, sham treatment and another anti-VEGF treatment, with 594 participants with mCNV. Three trials compared bevacizumab or ranibizumab with PDT, one trial compared bevacizumab with laser, one trial compared aflibercept with sham treatment, and two trials compared bevacizumab with ranibizumab. Pharmaceutical companies conducted two trials. The trials were conducted at multiple clinical centres across three continents (Europe, Asia and North America). In all these six trials, one eye for each participant was included in the study.\nWhen compared with PDT, people treated with anti-VEGF agents (ranibizumab (one RCT), bevacizumab (two RCTs)), were more likely to regain vision. At one year of follow-up, the mean visual acuity (VA) in participants treated with anti-VEGFs was -0.14 logMAR better, equivalent of seven Early Treatment Diabetic Retinopathy Study (ETDRS) letters, compared with people treated with PDT (95% confidence interval (CI) -0.20 to -0.08, 3 RCTs, 263 people, low-certainty evidence). The RR for proportion of participants gaining 3+ lines of VA was 1.86 (95% CI 1.27 to 2.73, 2 RCTs, 226 people, moderate-certainty evidence). At two years, the mean VA in people treated with anti-VEGFs was -0.26 logMAR better, equivalent of 13 ETDRS letters, compared with people treated with PDT (95% CI -0.38 to -0.14, 2 RCTs, 92 people, low-certainty evidence). The RR for proportion of people gaining 3+ lines of VA at two years was 3.43 (95% CI 1.37 to 8.56, 2 RCTs, 92 people, low-certainty evidence). People treated with anti-VEGFs showed no obvious reduction (improvement) in central retinal thickness at one year compared with people treated with PDT (MD -17.84 μm, 95% CI -41.98 to 6.30, 2 RCTs, 226 people, moderate-certainty evidence). 
There was low-certainty evidence that people treated with anti-VEGF were more likely to have CNV angiographic closure at 1 year (RR 1.24, 95% CI 0.99 to 1.54, 2 RCTs, 208 people). One study allowed ranibizumab treatment as of month 3 in participants randomised to PDT, which may have led to an underestimate of the benefits of anti-VEGF treatment.\nWhen compared with laser photocoagulation, there was more improvement in VA among bevacizumab-treated people than among laser-treated people after one year (MD -0.22 logMAR, equivalent of 11 ETDRS letters, 95% CI -0.43 to -0.01, 1 RCT, 36 people, low-certainty evidence) and after two years (MD -0.29 logMAR, equivalent of 14 ETDRS letters, 95% CI -0.50 to -0.08, 1 RCT, 36 people, low-certainty evidence).\nWhen compared with sham treatment, people treated with aflibercept had better vision at one year (MD -0.19 logMAR, equivalent of 9 ETDRS letters, 95% CI -0.27 to -0.12, 1 RCT, 121 people, moderate-certainty evidence). The fact that this study allowed for aflibercept treatment at 6 months in the control group might cause an underestimation of the benefit with anti-VEGF.\nPeople treated with ranibizumab had similar improvement in VA recovery compared with people treated with bevacizumab after one year (MD -0.02 logMAR, equivalent of 1 ETDRS letter, 95% CI -0.11 to 0.06, 2 RCTs, 80 people, moderate-certainty evidence).\nOf the included six studies, two studies reported no adverse events in either group and two industry-sponsored studies reported both systemic and ocular adverse events. In the control group, there were no systemic or ocular adverse events reported in 149 participants. Fifteen people reported systemic serious adverse events among 359 people treated with anti-VEGF agents (15/359, 4.2%). Five people reported ocular adverse events among 359 people treated with anti-VEGF agents (5/359, 1.4%). The number of adverse events was low, and the estimate of RR was uncertain regarding systemic serious adverse events (4 RCTs, 15 events in 508 people, RR 4.50, 95% CI 0.60 to 33.99, very low-certainty evidence) and serious ocular adverse events (4 RCTs, 5 events in 508 people, RR 1.82, 95% CI 0.23 to 14.71, very low-certainty evidence). There were no reports of mortality or cases of endophthalmitis or retinal detachment.\nThere was sparse reporting of data for vision-related quality of life (in favour of anti-VEGF) in only one trial at one year of follow-up. The studies did not report data for other outcomes, such as percentage of participants with newly developed chorioretinal atrophy.\nAuthors' conclusions\nThere is low to moderate-certainty evidence from RCTs for the efficacy of anti-VEGF agents to treat mCNV at one year and two years. Moderate-certainty evidence suggests ranibizumab and bevacizumab are equivalent in terms of efficacy. Adverse effects occurred rarely and the trials included here were underpowered to assess these. Future research should be focused on the efficacy and safety of different drugs and treatment regimens, the efficacy on different location of mCNV, as well as the effects on practice in the real world.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out if anti-vascular endothelial growth factor (called anti-VEGF) treatment of new blood vessels in people with severe myopia (also known as nearsightedness or shortsightedness) prevents vision loss. Cochrane researchers collected and analysed all relevant studies to answer this question and found six studies."
}
]
|
query-laysum | 5471 | [
{
"role": "user",
"content": "Abstract: Background\nComplete deletion of both the short arm of chromosome 1 (1p) and the long arm of chromosome 19 (19q), known as 1p/19q codeletion, is a mutation that can occur in gliomas. It occurs in a type of glioma known as oligodendroglioma and its higher grade counterpart known as anaplastic oligodendroglioma. Detection of 1p/19q codeletion in gliomas is important because, together with another mutation in an enzyme known as isocitrate dehydrogenase, it is needed to make the diagnosis of an oligodendroglioma. Presence of 1p/19q codeletion also informs patient prognosis and prediction of the best drug treatment. The main two tests in use are fluorescent in situ hybridisation (FISH) and polymerase chain reaction (PCR)-based loss of heterozygosity (LOH) assays (also known as PCR-based short tandem repeat or microsatellite analysis). Many other tests are available. None of the tests is perfect, although PCR-based LOH is expected to have very high sensitivity.\nObjectives\nTo estimate the sensitivity and specificity and cost-effectiveness of different deoxyribonucleic acid (DNA)-based techniques for determining 1p/19q codeletion status in glioma.\nSearch methods\nWe searched MEDLINE, Embase and BIOSIS up to July 2019. There were no restrictions based on language or date of publication. We sought economic evaluation studies from the results of this search and using the National Health Service Economic Evaluation Database.\nSelection criteria\nWe included cross-sectional studies in adults with glioma or any subtype of glioma, presenting raw data or cross-tabulations of two or more DNA-based tests for 1p/19q codeletion. We also sought economic evaluations of these tests.\nData collection and analysis\nWe followed procedures outlined in the Cochrane Handbook for Diagnostic Test Accuracy Reviews. Two review authors independently screened titles/abstracts/full texts, performed data extraction, and undertook applicability and risk of bias assessments using QUADAS-2. Meta-analyses used the hierarchical summary ROC model to estimate and compare test accuracy. We used FISH and PCR-based LOH as alternate reference standards to examine how tests compared with those in common use, and conducted a latent class analysis comparing FISH and PCR-based LOH. We constructed an economic model to evaluate cost-effectiveness.\nMain results\nWe included 53 studies examining: PCR-based LOH, FISH, single nucleotide polymorphism (SNP) array, next-generation sequencing (NGS), comparative genomic hybridisation (CGH), array comparative genomic hybridisation (aCGH), multiplex-ligation-dependent probe amplification (MLPA), real-time PCR, chromogenic in situ hybridisation (CISH), mass spectrometry (MS), restriction fragment length polymorphism (RFLP) analysis, G-banding, methylation array and NanoString. Risk of bias was low for only one study; most gave us concerns about how patients were selected or about missing data. We had applicability concerns about many of the studies because only patients with specific subtypes of glioma were included. 
1520 participants contributed to analyses using FISH as the reference, 1304 participants to analyses involving PCR-based LOH as the reference and 262 participants to analyses of comparisons between methods from studies not including FISH or PCR-based LOH.\nMost evidence was available for comparison of FISH with PCR-based LOH (15 studies, 915 participants): PCR-based LOH detected 94% of FISH-determined codeletions (95% credible interval (CrI) 83% to 98%) and FISH detected 91% of codeletions determined by PCR-based LOH (CrI 78% to 97%). Of tumours determined not to have a deletion by FISH, 94% (CrI 87% to 98%) had a deletion detected by PCR-based LOH, and of those determined not to have a deletion by PCR-based LOH, 96% (CrI 90% to 99%) had a deletion detected by FISH. The latent class analysis suggested that PCR-based LOH may be slightly more accurate than FISH. Most other techniques appeared to have high sensitivity (i.e. produced few false-negative results) for detection of 1p/19q codeletion when either FISH or PCR-based LOH was considered as the reference standard, although there was limited evidence. There was some indication of differences in specificity (false-positive rate) with some techniques. Both NGS and SNP array had high specificity when considered against FISH as the reference standard (NGS: 6 studies, 243 participants; SNP: 6 studies, 111 participants), although we rated certainty in the evidence as low or very low. NGS and SNP array also had high specificity when PCR-based LOH was considered the reference standard, although with much more uncertainty as these results were based on fewer studies (just one study with 49 participants for NGS and two studies with 33 participants for SNP array).\nG-banding had low sensitivity and specificity when PCR-based LOH was the reference standard. Although MS had very high sensitivity and specificity when both FISH and PCR-based LOH were considered the reference standard, these results were based on only one study with a small number of participants. Real-time PCR also showed high specificity with FISH as a reference standard, although there were only two studies including 40 participants.\nWe found no relevant economic evaluations. Our economic model using FISH as the reference standard suggested that the resource-optimising test depends on which measure of diagnostic accuracy is most important. With FISH as the reference standard, MLPA is likely to be cost-effective if society was willing to pay GBP 1000 or less for a true positive detected. However, as the value placed on a true positive increased, CISH was most cost-effective. Findings differed when the outcome measure changed to either true negative detected or correct diagnosis. When PCR-based LOH was used as the reference standard, MLPA was likely to be cost-effective for all measures of diagnostic accuracy at lower threshold values for willingness to pay. However, as the threshold values increased, none of the tests were clearly more likely to be considered cost-effective.\nAuthors' conclusions\nIn our review, most techniques (except G-banding) appeared to have good sensitivity (few false negatives) for detection of 1p/19q codeletions in glioma against both FISH and PCR-based LOH as a reference standard. However, we judged the certainty of the evidence low or very low for all the tests. 
There are possible differences in specificity, with both NGS and SNP array having high specificity (fewer false positives) for 1p/19q codeletion when considered against FISH as the reference standard. The economic analysis should be interpreted with caution due to the small number of studies.\n\nGiven the provided abstract, please respond to the following query: \"What is studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review examined and compared all methods to detect 1p/19q codeletion that are based on the deoxyribonucleic acid (DNA, which contains the information for an organism to develop, survive and reproduce) of the tumour. These include tests known as FISH and CISH, which are performed directly on tumour tissue and a number of other tests that are based on DNA extracted from the tumour tissue including: PCR-based LOH, real-time PCR, MLPA, SNP array, CGH array and NGS. None of these tests is perfect, so there is no 'gold standard' against which to compare them. The two most commonly used tests (FISH and PCR-based LOH) were used as the best available reference tests against which to examine the others."
}
]
|
query-laysum | 5472 | [
{
"role": "user",
"content": "Abstract: Background\nInfantile colic is typically defined as full-force crying for at least three hours per day, on at least three days per week, for at least three weeks. Infantile colic affects a large number of infants and their families worldwide. Its symptoms are broad and general, and while not indicative of disease, may represent a serious underlying condition in a small percentage of infants who may need a medical assessment. Probiotics are live microorganisms that alter the microflora of the host and provide beneficial health effects. The most common probiotics used are of Lactobacillus, Bifidobacterium and Streptococcus. There is growing evidence to suggest that intestinal flora in colicky infants differ from those in healthy infants, and it is suggested that probiotics can redress this balance and provide a healthier intestinal microbiota landscape. The low cost and easy availability of probiotics makes them a potential prophylactic solution to reduce the incidence and prevalence of infantile colic.\nObjectives\nTo evaluate the efficacy and safety of prophylactic probiotics in preventing or reducing severity of infantile colic.\nSearch methods\nIn January 2018 we searched CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, 10 other databases and two trials registers. In addition, we handsearched the abstracts of relevant meetings, searched reference lists, ran citation searches of included studies, and contacted authors and experts in the field, including the manufacturers of probiotics, to identify unpublished trials.\nSelection criteria\nRandomised control trials (RCTs) of newborn infants less than one month of age without the diagnosis of infantile colic at recruitment. We included any probiotic, alone or in combination with a prebiotic (also known as synbiotics), versus no intervention, another intervention(s) or placebo, where the focus of the study was the effect of the intervention on infantile colic.\nData collection and analysis\nWe used standard methodological procedures of Cochrane.\nMain results\nOur search yielded 3284 records, and of these, we selected 21 reports for full-text review. Six studies with 1886 participants met our inclusion criteria, comparing probiotics with placebo. Two studies examined Lactobacillus reuteri DSM, two examined multi-strain probiotics, one examined Lactobacillus rhamnosus, and one examined Lactobacillus paracasei and Bifidobacterium animalis. Two studies began probiotics during pregnancy and continued administering them to the baby after birth.\nWe considered the risk of bias for randomisation as low for all six trials; for allocation concealment as low in two studies and unclear in four others. 
All studies were blinded, and at low risk of attrition and reporting bias.\nA random-effects meta-analysis of three studies (1148 participants) found no difference between the groups in relation to occurrence of new cases of colic: risk ratio (RR) 0.46, 95% confidence interval (CI) 0.18 to 1.19; low-certainty evidence; I2 = 72%.\nA random-effects meta-analysis of all six studies (1851 participants) found no difference between the groups in relation to serious adverse effects (RR 1.02, 95% CI 0.14 to 7.21; low-certainty evidence; I2 not calculable (only four serious events for one comparison, two in each group: meconium plug obstruction, patent ductus arteriosus and neonatal hepatitis).\nA random-effects meta-analysis of three studies (707 participants) found a mean difference (MD) of –32.57 minutes per day (95% CI –55.60 to –9.54; low-certainty evidence; I2 = 93%) in crying time at study end in favour of probiotics.\nA subgroup analysis of the most studied agent, Lactobacillus reuteri, showed a reduction of 44.26 minutes in daily crying with a random-effects model (95% CI –66.6 to –21.9; I2 = 92%), in favour of probiotics.\nAuthors' conclusions\nThere is no clear evidence that probiotics are more effective than placebo at preventing infantile colic; however, daily crying time appeared to reduce with probiotic use compared to placebo. There were no clear differences in adverse effects.\nWe are limited in our ability to draw conclusions by the certainty of the evidence, which we assessed as being low across all three outcomes, meaning that we are not confident that these results would not change with the addition of further research.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Although probiotics make little or no difference to the occurrence of infantile colic, they may reduce crying time and there were no safety concerns. We still require more research to work out if the onset of colic can be reduced."
}
]
|
query-laysum | 5473 | [
{
"role": "user",
"content": "Abstract: Background\nAttention deficit hyperactivity disorder (ADHD) is a prevalent neurobiological condition, characterised by behavioral and cognitive symptoms such as inattention, impulsivity and/or excessive activity. The syndrome is commonly accompanied by psychiatric comorbidities and is associated with educational and occupational underachievement.\nAlthough psychostimulant medications are the mainstay of treatment for ADHD, not all adults respond optimally to, or can tolerate, these medicines. Thus, alternative non-stimulant treatment approaches for ADHD have been explored. One of these alternatives is bupropion, an aminoketone antidepressant and non-competitive antagonism of nicotinic acetylcholine receptors. Bupropion is registered for the treatment of depression and smoking cessation, but is also used off-label to treat ADHD.\nObjectives\nTo assess the effects and safety of bupropion for the treatment of adults with ADHD.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and seven other databases in February 2017. We also searched three trials registers and three online theses portals. In addition, we checked references of included studies and contacted study authors to identify potentially relevant studies that were missed by our search.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that evaluated the effects (including adverse effects) of bupropion compared to placebo in adults with ADHD.\nData collection and analysis\nTwo review authors (WV, GB) independently screened records and extracted data using a data extraction sheet that we tested in a pilot study. We extracted all relevant data on study characteristics and results. We assessed risks of bias using the Cochrane 'Risk of bias' tool, and assessed the overall quality of evidence using the GRADE approach. We used a fixed-effect model to pool the results across studies.\nMain results\nWe included six studies with a total of 438 participants. Five studies were conducted in the USA, and one in Iran. All studies evaluated a long-acting version of bupropion, with the dosage ranging from 150 mg up to 450 mg daily. Study intervention length varied from six to 10 weeks. Four studies explicitly excluded participants with psychiatric comorbidity and one study included only participants with opioid dependency. Four studies were funded by industry, but the impact of this on study results is unknown. Two studies were publicly funded and in one of these studies, the lead author was a consultant for several pharmaceutical companies and also received investigator-driven funding from two companies, however none of these companies manufacture bupropion. We judged none of the studies to be free of bias because for most risk of bias domains the study reports failed to provide sufficient details. Using the GRADE approach, we rated the overall quality of evidence as low. 
We downgraded the quality of the evidence because of serious risk of bias and serious imprecision due to small sample sizes.\nWe found low-quality evidence that bupropion decreased the severity of ADHD symptoms (standardised mean difference -0.50, 95% confidence interval (CI) -0.86 to -0.15, 3 studies, 129 participants), and increased the proportion of participants achieving clinical improvement (risk ratio (RR) 1.50, 95% CI 1.13 to 1.99, 4 studies, 315 participants), and reporting an improvement on the Clinical Global Impression - Improvement scale (RR 1.78, 95% CI 1.27 to 2.50, 5 studies, 337 participants). There was low-quality evidence that the proportion of participants who withdrew due to any adverse effect was similar in the bupropion and placebo groups (RR 1.20, 95% CI 0.35 to 4.10, 3 studies, 253 participants). The results were very similar when using a random-effects model and when we analysed only studies that excluded participants with a psychiatric comorbidity.\nAuthors' conclusions\nThe findings of this review, which compared bupropion to placebo for adult ADHD, indicate a possible benefit of bupropion. We found low-quality evidence that bupropion decreased the severity of ADHD symptoms and moderately increased the proportion of participants achieving a significant clinical improvement in ADHD symptoms. Furthermore, we found low-quality evidence that the tolerability of bupropion is similar to that of placebo. In the pharmacological treatment of adults with ADHD, extended- or sustained-release bupropion may be an alternative to stimulants. The low-quality evidence indicates uncertainty with respect to the pooled effect estimates. Further research is very likely to change these estimates. More research is needed to reach more definite conclusions as well as clarifying the optimal target population for this medicine. Treatment response remains to be reported in a DSM5-diagnosed population. There is also a lack of knowledge on long-term outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "ADHD is a brain disorder marked by an ongoing pattern of inattention, impulsivity and/or excessive activity, which interferes with functioning or development. Some people with ADHD only have problems with one of the behaviors, while others have a combination. Bupropion is a drug that is registered for the treatment of depression and for stopping smoking, but is also used to treat ADHD. People with ADHD are usually prescribed stimulant drugs such as methylphenidate and amphetamines. However, not everyone responds well to stimulants. Some people cannot tolerate stimulants because of side effects. Others have medical conditions such as psychiatric conditions or tic disorders that make it not possible to use stimulants. Other people may not want to use stimulants, as these are controlled substances. Nonstimulants such as bupropion are therefore sometimes used instead. Its effectiveness in treating ADHD is not yet known."
}
]
|
query-laysum | 5474 | [
{
"role": "user",
"content": "Abstract: Background\nHeavy menstrual bleeding and pain are common reasons women discontinue intrauterine device (IUD) use. Copper IUD (Cu IUD) users tend to experience increased menstrual bleeding, whereas levonorgestrel IUD (LNG IUD) users tend to have irregular menstruation. Medical therapies used to reduce heavy menstrual bleeding or pain associated with Cu and LNG IUD use include non-steroidal anti-inflammatory drugs (NSAIDs), anti-fibrinolytics and paracetamol. We analysed treatment and prevention interventions separately because the expected outcomes for treatment and prevention interventions differ. We did not combine different drug classes in the analysis as they have different mechanisms of action. This is an update of a review originally on NSAIDs. The review scope has been widened to include all interventions for treatment or prevention of heavy menstrual bleeding or pain associated with IUD use.\nObjectives\nTo evaluate all randomized controlled trials (RCTs) that have assessed strategies for treatment and prevention of heavy menstrual bleeding or pain associated with IUD use, for example, pharmacotherapy and alternative therapies.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and CINAHL to January 2021.\nSelection criteria\nWe included RCTs in any language that tested strategies for treatment or prevention of heavy menstrual bleeding or pain associated with IUD (Cu IUD, LNG IUD or other IUD) use. The comparison could be no intervention, placebo or another active intervention.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, and extracted data. Primary outcomes were volume of menstrual blood loss, duration of menstruation and painful menstruation. We used a random-effects model in all meta-analyses. Review authors assessed the certainty of evidence using GRADE.\nMain results\nThis review includes 21 trials involving 3689 participants from middle- and high-income countries. Women were 18 to 45 years old and either already using an IUD or had just had one placed for contraception. The included trials examined NSAIDs and other interventions. Eleven were treatment trials, of these seven were on users of the Cu IUD, one on LNG IUD and three on an unknown type. Ten were prevention trials, six focused on Cu IUD users, and four on LNG IUD users. Sixteen trials had high risk of detection bias due to subjective assessment of pain and bleeding.\nTreatment of heavy menstrual bleeding\nCu IUD\nVitamin B1 resulted in fewer pads used per day (mean difference (MD) −7.00, 95% confidence interval (CI) −8.50 to −5.50) and fewer bleeding days (MD −2.00, 95% CI –2.38 to −1.62; 1 trial; 110 women; low-certainty evidence) compared to placebo. The evidence is very uncertain about the effect of naproxen on the volume of menstruation compared to placebo (odds ratio (OR) 0.09, 95% CI 0.00 to 1.78; 1 trial, 40 women; very low-certainty evidence).\nTreatment with mefenamic acid resulted in less volume of blood loss compared to tranexamic acid (MD −64.26, 95% CI −105.65 to −22.87; 1 trial, 94 women; low-certainty evidence). 
However, there was no difference in duration of bleeding with treatment of mefenamic acid or tranexamic acid (MD 0.08 days, 95% CI −0.27 to 0.42, 2 trials, 152 women; low-certainty evidence).\nLNG IUD\nThe use of ulipristal acetate in LNG IUD may not reduce the number of bleeding days in 90 days in comparison to placebo (MD −9.30 days, 95% CI −26.76 to 8.16; 1 trial, 24 women; low-certainty evidence).\nUnknown IUD type\nMefenamic acid may not reduce volume of bleeding compared to Vitex agnus measured by pictorial blood assessment chart (MD −2.40, 95% CI −13.77 to 8.97; 1 trial; 84 women; low-certainty evidence).\nTreatment of pain\nCu IUD\nTreatment with tranexamic acid and sodium diclofenac may result in little or no difference in the occurrence of pain (OR 1.00, 95% CI 0.06 to 17.25; 1 trial, 38 women; very low-certainty evidence).\nUnknown IUD type\nNaproxen may reduce pain (MD 4.10, 95% CI 0.91 to 7.29; 1 trial, 33 women; low-certainty evidence).\nPrevention of heavy menstrual bleeding\nCu IUD\nWe found very low-certainty evidence that tolfenamic acid may prevent heavy bleeding compared to placebo (OR 0.54, 95% CI 0.34 to 0.85; 1 trial, 310 women). There was no difference between ibuprofen and placebo in blood volume reduction (MD −14.11, 95% CI −36.04 to 7.82) and duration of bleeding (MD −0.2 days, 95% CI −1.40 to 1.0; 1 trial, 28 women, low-certainty evidence).\nAspirin may not prevent heavy bleeding in comparison to paracetamol (MD −0.30, 95% CI −26.16 to 25.56; 1 trial, 20 women; very low-certainty evidence).\nLNG IUD\nUlipristal acetate may increase the percentage of bleeding days compared to placebo (MD 9.50, 95% CI 1.48 to 17.52; 1 trial, 118 women; low-certainty evidence). There were insufficient data for analysis in a single trial comparing mifepristone and vitamin B.\nThere were insufficient data for analysis in the single trial comparing tranexamic acid and mefenamic acid and in another trial comparing naproxen with estradiol.\nPrevention of pain\nCu IUD\nThere was low-certainty evidence that tolfenamic acid may not be effective to prevent painful menstruation compared to placebo (OR 0.71, 95% CI 0.44 to 1.14; 1 trial, 310 women). Ibuprofen may not reduce menstrual cramps compared to placebo (OR 1.00, 95% CI 0.11 to 8.95; 1 trial, 20 women, low-certainty evidence).\nAuthors' conclusions\nFindings from this review should be interpreted with caution due to low- and very low-certainty evidence. Included trials were limited; the majority of the evidence was derived from single trials with few participants. Further research requires larger trials and improved trial reporting. The use of vitamin B1 and mefenamic acid to treat heavy menstruation and tolfenamic acid to prevent heavy menstruation associated with Cu IUD should be investigated. More trials are needed to generate evidence for the treatment and prevention of heavy and painful menstruation associated with LNG IUD.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review includes 21 studies with 3689 women. Eleven studies investigated treatment, while 10 looked at prevention of heavy bleeding and pain. Most of the evidence was obtained from single studies with few participants. We have little to no confidence in the results of the studies. Our confidence is limited because some studies involved only a few people, had varied interventions or had no clear reports on how the study was conducted."
}
]
|
query-laysum | 5475 | [
{
"role": "user",
"content": "Abstract: Background\nUlcerative colitis (UC) is a chronic inflammatory bowel disease. Corticosteroids and 5-aminosalicylates are the most commonly used therapies. However, many patients require immunosuppressive therapy for steroid-refractory and steroid-dependent disease. Methotrexate is a medication that is effective for treating a variety of inflammatory diseases, including Crohn's disease. This review was performed to determine the effectiveness of methotrexate treatment in UC patients. This review is an update of a previously published Cochrane review.\nObjectives\nTo assess the efficacy and safety of methotrexate for induction of remission in patients with UC.\nSearch methods\nMEDLINE, EMBASE, CENTRAL and the Cochrane IBD/FBD group specialized trials register were searched from from inception to June 26, 2014. Study references and review papers were also searched for additional trials. Abstracts from major gastroenterological meetings were searched to identify research published in abstract form only.\nSelection criteria\nRandomized controlled trials comparing methotrexate with placebo or an active comparator in patients with active ulcerative colitis were considered for inclusion.\nData collection and analysis\nTwo authors independently reviewed studies for eligibility, extracted data and assessed study quality using the Cochrane risk of bias tool. The primary outcome measure was the proportion of patients who achieved clinical remission and withdrawal from steroids as defined by the studies and expressed as a percentage of the total number of patients randomized (intention-to-treat analysis). We calculated the risk ratio (RR) and corresponding 95% confidence intervals (95% CI) for dichotomous outcomes. The overall quality of the evidence supporting the primary outcome was assessed using the GRADE criteria.\nMain results\nTwo studies (n = 101 patients) were included in the review. One study (n = 67) compared oral methotrexate 12.5 mg/week) to placebo. The other study (n = 34) compared oral methotrexate (15 mg/week) to 6-mercaptopurine (1.5 mg/kg/day) and 5-aminosalicylic acid (3 g/day). The placebo-controlled study was judged to be at low risk of bias. The other study was judged to be at high risk of bias due to an open-label design. There was no statistically significant difference in clinical remission rates between methotrexate and placebo patients. Forty-seven per cent (14/30) of methotrexate patients achieved clinical remission and complete withdrawal from steroids during the study period compared to 49% (18/37) of placebo patients (RR 0.96, 95% CI 0.58 to 1.59. A GRADE analysis indicated that the overall quality of the evidence supporting this outcome was low due to very sparse data (32 events). There were no statistically significant differences in the proportion of patients who achieved clinical remission and withdrawal from steroids in the study comparing oral methotrexate to 6-mercaptopurine and 5-aminosalicylic acid. At 30 weeks, 58% (7/12) of methotrexate patients achieved clinical remission and withdrawal from steroids compared to 79% (11/14) of 6-mercaptopurine patients (RR 0.74, 95% CI 0.43 to 1.29) and 25% of 5-aminosalicylic acid patients (RR 2.33, 95% CI 0.64 to 8.49). GRADE analyses indicated that the overall quality of the evidence was very low due to very sparse data (18 and 9 events respectively) and and high risk of bias. 
In the placebo-controlled trial two patients (7%) were withdrawn from the methotrexate group due to adverse events (leucopenia, migraine) compared to one patient (3%) who had a rash in the placebo group (RR 2.47, 95% CI 0.23 to 25.91). Adverse events experienced by methotrexate patients in the active comparator study included nausea and dyspepsia, mild alopecia, mild increase in aspartate aminotransferase levels, peritoneal abscess, hypoalbuminemia, severe rash and atypical pneumonia.\nAuthors' conclusions\nAlthough methotrexate was well-tolerated, the studies showed no benefit for methotrexate over placebo or active comparators. The results for efficacy outcomes between methotrexate and placebo, methotrexate and 6-mercaptopurine and methotrexate and 5-aminosalicylic acid were uncertain. Whether a higher dose or parenteral administration would be effective for induction therapy is unknown. At present there is no evidence supporting the use of methotrexate for induction of remission in active ulcerative colitis. A trial in which larger numbers of patients receive a higher dose of oral methotrexate should be considered. Currently there are two large ongoing placebo-controlled trials (METEOR and MERIT-UC) assessing the efficacy and safety of intramuscular or subcutaneous methotrexate in patients with active UC which may help resolve the evidence supporting the use of methotrexate as therapy for active of ulcerative colitis.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The researchers identified two studies that included a total of 101 participants. One was a high quality study (67 participants) that compared oral methotrexate (12.5 mg/week) to a placebo (a sugar pill or fake medicine). The other study (34 participants) compared oral methotrexate (15 mg/week) against 6-mercaptopurine (an immunosuppressive drug at a dose of 1.5 mg/kg/day) and against 5-aminosalicylic acid (an anti-inflammatory drug at a dose of 3 g/day).\nIn the high quality study, there was no difference between the methotrexate and placebo treatment groups for the number of people who achieved remission and were able to stop taking steroids. This suggests that, when used at this low dose (12.5 mg/week), methotrexate does not produce remission from ulcerative colitis. However, this result is uncertain because of the small number of people who were assessed.\nThe other, smaller study showed no differences between methotrexate and the other treatments in the proportion of participants who experienced remission and were able to stop taking steroids. This result is also uncertain due to poor study design and the low number of participants.\nThe side effects reported in the two studies included leucopenia (a decrease in the number of white blood cells), migraine, rash, nausea and dyspepsia (indigestion), mild alopecia (hair loss), mild increase in levels of an enzyme found in the liver (aspartate aminotransferase), a collection of pus in the abdominal tissue (peritoneal abscess), abnormally low levels of the protein albumin in the blood (hypoalbuminemia), and pneumonia.\nAt present, the results from medical trials do not support the use of low dose oral methotrexate (12.5 mg to 15 mg/week) for the production of remission in active ulcerative colitis. It is not known whether a higher dose of oral methotrexate, or giving methotrexate by a different route (e.g. by injection), would increase the likelihood of remission.\nIn future, researchers should consider organising a study with a larger number of participants who receive a higher dose of oral methotrexate. Currently, there are two large studies being run that compare a higher dose of methotrexate – given by injection – with placebo in people with active ulcerative colitis (the METEOR and MERIT-UC studies). The results of these studies may resolve the uncertainty surrounding the use of methotrexate for the treatment of active ulcerative colitis."
}
]
|
query-laysum | 5476 | [
{
"role": "user",
"content": "Abstract: Background\nMost people who stop smoking gain weight. This can discourage some people from making a quit attempt and risks offsetting some, but not all, of the health advantages of quitting. Interventions to prevent weight gain could improve health outcomes, but there is a concern that they may undermine quitting.\nObjectives\nTo systematically review the effects of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation (referred to as 'Part 1') and (2) interventions designed to aid smoking cessation that plausibly affect post-cessation weight gain (referred to as 'Part 2').\nSearch methods\nPart 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL; latest search 16 October 2020.\nPart 2 - We searched included studies in the following 'parent' Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, e-cigarettes, and exercise interventions for smoking cessation published in Issue 10, 2020 of the Cochrane Library. We updated register searches for the review of nicotine receptor partial agonists.\nSelection criteria\nPart 1 - trials of interventions that targeted post-cessation weight gain and had measured weight at any follow-up point or smoking cessation, or both, six or more months after quit day.\nPart 2 - trials included in the selected parent Cochrane reviews reporting weight change at any time point.\nData collection and analysis\nScreening and data extraction followed standard Cochrane methods. Change in weight was expressed as difference in weight change from baseline to follow-up between trial arms and was reported only in people abstinent from smoking. Abstinence from smoking was expressed as a risk ratio (RR). Where appropriate, we performed meta-analysis using the inverse variance method for weight, and Mantel-Haenszel method for smoking.\nMain results\nPart 1: We include 37 completed studies; 21 are new to this update. We judged five studies to be at low risk of bias, 17 to be at unclear risk and the remainder at high risk.\nAn intermittent very low calorie diet (VLCD) comprising full meal replacement provided free of charge and accompanied by intensive dietitian support significantly reduced weight gain at end of treatment compared with education on how to avoid weight gain (mean difference (MD) −3.70 kg, 95% confidence interval (CI) −4.82 to −2.58; 1 study, 121 participants), but there was no evidence of benefit at 12 months (MD −1.30 kg, 95% CI −3.49 to 0.89; 1 study, 62 participants). The VLCD increased the chances of abstinence at 12 months (RR 1.73, 95% CI 1.10 to 2.73; 1 study, 287 participants). However, a second study found that no-one completed the VLCD intervention or achieved abstinence.\nInterventions aimed at increasing acceptance of weight gain reported mixed effects at end of treatment, 6 months and 12 months with confidence intervals including both increases and decreases in weight gain compared with no advice or health education. Due to high heterogeneity, we did not combine the data. These interventions increased quit rates at 6 months (RR 1.42, 95% CI 1.03 to 1.96; 4 studies, 619 participants; I2 = 21%), but there was no evidence at 12 months (RR 1.25, 95% CI 0.76 to 2.06; 2 studies, 496 participants; I2 = 26%).\nSome pharmacological interventions tested for limiting post-cessation weight gain (PCWG) reduced weight gain at the end of treatment (dexfenfluramine, phenylpropanolamine, naltrexone). 
The effects of ephedrine and caffeine combined, lorcaserin, and chromium were too imprecise to give useful estimates of treatment effects. There was very low-certainty evidence that personalized weight management support reduced weight gain at end of treatment (MD −1.11 kg, 95% CI −1.93 to −0.29; 3 studies, 121 participants; I2 = 0%), but no evidence of benefit in the longer term at 12 months (MD −0.44 kg, 95% CI −2.34 to 1.46; 4 studies, 530 participants; I2 = 41%). There was low to very low-certainty evidence that detailed weight management education without personalized assessment, planning and feedback did not reduce weight gain and may have reduced smoking cessation rates (12 months: MD −0.21 kg, 95% CI −2.28 to 1.86; 2 studies, 61 participants; I2 = 0%; RR for smoking cessation 0.66, 95% CI 0.48 to 0.90; 2 studies, 522 participants; I2 = 0%).\nPart 2: We include 83 completed studies, 27 of which are new to this update.\nThere was low-certainty evidence that exercise interventions led to minimal or no weight reduction compared with standard care at end of treatment (MD −0.25 kg, 95% CI −0.78 to 0.29; 4 studies, 404 participants; I2 = 0%). However, weight was reduced at 12 months (MD −2.07 kg, 95% CI −3.78 to −0.36; 3 studies, 182 participants; I2 = 0%).\nBoth bupropion and fluoxetine limited weight gain at end of treatment (bupropion MD −1.01 kg, 95% CI −1.35 to −0.67; 10 studies, 1098 participants; I2 = 3%; fluoxetine MD −1.01 kg, 95% CI −1.49 to −0.53; 2 studies, 144 participants; I2 = 38%; low- and very low-certainty evidence, respectively). There was no evidence of benefit at 12 months for bupropion, but estimates were imprecise (bupropion MD −0.26 kg, 95% CI −1.31 to 0.78; 7 studies, 471 participants; I2 = 0%). No studies of fluoxetine provided data at 12 months.\nThere was moderate-certainty evidence that NRT reduced weight at end of treatment (MD −0.52 kg, 95% CI −0.99 to −0.05; 21 studies, 2784 participants; I2 = 81%) and moderate-certainty evidence that the effect may be similar at 12 months (MD −0.37 kg, 95% CI −0.86 to 0.11; 17 studies, 1463 participants; I2 = 0%), although the estimates are too imprecise to assess long-term benefit.\nThere was mixed evidence of the effect of varenicline on weight, with high-certainty evidence that weight change was very modestly lower at the end of treatment (MD −0.23 kg, 95% CI −0.53 to 0.06; 14 studies, 2566 participants; I2 = 32%); low-certainty evidence gave an imprecise estimate of higher weight at 12 months (MD 1.05 kg, 95% CI −0.58 to 2.69; 3 studies, 237 participants; I2 = 0%).\nAuthors' conclusions\nOverall, there is no intervention for which there is moderate certainty of a clinically useful effect on long-term weight gain. There is also no moderate- or high-certainty evidence that interventions designed to limit weight gain reduce the chances of people achieving abstinence from smoking.\n\nGiven the provided abstract, please respond to the following query: \"Stopping smoking and weight gain\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Stopping smoking and weight gain\nIf you smoke, the best thing you can do for your health is to stop. But people often put on weight when they stop smoking, usually in the first few months of stopping. Gaining weight might undermine some of the benefits of stopping smoking, and might also affect the motivation of some people trying to stop smoking."
}
]
|
query-laysum | 5478 | [
{
"role": "user",
"content": "Abstract: Background\nHeart failure (HF) is a chronic disease with significant impact on quality of life and presents many challenges to those diagnosed with the condition, due to a seemingly complex daily regimen of self-care which includes medications, monitoring of weight and symptoms, identification of signs of deterioration and follow-up and interaction with multiple healthcare services. Education is vital for understanding the importance of this regimen, and adhering to it. Traditionally, education has been provided to people with heart failure in a face-to-face manner, either in a community or a hospital setting, using paper-based materials or video/DVD presentations. In an age of rapidly-evolving technology and uptake of smartphones and tablet devices, mHealth-based technology (defined by the World Health Organization as mobile and wireless technologies to achieve health objectives) is an innovative way to provide health education which has the benefit of being able to reach people who are unable or unwilling to access traditional heart failure education programmes and services.\nObjectives\nTo systematically review and quantify the potential benefits and harms of mHealth-delivered education for people with heart failure.\nSearch methods\nWe performed an extensive search of bibliographic databases and registries (CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, IEEE Xplore, ClinicalTrials.gov and WHO International Clinical Trials Registry Platform (ICTRP) Search Portal), using terms to identify HF, education and mHealth. We searched all databases from their inception to October 2019 and imposed no restriction on language of publication.\nSelection criteria\nWe included studies if they were conducted as a randomised controlled trial (RCT), involving adults (≥ 18 years) with a diagnosis of HF. We included trials comparing mHealth-delivered education such as internet and web-based education programmes for use on smartphones and tablets (including apps) and other mobile devices, SMS messages and social media-delivered education programmes, versus usual HF care.\nData collection and analysis\nTwo review authors independently selected studies, assessed risks of bias, and extracted data from all included studies. We calculated the mean difference (MD) or standardised mean difference (SMD) for continuous data and the odds ratio (OR) for dichotomous data with a 95% confidence interval (CI). We assessed heterogeneity using the I2 statistic and assessed the quality of evidence using GRADE criteria.\nMain results\nWe include five RCTs (971 participants) of mHealth-delivered education interventions for people with HF in this review. The number of trial participants ranged from 28 to 512 participants. Mean age of participants ranged from 60 years to 75 years, and 63% of participants across the studies were men. Studies originated from Australia, China, Iran, Sweden, and The Netherlands. Most studies included participants with symptomatic HF, NYHA Class II - III.\nThree studies addressed HF knowledge, revealing that the use of mHealth-delivered education programmes showed no evidence of a difference in HF knowledge compared to usual care (MD 0.10, 95% CI −0.2 to 0.40, P = 0.51, I2 = 0%; 3 studies, 411 participants; low-quality evidence). 
One study assessing self-efficacy reported that both study groups had high levels of self-efficacy at baseline and uncertainty in the evidence for the intervention (MD 0.60, 95% CI −0.57 to 1.77; P = 0.31; 1 study, 29 participants; very low-quality evidence). Three studies evaluated HF self-care using different scales. We did not pool the studies due to the heterogeneous nature of the outcome measures, and the evidence is uncertain. None of the studies reported adverse events. Four studies examined health-related quality of life (HRQoL). There was uncertainty in the evidence for the use of mHealth-delivered education on HRQoL (MD −0.10, 95% CI −2.35 to 2.15; P = 0.93, I2 = 61%; 4 studies, 942 participants; very low-quality evidence). Three studies reported on HF-related hospitalisation. The use of mHealth-delivered education may result in little to no difference in HF-related hospitalisation (OR 0.74, 95% CI 0.52 to 1.06; P = 0.10, I2 = 0%; 3 studies, 894 participants; low-quality evidence). We downgraded the quality of the studies due to limitations in study design and execution, heterogeneity, wide confidence intervals and fewer than 500 participants in the analysis.\nAuthors' conclusions\nWe found that the use of mHealth-delivered educational interventions for people with HF shows no evidence of a difference in HF knowledge; uncertainty in the evidence for self-efficacy, self-care and health-related quality of life; and may result in little to no difference in HF-related hospitalisations. The identification of studies currently underway and those awaiting classification indicates that this is an area of research from which further evidence will emerge in the short and longer term.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Five studies addressed HF knowledge; we found that the use of mHealth-delivered education programmes showed no evidence of a difference in HF knowledge compared to usual care. One study assessing self-efficacy reported uncertainty in the evidence for mHealth-delivered education compared with usual care. Three studies evaluated HF self-care using different scales. We did not combine the studies, because of the differences between the outcome measures, and the evidence is uncertain. The studies did not report any side effects of the interventions. Four studies examined health-related quality of life and showed uncertainty in the evidence between mHealth-delivered education and usual care. Three studies reported on HF-related hospitalisation rates. The use of mHealth-delivered education may result in little to no difference in HF-related hospitalisations."
}
]
|
query-laysum | 5479 | [
{
"role": "user",
"content": "Abstract: Background\nInfantile nystagmus syndrome (INS) is a type of eye movement disorder that can negatively impact vision. Currently, INS cannot be cured, but its effects can potentially be treated pharmacologically, optically, or surgically. This review focuses on the surgical interventions for INS.\nDespite the range of surgical interventions available, and currently applied in practice for the management of INS, there is no clear consensus, and no accepted clinical guidelines regarding the relative efficacy and safety of the various treatment options. A better understanding of these surgical options, along with their associated side effects, will assist clinicians in evidence-based decision-making in relation to the management of INS.\nObjectives\nTo assess the efficacy and safety of surgical interventions for INS.\nSearch methods\nWe searched CENTRAL, MEDLINE Ovid, Embase Ovid, ISRCTN registry, ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) to 3 July 2020, with no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) studying the efficacy and safety of surgical options for treating INS.\nData collection and analysis\nOur prespecified outcome measures were the change from baseline in: binocular best-corrected distance visual acuity; head posture; amplitude, frequency, intensity, and foveation period durations of the nystagmus waveform; visual recognition times; quality of life and self-reported outcome measures; incidence of adverse effects with a probable causal link to treatment; and permanent adverse effects after surgery.\nTwo review authors independently screened titles and abstracts and full-text articles, extracted data from eligible RCTs, and judged the risk of bias using the Cochrane tool. We reached consensus on any disagreements by discussion. We summarised the overall certainty of the evidence using the GRADE approach.\nMain results\nWe only identified one eligible RCT (N = 10 participants), undertaken in India. This trial randomised participants to receive either a large retro-equatorial recession of the horizontal rectus muscle of 9 mm on the medial rectus and 12 mm on the lateral rectus, or a simple tenotomy and resuturing of the four horizontal rectus muscles. We did not identify any RCTs comparing a surgical intervention for INS relative to no treatment.\nIn the single eligible RCT, both eyes of each participant received the same intervention. The participants’ age and gender were not reported, nor was information on whether participants were idiopathic or had sensory disorders. The study only included participants with null in primary position and did not explicitly exclude those with congenital periodic alternating nystagmus. The study did not report funding source(s) or author declaration of interests. The evaluation period was six months.\nWe judged this study at low risk for sequence generation and other sources of bias, but at high risk of bias for performance and detection bias. The risk of bias was unclear for selection bias, attrition bias, and reporting bias.\nThere is very uncertain evidence about the effect of the interventions on visual acuity and change in amplitude, frequency, and intensity of the nystagmus waveform. We were unable to calculate relative effects due to lack of data. None of the participants in either intervention group reported adverse effects at six-month follow-up (very low-certainty evidence). 
There was no quantitative data reported for quality of life, although the study reported an improvement in quality of life after surgery in both intervention groups (very low-certainty evidence).\nChange in head posture, foveation period durations of the nystagmus waveform, visual recognition times, and permanent adverse effects after surgery were not reported in the included study.\nWe judged the certainty of the evidence, for both the primary and secondary efficacy outcomes, to be very low. Due to a lack of comprehensive reporting of adverse events, there was also very low-certainty of the safety profile of the evaluated surgical interventions in this population. As such, we are very uncertain about the relative efficacy and safety of these interventions for the surgical management of INS.\nAuthors' conclusions\nThis systematic review identified minimal high-quality evidence relating to the efficacy and safety of surgical interventions for INS. The limited availability of evidence must be considered by clinicians when treating INS, particularly given these procedures are irreversible and often performed on children. More high-quality RCTs are needed to better understand the efficacy and safety profile of surgical interventions for INS. This will assist clinicians, people with INS, and their parents or caregivers to make evidence-based treatment decisions.\n\nGiven the provided abstract, please respond to the following query: \"How did we identify and evaluate the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "First, we searched for randomised controlled studies, in which people were randomly divided into two or more treatment groups. This makes it less likely that any differences between treatment effects were actually due to differences in the people who received them (rather than due to the treatments themselves, which is what we wanted to find out).\nWe then compared the results, and summarised the evidence from all the studies. Finally, we rated our confidence in the evidence, based on factors such as study methods and sizes, and the consistency of findings across studies."
}
]
|
query-laysum | 5480 | [
{
"role": "user",
"content": "Abstract: Background\nPostpartum haemorrhage (PPH) is the single leading cause of maternal mortality worldwide. Most of the deaths associated with PPH occur in resource-poor settings where effective methods of prevention and treatment - such as oxytocin - are not accessible because many births still occur at home, or in community settings, far from a health facility. Likewise, most of the evidence supporting oxytocin effectiveness comes from hospital settings in high-income countries, mainly because of the need of well-organised care for its administration and monitoring. Easier methods for oxytocin administration have been developed for use in resource-poor settings, but as far as we know, its effectiveness has not been assessed in a systematic review.\nObjectives\nTo assess the effectiveness and safety of oxytocin provided in non-facility birth settings by any way in the third stage of labour to prevent PPH.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register, the WHO International Clinical Trials Registry Platform (ICTRP), ClinicalTrials.gov (12 November 2015), and reference lists of retrieved reports.\nSelection criteria\nAll published, unpublished or ongoing randomised or quasi-randomised controlled trials comparing the administration of oxytocin with no intervention, or usual/standard care for the management of the third stage of labour in non-facility birth settings were considered for inclusion.\nQuasi-randomised controlled trials and randomised controlled trials published in abstract form only were eligible for inclusion but none were identified. Cross-over trials were not eligible for inclusion in this review.\nData collection and analysis\nTwo review authors independently assessed studies for eligibility, assessed risk of bias and extracted the data using an agreed data extraction form. Data were checked for accuracy.\nMain results\nWe included one cluster-randomised trial conducted in four rural districts in Ghana that randomised 28 community health officers (CHOs) (serving 2404 potentially eligible pregnant women) to the intervention group and 26 CHOs (serving 3515 potentially eligible pregnant women) to the control group. Overall, the trial had a high risk of bias. CHOs delivered the intervention in the experimental group (injection of 10 IU (international units) of oxytocin in the thigh one minute following birth using a prefilled, auto-disposable syringe). In the control group, CHOs did not provide this prophylactic injection to the women they observed. CHOs had no midwifery skills and did not in any way manage the birth. All other CHO activities (outcome measurement, data collection, and early treatment and referral when necessary) were identical across the control and oxytocin CHOs.\nAlthough only one of the nine cases of severe PPH (blood loss greater or equal to 1000 mL) occurred in the oxytocin group, the effect estimate for this outcome was very imprecise and it is uncertain whether the intervention prevents severe PPH (risk ratio (RR) 0.16, 95% confidence interval (CI) 0.02 to 1.30; 1570 women (very low-quality evidence)). Similarly, because of the lack of cases of severe maternal morbidity (e.g. 
uterine rupture) and maternal deaths, it was not possible to obtain effect estimates for those outcomes (both very low-quality evidence).\nOxytocin compared with the control group decreased the incidence of PPH (> 500 mL) in both our unadjusted (RR 0.48, 95% CI 0.28 to 0.81; 1569 women) and adjusted (RR 0.49, 95% CI 0.27 to 0.90; 1174 women (both low-quality evidence)) analyses. There was little or no difference between the oxytocin and control groups on the rates of transfer or referral of the mother to a healthcare facility (RR 0.72, 95% CI 0.34 to 1.56; 1586 women (low-quality evidence)), stillbirths (RR 1.27, 95% CI 0.67 to 2.40; 2006 infants (low-quality evidence)); and early infant deaths (0 to three days) (RR 1.03, 95% CI 0.35 to 3.07; 1969 infants (low-quality evidence)). There were no cases of needle-stick injury or any other maternal major or minor adverse event or unanticipated harmful event. There were no cases of oxytocin use during labour.\nThere were no data reported for some of this review's secondary outcomes: manual removal of placenta, maternal anaemia, neonatal death within 28 days, neonatal transfer to health facility for advanced care, breastfeeding rates. Similarly, the women's or the provider's satisfaction with the intervention was not reported.\nAuthors' conclusions\nIt is uncertain if oxytocin administered by CHOs in non-facility settings compared with a control group reduces the incidence of severe PPH (>1000 mL), severe maternal morbidity or maternal deaths. However, the intervention probably decreases the incidence of PPH (> 500 mL).\nThe quality of the one trial included in this review was limited because of the risk of attrition and recruitment biases related to limitations in the follow-up of pregnant women in both arms of the trials and some baseline imbalance on the size of babies at birth. Additionally, there was serious imprecision of the effect estimates for most of the primary outcomes mainly because of the size of the trial, very few or no events and CIs around both relative and absolute estimates of effect that include both appreciable benefit and appreciable harm.\nAlthough the trial presented data for both primary and secondary outcomes, it seemed to be underpowered to detect differences in the primary outcomes that are the ones more relevant for making judgments about the potential applicability of the intervention in other settings (especially severe PPH).\nTherefore, taking into account the extreme setting where the intervention was implemented, the limited role of the CHO in the trial and the lack of power for detecting effects on primary (relevant) outcomes, the applicability of the evidence found seems to be rather limited.\nFurther well-executed and adequately-powered randomised controlled trials assessing the effects of using oxytocin in pre-filled injection devices or other new delivery systems (spray-dried ultrafine formulation of oxytocin) on severe PPH are urgently needed. Likewise, other important outcomes like possible adverse events and acceptability of the intervention by mothers and other community stakeholders should also be assessed.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is uncertain if the administration of oxytocin by community health officers without midwifery skills administered in non-health facility settings compared with a control group reduces the incidence of severe PPH, severe maternal illness or maternal deaths when compared with a control group. However, oxytocin probably decreases the incidence of PPH (> 500 mL).\nConsidering the very specific setting where the trial was conducted and the limited role played by the community health officer in the trial, the applicability of the evidence is rather limited.\nFurther high-quality randomised controlled trials are urgently needed to assess the effects of using oxytocin in pre-filled injection devices or other new delivery systems on severe PPH. Similarly, future studies should consider other important outcomes like possible adverse events and the acceptability of the intervention for mothers and other community stakeholders."
}
]
|
query-laysum | 5481 | [
{
"role": "user",
"content": "Abstract: Background\nLong-acting bronchodilators, comprising long-acting beta2-agonists (LABA) and long-acting anti-muscarinic agents (LAMA, principally tiotropium), are commonly used for managing persistent symptoms of chronic obstructive pulmonary disease (COPD). Combining these treatments, which have different mechanisms of action, may be more effective than the individual components. However, the benefits and risks of combining tiotropium and LABAs for the treatment of COPD are unclear.\nObjectives\nTo compare the relative effects on markers of quality of life, exacerbations, symptoms, lung function and serious adverse events in people with COPD randomised to LABA plus tiotropium versus tiotropium alone; or LABA plus tiotropium versus LABA alone.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register of trials and ClinicalTrials.gov up to July 2015.\nSelection criteria\nWe included parallel-group, randomised controlled trials of three months or longer comparing treatment with tiotropium in addition to LABA against tiotropium or LABA alone for people with COPD.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and then extracted data on trial quality and the outcome results. We contacted study authors for additional information. We collected information on adverse effects from the trials.\nMain results\nThis review included 10 trials on 10,894 participants, mostly recruiting participants with moderate or severe COPD. All of the trials compared tiotropium in addition to LABA to tiotropium alone, and four trials additionally compared LAMA plus LABA with LABA alone. Four studies used the LABA olodaterol, three used indacaterol, two used formoterol, and one used salmeterol.\nCompared to tiotropium alone, treatment with tiotropium plus LABA resulted in a slightly larger improvement in mean health-related quality of life (St George's Respiratory Questionnaire (SGRQ) (mean difference (MD) -1.34, 95% confidence interval (CI) -1.87 to -0.80; 6709 participants; 5 studies). The MD was smaller than the four units that is considered clinically important, but a responder analysis indicated that 7% more participants receiving tiotropium plus LABA had a noticeable benefit (greater than four units) from treatment in comparison to tiotropium alone. In the control arm in one study, which was tiotropium alone, the SGRQ improved by falling 4.5 units from baseline and with tiotropium plus LABA the improvement was a fall of a further 1.3 units (on average). Most of the data came from studies using olodaterol. High withdrawal rates in the trials increased the uncertainty in this result, and the GRADE assessment for this outcome was therefore moderate. There were no significant differences in the other primary outcomes (hospital admission or mortality).\nThe secondary outcome of pre-bronchodilator forced expiratory volume in one second (FEV1) showed a small mean increase with the addition of LABA over the control arm (MD 0.06, 95% CI 0.05 to 0.07; 9573 participants; 10 studies), which showed a change from baseline ranging from 0.03 L to 0.13 L with tiotropium alone. None of the other secondary outcomes (exacerbations, symptom scores, serious adverse events, and withdrawals) showed any statistically significant differences between the groups. 
There was moderate heterogeneity for both exacerbations and withdrawals.\nThis review included data on four LABAs: two administered twice daily (salmeterol, formoterol) and two once daily (indacaterol, olodaterol). The results were largely from studies of olodaterol and there was insufficient information to assess whether the other LABAs were equivalent to olodaterol or each other.\nComparing LABA plus tiotropium treatment with LABA alone, there was a small but significant improvement in SGRQ (MD -1.25, 95% CI -2.14 to -0.37; 3378 participants; 4 studies). The data came mostly from studies using olodaterol and, although the difference was smaller than four units, this still represented an increase of 10 people with a clinically important improvement for 100 treated. There was also an improvement in FEV1 (MD 0.07, 95% CI 0.06 to 0.09; 3513 participants; 4 studies), and in addition an improvement in exacerbation rates (odds ratio (OR) 0.80, 95% CI 0.69 to 0.93; 3514 participants; 3 studies).\nAuthors' conclusions\nThe results from this review indicated a small mean improvement in health-related quality of life and FEV1 for participants on a combination of tiotropium and LABA compared to either agent alone, and this translated into a small increase in the number of responders on combination treatment. In addition, adding tiotropium to LABA reduced exacerbations, although adding LABA to tiotropium did not. Hospital admission and mortality were not altered by adding LABA to tiotropium, although there may not be enough data. While it is possible that this is affected by higher attrition in the tiotropium group, one would expect that participants withdrawn from the study would have had less favourable outcomes; this means that the expected direction of attrition bias would be to reduce the estimated benefit of the combination treatment. The results were largely from studies of olodaterol and there was insufficient information to assess whether the other LABAs were equivalent to olodaterol or each other.\n\nGiven the provided abstract, please respond to the following query: \"Why is the question important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "These bronchodilators work in different ways and therefore might be more beneficial if used together. The purpose of this review was to determine the benefits and risks of using a combination of both types of bronchodilator compared to the individual bronchodilators."
}
]
|
query-laysum | 5482 | [
{
"role": "user",
"content": "Abstract: Background\nClinicians must balance the risks of bleeding and thrombosis after percutaneous coronary intervention (PCI) in people with an indication for anticoagulation. The potential of non-vitamin K antagonists (NOACs) to prevent bleeding complications is promising, but evidence remains limited.\nObjectives\nTo review the evidence from randomised controlled trials assessing the efficacy and safety of non-vitamin K antagonist oral anticoagulants (NOACs) compared to vitamin K antagonists post-percutaneous coronary intervention (PCI) in people with an indication for anticoagulation.\nSearch methods\nWe identified studies by searching CENTRAL, MEDLINE, Embase, the Conference Proceedings Citation Index – Science and two clinical trials registers in February 2019. We checked bibliographies of identified studies and applied no language restrictions.\nSelection criteria\nWe searched for randomised controlled trials (RCT) that compared NOACs and vitamin K antagonists for people with an indication for anticoagulation who underwent PCI.\nData collection and analysis\nTwo review authors independently checked the results of searches to identify relevant studies, assessed each included study, and extracted study data. We conducted random-effects, pairwise analyses using Review Manager 5 and network meta-analyses (NMA) using the R package 'netmeta'. We ranked competing treatments by P scores, which are derived from the P values of all pairwise comparisons, and allow ranking of treatments on a continuous 0 to 1 scale.\nMain results\nWe identified nine RCTs that met the inclusion criteria, but four were ongoing trials, and were not included in this analysis. We included five RCTs, with 8373 participants, in the NMA (two RCTs compared apixaban to a vitamin K antagonist, two RCTs compared rivaroxaban to a vitamin K antagonist, and one RCT compared dabigatran to a vitamin K antagonist).\nVery low- to moderate-certainty evidence suggests little or no difference between NOACs and vitamin K antagonists in death from cardiovascular causes (not reported in the dabigatran trial), myocardial infarction, stroke, death from any cause, and stent thrombosis. Apixaban (RR 0.85, 95% CI 0.77 to 0.95), high dose rivaroxaban (RR 0.86, 95% CI 0.74 to 1.00), and low dose rivaroxaban (RR 0.80, 95% CI 0.68 to 0.92) probably reduce the risk of recurrent hospitalisation compared with vitamin K antagonists. No studies looked at health-related quality of life.\nVery low- to moderate-certainty evidence suggests that NOACs may be safer than vitamin K antagonists in terms of bleeding. Both high dose dabigatran (RR 0.53, 95% CI 0.29 to 0.97), and low dose dabigatran (RR 0.38, 95% CI 0.21 to 0.70) may reduce major bleeding more than vitamin K antagonists. High dose dabigatran (RR 0.83, 95% CI 0.72 to 0.96), low dose dabigatran (RR 0.66, 95% CI 0.58 to 0.75), apixaban (RR 0.67, 95% Cl 0.51 to 0.88), high dose rivaroxaban (RR 0.66, 95% CI 0.52 to 0.83), and low dose rivaroxaban (RR 0.71, 95% CI 0.57 to 0.88) probably reduce non-major bleeding more than vitamin K antagonists.\nThe results from the NMA were inconclusive between the different NOACs for all primary and secondary outcomes.\nAuthors' conclusions\nVery low- to moderate-certainty evidence suggests no meaningful difference in efficacy outcomes between non-vitamin K antagonist oral anticoagulants (NOAC) and vitamin K antagonists following percutaneous coronary interventions (PCI) in people with non-valvular atrial fibrillation. 
NOACs probably reduce the risk of recurrent hospitalisation for adverse events compared with vitamin K antagonists.\nLow- to moderate-certainty evidence suggests that dabigatran may reduce the rates of major and non-major bleeding, and apixaban and rivaroxaban probably reduce the rates of non-major bleeding compared with vitamin K antagonists.\nOur network meta-analysis did not show superiority of one NOAC over another for any of the outcomes. Head to head trials, directly comparing NOACs against each other, are required to provide more certain evidence.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence ranged from Very low- to moderate-certainty, indicating the need for more research on this issue."
}
]
|
query-laysum | 5483 | [
{
"role": "user",
"content": "Abstract: Background\nImmune checkpoint inhibitors (ICIs) targeting the PD-1/PD-L1 axis have changed the first-line treatment of people with advanced non-small cell lung cancer (NSCLC). Single-agent pembrolizumab (a PD-1 inhibitor) is currently the standard of care as monotherapy in patients with PD-L1 expression ≥ 50%, either alone or in combination with chemotherapy when PD-L1 expression is less than 50%. Atezolizumab (PD-L1 inhibitor) has also been approved in combination with chemotherapy and bevacizumab (an anti-angiogenic antibody) in first-line NSCLC regardless of PD-L1 expression. The combination of first-line PD-1/PD-L1 inhibitors with anti-CTLA-4 antibodies has also been shown to improve survival compared to platinum-based chemotherapy in advanced NSCLC, particularly in people with high tumour mutational burden (TMB). The association of ipilimumab (an anti CTLA4) and nivolumab (PD-1 inhibitor) has been approved by the US Food and Drug Administration (FDA) in all patients with PD-L1 expression ≥1%. Although these antibodies are currently used in clinical practice, some questions remain unanswered, such as the best-treatment strategy, the role of different biomarkers for treatment selection and the effectiveness of immunotherapy according to specific clinical characteristics.\nObjectives\nTo determine the effectiveness and safety of first-line immune checkpoint inhibitors (ICIs), as monotherapy or in combination, compared to platinum-based chemotherapy, with or without bevacizumab for people with advanced NSCLC, according to the level of PD-L1 expression.\nSearch methods\nWe performed an electronic search of the main databases (Cochrane Central Register of Controlled Trials, MEDLINE, Embase) from inception until 31 December 2020 and conferences meetings from 2015 onwards.\nSelection criteria\nWe included randomised controlled trials (RCTs) reporting on the efficacy or safety of first-line ICI treatment for adults with advanced NSCLC who had not previously received any anticancer treatment. We included trials comparing single- or double-ICI treatment to standard first-line therapy (platinum-based chemotherapy +/- bevacizumab). All data come from ‘international multicentre studies involving adults, age 18 or over, with histologically-confirmed stage IV NSCLC.\nData collection and analysis\nThree review authors independently assessed the search results and a fourth review author resolved any disagreements. Primary outcomes were overall survival (OS) and progression-free survival (PFS); secondary outcomes were overall objective response rate (ORR) by RECIST v 1.1, grade 3 to 5 treatment-related adverse events (AEs) (CTCAE v 5.0) and health-related quality of life (HRQoL). We performed meta-analyses where appropriate using the random-effects model for hazard ratios (HRs) or risk ratios (RRs), with 95% confidence intervals (95% CIs), and used the I² statistic to investigate heterogeneity.\nMain results\nMain results\nWe identified 15 trials for inclusion, seven completed and eight ongoing trials. We obtained data for 5893 participants from seven trials comparing first-line single- (six trials) or double- (two trials) agent ICI with platinum-based chemotherapy, one trial comparing both first-line single- and double-agent ICsI with platinum-based chemotherapy. All trials were at low risk of selection and detection bias, some were classified at high risk of performance, attrition or other source of bias. 
The overall certainty of evidence according to GRADE ranged from moderate to low because of risk of bias, inconsistency, or imprecision. The majority of the included trials reported their outcomes by PD-L1 expression, with PD-L1 ≥ 50 being considered the most clinically useful cut-off level for decision makers. Also, in order to avoid overlaps between the various PD-L1 expression levels, we prioritised the review outcomes according to PD-L1 ≥ 50.\nSingle-agent ICI\nIn the PD-L1 expression ≥ 50% group single-agent ICI probably improved OS compared to platinum-based chemotherapy (hazard ratio (HR) 0.68, 95% confidence interval (CI) 0.60 to 0.76, 6 RCTs, 2111 participants, moderate-certainty evidence). In this group, single-agent ICI also may improve PFS (HR: 0.68, 95% CI 0.52 to 0.88, 5 RCTs, 1886 participants, low-certainty evidence) and ORR (risk ratio (RR): 1.40, 95% CI 1.12 to 1.75, 4 RCTs, 1672 participants, low-certainty evidence). HRQoL data were available for only one study including only people with PD-L1 expression ≥ 50%, which suggested that single-agent ICI may improve HRQoL at 15 weeks compared to platinum-based chemotherapy (RR: 1.51, 95% CI 1.08 to 2.10, 1 RCT, 297 participants, low-certainty evidence).\nIn the included studies, treatment-related AEs were not reported according to PD-L1 expression levels. Grade 3-4 AEs may be less frequent with single-agent ICI compared to platinum-based chemotherapy (RR: 0.41, 95% CI 0.33 to 0.50, I² = 62%, 5 RCTs, 3346 participants, low-certainty evidence).\nMore information about efficacy of single-agent ICI compared to platinum-based chemotherapy according to the level of PD-L1 expression and to TMB status or specific clinical characteristics is available in the full text.\nDouble-agent ICI\nDouble-ICI treatment probably prolonged OS compared to platinum-based chemotherapy in people with PD-L1 expression ≥50% (HR: 0.72, 95% CI 0.59 to 0.89, 2 RCTs, 612 participants, moderate-certainty evidence).\nTrials did not report data on HRQoL, PFS and ORR according to PD-L1 groups.\nTreatment-related AEs were not reported according to PD-L1 expression levels. The frequency of grade 3-4 AEs may not differ between double-ICI treatment and platinum-based chemotherapy (RR: 0.78, 95% CI 0.55 to 1.09, I² = 81%, 2 RCTs, 1869 participants, low-certainty evidence).\nMore information about efficacy of double-agent ICI according to the level of PD-L1 expression and to TMB status is available in the full text.\nAuthors' conclusions\nThe evidence in this review suggests that single-agent ICI in people with NSCLC and PD-L1 ≥50% probably leads to a higher overall survival rate and may lead to a higher progression-free survival and overall response rate when compared to platinum-based chemotherapy and may also lead to a lower rate of adverse events and higher HRQoL. Combined ICI in people with NSCLC and PD-L1 ≥50% also probably leads to a higher overall survival rate when compared to platinum-based chemotherapy, but its effect on progression-free survival, overall response rate and HRQoL is unknown due to a lack of data. The rate of adverse events may not differ between groups.\nThis review used to be a living review. It has been transitioned out of living mode because current research is exploring ICI in association with chemotherapy or other immunotherapeutic drugs versus ICI as a single agent rather than platinum-based chemotherapy.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For people with advanced non-small cell lung cancer with a high expression of PD-L1 protein, immunotherapies alone or combinations of immunotherapies prolonged life compared to chemotherapy. The frequency of side effects may be lower with the use of immunotherapies alone compared to chemotherapy. The frequency of side effects may not differ between combinations of immunotherapies and chemotherapy.\n"
}
]
|
query-laysum | 5484 | [
{
"role": "user",
"content": "Abstract: Background\nNicotine replacement therapy (NRT) aims to replace nicotine from cigarettes. This helps to reduce cravings and withdrawal symptoms, and ease the transition from cigarette smoking to complete abstinence. Although there is high-certainty evidence that NRT is effective for achieving long-term smoking abstinence, it is unclear whether different forms, doses, durations of treatment or timing of use impacts its effects.\nObjectives\nTo determine the effectiveness and safety of different forms, deliveries, doses, durations and schedules of NRT, for achieving long-term smoking cessation.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group trials register for papers mentioning NRT in the title, abstract or keywords, most recently in April 2022.\nSelection criteria\nWe included randomised trials in people motivated to quit, comparing one type of NRT use with another. We excluded studies that did not assess cessation as an outcome, with follow-up of fewer than six months, and with additional intervention components not matched between arms. Separate reviews cover studies comparing NRT to control, or to other pharmacotherapies.\nData collection and analysis\nWe followed standard Cochrane methods. We measured smoking abstinence after at least six months, using the most rigorous definition available. We extracted data on cardiac adverse events (AEs), serious adverse events (SAEs) and study withdrawals due to treatment.\nMain results\nWe identified 68 completed studies with 43,327 participants, five of which are new to this update. Most completed studies recruited adults either from the community or from healthcare clinics. We judged 28 of the 68 studies to be at high risk of bias. Restricting the analysis only to those studies at low or unclear risk of bias did not significantly alter results for any comparisons apart from the preloading comparison, which tested the effect of using NRT prior to quit day whilst still smoking.\nThere is high-certainty evidence that combination NRT (fast-acting form plus patch) results in higher long-term quit rates than single form (risk ratio (RR) 1.27, 95% confidence interval (CI) 1.17 to 1.37; I2 = 12%; 16 studies, 12,169 participants). Moderate-certainty evidence, limited by imprecision, indicates that 42/44 mg patches are as effective as 21/22 mg (24-hour) patches (RR 1.09, 95% CI 0.93 to 1.29; I2 = 38%; 5 studies, 1655 participants), and that 21 mg patches are more effective than 14 mg (24-hour) patches (RR 1.48, 95% CI 1.06 to 2.08; 1 study, 537 participants). Moderate-certainty evidence, again limited by imprecision, also suggests a benefit of 25 mg over 15 mg (16-hour) patches, but the lower limit of the CI encompassed no difference (RR 1.19, 95% CI 1.00 to 1.41; I2 = 0%; 3 studies, 3446 participants).\nNine studies tested the effect of using NRT prior to quit day (preloading) in comparison to using it from quit day onward. 
There was moderate-certainty evidence, limited by risk of bias, of a favourable effect of preloading on abstinence (RR 1.25, 95% CI 1.08 to 1.44; I2 = 0%; 9 studies, 4395 participants).\nHigh-certainty evidence from eight studies suggests that using either a form of fast-acting NRT or a nicotine patch results in similar long-term quit rates (RR 0.90, 95% CI 0.77 to 1.05; I2 = 0%; 8 studies, 3319 participants).\nWe found no clear evidence of an effect of duration of nicotine patch use (low-certainty evidence); duration of combination NRT use (low- and very low-certainty evidence); or fast-acting NRT type (very low-certainty evidence).\nCardiac AEs, SAEs and withdrawals due to treatment were all measured variably and infrequently across studies, resulting in low- or very low-certainty evidence for all comparisons. Most comparisons found no clear evidence of an effect on these outcomes, and rates were low overall. More withdrawals due to treatment were reported in people using nasal spray compared to patches in one study (RR 3.47, 95% CI 1.15 to 10.46; 1 study, 922 participants; very low-certainty evidence) and in people using 42/44 mg patches in comparison to 21/22 mg patches across two studies (RR 4.99, 95% CI 1.60 to 15.50; I2 = 0%; 2 studies, 544 participants; low-certainty evidence).\nAuthors' conclusions\nThere is high-certainty evidence that using combination NRT versus single-form NRT and 4 mg versus 2 mg nicotine gum can result in an increase in the chances of successfully stopping smoking. Due to imprecision, evidence was of moderate certainty for patch dose comparisons. There is some indication that the lower-dose nicotine patches and gum may be less effective than higher-dose products. Using a fast-acting form of NRT, such as gum or lozenge, resulted in similar quit rates to nicotine patches. There is moderate-certainty evidence that using NRT before quitting may improve quit rates versus using it from quit date only; however, further research is needed to ensure the robustness of this finding. Evidence for the comparative safety and tolerability of different types of NRT use is limited. New studies should ensure that AEs, SAEs and withdrawals due to treatment are reported.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Using a combination of nicotine patches together with another type of nicotine replacement therapy (NRT) (such as gum or lozenge) is more likely to help people quit smoking than if they used one type of NRT alone. We also found that people who smoke have the same chance of quitting successfully whether they use a nicotine patch or another type of NRT, such as gum, lozenge or nasal spray.\nMore high-quality studies on different NRT patch doses, durations of NRT use, types of fast-acting NRT, and NRT use prior to quit day are needed to know which treatments work best to help people quit smoking. These studies should report safety outcomes and withdrawals due to treatment."
}
]
|
query-laysum | 5485 | [
{
"role": "user",
"content": "Abstract: Background\nXpert MTB/RIF and Xpert MTB/RIF Ultra (Xpert Ultra) are World Health Organization (WHO)-recommended rapid tests that simultaneously detect tuberculosis and rifampicin resistance in people with signs and symptoms of tuberculosis. This review builds on our recent extensive Cochrane Review of Xpert MTB/RIF accuracy.\nObjectives\nTo compare the diagnostic accuracy of Xpert Ultra and Xpert MTB/RIF for the detection of pulmonary tuberculosis and detection of rifampicin resistance in adults with presumptive pulmonary tuberculosis. For pulmonary tuberculosis and rifampicin resistance, we also investigated potential sources of heterogeneity.\nWe also summarized the frequency of Xpert Ultra trace-positive results, and estimated the accuracy of Xpert Ultra after repeat testing in those with trace-positive results.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, MEDLINE, Embase, Science Citation Index, Web of Science, LILACS, Scopus, the WHO ICTRP, the ISRCTN registry, and ProQuest to 28 January 2020 with no language restriction.\nSelection criteria\nWe included diagnostic accuracy studies using respiratory specimens in adults with presumptive pulmonary tuberculosis that directly compared the index tests. For pulmonary tuberculosis detection, the reference standards were culture and a composite reference standard. For rifampicin resistance, the reference standards were culture-based drug susceptibility testing and line probe assays.\nData collection and analysis\nTwo review authors independently extracted data using a standardized form, including data by smear and HIV status. We assessed risk of bias using QUADAS-2 and QUADAS-C. We performed meta-analyses comparing pooled sensitivities and specificities, separately for pulmonary tuberculosis detection and rifampicin resistance detection, and separately by reference standard. Most analyses used a bivariate random-effects model. For tuberculosis detection, we estimated accuracy in studies in participants who were not selected based on prior microscopy testing or history of tuberculosis. We performed subgroup analyses by smear status, HIV status, and history of tuberculosis. We summarized Xpert Ultra trace results.\nMain results\nWe identified nine studies (3500 participants): seven had unselected participants (2834 participants). All compared Xpert Ultra and Xpert MTB/RIF for pulmonary tuberculosis detection; seven studies used a paired comparative accuracy design, and two studies used a randomized design. Five studies compared Xpert Ultra and Xpert MTB/RIF for rifampicin resistance detection; four studies used a paired design, and one study used a randomized design. Of the nine included studies, seven (78%) were mainly or exclusively in high tuberculosis burden countries. For pulmonary tuberculosis detection, most studies had low risk of bias in all domains.\nPulmonary tuberculosis detection\nXpert Ultra pooled sensitivity and specificity (95% credible interval) against culture were 90.9% (86.2 to 94.7) and 95.6% (93.0 to 97.4) (7 studies, 2834 participants; high-certainty evidence) versus Xpert MTB/RIF pooled sensitivity and specificity of 84.7% (78.6 to 89.9) and 98.4% (97.0 to 99.3) (7 studies, 2835 participants; high-certainty evidence). The difference in the accuracy of Xpert Ultra minus Xpert MTB/RIF was estimated at 6.3% (0.1 to 12.8) for sensitivity and −2.7% (−5.7 to −0.5) for specificity. 
If the point estimates for Xpert Ultra and Xpert MTB/RIF are applied to a hypothetical cohort of 1000 patients, where 10% of those presenting with symptoms have pulmonary tuberculosis, Xpert Ultra will miss 9 cases, and Xpert MTB/RIF will miss 15 cases. The number of people wrongly diagnosed with pulmonary tuberculosis would be 40 with Xpert Ultra and 14 with Xpert MTB/RIF.\nIn smear-negative, culture-positive participants, pooled sensitivity was 77.5% (67.6 to 85.6) for Xpert Ultra versus 60.6% (48.4 to 71.7) for Xpert MTB/RIF; pooled specificity was 95.8% (92.9 to 97.7) for Xpert Ultra versus 98.8% (97.7 to 99.5) for Xpert MTB/RIF (6 studies).\nIn people living with HIV, pooled sensitivity was 87.6% (75.4 to 94.1) for Xpert Ultra versus 74.9% (58.7 to 86.2) for Xpert MTB/RIF; pooled specificity was 92.8% (82.3 to 97.0) for Xpert Ultra versus 99.7% (98.6 to 100.0) for Xpert MTB/RIF (3 studies).\nIn participants with a history of tuberculosis, pooled sensitivity was 84.2% (72.5 to 91.7) for Xpert Ultra versus 81.8% (68.7 to 90.0) for Xpert MTB/RIF; pooled specificity was 88.2% (70.5 to 96.6) for Xpert Ultra versus 97.4% (91.7 to 99.5) for Xpert MTB/RIF (4 studies).\nThe proportion of Ultra trace-positive results ranged from 3.0% to 30.4%. Data were insufficient to estimate the accuracy of Xpert Ultra repeat testing in individuals with initial trace-positive results.\nRifampicin resistance detection\nPooled sensitivity and specificity were 94.9% (88.9 to 97.9) and 99.1% (97.7 to 99.8) (5 studies, 921 participants; high-certainty evidence) for Xpert Ultra versus 95.3% (90.0 to 98.1) and 98.8% (97.2 to 99.6) (5 studies, 930 participants; high-certainty evidence) for Xpert MTB/RIF. The difference in the accuracy of Xpert Ultra minus Xpert MTB/RIF was estimated at −0.3% (−6.9 to 5.7) for sensitivity and 0.3% (−1.2 to 2.0) for specificity. If the point estimates for Xpert Ultra and Xpert MTB/RIF are applied to a hypothetical cohort of 1000 patients, where 10% of those presenting with symptoms have rifampicin resistance, Xpert Ultra will miss 5 cases, and Xpert MTB/RIF will miss 5 cases. The number of people wrongly diagnosed with rifampicin resistance would be 8 with Xpert Ultra and 11 with Xpert MTB/RIF.\nWe identified a higher number of rifampicin resistance indeterminate results with Xpert Ultra, pooled proportion 7.6% (2.4 to 21.0) compared to Xpert MTB/RIF pooled proportion 0.8% (0.2 to 2.4). The estimated difference in the pooled proportion of indeterminate rifampicin resistance results for Xpert Ultra versus Xpert MTB/RIF was 6.7% (1.4 to 20.1).\nAuthors' conclusions\nXpert Ultra has higher sensitivity and lower specificity than Xpert MTB/RIF for pulmonary tuberculosis, especially in smear-negative participants and people living with HIV. Xpert Ultra specificity was lower than that of Xpert MTB/RIF in participants with a history of tuberculosis. The sensitivity and specificity trade-off would be expected to vary by setting. For detection of rifampicin resistance, Xpert Ultra and Xpert MTB/RIF had similar sensitivity and specificity. Ultra trace-positive results were common.\nXpert Ultra and Xpert MTB/RIF provide accurate results and can allow rapid initiation of treatment for rifampicin-resistant and multidrug-resistant tuberculosis.\n\nGiven the provided abstract, please respond to the following query: \"How confident are we in the results of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Confident. The review included sufficient studies and participants and used optimum reference standards. In the comparison between Xpert Ultra and Xpert MTB/RIF, most studies were at low risk of bias."
}
]
|
query-laysum | 5486 | [
{
"role": "user",
"content": "Abstract: Background\nCataract is the leading cause of blindness in low- and middle-income countries (LMICs), and the prevalence is inequitably distributed between and within countries. Interventions have been undertaken to improve cataract surgical services, however, the effectiveness of these interventions on promoting equity is not known.\nObjectives\nTo assess the effects on equity of interventions to improve access to cataract services for populations with cataract blindness (and visual impairment) in LMICs.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 3), MEDLINE Ovid (1946 to 12 April 2017), Embase Ovid (1980 to 12 April 2017), LILACS (Latin American and Caribbean Health Sciences Literature Database) (1982 to 12 April 2017), the ISRCTN registry (www.isrctn.com/editAdvancedSearch); searched 12 April 2017, ClinicalTrials.gov (www.clinicaltrials.gov); searched 12 April 2017 and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en); searched 12 April 2017. We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe included studies that reported on strategies to improve access to cataract services in LMICs using the following study designs: randomised and quasi-randomised controlled trials (RCTs), controlled before-and-after studies, and interrupted time series studies. Included studies were conducted in LMICs, and were targeted at disadvantaged populations, or disaggregated outcome data by 'PROGRESS-Plus' factors (Place of residence; Race/ethnicity/ culture/ language; Occupation; Gender/sex; Religion; Education; Socio-economic status; Social capital/networks. The 'Plus' component includes disability, sexual orientation and age).\nData collection and analysis\nTwo authors (JR and JP) independently selected studies, extracted data and assessed them for risk of bias. Meta-analysis was not possible, so included studies were synthesised in table and text.\nMain results\nFrom a total of 2865 studies identified in the search, two met our eligibility criteria, both of which were cluster-RCTs conducted in rural China. The way in which the trials were conducted means that the risk of bias is unclear. In both studies, villages were randomised to be either an intervention or control group. Adults identified with vision-impairing cataract, following village-based vision and eye health assessment, either received an intervention to increase uptake of cataract surgery (if their village was an intervention group), or to receive 'standard care' (if their village was a control group).\nOne study (n = 434), randomly allocated 26 villages or townships to the intervention, which involved watching an informational video and receiving counselling about cataract and cataract surgery, while the control group were advised that they had decreased vision due to cataract and it could be treated, without being shown the video or receiving counselling. There was low-certainty evidence that providing information and counselling had no effect on uptake of referral to the hospital (OR 1.03, 95% CI 0.63 to 1.67, 1 RCT, 434 participants) and little or no effect on the uptake of surgery (OR 1.11, 95% CI 0.67 to 1.84, 1 RCT, 434 participants). 
We assessed the level of evidence to be of low-certainty for both outcomes, due to indirectness of evidence and imprecision of results.\nThe other study (n = 355, 24 towns randomised) included three intervention arms: free surgery; free surgery plus reimbursement of transport costs; and free surgery plus free transport to and from the hospital. These were compared to the control group, which was reminded to use the \"low-cost\" (˜USD 38) surgical service. There was low-certainty evidence that surgical fee waiver with/without transport provision or reimbursement increased uptake of surgery (RR 1.94, 95% CI 1.14 to 3.31, 1 RCT, 355 participants). We assessed the level of evidence to be of low-certainty due to indirectness of evidence and imprecision of results.\nNeither of the studies reported our primary outcome of change in prevalence of cataract blindness, or other outcomes such as cataract surgical coverage, surgical outcome, or adverse effects. Neither study disaggregated outcomes by social subgroups to enable further assessment of equity effects. We sought data from both studies and obtained data from one; the information video and counselling intervention did not have a differential effect across the PROGRESS-Plus categories with available data (place of residence, gender, education level, socioeconomic status and social capital).\nAuthors' conclusions\nCurrent evidence on the effect on equity of interventions to improve access to cataract services in LMICs is limited. We identified only two studies, both conducted in rural China. Assessment of equity effects will be improved if future studies disaggregate outcomes by relevant social subgroups. To assist with assessing generalisability of findings to other settings, robust data on contextual factors are also needed.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The Cochrane researchers searched for studies that had been published up to 12 April 2017."
}
]
|
query-laysum | 5487 | [
{
"role": "user",
"content": "Abstract: Background\nMinistries of health, donors, and other decision-makers are exploring how they can use mobile technologies to acquire accurate and timely statistics on births and deaths. These stakeholders have called for evidence-based guidance on this topic. This review was carried out to support World Health Organization (WHO) recommendations on digital interventions for health system strengthening.\nObjectives\nPrimary objective: To assess the effects of birth notification and death notification via a mobile device, compared to standard practice.\nSecondary objectives: To describe the range of strategies used to implement birth and death notification via mobile devices and identify factors influencing the implementation of birth and death notification via mobile devices.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, the Global Health Library, and POPLINE (August 2, 2019). We searched two trial registries (August 2, 2019). We also searched Epistemonikos for related systematic reviews and potentially eligible primary studies (August 27, 2019). We conducted a grey literature search using mHealthevidence.org (August 15, 2017) and issued a call for papers through popular digital health communities of practice. Finally, we conducted citation searches of included studies in Web of Science and Google Scholar (May 15, 2020). We searched for studies published after 2000 in any language.\nSelection criteria\nFor the primary objective, we included individual and cluster-randomised trials; cross-over and stepped-wedge study designs; controlled before-after studies, provided they have at least two intervention sites and two control sites; and interrupted time series studies. For the secondary objectives, we included any study design, either quantitative, qualitative, or descriptive, that aimed to describe current strategies for birth and death notification via mobile devices; or to explore factors that influence the implementation of these strategies, including studies of acceptability or feasibility.\nFor the primary objective, we included studies that compared birth and death notification via mobile devices with standard practice. For the secondary objectives, we included studies of birth and death notification via mobile device as long as we could extract data relevant to our secondary objectives.\nWe included studies of all cadres of healthcare providers, including lay health workers; administrative, managerial, and supervisory staff; focal individuals at the village or community level; children whose births were being notified and their parents/caregivers; and individuals whose deaths were being notified and their relatives/caregivers.\nData collection and analysis\nFor the primary objective, two authors independently screened all records, extracted data from the included studies and assessed risk of bias. For the analyses of the primary objective, we reported means and proportions, where appropriate. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the certainty of the evidence and we prepared a 'Summary of Findings' table.\nFor the secondary objectives, two authors screened all records, one author extracted data from the included studies and assessed methodological limitations using the WEIRD tool and a second author checked the data and assessments. We carried out a framework analysis using the Supporting the Use of Research Evidence (SURE) framework to identify themes in the data. 
We used the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach to assess our confidence in the evidence and we prepared a 'Summary of Qualitative Findings' table.\nMain results\nFor the primary objective, we included one study, which used a controlled before-after study design. The study was conducted in Lao People’s Democratic Republic and assessed the effect of using mobile devices for birth notification on outcomes related to coverage and timeliness of Hepatitis B vaccination. However, we are uncertain of the effect of this approach on these outcomes because the certainty of this evidence was assessed as very low. The included study did not assess resource use or unintended consequences. For the primary objective, we did not identify any studies using mobile devices for death notification.\nFor the secondary objective, we included 21 studies. All studies were conducted in low- or middle-income settings. They focussed on identification of births and deaths in rural, remote, or marginalised populations who are typically under-represented in civil registration processes or traditionally seen as having poor access to health services.\nThe review identified several factors that could influence the implementation of birth-death notification via mobile device. These factors were tied to the health system, the person responsible for notifying, the community and families; and include:\n- Geographic barriers that could prevent people’s access to birth-death notification and post-notification services\n- Access to health workers and other notifiers with enough training, supervision, support, and incentives\n- Monitoring systems that ensure the quality and timeliness of the birth and death data\n- Legal frameworks that allow births and deaths to be notified by mobile device and by different types of notifiers\n- Community awareness of the need to register births and deaths\n- Socio-cultural norms around birth and death\n- Government commitment\n- Cost to the system, to health workers and to families\n- Access to electricity and network connectivity, and compatibility with existing systems\n- Systems that protect data confidentiality\nWe have low to moderate confidence in these findings. This was mainly because of concerns about methodological limitations and data adequacy.\nAuthors' conclusions\nWe need more, well-designed studies of the effect of birth and death notification via mobile devices and on factors that may influence its implementation.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We know very little about the effects of using mobile devices to notify births and deaths. Factors that can influence the implementation of this approach include factors tied to the health system and the notification system, the person responsible for notifying, the community, and the families involved."
}
]
|
query-laysum | 5488 | [
{
"role": "user",
"content": "Abstract: Background\nHyperbilirubinaemia occurs in approximately two-thirds of all newborns during the first days of life and is frequently treated with phototherapy. Although generally seen as safe, there is rising concern regarding phototherapy and its potentially damaging effects on DNA and increased side effects particularly for preterm infants. Other methods, such as enteral feeding supplementation with prebiotics, may have an effective use in the management of hyperbilirubinaemia in neonates.\nObjectives\nTo determine whether administration of prebiotics reduces the incidence of hyperbilirubinaemia among term and preterm infants compared with enteral supplementation of milk with distilled water/placebo or no supplementation.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2018, Issue 5), MEDLINE via PubMed (1966 to 14 June 2018), Embase (1980 to 14 June 2018), and CINAHL (1982 to 14 June 2018). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials (RCTs) and quasi-randomised trials.\nSelection criteria\nWe considered all RCTs that studied neonates comparing enteral feeding supplementation with prebiotics versus distilled water/placebo or no supplementation.\nData collection and analysis\nTwo reviewers screened papers and extracted data from selected papers. We used a fixed-effect method in combining the effects of studies that were sufficiently similar. We then used the GRADE approach to assess the quality of the evidence.\nMain results\nThree small studies evaluating 154 infants were included in this review. One study reported a significant reduction in the risk of hyperbilirubinaemia and rate of treatment with phototherapy associated with enteral supplementation with prebiotics (risk ratio (RR) 0.75, 95% confidence interval (95% CI) 0.58 to 0.97; one study, 50 infants; low-quality evidence). Meta-analyses of two studies showed no significant difference in maximum plasma unconjugated bilirubin levels in infants with prebiotic supplementation (mean difference (MD) 0.14 mg/dL, 95% CI -0.91 to 1.20, I² = 81%, P = 0.79; two studies, 78 infants; low-quality evidence). There was no evidence of a significant difference in duration of phototherapy between the prebiotic and control groups, which was only reported by one study (MD 0.10 days, 95% CI -2.00 to 2.20; one study, 50 infants; low-quality evidence). The meta-analyses of two studies demonstrated a significant reduction in the length of hospital stay (MD -10.57 days, 95% CI -17.81 to -3.33; 2 studies, 78 infants; I² = 0%, P = 0.004; low-quality evidence). Meta-analysis of the three studies showed a significant increase in stool frequency in the prebiotic groups (MD 1.18, 95% CI 0.90 to 1.46, I² = 90%; 3 studies, 154 infants; high-quality evidence). No significant difference in mortality during hospital stay after enteral supplementation with prebiotics was reported (typical RR 0.94, 95% CI 0.14 to 6.19; I² = 6%, P = 0.95; 2 studies; 78 infants; low-quality evidence). There were no reports of the need for exchange transfusion and incidence of acute bilirubin encephalopathy, chronic bilirubin encephalopathy, and major neurodevelopmental disability in the included studies. 
None of the included studies reported any side effects.\nAuthors' conclusions\nCurrent studies are unable to provide reliable evidence about the effectiveness of prebiotics on hyperbilirubinaemia. Additional large, well-designed RCTs should be undertaken in neonates that compare effects of enteral supplementation with prebiotics on neonatal hyperbilirubinaemia with supplementation of milk with any other placebo (particularly distilled water) or no supplementation.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Jaundice occurs in approximately two-thirds of all newborns during the first days of life. Phototherapy is commonly used for management of neonatal jaundice. Phototherapy does not seem to cause serious side effects, but recently some clinical trials have raised concerns based on animal or cell culture studies regarding its potential to damage DNA. Therefore, other types of treatment for neonatal jaundice are being considered. Some evidence suggests that enteral feeding supplementation with prebiotics reduces jaundice among neonates."
}
]
|
query-laysum | 5489 | [
{
"role": "user",
"content": "Abstract: Background\nCongenital talipes equinovarus (CTEV), also known as clubfoot, is a common congenital orthopaedic condition characterised by an excessively turned-in foot (equinovarus) and high medial longitudinal arch (cavus). If left untreated it can result in long-term disability, deformity and pain. Interventions can be conservative (such as splinting or stretching) or surgical. Different treatments might be effective at different stages: at birth (initial presentation); when initial treatment does not work (resistant presentation); when the initial treatment works but the clubfoot returns (relapse/recurrent presentation); and when there has been no early treatment (neglected presentation). This is an update of a review first published in 2010 and last updated in 2014.\nObjectives\nTo assess the effects of any intervention for any type of CTEV in people of any age.\nSearch methods\nOn 28 May 2019, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL Plus, AMED and Physiotherapy Evidence Database. We also searched for ongoing trials in the WHO International Clinical Trials Registry Platform and ClinicalTrials.gov (to May 2019). We checked the references of included studies.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs evaluating interventions for CTEV, including interventions compared to other interventions, sham intervention or no intervention. Participants were people of all ages with CTEV of either one or both feet.\nData collection and analysis\nTwo review authors independently assessed the risks of bias in included trials and extracted the data. We contacted authors of included trials for missing information. We collected adverse event information from trials when it was available. When required we attempted to obtain individual patient data (IPD) from trial authors for re-analysis. If unit-of-analysis issues were present and IPD unavailable we did not report summary data,\nMain results\nWe identified 21 trials with 905 participants; seven trials were newly included for this update. Fourteen trials assessed initial cases of CTEV (560 participants), four trials assessed resistant cases (181 participants) and three trials assessed cases of unknown timing (153 participants). The use of different outcome measures prevented pooling of data for meta-analysis, even when interventions and participants were comparable. All trials displayed high or unclear risks of bias in three or more domains. Twenty trials provided data. Two trials reported on the primary outcome of function using a validated scale, but the data were not suitable for inclusion because of unit-of-analysis issues, as raw data were not available for re-analysis.\nWe were able to analyse data on foot alignment (Pirani score), a secondary outcome, from three trials in participants at initial presentation. The Pirani score is a scale ranging from zero to six, where a higher score indicates a more severe foot. At initial presentation, one trial reported that the Ponseti technique significantly improved foot alignment compared to the Kite technique. After 10 weeks of serial casting, the average total Pirani score of the Ponseti group was 1.15 points lower than that of the Kite group (mean difference (MD) −1.15, 95% confidence interval (CI) −1.32 to −0.98; 60 feet; low-certainty evidence). 
A second trial found the Ponseti technique to be superior to a traditional technique, with mean total Pirani scores of the Ponseti participants 1.50 points lower than after serial casting and Achilles tenotomy (MD −1.50, 95% CI −2.28 to −0.72; 28 participants; very low-certainty evidence). One trial found evidence that there may be no difference between casting materials in the Ponseti technique, with semi-rigid fibreglass producing average total Pirani scores 0.46 points higher than plaster of Paris at the end of serial casting (95% CI −0.07 to 0.99; 30 participants; low-certainty evidence).\nWe found no trials in relapsed or neglected cases of CTEV.\nA trial in which the type of presentation was not reported showed no evidence of a difference between an accelerated Ponseti and a standard Ponseti treatment in foot alignment. At the end of serial casting, the average total Pirani score in the accelerated group was 0.31 points higher than the standard group (95% CI −0.40 to 1.02; 40 participants; low-certainty evidence).\nNo trial assessed gait using a validated assessment. Health-related quality of life was reported in some trials but data were not available for re-analysis.\nThere is a lack of evidence for the addition of botulinum toxin A during the Ponseti technique, different types of major foot surgery or continuous passive motion treatment following major foot surgery. Most trials did not report on adverse events. Two trials found that further serial casting was more likely to correct relapse after Ponseti treatment than after the Kite technique, which more often required major surgery (risk differences 25% and 50%). In trials evaluating serial casting techniques, adverse events included cast slippage (needing replacement), plaster sores (pressure areas), and skin irritation. Adverse events following surgical procedures included infection and the need for skin grafting.\nAuthors' conclusions\nFrom the evidence available, the Ponseti technique may produce significantly better short-term foot alignment compared to the Kite technique. The certainty of evidence is too low for us to draw conclusions about the Ponseti technique compared to a traditional technique. An accelerated Ponseti technique may be as effective as a standard technique, but results are based on a single small comparative trial. When using the Ponseti technique semi-rigid fibreglass casting may be as effective as plaster of Paris. Relapse following the Kite technique more often led to major surgery compared to relapse following the Ponseti technique. We could draw no conclusions from other included trials because of the limited use of validated outcome measures and the unavailability of raw data. Future RCTs should address these issues.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The purpose of this review was to assess treatments for congenital talipes equinovarus (clubfoot)."
}
]
|
query-laysum | 5490 | [
{
"role": "user",
"content": "Abstract: Background\nMass media tobacco control campaigns can reach large numbers of people. Much of the literature is focused on the effects of tobacco control advertising on young people, but there are also a number of evaluations of campaigns targeting adult smokers, which show mixed results. Campaigns may be local, regional or national, and may be combined with other components of a comprehensive tobacco control policy.\nObjectives\nTo assess the effectiveness of mass media interventions in reducing smoking among adults.\nSearch methods\nThe Cochrane Tobacco Addiction Group search strategy was combined with additional searches for any studies that referred to tobacco/smoking cessation, mass media and adults. We also searched the Cochrane Central Register of Controlled Trials (CENTRAL) and a number of electronic databases. The last search was carried out in November 2016.\nSelection criteria\nControlled trials allocating communities, regions or states to intervention or control conditions; interrupted time series.\nAdults, 25 years or older, who regularly smoke cigarettes. Studies which cover all adults as defined in studies were included.\nMass media are defined here as channels of communication such as television, radio, newspapers, billboards, posters, leaflets or booklets intended to reach large numbers of people, and which are not dependent on person-to-person contact. The purpose of the mass media campaign must be primarily to encourage smokers to quit. They could be carried out alone or in conjunction with tobacco control programmes.\nThe primary outcome was change in smoking behaviour. This could be reported as changes in prevalence, changes in cigarette consumption, quit rates, or odds of being a smoker.\nData collection and analysis\nTwo authors independently assessed all studies for inclusion criteria and for study quality (MB, LS, RTM). One author (MB) extracted data, and a second author (LS) checked them.\nResults were not pooled due to heterogeneity of the included studies and are presented narratively and in table form.\nMain results\nEleven campaigns met the inclusion criteria for this review. Studies differed in design, settings, duration, content and intensity of intervention, length of follow-up, methods of evaluation and also in definitions and measures of smoking behaviour used. Among seven campaigns reporting smoking prevalence, significant decreases were observed in the California and Massachusetts statewide tobacco control campaigns compared with the rest of the USA. Some positive effects on prevalence in the whole population or in the subgroups were observed in three of the remaining seven studies. Three large-scale campaigns of the seven presenting results for tobacco consumption found statistically significant decreases. Among the eight studies presenting abstinence or quit rates, four showed some positive effect, although in one of them the effect was measured for quitting and cutting down combined. Among the three that did not show significant decreases, one demonstrated a significant intervention effect on smokers and ex-smokers combined.\nAuthors' conclusions\nThere is evidence that comprehensive tobacco control programmes which include mass media campaigns can be effective in changing smoking behaviour in adults, but the evidence comes from a heterogeneous group of studies of variable methodological quality. One state-wide tobacco control programme (Massachusetts) showed positive results up to eight years after the campaign. 
Another (California) showed positive results during the period of adequate funding and implementation and in final evaluation since the beginning of the programme. Six of nine studies carried out in communities or regions showed some positive effects on smoking behaviour and at least one significant change in smoking prevalence (Sydney). The intensity and duration of mass media campaigns may influence effectiveness, but length of follow-up and concurrent secular trends and events can make this difficult to quantify. No consistent relationship was observed between campaign effectiveness and age, education, ethnicity or gender.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Mass media programmes involve communication through television, radio, newspapers, billboards, posters, leaflets or booklets, with the aims of encouraging smokers to quit, and maintaining abstinence in non-smokers. It is difficult to establish their role in this process and they require strict methodological design."
}
]
|
query-laysum | 5491 | [
{
"role": "user",
"content": "Abstract: Background\nIntensive care unit (ICU) acquired or generalised weakness due to critical illness myopathy (CIM) and polyneuropathy (CIP) are major causes of chronically impaired motor function that can affect activities of daily living and quality of life. Physical rehabilitation of those affected might help to improve activities of daily living.\nObjectives\nOur primary objective was to assess the effects of physical rehabilitation therapies and interventions for people with CIP and CIM in improving activities of daily living such as walking, bathing, dressing and eating. Secondary objectives were to assess effects on muscle strength and quality of life, and to assess adverse effects of physical rehabilitation.\nSearch methods\nOn 16 July 2014 we searched the Cochrane Neuromuscular Disease Group Specialized Register and on 14 July 2014 we searched CENTRAL, MEDLINE, EMBASE and CINAHL Plus. In July 2014, we searched the Physiotherapy Evidence Database (PEDro, http://www.pedro.org.au/) and three trials registries for ongoing trials and further data about included studies. There were no language restrictions. We also handsearched relevant conference proceedings and screened reference lists to identify further trials.\nSelection criteria\nWe planned to include randomised controlled trials (RCTs), quasi-RCTs and randomised controlled cross-over trials of any rehabilitation intervention in people with acquired weakness syndrome due to CIP/CIM.\nData collection and analysis\nWe would have extracted data, assessed the risk of bias and classified the quality of evidence for outcomes in duplicate, according to the standard procedures of The Cochrane Collaboration. Outcome data collection would have been for activities of daily living (for example, mobility, walking, transfers and self care). Secondary outcomes included muscle strength, quality of life and adverse events.\nMain results\nThe search strategy retrieved 3587 references. After examination of titles and abstracts, we retrieved the full text of 24 potentially relevant studies. None of these studies met the inclusion criteria of our review. No data were suitable to be included in a meta-analysis.\nAuthors' conclusions\nThere are no published RCTs or quasi-RCTs that examine whether physical rehabilitation interventions improve activities of daily living for people with CIP and CIM. Large RCTs, which are feasible, need to be conducted to explore the role of physical rehabilitation interventions for people with CIP and CIM.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no high quality trials that met our stringent criteria for inclusion in this review."
}
]
|
query-laysum | 5492 | [
{
"role": "user",
"content": "Abstract: Background\nElevated blood pressure, or hypertension, is the leading cause of preventable deaths globally. Diets high in sodium (predominantly sodium chloride) and low in potassium contribute to elevated blood pressure. The WHO recommends decreasing mean population sodium intake through effective and safe strategies to reduce hypertension and its associated disease burden. Incorporating low-sodium salt substitutes (LSSS) into population strategies has increasingly been recognised as a possible sodium reduction strategy, particularly in populations where a substantial proportion of overall sodium intake comes from discretionary salt. The LSSS contain lower concentrations of sodium through its displacement with potassium predominantly, or other minerals. Potassium-containing LSSS can potentially simultaneously decrease sodium intake and increase potassium intake. Benefits of LSSS include their potential blood pressure-lowering effect and relatively low cost. However, there are concerns about potential adverse effects of LSSS, such as hyperkalaemia, particularly in people at risk, for example, those with chronic kidney disease (CKD) or taking medications that impair potassium excretion.\nObjectives\nTo assess the effects and safety of replacing salt with LSSS to reduce sodium intake on cardiovascular health in adults, pregnant women and children.\nSearch methods\nWe searched MEDLINE (PubMed), Embase (Ovid), Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science Core Collection (Clarivate Analytics), Cumulative Index to Nursing and Allied Health Literature (CINAHL, EBSCOhost), ClinicalTrials.gov and WHO International Clinical Trials Registry Platform (ICTRP) up to 18 August 2021, and screened reference lists of included trials and relevant systematic reviews. No language or publication restrictions were applied.\nSelection criteria\nWe included randomised controlled trials (RCTs) and prospective analytical cohort studies in participants of any age in the general population, from any setting in any country. This included participants with non-communicable diseases and those taking medications that impair potassium excretion. Studies had to compare any type and method of implementation of LSSS with the use of regular salt, or no active intervention, at an individual, household or community level, for any duration.\nData collection and analysis\nTwo review authors independently screened titles, abstracts and full-text articles to determine eligibility; and extracted data, assessed risk of bias (RoB) using the Cochrane RoB tool, and assessed the certainty of the evidence using GRADE. We stratified analyses by adults, children (≤ 18 years) and pregnant women. Primary effectiveness outcomes were change in diastolic and systolic blood pressure (DBP and SBP), hypertension and blood pressure control; cardiovascular events and cardiovascular mortality were additionally assessed as primary effectiveness outcomes in adults. Primary safety outcomes were change in blood potassium, hyperkalaemia and hypokalaemia.\nMain results\nWe included 26 RCTs, 16 randomising individual participants and 10 randomising clusters (families, households or villages). A total of 34,961 adult participants and 92 children were randomised to either LSSS or regular salt, with the smallest trial including 10 and the largest including 20,995 participants. No studies in pregnant women were identified. 
Studies included only participants with hypertension (11/26), normal blood pressure (1/26), pre-hypertension (1/26), or participants with and without hypertension (11/26). This was unknown in the remaining studies. The largest study included only participants with an elevated risk of stroke at baseline. Seven studies included adult participants possibly at risk of hyperkalaemia. All 26 trials specifically excluded participants in whom an increased potassium intake is known to be potentially harmful. The majority of trials were conducted in rural or suburban settings, with more than half (14/26) conducted in low- and middle-income countries.\nThe proportion of sodium chloride replacement in the LSSS interventions varied from approximately 3% to 77%. The majority of trials (23/26) investigated LSSS where potassium-containing salts were used to substitute sodium. In most trials, LSSS implementation was discretionary (22/26). Trial duration ranged from two months to nearly five years.\nWe assessed the overall risk of bias as high in six trials and unclear in 12 trials.\nLSSS compared to regular salt in adults: LSSS compared to regular salt probably reduce DBP on average (mean difference (MD) -2.43 mmHg, 95% confidence interval (CI) -3.50 to -1.36; 20,830 participants, 19 RCTs, moderate-certainty evidence) and SBP (MD -4.76 mmHg, 95% CI -6.01 to -3.50; 21,414 participants, 20 RCTs, moderate-certainty evidence) slightly.\nOn average, LSSS probably reduce non-fatal stroke (absolute effect (AE) 20 fewer/100,000 person-years, 95% CI -40 to 2; 21,250 participants, 3 RCTs, moderate-certainty evidence), non-fatal acute coronary syndrome (AE 150 fewer/100,000 person-years, 95% CI -250 to -30; 20,995 participants, 1 RCT, moderate-certainty evidence) and cardiovascular mortality (AE 180 fewer/100,000 person-years, 95% CI -310 to 0; 23,200 participants, 3 RCTs, moderate-certainty evidence) slightly, and probably increase blood potassium slightly (MD 0.12 mmol/L, 95% CI 0.07 to 0.18; 784 participants, 6 RCTs, moderate-certainty evidence), compared to regular salt.\nLSSS may result in little to no difference, on average, in hypertension (AE 17 fewer/1000, 95% CI -58 to 17; 2566 participants, 1 RCT, low-certainty evidence) and hyperkalaemia (AE 4 more/100,000, 95% CI -47 to 121; 22,849 participants, 5 RCTs, moderate-certainty evidence) compared to regular salt. The evidence is very uncertain about the effects of LSSS on blood pressure control, various cardiovascular events, stroke mortality, hypokalaemia, and other adverse events (very-low certainty evidence).\nLSSS compared to regular salt in children: The evidence is very uncertain about the effects of LSSS on DBP and SBP in children. We found no evidence about the effects of LSSS on hypertension, blood pressure control, blood potassium, hyperkalaemia and hypokalaemia in children.\nAuthors' conclusions\nWhen compared to regular salt, LSSS probably reduce blood pressure, non-fatal cardiovascular events and cardiovascular mortality slightly in adults. However, LSSS also probably increase blood potassium slightly in adults. These small effects may be important when LSSS interventions are implemented at the population level. Evidence is limited for adults without elevated blood pressure, and there is a lack of evidence in pregnant women and people in whom an increased potassium intake is known to be potentially harmful, limiting conclusions on the safety of LSSS in the general population. 
We also cannot draw firm conclusions about effects of non-discretionary LSSS implementations. The evidence is very uncertain about the effects of LSSS on blood pressure in children.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 26 trials* involving 34,961 adults and 92 children. No studies in pregnant women were found. Most trials were undertaken in rural or suburban areas, with more than half done in low- and middle-income countries. Most trials included some people with high blood pressure (22); the largest included only people with a high risk of stroke. Seven trials were done in people at possible risk of high blood potassium. All trials excluded people where high potassium intake is known to be harmful, such as people with kidney problems or on certain medications. Nearly all trials (23) examined LSSS types where some sodium was replaced with potassium. The amount of sodium replaced in the various LSSS used in the trials ranged from very small (3%) to large (77%).\n*Trials are types of studies in which participants are assigned randomly to two or more treatment groups. This is the best way to ensure similar groups of participants."
}
]
|
query-laysum | 5493 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with asthma may experience exacerbations, or 'attacks', during which their symptoms worsen and additional treatment is required. Written action plans sometimes advocate a short-term increase in the dose of inhaled corticosteroids (ICS) at the first sign of an exacerbation to reduce the severity of the attack and to prevent the need for oral steroids or hospital admission.\nObjectives\nTo compare the clinical effectiveness and safety of increased versus stable doses of ICS as part of a patient-initiated action plan for the home management of exacerbations in children and adults with persistent asthma.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register, which is derived from searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and CINAHL (Cumulative Index to Nursing and Allied Health Literature), and handsearched abstracts to 20 December 2021. We also searched major trial registries for ongoing trials.\nSelection criteria\nWe included parallel and cross-over randomised controlled trials (RCTs) that allocated people with persistent asthma to take a blinded inhaler in the event of an exacerbation which either increased their daily dose of ICS or kept it stable (placebo).\nData collection and analysis\nTwo review authors independently selected trials, assessed quality, and extracted data. We reassessed risk of bias for all studies at the result level using the revised risk of bias tool for RCTs (Risk of Bias 2), and employed the GRADE approach to assess our confidence in the synthesised effect estimates. The primary outcome was treatment failure, defined as the need for rescue oral steroids in the randomised population. Secondary outcomes were treatment failure in the subset who initiated the study inhaler (treated population), unscheduled physician visits, unscheduled acute care, emergency department or hospital visits, serious and non-serious adverse events, and duration of exacerbation.\nMain results\nThis review update added a new study that increased the number of people in the primary analysis from 1520 to 1774, and incorporates the most up-to-date methods to assess the likely impact of bias within the meta-analyses. The updated review now includes nine RCTs (1923 participants; seven parallel and two cross-over) conducted in Europe, North America, and Australasia and published between 1998 and 2018. Five studies evaluated adult populations (n = 1247; ≥ 15 years), and four studies evaluated child or adolescent populations (n = 676; < 15 years). All study participants had mild to moderate asthma. Studies varied in the dose of maintenance ICS, age, fold increase of ICS in the event of an exacerbation, criteria for initiating the study inhaler, and allowed medications. Approximately 50% of randomised participants initiated the study inhaler (range 23% to 100%), and the included studies reported treatment failure in a variety of ways, meaning assumptions were required to permit the combining of data.\nParticipants randomised to increase their ICS dose at the first signs of an exacerbation had similar odds of needing rescue oral corticosteroids to those randomised to a placebo inhaler (odds ratio (OR) 0.97, 95% confidence interval (CI) 0.76 to 1.25; 8 studies; 1774 participants; I2 = 0%; moderate quality evidence). 
We could draw no firm conclusions from subgroup analyses conducted to investigate the impact of age, time to treatment initiation, baseline dose, smoking history, and fold increase of ICS on the primary outcome. Results for the same outcome in the subset of participants who initiated the study inhaler were unchanged from the previous version, which provides a different point estimate with very low confidence due to heterogeneity, imprecision, and risk of bias (OR 0.84, 95% CI 0.54 to 1.30; 7 studies; 766 participants; I2 = 42%; random-effects model). Confidence was reduced due to risk of bias and assumptions that had to be made to include study data in the intention-to-treat and treated-population analyses. Sensitivity analyses that tested the impact of assumptions made for synthesis and to exclude cross-over studies, studies at overall high risk of bias, and those with commercial funding did not change our conclusions.\nPooled effects for unscheduled physician visits, unscheduled acute care, emergency department or hospital visits, and duration of exacerbation made it very difficult to determine where the true effect may lie, and confidence was reduced by risk of bias. Point estimates for both serious and non-serious adverse events favoured keeping ICS stable, but imprecision and risk of bias due to missing data and outcome measurement and reporting reduced our confidence in the effects (serious adverse events: OR 1.69, 95% CI 0.77 to 3.71; 2 studies; 394 participants; I² = 0%; non-serious adverse events: OR 2.15, 95% CI 0.68 to 6.73; 2 studies; 142 participants; I² = 0%).\nAuthors' conclusions\nEvidence from double-blind trials of adults and children with mild to moderate asthma suggests there is unlikely to be an important reduction in the need for oral steroids from increasing a patient's ICS dose at the first sign of an exacerbation. Other clinically important benefits and potential harms of increased doses of ICS compared with keeping the dose stable cannot be ruled out due to wide confidence intervals, risk of bias in the trials, and assumptions that had to be made for synthesis. Included studies conducted between 1998 and 2018 reflect evolving clinical practice and study methods, and the data do not support thorough investigation of effect modifiers such as baseline dose, fold increase, asthma severity and timing. The review does not include recent evidence from pragmatic, unblinded studies showing benefits of larger dose increases in those with poorly controlled asthma. A systematic review is warranted to examine the differences between the blinded and unblinded trials using robust methods for assessing risk of bias to present the most complete view of the evidence for decision makers.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review is current to 20 December 2021, and the studies were published between 1998 and 2018."
}
]
|
query-laysum | 5494 | [
{
"role": "user",
"content": "Abstract: Background\nDepression affects approximately one-quarter of people treated with dialysis and is considered an important research uncertainty by patients and health professionals. Treatment for depression in dialysis patients may have different benefits and harms compared to the general population due to different clearances of antidepressant medication and the severity of somatic symptoms associated with end-stage kidney disease (ESKD). Guidelines suggest treatment of depression in dialysis patients with pharmacological therapy, preferably a selective serotonin reuptake inhibitor. This is an update of a review first published in 2005.\nObjectives\nTo evaluate the benefit and harms of antidepressants for treating depression in adults with ESKD treated with dialysis.\nSearch methods\nWe searched Cochrane Kidney and Transplant's Specialised Register to 20 January 2016 through contact with the Information Specialist using search terms relevant to this review.\nSelection criteria\nRandomised controlled trials (RCTs) comparing antidepressant treatment with placebo or no treatment, or compared to another antidepressant medication or psychological intervention in adults with ESKD (estimated glomerular filtration rate < 15 mL/min/1.73 m2).\nData collection and analysis\nData were abstracted by two authors independently onto a standard form and subsequently entered into Review Manager. Risk ratios (RR) for dichotomous data and mean differences (MD) for continuous data were calculated with 95% confidence intervals (95% CI).\nMain results\nFour studies in 170 participants compared antidepressant therapy (fluoxetine, sertraline, citalopram or escitalopram) versus placebo or psychological training for 8 to 12 weeks. In generally very low or ungradeable evidence, compared to placebo, antidepressant therapy had no evidence of benefit on quality of life, had uncertain effects on increasing the risk of hypotension (3 studies, 144 participants: RR 1.72, 95% CI 0.75 to 3.92), headache (2 studies 56 participants: RR 2.91, 95% CI 0.73 to 11.57), and sexual dysfunction (2 studies, 101 participants: RR 3.83, 95% CI 0.63 to 23.34), and increased nausea (3 studies, 114 participants: RR 2.67, 95% CI 1.26 to 5.68). There were few or no data for hospitalisation, suicide or all-cause mortality resulting in inconclusive evidence. Antidepressant therapy may reduce depression scores during treatment compared to placebo (1 study, 43 participants: MD -7.50, 95% CI -11.94 to -3.06). Antidepressant therapy was not statistically different from group psychological therapy for effects on depression scores or withdrawal from treatment and a range of other outcomes were not measured.\nAuthors' conclusions\nDespite the high prevalence of depression in dialysis patients and the relative priority that patients place on effective treatments, evidence for antidepressant medication in the dialysis setting is sparse and data are generally inconclusive. The relative benefits and harms of antidepressant therapy in dialysis patients are poorly known and large randomised studies of antidepressants versus placebo are required.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Unfortunately, even though depression is very common and finding good treatments for depression are highly valued by patients on dialysis, there are only a few small studies to tell us about whether drug treatments are both safe and reduce symptoms. Based on this information, we still don't know whether depression treatment works well for people treated with dialysis and is safe (doesn't cause excess and serious side effects)."
}
]
|
query-laysum | 5495 | [
{
"role": "user",
"content": "Abstract: Background\nMost people who stop smoking gain weight. This can discourage some people from making a quit attempt and risks offsetting some, but not all, of the health advantages of quitting. Interventions to prevent weight gain could improve health outcomes, but there is a concern that they may undermine quitting.\nObjectives\nTo systematically review the effects of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation (referred to as 'Part 1') and (2) interventions designed to aid smoking cessation that plausibly affect post-cessation weight gain (referred to as 'Part 2').\nSearch methods\nPart 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL; latest search 16 October 2020.\nPart 2 - We searched included studies in the following 'parent' Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, e-cigarettes, and exercise interventions for smoking cessation published in Issue 10, 2020 of the Cochrane Library. We updated register searches for the review of nicotine receptor partial agonists.\nSelection criteria\nPart 1 - trials of interventions that targeted post-cessation weight gain and had measured weight at any follow-up point or smoking cessation, or both, six or more months after quit day.\nPart 2 - trials included in the selected parent Cochrane reviews reporting weight change at any time point.\nData collection and analysis\nScreening and data extraction followed standard Cochrane methods. Change in weight was expressed as difference in weight change from baseline to follow-up between trial arms and was reported only in people abstinent from smoking. Abstinence from smoking was expressed as a risk ratio (RR). Where appropriate, we performed meta-analysis using the inverse variance method for weight, and Mantel-Haenszel method for smoking.\nMain results\nPart 1: We include 37 completed studies; 21 are new to this update. We judged five studies to be at low risk of bias, 17 to be at unclear risk and the remainder at high risk.\nAn intermittent very low calorie diet (VLCD) comprising full meal replacement provided free of charge and accompanied by intensive dietitian support significantly reduced weight gain at end of treatment compared with education on how to avoid weight gain (mean difference (MD) −3.70 kg, 95% confidence interval (CI) −4.82 to −2.58; 1 study, 121 participants), but there was no evidence of benefit at 12 months (MD −1.30 kg, 95% CI −3.49 to 0.89; 1 study, 62 participants). The VLCD increased the chances of abstinence at 12 months (RR 1.73, 95% CI 1.10 to 2.73; 1 study, 287 participants). However, a second study found that no-one completed the VLCD intervention or achieved abstinence.\nInterventions aimed at increasing acceptance of weight gain reported mixed effects at end of treatment, 6 months and 12 months with confidence intervals including both increases and decreases in weight gain compared with no advice or health education. Due to high heterogeneity, we did not combine the data. These interventions increased quit rates at 6 months (RR 1.42, 95% CI 1.03 to 1.96; 4 studies, 619 participants; I2 = 21%), but there was no evidence at 12 months (RR 1.25, 95% CI 0.76 to 2.06; 2 studies, 496 participants; I2 = 26%).\nSome pharmacological interventions tested for limiting post-cessation weight gain (PCWG) reduced weight gain at the end of treatment (dexfenfluramine, phenylpropanolamine, naltrexone). 
The effects of ephedrine and caffeine combined, lorcaserin, and chromium were too imprecise to give useful estimates of treatment effects. There was very low-certainty evidence that personalized weight management support reduced weight gain at end of treatment (MD −1.11 kg, 95% CI −1.93 to −0.29; 3 studies, 121 participants; I2 = 0%), but no evidence in the longer-term 12 months (MD −0.44 kg, 95% CI −2.34 to 1.46; 4 studies, 530 participants; I2 = 41%). There was low to very low-certainty evidence that detailed weight management education without personalized assessment, planning and feedback did not reduce weight gain and may have reduced smoking cessation rates (12 months: MD −0.21 kg, 95% CI −2.28 to 1.86; 2 studies, 61 participants; I2 = 0%; RR for smoking cessation 0.66, 95% CI 0.48 to 0.90; 2 studies, 522 participants; I2 = 0%).\nPart 2: We include 83 completed studies, 27 of which are new to this update.\nThere was low certainty that exercise interventions led to minimal or no weight reduction compared with standard care at end of treatment (MD −0.25 kg, 95% CI −0.78 to 0.29; 4 studies, 404 participants; I2 = 0%). However, weight was reduced at 12 months (MD −2.07 kg, 95% CI −3.78 to −0.36; 3 studies, 182 participants; I2 = 0%).\nBoth bupropion and fluoxetine limited weight gain at end of treatment (bupropion MD −1.01 kg, 95% CI −1.35 to −0.67; 10 studies, 1098 participants; I2 = 3%); (fluoxetine MD −1.01 kg, 95% CI −1.49 to −0.53; 2 studies, 144 participants; I2 = 38%; low- and very low-certainty evidence, respectively). There was no evidence of benefit at 12 months for bupropion, but estimates were imprecise (bupropion MD −0.26 kg, 95% CI −1.31 to 0.78; 7 studies, 471 participants; I2 = 0%). No studies of fluoxetine provided data at 12 months.\nThere was moderate-certainty that NRT reduced weight at end of treatment (MD −0.52 kg, 95% CI −0.99 to −0.05; 21 studies, 2784 participants; I2 = 81%) and moderate-certainty that the effect may be similar at 12 months (MD −0.37 kg, 95% CI −0.86 to 0.11; 17 studies, 1463 participants; I2 = 0%), although the estimates are too imprecise to assess long-term benefit.\nThere was mixed evidence of the effect of varenicline on weight, with high-certainty evidence that weight change was very modestly lower at the end of treatment (MD −0.23 kg, 95% CI −0.53 to 0.06; 14 studies, 2566 participants; I2 = 32%); a low-certainty estimate gave an imprecise estimate of higher weight at 12 months (MD 1.05 kg, 95% CI −0.58 to 2.69; 3 studies, 237 participants; I2 = 0%).\nAuthors' conclusions\nOverall, there is no intervention for which there is moderate certainty of a clinically useful effect on long-term weight gain. There is also no moderate- or high-certainty evidence that interventions designed to limit weight gain reduce the chances of people achieving abstinence from smoking.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 116 studies in total:\n37 studies of specific programmes for weight control for people stopping smoking (21 new studies for this update); and 83 studies of other ways to help people stop smoking (27 new studies for this update). Four of these studies contributed to both.\nThe 37 studies of specific programmes tested behavioural programmes, including dieting, to manage weight in 11,514 people trying to stop smoking. Some of the behavioural programmes were acceptance-based, in which people also learn self-regulation skills (for example, how to deal with cravings) to help them keep to the behaviours needed to lose weight. Most studies (27) were done in the USA and others took place in Australia, Canada, China and Europe.\nThe 83 studies of other ways of stopping smoking included 46,248 people and looked at:\nexercise programmes; taking NRT; taking a medicine called varenicline (used to help people stop smoking); or taking a medicine called fluoxetine (used to treat depression).\nOf these studies, 39 were done in the USA and the rest took place in other countries around the world. Very few studies reported on side effects."
}
]
|
query-laysum | 5496 | [
{
"role": "user",
"content": "Abstract: Background\n¹⁸F-FDFG uptake by brain tissue as measured by positron emission tomography (PET) is a well-established method for assessment of brain function in people with dementia. Certain findings on brain PET scans can potentially predict the decline of mild cognitive Impairment (MCI) to Alzheimer’s disease dementia or other dementias.\nObjectives\nTo determine the diagnostic accuracy of the ¹⁸F-FDG PET index test for detecting people with MCI at baseline who would clinically convert to Alzheimer’s disease dementia or other forms of dementia at follow-up.\nSearch methods\nWe searched the Cochrane Register of Diagnostic Test Accuracy Studies, MEDLINE, EMBASE, Science Citation Index, PsycINFO, BIOSIS previews, LILACS, MEDION, (Meta-analyses van Diagnostisch Onderzoek), DARE (Database of Abstracts of Reviews of Effects), HTA (Health Technology Assessment Database), ARIF (Aggressive Research Intelligence Facility) and C-EBLM (International Federation of Clinical Chemistry and Laboratory Medicine Committee for Evidence-based Laboratory Medicine) databases to January 2013. We checked the reference lists of any relevant studies and systematic reviews for additional studies.\nSelection criteria\nWe included studies that evaluated the diagnostic accuracy of ¹⁸F-FDG PET to determine the conversion from MCI to Alzheimer’s disease dementia or to other forms of dementia, i.e. any or all of vascular dementia, dementia with Lewy bodies, and fronto-temporal dementia. These studies necessarily employ delayed verification of conversion to dementia and are sometimes labelled as ‘delayed verification cross-sectional studies’.\nData collection and analysis\nTwo blinded review authors independently extracted data, resolving disagreement by discussion, with the option to involve a third review author as arbiter if necessary. We extracted and summarised graphically the data for two-by-two tables. We conducted exploratory analyses by plotting estimates of sensitivity and specificity from each study on forest plots and in receiver operating characteristic (ROC) space. When studies had mixed thresholds, we derived estimates of sensitivity and likelihood ratios at fixed values (lower quartile, median and upper quartile) of specificity from the hierarchical summary ROC (HSROC) models.\nMain results\nWe included 14 studies (421 participants) in the analysis. The sensitivities for conversion from MCI to Alzheimer's disease dementia were between 25% and 100% while the specificities were between 15% and 100%. From the summary ROC curve we fitted we estimated that the sensitivity was 76% (95% confidence interval (CI): 53.8 to 89.7) at the included study median specificity of 82%. This equates to a positive likelihood ratio of 4.03 (95% CI: 2.97 to 5.47), and a negative likelihood ratio of 0.34 (95% CI: 0.15 to 0.75). Three studies recruited participants from the same Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort but only the largest ADNI study (Herholz 2011) is included in the meta-analysis. In order to demonstrate whether the choice of ADNI study or discriminating brain region (Chételat 2003) or reader assessment (Pardo 2010) make a difference to the pooled estimate, we performed five additional analyses. At the median specificity of 82%, the estimated sensitivity was between 74% and 76%. There was no impact on our findings. In addition to evaluating Alzheimer's disease dementia, five studies evaluated the accuracy of ¹⁸F-FDG PET for all types of dementia. 
The sensitivities were between 46% and 95% while the specificities were between 29% and 100%; however, we did not conduct a meta-analysis because of too few studies, and those studies which we had found recruited small numbers of participants. Our findings are based on studies with poor reporting, and the majority of included studies had an unclear risk of bias, mainly for the reference standard and participant selection domains. According to the assessment of Index test domain, more than 50% of studies were of poor methodological quality.\nAuthors' conclusions\nIt is difficult to determine to what extent the findings from the meta-analysis can be applied to clinical practice. Given the considerable variability of specificity values and lack of defined thresholds for determination of test positivity in the included studies, the current evidence does not support the routine use of ¹⁸F-FDG PET scans in clinical practice in people with MCI. The ¹⁸F-FDG PET scan is a high-cost investigation, and it is therefore important to clearly demonstrate its accuracy and to standardise the process of ¹⁸F-FDG PET diagnostic modality prior to its being widely used. Future studies with more uniform approaches to thresholds, analysis and study conduct may provide a more homogeneous estimate than the one available from the included studies we have identified.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The numbers of people with dementia and other cognitive problems are increasing globally. A diagnosis of dementia at early stage is recommended but there is no agreement on the best approach. A range of tests have been developed which healthcare professionals can use to assess people with poor memory or cognitive impairment. In this review we have focused on the ¹⁸F-FDG PET test."
}
]
|
query-laysum | 5497 | [
{
"role": "user",
"content": "Abstract: Background\nSore throat is a common reason for people to present for medical care and to be prescribed antibiotics. Overuse of antibiotics in primary medicine is a concern, hence it is important to establish their efficacy in treating sore throat and preventing secondary complications.\nObjectives\nTo assess the effects of antibiotics for reducing symptoms of sore throat for child and adult patients.\nSearch methods\nWe searched CENTRAL 2021, Issue 2, MEDLINE (January 1966 to April week 1, 2021), Embase (January 1990 to April 2021), and two trial registries (searched 6 April 2021).\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs of antibiotics versus control assessing typical sore throat symptoms or complications amongst children and adults seeking medical care for sore throat symptoms.\nData collection and analysis\nWe used standard methodological procedures as recommended by Cochrane. Two review authors independently screened studies for inclusion and extracted data, resolving any differences in opinion by discussion. We contacted the trial authors from three studies for additional information. We used GRADE to assess the certainty of the evidence for the efficacy of antibiotics on our primary outcomes (sore throat at day three and one week) and secondary outcomes (fever and headache symptoms and incidence of acute rheumatic fever, acute glomerulonephritis, acute otitis media, acute sinusitis, and quinsy).\nMain results\nWe included 29 trials with 15,337 cases of sore throat. The majority of included studies were conducted in the 1950s, during which time the rates of serious complications (especially acute rheumatic fever) were much higher than today. Although clinical antibiotic trials for sore throat and respiratory symptoms are still being conducted, it is unusual for them to include placebo or 'no treatment' control arms, which is a requirement for inclusion in the review.\nThe age of participants ranged from younger than one year to older than 50 years, but most participants across all studies were adults. Although all studies recruited patients presenting with symptoms of sore throat, few of them distinguished between bacterial and viral aetiology. Bias may have been introduced through non-clarity in treatment allocation procedures and lack of blinding in some studies. Harms from antibiotics were poorly or inconsistently reported, and were thus not quantified for this review.\n1. Symptoms\nThroat soreness and headache at day three were reduced by using antibiotics, although 82% of participants in the placebo or no treatment group were symptom-free by one week. The reduction in sore throat symptoms at day three (risk ratio (RR) 0.70, 95% confidence interval (CI) 0.60 to 0.80; 16 studies, 3730 participants; moderate-certainty evidence) was greater than at one week in absolute numbers (RR 0.50, 95% CI 0.34 to 0.75; 14 studies, 3083 participants; moderate-certainty evidence) due to many cases in both treatment groups having resolved by this time. The number needed to treat for an additional beneficial outcome (NNTB) to prevent one sore throat at day three was less than six; at week one it was 18. Compared with placebo or no treatment, antibiotics did not significantly reduce fever at day three (RR 0.75, 95% CI 0.53 to 1.07; 8 studies, 1443 participants; high-certainty evidence), but did reduce headache at day three (RR 0.49, 95% CI 0.34 to 0.70; 4 studies, 1020 participants; high-certainty evidence).\n2. 
Suppurative complications\nWhilst the prevalence of suppurative complications was low, antibiotics reduced the incidence of acute otitis media within 14 days (Peto odds ratio (OR) 0.21, 95% CI 0.11 to 0.40; 10 studies, 3646 participants; high-certainty evidence) and quinsy within two months (Peto OR 0.16, 95% CI 0.07 to 0.35; 8 studies, 2433 participants; high-certainty evidence) compared to those receiving placebo or no treatment, but not acute sinusitis within 14 days (Peto OR 0.46, 95% CI 0.10 to 2.05; 8 studies, 2387 participants; high-certainty evidence).\n3. Non-suppurative complications\nThere were too few cases of acute glomerulonephritis to determine whether there was a protective effect of antibiotics compared with placebo against this complication (Peto OR 0.07, 95% CI 0.00 to 1.32; 10 studies, 5147 participants; low-certainty evidence). Antibiotics reduced acute rheumatic fever within two months when compared to the control group (Peto OR 0.36, 95% CI 0.26 to 0.50; 18 studies, 12,249 participants; moderate-certainty evidence). It should be noted that the overall prevalence of acute rheumatic fever was very low, particularly in the later studies.\nAuthors' conclusions\nAntibiotics probably reduce the number of people experiencing sore throat, and reduce the likelihood of headache, and some sore throat complications. As the effect on symptoms can be small, clinicians must judge on an individual basis whether it is clinically justifiable to use antibiotics to produce this effect, and whether the underlying cause of the sore throat is likely to be of bacterial origin. Furthermore, the balance between modest symptom reduction and the potential hazards of antimicrobial resistance must be recognised. Few trials have attempted to measure symptom severity. If antibiotics reduce the severity as well as the duration of symptoms, their benefit will have been underestimated in this meta-analysis. Additionally, more trials are needed in low-income countries, in socio-economically deprived sections of high-income countries, as well as in children.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to April 2021."
}
]
|
query-laysum | 5498 | [
{
"role": "user",
"content": "Abstract: Background\nHepatocellular carcinoma occurs mostly in people with chronic liver disease. Worldwide, it ranks sixth in terms of incidence of cancer, and fourth in terms of cancer-related deaths. Contrast-enhanced ultrasound (CEUS) is used as an add-on test to confirm the presence of focal liver lesions suspected as hepatocellular carcinoma after prior diagnostic tests such as abdominal ultrasound or measurement of alpha-foetoprotein, or both. According to guidelines, a single contrast-enhanced imaging investigation, with either computed tomography (CT) or magnetic resonance imaging (MRI), may show the typical hepatocellular carcinoma hallmarks in people with cirrhosis, which will be sufficient to diagnose hepatocellular carcinoma. However, a significant number of hepatocellular carcinomas show atypical imaging features, and therefore, are missed at imaging.\nDynamic CEUS images are obtained similarly to CT and MRI images. CEUS differentiates between arterial and portal venous phases, in which sonographic hepatocellular carcinoma hallmarks, such as arterial hyperenhancement and subsequent washout appearance, are investigated. The advantages of CEUS over CT and MRI include real-time imaging, use of contrast agents that do not contain iodine and are not nephrotoxic, and quick image acquisition. Despite the advantages, the use of CEUS in the diagnostic algorithm for HCC remains controversial, with disagreement on relevant guidelines.\nThere is no clear evidence of the benefit of surveillance programmes in terms of overall survival as the conflicting results can be a consequence of an inaccurate detection, ineffective treatment, or both. Therefore, assessing the diagnostic accuracy of CEUS may clarify whether the absence of benefit could be related to underdiagnosis. Furthermore, an assessment of the accuracy of CEUS for the diagnosis of hepatocellular carcinoma is needed for either diagnosing hepatocellular carcinoma or ruling it out in people with chronic liver disease who are not included in surveillance programmes.\nObjectives\n1. To assess the diagnostic accuracy of contrast-enhanced ultrasound (CEUS) for the diagnosis of hepatocellular carcinoma of any size and at any stage in adults with chronic liver disease, in a surveillance programme or in a clinical setting.\n2. To assess the diagnostic accuracy of CEUS for the diagnosis of resectable hepatocellular carcinoma in people with chronic liver disease and identify potential sources of heterogeneity in the results.\nSearch methods\nWe used standard, extensive Cochrane search methods. The last date of search was 5 November 2021.\nSelection criteria\nWe included studies assessing the diagnostic accuracy of CEUS for the diagnosis of hepatocellular carcinoma in adults with chronic liver disease, with cross-sectional designs, using one of the acceptable reference standards, such as pathology of the explanted liver, and histology of resected or biopsied focal liver lesion with at least a six-month follow-up.\nData collection and analysis\nWe used standard Cochrane methods to screen studies, extract data, and assess the risk of bias and applicability concerns, using the QUADAS-2 checklist. We used the bivariate model and provided estimates of summary sensitivity and specificity. We assessed the certainty of the evidence using GRADE. We presented uncertainty-of-the-accuracy estimates using 95% confidence intervals (CIs).\nMain results\nWe included 23 studies with 6546 participants. Studies were published between 2001 and 2021. 
We judged all 23 studies at high-risk of bias in at least one domain, and 13/23 studies at high concern for applicability. Most studies used different reference standards to exclude the presence of the target condition. The time interval between the index test and the reference standard was rarely defined. We also had major concerns on their applicability due to the characteristics of the participants.\n– CEUS for hepatocellular carcinoma of any size and stage: sensitivity 77.8% (95% CI 69.4% to 84.4%) and specificity 93.8% (95% CI 89.1% to 96.6%) (23 studies, 6546 participants; very low-certainty evidence).\n– CEUS for resectable hepatocellular carcinoma: sensitivity 77.5% (95% CI 62.9% to 87.6%) and specificity 92.7% (95% CI 86.8% to 96.1%) (13 studies, 1257 participants; low-certainty evidence).\nThe observed heterogeneity in the results remains unexplained. The sensitivity analyses, including only studies with clearly prespecified positivity criteria and only studies in which the reference standard results were interpreted with no knowledge of the results about the index test, showed no differences in the results.\nAuthors' conclusions\nWe found that by using CEUS, as an add-on test following abdominal ultrasound, to diagnose hepatocellular carcinoma of any size and stage, 22% of people with hepatocellular carcinoma would be missed, and 6% of people without hepatocellular carcinoma would unnecessarily undergo further testing or inappropriate treatment. As to resectable hepatocellular carcinoma, we found that 23% of people with resectable hepatocellular carcinoma would incorrectly be unresected, while 8% of people without hepatocellular carcinoma would undergo further inappropriate testing or treatment. The uncertainty resulting from the high risk of bias of the included studies, heterogeneity, and imprecision of the results and concerns on their applicability limit our ability to draw confident conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In adults with chronic liver disease, contrast-enhanced ultrasound (CEUS) can miss diagnosing hepatocellular carcinoma in around 22.2% of people who would not then receive timely or appropriate treatment, and would fail to diagnose hepatocellular carcinoma in 6.2% of people who could receive unnecessary further testing or treatment.\nIn the subset of people who are able have the hepatocellular carcinoma removed by surgery, CEUS probably misses hepatocellular carcinoma in 22.5% of people who could undergo surgery to remove part of their liver while it would erroneously find cancer in 7.3% of people who would undergo unnecessary further test or surgery.\nAs there were some problems with the way the studies were conducted, CEUS to detect hepatocellular carcinoma may appear more accurate than it actually is."
}
]
|
query-laysum | 5499 | [
{
"role": "user",
"content": "Abstract: Background\nAlzheimer's disease is the most common cause of dementia in older people. One approach to symptomatic treatment of Alzheimer's disease is to enhance cholinergic neurotransmission in the brain by blocking the action of the enzyme responsible for the breakdown of the neurotransmitter acetylcholine. This can be done by a group of drugs known as cholinesterase inhibitors. Donepezil is a cholinesterase inhibitor.\nThis review is an updated version of a review first published in 1998.\nObjectives\nTo assess the clinical efficacy and safety of donepezil in people with mild, moderate or severe dementia due to Alzheimer's disease; to compare the efficacy and safety of different doses of donepezil; and to assess the effect of donepezil on healthcare resource use and costs.\nSearch methods\nWe searched Cochrane Dementia and Cognitive Improvement’s Specialized Register, MEDLINE, Embase, PsycINFO and a number of other sources on 20 May 2017 to ensure that the search was as comprehensive and up-to-date as possible. In addition, we contacted members of the Donepezil Study Group and Eisai Inc.\nSelection criteria\nWe included all double-blind, randomised controlled trials in which treatment with donepezil was administered to people with mild, moderate or severe dementia due to Alzheimer's disease for 12 weeks or more and its effects compared with those of placebo in a parallel group of patients, or where two different doses of donepezil were compared.\nData collection and analysis\nOne reviewer (JSB) extracted data on cognitive function, activities of daily living, behavioural symptoms, global clinical state, quality of life, adverse events, deaths and healthcare resource costs. Where appropriate and possible, we estimated pooled treatment effects. We used GRADE methods to assess the quality of the evidence for each outcome.\nMain results\nThirty studies involving 8257 participants met the inclusion criteria of the review, of which 28 studies reported results in sufficient detail for the meta-analyses. Most studies were of six months' duration or less. Only one small trial lasted 52 weeks. The studies tested mainly donepezil capsules at a dose of 5 mg/day or 10 mg/day. Two studies tested a slow-release oral formulation that delivered 23 mg/day. Participants in 21 studies had mild to moderate disease, in five studies moderate to severe, and in four severe disease. Seventeen studies were industry funded or sponsored, four studies were funded independently of industry and for nine studies there was no information on source of funding.\nOur main analysis compared the safety and efficacy of donepezil 10 mg/day with placebo at 24 to 26 weeks of treatment. Thirteen studies contributed data from 3396 participants to this analysis. Eleven of these studies were multicentre studies. Seven studies recruited patients with mild to moderate Alzheimer's disease, two with moderate to severe, and four with severe Alzheimer's disease, with a mean age of about 75 years. 
Almost all evidence was of moderate quality, downgraded due to study limitations.\nAfter 26 weeks of treatment, donepezil compared with placebo was associated with better outcomes for cognitive function measured with the Alzheimer's Disease Assessment Scale-Cognitive (ADAS-Cog, range 0 to 70) (mean difference (MD) -2.67, 95% confidence interval (CI) -3.31 to -2.02, 1130 participants, 5 studies), the Mini-Mental State Examination (MMSE) score (MD 1.05, 95% CI 0.73 to 1.37, 1757 participants, 7 studies) and the Severe Impairment Battery (SIB, range 0 to 100) (MD 5.92, 95% CI 4.53 to 7.31, 1348 participants, 5 studies). Donepezil was also associated with better function measured with the Alzheimer's Disease Cooperative Study activities of daily living score for severe Alzheimer's disease (ADCS-ADL-sev) (MD 1.03, 95% CI 0.21 to 1.85, 733 participants, 3 studies). A higher proportion of participants treated with donepezil experienced improvement on the clinician-rated global impression of change scale (odds ratio (OR) 1.92, 95% CI 1.54 to 2.39, 1674 participants, 6 studies). There was no difference between donepezil and placebo for behavioural symptoms measured by the Neuropsychiatric Inventory (NPI) (MD -1.62, 95% CI -3.43 to 0.19, 1035 participants, 4 studies) or by the Behavioural Pathology in Alzheimer's Disease (BEHAVE-AD) scale (MD 0.4, 95% CI -1.28 to 2.08, 194 participants, 1 study). There was also no difference between donepezil and placebo for Quality of Life (QoL) (MD -2.79, 95% CI -8.15 to 2.56, 815 participants, 2 studies).\nParticipants receiving donepezil were more likely to withdraw from the studies before the end of treatment (24% versus 20%, OR 1.25, 95% CI 1.05 to 1.50, 2846 participants, 12 studies) or to experience an adverse event during the studies (72% vs 65%, OR 1.59, 95% 1.31 to 1.95, 2500 participants, 10 studies).\nThere was no evidence of a difference between donepezil and placebo for patient total healthcare resource utilisation.\nThree studies compared donepezil 10 mg/day to donepezil 5 mg/day over 26 weeks. The 5 mg dose was associated with slightly worse cognitive function on the ADAS-Cog, but not on the MMSE or SIB, with slightly better QoL and with fewer adverse events and withdrawals from treatment. Two studies compared donepezil 10 mg/day to donepezil 23 mg/day. There were no differences on efficacy outcomes, but fewer participants on 10 mg/day experienced adverse events or withdrew from treatment.\nAuthors' conclusions\nThere is moderate-quality evidence that people with mild, moderate or severe dementia due to Alzheimer's disease treated for periods of 12 or 24 weeks with donepezil experience small benefits in cognitive function, activities of daily living and clinician-rated global clinical state. There is some evidence that use of donepezil is neither more nor less expensive compared with placebo when assessing total healthcare resource costs. Benefits on 23 mg/day were no greater than on 10 mg/day, and benefits on the 10 mg/day dose were marginally larger than on the 5 mg/day dose, but the rates of withdrawal and of adverse events before end of treatment were higher the higher the dose.\n\nGiven the provided abstract, please respond to the following query: \"Review methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this review we examined evidence about benefits and harms from studies that compared donepezil, taken for at least 12 weeks, to placebo (a dummy pill), or that compared different doses of donepezil. The studies had to be double-blind and randomised, that is, the decision whether people taking part got donepezil or placebo had to be made randomly and neither they nor the researchers should have known which treatment they were getting while the trial was going on. This was to make the comparison as unbiased, or fair, as possible. We searched for studies up to May 2017. We assessed the quality of all the studies we included. When it was sensible to do so, we analysed the results of studies together to get an overall result."
}
]
|
query-laysum | 5500 | [
{
"role": "user",
"content": "Abstract: Background\nHumour-based interventions are defined as any intervention that promotes health and wellness by stimulating a playful discovery, expression, or appreciation of the absurdity or incongruity of life's situations. Humour-based interventions can be implemented in different settings, including hospitals, nursing homes and day care centres. They have been posed as an adjunct to usual care for people with schizophrenia, but a summary of the evidence is lacking.\nObjectives\nTo examine the effects of humour-based interventions as an add-on intervention to standard care for people with schizophrenia.\nSearch methods\nOn 31 July 2019 and 10 February 2021 we searched the Cochrane Schizophrenia Group's study-based register of trials, which is based on CENTRAL, CINAHL, ClinicalTrials.Gov, Embase, ISRCTN, MEDLINE, PsycINFO, PubMed, and WHO ICTRP.\nSelection criteria\nWe included all randomised controlled trials comparing humour-based interventions with active controls, other psychological interventions, or standard care for people with schizophrenia. We excluded studies fulfilling our prespecified selection criteria but without useable data from further quantitative synthesis.\nData collection and analysis\nTwo review authors independently inspected citations, selected studies, extracted data and appraised study quality, following the guidance from the Cochrane Handbook for Systematic Reviews of Interventions. For binary outcomes we calculated risk ratios (RRs) and their 95% confidence intervals (CIs). For continuous outcomes we calculated the mean differences (MDs) and their 95% CIs. We assessed risks of bias for included studies and created summary of findings tables using the GRADE approach.\nMain results\nWe included three studies in this review for qualitative synthesis, although one study did not report any relevant outcomes. We therefore include two studies (n = 96) in our quantitative synthesis. No data were available on the following prespecified primary outcomes: clinically-important change in general mental state, clinically-important change in negative symptoms, clinically-important change in overall quality of life, and adverse effects. As compared with active control, humour-based interventions may not improve the average endpoint score of a general mental state scale (Positive and Negative Syndrome Scale (PANSS) total score: MD −1.70, 95% CI −17.01 to 13.61; 1 study, 30 participants; very low certainty of evidence); positive symptoms (PANSS positive symptom score: MD 0.00, 95% CI −2.58 to 2.58; 1 study, 30 participants; low certainty of evidence), negative symptoms (PANSS negative symptom score: MD −0.70, 95% CI −4.22 to 2.82; 1 study, 30 participants; very low certainty of evidence) and anxiety (State-Trait Anxiety Inventory (STAI): MD −2.60, 95% CI −5.76 to 0.56; 1 study, 30 participants; low certainty of evidence). Due to the small sample size, we remain uncertain about the effect of humour-based interventions on leaving the study early as compared with active control (no event, 1 study, 30 participants; very low certainty of evidence). On the other hand, humour-based interventions may reduce depressive symptoms (Beck Depression Inventory (BDI): MD −6.20, 95% CI −12.08 to −0.32; 1 study, 30 participants; low certainty of evidence). Compared with standard care, humour-based interventions may not improve depressive symptoms (BDI second edition: MD 0.80, 95% CI −2.64 to 4.24; 1 study, 59 participants; low certainty of evidence). 
We are uncertain about the effect of humour-based interventions on leaving the study early for any reason compared with standard care (risk ratio 0.38, 95% CI 0.08 to 1.80; 1 study, 66 participants; very low certainty of evidence).\nAuthors' conclusions\nWe are currently uncertain whether the evidence supports the use of humour-based interventions in people with schizophrenia. Future research with rigorous and transparent methodology investigating clinically important outcomes is warranted.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Are humour-based interventions effective in treating people with schizophrenia?"
}
]
|
query-laysum | 5501 | [
{
"role": "user",
"content": "Abstract: Background\nTotal knee replacement (TKR) is a common intervention for people with end-stage symptomatic knee osteoarthritis, resulting in significant improvements in pain, function and quality of life within three to six months. It is, however, acutely associated with pain, local oedema and blood loss. Post-operative management may include cryotherapy. This is the application of low temperatures to the skin surrounding the surgical site, through ice or cooled water, often delivered using specialised devices. This is an update of a review published in 2012.\nObjectives\nTo evaluate the effect of cryotherapy in the acute phase after TKR (within 48 hours after surgery) on blood loss, pain, transfusion rate, range of motion, knee function, adverse events and withdrawals due to adverse events.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, six other databases and two trials registers, as well as reference lists, related links and conference proceedings on 27 May 2022.\nSelection criteria\nWe included randomised controlled trials or controlled clinical trials comparing cryotherapy with or without other treatments (such as compression, regional nerve block or continuous passive motion) to no treatment, or the other treatment alone, following TKR for osteoarthritis.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, extracted data and assessed risk of bias and certainty of evidence using GRADE. We discussed any disagreements and consulted another review author to resolve them, if required. Major outcomes were blood loss, pain, transfusion rate, knee range of motion, knee function, total adverse events and withdrawals from adverse events. Minor outcomes were analgesia use, knee swelling, length of stay, quality of life, activity level and participant-reported global assessment of success.\nMain results\nWe included 22 trials (20 randomised trials and two controlled clinical trials), with 1839 total participants. The mean ages reflected the TKR population, ranging from 64 to 74 years.\nCryotherapy with compression was compared to no treatment in four studies, and to compression alone in nine studies. Cryotherapy without compression was compared to no treatment in eight studies. One study compared cryotherapy without compression to control with compression alone. We combined all control interventions in the primary analysis.\nCertainty of evidence was low for blood loss (downgraded for bias and inconsistency), pain (downgraded twice for bias) and range of motion (downgraded for bias and indirectness). It was very low for transfusion rate (downgraded for bias, inconsistency and imprecision), function (downgraded twice for bias and once for inconsistency), total adverse events (downgraded for bias, indirectness and imprecision) and withdrawals from adverse events (downgraded for bias, indirectness and imprecision). The nature of cryotherapy made blinding difficult and most studies had a high risk of performance and detection bias.\nLow-certainty evidence from 12 trials (956 participants) shows that cryotherapy may reduce blood loss at one to 13 days after surgery. Blood loss was 825 mL with no cryotherapy and 561 mL with cryotherapy: mean difference (MD) 264 mL less (95% confidence interval (CI) 7 mL less to 516 mL less).\nLow-certainty evidence from six trials (530 participants) shows that cryotherapy may slightly improve pain at 48 hours on a 0- to 10-point visual analogue scale (lower scores indicate less pain). 
Pain was 4.8 points with no cryotherapy and 3.16 points with cryotherapy: MD 1.6 points lower (95% CI 2.3 lower to 1.0 lower).\nWe are uncertain whether cryotherapy improves transfusion rate at zero to 13 days after surgery. The transfusion rate was 37% with no cryotherapy and 79% with cryotherapy (risk ratio (RR) 2.13, 95% CI 0.04 to 109.63; 2 trials, 91 participants; very low-certainty evidence).\nLow-certainty evidence from three trials (174 participants) indicates cryotherapy may improve range of motion at discharge: it was 62.9 degrees with no cryotherapy and 71.2 degrees with cryotherapy: MD 8.3 degrees greater (95% CI 3.6 degrees more to 13.1 degrees more).\nWe are uncertain whether cryotherapy improves function two weeks after surgery. Function was 75.4 points on the 0- to 100-point Dutch Western Ontario and McMaster Universities Arthritis Index (WOMAC) scale (lower score indicates worse function) in the control group and 88.6 points with cryotherapy (MD 13.2 points better, 95% CI 0.5 worse to 27.1 improved; 4 trials, 296 participants; very low-certainty evidence).\nWe are uncertain whether cryotherapy reduces total adverse events: the risk ratio was 1.30 (95% CI 0.53 to 3.20; 16 trials, 1199 participants; very low-certainty evidence). Adverse events included discomfort, local skin reactions, superficial infections, cold-induced injuries and thrombolytic events.\nWe are uncertain whether cryotherapy reduces withdrawals from adverse events (RR 2.71, 95% CI 0.42 to 17.38; 19 trials, 1347 participants; very low-certainty evidence).\nNo significant benefit was found for secondary outcomes of analgesia use, length of stay, activity level or quality of life. Evidence from seven studies (403 participants) showed improved mid-patella swelling between two and six days after surgery (MD 7.32 mm less, 95% CI 11.79 to 2.84 lower), though not at six weeks and three months after surgery. The included studies did not assess participant-reported global assessment of success.\nAuthors' conclusions\nThe certainty of evidence was low for blood loss, pain and range of motion, and very low for transfusion rate, function, total adverse events and withdrawals from adverse events. We are uncertain whether cryotherapy improves transfusion rate, function, total adverse events or withdrawals from adverse events. We downgraded evidence for bias, indirectness, imprecision and inconsistency. Hence, the potential benefits of cryotherapy on blood loss, pain and range of motion may be too small to justify its use. More well-designed randomised controlled trials focusing especially on clinically meaningful outcomes, such as blood transfusion, and patient-reported outcomes, such as knee function, quality of life, activity level and participant-reported global assessment of success, are required.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if cryotherapy has effects on blood loss, pain and knee function within 48 hours after TKR."
}
]
|
query-laysum | 5502 | [
{
"role": "user",
"content": "Abstract: Background\nThe female perineum becomes suffused and stretched during pregnancy, and further strain during vaginal childbirth contributes to approximately 85% of women experiencing some degree of trauma to the perineal region. Multiple factors play a role in the type and severity of trauma experienced, including parity, delivery method, and local practices. There is ongoing debate about best midwifery practice to reduce perineal trauma. Once perineal trauma has occurred, treatment also varies greatly, depending on its degree and severity, local practice and customs, and personal preference. In order to optimise wound-healing outcomes, it is important that wounds are assessed and managed in an appropriate and timely manner. A perineal wound may cause significant physical and/or psychological impact in the short or long term, however little evidence is available on this subject.\nAntenatal education serves to prepare women and their partners for pregnancy, delivery and the postpartum period. The delivery of this education varies widely in type, content, and nature. This review examined antenatal education which is specifically tailored towards perineal care and wound healing in the postnatal period via formal channels. Appropriate patient education positively impacts on wound-healing rates and compliance with wound care. Risk factors that contribute to the breakdown of wounds and poor healing rates may be addressed antenatally in order to optimise postnatal wound healing. It is important to assess whether or not antenatal wound-care education positively affects perineal healing, in order to empower women to incorporate best practice, evidence-based treatment with this important aspect of self-care in the immediate postnatal period.\nObjectives\nTo evaluate the effects of antenatal education on perineal wound healing in postnatal women who have birthed in a hospital setting, and who have experienced a break in the skin of the perineum as a result of a tear or episiotomy, or both.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 September 2017), ClinicalTrials.gov (8th September 2017), the WHO International Clinical Trials Registry Platform (ICTRP) (8th September 2017) and reference lists of retrieved studies.\nSelection criteria\nWe considered randomised controlled trials (RCTs) which referred to all formal methods of antenatal education and addressed care of a potential perineal wound as a result of a tear or episiotomy, which was experienced by pregnant women who planned to give birth within a hospital setting.\nTrials using a cluster-RCT and a quasi-randomised design would have been eligible for inclusion in this review but none were identified. Cross-over trials were not eligible for inclusion in this review. Studies published in abstract form would have been eligible for inclusion in this review, but none were identified.\nWe planned to consider all formal methods of antenatal education which addressed care of a perineal wound. We also planned to consider all contact points where there was an opportunity for formal education, including midwifery appointments, antenatal education classes, obstetrician appointments, general practitioner appointments and physiotherapist appointments.\nData collection and analysis\nTwo review authors independently assessed titles and abstracts of the studies identified by the search strategy for their eligibility.\nMain results\nNo studies met the inclusion criteria for this review. 
We excluded one study and one other study is ongoing.\nAuthors' conclusions\nWe set out to evaluate the RCT evidence pertaining to the impact of antenatal education on perineal wound healing in postnatal women who have birthed in a hospital setting, and who experienced a break in the skin of the perineum as a result of a tear or episiotomy, or both. However, no studies met the inclusion criteria. There is a lack of evidence concerning whether or not antenatal education relating to perineal wound healing in this cohort of women will change the outcome for these women in relation to wound healing, infection rate, re-attendance or re-admission to hospital, pain, health-related quality of life, maternal bonding, and negative emotional experiences. Further study is warranted in this area given the significant physical, psychological and economic impact of perineal wounds, and the large proportion of childbearing women who have experienced a postnatal wound. The benefits of any future research in this field would be maximised by incorporating women in a range of socio-economic groups, and with a range of healthcare options. This research could take both a qualitative and a quantitative approach and examine the outcomes identified in this review in order to assess fully the potential benefits of a tailored antenatal package, and to make recommendations for future practice. There is currently no evidence to inform practice in this regard.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence in September 2017 but did not find any randomised controlled studies relating to our area of interest. We excluded one study and one other study is not yet complete."
}
]
|
query-laysum | 5503 | [
{
"role": "user",
"content": "Abstract: Background\nMelanoma has one of the fastest rising incidence rates of any cancer. It accounts for a small percentage of skin cancer cases but is responsible for the majority of skin cancer deaths. History-taking and visual inspection of a suspicious lesion by a clinician is usually the first in a series of ‘tests’ to diagnose skin cancer. Establishing the accuracy of visual inspection alone is critical to understating the potential contribution of additional tests to assist in the diagnosis of melanoma.\nObjectives\nTo determine the diagnostic accuracy of visual inspection for the detection of cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in adults with limited prior testing and in those referred for further evaluation of a suspicious lesion. Studies were separated according to whether the diagnosis was recorded face-to-face (in-person) or based on remote (image-based) assessment.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: CENTRAL; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nTest accuracy studies of any design that evaluated visual inspection in adults with lesions suspicious for melanoma, compared with a reference standard of either histological confirmation or clinical follow-up. We excluded studies reporting data for ‘clinical diagnosis’ where dermoscopy may or may not have been used.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold were missing. We estimated summary sensitivities and specificities per algorithm and threshold using the bivariate hierarchical model. We investigated the impact of: in-person test interpretation; use of a purposely developed algorithm to assist diagnosis; and observer expertise.\nMain results\nWe included 49 publications reporting on a total of 51 study cohorts with 34,351 lesions (including 2499 cases), providing 134 datasets for visual inspection. Across almost all study quality domains, the majority of study reports provided insufficient information to allow us to judge the risk of bias, while in three of four domains that we assessed we scored concerns regarding applicability of study findings as 'high'. Selective participant recruitment, lack of detail regarding the threshold for deciding on a positive test result, and lack of detail on observer expertise were particularly problematic.\nAttempts to analyse studies by degree of prior testing were hampered by a lack of relevant information and by the restricted inclusion of lesions selected for biopsy or excision. Accuracy was generally much higher for in-person diagnosis compared to image-based evaluations (relative diagnostic odds ratio of 8.54, 95% CI 2.89 to 25.3, P < 0.001). 
Meta-analysis of in-person evaluations that could be clearly placed on the clinical pathway showed a general trade-off between sensitivity and specificity, with the highest sensitivity (92.4%, 95% CI 26.2% to 99.8%) and lowest specificity (79.7%, 95% CI 73.7% to 84.7%) observed in participants with limited prior testing (n = 3 datasets). Summary sensitivities were lower for those referred for specialist assessment but with much higher specificities (e.g. sensitivity 76.7%, 95% CI 61.7% to 87.1%) and specificity 95.7%, 95% CI 89.7% to 98.3%) for lesions selected for excision, n = 8 datasets). These differences may be related to differences in the spectrum of included lesions, differences in the definition of a positive test result, or to variations in observer expertise. We did not find clear evidence that accuracy is improved by the use of any algorithm to assist diagnosis in all settings. Attempts to examine the effect of observer expertise in melanoma diagnosis were hindered due to poor reporting.\nAuthors' conclusions\nVisual inspection is a fundamental component of the assessment of a suspicious skin lesion; however, the evidence suggests that melanomas will be missed if visual inspection is used on its own. The evidence to support its accuracy in the range of settings in which it is used is flawed and very poorly reported. Although published algorithms do not appear to improve accuracy, there is insufficient evidence to suggest that the ‘no algorithm’ approach should be preferred in all settings. Despite the volume of research evaluating visual inspection, further prospective evaluation of the potential added value of using established algorithms according to the prior testing or diagnostic difficulty of lesions may be warranted.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Only 19 studies (17 in-person studies and 2 image-based studies) were clear whether the test was the first visual inspection of a lesion or was a visual inspection following referral (for example, when patients are referred by a GP to skin specialists for visual inspection).\nFirst in-person visual inspection (3 studies)\nThe results of three studies of 1339 suspicious skin lesions suggest that in a group of 1000 lesions, of which 90 (9%) actually are melanoma:\n- An estimated 268 will have a visual inspection result indicating melanoma is present. Of these, 185 will not be melanoma and will result in an unnecessary biopsy (false-positive results).\n- An estimated 732 will have a visual inspection result indicating that melanoma is not present. Of these, seven will actually have melanoma and would not be sent for biopsy (false-negative results).\nTwo further studies restricted to 4228 suspicious skin lesions that were all selected to be excised found similar results.\nIn-person visual inspection after referral, all lesions selected to be excised (8 studies)\nThe results of eight studies of 5331 suspicious skin lesions suggest that in a group of 1000 lesions, of which 90 (9%) actually are melanoma:\n- An estimated 108 will have a visual inspection result indicating melanoma is present, and of these, 39 will not be melanoma and will result in an unnecessary biopsy (false-positive results).\n- Of the 892 lesions with a visual inspection result indicating that melanoma is not present, 21 will actually be melanoma and would not be sent for biopsy (false-negative results).\nOverall, the number of false-positive results (diagnosing a skin lesion as a melanoma when it is not) was observed to be higher and the number of false-negative results (not recognising a melanoma when it is present) lower for first visual inspections of suspicious skin lesions compared to visual inspection following referral.\nVisual inspection of images of suspicious skin lesions (2 studies)\nAccuracy was much lower for visual inspection of images of lesions compared to visual inspection in person.\nValue of visual inspection checklists\nThere was no evidence that use of a visual inspection checklist or the level of clinical expertise changed diagnostic accuracy."
}
]
|
query-laysum | 5505 | [
{
"role": "user",
"content": "Abstract: Background\nPerception is the ability to understand information from our senses. It allows us to experience and meaningfully interact with our environment. A stroke may impair perception in up to 70% of stroke survivors, leading to distress, increased dependence on others, and poorer quality of life. Interventions to address perceptual disorders may include assessment and screening, rehabilitation, non-invasive brain stimulation, pharmacological and surgical approaches.\nObjectives\nTo assess the effectiveness of interventions aimed at perceptual disorders after stroke compared to no intervention or control (placebo, standard care, attention control), on measures of performance in activities of daily living.\nSearch methods\nWe searched the trials registers of the Cochrane Stroke Group, CENTRAL, MEDLINE, Embase, and three other databases to August 2021. We also searched trials and research registers, reference lists of studies, handsearched journals, and contacted authors.\nSelection criteria\nWe included randomised controlled trials (RCTs) of adult stroke survivors with perceptual disorders. We defined perception as the specific mental functions of recognising and interpreting sensory stimuli and included hearing, taste, touch, smell, somatosensation, and vision. Our definition of perception excluded visual field deficits, neglect/inattention, and pain.\nData collection and analysis\nOne review author assessed titles, with two review authors independently screening abstracts and full-text articles for eligibility. One review author extracted, appraised, and entered data, which were checked by a second author. We assessed risk of bias (ROB) using the ROB-1 tool, and quality of evidence using GRADE.\nA stakeholder group, comprising stroke survivors, carers, and healthcare professionals, was involved in this review update.\nMain results\nWe identified 18 eligible RCTs involving 541 participants. The trials addressed touch (three trials, 70 participants), somatosensory (seven trials, 196 participants) and visual perception disorders (seven trials, 225 participants), with one (50 participants) exploring mixed touch-somatosensory disorders. None addressed stroke-related hearing, taste, or smell perception disorders. All but one examined the effectiveness of rehabilitation interventions; the exception evaluated non-invasive brain stimulation. For our main comparison of active intervention versus no treatment or control, one trial reported our primary outcome of performance in activities of daily living (ADL):\nSomatosensory disorders: one trial (24 participants) compared an intervention with a control intervention and reported an ADL measure.\nTouch perception disorder: no trials measuring ADL compared an intervention with no treatment or with a control intervention.\nVisual perception disorders: no trials measuring ADL compared an intervention with no treatment or control.\nIn addition, six trials reported ADL outcomes in a comparison of active intervention versus active intervention, relating to somatosensation (three trials), touch (one trial) and vision (two trials).\nAuthors' conclusions\nFollowing a detailed, systematic search, we identified limited RCT evidence of the effectiveness of interventions for perceptual disorders following stroke. There is insufficient evidence to support or refute the suggestion that perceptual interventions are effective. More high-quality trials of interventions for perceptual disorders in stroke are needed. 
They should recruit sufficient participant numbers, include a 'usual care' comparison, and measure longer-term functional outcomes, at time points beyond the initial intervention period. People with impaired perception following a stroke should continue to receive neurorehabilitation according to clinical guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Stroke can affect a person’s ability to process and understand information from their senses, including hearing, smell, somatosensation (sense of body temperature, position, and movement), taste, touch, and vision. Processing and understanding information from these senses is known as perception.\nLittle research has been carried out to find out whether any treatments for stroke-related problems with perception are helpful.\nPeople with stroke-related problems with hearing, smell, somatosensation, taste, touch, and vision should continue to participate in rehabilitation as recommended by clinical guidelines. Healthcare professionals should continue to offer rehabilitation for stroke-related perceptual problems in keeping with current clinical guidelines and recommendations."
}
]
|
query-laysum | 5506 | [
{
"role": "user",
"content": "Abstract: Background\nDysphagia (swallowing problems), which is common after stroke, is associated with increased risk of death or dependency, occurrence of pneumonia, poor quality of life, and longer hospital stay. Treatments provided to improve dysphagia are aimed at accelerating recovery of swallowing function and reducing these risks. This is an update of the review first published in 1999 and updated in 2012.\nObjectives\nTo assess the effects of swallowing therapy on death or dependency among stroke survivors with dysphagia within six months of stroke onset.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (26 June 2018), the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 6) in the Cochrane Library (searched 26 June 2018), MEDLINE (26 June 2018), Embase (26 June 2018), the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (26 June 2018), Web of Science Core Collection (26 June 2018), SpeechBITE (28 June 2016), ClinicalTrials.Gov (26 June 2018), and the World Health Organization International Clinical Trials Registry Platform (26 June 2018). We also searched Google Scholar (7 June 2018) and the reference lists of relevant trials and review articles.\nSelection criteria\nWe sought to include randomised controlled trials (RCTs) of interventions for people with dysphagia and recent stroke (within six months).\nData collection and analysis\nTwo review authors independently applied the inclusion criteria, extracted data, assessed risk of bias, used the GRADE approach to assess the quality of evidence, and resolved disagreements through discussion with the third review author (PB). We used random-effects models to calculate odds ratios (ORs), mean differences (MDs), and standardised mean differences (SMDs), and provided 95% confidence intervals (CIs) for each.\nThe primary outcome was functional outcome, defined as death or dependency (or death or disability), at the end of the trial. Secondary outcomes were case fatality at the end of the trial, length of inpatient stay, proportion of participants with dysphagia at the end of the trial, swallowing ability, penetration aspiration score, or pneumonia, pharyngeal transit time, institutionalisation, and nutrition.\nMain results\nWe added 27 new studies (1777 participants) to this update to include a total of 41 trials (2660 participants).\nWe assessed the efficacy of swallowing therapy overall and in subgroups by type of intervention: acupuncture (11 studies), behavioural interventions (nine studies), drug therapy (three studies), neuromuscular electrical stimulation (NMES; six studies), pharyngeal electrical stimulation (PES; four studies), physical stimulation (three studies), transcranial direct current stimulation (tDCS; two studies), and transcranial magnetic stimulation (TMS; nine studies).\nSwallowing therapy had no effect on the primary outcome (death or dependency/disability at the end of the trial) based on data from one trial (two data sets) (OR 1.05, 95% CI 0.63 to 1.75; 306 participants; 2 studies; I² = 0%; P = 0.86; moderate-quality evidence). Swallowing therapy had no effect on case fatality at the end of the trial (OR 1.00, 95% CI 0.66 to 1.52; 766 participants; 14 studies; I² = 6%; P = 0.99; moderate-quality evidence). Swallowing therapy probably reduced length of inpatient stay (MD -2.9, 95% CI -5.65 to -0.15; 577 participants; 8 studies; I² = 11%; P = 0.04; moderate-quality evidence). 
Researchers found no evidence of a subgroup effect based on testing for subgroup differences (P = 0.54). Swallowing therapy may have reduced the proportion of participants with dysphagia at the end of the trial (OR 0.42, 95% CI 0.32 to 0.55; 1487 participants; 23 studies; I² = 0%; P = 0.00001; low-quality evidence). Trial results show no evidence of a subgroup effect based on testing for subgroup differences (P = 0.91). Swallowing therapy may improve swallowing ability (SMD -0.66, 95% CI -1.01 to -0.32; 1173 participants; 26 studies; I² = 86%; P = 0.0002; very low-quality evidence). We found no evidence of a subgroup effect based on testing for subgroup differences (P = 0.09). We noted moderate to substantial heterogeneity between trials for these interventions. Swallowing therapy did not reduce the penetration aspiration score (i.e. it did not reduce radiological aspiration) (SMD -0.37, 95% CI -0.74 to -0.00; 303 participants; 11 studies; I² = 46%; P = 0.05; low-quality evidence). Swallowing therapy may reduce the incidence of chest infection or pneumonia (OR 0.36, 95% CI 0.16 to 0.78; 618 participants; 9 studies; I² = 59%; P = 0.009; very low-quality evidence).\nAuthors' conclusions\nModerate- and low-quality evidence suggests that swallowing therapy did not have a significant effect on the outcomes of death or dependency/disability, case fatality at the end of the trial, or penetration aspiration score. However, swallowing therapy may have reduced length of hospital stay, dysphagia, and chest infections, and may have improved swallowing ability. However, these results are based on evidence of variable quality, involving a variety of interventions. Further high-quality trials are needed to test whether specific interventions are effective.\n\nGiven the provided abstract, please respond to the following query: \"Question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to assess the effectiveness of swallowing therapy for stroke survivors with dysphagia (difficulty in swallowing). We looked at swallowing therapy in survivors up to six months after stroke."
}
]
|
query-laysum | 5507 | [
{
"role": "user",
"content": "Abstract: Background\nPrevention of cognitive impairment and dementia is an important public health goal. Epidemiological evidence shows a relationship between cognitive impairment and Type 2 diabetes mellitus. The risk of dementia increases with duration of disease. This updated systematic review investigated the effect on cognitive function of the type of treatment and level of metabolic control in people with Type 2 diabetes.\nObjectives\nTo assess the effects of different strategies for managing Type 2 diabetes mellitus on cognitive function and the incidence of dementia.\nSearch methods\nWe searched ALOIS (the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (CDCIG)), the Cochrane Library, MEDLINE, Embase, PsycINFO, CINAHL and LILACS on 15 October 2016. ALOIS contains records from all major health care databases, (CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, LILACS), as well as from many trials' registers and grey literature sources.\nSelection criteria\nWe included randomised controlled trials (RCTs) which compared two or more different treatments for Type 2 diabetes mellitus and in which cognitive function was measured at baseline and after treatment.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of the included RCTs. We pooled data for comparable trials and estimated the effects of treatment by using risk ratios (RRs) and mean differences (MDs), according to the nature of the outcome. We assessed the quality of the evidence using GRADE methods.\nMain results\nWe identified seven eligible studies but only four provided data we could include in efficacy analyses. Two of these studies compared intensive versus standard glycaemic control and two compared different pharmacological treatments. All studies were at unclear risk of bias in at least two domains and one large study was at high risk of performance and detection bias.\n(a) Two studies with 13,934 participants at high cardiovascular risk provided efficacy data on intensive versus standard glycaemic control. A third study with 1791 participants provided additional data on hypoglycaemic episodes and mortality. There is probably no difference between treatment groups in the number of participants who decline by at least 3 points on the Mini–Mental State Examination (MMSE) over five years (RR 0.98, 95% CI 0.88 to 1.08; 1 study; n = 11,140; moderate-quality evidence); and there may also be little or no difference in the incidence of dementia (RR 1.27, 95% CI 0.87 to 1.85; 1 study; n = 11,140; low-quality evidence). From another study, there was probably little or no difference in MMSE score after 40 months (MD −0.01, 95% CI −0.18 to 0.16; 1 study; n = 2794; moderate quality evidence). Participants exposed to the intensive glycaemic control strategy probably experience more episodes of severe hypoglycaemia than those who have standard treatment (RR 2.18, 95% CI 1.52 to 3.14; 2 studies; n = 12,827; moderate-quality evidence). The evidence from these trials suggests that the intensity of glycaemic control may have little or no effect on all-cause mortality (RR 0.99, 95% CI 0.87 to 1.13; 3 studies; n = 15,888; low-quality evidence).\n(b) One study with 156 participants compared glibenclamide (glyburide) with repaglinide. There may be a small advantage of glibenclamide on global cognitive function measured with the MMSE after 12 months (MD −0.90, 95% CI −1.68 to −0.12; low-quality evidence). 
No data were reported on the incidence of dementia, hypoglycaemic events or all-cause mortality.\n(c) One study with 145 participants compared rosiglitazone plus metformin to glibenclamide (glyburide) plus metformin over 24 weeks. It reported only on cognitive subdomains and not on global cognitive function, incidence of MCI or dementia, hypoglycaemic events or all causes of mortality.\nAuthors' conclusions\nWe found no good evidence that any specific treatment or treatment strategy for Type 2 diabetes can prevent or delay cognitive impairment. The best available evidence related to the comparison of intensive with standard glycaemic control strategies. Here there was moderate-quality evidence that the strategies do not differ in their effect on global cognitive functioning over 40 to 60 months.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We judged the quality of the evidence for all outcomes to be low or moderate due to risk of bias in the included studies, small sample sizes, and imprecise estimates of the effects. This means that our confidence in the results is limited."
}
]
|
query-laysum | 5508 | [
{
"role": "user",
"content": "Abstract: Background\nGastric tubes are commonly used for the administration of drugs and tube feeding for people who are unable to swallow. Feeding via a tube misplaced in the trachea can result in severe pneumonia. Therefore, the confirmation of tube placement in the stomach after tube insertion is important. Recent studies have reported that ultrasonography provides good diagnostic accuracy estimates in the confirmation of appropriate tube placement. Hence, ultrasound could provide a promising alternative to X-rays in the confirmation of tube placement, especially in settings where X-ray facilities are unavailable or difficult to access.\nObjectives\nTo assess the diagnostic accuracy of ultrasound for gastric tube placement confirmation.\nSearch methods\nWe searched the Cochrane Library (2016, Issue 3), MEDLINE (to March 2016), Embase (to March 2016), National Institute for Health Research (NIHR) PROSPERO Register (to May 2016), Aggressive Research Intelligence Facility Databases (to May 2016), ClinicalTrials.gov (to May 2016), ISRCTN registry (May 2016), World Health Organization International Clinical Trials Registry Platform (to May 2016) and reference lists of articles, and contacted study authors.\nSelection criteria\nWe included studies that evaluated the diagnostic accuracy of naso- and orogastric tube placement confirmed by ultrasound visualization using X-ray visualization as the reference standard. We included cross-sectional studies, and case-control studies. We excluded case series or case reports. Studies were excluded if X-ray visualization was not the reference standard or if the tube being placed was a gastrostomy or enteric tube.\nData collection and analysis\nTwo review authors independently assessed the risk of bias and extracted data from each of the included studies. We contacted authors of the included studies to obtain missing data.\nMain results\nWe identified 10 studies (545 participants and 560 tube insertions) which met our inclusion criteria.\nNo study was assigned low risk of bias or low concern in every QUADAS-2 domain. We judged only three (30%) studies to have low risk of bias in the participant selection domain because they performed ultrasound after they confirmed correct position by other methods.\nFew data (43 participants) were available for misplacement detection (specificity) due to the low incidence of misplacement. We did not perform a meta-analysis because of considerable heterogeneity of the index test such as the difference of echo window, the combination of ultrasound with other confirmation methods (e.g. saline flush visualization by ultrasound) and ultrasound during the insertion of the tube. For all settings, sensitivity estimates for individual studies ranged from 0.50 to 1.00 and specificity estimates from 0.17 to 1.00. For settings where X-ray was not readily available and participants underwent gastric tube insertion for drainage (four studies, 305 participants), sensitivity estimates of ultrasound in combination with other confirmatory tests ranged from 0.86 to 0.98 and specificity estimates of 1.00 with wide confidence intervals.\nFor the studies using ultrasound alone (four studies, 314 participants), sensitivity estimates ranged from 0.91 to 0.98 and specificity estimates from 0.67 to 1.00.\nAuthors' conclusions\nOf 10 studies that assessed the diagnostic accuracy of gastric tube placement, few studies had a low risk of bias. 
Based on limited evidence, ultrasound does not have sufficient accuracy as a single test to confirm gastric tube placement. However, in settings where X-ray is not readily available, ultrasound may be useful to detect misplaced gastric tubes. Larger studies are needed to determine the possibility of adverse events when ultrasound is used to confirm tube placement.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most studies showed good performance for correct placement of the tube. However, few data were available for incorrect placement of the tube and the possible complications of a misplaced tube. Among the included studies, only 43 participants had a misplaced tube. None of the studies reported complications during ultrasound use. Three methods of ultrasound were reported: neck approach, upper abdominal (tummy) approach and a combination of both. No included studies indicated that ultrasound had sufficient accuracy as a single test for the confirmation of gastric tube placement for feeding. In contrast, ultrasound combined with other tests (e.g. saline flush visualization (pushing salt solution through the tube and seeing it inside the stomach by ultrasound)) might be useful for the confirmation of tubes used for gastric drainage."
}
]
|
query-laysum | 5509 | [
{
"role": "user",
"content": "Abstract: Background\nInfants born preterm (before 37 weeks' gestation) have poorer outcomes than infants at term, particularly if born before 32 weeks. Early cord clamping has been standard practice over many years, and enables quick transfer of the infant to neonatal care. Delayed clamping allows blood flow between the placenta, umbilical cord and baby to continue, and may aid transition. Keeping baby at the mother's side enables neonatal care with the cord intact and this, along with delayed clamping, may improve outcomes. Umbilical cord milking (UCM) is proposed for increasing placental transfusion when immediate care for the preterm baby is needed. This Cochrane Review is a further update of a review first published in 2004 and updated in 2012.\nObjectives\nTo assess the effects on infants born at less than 37 weeks' gestation, and their mothers of: 1) delayed cord clamping (DCC) compared with early cord clamping (ECC) both with immediate neonatal care after cord clamping; 2) DCC with immediate neonatal care with cord intact compared with ECC with immediate neonatal care after cord clamping; 3) DCC with immediate neonatal care after cord clamping compared with UCM; 4) UCM compared with ECC with immediate neonatal care after cord clamping.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (10 November 2017), and reference lists of retrieved studies. We updated the search in November 2018 and added nine new trial reports to the awaiting classification section to be assessed at the next update.\nSelection criteria\nRandomised controlled trials (RCTs) comparing delayed with early clamping of the umbilical cord (with immediate neonatal care after cord clamping or with cord intact) and UCM for births before 37 weeks' gestation. Quasi-RCTs were excluded.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. Random-effects are used in all meta-analyses. Review authors assessed the certainty of the evidence using the GRADE approach.\nMain results\nThis update includes forty-eight studies, involving 5721 babies and their mothers, with data available from 40 studies involving 4884 babies and their mothers. Babies were between 24 and 36+6 weeks' gestation at birth and multiple births were included. The data are mostly from high-income countries. Delayed clamping ranged between 30 to 180 seconds, with most studies delaying for 30 to 60 seconds. Early clamping was less than 30 seconds and often immediate. UCM was mostly before cord clamping but some were milked after cord clamping. We undertook subgroup analysis by gestation and type of intervention, and sensitivity analyses by low risk of selection and attrition bias.\nAll studies were high risk for performance bias and many were unclear for other aspects of risk of bias. 
Certainty of the evidence using GRADE was mostly low, mainly due to imprecision and unclear risk of bias.\nDelayed cord clamping (DCC) versus early cord clamping (ECC) both with immediate neonatal care after cord clamping (25 studies, 3100 babies and their mothers)\nDCC probably reduces the number of babies who die before discharge compared with ECC (average risk ratio (aRR) 0.73, 95% confidence interval (CI) 0.54 to 0.98, 20 studies, 2680 babies (moderate certainty)).\nNo studies reported on 'Death or neurodevelopmental impairment' in the early years'.\nDCC may make little or no difference to the number of babies with severe intraventricular haemorrhage (IVH grades 3 and 4) (aRR 0.94, 95% CI 0.63 to 1.39, 10 studies, 2058 babies, low certainty) but slightly reduces the number of babies with any grade IVH (aRR 0.83, 95% CI 0.70 to 0.99, 15 studies, 2333 babies, high certainty).\nDCC has little or no effect on chronic lung disease (CLD) (aRR 1.04, 95% CI 0.94 to 1.14, 6 studies, 1644 babies, high certainty).\nDue to insufficient data, we were unable to form conclusions regarding periventricular leukomalacia (PVL) (aRR 0.58, 95% CI 0.26 to 1.30, 4 studies, 1544 babies, low certainty) or maternal blood loss of 500 mL or greater (aRR 1.14, 95% CI 0.07 to 17.63, 2 studies, 180 women, very low certainty).\nWe identified no important heterogeneity in subgroup or sensitivity analyses.\nDelayed cord clamping (DCC) with immediate neonatal care with cord intact versus early cord clamping (ECC) (one study, 276 babies and their mothers)\nThere are insufficient data to be confident in our findings, but DCC with immediate neonatal care with cord intact may reduce the number of babies who die before discharge, although the data are also compatible with a slight increase in mortality, compared with ECC (aRR 0.47, 95% CI 0.20 to 1.11, 1 study, 270 babies, low certainty). DCC may also reduce the number of babies who die or have neurodevelopmental impairment in early years (aRR 0.61, 95% CI 0.39 to 0.96, 1 study, 218 babies, low certainty). There may be little or no difference in: severe IVH; all grades IVH; PVL; CLD; maternal blood loss ≥ 500 mL, assessed as low certainty mainly due to serious imprecision.\nDelayed cord clamping (DCC) with immediate neonatal care after cord clamping versus umbilical cord milking (UCM) (three studies, 322 babies and their mothers) and UCM versus early cord clamping (ECC) (11 studies, 1183 babies and their mothers)\nThere are insufficient data for reliable conclusions about the comparative effects of UCM compared with delayed or early clamping (mostly low or very low certainty).\nAuthors' conclusions\nDelayed, rather than early, cord clamping may reduce the risk of death before discharge for babies born preterm. There is insufficient evidence to show what duration of delay is best, one or several minutes, and therefore the optimum time to clamp the umbilical cord remains unclear. Whilst the current evidence supports not clamping the cord before 30 seconds at preterm births, future trials could compare different lengths of delay. Immediate neonatal care with the cord intact requires further study, and there are insufficient data on UCM.\nThe nine new reports awaiting further classification may alter the conclusions of the review once assessed.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this Cochrane Review, we set out to determine if delayed cord clamping or umbilical cord milking improves the health outcomes for babies born before 37 weeks' gestation. These interventions were compared with early cord clamping."
}
]
|
query-laysum | 5510 | [
{
"role": "user",
"content": "Abstract: Background\nStudies evaluating mass drug administration (MDA) in malarious areas have shown reductions in malaria immediately following the intervention. However, these effects vary by endemicity and are not sustained. Since the 2013 version of this Cochrane Review on this topic, additional studies have been published.\nObjectives\nPrimary objectives\nTo assess the sustained effect of MDA with antimalarial drugs on:\n- the reduction in malaria transmission in moderate- to high-transmission settings;\n- the interruption of transmission in very low- to low-transmission settings.\nSecondary objective\nTo summarize the risk of drug-associated adverse effects following MDA.\nSearch methods\nWe searched several trial registries, citation databases, conference proceedings, and reference lists for relevant articles up to 11 February 2021. We also communicated with researchers to identify additional published and unpublished studies.\nSelection criteria\nRandomized controlled trials (RCTs) and non-randomized studies comparing MDA to no MDA with balanced co-interventions across study arms and at least two geographically distinct sites per study arm.\nData collection and analysis\nTwo review authors independently assessed trials for eligibility and extracted data. We calculated relative risk (RR) and rate ratios with corresponding 95% confidence intervals (CIs) to compare prevalence and incidence, respectively, in MDA compared to no-MDA groups. We stratified analyses by malaria transmission and by malaria species. For cluster-randomized controlled trials (cRCTs), we adjusted standard errors using the intracluster correlation coefficient. We assessed the certainty of the evidence using the GRADE approach. For non-randomized controlled before-and-after (CBA) studies, we summarized the data using difference-in-differences (DiD) analyses.\nMain results\nThirteen studies met our criteria for inclusion. Ten were cRCTs and three were CBAs.\nCluster-randomized controlled trials\nModerate- to high-endemicity areas (prevalence ≥ 10%)\nWe included data from two studies conducted in The Gambia and Zambia.\nAt one to three months after MDA, the Plasmodium falciparum (hereafter, P falciparum) parasitaemia prevalence estimates may be higher compared to control but the CIs included no effect (RR 1.76, 95% CI 0.58 to 5.36; Zambia study; low-certainty evidence); parasitaemia incidence was probably lower (RR 0.61, 95% CI 0.40 to 0.92; The Gambia study; moderate-certainty evidence); and confirmed malaria illness incidence may be substantially lower, but the CIs included no effect (rate ratio 0.41, 95% CI 0.04 to 4.42; Zambia study; low-certainty evidence).\nAt four to six months after MDA, MDA showed little or no effect on P falciparum parasitaemia prevalence (RR 1.18, 95% CI 0.89 to 1.56; The Gambia study; moderate-certainty evidence) and, no persisting effect was demonstrated with parasitaemia incidence (rate ratio 0.91, 95% CI 0.55 to 1.50; The Gambia study).\nVery low- to low-endemicityareas (prevalence < 10%)\nSeven studies from Cambodia, Laos, Myanmar (two studies), Vietnam, Zambia, and Zanzibar evaluated the effects of multiple rounds of MDA on P falciparum. Immediately following MDA (less than one month after MDA), parasitaemia prevalence was reduced (RR 0.12, 95% CI 0.03 to 0.52; one study; low-certainty evidence). 
At one to three months after MDA, there was a reduction in both parasitaemia incidence (rate ratio 0.37, 95% CI 0.21 to 0.55; 1 study; moderate-certainty evidence) and prevalence (RR 0.25, 95% CI 0.15 to 0.41; 7 studies; low-certainty evidence). For confirmed malaria incidence, absolute rates were low, and it is uncertain whether MDA had an effect on this outcome (rate ratio 0.58, 95% CI 0.12 to 2.73; 2 studies; very low-certainty evidence).\nFor P falciparum prevalence, the relative differences declined over time, from RR 0.63 (95% CI 0.36 to 1.12; 4 studies) at four to six months after MDA, to RR 0.86 (95% CI 0.55 to 1.36; 5 studies) at 7 to 12 months after MDA. Longer-term prevalence estimates showed overall low absolute risks, and relative effect estimates of the effect of MDA on prevalence varied from RR 0.82 (95% CI 0.20 to 3.34) at 13 to 18 months after MDA, to RR 1.25 (95% CI 0.25 to 6.31) at 31 to 36 months after MDA in one study.\nFive studies from Cambodia, Laos, Myanmar (2 studies), and Vietnam evaluated the effect of MDA on Plasmodium vivax (hereafter, P vivax). One month following MDA, P vivax prevalence was lower (RR 0.18, 95% CI 0.08 to 0.40; 1 study; low-certainty evidence). At one to three months after MDA, there was a reduction in P vivax prevalence (RR 0.15, 95% CI 0.10 to 0.24; 5 studies; low-certainty evidence). The immediate reduction on P vivax prevalence was not sustained over time, from RR 0.78 (95% CI 0.63 to 0.95; 4 studies) at four to six months after MDA, to RR 1.12 (95% CI 0.94 to 1.32; 5 studies) at 7 to 12 months after MDA. One of the studies in Myanmar provided estimates of longer-term effects, where overall absolute risks were low, ranging from RR 0.81 (95% CI 0.44 to 1.48) at 13 to 18 months after MDA, to RR 1.20 (95% CI 0.44 to 3.29) at 31 to 36 months after MDA.\nNon-randomized studies\nThree CBA studies were conducted in moderate- to high-transmission areas in Burkina Faso, Kenya, and Nigeria. There was a reduction in P falciparum parasitaemia prevalence in MDA groups compared to control groups during MDA (DiD range: -15.8 to -61.4 percentage points), but the effect varied at one to three months after MDA (DiD range: 14.9 to -41.1 percentage points).\nAuthors' conclusions\nIn moderate- to high-transmission settings, no studies reported important effects on P falciparum parasitaemia prevalence within six months after MDA. In very low- to low-transmission settings, parasitaemia prevalence and incidence were reduced initially for up to three months for both P falciparum and P vivax; longer-term data did not demonstrate an effect after four months, but absolute risks in both intervention and control groups were low. No studies provided evidence of interruption of malaria transmission.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included studies available up to 11 February 2021."
}
]
|
query-laysum | 5511 | [
{
"role": "user",
"content": "Abstract: Background\nVitamin A deficiency is a major public health problem in low and middle income countries. Vitamin A supplementation in children six months of age and older has been found to be beneficial, but no effect of supplementation has been noted for children between one and five months of age. Supplementation during the neonatal period has been suggested to have an impact by increasing body stores in early infancy.\nObjectives\nTo evaluate the role of vitamin A supplementation for term neonates in low and middle income countries with respect to prevention of mortality and morbidity.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 2), MEDLINE via PubMed (1966 to 13 March 2016), Embase (1980 to 13 March 2016) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 13 March 2016). We also searched clinical trials databases, conference proceedings and reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nRandomised and quasi-randomised controlled trials. Also trials with a factorial design.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted study data. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the quality of evidence.\nMain results\nWe included 12 trials (168,460 neonates) in this review, with only a few trials reporting disaggregated data for term infants. Therefore, we analysed data and presented estimates for term infants (when specified) and for all infants.\nData for term neonates from three studies did not show a statistically significant effect on the risk of infant mortality at six months in the vitamin A group compared with the control group (typical risk ratio (RR) 0.80; 95% confidence interval (CI) 0.54 to 1.18; I2 = 63%). Analysis of data for all infants from 11 studies revealed no evidence of a significant reduction in the risk of infant mortality at six months among neonates supplemented with vitamin A compared with control neonates (typical RR 0.98, 95% CI 0.89 to 1.07; I2 = 47%). We observed similar results for infant mortality at 12 months of age with no significant effect of vitamin A compared with control (typical RR 1.04, 95% CI 0.94 to 1.15; I2 = 47%). Limited data were available for the outcomes of cause-specific mortality and morbidity, vitamin A deficiency, anaemia and adverse events.\nAuthors' conclusions\nGiven the high burden of death among children younger than five years of age in low and middle income countries, and the fact that mortality in infancy is a major contributory cause, it is critical to obtain sound scientific evidence of the effect of vitamin A supplementation during the neonatal period on infant mortality and morbidity. Evidence provided in this review does not indicate a potential beneficial effect of vitamin A supplementation among neonates at birth in reducing mortality during the first six months or 12 months of life. 
Given this finding and the absence of a clear indication of the biological mechanism through which vitamin A could affect mortality, along with substantial conflicting findings from individual studies conducted in settings with potentially varying levels of maternal vitamin A deficiency and infant mortality, absence of follow-up studies assessing any long-term impact of a bulging fontanelle after supplementation and the finding of a potentially harmful effect among female infants, additional research is warranted before a decision can be reached regarding policy recommendations for this intervention.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The present review identified 12 studies including 168,460 newborns in which the intervention group was supplemented with vitamin A during the newborn period."
}
]
|
query-laysum | 5512 | [
{
"role": "user",
"content": "Abstract: Background\nTraditionally, laparoscopic cholecystectomy is performed using two 10-mm ports and two 5-mm ports. Recently, a reduction in the number of ports has been suggested as a modification of the standard technique with a view to decreasing pain and improving cosmesis. The safety and effectiveness of using fewer-than-four ports has not yet been established.\nObjectives\nTo assess the benefits (such as improvement in cosmesis and earlier return to activity) and harms (such as increased complications) of using fewer-than-four ports (fewer-than-four-ports laparoscopic cholecystectomy) versus four ports in people undergoing laparoscopic cholecystectomy for any reason (symptomatic gallstones, acalculous cholecystitis, gallbladder polyp, or any other condition).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 8, 2013), MEDLINE, EMBASE, Science Citation Index Expanded, and the World Health Organization International Clinical Trials Registry Platform portal to September 2013.\nSelection criteria\nWe included all randomised clinical trials comparing fewer-than-four ports versus four ports, that is, with standard laparoscopic cholecystectomy that is performed with two ports of at least 10-mm incision and two ports of at least 5-mm incision.\nData collection and analysis\nTwo review authors independently identified the trials and extracted the data. We analysed the data using both the fixed-effect and the random-effects models. For each outcome, we calculated the risk ratio (RR) or mean difference (MD) with 95% confidence intervals (CI) based on intention-to-treat analysis, whenever possible.\nMain results\nWe found nine trials with 855 participants that randomised participants to fewer-than-four-ports laparoscopic cholecystectomy (n = 427) versus four-port laparoscopic cholecystectomy (n = 428). Most trials included low anaesthetic risk participants undergoing elective laparoscopic cholecystectomy. Seven of the nine trials used a single port laparoscopic cholecystectomy and the remaining two trials used three-port laparoscopic cholecystectomy as the experimental intervention. Only one trial including 70 participants had low risk of bias. Fewer-than-four-ports laparoscopic cholecystectomy could be completed successfully in more than 90% of participants in most trials. The remaining participants were mostly converted to four-port laparoscopic cholecystectomy but some participants had to undergo open cholecystectomy.\nThere was no mortality in either group in the seven trials that reported mortality (318 participants in fewer-than-four-ports laparoscopic cholecystectomy group and 316 participants in four-port laparoscopic cholecystectomy group). The proportion of participants with serious adverse events was low in both treatment groups and the estimated RR was compatible with a reduction and substantial increased risk with the fewer-than-four-ports group (6/318 (1.9%)) and four-port laparoscopic cholecystectomy group (0/316 (0%)) (RR 3.93; 95% CI 0.86 to 18.04; 7 trials; 634 participants; very low quality evidence). 
The estimated difference in the quality of life (measured between 10 and 30 days) was imprecise (standardised mean difference (SMD) 0.18; 95% CI -0.05 to 0.42; 4 trials; 510 participants; very low quality evidence), as was the proportion of participants in whom the laparoscopic cholecystectomy had to be converted to open cholecystectomy between the groups (fewer-than-four ports 3/289 (adjusted proportion 1.2%) versus four port: 5/292 (1.7%); RR 0.68; 95% CI 0.19 to 2.35; 5 trials; 581 participants; very low quality evidence). The fewer-than-four-ports laparoscopic cholecystectomy took 14 minutes longer to complete (MD 14.44 minutes; 95% CI 5.95 to 22.93; 9 trials; 855 participants; very low quality evidence). There was no clear difference in hospital stay between the groups (MD -0.01 days; 95% CI -0.28 to 0.26; 6 trials; 731 participants) or in the proportion of participants discharged as day surgery (RR 0.92; 95% CI 0.70 to 1.22; 1 trial; 50 participants; very low quality evidence) between the two groups. The times taken to return to normal activity and work were shorter by two days in the fewer-than-four-ports group compared with four-port laparoscopic cholecystectomy (return to normal activity: MD -1.20 days; 95% CI -1.58 to -0.81; 2 trials; 325 participants; very low quality evidence; return to work: MD -2.00 days; 95% CI -3.31 to -0.69; 1 trial; 150 participants; very low quality evidence). There was no significant difference in cosmesis scores at 6 to 12 months between the two groups (SMD 0.37; 95% CI -0.10 to 0.84; 2 trials; 317 participants; very low quality evidence).\nAuthors' conclusions\nThere is very low quality evidence that is insufficient to determine whether there is any significant clinical benefit in using fewer-than-four-ports laparoscopic cholecystectomy compared with four-port laparoscopic cholecystectomy. The safety profile of using fewer-than-four ports is yet to be established and fewer-than-four-ports laparoscopic cholecystectomy should be reserved for well-designed randomised clinical trials.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Fewer-than-four-ports laparoscopic cholecystectomy could be completed successfully in more than 90% of participants in most of the trials. The remaining participants were mostly converted to four-port laparoscopic cholecystectomy but some participants had to undergo open cholecystectomy (through a large incision in the abdomen). There was no mortality in either group in the seven trials that reported mortality (634 participants in the two groups). There was no significant difference in the proportion of participants who developed serious complications, quality of life between 10 and 30 days after operation, proportion of participants in whom the laparoscopic operation had to be converted to open cholecystectomy, or in the length of hospital stay between the groups. Fewer-than-four-ports laparoscopic cholecystectomy took about 15 minutes longer to complete than four-port laparoscopic cholecystectomy. The time taken to return to normal activity was one day shorter and time taken to return to work two days shorter in the fewer-than-four-ports group compared with four-port laparoscopic cholecystectomy. There was no significant difference in the cosmetic appearance between the two groups at 6 to 12 months after surgery. There appears to be no advantage of fewer-than-four-ports laparoscopic cholecystectomy in terms of decreasing surgical complications, hospital stay, or in improving quality of life and cosmetic appearance. In contrast, the safety of fewer-than-four port laparoscopic cholecystectomy is yet to be established. Fewer-than-four-ports laparoscopic cholecystectomy cannot be recommended routinely outside well-designed clinical trials."
}
]
|
query-laysum | 5513 | [
{
"role": "user",
"content": "Abstract: Background\nGenital infections caused by Chlamydia trachomatis are the most prevalent bacterial sexually transmitted infection worldwide. Screening of sexually active young adults to detect and treat asymptomatic infections might reduce chlamydia transmission and prevent reproductive tract morbidity, particularly pelvic inflammatory disease (PID) in women, which can cause tubal infertility and ectopic pregnancy.\nObjectives\nTo assess the effects and safety of chlamydia screening versus standard care on chlamydia transmission and infection complications in pregnant and non-pregnant women and in men.\nSearch methods\nWe searched the Cochrane Sexually Transmitted Infections Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, LILACS, CINAHL, DARE, PsycINFO and Web of Science electronic databases up to 14 February 2016, together with World Health Organization International Clinical Trials Registry (ICTRP) and ClinicalTrials.gov. We also handsearched conference proceedings, contacted trial authors and reviewed the reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (RCTs) in adult women (non-pregnant and pregnant) and men comparing a chlamydia screening intervention with usual care and reporting on a primary outcome (C. trachomatis prevalence, PID in women, epididymitis in men or incidence of preterm delivery). We included non-randomised controlled clinical trials if there were no RCTs for a primary outcome.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, extracted data and assessed the risk of bias. We resolved disagreements by consensus or adjudication by a third reviewer. We described results in forest plots and conducted meta-analysis where appropriate using a fixed-effect model to estimate risk ratios (RR with 95% confidence intervals, CI) in intervention vs control groups. We conducted a pre-specified sensitivity analysis of the primary outcome, PID incidence, according to the risks of selection and detection bias.\nMain results\nWe included six trials involving 359,078 adult women and men. One trial was at low risk of bias in all six specific domains assessed. Two trials examined the effect of multiple rounds of chlamydia screening on C. trachomatis transmission. A cluster-controlled trial in women and men in the general population in the Netherlands found no change in chlamydia test positivity after three yearly invitations (intervention 4.1% vs control 4.3%, RR 0.96, 95% CI 0.84 to 1.09, 1 trial, 317,304 participants at first screening invitation, low quality evidence). Uptake of the intervention was low (maximum 16%). A cluster-randomised trial in female sex workers in Peru found a reduction in chlamydia prevalence after four years (adjusted RR 0.72, 95% CI 0.54 to 0.98, 1 trial, 4465 participants, low quality evidence).\nFour RCTs examined the effect of chlamydia screening on PID in women 12 months after a single screening offer. In analysis of four trials according to the intention-to-treat principle, the risk of PID was lower in women in intervention than control groups, with little evidence of between-trial heterogeneity (RR 0.68, 95% CI 0.49 to 0.94, I2 7%, 4 trials, 21,686 participants, moderate quality evidence). 
In a sensitivity analysis, the estimated effect of chlamydia screening in two RCTs at low risk of detection bias (RR 0.80, 95% CI 0.55 to 1.17) was compatible with no effect and was lower than in two RCTs at high or unclear risk of detection bias (RR 0.42, 95% CI 0.22 to 0.83).\nThe risk of epididymitis in men invited for screening, 12 months after a single screening offer, was 20% lower than in those not invited; the confidence interval was wide and compatible with no effect (RR 0.80, 95% CI 0.45 to 1.42, 1 trial, 14,980 participants, very low quality evidence).\nWe found no RCTs of the effects of chlamydia screening in pregnancy and no trials that measured the harms of chlamydia screening.\nAuthors' conclusions\nEvidence about the effects of screening on C. trachomatis transmission is of low quality because of indirectness and risk of bias. There is moderate quality evidence that detection and treatment of chlamydia infection can reduce the risk of PID in women at individual level. There is an absence of RCT evidence about the effects of chlamydia screening in pregnancy.\nFuture RCTs of chlamydia screening interventions should determine the effects of chlamydia screening in pregnancy, of repeated rounds of screening on the incidence of chlamydia-associated PID and chlamydia reinfection in general and high-risk populations.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chlamydia trachomatis is a common sexually transmitted infection. In several countries, about 3%-5% of sexually active adults aged 15 to 25 years have chlamydia at any given time. Untreated infections can lead to complications, including fertility problems in women and testicular inflammation in men. Screening to identify and treat people who are unknowingly infected might reduce the risk of complications and transmission to others."
}
]
|
query-laysum | 5514 | [
{
"role": "user",
"content": "Abstract: Background\nApnoea is a breathing disorder marked by the absence of airflow at the nose or mouth. In children, risk factors include adenotonsillar hypertrophy, obesity, neuromuscular disorders and craniofacial anomalies. The most common treatment for obstructive sleep apnoea syndrome (OSAS) in childhood is adeno-tonsillectomy. This approach is limited by its surgical risks, mostly in children with comorbidities and, in some patients, by recurrence that can be associated with craniofacial problems. Oral appliances and functional orthopaedic appliances have been used for patients who have OSAS and craniofacial anomalies because they hold the lower jaw (mandible) forwards which potentially enlarges the upper airway and increases the upper airspace, improving the respiratory function.\nObjectives\nTo assess the effects of oral appliances or functional orthopaedic appliances for obstructive sleep apnoea in children.\nSearch methods\nWe searched the following electronic databases: Cochrane Oral Health's Trials Register (to 7 April 2016); Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 3) in the Cochrane Library (searched 7 April 2016); MEDLINE Ovid (1946 to 7 April 2016); Embase Ovid (1980 to 7 April 2016); LILACS BIREME (from 1982 to 7 April 2016); BBO BIREME (from 1986 to 7 April 2016) and SciELO Web of Science (from 1997 to 7 April 2016). We searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform for ongoing trials on 7 April 2016. We placed no restrictions on the language or date of publication when searching the electronic databases.\nSelection criteria\nAll randomised or quasi-randomised controlled trials comparing all types of oral and functional orthopaedic appliances with placebo or no treatment, in children 15 years old or younger. Primary outcome: reduction of apnoea to less than one episode per hour. Secondary outcomes: dental and skeletal relationship, sleep parameters improvement, cognitive and phonoaudiological function, behavioural problems, quality of life, side effects (tolerability) and economic evaluation.\nData collection and analysis\nTwo review authors screened studies and extracted data independently. Authors were contacted for additional information. We calculated risk ratios with 95% confidence intervals for all important dichotomous outcomes. We assessed the quality of the evidence of included studies using GRADEpro software.\nMain results\nThe initial search identified 686 trials. Only one trial, reporting the results from a total of 23 children and comparing an oral appliance to no treatment, was suitable for inclusion in the review. The trial assessed apnoea-hypopnoea, daytime symptoms (sleepiness, irritability, tiredness, school problems, morning headache, thirstiness in the morning, oral breathing and nasal stuffiness) and night-time symptoms (habitual snoring, restless sleep and nightmares measured by questionnaire). Results were inconsistent across outcomes measures and time points. The evidence was considered very low quality.\nAuthors' conclusions\nThere is insufficient evidence to support or refute the effectiveness of oral appliances and functional orthopaedic appliances for the treatment of obstructive sleep apnoea in children. 
Oral appliances or functional orthopaedic appliances may be considered in specified cases as an auxiliary in the treatment of children who have craniofacial anomalies which are risk factors for apnoea.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This was a study with a small number of participants and the quality of the evidence for the different outcomes was rated as very low. At present there is insufficient evidence to conclude that oral or functional orthopaedic appliances are effective in the treatment of obstructive sleep apnoea in children."
}
]
|
query-laysum | 5515 | [
{
"role": "user",
"content": "Abstract: Background\nWorkplace aggression constitutes a serious issue for healthcare workers and organizations. Aggression is tied to physical and mental health issues at an individual level, as well as to absenteeism, decreased productivity or quality of work, and high employee turnover rates at an organizational level. To counteract these negative impacts, organizations have used a variety of interventions, including education and training, to provide workers with the knowledge and skills needed to prevent aggression.\nObjectives\nTo assess the effectiveness of education and training interventions that aim to prevent and minimize workplace aggression directed toward healthcare workers by patients and patient advocates.\nSearch methods\nCENTRAL, MEDLINE, Embase, six other databases and five trial registers were searched from their inception to June 2020 together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs), cluster-randomized controlled trials (CRCTs), and controlled before and after studies (CBAs) that investigated the effectiveness of education and training interventions targeting aggression prevention for healthcare workers.\nData collection and analysis\nFour review authors evaluated and selected the studies resulting from the search. We used standard methodological procedures expected by Cochrane. We assessed the certainty of evidence using the GRADE approach.\nMain results\nWe included nine studies—four CRCTs, three RCTs, and two CBAs—with a total of 1688 participants. Five studies reported episodes of aggression, and six studies reported secondary outcomes. Seven studies were conducted among nurses or nurse aides, and two studies among healthcare workers in general. Three studies took place in long-term care, two in the psychiatric ward, and four in hospitals or health centers. Studies were reported from the United States, Switzerland, the United Kingdom, Taiwan, and Sweden.\nAll included studies reported on education combined with training interventions. Four studies evaluated online programs, and five evaluated face-to-face programs. Five studies were of long duration (up to 52 weeks), and four studies were of short duration. Eight studies had short-term follow-up (< 3 months), and one study long-term follow-up (> 1 year). Seven studies were rated as being at \"high\" risk of bias in multiple domains, and all had \"unclear\" risk of bias in a single domain or in multiple domains.\nEffects on aggression\nShort-term follow-up\nThe evidence is very uncertain about effects of education and training on aggression at short-term follow-up compared to no intervention (standardized mean difference [SMD] -0.33, 95% confidence interval [CI] -1.27 to 0.61, 2 CRCTs; risk ratio [RR] 2.30, 95% CI 0.97 to 5.42, 1 CBA; SMD -1.24, 95% CI -2.16 to -0.33, 1 CBA; very low-certainty evidence).\nLong-term follow-up\nEducation may not reduce aggression compared to no intervention in the long term (RR 1.14, 95% CI 0.95 to 1.37, 1 CRCT; low-certainty evidence).\nEffects on knowledge, attitudes, skills, and adverse outcomes\nEducation may increase personal knowledge about workplace aggression at short-term follow-up (SMD 0.86, 95% CI 0.34 to 1.38, 1 RCT; low-certainty evidence). The evidence is very uncertain about effects of education on personal knowledge in the long term (RR 1.26, 95% CI 0.90 to 1.75, 1 RCT; very low-certainty evidence). 
Education may improve attitudes among healthcare workers at short-term follow-up, but the evidence is very uncertain (SMD 0.59, 95% CI 0.24 to 0.94, 2 CRCTs and 3 RCTs; very low-certainty evidence). The type and duration of interventions resulted in different sizes of effects. Education may not have an effect on skills related to workplace aggression (SMD 0.21, 95% CI -0.07 to 0.49, 1 RCT and 1 CRCT; very low-certainty evidence) nor on adverse personal outcomes, but the evidence is very uncertain (SMD -0.31, 95% CI -1.02 to 0.40, 1 RCT; very low-certainty evidence).\nMeasurements of these concepts showed high heterogeneity.\nAuthors' conclusions\nEducation combined with training may not have an effect on workplace aggression directed toward healthcare workers, even though education and training may increase personal knowledge and positive attitudes. Better quality studies that focus on specific settings of healthcare work where exposure to patient aggression is high are needed. Moreover, as most studies have assessed episodes of aggression toward nurses, future studies should include other types of healthcare workers who are also victims of aggression in the same settings, such as orderlies (healthcare assistants). Studies should especially use reports of aggression at an institutional level and should rely on multi-source data while relying on validated measures. Studies should also include days lost to sick leave and employee turnover and should measure outcomes at one-year follow-up. Studies should specify the duration and type of delivery of education and should use an active comparison to prevent raising awareness and reporting in the intervention group only.\n\nGiven the provided abstract, please respond to the following query: \"Why we did this Cochrane Review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Aggressive behavior exhibited by patients and their families, friends, and carers is a serious problem for healthcare workers. It may affect the quality and safety of the care that healthcare workers can provide.\nEducation and training programs have been developed to try to reduce—or eliminate—aggressive behavior at work. These programs are intended to teach and train healthcare workers about:\n• their organization's policies and procedures;\n• how to assess risks; and\n• strategies to control or reduce the chances—and effects—of experiencing aggressive behavior."
}
]
|
query-laysum | 5516 | [
{
"role": "user",
"content": "Abstract: Background\nDementia, a global health priority, has no current cure. Around 50 million people worldwide currently live with dementia, and this number is expected to treble by 2050. Some health conditions and lifestyle behaviours can increase or decrease the risk of dementia and are known as 'predictors'. Prognostic models combine such predictors to measure the risk of future dementia. Models that can accurately predict future dementia would help clinicians select high-risk adults in middle age and implement targeted risk reduction.\nObjectives\nOur primary objective was to identify multi-domain prognostic models used in middle-aged adults (aged 45 to 65 years) for predicting dementia or cognitive impairment. Eligible multi-domain prognostic models involved two or more of the modifiable dementia predictors identified in a 2020 Lancet Commission report and a 2019 World Health Organization (WHO) report (less education, hearing loss, traumatic brain injury, hypertension, excessive alcohol intake, obesity, smoking, depression, social isolation, physical inactivity, diabetes mellitus, air pollution, poor diet, and cognitive inactivity). Our secondary objectives were to summarise the prognostic models, to appraise their predictive accuracy (discrimination and calibration) as reported in the development and validation studies, and to identify the implications of using dementia prognostic models for the management of people at a higher risk for future dementia.\nSearch methods\nWe searched MEDLINE, Embase, PsycINFO, CINAHL, and ISI Web of Science Core Collection from inception until 6 June 2022. We performed forwards and backwards citation tracking of included studies using the Web of Science platform.\nSelection criteria\nWe included development and validation studies of multi-domain prognostic models. The minimum eligible follow-up was five years. Our primary outcome was an incident clinical diagnosis of dementia based on validated diagnostic criteria, and our secondary outcome was dementia or cognitive impairment determined by any other method.\nData collection and analysis\nTwo review authors independently screened the references, extracted data using a template based on the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS), and assessed risk of bias and applicability of included studies using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). We synthesised the C-statistics of models that had been externally validated in at least three comparable studies.\nMain results\nWe identified 20 eligible studies; eight were development studies and 12 were validation studies. There were 14 unique prognostic models: seven models with validation studies and seven models with development-only studies. The models included a median of nine predictors (range 6 to 34); the median number of modifiable predictors was five (range 2 to 11). The most common modifiable predictors in externally validated models were diabetes, hypertension, smoking, physical activity, and obesity. In development-only models, the most common modifiable predictors were obesity, diabetes, hypertension, and smoking. No models included hearing loss or air pollution as predictors. Nineteen studies had a high risk of bias according to the PROBAST assessment, mainly because of inappropriate analysis methods, particularly lack of reported calibration measures. 
Applicability concerns were low for 12 studies, as their population, predictors, and outcomes were consistent with those of interest for this review. Applicability concerns were high for nine studies, as they lacked baseline cognitive screening or excluded an age group within the range of 45 to 65 years.\nOnly one model, Cardiovascular Risk Factors, Ageing, and Dementia (CAIDE), had been externally validated in multiple studies, allowing for meta-analysis. The CAIDE model included eight predictors (four modifiable predictors): age, education, sex, systolic blood pressure, body mass index (BMI), total cholesterol, physical activity and APOE ε4 status. Overall, our confidence in the prediction accuracy of CAIDE was very low; our main reasons for downgrading the certainty of the evidence were high risk of bias across all the studies, high concern of applicability, non-overlapping confidence intervals (CIs), and a high degree of heterogeneity.\nThe summary C-statistic was 0.71 (95% CI 0.66 to 0.76; 3 studies; very low-certainty evidence) for the incident clinical diagnosis of dementia, and 0.67 (95% CI 0.61 to 0.73; 3 studies; very low-certainty evidence) for dementia or cognitive impairment based on cognitive scores. Meta-analysis of calibration measures was not possible, as few studies provided these data.\nAuthors' conclusions\nWe identified 14 unique multi-domain prognostic models used in middle-aged adults for predicting subsequent dementia. Diabetes, hypertension, obesity, and smoking were the most common modifiable risk factors used as predictors in the models. We performed meta-analyses of C-statistics for one model (CAIDE), but the summary values were unreliable. Owing to lack of data, we were unable to meta-analyse the calibration measures of CAIDE. This review highlights the need for further robust external validations of multi-domain prognostic models for predicting future risk of dementia in middle-aged adults.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that evaluated tools used in middle-aged adults to identify those at high risk of dementia later in life. We investigated how well these tools predicted future dementia based on an accuracy value. If the accuracy value is more than a recommended standard of 0.75, we can say that the tool is accurate at predicting future dementia. It is also important to establish that a tool developed in one group of people (in the original development study) can accurately predict dementia in another group of people (in validation studies); only then can it be applied in routine healthcare practice. We compared and summarised the results of the studies."
}
]
|
query-laysum | 5517 | [
{
"role": "user",
"content": "Abstract: Background\nClinical egg allergy is a common food allergy. Current management relies upon strict allergen avoidance. Oral immunotherapy might be an optional treatment, through desensitization to egg allergen.\nObjectives\nTo determine the efficacy and safety of oral and sublingual immunotherapy in children and adults with immunoglobulin E (IgE)-mediated egg allergy as compared to a placebo treatment or an avoidance strategy.\nSearch methods\nWe searched 13 databases for journal articles, conference proceedings, theses and trials registers using a combination of subject headings and text words (last search 31 March 2017).\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing oral immunotherapy or sublingual immunotherapy administered by any protocol with placebo or an elimination diet. Participants were children or adults with clinical egg allergy.\nData collection and analysis\nWe retrieved 97 studies from the electronic searches. We selected studies, extracted data and assessed the methodological quality. We attempted to contact the study investigators to obtain the unpublished data, wherever possible. We used the I² statistic to assess statistical heterogeneity. We estimated a pooled risk ratio (RR) with 95% confidence interval (CI) for each outcome using a Mantel-Haenzel fixed-effect model if statistical heterogeneity was low (I² value less than 50%). We rated the quality of evidence for all outcomes using GRADE.\nMain results\nWe included 10 RCTs that met our inclusion criteria, that involved a total of 439 children (oral immunotherapy 249; control intervention 190), aged 1 year to 18 years. Each study used a different oral immunotherapy protocol; none used sublingual immunotherapy. Three studies used placebo and seven used an egg avoidance diet as the control. Primary outcomes were: an increased amount of egg that can be ingested and tolerated without adverse events while receiving allergen-specific oral immunotherapy or sublingual immunotherapy, compared to control; and a complete recovery from egg allergy after completion of oral immunotherapy or sublingual immunotherapy, compared to control. Most children (82%) in the oral immunotherapy group could ingest a partial serving of egg (1 g to 7.5 g) compared to 10% of control group children (RR 7.48, 95% CI 4.91 to 11.38; RD 0.73, 95% CI 0.67 to 0.80). Fewer than half (45%) of children receiving oral immunotherapy were able to tolerate a full serving of egg compared to 10% of the control group (RR 4.25, 95% CI 2.77 to 6.53; RD 0.35, 95% CI 0.28 to 0.43). All 10 trials reported numbers of children with serious adverse events (SAEs) and numbers of children with mild-to-severe adverse events. SAEs requiring epinephrine/adrenaline presented in 21/249 (8.4%) of children in the oral immunotherapy group, and none in the control group. Mild-to-severe adverse events were frequent; 75% of children presented mild-to-severe adverse events during oral immunotherapy treatment versus 6.8% of the control group (RR 8.35, 95% CI 5.31 to 13.12). Of note, seven studies used an egg avoidance diet as the control. Adverse events occurred in 4.2% of children, which may relate to accidental ingestion of egg-containing food. Three studies used a placebo control with adverse events present in 2.6% of children. Overall, there was inconsistent methodological rigour in the trials. All studies enrolled small numbers of children and used different methods to provide oral immunotherapy. 
Eight included studies were judged to be at high risk of bias in at least one domain. Furthermore, the quality of evidence was judged to be low due to small numbers of participants and events, and possible biases.\nAuthors' conclusions\nFrequent and increasing exposure to egg over one to two years in people who are allergic to egg builds tolerance, with almost everyone becoming more tolerant compared with a minority in the control group and almost half of people being totally tolerant of egg by the end of treatment compared with 1 in 10 people who avoid egg. However, nearly all who received treatment experienced adverse events, mainly allergy-related. We found that 1 in 12 children had serious allergic reactions requiring adrenaline, and some people gave up oral immunotherapy. It appears that oral immunotherapy for egg allergy is effective, but confidence in the trade-off between benefits and harms is low; because there was a small number of trials with few participants, and methodological problems with some trials.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched up to March 2017 for this update."
}
]
|
query-laysum | 5518 | [
{
"role": "user",
"content": "Abstract: Background\nAntisocial personality disorder (AsPD) is associated with rule-breaking, criminality, substance use, unemployment, relationship difficulties, and premature death. Certain types of medication (drugs) may help people with AsPD. This review updates a previous Cochrane review, published in 2010.\nObjectives\nTo assess the benefits and adverse effects of pharmacological interventions for adults with AsPD.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, 13 other databases and two trials registers up to 5 September 2019. We also checked reference lists and contacted study authors to identify studies.\nSelection criteria\nRandomised controlled trials in which adults (age 18 years and over) with a diagnosis of AsPD or dissocial personality disorder were allocated to a pharmacological intervention or placebo control condition.\nData collection and analysis\nFour authors independently selected studies and extracted data. We assessed risk of bias and created 'Summary of findings tables' and assessed the certainty of the evidence using the GRADE framework. The primary outcomes were: aggression; reconviction; global state/global functioning; social functioning; and adverse events.\nMain results\nWe included 11 studies (three new to this update), involving 416 participants with AsPD. Most studies (10/11) were conducted in North America. Seven studies were conducted exclusively in an outpatient setting, one in an inpatient setting, and one in prison; two studies used multiple settings. The average age of participants ranged from 28.6 years to 45.1 years (overall mean age 39.6 years). Participants were predominantly (90%) male. Study duration ranged from 6 to 24 weeks, with no follow-up period. Data were available from only four studies involving 274 participants with AsPD. All the available data came from unreplicated, single reports, and did not allow independent statistical analysis to be conducted. Many review findings were limited to descriptive summaries based on analyses carried out and reported by the trial investigators.\nNo study set out to recruit participants on the basis of having AsPD; many participants presented primarily with substance abuse problems. The studies reported on four primary outcomes and six secondary outcomes. Primary outcomes were aggression (six studies) global/state functioning (three studies), social functioning (one study), and adverse events (seven studies). Secondary outcomes were leaving the study early (eight studies), substance misuse (five studies), employment status (one study), impulsivity (one study), anger (three studies), and mental state (three studies). No study reported data on the primary outcome of reconviction or the secondary outcomes of quality of life, engagement with services, satisfaction with treatment, housing/accommodation status, economic outcomes or prison/service outcomes.\nEleven different drugs were compared with placebo, but data for AsPD participants were only available for five comparisons. Three classes of drug were represented: antiepileptic; antidepressant; and dopamine agonist (anti-Parkinsonian) drugs. We considered selection bias to be unclear in 8/11 studies, attrition bias to be high in 7/11 studies, and performance bias to be low in 7/11 studies. 
Using GRADE, we rated the certainty of evidence for each outcome in this review as very low, meaning that we have very little confidence in the effect estimates reported.\nPhenytoin (antiepileptic) versus placebo\nOne study (60 participants) reported very low-certainty evidence that phenytoin (300 mg/day), compared to placebo, may reduce the mean frequency of aggressive acts per week (phenytoin mean = 0.33, no standard deviation (SD) reported; placebo mean = 0.51, no SD reported) in male prisoners with aggression (skewed data) at endpoint (six weeks). The same study (60 participants) reported no evidence of difference between phenytoin and placebo in the number of participants reporting the adverse event of nausea during week one (odds ratio (OR) 1.00, 95% confidence interval (CI) 0.06 to 16.76; very low-certainty evidence). The study authors also reported that no important side effects were detectable via blood cell counts or liver enzyme tests (very low-certainty evidence).\nThe study did not measure reconviction, global/state functioning or social functioning.\nDesipramine (antidepressant) versus placebo\nOne study (29 participants) reported no evidence of a difference between desipramine (250 to 300 mg/day) and placebo on mean social functioning scores (desipramine = 0.19; placebo = 0.21), assessed with the family-social domain of the Addiction Severity Index (scores range from zero to one, with higher values indicating worse social functioning), at endpoint (12 weeks) (very low-certainty evidence).\nNeither of the studies included in this comparison measured the other primary outcomes: aggression; reconviction; global/state functioning; or adverse events.\nNortriptyline (antidepressant) versus placebo\nOne study (20 participants) reported no evidence of a difference between nortriptyline (25 to 75 mg/day) and placebo on mean global state/functioning scores (nortriptyline = 0.3; placebo = 0.7), assessed with the Symptom Check List-90 (SCL-90) Global Severity Index (GSI; mean of subscale scores, ranging from zero to four, with higher scores indicating greater severity of symptoms), at endpoint (six months) in men with alcohol dependency (very low-certainty evidence).\nThe study measured side effects but did not report data on adverse events for the AsPD subgroup.\nThe study did not measure aggression, reconviction or social functioning.\nBromocriptine (dopamine agonist) versus placebo\nOne study (18 participants) reported no evidence of difference between bromocriptine (15 mg/day) and placebo on mean global state/functioning scores (bromocriptine = 0.4; placebo = 0.7), measured with the GSI of the SCL-90 at endpoint (six months) (very low-certainty evidence).\nThe study did not provide data on adverse effects, but reported that 12 patients randomised to the bromocriptine group experienced severe side effects, five of whom dropped out of the study in the first two days due to nausea and severe flu-like symptoms (very low-certainty evidence).\nThe study did not measure aggression, reconviction and social functioning.\nAmantadine (dopamine agonist) versus placebo\nThe study in this comparison did not measure any of the primary outcomes.\nAuthors' conclusions\nThe evidence summarised in this review is insufficient to draw any conclusion about the use of pharmacological interventions in the treatment of antisocial personality disorder. The evidence comes from single, unreplicated studies of mostly older medications. 
The studies also have methodological issues that severely limit the confidence we can place in their results. Future studies should recruit participants on the basis of having AsPD, and use relevant outcome measures, including reconviction.\n\nGiven the provided abstract, please respond to the following query: \"Phenytoin (antiepileptic) versus placebo\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One study (60 participants) found very low-certainty evidence that, compared to placebo, phenytoin may reduce the average frequency of aggressive acts per week in aggressive male prisoners with AsPD at six weeks. The number of participants reporting sickness during week one did not differ across groups, and no side effects were detectable by blood tests. We are very uncertain about these findings."
}
]
|
query-laysum | 5519 | [
{
"role": "user",
"content": "Abstract: Background\nPregnancies with pre-existing diabetes are high risk, with increased risk of poorer fetal, neonatal, and maternal outcomes. Identifying interventions to improving health outcomes for women with diabetes and their infants is a priority, as rates of diabetes continue to increase.\nExercise has been shown to have benefits for non-pregnant individuals with pre-existing type 2 diabetes, such as improving glycaemic control, and reducing visceral adipose tissue and plasma triglycerides. For pregnant women with pre-existing diabetes, the effects of exercise interventions on the mother and her baby are unknown.\nAn earlier Cochrane review on 'Exercise for pregnant women with diabetes' considered both pre-existing diabetes and gestational diabetes. That Cochrane review has now been split into two new reviews (following new protocols) - one on gestational diabetes and one on pre-existing diabetes (this review).\nObjectives\nTo evaluate the effects of exercise interventions for improving maternal and fetal outcomes in women with pre-existing diabetes.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, and the WHO International Clinical Trials Registry Platform (ICTRP) on 27 June 2017, and reference lists of retrieved studies.\nSelection criteria\nWe had planned to include published or unpublished randomised controlled trials (RCT) or cluster-randomised trials, in full text or abstract format that compared any type of exercise programme, added to standard care, targeted at women with known pre-gestational diabetes (type 1 or type 2 diabetes), at any stage of pregnancy, compared with 1) standard care alone or 2) standard care plus another exercise intervention. Quasi-randomised and cross-over trials were excluded. Conference abstracts were handled in the same way as full-text publications.\nWomen with gestational diabetes mellitus were excluded, as they were covered in a separate Cochrane review.\nData collection and analysis\nWe had planned that two review authors would independently assess all the potential studies we identified as a result of the search strategy. For eligible studies, two review authors would have independently extracted the data using an agreed form. We had planned to resolve discrepancies through discussion, or by consulting a third person. We also had planned to assess the evidence using the GRADE approach.\nMain results\nWe did not identify any randomised controlled trials.\nAuthors' conclusions\nThere was no evidence from RCTs that evaluated the effects of exercise interventions for improving maternal and fetal outcomes in women with pre-existing diabetes.\nGood quality, large randomised controlled trials are urgently needed to identify exercise interventions that are safe, and improve health outcomes for women with pre-existing diabetes and their babies. Future studies in this area could utilise the standardised outcomes in this review, in order to improve consistency between trials in this area, and aid future meta-analysis.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Diabetes mellitus can be caused by autoimmune destruction of the cells producing insulin, so that levels are reduced (type 1 diabetes), or the body tissues becoming resistant to insulin (type 2 diabetes). The end result is increased blood glucose levels. Insulin is used to regulate glucose levels for pregnant women with type 1 diabetes. For women with type 2 diabetes, lifestyle changes, including diet and exercise, are an important part of treatment. An oral anti-diabetic drug (medication that aims to reduce blood sugar levels) or insulin may be added to lower blood glucose levels. We set out to evaluate the effects of exercise interventions, for pregnant women with pre-existing type 1 or 2 diabetes, on birth outcomes for the mother and her baby. An earlier review on the effects of exercise on diabetes during pregnancy has been split into two reviews - one for women with gestational diabetes, and this review, on women with pre-existing diabetes."
}
]
|
query-laysum | 5520 | [
{
"role": "user",
"content": "Abstract: Background\nPhosphate binders are used to reduce positive phosphate balance and to lower serum phosphate levels for people with chronic kidney disease (CKD) with the aim to prevent progression of chronic kidney disease-mineral and bone disorder (CKD-MBD). This is an update of a review first published in 2011.\nObjectives\nThe aim of this review was to assess the benefits and harms of phosphate binders for people with CKD with particular reference to relevant biochemical end-points, musculoskeletal and cardiovascular morbidity, hospitalisation, and death.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 12 July 2018 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included randomised controlled trials (RCTs) or quasi-RCTs of adults with CKD of any GFR category comparing a phosphate binder to another phosphate binder, placebo or usual care to lower serum phosphate. Outcomes included all-cause and cardiovascular death, myocardial infarction, stroke, adverse events, vascular calcification and bone fracture, and surrogates for such outcomes including serum phosphate, parathyroid hormone (PTH), and FGF23.\nData collection and analysis\nTwo authors independently selected studies for inclusion and extracted study data. We applied the Cochrane 'Risk of Bias' tool and used the GRADE process to assess evidence certainty. We estimated treatment effects using random-effects meta-analysis. Results were expressed as risk ratios (RR) for dichotomous outcomes together with 95% confidence intervals (CI) or mean differences (MD) or standardised MD (SMD) for continuous outcomes.\nMain results\nWe included 104 studies involving 13,744 adults. Sixty-nine new studies were added to this 2018 update.\nMost placebo or usual care controlled studies were among participants with CKD G2 to G5 not requiring dialysis (15/25 studies involving 1467 participants) while most head to head studies involved participants with CKD G5D treated with dialysis (74/81 studies involving 10,364 participants). Overall, seven studies compared sevelamer with placebo or usual care (667 participants), seven compared lanthanum to placebo or usual care (515 participants), three compared iron to placebo or usual care (422 participants), and four compared calcium to placebo or usual care (278 participants). Thirty studies compared sevelamer to calcium (5424 participants), and fourteen studies compared lanthanum to calcium (1690 participants). No study compared iron-based binders to calcium. The remaining studies evaluated comparisons between sevelamer (hydrochloride or carbonate), sevelamer plus calcium, lanthanum, iron (ferric citrate, sucroferric oxyhydroxide, stabilised polynuclear iron(III)-oxyhydroxide), calcium (acetate, ketoglutarate, carbonate), bixalomer, colestilan, magnesium (carbonate), magnesium plus calcium, aluminium hydroxide, sucralfate, the inhibitor of phosphate absorption nicotinamide, placebo, or usual care without binder. In 82 studies, treatment was evaluated among adults with CKD G5D treated with haemodialysis or peritoneal dialysis, while in 22 studies, treatment was evaluated among participants with CKD G2 to G5. 
The duration of study follow-up ranged from 8 weeks to 36 months (median 3.7 months). The sample size ranged from 8 to 2103 participants (median 69). The mean age ranged between 42.6 and 68.9 years.\nRandom sequence generation and allocation concealment were low risk in 25 and 15 studies, respectively. Twenty-seven studies reported low risk methods for blinding of participants, investigators, and outcome assessors. Thirty-one studies were at low risk of attrition bias and 69 studies were at low risk of selective reporting bias.\nIn CKD G2 to G5, compared with placebo or usual care, sevelamer, lanthanum, iron and calcium-based phosphate binders had uncertain or inestimable effects on death (all causes), cardiovascular death, myocardial infarction, stroke, fracture, or coronary artery calcification. Sevelamer may lead to constipation (RR 6.92, CI 2.24 to 21.4; low certainty) and lanthanum (RR 2.98, CI 1.21 to 7.30, moderate certainty) and iron-based binders (RR 2.66, CI 1.15 to 6.12, moderate certainty) probably increased constipation compared with placebo or usual care. Lanthanum may result in vomiting (RR 3.72, CI 1.36 to 10.18, low certainty). Iron-based binders probably result in diarrhoea (RR 2.81, CI 1.18 to 6.68, high certainty), while the risks of other adverse events for all binders were uncertain.\nIn CKD G5D sevelamer may lead to lower death (all causes) (RR 0.53, CI 0.30 to 0.91, low certainty) and induce less hypercalcaemia (RR 0.30, CI 0.20 to 0.43, low certainty) when compared with calcium-based binders, and has uncertain or inestimable effects on cardiovascular death, myocardial infarction, stroke, fracture, or coronary artery calcification. The finding of lower death with sevelamer compared with calcium was present when the analysis was restricted to studies at low risk of bias (RR 0.50, CI 0.32 to 0.77). In absolute terms, sevelamer may lower risk of death (all causes) from 210 per 1000 to 105 per 1000 over a follow-up of up to 36 months, compared to calcium-based binders. Compared with calcium-based binders, lanthanum had uncertain effects with respect to all-cause or cardiovascular death, myocardial infarction, stroke, fracture, or coronary artery calcification and probably had reduced risks of treatment-related hypercalcaemia (RR 0.16, CI 0.06 to 0.43, low certainty). There were no head-to-head studies of iron-based binders compared with calcium. The paucity of placebo-controlled studies in CKD G5D has led to uncertainty about the effects of phosphate binders on patient-important outcomes compared with placebo.\nIt is uncertain whether the effects of binders on clinically-relevant outcomes were different for patients who were and were not treated with dialysis in subgroup analyses.\nAuthors' conclusions\nIn studies of adults with CKD G5D treated with dialysis, sevelamer may lower death (all causes) compared to calcium-based binders and incur less treatment-related hypercalcaemia, while we found no clinically important benefits of any phosphate binder on cardiovascular death, myocardial infarction, stroke, fracture or coronary artery calcification. The effects of binders on patient-important outcomes compared to placebo are uncertain. 
In patients with CKD G2 to G5, the effects of sevelamer, lanthanum, and iron-based phosphate binders on cardiovascular, vascular calcification, and bone outcomes compared to placebo or usual care are also uncertain and they may incur constipation, while iron-based binders may lead to diarrhoea.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 104 studies of phosphate binders that included 13,744 people. Some studies gave treatment for only eight weeks while some studies treated participants for three years. People in the studies had a range of kidney function, and many were on dialysis. Overall we could not be certain of a number of important outcomes because many of the clinical studies we included had important flaws in their design.\nSevelamer treatment may have decreased death for those patients given this medication when taken instead of calcium. The phosphate binders probably caused constipation, but we could not be very certain about the risks of other side-effects. We were not very certain whether phosphate binders reduced heart complications, stroke, bone pain, or calcification of blood vessels."
}
]
|
query-laysum | 5521 | [
{
"role": "user",
"content": "Abstract: Background\nAlthough lifestyle interventions are commonly recommended in the management of patients with chronic gout, the evidence from trial data for their benefits and safety has not been previously examined in a systematic review.\nObjectives\nThe objective of this systematic review was to evaluate the benefits and safety of lifestyle interventions for the treatment of people with chronic gout.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE for studies on 5 April 2013. We also searched the 2010 to 2011 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) abstracts and performed a handsearch of the reference lists of included articles.\nSelection criteria\nStudies were included if they were randomised or quasi-randomised controlled trials (RCTs or CCTs) which compared lifestyle interventions to another therapy (active or placebo) in patients with chronic gout. Outcomes of interest were changes in gout attack frequency, joint pain, serum urate levels, tophus size, function, quality of life and adverse effects.\nData collection and analysis\nTwo review authors independently applied methods recommended by The Cochrane Collaboration for the selection, appraisal, data collection and synthesis of studies. We assessed the quality of the body of evidence for each outcome using the GRADE approach.\nMain results\nOnly one study (120 participants), at moderate risk of bias, was included in the review. Patients were randomised to one of three interventions: either skim milk powder (SMP) enriched with glycomacropeptide (GMP) and G600, non-enriched SMP or lactose powder, over a three-month period. The frequency of acute gout attacks, measured as the number of flares per month, decreased in all three groups over the three-month study period. Low quality evidence indicated that there was no difference between the SMP/GMP/G600 group and combined control groups (SMP and lactose powder) at three months (mean difference (MD) -0.21, 95% confidence interval (CI) -0.76 to 0.34). There were no significant between-group differences in terms of withdrawals due to adverse effects (risk ratio (RR) 1.27, 95% CI 0.53 to 3.03), and serious adverse events resulting in hospitalisation (2/40 SMP/GMP/G600 group versus 3/80 controls; RR 1.33, 95% CI 0.23 to 7.66). Gastrointestinal adverse effects were the most commonly reported. Pain from self reported gout flares, measured on a 10-point Likert scale, improved more in the SMP/GMP/G600 group compared to controls (MD -1.03, 95% CI -1.96 to -0.10), an absolute difference of 10% (absolute risk difference -0.10, 95% CI -0.20 to -0.01). This is unlikely to be of clinical significance. There was no significant difference in physical function between SMP/GMP/G600 and the control groups at three-month followup (MD -0.03, 95% CI -0.14 to 0.08). Tophus regression and serum urate normalisation were not reported in this study.\nAuthors' conclusions\nWhile there is good evidence from observational studies of an association between various lifestyle risk factors and gout development, there is a paucity of high-quality evidence from randomised controlled trials to either support or refute the use of lifestyle modifications for improving outcomes in people with chronic gout.\n\nGiven the provided abstract, please respond to the following query: \"What is gout and what are lifestyle interventions?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Gout is a very common form of painful joint inflammation (arthritis) caused by urate crystals forming either within or around joints. The inflammation can lead to pain, redness and swelling of affected joints, making the area difficult to touch or move. Some of the reasons why people get gout include their genetic makeup, being overweight, ingesting certain medications (e.g. diuretics), having impaired kidney function and lifestyle habits such as drinking excessive amounts of alcohol and sugar-sweetened drinks.\nAlthough medications are the mainstay of gout treatment, given the recognised association between certain lifestyle risk factors and gout development, lifestyle changes such as losing weight, stopping smoking, exercising more, drinking more coffee and dairy products, and consuming less sugar-sweetened drinks, alcoholic beverages, meat and seafood are commonly recommended to people with chronic gout."
}
]
|
query-laysum | 5522 | [
{
"role": "user",
"content": "Abstract: Background\nAntipsychotic (neuroleptic) medication is used extensively to treat people with chronic mental illnesses. Its use, however, is associated with adverse effects, including movement disorders such as tardive dyskinesia (TD) - a problem often seen as repetitive involuntary movements around the mouth and face. This review, one in a series examining the treatment of TD, covers miscellaneous treatments not covered elsewhere.\nObjectives\nTo determine whether drugs, hormone-, dietary-, or herb-supplements not covered in other Cochrane reviews on TD treatments, surgical interventions, electroconvulsive therapy, and mind-body therapies were effective and safe for people with antipsychotic-induced TD.\nSearch methods\nWe searched the Cochrane Schizophrenia Group’s Study-Based Register of Trials including trial registers (16 July 2015 and 26 April 2017), inspected references of all identified studies for further trials and contacted authors of trials for additional information.\nSelection criteria\nWe included reports if they were randomised controlled trials (RCTs) dealing with people with antipsychotic-induced TD and schizophrenia or other chronic mental illnesses who remained on their antipsychotic medication and had been randomly allocated to the interventions listed above versus placebo, no intervention, or any other intervention.\nData collection and analysis\nWe independently extracted data from these trials and we estimated risk ratios (RR) or mean differences (MD), with 95% confidence intervals (CIs). We assumed that people who left early had no improvement. We assessed risk of bias and created 'Summary of findings' tables using GRADE.\nMain results\nWe included 31 RCTs of 24 interventions with 1278 participants; 22 of these trials were newly included in this 2017 update. Five trials are awaiting classification and seven trials are ongoing. All participants were adults with chronic psychiatric disorders, mostly schizophrenia, and antipsychotic-induced TD. Studies were primarily of short (three to six6 weeks) duration with small samples size (10 to 157 participants), and most (61%) were published more than 20 years ago. The overall risk of bias in these studies was unclear, mainly due to poor reporting of allocation concealment, generation of the sequence, and blinding.\nNineteen of the 31 included studies reported on the primary outcome 'No clinically important improvement in TD symptoms'. Two studies found moderate-quality evidence of a benefit of the intervention compared with placebo: valbenazine (RR 0.63, 95% CI 0.46 to 0.86, 1 RCT, n = 92) and extract of Ginkgo biloba (RR 0.88, 95% CI 0.81 to 0.96, 1 RCT, n = 157), respectively. However, due to small sample sizes we cannot be certain of these effects.\nWe consider the results for the remaining interventions to be inconclusive: Low- to very low-quality evidence of a benefit was found for buspirone (RR 0.53, 95% CI 0.33 to 0.84, 1 RCT, n = 42), dihydrogenated ergot alkaloids (RR 0.45, 95% CI 0.21 to 0.97, 1 RCT, n = 28), hypnosis or relaxation, (RR 0.45, 95% CI 0.21 to 0.94, 1 study, n = 15), pemoline (RR 0.48, 95% CI 0.29 to 0.77, 1 RCT, n = 46), promethazine (RR 0.24, 95% CI 0.11 to 0.55, 1 RCT, n = 34), insulin (RR 0.52, 95% CI 0.29 to 0.96, 1 RCT, n = 20), branched chain amino acids (RR 0.79, 95% CI 0.63 to 1.00, 1 RCT, n = 52), and isocarboxazid (RR 0.24, 95% CI 0.08 to 0.71, 1 RCT, n = 20). 
There was low- to very low-certainty evidence of no difference between intervention and placebo or no treatment for the following interventions: melatonin (RR 0.89, 95% CI 0.71 to 1.12, 2 RCTs, n = 32), lithium (RR 1.59, 95% CI 0.79 to 3.23, 1 RCT, n = 11), ritanserin (RR 1.00, 95% CI 0.70 to 1.43, 1 RCT, n = 10), selegiline (RR 1.37, 95% CI 0.96 to 1.94, 1 RCT, n = 33), oestrogen (RR 1.18, 95% CI 0.76 to 1.83, 1 RCT, n = 12), and gamma-linolenic acid (RR 1.00, 95% CI 0.69 to 1.45, 1 RCT, n = 16).\nNone of the included studies reported on the other primary outcome, 'no clinically significant extrapyramidal adverse effects'.\nAuthors' conclusions\nThis review has found that the use of valbenazine or extract of Ginkgo biloba may be effective in relieving the symptoms of tardive dyskinesia. However, since only one RCT has investigated each one of these compounds, we are awaiting results from ongoing trials to confirm these results. Results for the remaining interventions covered in this review must be considered inconclusive and these compounds probably should only be used within the context of a well-designed evaluative study.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that had been published up to 26 April 2017."
}
]
|
query-laysum | 5523 | [
{
"role": "user",
"content": "Abstract: Background\nSchizophrenia is one of the most common and disabling mental disorders. About 20% of people with schizophrenia do not respond to antipsychotics, which are the mainstay of the treatment for schizophrenia today, and need to seek other treatment options. Magnetic seizure therapy (MST) is one of the novel non-invasive brain stimulation techniques that are being investigated in recent years.\nObjectives\nTo evaluate the efficacy and tolerability of MST for people with schizophrenia.\nSearch methods\nOn 6 March 2022, we searched the Cochrane Schizophrenia Group's Study-Based Register of Trials which is based on CENTRAL, CINAHL, ClinicalTrials.Gov, Embase, ISRCTN, MEDLINE, PsycINFO, PubMed, and WHO ICTRP.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing MST alone or plus standard care with ECT or any other interventions for people with schizophrenia.\nData collection and analysis\nWe performed reference screening, study selection, data extraction and risk of bias and quality assessment in duplicate. We calculated the risk ratios (RRs) and their 95% confidence intervals (CIs) for binary outcomes and the mean difference (MD) and their 95% CIs for continuous outcomes. We used the original risk of bias tool for risk of bias assessment and created a Summary of findings table using GRADE.\nMain results\nWe included one four-week study with 79 adults in acute schizophrenia, comparing MST plus standard care to ECT plus standard care in this review. We rated the overall risk of bias as high due to high risk of bias in the domains of selective reporting and other biases (early termination and baseline imbalance) and unclear risk of bias in the domain of blinding of participants and personnel.\nWe found that MST and ECT may not differ in improving the global state (n = 79, risk ratio (RR) 1.12, 95% confidence interval (CI) 0.73 to 1.70), overall (n = 79, mean difference (MD) -0.20, 95% CI -8.08 to 7.68), the positive symptoms (n = 79, MD 1.40, 95% CI -1.97 to 4.77) and the negative symptoms (n = 79, MD -1.00, 95% CI -3.85 to 1.85) in people with schizophrenia.\nWe found that MST compared to ECT may cause less delayed memory deficit and less cognitive deterioration (n = 79, number of people with a delayed memory deficit, RR 0.63, 95% CI 0.41 to 0.96; n = 79, mean change in global cognitive function, MD 5.80, 95% CI 0.80 to 10.80), but also may improve more cognitive function (n = 47, number of people with any cognitive improvement, RR 3.30, 95% CI 1.29 to 8.47).\nWe found that there may be no difference between the two groups in terms of leaving the study early due to any reason (n = 79, RR 2.51, 95% CI 0.73 to 8.59), due to adverse effects (n = 79, RR 3.35, 95% CI 0.39 to 28.64) or due to inefficacy (n = 79, RR 2.52, 95% CI 0.11 to 60.10).\nSince all findings were based on one study with high risk of bias and the confidence in the evidence was very low, we were not sure these comparable or favourable effects of MST over ECT were its true effects.\nAuthors' conclusions\nDue to the paucity of data, we cannot draw any conclusion on the efficacy and tolerability of MST for people with schizophrenia. Well-designed RCTs are warranted to answer the question.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that investigated the effect of MST compared to any other treatments for people with schizophrenia.\nWe compared and summarised the results of the study and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
]
|
query-laysum | 5524 | [
{
"role": "user",
"content": "Abstract: Background\nFor centuries, there has been controversy around whether being upright (sitting, birthing stools, chairs, squatting, kneeling) or lying down (lateral (Sim's) position, semi-recumbent, lithotomy position, Trendelenburg's position) have advantages for women giving birth to their babies. This is an update of a review previously published in 2012, 2004 and 1999.\nObjectives\nTo determine the possible benefits and risks of the use of different birth positions during the second stage of labour without epidural anaesthesia, on maternal, fetal, neonatal and caregiver outcomes.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register (30 November 2016) and reference lists of retrieved studies.\nSelection criteria\nRandomised, quasi-randomised or cluster-randomised controlled trials of any upright position assumed by pregnant women during the second stage of labour compared with supine or lithotomy positions. Secondary comparisons include comparison of different upright positions and the supine position. Trials in abstract form were included.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and assessed trial quality. At least two review authors extracted the data. Data were checked for accuracy. The quality of the evidence was assessed using the GRADE approach.\nMain results\nResults should be interpreted with caution because risk of bias of the included trials was variable. We included eleven new trials for this update; there are now 32 included studies, and one trial is ongoing. Thirty trials involving 9015 women contributed to the analysis. Comparisons include any upright position, birth or squat stool, birth cushion, and birth chair versus supine positions.\nIn all women studied (primigravid and multigravid), when compared with supine positions, the upright position was associated with a reduction in duration of second stage in the upright group (MD -6.16 minutes, 95% CI -9.74 to -2.59 minutes; 19 trials; 5811 women; P = 0.0007; random-effects; I² = 91%; very low-quality evidence); however, this result should be interpreted with caution due to large differences in size and direction of effect in individual studies. Upright positions were also associated with no clear difference in the rates of caesarean section (RR 1.22, 95% CI 0.81 to 1.81; 16 trials; 5439 women; low-quality evidence), a reduction in assisted deliveries (RR 0.75, 95% CI 0.66 to 0.86; 21 trials; 6481 women; moderate-quality evidence), a reduction in episiotomies (average RR 0.75, 95% CI 0.61 to 0.92; 17 trials; 6148 women; random-effects; I² = 88%), a possible increase in second degree perineal tears (RR 1.20, 95% CI 1.00 to 1.44; 18 trials; 6715 women; I² = 43%; low-quality evidence), no clear difference in the number of third or fourth degree perineal tears (RR 0.72, 95% CI 0.32 to 1.65; 6 trials; 1840 women; very low-quality evidence), increased estimated blood loss greater than 500 mL (RR 1.48, 95% CI 1.10 to 1.98; 15 trials; 5615 women; I² = 33%; moderate-quality evidence), fewer abnormal fetal heart rate patterns (RR 0.46, 95% CI 0.22 to 0.93; 2 trials; 617 women), no clear difference in the number of babies admitted to neonatal intensive care (RR 0.79, 95% CI 0.51 to 1.21; 4 trials; 2565 infants; low-quality evidence). 
On sensitivity analysis excluding trials with high risk of bias, these findings were unchanged except that there was no longer a clear difference in duration of second stage of labour (MD -4.34, 95% CI -9.00 to 0.32; 21 trials; 2499 women; I² = 85%).\nThe main reasons for downgrading the GRADE assessment were that several studies had design limitations (inadequate randomisation and allocation concealment) with high heterogeneity and wide CIs.\nAuthors' conclusions\nThe findings of this review suggest several possible benefits for upright posture in women without epidural anaesthesia, such as a very small reduction in the duration of second stage of labour (mainly from the primigravid group), reduction in episiotomy rates and assisted deliveries. However, there is an increased risk of blood loss greater than 500 mL and there may be an increased risk of second degree tears, though we cannot be certain of this. In view of the variable risk of bias of the trials reviewed, further trials using well-designed protocols are needed to ascertain the true benefits and risks of various birth positions.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Giving birth in the supine position may have been adopted to make it more convenient for midwives and obstetricians to assist the labour and birth. However, many women report that giving birth on their backs feels painful, uncomfortable and difficult. It is suggested that women in upright positions give birth more easily because the pelvis is able to expand as the baby moves down; gravity may also be helpful and the baby may benefit because the weight of the uterus will not be pressing down on the mother’s major blood vessels which supply oxygen and nutrition to the baby.\nWe looked at the upright positions such as: sitting (on an obstetric chair or stool); kneeling (either on all fours or kneeling up) and squatting (unaided or using a birth cushion or a squatting bar). We compared these with supine positions such as: dorsal; lateral; semi-recumbent and lithotomy. Our aim was to assess the effectiveness, benefits and possible disadvantages of the different positions for women without epidural, during the second stage of labour."
}
]
|
query-laysum | 5525 | [
{
"role": "user",
"content": "Abstract: Background\nInfants born preterm (before 37 weeks' gestation) are at risk of respiratory distress syndrome (RDS) and need for respiratory support due to lung immaturity. One course of prenatal corticosteroids, administered to women at risk of preterm birth, reduces the risk of respiratory morbidity and improves survival of their infants, but these benefits do not extend beyond seven days. Repeat doses of prenatal corticosteroids have been used for women at ongoing risk of preterm birth more than seven days after their first course of corticosteroids, with improvements in respiratory outcomes, but uncertainty remains about any long-term benefits and harms. This is an update of a review last published in 2015.\nObjectives\nTo assess the effectiveness and safety, using the best available evidence, of a repeat dose(s) of prenatal corticosteroids, given to women who remain at risk of preterm birth seven or more days after an initial course of prenatal corticosteroids with the primary aim of reducing fetal and neonatal mortality and morbidity.\nSearch methods\nFor this update, we searched Cochrane Pregnancy and Childbirth's Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP), and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials, including cluster-randomised trials, of women who had already received one course of corticosteroids seven or more days previously and were still at risk of preterm birth, randomised to further dose(s) or no repeat doses, with or without placebo. Quasi-randomised trials were excluded. Abstracts were accepted if they met specific criteria. All trials had to meet criteria for trustworthiness, including a search of the Retraction Watch database for retractions or expressions of concern about the trials or their publications.\nData collection and analysis\nWe used standard Cochrane Pregnancy and Childbirth methods. Two review authors independently selected trials, extracted data, and assessed trial quality and scientific integrity. We chose primary outcomes based on clinical importance as measures of effectiveness and safety, including serious outcomes, for the women and their fetuses/infants, infants in early childhood (age two to less than five years), the infant in mid- to late childhood (age five to less than 18 years) and the infant as an adult. We assessed risk of bias at the outcome level using the RoB 2 tool and assessed certainty of evidence using GRADE.\nMain results\nWe included 11 trials (4895 women and 5975 babies). High-certainty evidence from these trials indicated that treatment of women who remain at risk of preterm birth seven or more days after an initial course of prenatal corticosteroids with repeat dose(s) of corticosteroids, compared with no repeat corticosteroid treatment, reduced the risk of their infants experiencing the primary infant outcome of RDS (risk ratio (RR) 0.82, 95% confidence interval (CI) 0.74 to 0.90; 3540 babies; number needed to treat for an additional beneficial outcome (NNTB) 16, 95% CI 11 to 29) and had little or no effect on chronic lung disease (RR 1.00, 95% CI 0.83 to 1.22; 5661 babies). Moderate-certainty evidence indicated that the composite of serious infant outcomes was probably reduced with repeat dose(s) of corticosteroids (RR 0.88, 95% CI 0.80 to 0.97; 9 trials, 5736 babies; NNTB 39, 95% CI 24 to 158), as was severe lung disease (RR 0.83, 95% CI 0.72 to 0.97; NNTB 45, 95% CI 27 to 256; 4955 babies). 
Moderate-certainty evidence could not exclude benefit or harm for fetal or neonatal or infant death less than one year of age (RR 0.95, 95% CI 0.73 to 1.24; 5849 babies), severe intraventricular haemorrhage (RR 1.13, 95% CI 0.69 to 1.86; 5066 babies) and necrotising enterocolitis (RR 0.84, 95% CI 0.59 to 1.22; 5736 babies).\nIn women, moderate-certainty evidence found little or no effect on the likelihood of a caesarean birth (RR 1.03, 95% CI 0.98 to 1.09; 4266 mothers). Benefit or harm could not be excluded for maternal death (RR 0.32, 95% 0.01 to 7.81; 437 women) and maternal sepsis (RR 1.13, 95% CI 0.93 to 1.39; 4666 mothers). The evidence was unclear for risk of adverse effects and discontinuation of therapy due to maternal adverse effects. No trials reported breastfeeding status at hospital discharge or risk of admission to the intensive care unit.\nAt early childhood follow-up, moderate- to high-certainty evidence identified little or no effect of exposure to repeat prenatal corticosteroids compared with no repeat corticosteroids for primary outcomes relating to neurodevelopment (neurodevelopmental impairment: RR 0.97, 95% CI 0.85 to 1.10; 3616 children), survival without neurodevelopmental impairment (RR 1.01, 95% CI 0.98 to 1.04; 3845 children) and survival without major neurodevelopmental impairment (RR 1.02, 95% CI 0.98 to 1.05; 1816 children). An increase or decrease in the risk of death since randomisation could not be excluded (RR 1.06, 95% CI 0.81 to 1.40; 5 trials, 4565 babies randomised).\nAt mid-childhood follow-up, moderate-certainty evidence identified little or no effect of exposure to repeat prenatal corticosteroids compared with no repeat corticosteroids on survival free of neurocognitive impairment (RR 1.01, 95% CI 0.95 to 1.08; 963 children) or survival free of major neurocognitive impairment (RR 1.00, 95% CI 0.97 to 1.04; 2682 children). Benefit or harm could not be excluded for death since randomisation (RR 0.93, 95% CI 0.69 to 1.26; 2874 babies randomised) and any neurocognitive impairment (RR 0.96, 95% CI 0.72 to 1.29; 897 children).\nNo trials reported data for follow-up into adolescence or adulthood.\nRisk of bias across outcomes was generally low although there were some concerns of bias. For childhood follow-up, most outcomes had some concerns of risk of bias due to missing data from loss to follow-up.\nAuthors' conclusions\nThe short-term benefits for babies included less respiratory distress and fewer serious health problems in the first few weeks after birth with repeat dose(s) of prenatal corticosteroids for women still at risk of preterm birth seven days or more after an initial course. The current available evidence reassuringly shows no significant harm for the women or child in early and mid-childhood, although no benefit.\nFurther research is needed on the long-term benefits and risks for the baby into adulthood.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Preterm birth is common, affecting approximately one in nine babies worldwide. In addition to breathing difficulties after birth, preterm babies who survive the early weeks after birth are at risk of long-term disabilities such as delays in their development, epilepsy (fits) and cerebral palsy (weakness and problems in the muscles that affects movement and co-ordination).\nCorticosteroid medications are anti-inflammatory and suppress growth and the production of cortisol (stress hormone) from the adrenal gland. As a result, repeat prenatal corticosteroid treatment could increase the risk of infection and suppress production of cortisol for the mother and her baby, and reduce the baby's growth. It is, therefore, important that both the benefits and potential negative effects of repeat courses of corticosteroids are understood to allow the best treatment decisions to be made in clinical practice."
}
]
|
query-laysum | 5526 | [
{
"role": "user",
"content": "Abstract: Background\nTotal mesorectal excision is the standard of care for stage I rectal cancer. Despite major advances and increasing enthusiasm for modern endoscopic local excision (LE), uncertainty remains regarding its oncologic equivalence and safety relative to radical resection (RR).\nObjectives\nTo assess the oncologic, operative, and functional outcomes of modern endoscopic LE compared to RR surgery in adults with stage I rectal cancer.\nSearch methods\nWe searched CENTRAL, Ovid MEDLINE, Ovid Embase, Web of Science - Science Citation Index Expanded (1900 to present), four trial registers (ClinicalTrials.gov, ISRCTN registry, the WHO International Clinical Trials Registry Platform, and the National Cancer Institute Clinical Trials database), two thesis and proceedings databases, and relevant scientific societies' publications in February 2022. We performed handsearching and reference checking and contacted study authors of ongoing trials to identify additional studies.\nSelection criteria\nWe searched for randomized controlled trials (RCTs) in people with stage I rectal cancer comparing any modern LE techniques to any RR techniques with or without the use of neo/adjuvant chemoradiotherapy (CRT).\nData collection and analysis\nWe used standard Cochrane methodological procedures. We calculated hazard ratios (HR) and standard errors for time-to-event data and risk ratios for dichotomous outcomes, using generic inverse variance and random-effects methods. We regrouped surgical complications from the included studies into major and minor according to the standard Clavien-Dindo classification. We assessed the certainty of evidence using the GRADE framework.\nMain results\nFour RCTs were included in data synthesis with a combined total of 266 participants with stage I rectal cancer (T1-2N0M0), if not stated otherwise. Surgery was performed in university hospital settings. The mean age of participants was above 60, and median follow-up ranged from 17.5 months to 9.6 years. Regarding the use of co-interventions, one study used neoadjuvant CRT in all participants (T2 cancers); one study used short-course radiotherapy in the LE group (T1-T2 cancers); one study used adjuvant CRT selectively in high-risk patients undergoing RR (T1-T2 cancers); and the fourth study did not use any CRT (T1 cancers).\nWe assessed the overall risk of bias as high for oncologic and morbidity outcomes across studies. All studies had at least one key domain with a high risk of bias. None of the studies reported separate outcomes for T1 versus T2 or for high-risk features.\nLow-certainty evidence suggests that RR may result in an improvement in disease-free survival compared to LE (3 trials, 212 participants; HR 1.96, 95% confidence interval (CI) 0.91 to 4.24). This would translate into a three-year disease-recurrence risk of 27% (95% CI 14 to 50%) versus 15% after LE and RR, respectively.\nRegarding sphincter function, only one study provided objective results and reported short-term deterioration in stool frequency, flatulence, incontinence, abdominal pain, and embarrassment about bowel function in the RR group. At three years, the LE group had superiority in overall stool frequency, embarrassment about bowel function, and diarrhea.\nLocal excision may have little to no effect on cancer-related survival compared to RR (3 trials, 207 participants; HR 1.42, 95% CI 0.60 to 3.33; very low-certainty evidence). 
We did not pool studies for local recurrence, but the included studies individually reported comparable local recurrence rates for LE and RR (low-certainty evidence).\nIt is unclear if the risk of major postoperative complications may be lower with LE compared with RR (risk ratio 0.53, 95% CI 0.22 to 1.28; low-certainty evidence; corresponding to 5.8% (95% CI 2.4% to 14.1%) risk for LE versus 11% for RR). Moderate-certainty evidence shows that the risk of minor postoperative complications is probably lower after LE (risk ratio 0.48, 95% CI 0.27 to 0.85); corresponding to an absolute risk of 14% (95% CI 8% to 26%) for LE compared to 30.1% for RR. One study reported an 11% rate of temporary stoma after LE versus 82% in the RR group. Another study reported a 46% rate of temporary or permanent stomas after RR and none after LE.\nThe evidence is uncertain about the effect of LE compared with RR on quality of life. Only one study reported standard quality of life function, in favor of LE, with a 90% or greater probability of superiority in overall quality of life, role, social, and emotional functions, body image, and health anxiety. Other studies reported a significantly shorter postoperative period to oral intake, bowel movement, and off-bed activities in the LE group.\nAuthors' conclusions\nBased on low-certainty evidence, LE may decrease disease-free survival in early rectal cancer. Very low-certainty evidence suggests that LE may have little to no effect on cancer-related survival compared to RR for the treatment of stage I rectal cancer. Based on low-certainty evidence, it is unclear if LE may have a lower major complication rate, but probably causes a large reduction in minor complication rate. Limited data based on one study suggest better sphincter function, quality of life, or genitourinary function after LE. Limitations exist with respect to the applicability of these findings. We identified only four eligible studies with a low number of total participants, subjecting the results to imprecision. Risk of bias had a serious impact on the quality of evidence. More RCTs are needed to answer our review question with greater certainty and to compare local and distant metastasis rates. Data on important patient outcomes such as sphincter function and quality of life are very limited. Results of currently ongoing trials will likely impact the results of this review. Future trials should accurately report and compare outcomes according to the stage and high-risk features of rectal tumors, and evaluate quality of life, sphincter, and genitourinary outcomes. The role of neoadjuvant or adjuvant therapy as an emerging co-intervention for improving oncologic outcomes after LE needs to be further defined.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have low confidence in the evidence mainly because it was based only on a few studies, and due to the way the studies were conducted. In addition, it is possible that the results of the studies could have been affected by the fact that participants and investigators were aware of which treatment participants had received."
}
]
|
query-laysum | 5527 | [
{
"role": "user",
"content": "Abstract: Background\nUlcerative colitis (UC) is a chronic inflammatory disorder of the colon that has a relapsing-remitting course. Health related quality of life (HRQL) is significantly lower in patients with UC than the general population due to the negative effects of the disease on physical, psychological and social well-being. Randomized controlled trials (RCTs) evaluating medical interventions for UC have traditionally used clinical disease activity indices that focus on symptoms to define primary outcomes such as clinical remission or improvement. However, this approach does not evaluate benefits that are highly relevant to patients such as HRQL\nObjectives\nThe primary objective was to assess the impact of biologic therapy on the HRQL of UC patients.\nSearch methods\nWe searched PubMed, MEDLINE, EMBASE and CENTRAL from inception to September, 2015. Conference abstracts and reference lists were also searched.\nSelection criteria\nRCTs that compared biologics to placebo in UC patients and reported on HRQL using the Inflammatory Bowel Disease Questionnaire (IBDQ), or the SF-36 or EQ-5D to measure HRQL were included.\nData collection and analysis\nTwo authors independently screened studies for inclusion, extracted data and assessed study quality using the Cochrane risk of bias tool. The primary outcome was improvement in HRQL. For dichotomous outcomes we calculated the risk ratio (RR) and 95% confidence interval (CI). For continuous outcomes we calculated the mean difference (MD) and 95% CI. The overall quality of the evidence supporting the primary outcome was assessed using GRADE.\nMain results\nNine RCTs (n = 4143) were included. Biologics included rituximab (one small study), interferon-ß-1a (one study), vedolizumab (one study), and the tumor necrosis factor-alpha (TNF-α) antagonists infliximab (two studies), adalimumab (three studies), and golimumab (one study). Risk of bias was low in eight studies. The rituximab study was judged to be at high risk of bias due to attrition bias. The studies comparing interferon-ß-1a and rituximab to placebo found no clear evidence of a difference in the proportion of patients who experienced an improvement in HRQL at 8 or 12 weeks respectively. The proportion of patients with a clinically meaningful improvement in HRQL at 6 or 52 weeks was significantly higher in vedolizumab patients compared to placebo. At 6 weeks 37% (83/225) of vedolizumab patients had an improvement in IBDQ score of at least 16 points from baseline compared to 23% (34/149) of placebo patients (RR 1.62, 95% CI 1.15 to 2.27; 1 study). At 52 weeks, 64% (157/247) of vedolizumab patients had an improvement in IBDQ score of at least 16 points from baseline compared to 38% (48/126) of placebo patients (RR 1.62, 95% CI 1.15 to 2.27; 1 study). A GRADE analysis indicated that the overall quality of the evidence supporting these outcomes was moderate due to sparse data (< 400 events). Patients who received maintenance vedolizumab every eight weeks had significantly higher mean SF-36 scores than placebo patients at 52 weeks (MD 3.40, 95% CI 1.56 to 5.24, 1 study 248 patients). This difference appears to be clinically meaningful as the lower boundary for a clinically meaningful change in SF-36 is three points. A GRADE analysis indicated that the overall quality of the evidence supporting this outcome was moderate due to sparse data (< 400 events). 
Adalimumab patients had significantly higher mean IBDQ scores than placebo patients at weeks 8 (MD 9.00, 95% CI 2.65 to 15.35; 1 study, 494 patients) and 52 (MD 8.00, 95% CI 0.68 to 15.32; 1 study, 494 patients). However, these differences may not be clinically meaningful as the lower boundary for a clinically meaningful change in IBDQ is 16 points. A GRADE analysis indicated that the overall quality of the evidence supporting this outcome was moderate due to sparse data (< 400 events). Golimumab patients who received a dose of 200/100 mg (MD 12.20, 95% CI 6.52 to 17.88; 504 patients) or 400/200 mg (MD 12.10, 95% CI 6.40 to 17.80; 508 patients) had significantly higher mean IBDQ scores than placebo patients at week 6. Although a GRADE analysis indicated that the overall quality of the evidence supporting these outcomes was high, the difference in IBDQ scores may not be clinically meaningful. Infliximab patients had significantly higher mean IBDQ scores at week 6 or 8 than placebo patients (MD 18,58, 95% CI 13.19 to 23.97; 2 studies, 529 patients). This difference in HRQL is clinically meaningful. A GRADE analysis indicated that the overall quality of the evidence supporting this outcome was high. The proportion of patients with a clinically meaningful improvement in HRQL at eight weeks was significantly higher in infliximab patients compared to placebo. Sixty-nine per cent (333/484) of infliximab patients had an improvement in IBDQ score of > 16 points from baseline compared to 50% of placebo patients (RR 1.39, 95% CI 1.21 to 1.60; 1 study). A GRADE analysis indicated that the overall quality of the evidence supporting this outcome was high. Similar results were found between infliximab and placebo when HRQL was measured using the SF-36 instrument. One small study (n = 43) found no difference in HRQL between infliximab and placebo when measured by the EQ-5D. Pooled analyses of TNF-α antagonists showed a benefit in HRQL favouring TNF-α over placebo.\nAuthors' conclusions\nThese results suggest that biologics have the potential to improve HRQL in UC patients. High quality evidence suggests that infliximab provides a clinically meaningful improvement in HRQL in UC patients receiving induction therapy. Moderate quality evidence suggests that vedolizumab provides a clinically meaningful improvement in HRQL in UC patients receiving maintenance therapy. These findings are important since there is a paucity of effective drugs for the treatment of UC that have the potential to both decrease disease activity and improve HRQL. More research is needed to assess the long-term effect of biologic therapy on HRQL in patients with UC. More research is needed to assess the impact of golimumab and adalimumab on HRQL in UC patients. Trials involving direct head to head comparisons of biologics would help determine which biologics provide optimum benefit for HRQL.\n\nGiven the provided abstract, please respond to the following query: \"What are biological interventions for ulcerative colitis?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Biologics are genetically engineered medications made from living organisms. They work by targeting specific cells in the gut that are involved in the inflammation process."
}
]
|
query-laysum | 5528 | [
{
"role": "user",
"content": "Abstract: Background\nBreast-conserving therapy for women with breast cancer consists of local excision of the tumour (achieving clear margins) followed by radiotherapy (RT). Most true recurrences occur in the same quadrant as the original tumour. Whole breast radiotherapy (WBRT) may not protect against the development of a new primary cancer developing in other quadrants of the breast. In this Cochrane Review, we investigated the delivery of radiation to a limited volume of the breast around the tumour bed (partial breast irradiation (PBI)) sometimes with a shortened treatment duration (accelerated partial breast irradiation (APBI)).\nObjectives\nTo determine whether PBI/APBI is equivalent to or better than conventional or hypofractionated WBRT after breast-conserving therapy for early-stage breast cancer.\nSearch methods\nOn 27 August 2020, we searched the Cochrane Breast Cancer Group Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL and three trial databases. We searched for grey literature: OpenGrey (September 2020), reference lists of articles, conference proceedings and published abstracts, and applied no language restrictions.\nSelection criteria\nRandomised controlled trials (RCTs) without confounding, that evaluated conservative surgery plus PBI/APBI versus conservative surgery plus WBRT. Published and unpublished trials were eligible.\nData collection and analysis\nTwo review authors (BH and ML) performed data extraction, used Cochrane's risk of bias tool and resolved any disagreements through discussion, and assessed the certainty of the evidence for main outcomes using GRADE. Main outcomes were local recurrence-free survival, cosmesis, overall survival, toxicity (subcutaneous fibrosis), cause-specific survival, distant metastasis-free survival and subsequent mastectomy. We entered data into Review Manager 5 for analysis.\nMain results\nWe included nine RCTs that enrolled 15,187 women who had invasive breast cancer or ductal carcinoma in-situ (6.3%) with T1-2N0-1M0 Grade I or II unifocal tumours (less than 2 cm or 3 cm or less) treated with breast-conserving therapy with negative margins. 
This is the second update of the review and includes two new studies and 4432 more participants.\nLocal recurrence-free survival is probably slightly reduced (by 3/1000, 95% CI 6 fewer to 0 fewer) with the use of PBI/APBI compared to WBRT (hazard ratio (HR) 1.21, 95% confidence interval (CI) 1.03 to 1.42; 8 studies, 13,168 participants; moderate-certainty evidence).\nCosmesis (physician/nurse-reported) is probably worse (by 63/1000, 95% CI 35 more to 92 more) with the use of PBI/APBI (odds ratio (OR) 1.57, 95% CI 1.31 to 1.87; 6 studies, 3652 participants; moderate-certainty evidence).\nOverall survival is similar (0/1000 fewer, 95% CI 6 fewer to 6 more) with PBI/APBI and WBRT (HR 0.99, 95% CI 0.88 to 1.12; 8 studies, 13,175 participants; high-certainty evidence).\nLate radiation toxicity (subcutaneous fibrosis) is probably increased (by 14/1000 more, 95% CI 102 more to 188 more) with PBI/APBI (OR 5.07, 95% CI 3.81 to 6.74; 2 studies, 3011 participants; moderate-certainty evidence).\nThe use of PBI/APBI probably makes little difference (1/1000 less, 95% CI 6 fewer to 3 more) to cause-specific survival (HR 1.06, 95% CI 0.83 to 1.36; 7 studies, 9865 participants; moderate-certainty evidence).\nWe found the use of PBI/APBI compared with WBRT probably makes little or no difference (1/1000 fewer (95% CI 4 fewer to 6 more)) to distant metastasis-free survival (HR 0.95, 95% CI 0.80 to 1.13; 7 studies, 11,033 participants; moderate-certainty evidence).\nWe found the use of PBI/APBI in comparison with WBRT makes little or no difference (2/1000 fewer, 95% CI 20 fewer to 20 more) to mastectomy rates (OR 0.98, 95% CI 0.78 to 1.23; 3 studies, 3740 participants, high-certainty evidence).\nAuthors' conclusions\nIt appeared that local recurrence-free survival is probably worse with PBI/APBI; however, the difference was small and nearly all women remain free of local recurrence. Overall survival is similar with PBI/APBI and WBRT, and we found little to no difference in other oncological outcomes. Some late effects (subcutaneous fibrosis) may be worse with PBI/APBI and its use is probably associated with worse cosmetic outcomes. The limitations of the data currently available mean that we cannot make definitive conclusions about the efficacy and safety or ways to deliver PBI/APBI. We await completion of ongoing trials.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found nine studies, which involved 15,187 women. Our evidence is current to 27 August 2020. Local recurrence is probably slightly more common with APBI/PBI (moderate-quality evidence) and the breast appearance (scored by doctors and nurses) was probably worse with APBI/PBI (moderate-quality evidence). There is probably little difference in survival (high-quality evidence). Late radiation fibrosis (change in breast appearance and feel) is probably increased with APBI/PBI. There are probably few differences in breast cancer-related deaths and spread of breast cancer around the body with the use of APBI/PBI. The use of APBI/PBI makes little difference to how many women need mastectomy (removal of the whole breast) because of unacceptable late side effects or local recurrence."
}
]
|
query-laysum | 5529 | [
{
"role": "user",
"content": "Abstract: Background\nClinicians primarily recommend weight loss for obese women seeking pregnancy. The effectiveness of interventions aimed at weight loss in obese women with subfertility is unclear.\nObjectives\nTo assess the effectiveness and safety of pharmacological and non-pharmacological strategies compared with each other, placebo, or no treatment for achieving weight loss in obese women with subfertility.\nSearch methods\nWe searched the CGF Specialised Register, CENTRAL, MEDLINE, Embase, PsycINFO, and AMED from inception to 18 August 2020. We also checked reference lists and contacted experts in the field for additional relevant papers.\nSelection criteria\nWe included published and unpublished randomised controlled trials in which weight loss was the main goal of the intervention. Our primary effectiveness outcomes were live birth or ongoing pregnancy and primary safety outcomes were miscarriage and adverse events. Secondary outcomes included clinical pregnancy, weight change, quality of life, and mental health outcome.\nData collection and analysis\nReview authors followed standard Cochrane methodology.\nMain results\nThis review includes 10 trials. Evidence was of very low to low quality: the main limitations were due to lack of studies and poor reporting of study methods. The main reasons for downgrading evidence were lack of details by which to judge risk of bias (randomisation and allocation concealment), lack of blinding, and imprecision.\nNon-pharmacological intervention versus no intervention or placebo\nEvidence is insufficient to determine whether a diet or lifestyle intervention compared to no intervention affects live birth (odds ratio (OR) 0.85, 95% confidence interval (CI) 0.65 to 1.11; 918 women, 3 studies; I² = 78%; low-quality evidence). This suggests that if the chance of live birth following no intervention is assumed to be 43%, the chance following diet or lifestyle changes would be 33% to 46%. We are uncertain if lifestyle change compared with no intervention affects miscarriage rate (OR 1.54, 95% CI 0.99 to 2.39; 917 women, 3 studies; I² = 0%; very low-quality evidence). Evidence is insufficient to determine whether lifestyle change compared with no intervention affects clinical pregnancy (OR 1.06, 95% CI 0.81 to 1.40; 917 women, 3 studies; I² = 73%; low-quality evidence). Lifestyle intervention resulted in a decrease in body mass index (BMI), but data were not pooled due to heterogeneity in effect (mean difference (MD) -3.70, 95% CI -4.10 to -3.30; 305 women, 1 study; low-quality evidence; and MD -1.80, 95% CI -2.67 to -0.93; 43 women, 1 study; very low-quality evidence).\nNon-pharmacological versus non-pharmacological intervention\nWe are uncertain whether intensive weight loss interventions compared to standard care nutrition counselling affects live birth (OR 11.00, 95% CI 0.43 to 284; 11 women, 1 study; very low-quality evidence), clinical pregnancy (OR 11.00, 95% CI 0.43 to 284; 11 women, 1 study; very low-quality evidence), BMI (MD -3.00, 95% CI -5.37 to -0.63; 11 women, 1 study; very low-quality evidence), weight change (MD -9.00, 95% CI -15.50 to -2.50; 11 women, 1 study; very low-quality evidence), quality of life (MD 0.06, 95% CI -0.03 to 0.15; 11 women, 1 study; very low-quality evidence), or mental health (MD -7.00, 95% CI -13.92 to -0.08; 11 women, 1 study; very low-quality evidence). 
No study reported on adverse events.\nPharmacological versus pharmacological intervention\nFor metformin plus liraglutide compared to metformin, we are uncertain of an effect on the adverse events nausea (OR 7.22, 95% CI 0.72 to 72.7; 28 women, 1 study; very low-quality evidence), diarrhoea (OR 0.31, 95% CI 0.01 to 8.3; 28 women, 1 study; very low-quality evidence), and headache (OR 5.80, 95% CI 0.25 to 133; 28 women, 1 study; very low-quality evidence). We are uncertain if a combination of metformin plus liraglutide vs metformin affects BMI (MD 2.1, 95% CI -0.42 to 2.62; 28 women, 1 study; very low-quality evidence) and total body fat (MD -0.50, 95% CI -4.65 to 3.65; 28 women, 1 study; very low-quality evidence).\nFor metformin, clomiphene, and L-carnitine versus metformin, clomiphene, and placebo, we are uncertain of an effect on miscarriage (OR 3.58, 95% CI 0.73 to 17.55; 274 women, 1 study; very low-quality evidence), clinical pregnancy (OR 5.56, 95% CI 2.57 to 12.02; 274 women, 1 study; very low-quality evidence) or BMI (MD -0.3, 95% CI -1.17 to 0.57; 274 women, 1 study; very low-quality evidence).\nWe are uncertain if dexfenfluramine versus placebo affects weight loss in kilograms (MD -0.10, 95% CI -2.77 to 2.57; 21 women, 1 study; very low-quality evidence). No study reported on live birth, quality of life, or mental health outcomes.\nPharmacological intervention versus no intervention or placebo\nWe are uncertain if metformin compared with placebo affects live birth (OR 1.57, 95% CI 0.44 to 5.57; 65 women, 1 study; very low-quality evidence). This suggests that if the chance of live birth following placebo is assumed to be 15%, the chance following metformin would be 7% to 50%. We are uncertain if metformin compared with placebo affects gastrointestinal adverse events (OR 0.91, 95% CI 0.32 to 2.57; 65 women, 1 study; very low-quality evidence) or miscarriage (OR 0.50, 95% CI 0.04 to 5.80; 65 women, 1 study; very low-quality evidence) or clinical pregnancy (OR 2.67, 95% CI 0.90 to 7.93; 96 women, 2 studies; I² = 48%; very low-quality evidence). We are also uncertain if diet combined with metformin versus diet and placebo affects BMI (MD -0.30, 95% CI -2.16 to 1.56; 143 women, 1 study; very low-quality evidence) or waist-to-hip ratio (WHR) (MD 2.00, 95% CI -2.21 to 6.21; 143 women, 1 study; very low-quality evidence).\nPharmacological versus non-pharmacological intervention\nNo study undertook this comparison.\nAuthors' conclusions\nEvidence is insufficient to support the use of pharmacological and non-pharmacological strategies for obese women with subfertility. No data are available for the comparison of pharmacological versus non-pharmacological strategies. We are uncertain whether pharmacological or non-pharmacological strategies affect live birth, ongoing pregnancy, adverse events, clinical pregnancy, quality of life, or mental health outcomes. However, for obese women with subfertility, a lifestyle intervention may reduce BMI. Future studies should compare a combination of pharmacological and lifestyle interventions for obese women with subfertility.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Lack of data is a major concern in interpretation of these data. Only 10 studies were included in the analysis. Three studies compared non-pharmacological intervention versus no intervention or placebo. We are uncertain whether diet versus no intervention improves live birth, ongoing pregnancy, clinical pregnancy, or adverse events. A diet or lifestyle intervention may result in body mass index (BMI) weight change. Evidence was insufficient to show a difference in waist-to-hip ratio (WHR) with diet or lifestyle change compared to no intervention. No study reported on quality of life, or mental health outcomes for this comparison.\nOne study compared non-pharmacological interventions - intensive weight loss intervention versus standard of care nutrition counselling; however due to the very low quality of evidence, we are uncertain whether intensive weight loss interventions improve live birth, clinical pregnancy, quality of life, or mental health outcomes. No study reported on adverse events, or weight changes, for this comparison.\nThree studies reported on pharmacological versus pharmacological interventions. Evidence was insufficient to show a difference in adverse events between metformin compared to metformin plus liraglutide. Evidence was insufficient to demonstrate a difference between the combination of metformin, clomiphene, and L-carnitine versus metformin, clomiphene, and placebo for miscarriage. Evidence was insufficient to reveal a difference between the combination of metformin, clomiphene, and L-carnitine versus metformin, clomiphene, and placebo or metformin plus liraglutide versus metformin in clinical pregnancy. Evidence was insufficient to demonstrate a difference between the combination of metformin, clomiphene, and L-carnitine versus metformin, clomiphene, and placebo for weight change using BMI. Moreover, evidence was insufficient to reveal a difference between dexfenfluramine versus placebo or metformin plus liraglutide versus metformin for weight loss in kilograms, or per cent of total body fat. No study reported on live birth and change in quality of life or mental health outcomes for this comparison.\nIn the comparison of pharmacological intervention versus no intervention or placebo, three studies were included. Evidence was insufficient to show a difference between metformin and control groups related to live birth. Evidence was insufficient to reveal a difference between metformin compared to placebo in live birth, clinical pregnancy, or adverse events. Evidence was insufficient to demonstrate a difference between diet combined with metformin versus diet combined with placebo for weight change using BMI or WHR. No study reported on quality of life or mental health outcomes for this comparison.\nWe found no study comparing non-pharmacological with pharmacological interventions."
}
]
|
query-laysum | 5530 | [
{
"role": "user",
"content": "Abstract: Background\nAntisocial personality disorder (AsPD) is associated with rule-breaking, criminality, substance use, unemployment, relationship difficulties, and premature death. Certain types of medication (drugs) may help people with AsPD. This review updates a previous Cochrane review, published in 2010.\nObjectives\nTo assess the benefits and adverse effects of pharmacological interventions for adults with AsPD.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, 13 other databases and two trials registers up to 5 September 2019. We also checked reference lists and contacted study authors to identify studies.\nSelection criteria\nRandomised controlled trials in which adults (age 18 years and over) with a diagnosis of AsPD or dissocial personality disorder were allocated to a pharmacological intervention or placebo control condition.\nData collection and analysis\nFour authors independently selected studies and extracted data. We assessed risk of bias and created 'Summary of findings tables' and assessed the certainty of the evidence using the GRADE framework. The primary outcomes were: aggression; reconviction; global state/global functioning; social functioning; and adverse events.\nMain results\nWe included 11 studies (three new to this update), involving 416 participants with AsPD. Most studies (10/11) were conducted in North America. Seven studies were conducted exclusively in an outpatient setting, one in an inpatient setting, and one in prison; two studies used multiple settings. The average age of participants ranged from 28.6 years to 45.1 years (overall mean age 39.6 years). Participants were predominantly (90%) male. Study duration ranged from 6 to 24 weeks, with no follow-up period. Data were available from only four studies involving 274 participants with AsPD. All the available data came from unreplicated, single reports, and did not allow independent statistical analysis to be conducted. Many review findings were limited to descriptive summaries based on analyses carried out and reported by the trial investigators.\nNo study set out to recruit participants on the basis of having AsPD; many participants presented primarily with substance abuse problems. The studies reported on four primary outcomes and six secondary outcomes. Primary outcomes were aggression (six studies) global/state functioning (three studies), social functioning (one study), and adverse events (seven studies). Secondary outcomes were leaving the study early (eight studies), substance misuse (five studies), employment status (one study), impulsivity (one study), anger (three studies), and mental state (three studies). No study reported data on the primary outcome of reconviction or the secondary outcomes of quality of life, engagement with services, satisfaction with treatment, housing/accommodation status, economic outcomes or prison/service outcomes.\nEleven different drugs were compared with placebo, but data for AsPD participants were only available for five comparisons. Three classes of drug were represented: antiepileptic; antidepressant; and dopamine agonist (anti-Parkinsonian) drugs. We considered selection bias to be unclear in 8/11 studies, attrition bias to be high in 7/11 studies, and performance bias to be low in 7/11 studies. 
Using GRADE, we rated the certainty of evidence for each outcome in this review as very low, meaning that we have very little confidence in the effect estimates reported.\nPhenytoin (antiepileptic) versus placebo\nOne study (60 participants) reported very low-certainty evidence that phenytoin (300 mg/day), compared to placebo, may reduce the mean frequency of aggressive acts per week (phenytoin mean = 0.33, no standard deviation (SD) reported; placebo mean = 0.51, no SD reported) in male prisoners with aggression (skewed data) at endpoint (six weeks). The same study (60 participants) reported no evidence of difference between phenytoin and placebo in the number of participants reporting the adverse event of nausea during week one (odds ratio (OR) 1.00, 95% confidence interval (CI) 0.06 to 16.76; very low-certainty evidence). The study authors also reported that no important side effects were detectable via blood cell counts or liver enzyme tests (very low-certainty evidence).\nThe study did not measure reconviction, global/state functioning or social functioning.\nDesipramine (antidepressant) versus placebo\nOne study (29 participants) reported no evidence of a difference between desipramine (250 to 300 mg/day) and placebo on mean social functioning scores (desipramine = 0.19; placebo = 0.21), assessed with the family-social domain of the Addiction Severity Index (scores range from zero to one, with higher values indicating worse social functioning), at endpoint (12 weeks) (very low-certainty evidence).\nNeither of the studies included in this comparison measured the other primary outcomes: aggression; reconviction; global/state functioning; or adverse events.\nNortriptyline (antidepressant) versus placebo\nOne study (20 participants) reported no evidence of a difference between nortriptyline (25 to 75 mg/day) and placebo on mean global state/functioning scores (nortriptyline = 0.3; placebo = 0.7), assessed with the Symptom Check List-90 (SCL-90) Global Severity Index (GSI; mean of subscale scores, ranging from zero to four, with higher scores indicating greater severity of symptoms), at endpoint (six months) in men with alcohol dependency (very low-certainty evidence).\nThe study measured side effects but did not report data on adverse events for the AsPD subgroup.\nThe study did not measure aggression, reconviction or social functioning.\nBromocriptine (dopamine agonist) versus placebo\nOne study (18 participants) reported no evidence of difference between bromocriptine (15 mg/day) and placebo on mean global state/functioning scores (bromocriptine = 0.4; placebo = 0.7), measured with the GSI of the SCL-90 at endpoint (six months) (very low-certainty evidence).\nThe study did not provide data on adverse effects, but reported that 12 patients randomised to the bromocriptine group experienced severe side effects, five of whom dropped out of the study in the first two days due to nausea and severe flu-like symptoms (very low-certainty evidence).\nThe study did not measure aggression, reconviction and social functioning.\nAmantadine (dopamine agonist) versus placebo\nThe study in this comparison did not measure any of the primary outcomes.\nAuthors' conclusions\nThe evidence summarised in this review is insufficient to draw any conclusion about the use of pharmacological interventions in the treatment of antisocial personality disorder. The evidence comes from single, unreplicated studies of mostly older medications. 
The studies also have methodological issues that severely limit the confidence we can draw from their results. Future studies should recruit participants on the basis of having AsPD, and use relevant outcome measures, including reconviction.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The certainty of the evidence is very low, meaning that we are not confident in the findings. There is not enough evidence to determine whether or not medication is a helpful treatment for people with AsPD.\nFurther research is required to clarify which medications, if any, are effective for treating the main features of AsPD. Future studies should recruit participants on the basis of having AsPD, and include reconviction as an outcome measure."
}
]
|
query-laysum | 5531 | [
{
"role": "user",
"content": "Abstract: Background\nChancroid is a genital ulcerative disease caused by Haemophilus ducreyi. This microorganism is endemic in Africa, where it can cause up to 10% of genital ulcers. Macrolides may be an effective alternative to treat chancroid and, based on their oral administration and duration of therapy, could be considered as first line therapy.\nObjectives\nTo assess the effectiveness and safety of macrolides for treatment of H ducreyi infection in sexually active adults.\nSearch methods\nWe searched the Cochrane STI Group Specialized Register, CENTRAL, MEDLINE, Embase, LILACS, WHO ICTRP, ClinicalTrials.gov and Web of Science to 30 October 2017. We also handsearched conference proceedings and reference lists of retrieved studies.\nSelection criteria\nRandomized controlled trials (RCTs) comparing macrolides in different regimens or with other therapeutic alternatives for chancroid.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. We resolved disagreements through consensus. We used the GRADE approach to assess the quality of the evidence.\nMain results\nSeven RCTs (875 participants) met our inclusion criteria, of which four were funded by industry. Five studies (664 participants) compared macrolides with ceftriaxone, ciprofloxacin, spectinomycin or thiamphenicol. Low quality evidence suggested there was no difference between the groups after treatment in terms of clinical cure (risk ratio (RR) 1.09, 95% confidence interval (CI) 0.97 to 1.21; 2 studies, 340 participants with syndromic approach and RR 1.06, 95% CI 0.98 to 1.15; 5 studies, 348 participants with aetiological diagnosis) or improvement (RR 0.89, 95% CI 0.52 to 1.52; 2 studies, 340 participants with syndromic approach and RR 0.80, 95% CI 0.42 to 1.51; 3 studies, 187 participants with aetiological diagnosis). Based on low and very low quality evidence, there was no difference between macrolides and any other antibiotic treatments for microbiological cure (RR 0.93, 95% CI 0.74 to 1.16; 1 study, 45 participants) and minor adverse effects (RR 1.34, 95% CI 0.24 to 7.51; 3 studies, 412 participants).\nTwo trials (269 participants) compared erythromycin with any other macrolide type. Low quality evidence suggested that, compared with azithromycin or rosaramicin, long courses of erythromycin did not increase clinical cure (RR 1.00, 95% CI 0.91 to 1.10; 2 studies, 269 participants with syndromic approach and RR 1.04, 95% CI 0.93 to 1.16; 2 studies, 211 participants with aetiological diagnosis), with a similar frequency of minor adverse effects between the groups (RR 1.14, 95% CI 0.63 to 2.06; 1 trial, 101 participants). For this comparison, subgroup analysis found no difference between HIV-positive participants (RR 1.02, 95% CI 0.73 to 1.43; 1 study, 38 participants) and HIV-negative participants (RR 1.04, 95% CI 0.94 to 1.14; 1 study, 89 participants). We downgraded the quality of evidence to low, because of imprecision, some limitations on risk of bias and heterogeneity.\nNone of the trials reported serious adverse events, cost effectiveness and participant satisfaction.\nAuthors' conclusions\nAt present, the quality of the evidence on the effectiveness and safety of macrolides for treatment of H ducreyi infection in sexually active adults is low, implying that we are uncertain about the estimated treatment effect. 
There is no statistically significant difference between the available therapeutic alternatives for the treatment of sexually active adults with genital ulcers compatible with chancroid. Low quality evidence suggests that azithromycin could be considered as the first therapeutic alternative, based on its mono-dose oral administration, with a similar safety and effectiveness profile, when it is compared with long-term erythromycin use.\nDue to sparse available evidence about the safety and effectiveness of macrolides to treat H ducreyi infection in people with HIV, these results should be taken with caution.\n\nGiven the provided abstract, please respond to the following query: \"Trial characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the available literature up to 30 October 2017 and we included seven trials with 875 participants. The trials recruited men and non-pregnant women over the age of 16 years who presented with genital ulcers compatible with chancroid. Three studies included people with high-risk sexual behaviour as migrant mine workers, sex workers and men with prostitute contact. All the trials tested for other STIs, and discarded people who were also infected with syphilis and herpes simple virus. One study included people with HIV. The most frequent antibiotic treatment was erythromycin with azithromycin as an alternative. They were compared with ceftriaxone, ciprofloxacin, spectinomycin or thiamphenicol. Five studies compared two types of antibiotic and two studies compared different ways of treating with macrolides. Four trials were funded by pharmaceutical companies."
}
]
|
query-laysum | 5532 | [
{
"role": "user",
"content": "Abstract: Background\nYoga is an ancient spiritual practice that originated in India and is currently accepted in the Western world as a form of relaxation and exercise. It has been of interest for people with schizophrenia to determine its efficacy as an adjunct to standard-care treatment.\nObjectives\nTo examine the effects of yoga versus standard care for people with schizophrenia.\nSearch methods\nWe searched the Cochrane Schizophrenia Group Trials Register (November 2012 and January 29, 2015), which is based on regular searches of MEDLINE, PubMed, EMBASE, CINAHL, BIOSIS, AMED, PsycINFO, and registries of clinical trials. We searched the references of all included studies. There were no language, date, document type, or publication status limitations for inclusion of records in the register.\nSelection criteria\nAll randomised controlled trials (RCTs) including people with schizophrenia comparing yoga to standard-care control.\nData collection and analysis\nThe review team independently selected studies, quality rated these, and extracted data. For binary outcomes, we calculated risk ratio (RR) and its 95% confidence interval (CI), on an intention-to-treat basis. For continuous data, we estimated the mean difference (MD) between groups and its 95% CI. We employed mixed-effect and fixed-effect models for analyses. We examined data for heterogeneity (I2 technique), assessed risk of bias for included studies, and created 'Summary of findings' tables using GRADE (Grading of Recommendations Assessment, Development and Evaluation).\nMain results\nWe included eight studies in the review. All outcomes were short term (less than six months). There were clear differences in a number of outcomes in favour of the yoga group, although these were based on one study each, with the exception of leaving the study early. These included mental state (improvement in Positive and Negative Syndrome Scale, 1 RCT, n = 83, RR 0.70 CI 0.55 to 0.88, medium-quality evidence), social functioning (improvement in Social Occupational Functioning Scale, 1 RCT, n = 83, RR 0.88 CI 0.77 to 1, medium-quality evidence), quality of life (average change 36-Item Short Form Survey (SF-36) quality-of-life subscale, 1 RCT, n = 60, MD 15.50, 95% CI 4.27 to 26.73, low-quality evidence), and leaving the study early (8 RCTs, n = 457, RR 0.91 CI 0.6 to 1.37, medium-quality evidence). For the outcome of physical health, there was not a clear difference between groups (average change SF-36 physical-health subscale, 1 RCT, n = 60, MD 6.60, 95% CI -2.44 to 15.64, low-quality evidence). Only one study reported adverse effects, finding no incidence of adverse events in either treatment group. This review was subject to a considerable number of missing outcomes, which included global state, change in cognition, costs of care, effect on standard care, service intervention, disability, and activities of daily living.\nAuthors' conclusions\nEven though we found some positive evidence in favour of yoga over standard-care control, this should be interpreted cautiously in view of outcomes largely based each on one study with limited sample sizes and short-term follow-up. Overall, many outcomes were not reported and evidence presented in this review is of low to moderate quality - -too weak to indicate that yoga is superior to standard-care control for the management of schizophrenia.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Is yoga effective as an add-on treatment for people with schizophrenia?"
}
]
|
query-laysum | 5533 | [
{
"role": "user",
"content": "Abstract: Background\nNeurocysticercosis is a parasitic infection of the central nervous system by the larval stage of the pork tapeworm and is a common cause of seizures and epilepsy in endemic areas. Anthelmintics (albendazole or praziquantel) may be given alongside supportive treatment (antiepileptics/analgesia) with the aim of killing these larvae (cysticerci), with or without corticosteroid treatment. However, there are potential adverse effects of these drugs, and the cysticerci may eventually die without directed anthelminthic treatment.\nObjectives\nTo assess the effects of anthelmintics on people with neurocysticercosis.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, the WHO ICTRP, and ClinicalTrials.gov, up to 21 October 2020.\nSelection criteria\nRandomized controlled trials comparing anthelmintics and supportive treatment (+/- corticosteroids) with supportive treatment alone (+/- corticosteroids) for people with neurocysticercosis.\nData collection and analysis\nTwo review authors independently screened the title and abstract of all articles identified by the search. We obtained full-text articles to confirm the eligibility of all studies that passed screening. One review author extracted data, which a second review author checked. Two review authors assessed the risk of bias of each trial and performed GRADE assessments. In cases of disagreement at consensus discussion stage between review authors, we consulted a third review author. We calculated risk ratios (RR) for dichotomous variables, with 95% confidence intervals (CIs) for pooled data from studies with similar interventions and outcomes.\nMain results\nWe included 16 studies in the review. Only two studies investigated praziquantel and did not report data in a format that could contribute to meta-analysis. Most results in this review are therefore applicable to albendazole versus placebo or no anthelmintic.\nThe aggregate analysis across all participants with neurocysticercosis did not demonstrate a difference between groups in seizure recurrence, but heterogeneity was marked (RR 0.94, 95% CI 0.78 to 1.14; 10 trials, 1054 participants; I2 = 67%; low-certainty evidence). When stratified by participants with a single cyst or multiple cysts, pooled analysis suggests that albendazole probably improves seizure recurrence for participants with a single cyst (RR 0.61, 95% CI 0.4 to 0.91; 5 trials, 396 participants; moderate-certainty evidence). All studies contributing to this analysis recruited participants with non-viable, intraparenchymal cysts only, and most participants were children. We are uncertain whether or not albendazole reduces seizure recurrence in participants with multiple cysts, as the certainty of the evidence is very low, although the direction of effect is towards albendazole causing harm (RR 2.05, 95% CI 1.28 to 3.31; 2 trials, 321 participants; very low-certainty evidence). 
This analysis included a large study containing a highly heterogeneous population that received an assessment of unclear risk for multiple 'Risk of bias' domains.\nRegarding radiological outcomes, albendazole probably slightly improves the complete radiological clearance of lesions (RR 1.22, 95% CI 1.07 to 1.39; 13 trials, 1324 participants; moderate-certainty evidence) and the evolution of cysts (RR 1.27, 95% CI 1.10 to 1.47; 6 trials, 434 participants; moderate-certainty evidence).\nMore adverse events appeared to be observed in participants treated with either albendazole or praziquantel compared to those receiving placebo or no anthelmintic. The most commonly reported side effects were headache, abdominal pain, and nausea/vomiting.\nAuthors' conclusions\nFor participants with a single cyst, there was less seizure recurrence in the albendazole group compared to the placebo/no anthelmintic group. The studies contributing to this evidence only recruited participants with a non-viable intraparenchymal cyst. We are uncertain whether albendazole reduces seizure recurrence for participants with multiple cysts. We also found that albendazole probably increases radiological clearance and evolution of lesions. There were very few studies reporting praziquantel outcomes, and these findings apply to albendazole only.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this review was to explore whether treatment with anthelmintics (drugs that kill worms) can benefit people with neurocysticercosis (an infection of the brain caused by the pork tapeworm). The primary outcome of the review was the impact of treatment on seizures (epilepsy). We collected and analysed all relevant studies (trials) to answer this question and found 16 studies. The most commonly reported outcomes were those relating to seizures and also the number and appearance of lesions caused by viable or degrading cysts (dormant worms) on brain imaging."
}
]
|
query-laysum | 5534 | [
{
"role": "user",
"content": "Abstract: Background\nCrohn's disease (CD) is a chronic inflammatory disorder of the gastrointestinal tract, in which the pathogenesis is believed to be partly influenced by the gut microbiome. Probiotics can be used to manipulate the microbiome and have therefore been considered as a potential therapy for CD. There is some evidence that probiotics benefit other gastrointestinal conditions, such as irritable bowel syndrome and ulcerative colitis, but their efficacy in CD is unclear. This is the first update of a Cochrane Review previously published in 2008.\nObjectives\nTo assess the efficacy and safety of probiotics for the induction of remission in CD.\nSearch methods\nThe following electronic databases were searched: MEDLINE (from inception to 6 July 2020), Embase (from inception to 6 July 2020), the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane IBD Review Group Specialised Trials Register, World Health Organization (WHO) International Clinical Trials Registry, and ClinicalTrials.gov.\nSelection criteria\nRandomised controlled trials (RCTs) that compared probiotics with placebo or any other non-probiotic intervention for the induction of remission in CD were eligible for inclusion.\nData collection and analysis\nTwo review authors independently extracted data and assessed the methodological quality of included studies. The primary outcome was clinical remission. Risk ratios (RRs) and 95% confidence intervals (CIs) were calculated for dichotomous outcomes.\nMain results\nThere were two studies that met criteria for inclusion. One study from Germany had 11 adult participants with mild-to-moderate CD, who were treated with a one-week course of corticosteroids and antibiotics (ciprofloxacin 500 mg twice daily and metronidazole 250 mg three times a day), followed by randomised assignment to Lactobacillus rhamnosus strain GG (two billion colony-forming units per day) or corn starch placebo. The other study from the United Kingdom (UK) had 35 adult participants with active CD (CDAI score of 150 to 450) randomised to receive a synbiotic treatment (comprised of freeze-dried Bifidobacterium longum and a commercial product) or placebo. The overall risk of bias was low in one study, whereas the other study had unclear risk of bias in relation to random sequence generation, allocation concealment, and blinding. There was no evidence of a difference between the use of probiotics and placebo for the induction of remission in CD (RR 1.06; 95% CI 0.65 to 1.71; 2 studies, 46 participants) after six months. There was no difference in adverse events between probiotics and placebo (RR 2.55; 95% CI 0.11 to 58.60; 2 studies, 46 participants). The evidence for both outcomes was of very low certainty due to risk of bias and imprecision.\nAuthors' conclusions\nThe available evidence is very uncertain about the efficacy or safety of probiotics, when compared with placebo, for induction of remission in Crohn's disease. There is a lack of well-designed RCTs in this area and further research is needed.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is unclear whether probiotics are better than placebo (dummy pill). No serious adverse events occurred in either study."
}
]
|
query-laysum | 5535 | [
{
"role": "user",
"content": "Abstract: Background\nDementia is a common and serious neuropsychiatric syndrome, characterised by progressive cognitive and functional decline. The majority of people with dementia develop behavioural disturbances, also known as behavioural and psychological symptoms of dementia (BPSD). Several non-pharmacological interventions have been evaluated to treat BPSD in people with dementia. Simulated presence therapy (SPT), an intervention that uses video or audiotape recordings of family members played to the person with dementia, is a possible approach to treat BPSD.\nObjectives\nTo assess the effects of SPT on behavioural and psychological symptoms and quality of life in people with dementia.\nSearch methods\nWe searched ALOIS (the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group), CENTRAL (The Cochrane Library) (9 April 2020), MEDLINE Ovid SP (1946 to 9 April 2020), Embase Ovid SP (1972 to 9 April 2020), PsycINFO Ovid SP (1806 to 9 April 2020), CINAHL via EBSCOhost (1980 to 9 April 2020), LILACS via BIREME (all dates to 9 April 2020), ClinicalTrials.gov (ClinicalTrials.gov) (all dates to 9 April 2020), and the World Health Organization (WHO) Portal (apps.who.int/trialsearch) (all dates to 9 April 2020). We also checked the reference lists of relevant articles to identify any additional studies.\nSelection criteria\nRandomised and quasi-randomised controlled trials, including cross-over studies, that evaluated the efficacy of SPT, consisting of personalised audio or videotape recordings of family members, in people with any form of dementia.\nData collection and analysis\nTwo authors independently selected studies, assessed risk of bias and extracted data. No meta-analyses were conducted because of substantial heterogeneity among the included studies.\nMain results\nThree trials with 144 participants met the inclusion criteria. Two of the trials had a randomised cross-over design, one was a cross-over trial which we classified as quasi-randomised.\nParticipants in the included studies were people with dementia living in nursing homes. They were predominantly women and had a mean age of over 80 years. SPT was performed using an audio or video recording prepared by family members or surrogates. It varied in its content, frequency of administration and duration. All the studies compared multiple treatments. In one study, SPT was compared with two other interventions; in the other two studies, it was compared with three other interventions. Specifically, SPT was compared to usual care, personalised music (two studies), a 'placebo' audiotape containing the voice of a person (two studies), and one-to-one social interaction performed by trained research assistants (one study). 
In terms of outcomes evaluated, one study considered agitation and withdrawn behaviour (both assessed with three methods); the second study evaluated verbal disruptive behaviour (assessed with three methods); and the third study evaluated physically agitated behaviour and verbally agitated behaviour (the method used was not clearly described).\nAccording to the GRADE criteria, the overall quality of the evidence was very low due to very small numbers of participants and risk of bias in the included studies (none of the trials was at low risk of selection bias; all the trials were at high risk of performance bias; one trial was at high risk of attrition bias; and all had unclear selective reporting).\nBecause of variation in the participants, the format of SPT, the comparison interventions, and the measures used to assess outcomes, we judged the results unsuitable for a meta-analysis.\nWithin each trial, the effect of SPT on behaviour, compared to usual care, was mixed and depended on the measure used. Two trials which included a personalised music intervention reported no significant differences between simulated presence and music on behavioural outcomes. Because the overall quality of the evidence was very low, we were very uncertain regarding all the results.\nNone of the studies evaluated quality of life or any of our secondary outcome measures (performance of activities of daily living, dropout and carer burden).\nAuthors' conclusions\nWe were unable to draw any conclusions about the efficacy of SPT for treating behavioural and psychological symptoms and improving quality of life of people with dementia. New high-quality studies are needed to investigate the effect of SPT.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Dementia is an illness, most common among older people, in which memory and other brain functions deteriorate and people gradually become dependent on others for care. Many people with dementia, particularly in its later stages, show signs of distress at times, or behave in ways which are difficult for their caregivers to manage. There is a lot of interest in finding ways to treat this without using drugs. Simulated presence therapy is a type of treatment which has been used mainly in nursing homes. It involves playing a personalised video or audiotape recording of family members to the person with dementia when he or she is distressed or agitated."
}
]
|
query-laysum | 5536 | [
{
"role": "user",
"content": "Abstract: Background\nThe management of postoperative pain and recovery is still unsatisfactory in a number of cases in clinical practice. Opioids used for postoperative analgesia are frequently associated with adverse effects, including nausea and constipation, preventing smooth postoperative recovery. Not all patients are suitable for, and benefit from, epidural analgesia that is used to improve postoperative recovery. The non-opioid, lidocaine, was investigated in several studies for its use in multimodal management strategies to reduce postoperative pain and enhance recovery. This review was published in 2015 and updated in January 2017.\nObjectives\nTo assess the effects (benefits and risks) of perioperative intravenous (IV) lidocaine infusion compared to placebo/no treatment or compared to epidural analgesia on postoperative pain and recovery in adults undergoing various surgical procedures.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, and reference lists of articles in January 2017. We searched one trial registry contacted researchers in the field, and handsearched journals and congress proceedings. We updated this search in February 2018, but have not yet incorporated these results into the review.\nSelection criteria\nWe included randomized controlled trials comparing the effect of continuous perioperative IV lidocaine infusion either with placebo, or no treatment, or with thoracic epidural analgesia (TEA) in adults undergoing elective or urgent surgery under general anaesthesia. The IV lidocaine infusion must have been started intraoperatively, prior to incision, and continued at least until the end of surgery.\nData collection and analysis\nWe used Cochrane's standard methodological procedures. Our primary outcomes were: pain score at rest; gastrointestinal recovery and adverse events. Secondary outcomes included: postoperative nausea and postoperative opioid consumption. We used GRADE to assess the quality of evidence for each outcome.\nMain results\nWe included 23 new trials in the update. In total, the review included 68 trials (4525 randomized participants). Two trials compared IV lidocaine with TEA. In all remaining trials, placebo or no treatment was used as a comparator. Trials involved participants undergoing open abdominal (22), laparoscopic abdominal (20), or various other surgical procedures (26). The application scheme of systemic lidocaine strongly varies between the studies related to both dose (1 mg/kg/h to 5 mg/kg/h) and termination of the infusion (from the end of surgery until several days after).\nThe risk of bias was low with respect to selection bias (random sequence generation), performance bias, attrition bias, and detection bias in more than 50% of the included studies. For allocation concealment and selective reporting, the quality assessment yielded low risk of bias for only approximately 20% of the included studies.\nIV Lidocaine compared to placebo or no treatment\nWe are uncertain whether IV lidocaine improves postoperative pain compared to placebo or no treatment at early time points (1 to 4 hours) (standardized mean difference (SMD) −0.50, 95% confidence interval (CI) −0.72 to −0.28; 29 studies, 1656 participants; very low-quality evidence) after surgery. Due to variation in the standard deviation (SD) in the studies, this would equate to an average pain reduction of between 0.37 cm and 2.48 cm on a 0 to 10 cm visual analogue scale . 
Assuming approximately 1 cm on a 0 to 10 cm pain scale is clinically meaningful, we ruled out a clinically relevant reduction in pain with lidocaine at intermediate (24 hours) (SMD −0.14, 95% CI −0.25 to −0.04; 33 studies, 1847 participants; moderate-quality evidence), and at late time points (48 hours) (SMD −0.11, 95% CI −0.25 to 0.04; 24 studies, 1404 participants; moderate-quality evidence). Due to variation in the SD in the studies, this would equate to an average pain reduction of between 0.10 cm to 0.48 cm at 24 hours and 0.08 cm to 0.42 cm at 48 hours. In contrast to the original review in 2015, we did not find any significant subgroup differences for different surgical procedures.\nWe are uncertain whether lidocaine reduces the risk of ileus (risk ratio (RR) 0.37, 95% CI 0.15 to 0.87; 4 studies, 273 participants), time to first defaecation/bowel movement (mean difference (MD) −7.92 hours, 95% CI −12.71 to −3.13; 12 studies, 684 participants), risk of postoperative nausea (overall, i.e. 0 up to 72 hours) (RR 0.78, 95% CI 0.67 to 0.91; 35 studies, 1903 participants), and opioid consumption (overall) (MD −4.52 mg morphine equivalents , 95% CI −6.25 to −2.79; 40 studies, 2201 participants); quality of evidence was very low for all these outcomes.\nThe effect of IV lidocaine on adverse effects compared to placebo treatment is uncertain, as only a small number of studies systematically analysed the occurrence of adverse effects (very low-quality evidence).\nIV Lidocaine compared to TEA\nThe effects of IV lidocaine compared with TEA are unclear (pain at 24 hours (MD 1.51, 95% CI −0.29 to 3.32; 2 studies, 102 participants), pain at 48 hours (MD 0.98, 95% CI −1.19 to 3.16; 2 studies, 102 participants), time to first bowel movement (MD −1.66, 95% CI −10.88 to 7.56; 2 studies, 102 participants); all very low-quality evidence). The risk for ileus and for postoperative nausea (overall) is also unclear, as only one small trial assessed these outcomes (very low-quality evidence). No trial assessed the outcomes, 'pain at early time points' and 'opioid consumption (overall)'. The effect of IV lidocaine on adverse effects compared to TEA is uncertain (very low-quality evidence).\nAuthors' conclusions\nWe are uncertain whether IV perioperative lidocaine, when compared to placebo or no treatment, has a beneficial impact on pain scores in the early postoperative phase, and on gastrointestinal recovery, postoperative nausea, and opioid consumption. The quality of evidence was limited due to inconsistency, imprecision, and study quality. Lidocaine probably has no clinically relevant effect on pain scores later than 24 hours. Few studies have systematically assessed the incidence of adverse effects. There is a lack of evidence about the effects of IV lidocaine compared with epidural anaesthesia in terms of the optimal dose and timing (including the duration) of the administration. We identified three ongoing studies, and 18 studies are awaiting classification; the results of the review may change when these studies are published and included in the review.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We rated the quality of evidence for most outcomes as very low. This was because of inconsistent findings across studies and the fact that the evidence came from small studies that were of moderate design quality or a limited number of studies. The quality of the evidence for minimal or no effect on pain at 24 and 48 hours was moderate quality. The studies involved a variety of surgical procedures. The dose of lidocaine used, and how long it was delivered for after the end of surgery, also varied between studies."
}
]
|
query-laysum | 5537 | [
{
"role": "user",
"content": "Abstract: Background\nCancer cachexia is a multifactorial syndrome characterised by an ongoing loss of skeletal muscle mass, with or without a loss of fat mass, leading to progressive functional impairment. Physical exercise may attenuate cancer cachexia and its impact on patient function. This is the first update of an original Cochrane Review published in Issue 11, 2014, which found no studies to include.\nObjectives\nTo determine the effectiveness, acceptability and safety of exercise, compared with usual care, no treatment or active control, for cancer cachexia in adults.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and eight other databases to March 2020. We searched for ongoing studies in trial registries, checked reference lists and contacted experts to seek relevant studies.\nSelection criteria\nWe sought randomised controlled trials in adults with cancer cachexia, that compared a programme of exercise alone or in combination with another intervention, with usual care, no treatment or an active control group.\nData collection and analysis\nTwo review authors independently assessed titles and abstracts for relevance and extracted data on study design, participants, interventions and outcomes from potentially relevant articles. We used standard methodological procedures expected by Cochrane. Our primary outcome was lean body mass and secondary outcomes were adherence to exercise programme, adverse events, muscle strength and endurance, exercise capacity, fatigue and health-related quality of life. We assessed the certainty of evidence using GRADE and included two Summary of findings tables.\nMain results\nWe included four new studies in this update which overall randomised 178 adults with a mean age of 58 (standard deviation (SD) 8.2) years. Study sample size ranged from 20 to 60 participants and in three studies the proportion of men ranged from 52% to 82% (the fourth study was only available in abstract form). Three studies were from Europe: one in the UK and Norway; one in Belgium and one in Germany. The remaining study was in Canada. The types of primary cancer were head and neck (two studies), lung and pancreas (one study), and mixed (one study).\nWe found two comparisons: exercise alone (strength-based exercise) compared to usual care (one study; 20 participants); and exercise (strength-based exercise/endurance exercise) as a component of a multimodal intervention (pharmacological, nutritional or educational (or a combination) interventions) compared with usual care (three studies, 158 participants). Studies had unclear and high risk of bias for most domains.\nExercise plus usual care compared with usual care\nWe found one study (20 participants). There was no clear evidence of a difference for lean body mass (8 weeks: MD 6.40 kg, 95% CI –2.30 to 15.10; very low-certainty evidence).\nFor our secondary outcomes, all participants adhered to the exercise programme and no participant reported any adverse event during the study. There were no data for muscle strength and endurance, or maximal and submaximal exercise capacity. 
There was no clear evidence of a difference for either fatigue (4 to 20 scale, lower score was better) (8 weeks: MD –0.10, 95% CI –4.00 to 3.80; very low-certainty evidence) or health-related quality of life (0 to 104 scale, higher score was better) (8 weeks: MD 4.90, 95% CI –15.10 to 24.90; very low-certainty evidence).\nMultimodal intervention (exercise plus other interventions) plus usual care compared with usual care\nWe found three studies but outcome data were only available for two studies. There was no clear evidence of a difference for lean body mass (6 weeks: MD 7.89 kg, 95% CI –9.57 to 25.35; 1 study, 44 participants; very low-certainty evidence; 12 weeks: MD –2.00, 95% CI –8.00 to 4.00; one study, 60 participants; very low-certainty evidence).\nFor our secondary outcomes, there were no data reported on adherence to the exercise programme, endurance, or maximal exercise capacity. In one study (44 participants) there was no clear evidence of a difference for adverse events (patient episode report) (6 weeks: risk ratio (RR) 1.18, 95% CI 0.67 to 2.07; very low-certainty evidence). Another study assessed adverse events but reported no data and the third study did not assess this outcome. There was no clear evidence of a difference in muscle strength (6 weeks: MD 3.80 kg, 95% CI –2.87 to 10.47; 1 study, 44 participants; very low-certainty evidence; 12 weeks MD –5.00 kg, 95% CI –14.00 to 4.00; 1 study, 60 participants; very low-certainty evidence), submaximal exercise capacity (6 weeks: MD –16.10 m walked, 95% CI –76.53 to 44.33; 1 study, 44 participants; very low-certainty evidence; 12 weeks: MD –62.60 m walked, 95% CI –145.87 to 20.67; 1 study, 60 participants; very low-certainty evidence), fatigue (0 to 10 scale, lower score better) (6 weeks: MD 0.12, 95% CI –1.00 to 1.24; 1 study, 44 participants; very low-certainty evidence) or health-related quality of life (0 to 104 scale, higher score better) (12 weeks: MD –2.20, 95% CI –13.99 to 9.59; 1 study, 60 participants; very low-certainty evidence).\nAuthors' conclusions\nThe previous review identified no studies. For this update, our conclusions have changed with the inclusion of four studies. However, we are uncertain of the effectiveness, acceptability and safety of exercise for adults with cancer cachexia. Further high-quality randomised controlled trials are still required to test exercise alone or as part of a multimodal intervention to improve people's well-being throughout all phases of cancer care. We assessed the certainty of the body of evidence as very low, downgraded due to serious study limitations, imprecision and indirectness. We have very little confidence in the results and the true effect is likely to be substantially different from these. The findings of at least three more studies (one awaiting classification and two ongoing) are expected in the next review update.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies that included 178 people (average age: 58 years; 52% to 82% of people in each study were men). The studies lasted for six weeks to three months. Two studies included people with head and neck cancer, one study included people with lung and pancreas cancer, and the fourth study included various cancer types.\nThe studies compared:\n· exercise plus usual care against usual care alone (one study, 20 people);\n· exercise combined with other treatments (medicines, health education or nutrition) plus usual care against usual care alone (three studies, 158 people).\nThe studies did not provide enough robust evidence to determine if exercise is associated with benefits or risks in people with cancer cachexia."
}
]
|
query-laysum | 5538 | [
{
"role": "user",
"content": "Abstract: Background\nGreat saphenous vein (GSV) incompetence, causing varicose veins and venous insufficiency, makes up the majority of lower-limb superficial venous diseases. Treatment options for GSV incompetence include surgery (also known as high ligation and stripping), laser and radiofrequency ablation, and ultrasound-guided foam sclerotherapy. Newer treatments include cyanoacrylate glue, mechanochemical ablation, and endovenous steam ablation. These techniques avoid the need for a general anaesthetic, and may result in fewer complications and improved quality of life (QoL). These treatments should be compared to inform decisions on treatment for varicosities in the GSV. This is an update of a Cochrane Review first published in 2011.\nObjectives\nTo assess the effects of endovenous laser ablation (EVLA), radiofrequency ablation (RFA), endovenous steam ablation (EVSA), ultrasound-guided foam sclerotherapy (UGFS), cyanoacrylate glue, mechanochemical ablation (MOCA) and high ligation and stripping (HL/S) for the treatment of varicosities of the great saphenous vein (GSV).\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and AMED databases, and World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 2 November 2020. We undertook reference checking to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) treating participants for varicosities of the GSV using EVLA, RFA, EVSA, UGFS, cyanoacrylate glue, MOCA or HL/S. Key outcomes of interest are technical success, recurrence, complications and QoL.\nData collection and analysis\nTwo review authors independently selected trials, applied Cochrane's risk of bias tool, and extracted data. We calculated odds ratios (ORs) with 95% confidence intervals (CIs) and assessed the certainty of evidence using GRADE.\nMain results\nWe identified 11 new RCTs for this update. Therefore, we included 24 RCTs with 5135 participants. Duration of follow-up ranged from five weeks to eight years. Five comparisons included single trials. For comparisons with more than one trial, we could only pool data for 'technical success' and 'recurrence' due to heterogeneity in outcome definitions and time points reported. All trials had some risk of bias concerns. Here we report the clinically most relevant comparisons.\nEVLA versus RFA\nTechnical success was comparable up to five years (OR 0.98, 95% CI 0.41 to 2.38; 5 studies, 780 participants; moderate-certainty evidence); over five years, there was no evidence of a difference (OR 0.85, 95% CI 0.30 to 2.41; 1 study, 291 participants; low-certainty evidence). One study reported recurrence, showing no clear difference at three years (OR 1.53, 95% CI 0.78 to 2.99; 291 participants; low-certainty evidence), but a benefit for RFA may be seen at five years (OR 2.77, 95% CI 1.52 to 5.06; 291 participants; low-certainty evidence).\nEVLA versus UGFS\nTechnical success may be better in EVLA participants up to five years (OR 6.13, 95% CI 0.98 to 38.27; 3 studies, 588 participants; low-certainty evidence), and over five years (OR 6.47, 95% CI 2.60 to 16.10; 3 studies, 534 participants; low-certainty evidence). 
There was no clear difference in recurrence up to three years and at five years (OR 0.68, 95% CI 0.20 to 2.36; 2 studies, 443 participants; and OR 1.08, 95% CI 0.40 to 2.87; 2 studies, 418 participants; very low-certainty evidence, respectively).\nEVLA versus HL/S\nTechnical success may be better in EVLA participants up to five years (OR 2.31, 95% CI 1.27 to 4.23; 6 studies, 1051 participants; low-certainty evidence). No clear difference in technical success was seen at five years and beyond (OR 0.93, 95% CI 0.57 to 1.50; 5 studies, 874 participants; low-certainty evidence). Recurrence was comparable within three years and at 5 years (OR 0.78, 95% CI 0.47 to 1.29; 7 studies, 1459 participants; and OR 1.09, 95% CI 0.68 to 1.76; 7 studies, 1267 participants; moderate-certainty evidence, respectively).\nRFA versus MOCA\nThere was no clear difference in technical success (OR 1.76, 95% CI 0.06 to 54.15; 3 studies, 435 participants; low-certainty evidence), or recurrence (OR 1.00, 95% CI 0.21 to 4.81; 3 studies, 389 participants; low-certainty evidence). Long-term data are not available.\nRFA versus HL/S\nNo clear difference in technical success was detected up to five years (OR 5.71, 95% CI 0.64 to 50.81; 2 studies, 318 participants; low-certainty evidence); over five years, there was no evidence of a difference (OR 0.88, 95% CI 0.29 to 2.69; 1 study, 289 participants; low-certainty evidence). No clear difference in recurrence was detected up to three years (OR 0.93, 95% CI 0.58 to 1.51; 4 studies, 546 participants; moderate-certainty evidence); but a possible long-term benefit for RFA was seen (OR 0.41, 95% CI 0.22 to 0.75; 1 study, 289 participants; low-certainty evidence).\nUGFS versus HL/S\nMeta-analysis showed a possible benefit for HL/S compared with UGFS in technical success up to five years (OR 0.32, 95% CI 0.11 to 0.94; 4 studies, 954 participants; low-certainty evidence), and over five years (OR 0.09, 95% CI 0.03 to 0.30; 3 studies, 525 participants; moderate-certainty evidence). No clear difference was detected in recurrence up to three years (OR 1.81, 95% CI 0.87 to 3.77; 3 studies, 822 participants; low-certainty evidence), and after five years (OR 1.24, 95% CI 0.57 to 2.71; 3 studies, 639 participants; low-certainty evidence).\nComplications were generally low for all interventions, but due to different definitions and time points, we were unable to draw conclusions (very-low certainty evidence). Similarly, most studies evaluated QoL but used different questionnaires at variable time points. Rates of QoL improvement were comparable between interventions at follow-up (moderate-certainty evidence).\nAuthors' conclusions\nOur conclusions are limited due to the relatively small number of studies for each comparison and differences in outcome definitions and time points reported. Technical success was comparable between most modalities. EVLA may offer improved technical success compared to UGFS or HL/S. HL/S may have improved technical success compared to UGFS. No evidence of a difference was detected in recurrence, except for a possible long-term benefit for RFA compared to EVLA or HL/S. Studies which provide more evidence on the breadth of treatments are needed. Future trials should seek to standardise clinical terminology of outcome measures and the time points at which they are measured.\n\nGiven the provided abstract, please respond to the following query: \"What are varicose veins?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Varicose veins are bulging, twisty veins close to the skin’s surface that usually occur in the legs. They are caused by chronic venous insufficiency, which is when your veins do not manage to help blood to flow back up to your heart efficiently, and blood pools in your legs. About one-third of adults are thought to have chronic venous insufficiency. Women are more likely than men to have varicose veins.\nVaricose veins can be painful, itchy and unsightly, especially when standing and walking. Occasionally, they may result in skin changes or sores (ulcers) on the leg that take more than two weeks to heal."
}
]
|
query-laysum | 5539 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the original Cochrane review, last published in 2009 (Huertas-Ceballos 2009). Recurrent abdominal pain (RAP), including children with irritable bowel syndrome, is a common problem affecting between 4% and 25% of school-aged children. For the majority of such children, no organic cause for their pain can be found on physical examination or investigation. Many dietary inventions have been suggested to improve the symptoms of RAP. These may involve either excluding ingredients from the diet or adding supplements such as fibre or probiotics.\nObjectives\nTo examine the effectiveness of dietary interventions in improving pain in children of school age with RAP.\nSearch methods\nWe searched CENTRAL, Ovid MEDLINE, Embase, eight other databases, and two trials registers, together with reference checking, citation searching and contact with study authors, in June 2016.\nSelection criteria\nRandomised controlled trials (RCTs) comparing dietary interventions with placebo or no treatment in children aged five to 18 years with RAP or an abdominal pain-related, functional gastrointestinal disorder, as defined by the Rome III criteria (Rasquin 2006).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We grouped dietary interventions together by category for analysis. We contacted study authors to ask for missing information and clarification, when needed. We assessed the quality of the evidence for each outcome using the GRADE approach.\nMain results\nWe included 19 RCTs, reported in 27 papers with a total of 1453 participants. Fifteen of these studies were not included in the previous review. All 19 RCTs had follow-up ranging from one to five months. Participants were aged between four and 18 years from eight different countries and were recruited largely from paediatric gastroenterology clinics. The mean age at recruitment ranged from 6.3 years to 13.1 years. Girls outnumbered boys in most trials. Fourteen trials recruited children with a diagnosis under the broad umbrella of RAP or functional gastrointestinal disorders; five trials specifically recruited only children with irritable bowel syndrome. The studies fell into four categories: trials of probiotic-based interventions (13 studies), trials of fibre-based interventions (four studies), trials of low FODMAP (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) diets (one study), and trials of fructose-restricted diets (one study).\nWe found that children treated with probiotics reported a greater reduction in pain frequency at zero to three months postintervention than those given placebo (standardised mean difference (SMD) -0.55, 95% confidence interval (CI) -0.98 to -0.12; 6 trials; 523 children). There was also a decrease in pain intensity in the intervention group at the same time point (SMD -0.50, 95% CI -0.85 to -0.15; 7 studies; 575 children). However, we judged the evidence for these outcomes to be of low quality using GRADE due to an unclear risk of bias from incomplete outcome data and significant heterogeneity.\nWe found that children treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (odds ratio (OR) 1.63, 95% CI 1.07 to 2.47; 7 studies; 722 children). 
The estimated number needed to treat for an additional beneficial outcome (NNTB) was eight, meaning that eight children would need to receive probiotics for one to experience improvement in pain in this timescale. We judged the evidence for this outcome to be of moderate quality due to significant heterogeneity.\nChildren with a symptom profile defined as irritable bowel syndrome treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (OR 3.01, 95% CI 1.77 to 5.13; 4 studies; 344 children). Children treated with probiotics were more likely to experience improvement in pain at three to six months postintervention compared to those receiving placebo (OR 1.94, 95% CI 1.10 to 3.43; 2 studies; 224 children). We judged the evidence for these two outcomes to be of moderate quality due to small numbers of participants included in the studies.\nWe found that children treated with fibre-based interventions were not more likely to experience an improvement in pain at zero to three months postintervention than children given placebo (OR 1.83, 95% CI 0.92 to 3.65; 2 studies; 136 children). There was also no reduction in pain intensity compared to placebo at the same time point (SMD -1.24, 95% CI -3.41 to 0.94; 2 studies; 135 children). We judged the evidence for these outcomes to be of low quality due to an unclear risk of bias, imprecision, and significant heterogeneity.\nWe found only one study of low FODMAP diets and only one trial of fructose-restricted diets, meaning no pooled analyses were possible.\nWe were unable to perform any meta-analyses for the secondary outcomes of school performance, social or psychological functioning, or quality of daily life, as not enough studies included these outcomes or used comparable measures to assess them.\nWith the exception of one study, all studies reported monitoring children for adverse events; no major adverse events were reported.\nAuthors' conclusions\nOverall, we found moderate- to low-quality evidence suggesting that probiotics may be effective in improving pain in children with RAP. Clinicians may therefore consider probiotic interventions as part of a holistic management strategy. However, further trials are needed to examine longer-term outcomes and to improve confidence in estimating the size of the effect, as well as to determine the optimal strain and dosage. Future research should also explore the effectiveness of probiotics in children with different symptom profiles, such as those with irritable bowel syndrome.\nWe found only a small number of trials of fibre-based interventions, with overall low-quality evidence for the outcomes. There was therefore no convincing evidence that fibre-based interventions improve pain in children with RAP. Further high-quality RCTs of fibre supplements involving larger numbers of participants are required. Future trials of low FODMAP diets and other dietary interventions are also required to facilitate evidence-based recommendations.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Probiotics\nWe found evidence from 13 studies suggesting that probiotics might be effective in improving pain in the shorter term. Most studies did not report on other areas such as quality of daily life. No harmful effects were reported, other than dry mouth in one study. We judged this evidence to be of moderate or low quality because some studies were small, showed varying results, or were at risk of bias.\nFibre supplements\nWe found no clear evidence of improvement of pain from four studies of fibre supplements. Most studies did not report on other areas such as quality of daily life. No harmful effects were reported. There were few studies of fibre supplements, and some of these studies were at risk of bias. We judged this evidence to be of low quality.\nLow FODMAP diets\nWe found only one study evaluating the effectiveness of low FODMAP diets in children with RAP.\nFructose-restricted diets\nWe found only one study evaluating the effectiveness of fructose-restricted diets in children with RAP."
}
]
|
query-laysum | 5540 | [
{
"role": "user",
"content": "Abstract: Background\nThe placenta has an essential role in determining the outcome of pregnancy. Consequently, biochemical measurement of placentally-derived factors has been suggested as a means to improve fetal and maternal outcome of pregnancy.\nObjectives\nTo assess whether clinicians' knowledge of the results of biochemical tests of placental function is associated with improvement in fetal or maternal outcome of pregnancy.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 July 2015) and reference lists of retrieved studies.\nSelection criteria\nRandomised, cluster-randomised or quasi-randomised controlled trials assessing the merits of the use of biochemical tests of placental function to improve pregnancy outcome.\nStudies were eligible if they compared women who had placental function tests and the results were available to their clinicians with women who either did not have the tests, or the tests were done but the results were not available to the clinicians. The placental function tests were any biochemical test of placental function carried out using the woman's maternal biofluid, either alone or in combination with other placental function test/s.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, extracted data and assessed trial quality. Authors of published trials were contacted for further information.\nMain results\nThree trials were included, two quasi-randomised controlled trials and one randomised controlled trial. One trial was deemed to be at low risk of bias while the other two were at high risk of bias. Different biochemical analytes were measured - oestrogen was measured in one trial and the other two measured human placental lactogen (hPL). One trial did not contribute outcome data, therefore, the results of this review are based on two trials with 740 participants.\nThere was no evidence of a difference in the incidence of death of a baby (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.36 to 2.13, two trials, 740 participants (very low quality evidence)) or the frequency of a small-for-gestational-age infant (RR 0.44, 95% CI 0.16 to 1.19, one trial, 118 participants (low quality evidence)).\nIn terms of this review's secondary outcomes, there was no evidence of a clear difference between women who had biochemical tests of placental function compared with standard antenatal care for the incidence of stillbirth (RR 0.56, 95% CI 0.16 to 1.88, two trials, 740 participants (very low quality evidence)) or neonatal death (RR 1.62, 95% CI 0.39 to 6.74, two trials, 740 participants, very low quality evidence)) although the directions of any potential effect were in opposing directions. There was no evidence of a difference between groups in elective delivery (RR 0.98, 95% CI 0.84 to 1.14, two trials, 740 participants (low quality evidence)), caesarean section (one trial, RR 0.48, 95% CI 0.15 to 1.52, one trial, 118 participants (low quality evidence)), change in anxiety score (mean difference -2.40, 95% CI -4.78 to -0.02, one trial, 118 participants), admissions to neonatal intensive care (RR 0.32, 95% CI 0.03 to 3.01, one trial, 118 participants), and preterm birth before 37 weeks' gestation (RR 2.90, 95% CI 0.12 to 69.81, one trial, 118 participants). One trial (118 participants) reported that there were no cases of serious neonatal morbidity. 
Maternal death was not reported.\nA number of this review's secondary outcomes relating to the baby were not reported in the included studies, namely: umbilical artery pH < 7.0, neonatal intensive care for more than seven days, very preterm birth (< 32 weeks' gestation), need for ventilation, organ failure, fetal abnormality, neurodevelopment in childhood (cerebral palsy, neurodevelopmental delay). Similarly, a number of this review's maternal secondary outcomes were not reported in the included studies (admission to intensive care, high dependency unit admission, hospital admission for > seven days, pre-eclampsia, eclampsia, and women's perception of care).\nAuthors' conclusions\nThere is insufficient evidence to support the use of biochemical tests of placental function to reduce perinatal mortality or increase identification of small-for-gestational-age infants. However, we were only able to include data from two studies that measured oestrogens and hPL. The quality of the evidence was low or very low.\nTwo of the trials were performed in the 1970s on women with a variety of antenatal complications and this evidence cannot be generalised to women at low-risk of complications or groups of women with specific pregnancy complications (e.g. fetal growth restriction). Furthermore, outcomes described in the 1970s may not reflect what would be expected at present. For example, neonatal mortality rates have fallen substantially, such that an infant delivered at 28 weeks would have a greater chance of survival were those studies repeated; this may affect the primary outcome of the meta-analysis.\nWith data from just two studies (740 women), this review is underpowered to detect a difference in the incidence of death of a baby or the frequency of a small-for-gestational-age infant as these have a background incidence of approximately 0.75% and 10% of pregnancies respectively. Similarly, this review is underpowered to detect differences between serious and/or rare adverse events such as severe neonatal morbidity. Two of the three included studies were quasi-randomised, with significant risk of bias from group allocation. Additionally, there may be performance bias as in one of the two studies contributing data, participants receiving standard care did not have venepuncture, so clinicians treating participants could identify which arm of the study they were in. Future studies should consider more robust randomisation methods and concealment of group allocation and should be adequately powered to detect differences in rare adverse events.\nThe studies identified in this review examined two different analytes: oestrogens and hPL. There are many other placental products that could be employed as surrogates of placental function, including: placental growth factor (PlGF), human chorionic gonadotrophin (hCG), plasma protein A (PAPP-A), placental protein 13 (PP-13), pregnancy-specific glycoproteins and progesterone metabolites and further studies should be encouraged to investigate these other placental products. Future randomised controlled trials should test analytes identified as having the best predictive reliability for placental dysfunction leading to small-for-gestational-age infants and perinatal mortality.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included three randomised controlled studies. Two trials were at a high risk of bias and one was at a low risk of bias. One study did not contribute any data towards this review. Therefore, this review is based on data from two studies involving 740 mothers. The evidence from these studies was graded as either low or very low quality evidence.\nWe found insufficient evidence to draw any conclusions about the effectiveness of tests that measure placental health in reducing the number of babies that die before birth (very low quality evidence) or shortly after birth (very low quality evidence), or in reducing the number of babies that are born small for their gestational age (low quality evidence).There was no evidence to suggest that measurement of placental health could cause harm by increasing intervention (planned delivery or caesarean section (low quality evidence) or increasing mothers' anxiety levels. There was no change in the number of babies admitted to the neonatal intensive care unit or the proportion of babies born before 37 weeks gestation (low quality evidence). There were no reports of serious disease for babies (as reported in one study only) or maternal deaths in any of the studies. A number of this review's other outcomes of interest were not reported in the included studies.\nMore research is needed to determine the most useful test for placental health as a way of predicting poor pregnancy outcome, and then to investigate whether performing this test on mothers improves pregnancy outcomes."
}
]
|
query-laysum | 5541 | [
{
"role": "user",
"content": "Abstract: Background\nComforting behaviours, such as the use of pacifiers (dummies, soothers), blankets and finger or thumb sucking, are common in babies and young children. These comforting habits, which can be referred to collectively as 'non-nutritive sucking habits' (NNSHs), tend to stop as children get older, under their own impetus or with support from parents and carers. However, if the habit continues whilst the permanent dentition is becoming established, it can contribute to, or cause, development of a malocclusion (abnormal bite). A diverse variety of approaches has been used to help children with stopping a NNSH. These include advice, removal of the comforting object, fitting an orthodontic appliance to interfere with the habit, application of an aversive taste to the digit or behaviour modification techniques. Some of these interventions are easier to apply than others and less disturbing for the child and their parent; some are more applicable to a particular type of habit.\nObjectives\nThe primary objective of the review was to evaluate the effects of different interventions for cessation of NNSHs in children. The secondary objectives were to determine which interventions work most quickly and are the most effective in terms of child and parent- or carer-centred outcomes of least discomfort and psychological distress from the intervention, as well as the dental measures of malocclusion (reduction in anterior open bite, overjet and correction of posterior crossbite) and cost-effectiveness.\nSearch methods\nWe searched the following electronic databases: the Cochrane Oral Health Group Trials Register (to 8 October 2014), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 9), MEDLINE via OVID (1946 to 8 October 2014), EMBASE via OVID (1980 to 8 October 2014), PsycINFO via OVID (1980 to 8 October 2014) and CINAHL via EBSCO (1937 to 8 October 2014), the US National Institutes of Health Trials Register (Clinical Trials.gov) (to 8 October 2014) and the WHO International Clinical Trials Registry Platform (to 8 October 2014). There were no restrictions regarding language or date of publication in the searches of the electronic databases. We screened reference lists from relevant articles and contacted authors of eligible studies for further information where necessary.\nSelection criteria\nRandomised or quasi-randomised controlled trials in children with a non-nutritive sucking habit that compared one intervention with another intervention or a no-intervention control group. The primary outcome of interest was cessation of the habit.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration. Three review authors were involved in screening the records identified; two undertook data extraction, two assessed risk of bias and two assessed overall quality of the evidence base. Most of the data could not be combined and only one meta-analysis could be carried out.\nMain results\nWe included six trials, which recruited 252 children (aged 2 and a half to 18 years), but presented follow-up data on only 246 children. Digit sucking was the only NNSH assessed in the studies. Five studies compared single or multiple interventions with a no-intervention or waiting list control group and one study made a head-to-head comparison. All the studies were at high risk of bias due to major limitations in methodology and reporting. 
There were small numbers of participants in the studies (20 to 38 participants per study) and follow-up times ranged from 1 to 36 months. Short-term outcomes were observed under one year post-intervention and long-term outcomes were observed at one year or more post-intervention.\nOrthodontic appliance (with or without psychological intervention) versus no treatment\nTwo trials that assessed this comparison evaluated our primary outcome of cessation of habit. One of the trials evaluated palatal crib and one used a mix of palatal cribs and arches. Both trials were at high risk of bias. The orthodontic appliance was more likely to stop digit sucking than no treatment, whether it was used over the short term (risk ratio (RR) 6.53, 95% confidence interval (CI) 1.67 to 25.53; 2 trials, 70 participants) or long term (RR 5.81, 95% CI 1.49 to 22.66; 1 trial, 37 participants) or used in combination with a psychological intervention (RR 6.36, 95% CI 0.97 to 41.96; 1 trial, 32 participants).\nPsychological intervention versus no treatment\nTwo trials (78 participants) at high risk of bias evaluated positive reinforcement (alone or in combination with gaining the child's co-operation) or negative reinforcement compared with no treatment. Pooling of data showed a statistically significant difference in favour of the psychological interventions in the short term (RR 6.16, 95% CI 1.18 to 32.10; I2 = 0%). One study, with data from 57 participants, reported on the long-term effect of positive and negative reinforcement on sucking cessation and found a statistically significant difference in favour of the psychological interventions (RR 6.25, 95% CI 1.65 to 23.65).\nHead-to-head comparisons\nOnly one trial demonstrated a clear difference in effectiveness between different active interventions. This trial, which had only 22 participants, found a higher likelihood of cessation of habit with palatal crib than palatal arch (RR 0.13, 95% CI 0.03 to 0.59).\nAuthors' conclusions\nThis review found low-quality evidence that orthodontic appliances (palatal arch and palatal crib) and psychological interventions (including positive and negative reinforcement) are effective at improving sucking cessation in children. There is very low-quality evidence that palatal crib is more effective than palatal arch. This review has highlighted the need for high-quality trials evaluating interventions to stop non-nutritive sucking habits to be conducted and the need for a consolidated, standardised approach to reporting outcomes in these trials.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Often babies and children develop a habit of sucking objects to comfort and calm them. They frequently suck dummies (known as pacifiers in the USA), fingers, thumbs or other items like blankets. Eventually, most children grow out of the habit, or stop due to encouragement from their parents. Some children, however, continue sucking as a habit. If they continue to do so as their adult teeth start to grow through (around the age of six), there is a risk that these adult teeth will grow into the wrong position causing them to stick out too far or not meet properly when biting. As a result these children often need dental treatment to fix the problems caused by their sucking habit.\nPossible treatments to help children break their sucking habits examined in studies in this review include the use of two different braces in the mouth; giving advice and incentives for changing behaviour (known as psychological advice/treatment); applying a bitter, nasty tasting substance to the children’s thumbs/fingers or combinations of these treatments. None of the studies included looked at barrier methods, for example the use of gloves or plasters or withdrawal of dummies."
}
]
|
query-laysum | 5542 | [
{
"role": "user",
"content": "Abstract: Background\nHigh-frequency oscillation (HFO) is an alternative to conventional mechanical ventilation that is sometimes used to treat people with acute respiratory distress syndrome, but effects on oxygenation, mortality and adverse clinical outcomes are uncertain. This review was originally published in 2004 and was updated in 2013 and again in 2015.\nObjectives\nTo determine the effects of HFO compared to conventional mechanical ventilation on physiological outcomes, clinical outcomes, and mortality when used for the treatment of acute respiratory distress syndrome (ARDS).\nSearch methods\nWe electronically searched the Cochrane Central Register of Controlled Trials (CENTRAL) (Ovid), MEDLINE (Ovid), EMBASE (Ovid), and ISI, from inception to December 2015. We conducted the original search in 2002. We manually searched reference lists from included studies and review articles; searched conference proceedings of the American Thoracic Society (1994 to 2015), Society of Critical Care Medicine (1994 to 2015), European Society of Intensive Care Medicine (1994 to 2015), and American College of Chest Physicians (1994 to 2015); contacted clinical experts in the field; and searched for unpublished and ongoing trials in clinicaltrials.gov and controlled-trials.com.\nSelection criteria\nRandomized controlled trials (RCTs) comparing treatment using HFO with conventional mechanical ventilation for children and adults diagnosed with ARDS.\nData collection and analysis\nThree review authors independently extracted data on clinical, physiological, and safety outcomes according to a predefined protocol. We contacted investigators of all included studies to clarify methods and obtain additional data. We used random-effects models in the analyses.\nMain results\nWe include 10 RCTs (n = 1850); almost all participants had moderate or severe ARDS. For the primary analysis, the risk of bias was low in three studies and unclear in five studies; the overall quality of evidence was very low due to imprecision, inconsistency, indirectness and methodologic limitations. In participants randomized to HFO, there was no significant difference in hospital or 30-day mortality (risk ratio (RR) 0.92, 95% confidence interval (CI) 0.72 to 1.16; P = 0.46, I² = 66%; 8 trials, 1779 participants, 807 deaths) compared with conventional ventilation. One large multicentre RCT was terminated early because of increased mortality in participants randomized to HFO compared to mechanical ventilation with low tidal volume and high positive end expiratory pressure, with HFO reserved only as a rescue therapy. We found substantial between-trial statistical heterogeneity (I² = 0% to 66%) for clinical outcomes, including mortality.\nAuthors' conclusions\nThe findings of this systematic review suggest that HFO does not reduce hospital and 30-day mortality due to ARDS; the quality of evidence was very low. Our findings do not support the use of HFO as a first-line strategy in people undergoing mechanical ventilation for ARDS.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Acute respiratory distress syndrome (ARDS) is a life-threatening condition. It is characterized by acute lung inflammation, stiff lungs that increase the work of breathing, and reduced ability of the lungs to adequately oxygenate the blood. Survivors can have a reduced quality of life. People with ARDS usually require an artificial respirator in order to prevent death. High-frequency oscillation (HFO) ventilation differs from conventional ventilation in that very small breaths are delivered very rapidly (180 to 900 breaths per minute). HFO helps with the opening of collapsed lung tissue by providing constant positive pressure in a person's airway. We performed a systematic review to determine whether HFO improves clinical outcomes (including preventing deaths) when compared to conventional breathing machines for adults and children with ARDS."
}
]
|
query-laysum | 5543 | [
{
"role": "user",
"content": "Abstract: Background\nSalivary gland dysfunction is an 'umbrella' term for the presence of either xerostomia (subjective sensation of dryness), or salivary gland hypofunction (reduction in saliva production). It is a predictable side effect of radiotherapy to the head and neck region, and is associated with a significant impairment of quality of life. A wide range of pharmacological interventions, with varying mechanisms of action, have been used for the prevention of radiation-induced salivary gland dysfunction.\nObjectives\nTo assess the effects of pharmacological interventions for the prevention of radiation-induced salivary gland dysfunction.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 14 September 2016); the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8) in the Cochrane Library (searched 14 September 2016); MEDLINE Ovid (1946 to 14 September 2016); Embase Ovid (1980 to 14 September 2016); CINAHL EBSCO (Cumulative Index to Nursing and Allied Health Literature; 1937 to 14 September 2016); LILACS BIREME Virtual Health Library (Latin American and Caribbean Health Science Information database; 1982 to 14 September 2016); Zetoc Conference Proceedings (1993 to 14 September 2016); and OpenGrey (1997 to 14 September 2016). We searched the US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials, irrespective of their language of publication or publication status. Trials included participants of all ages, ethnic origin and gender, scheduled to receive radiotherapy on its own or in addition to chemotherapy to the head and neck region. Participants could be outpatients or inpatients. We included trials comparing any pharmacological agent regimen, prescribed prophylactically for salivary gland dysfunction prior to or during radiotherapy, with placebo, no intervention or an alternative pharmacological intervention. Comparisons of radiation techniques were excluded.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 39 studies that randomised 3520 participants; the number of participants analysed varied by outcome and time point. The studies were ordered into 14 separate comparisons with meta-analysis only being possible in three of those.\nWe found low-quality evidence to show that amifostine, when compared to a placebo or no treatment control, might reduce the risk of moderate to severe xerostomia (grade 2 or higher on a 0 to 4 scale) at the end of radiotherapy (risk ratio (RR) 0.35, 95% confidence interval (CI) 0.19 to 0.67; P = 0.001, 3 studies, 119 participants), and up to three months after radiotherapy (RR 0.66, 95% CI 0.48 to 0.92; P = 0.01, 5 studies, 687 participants), but there is insufficient evidence that the effect is sustained up to 12 months after radiotherapy (RR 0.70, 95% CI 0.40 to 1.23; P = 0.21, 7 studies, 682 participants). 
We found very low-quality evidence that amifostine increased unstimulated salivary flow rate up to 12 months after radiotherapy, both in terms of mg of saliva per 5 minutes (mean difference (MD) 0.32, 95% CI 0.09 to 0.55; P = 0.006, 1 study, 27 participants), and incidence of producing greater than 0.1 g of saliva over 5 minutes (RR 1.45, 95% CI 1.13 to 1.86; P = 0.004, 1 study, 175 participants). However, there was insufficient evidence to show a difference when looking at stimulated salivary flow rates. There was insufficient (very low-quality) evidence to show that amifostine compromised the effects of cancer treatment when looking at survival measures. There was some very low-quality evidence of a small benefit for amifostine in terms of quality of life (10-point scale) at 12 months after radiotherapy (MD 0.70, 95% CI 0.20 to 1.20; P = 0.006, 1 study, 180 participants), but insufficient evidence at the end of and up to three months postradiotherapy. A further study showed no evidence of a difference at 6, 12, 18 and 24 months postradiotherapy. There was low-quality evidence that amifostine is associated with increases in: vomiting (RR 4.90, 95% CI 2.87 to 8.38; P < 0.00001, 5 studies, 601 participants); hypotension (RR 9.20, 95% CI 2.84 to 29.83; P = 0.0002, 3 studies, 376 participants); nausea (RR 2.60, 95% CI 1.81 to 3.74; P < 0.00001, 4 studies, 556 participants); and allergic response (RR 7.51, 95% CI 1.40 to 40.39; P = 0.02, 3 studies, 524 participants).\nWe found insufficient evidence (that was of very low quality) to determine whether or not pilocarpine performed better or worse than a placebo or no treatment control for the outcomes: xerostomia, salivary flow rate, survival, and quality of life. There was some low-quality evidence that pilocarpine was associated with an increase in sweating (RR 2.98, 95% CI 1.43 to 6.22; P = 0.004, 5 studies, 389 participants).\nWe found insufficient evidence to determine whether or not palifermin performed better or worse than placebo for: xerostomia (low quality); survival (moderate quality); and any adverse effects.\nThere was also insufficient evidence to determine the effects of the following interventions: biperiden plus pilocarpine, Chinese medicines, bethanechol, artificial saliva, selenium, antiseptic mouthrinse, antimicrobial lozenge, polaprezinc, azulene rinse, and Venalot Depot (coumarin plus troxerutin).\nAuthors' conclusions\nThere is some low-quality evidence to suggest that amifostine prevents the feeling of dry mouth in people receiving radiotherapy to the head and neck (with or without chemotherapy) in the short- (end of radiotherapy) to medium-term (three months postradiotherapy). However, it is less clear whether or not this effect is sustained to 12 months postradiotherapy. The benefits of amifostine should be weighed against its high cost and side effects. There was insufficient evidence to show that any other intervention is beneficial.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of evidence for amifostine was found to be low because of risk of bias, inconsistency and imprecision caused by the small number of studies in the comparison or sample size. A standardized scale for measuring participant's experience of dry mouth would in future allow comparison and pooling together of results."
}
]
|
query-laysum | 5544 | [
{
"role": "user",
"content": "Abstract: Background\nSurgery is a common treatment option in oral cavity cancer (and less frequently in oropharyngeal cancer) to remove the primary tumour and sometimes neck lymph nodes. People with early-stage disease may undergo surgery alone or surgery plus radiotherapy, chemotherapy, immunotherapy/biotherapy, or a combination of these. Timing and extent of surgery varies. This is the third update of a review originally published in 2007.\nObjectives\nTo evaluate the relative benefits and harms of different surgical treatment modalities for oral cavity and oropharyngeal cancers.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 9 February 2022.\nSelection criteria\nRandomised controlled trials (RCTs) that compared two or more surgical treatment modalities, or surgery versus other treatment modalities, for primary tumours of the oral cavity or oropharynx.\nData collection and analysis\nOur primary outcomes were overall survival, disease-free survival, locoregional recurrence, and recurrence; and our secondary outcomes were adverse effects of treatment, quality of life, direct and indirect costs to patients and health services, and participant satisfaction. We used standard Cochrane methods. We reported survival data as hazard ratios (HRs). For overall survival, we reported the HR of mortality, and for disease-free survival, we reported the combined HR of new disease, progression, and mortality; therefore, HRs below 1 indicated improvement in these outcomes. We used GRADE to assess certainty of evidence for each outcome.\nMain results\nWe identified four new trials, bringing the total number of included trials to 15 (2820 participants randomised, 2583 participants analysed). For objective outcomes, we assessed four trials at high risk of bias, three at low risk, and eight at unclear risk. The trials evaluated nine comparisons; none compared different surgical approaches for excision of the primary tumour.\nFive trials evaluated elective neck dissection (ND) versus therapeutic (delayed) ND in people with oral cavity cancer and clinically negative neck nodes. Elective ND compared with therapeutic ND probably improves overall survival (HR 0.64, 95% confidence interval (CI) 0.50 to 0.83; I2 = 0%; 4 trials, 883 participants; moderate certainty) and disease-free survival (HR 0.56, 95% CI 0.45 to 0.70; I2 = 12%; 5 trials, 954 participants; moderate certainty), and probably reduces locoregional recurrence (HR 0.58, 95% CI 0.43 to 0.78; I2 = 0%; 4 trials, 458 participants; moderate certainty) and recurrence (RR 0.58, 95% CI 0.48 to 0.70; I2 = 0%; 3 trials, 633 participants; moderate certainty). Elective ND is probably associated with more adverse events (risk ratio (RR) 1.31, 95% CI 1.11 to 1.54; I2 = 0%; 2 trials, 746 participants; moderate certainty).\nTwo trials evaluated elective radical ND versus elective selective ND in people with oral cavity cancer, but we were unable to pool the data as the trials used different surgical procedures. Neither study found evidence of a difference in overall survival (pooled measure not estimable; very low certainty). We are unsure if there is a difference in effect on disease-free survival (HR 0.57, 95% CI 0.29 to 1.11; 1 trial, 104 participants; very low certainty) or recurrence (RR 1.21, 95% CI 0.63 to 2.33; 1 trial, 143 participants; very low certainty). 
There may be no difference between the interventions in terms of adverse events (1 trial, 148 participants; low certainty).\nTwo trials evaluated superselective ND versus selective ND, but we were unable to use the data.\nOne trial evaluated supraomohyoid ND versus modified radical ND in 332 participants. We were unable to use any of the primary outcome data. The evidence on adverse events was very uncertain, with more complications, pain, and poorer shoulder function in the modified radical ND group.\nOne trial evaluated sentinel node biopsy versus elective ND in 279 participants. There may be little or no difference between the interventions in overall survival (HR 1.00, 95% CI 0.90 to 1.11; low certainty), disease-free survival (HR 0.98, 95% CI 0.90 to 1.07; low certainty), or locoregional recurrence (HR 1.04, 95% CI 0.91 to 1.19; low certainty). The trial provided no usable data for recurrence, and reported no adverse events (very low certainty).\nOne trial evaluated positron emission tomography-computed tomography (PET-CT) following chemoradiotherapy (with ND only if no or incomplete response) versus planned ND (before or after chemoradiotherapy) in 564 participants. There is probably no difference between the interventions in overall survival (HR 0.92, 95% CI 0.65 to 1.31; moderate certainty) or locoregional recurrence (HR 1.00, 95% CI 0.94 to 1.06; moderate certainty).\nOne trial evaluated surgery plus radiotherapy versus radiotherapy alone and provided very low-certainty evidence of better overall survival in the surgery plus radiotherapy group (HR 0.24, 95% CI 0.10 to 0.59; 35 participants). The data were unreliable because the trial stopped early and had multiple protocol violations. In terms of adverse events, subcutaneous fibrosis was more frequent in the surgery plus radiotherapy group, but there were no differences in other adverse events (very low certainty).\nOne trial evaluated surgery versus radiotherapy alone for oropharyngeal cancer in 68 participants. There may be little or no difference between the interventions for overall survival (HR 0.83, 95% CI 0.09 to 7.46; low certainty) or disease-free survival (HR 1.07, 95% CI 0.27 to 4.22; low certainty). For adverse events, there were too many outcomes to draw reliable conclusions.\nOne trial evaluated surgery plus adjuvant radiotherapy versus chemotherapy. We were unable to use the data for any of the outcomes reported (very low certainty).\nAuthors' conclusions\nWe found moderate-certainty evidence based on five trials that elective neck dissection of clinically negative neck nodes at the time of removal of the primary oral cavity tumour is superior to therapeutic neck dissection, with increased survival and disease-free survival, and reduced locoregional recurrence.\nThere was moderate-certainty evidence from one trial of no difference between positron emission tomography (PET-CT) following chemoradiotherapy versus planned neck dissection in terms of overall survival or locoregional recurrence.\nThe evidence for each of the other seven comparisons came from only one or two studies and was assessed as low or very low-certainty.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 9 February 2022."
}
]
|
query-laysum | 5545 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is the most common life-threatening, inherited disease in white populations which causes several dysfunctions, including postural abnormalities. Physical therapy may help in some consequences of these postural abnormalities, such as pain, trunk deformity and quality of life.\nObjectives\nTo determine the effects of a range of physical therapies for managing postural abnormalities in people with cystic fibrosis, specifically on quality of life, pain and trunk deformity.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches, hand-searched journals and conference abstract books. We also searched the reference lists of relevant articles and reviews. Additional searches were conducted on ClinicalTrials.gov and on the WHO International Clinical Trials Registry Platform for any planned, ongoing and unpublished studies.\nDate of the last search: 19 March 2020.\nSelection criteria\nRandomised controlled trials examining any modality of physical therapy considered relevant for treating postural disorders compared with each other, no physical therapy, sham treatment or usual care in people with CF (of any age or disease severity).\nData collection and analysis\nTwo review authors independently selected eligible trials, assessed the risk of bias in each trial and extracted the data. We contacted trial authors to obtain missing or additional information. We assessed the quality of the evidence using the GRADE criteria.\nMain results\nTwo trials, involving a total of 50 participants with CF and postural abnormalities, were included in this review. One was in people with stable disease (lasting three months) and one in hospital inpatients experiencing an exacerbation (20 days). Both trials compared manual therapy comprising mobilizations to the rib cage and thoracic spine, treatment of specific muscle dysfunction or tight muscle groups; and postural awareness and education versus medical usual care. The age of participants ranged from 17 years to 58 years. Both trials were conducted in the UK.\nThe following outcomes were measured: change in quality of life, change in pain, change in trunk deformity and change in pulmonary function. Manual therapy may make little or no difference to the change in trunk deformity compared to usual care (low-quality evidence). No results could be analysed for quality of life (very low-quality evidence) and pain outcomes (very low-quality evidence) because of the high heterogeneity between trials. It is uncertain whether the intervention improves lung function: forced vital capacity (very low-quality evidence); forced expiratory volume in one second (very low-quality evidence); or Tiffeneau’s index (ratio of forced expiratory volume at one second (FEV1) and forced vital capacity (FVC)). Only one trial (15 participants) measured functional capacity, and the change in walked distance seemed to favour intervention over usual care, but with the possibility of no effect due to wide confidence intervals. The same trial also reported that six participants in the intervention group had positive comments about the intervention and no adverse events were mentioned.\nAuthors' conclusions\nDue to methodological limitations in the included trials, and in addition to the very low to low quality of the current evidence, there is limited evidence about the benefits of physical therapies on postural abnormalities in people with CF. 
Therefore, further well-conducted trials with robust methodologies are required, with inclusion criteria specified in advance to identify participants who have postural abnormalities.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cystic fibrosis is a genetic disease that causes problems in some organs of the body, especially in the lungs. Recently, it has been noticed that postural abnormalities may happen because of the disease progression. Some physical therapies that promote the stretching and strengthening of the muscles connected to the spine may help improving the pain and quality of life in people with cystic fibrosis."
}
]
|
query-laysum | 5546 | [
{
"role": "user",
"content": "Abstract: Background\nVarious hormone therapies (HT) are available to treat menopausal vasomotor symptoms. Bioidentical hormones are chemically identical to those produced by the human body, and several types are well-tested and available on prescription. Many women have opted for bioidentical hormone therapy (BHT) on the assumption that it is safer than other forms of HT. We evaluated the evidence.\nObjectives\nTo determine the effectiveness and safety of bioidentical hormones compared to placebo or non-bioidentical hormones for the relief of vasomotor symptoms.\nSearch methods\nIn July 2015 we searched the Cochrane Central Register of Controlled Trials, PubMed, Embase, Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS), registers of ongoing trials and the reference lists of articles retrieved.\nSelection criteria\nRandomised controlled trials (RCTs) comparing bioidentical hormone therapy (BHT) versus placebo or non-bioidentical hormones.\nData collection and analysis\nWe used standard methodological procedures expected by the Cochrane Collaboration. Our primary outcome was vasomotor symptoms (hot flushes and night sweats). We evaluated the overall quality of the evidence using Grading of Recommendations Assessment, Development and Evaluation criteria (GRADE).\nMain results\nWe included 23 RCTs (5779 participants). Most studies (20/23) included only women with moderate to severe hot flushes. All studies compared unopposed 17 beta-estradiol (beta-estradiol) versus placebo or conjugated equine estrogens (CEE). None of the studies reported night sweats as a separate outcome.\nBHT patch versus placebo\nFrequency of hot flushes\nFour RCTs reported data suitable for analysis. There were fewer hot flushes in the BHT group, with a moderate to large effect size (SMD -0.68, 95% CI -0.83 to -0.53, four RCTs, 793 women, I2 = 67%, low quality evidence). There was moderate heterogeneity, but a consistent direction of effect. Seven RCTs reported data unsuitable for analysis; all reported a benefit in the intervention group.\nSymptom intensity\nTwo RCTs reported analysable data. Measured on a 0-100 visual analogue scale (VAS), hot flush intensity was lower in the BHT group (MD -19.94 points, 95% CI -24.86 to -15.02, two RCTs, 393 women, I2 = 54%, low quality evidence). There was moderate heterogeneity, but a consistent direction of effect.\nAdverse effects\nAdverse events (such as headache, vaginal bleeding, breast tenderness and skin reactions) were more common in the intervention group (odds ratio (OR) 2.14, 95% CI 1.29 to 3.54, 9 RCTs, 1822 women, I2 = 73%, low quality evidence). There was moderate heterogeneity, but a consistent direction of effect. In one study, five women in the intervention group developed endometrial hyperplasia.\nBHT gel versus placebo\nHot flush frequency\nThree RCTs reported this outcome, but the data were unsuitable for analysis. All reported a benefit in the BHT group.\nAdverse effects\nAdverse events were more common in the BHT group (OR 1.41, 95% CI 1.09 to 1.83, 3 RCTs, 1086 women, I2 = 0%, moderate quality evidence).\nOral BHT versus placebo\nHot flush frequency\nTwo studies reported analysable data. 
There were fewer hot flushes in the BHT group, with a moderate to large effect size (SMD -0.80, 95% CI -1.03 to -0.57, two RCTs, 356 women, I2 = 14%, low quality evidence).\nAdverse effects\nThere was no evidence of a difference between the groups (OR 1.28, 95% CI 0.84 to 1.96, 3 RCTs, 433 women, I2 = 0%, low quality evidence).\nTopical BHT emulsion versus placebo\nHot flush frequency\nOne study with data unsuitable for analysis reported a benefit in the intervention group.\nAdverse effects\nThere was no evidence of a difference between the groups (OR 1.46, 95% CI 0.80 to 2.66, one RCT, 200 women, low quality evidence).\nIntranasal BHT versus placebo\nHot flush frequency\nOnly one study reported analysable data. There were fewer hot flushes per day in the BHT group (MD -3.04 95% CI -4.05 to -2.03, one study, 458 women, moderate quality evidence)\nAdverse effects\nAdverse events (such as headache, breast tenderness, arthralgia and nausea) were more common in the intervention group (OR 1.96, 95% CI 1.26 to 3.03, one RCT, 458 women, moderate quality evidence).\nSubgroup analyses\nSubgroup analyses by dose of BHT suggested that higher doses of BHT may be associated with more effectiveness but also higher risk of adverse effects.\nBHT patch versus 0.625 mg CEE\nTwo RCTs reported this comparison, but the data were unsuitable for analysis.\nHot flush frequency\nBoth RCTs reported no evidence of a difference between the groups.\nAdverse effects\nFindings were inconsistent. In one comparison (0.1 mg BHT versus CEE), breast pain and vaginal bleeding were more frequent in the BHT group.\nOral BHT versus 0.625 mg CEE\nHot flush frequency\nOne study with data unsuitable for analysis reported no evidence of a difference between the groups.\nAdverse effects\nThere was no evidence of a difference between the groups (OR 1.20, 95% CI 0.50 to 2.87, one RCT, 103 women, very low quality evidence).\nAuthors' conclusions\nThere was low to moderate quality evidence that BHT in various forms and doses is more effective than placebo for treating moderate to severe menopausal hot flushes. There was low to moderate quality evidence of higher rates of adverse effects such as headache, vaginal bleeding, breast tenderness and skin reactions in the BHT group. There was some evidence to suggest that higher doses of BHT are associated with greater effectiveness but also with higher risk of adverse effects. Although all the included studies used unopposed estrogen, it is recommended best practice to use progestogen therapy in women with a uterus taking estrogen in order to avoid endometrial hyperplasia, regardless of the source of the estrogen. No data are yet available about the safety of BHT with regard to long-term outcomes such as heart attack, stroke and breast cancer.\nThere was no good evidence of a difference in effectiveness between BHT and CEE, and findings with regard to adverse effects were inconsistent. The quality of the evidence was too low to reach any firm conclusions.\nThe main limitations in the quality of the evidence were study risk of bias (mainly due to poor reporting of methods), imprecision and lack of data suitable for analysis.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane review evaluates the effectiveness and safety of bioidentical hormone treatment (BHT) compared to no treatment or non-bioidentical hormone treatment (HT) for vasomotor symptoms experienced during the menopausal transition period."
}
]
|
query-laysum | 5547 | [
{
"role": "user",
"content": "Abstract: Background\nKeratoconus is an ectatic (weakening) disease of the cornea, which is the clear surface at the front of the eye. Approximately 10% to 15% of patients diagnosed with keratoconus require corneal transplantation. This may be full-thickness (penetrating) or partial-thickness (lamellar).\nObjectives\nTo compare visual outcomes after deep anterior lamellar keratoplasty (DALK) and penetrating keratoplasty for keratoconus, and to compare additional outcomes relating to factors which may contribute to poor visual outcomes (e.g. astigmatism, graft rejection and failure).\nSearch methods\nWe searched a number of electronic databases including CENTRAL, PubMed and EMBASE without using any date or language restrictions. We last searched the electronic databases on 31 October 2013. We also handsearched the proceedings of several international ophthalmic conferences.\nSelection criteria\nWe included all randomised controlled trials (RCTs) comparing the outcomes of DALK and penetrating keratoplasty in the treatment of keratoconus.\nData collection and analysis\nTwo authors assessed trial quality and extracted data independently. For dichotomous data (graft failure, rejection, achievement of functional vision) results were expressed as odds ratios (ORs) and 95% confidence intervals (CIs). For continuous data (postoperative best corrected visual acuity (BCVA), uncorrected visual acuity (UCVA), keratometric astigmatism and spherical equivalent) results were expressed as mean differences (MDs) and 95% CIs.\nMain results\nWe identified two completed studies, with a total of 111 participants (n = 30 and n = 81), both conducted in Iran, that met our inclusion criteria. Participants had moderate to severe keratoconus pre-operatively and were randomly allocated to receive either DALK or penetrating keratoplasty. Only one eye of each participant was treated as part of the trials. The smaller study had 12 month follow-up data for all participants. For the larger study, four DALK surgeries had to be abandoned due to technical failure and visual and refractive outcomes were not measured in these participants. Follow-up length for the remaining 77 participants ranged from 6.8 to 36.4 months, with all 77 followed for at least three months post-suture removal. Details of the randomisation procedure were unavailable for the smaller study and so sensitivity analyses were conducted to determine if the results from this study had affected the overall results of the review.\nNeither of the included studies reported a difference between groups on any of the measures of post-graft visual achievement, keratometric astigmatism or spherical equivalent. A single case of graft failure in a penetrating keratoplasty was reported. No postoperative graft failures were reported in the DALK group of either study.\nInstances of graft rejection were reported in both groups, in both studies. The majority of these cases were successfully treated with steroids. The data, which related to all cases in each study - given that the four cases that did not go ahead as planned had already technically failed without presence of rejection - showed that rejection was less likely to occur in DALK (odds ratio (OR): 0.33, 95% confidence interval (CI) 0.14 to 0.81, GRADE rating: moderate).\nResults of the sensitivity analysis indicated that inclusion of the Razmju 2011 study did not bias the results with regards to rejection episodes. 
While sensitivity analysis showed altered results with regards to failure rates, the data available from the Javadi 2010 study alone had a very wide 95% CI, suggesting an imprecise estimate. Therefore, even after removal of the Razmju 2011 data, it is still difficult to draw conclusions regarding superiority of one technique over another with regards to graft failure.\nDALK was unable to be completed as planned in four cases and in a further three cases, complications during dissection required further intervention. Other adverse events, of varying severity, were reported in both intervention groups with similar frequency. For both types of surgery, these included postoperative astigmatism, steroid induced ocular hypertension and persistent epithelial defects. In recipients of DALK, one participant had interface neovascularisation (a proliferation of blood vessels where the host and donor cornea come together) and one had wrinkling of Descemet's membrane, the basement membrane separating the corneal stroma from the corneal endothelium. In the penetrating keratoplasty groups, one participant required graft resuturing and one had an atonic pupil, a condition in which the pupil dilates and is non-reactive.\nOverall, the quality of the evidence was rated as very low to moderate, with methodological limitations, incomplete data analysis and imprecision of findings, as well as high risk of bias in several areas for both studies.\nAuthors' conclusions\nWe found no evidence to support a difference in outcomes with regards to BCVA at three months post-graft or at any of the other time points analysed (GRADE rating: very low). We also found no evidence of a difference in outcomes with regards to graft survival, final UCVA or keratometric outcomes. We found some evidence that rejection is more likely to occur following penetrating keratoplasty than DALK (GRADE rating: moderate). The small number of studies included in the review and methodological issues relating to the two, mean that the overall quality of the evidence in this review is low. There is currently insufficient evidence to determine which technique may offer better overall outcomes - final visual acuity and time to attain this, keratometric stabilisation, risk of rejection or failure, or both, and risk of other adverse events - for patients with keratoconus. Large randomised trials comparing the outcomes of penetrating keratoplasty and DALK in the treatment of keratoconus are needed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included two randomised controlled trials (RCTs) which involved a total of 111 participants in this review. Both trials were conducted in single medical centres in Iran and compared the outcomes, at least three months post-suture removal (for a minimum of 12 months in the newer study, and for a range of 6.8 to 36.4 months in the older study), of participants with keratoconus who had received DALK to those who had received penetrating keratoplasty. The evidence is current to October 2013."
}
]
|
query-laysum | 5548 | [
{
"role": "user",
"content": "Abstract: Background\nDevelopmental dysplasia of the hip (DDH) describes the abnormal development of a hip in childhood, ranging from complete dislocation of the hip joint to subtle immaturity of a hip that is enlocated and stable within the socket. DDH occurs in around 10 per 1000 live births, though only one per 1000 are completely dislocated. There is variation in treatment pathways for DDH, which differs between hospitals and even between clinicians within the same hospital. The variation is related to the severity of dysplasia that is believed to require treatment, and the techniques used to treat dysplasia.\nObjectives\nTo determine the effectiveness of splinting and the optimal treatment strategy for the non-operative management of DDH in babies under six months of age.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, seven other electronic databases, and two trials registers up to November 2021. We also checked reference lists, contacted study authors, and handsearched relevant meetings abstracts.\nSelection criteria\nRandomised controlled trials (RCTs), including quasi-RCTs, as well as non-RCTs and cohort studies conducted after 1980 were included. Participants were babies with all severities of DDH who were under six months of age. Interventions included dynamic splints (e.g. Pavlik harness), static splints (e.g Fixed abduction brace) or double nappies (diapers), compared to no splinting or delayed splinting.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and performed risk of bias and GRADE assessments. The primary outcomes were: measurement of acetabular index at years one, two and five, as determined by radiographs (angle): the need for operative intervention to achieve reduction and to address dysplasia; and complications. We also investigated other outcomes highlighted by parents as important, including the bond between parent and child and the ability of mothers to breastfeed.\nMain results\nWe included six RCTs or quasi-RCTs (576 babies). These were supported by 16 non-RCTs (8237 babies). Five studies had non-commercial funding, three studies stated 'no funding' and 14 studies did not state funding source. The RCTs were generally at unclear risk of bias, although we judged three RCTs to be at high risk of bias for incomplete outcome data. The non-RCTs were of moderate and critical risk of bias. We did not undertake meta-analysis due to methodological and clinical differences between studies; instead, we have summarised the results narratively.\nDynamic splinting versus delayed or no splinting\nFour RCTs and nine non-RCTs compared immediate dynamic splinting and delayed dynamic splinting or no splinting. Of the RCTs, two considered stable hips and one considered unstable (dislocatable) hips and one jointly considered unstable and stable hips. No studies considered only dislocated hips.\nTwo RCTs (265 babies, very low-certainty evidence) reported acetabular index at one year amongst stable or dislocatable hips. Both studies found there may be no evidence of a difference in splinting stable hips at first diagnosis compared to a strategy of active surveillance: one reported a mean difference (MD) of 0.10 (95% confidence interval (CI) −0.74 to 0.94), and the other an MD of 0.20 (95% CI −1.65 to 2.05). 
Two RCTs of stable hips (181 babies, very low-certainty evidence) reported there may be no evidence of a difference between groups for acetabular index at two years: one study reported an MD of −1.90 (95% CI −4.76 to 0.96), and another study reported an MD of −0.10 (95% CI −1.93 to 1.73), but did not take into account hips from the same child. No study reported data at five years.\nFour RCTs (434 babies, very low-certainty evidence) reported the need for surgical intervention. Three studies reported that no surgical interventions occurred. In the remaining study, two babies in the dynamic splinting group developed instability and were subsequently treated surgically. This study did not explicitly state if this treatment was to achieve concentric reduction or address residual dysplasia.\nThree RCTs (390 babies, very low-certainty evidence) reported no complications (avascular necrosis and femoral nerve palsy).\nDynamic splinting versus static splinting\nOne RCT and five non-RCTs compared dynamic versus static splinting. The RCT (118 hips) reported no occurrences of avascular necrosis (very low-certainty evidence) and did not report radiological outcomes or need for operative intervention.\nOne quasi-RCT compared double nappies versus delayed or no splinting but reported no outcomes of interest.\nOther comparisons\nNo RCTs compared static splinting versus delayed or no splinting or staged weaning versus immediate removal.\nAuthors' conclusions\nThere is a paucity of RCT evidence for splinting for the non-operative management of DDH: we included only six RCTs with 576 babies. Moreover, there was considerable heterogeneity between the studies, precluding meta-analysis. We judged the RCT evidence for all primary outcomes as being of very low certainty, meaning we are very uncertain about the true effects.\nResults from individual studies provide limited evidence of intervention effects on different severities of DDH. Amongst stable dysplastic hips, there was no evidence to suggest that treatment at any stage expedited the development of the acetabulum. For dislocatable hips, a delay in treatment onset to six weeks does not appear to result in any evidence of a difference in the development of the acetabulum at one year or increased risk of surgery. However, delayed splinting may reduce the number of babies requiring treatment with a harness.\nNo RCTs compared static splinting with delayed or no splinting, staged weaning versus immediate removal or double nappies versus delayed or no splinting.\nThere were few operative interventions or complications amongst the RCTs and the non-randomised studies. There's no apparent signal to indicate a higher frequency of either outcome in either intervention group.\nGiven the frequency of this disease, and the fact that many countries undertake mandatory DDH screening, there is a clear need to develop an evidence-based pathway for treatment. Particular uncertainties requiring future research are the effectiveness of splinting amongst stable dysplastic hips, the optimal timing for the onset of splinting, the optimal type of splint to use and the need for 'weaning of splints'. Only once a robust pathway for treatment is established, can we properly assess the cost-effectiveness of screening interventions for DDH.\n\nGiven the provided abstract, please respond to the following query: \"How are these hips treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Shallow and dislocated hips are commonly treated with hip splints. Splints control the movement of the legs to guide the hips into the socket, allowing the hip to improve naturally. Splints can either fix the legs in position, called ‘static splints’, or allow the legs some freedom to move, called ‘dynamic splints’. Occasionally, clinicians may recommend the use of double nappies (double diapers), which are bulky, and gently push the legs apart to act as a type of splint."
}
]
|
query-laysum | 5549 | [
{
"role": "user",
"content": "Abstract: Background\nEndothelin, a powerful vasoconstrictor, is one of the mediators in the causation of persistent pulmonary hypertension of the newborn (PPHN). Theoretically, endothelin receptor antagonists (ETRA) have the potential to improve the outcomes of infants with PPHN.\nObjectives\nTo assess the efficacy and safety of ETRA in the treatment of PPHN in full-term, post-term and late preterm infants.\nTo assess the efficacy and safety of selective ETRAs (which block only the ETA receptors) and non-selective ETRAs (which block both ETA and ETB receptors) separately.\nSearch methods\nCENTRAL (Cochrane Central Register of Controlled Trials), MEDLINE, EMBASE and CINAHL databases were searched until December 2015.\nSelection criteria\nRandomised, cluster-randomised or quasi-randomised controlled trials were eligible.\nData collection and analysis\nTwo review authors independently searched the literature, selected the studies, assessed the risk of bias and extracted the data. A fixed-effect model was used for meta-analysis. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the quality of evidence.\nMain results\nTwo randomised controlled trials of ETRA met the inclusion criteria. Both studies utilized oral Bosentan. The first study was done in a setting where inhaled nitric oxide (iNO) therapy was not available. Forty-seven infants (≥ 34 weeks' gestation) were randomised to receive either Bosentan or placebo. The second study was a multicentre study where iNO therapy was the standard of care for PPHN. Twenty-one infants were randomised to receive either 'iNO plus Bosentan' or 'iNO plus placebo'.\nIn the first study, there was no significant difference in the incidence of death before hospital discharge between the Bosentan and placebo groups (1/23 vs 3/14; RR 0.20, 95% CI 0.02 to 1.77; RD −0.17, 95% CI −0.40 to 0.06). A higher proportion of infants in the Bosentan group showed improvement in oxygenation index (OI) at the end of therapy (21/24 vs 3/15; RR 4.38, 95% CI 1.57 to 12.17; RD 0.68, 95% CI 0.43 to 0.92; number needed to treat for a beneficial outcome (NNTB) 1.5). The duration of mechanical ventilation was lower in the Bosentan group (4.3 ± 0.9 vs 11.5 ± 0.6 days; MD −7.20, 95% CI −7.64 to −6.76). There was no significant difference in adverse neurological outcomes at six months (0/23 vs 4/14; RR 0.07, 95% CI 0.00 to 1.20; RD −0.29, 95% CI −0.52 to -0.05). The study suffered from a high risk of attrition bias since 8/23 infants in the placebo group were excluded from various analyses. Since the protocol for the study could not be accessed, the study suffered from unclear risk of reporting bias.\nIn the second study, there was no significant difference in the incidence of treatment failure needing extracorporeal membrane oxygenation (ECMO) between the 'iNO plus Bosentan' vs 'iNO plus placebo' groups (1/13 vs 0/8; RR 1.93, 95% CI 0.09 to 42.35; RD 0.08, 95% CI −0.14 to 0.30). There was no significant difference in the median time to wean from iNO ('iNO plus Bosentan': 3.7 days (95% CI 1.17 to 6.95); 'iNO plus placebo': 2.9 days (95% CI 1.26 to 4.23); P = 0.34). There were no significant differences in the OI 0, 3, 5, 12, 24, 48 and 72 hours of treatment between the groups. There were no significant differences in the time to complete weaning from mechanical ventilation (median 10.8 days (CI 3.21 to 12.21) versus 8.6 days (CI 3.71 to 9.66); P = 0.24). 
The study had unequal distribution to the Bosentan group (N = 13) and the placebo group (N = 8). The methods used for generating random sequence numbers and allocation concealment were unclear, resulting in unclear risk of selection bias.\nBoth studies reported that Bosentan was well tolerated and no major adverse effects were noted. Data from the two studies was not pooled given the heterogenous nature of the clinical settings and the modalities used for the treatment of PPHN.\nOverall, the quality of evidence was considered low, given the small sample size of the included studies, the numerical imbalance between the groups due to randomisation and attrition, and unclear risk of bias on some of the important domains.\nAuthors' conclusions\nThere is inadequate evidence to support the use of ETRAs either as stand-alone therapy or as adjuvant to inhaled nitric oxide in PPHN. Adequately powered RCTs are needed.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found very limited data from two included studies for the use of this class of drugs in newborn babies. Overall, the quality of evidence was considered low because of the very small sample size and methodological issues in the included studies."
}
]
|
query-laysum | 5550 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane review first published in 2008. Scoliosis surgery is often associated with substantial blood loss and potentially detrimental effects in children. Antifibrinolytic agents are often used to reduce perioperative blood loss. Clinical trials have evaluated their efficacy in children undergoing surgical correction of scoliosis, but no systematic review has been published. This review was first published in 2008 and was updated in 2016.\nObjectives\nTo assess the efficacy and safety of aprotinin, tranexamic acid and aminocaproic acid in reducing blood loss and transfusion requirements in children undergoing surgery for correction of idiopathic or secondary scoliosis.\nSearch methods\nWe ran the search for the previous review in June 2007. For this updated version, we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 7), MEDLINE (1946 to August week 1 2015), Embase (1947 to 2015 week 38), Latin American Caribbean Health Sciences Literature (LILACS) (1982 to 14 August 2015), Database of Abstracts of Reviews of Effects (DARE; 2015, Issue 2) and reference lists of reviews and retrieved articles for randomized controlled trials in any language. We also checked the clinical trial registry at http://www.clinicaltrials.gov on 8 October 2015.\nSelection criteria\nWe included blinded and unblinded randomized controlled trials (RCTs) that evaluated the effects of antifibrinolytics on perioperative blood loss in children 18 years of age or younger and undergoing scoliosis surgery.\nData collection and analysis\nTwo review authors independently extracted data. The primary outcome was total blood loss (intraoperative and postoperative combined). Secondary efficacy outcomes were the number of participants receiving blood transfusion (both autologous and allogeneic) or receiving allogeneic blood transfusion alone, and the total amount of blood transfused. Safety outcomes included the number of deaths, the number of participants reporting any adverse event or a serious adverse event, withdrawals due to adverse events and the number of participants experiencing a specific adverse event (i.e. renal insufficiency, hypersensitivity or thrombosis). We assessed methodological risk of bias for each included study and employed the GRADE approach to assess the overall quality of the evidence.\nMain results\nWe included three new studies (201 participants) in this updated review, for a total of nine studies (455 participants). All but one study employed placebo as the control group intervention. For the primary outcome, antifibrinolytic drugs decreased the amount of perioperative blood loss by 427 mL (95% confidence interval (CI) 251 to 603 mL), for a reduction of over 20% versus placebo. We rated the quality of evidence for our primary outcome as low on the basis of unclear risk of bias for several domains in most studies and the small total number of participants.\nFor secondary outcomes, fewer participants receiving antifibrinolytic drugs received transfusion (allogeneic or autologous) versus those receiving placebo (risk ratio (RR) 0.65, 95% CI 0.50 to 0.85, number needed to treat to prevent one additional harmful outcome (NNTp) 5; very low-quality evidence). Only two studies specifically evaluated the number of participants transfused with only allogeneic blood (risk difference (RD) -0.15, 95% CI -0.26 to -0.03, NNTp 7; very low-quality evidence). 
Antifibrinolytic drugs decreased the volume of blood transfused by 327 mL (95% CI -186 to -469 mL; low-quality evidence).\nNo study reported deaths in active or control groups. Data were insufficient to allow performance of meta-analysis for any safety outcome. No studies adequately described their methods in assessing safety. The only adverse event of note occurred in one study, when three participants in the placebo group developed postoperative deep vein thrombosis.\nAuthors' conclusions\nSince the last published version of this review (2008), we have found three new studies. Additional evidence shows that antifibrinolytics reduce the requirement for both autologous and allogeneic blood transfusion. Limited evidence of low to very low quality supports the use of antifibrinolytic drugs for reducing blood loss and decreasing the risk, and volume, of transfusion in children undergoing scoliosis surgery. Evidence is insufficient to support the use of a particular agent, although tranexamic acid may be preferred, given its widespread availability. The optimal dose regimen for any of these three agents has not been established. Although adverse events appear to occur infrequently, evidence is insufficient to confirm the safety of these agents, particularly for rare but potentially catastrophic events. No long-term safety data are available.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The antifibrinolytic drugs evaluated in this systematic review were aprotinin, tranexamic acid and aminocaproic acid. We found nine studies that enrolled a total of 455 participants, aged 18 years or younger, who received either antifibrinolytic drugs or a placebo.\nTotal enrolment in each study ranged from 36 participants to 80 participants. Two of the studies evaluated aprotinin, four tranexamic acid and four aminocaproic acid. One study compared aminocaproic acid with tranexamic acid. The other studies administered placebo to the control group. Studies that evaluated aprotinin and aminocaproic acid used high doses (when reported). Of the studies that evaluated tranexamic acid, two used high doses, one used low doses and one did not describe the dose given. Five studies evaluated idiopathic scoliosis (scoliosis with no known cause), and four evaluated both idiopathic scoliosis and scoliosis that occurred as the result of another disease, such as cerebral palsy.\nOf the nine included studies, five reported how long patients were assessed. Follow-up ranged from one to 10 days after surgery, or lasted for the participant's length of stay in the hospital."
}
]
|
query-laysum | 5551 | [
{
"role": "user",
"content": "Abstract: Background\nRandomized controlled trials (RCTs) have yielded conflicting results regarding the ability of beta-blockers to influence perioperative cardiovascular morbidity and mortality. Thus routine prescription of these drugs in unselected patients remains a controversial issue. A previous version of this review assessing the effectiveness of perioperative beta-blockers in cardiac and non-cardiac surgery was last published in 2018. The previous review has now been split into two reviews according to type of surgery. This is an update and assesses the evidence in cardiac surgery only.\nObjectives\nTo assess the effectiveness of perioperatively administered beta-blockers for the prevention of surgery-related mortality and morbidity in adults undergoing cardiac surgery.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, Biosis Previews and Conference Proceedings Citation Index-Science on 28 June 2019. We searched clinical trials registers and grey literature, and conducted backward- and forward-citation searching of relevant articles.\nSelection criteria\nWe included RCTs and quasi-randomized studies comparing beta-blockers with a control (placebo or standard care) administered during the perioperative period to adults undergoing cardiac surgery. We excluded studies in which all participants in the standard care control group were given a pharmacological agent that was not given to participants in the intervention group, studies in which all participants in the control group were given a beta-blocker, and studies in which beta-blockers were given with an additional agent (e.g. magnesium). We excluded studies that did not measure or report review outcomes.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, and assessed risks of bias. We assessed the certainty of evidence with GRADE.\nMain results\nWe included 63 studies with 7768 participants; six studies were quasi-randomized and the remaining were RCTs. All participants were undergoing cardiac surgery, and in most studies, at least some of the participants were previously taking beta-blockers. Types of beta-blockers were: propranolol, metoprolol, sotalol, esmolol, landiolol, acebutolol, timolol, carvedilol, nadolol, and atenolol. In twelve studies, beta-blockers were titrated according to heart rate or blood pressure. Duration of administration varied between studies, as did the time at which drugs were administered; in nine studies this was before surgery, in 20 studies during surgery, and in the remaining studies beta-blockers were started postoperatively. Overall, we found that most studies did not report sufficient details for us to adequately assess risk of bias. In particular, few studies reported methods used to randomize participants to groups. In some studies, participants in the control group were given beta-blockers as rescue therapy during the study period, and all studies in which the control was standard care were at high risk of performance bias because of the open-label study design. No studies were prospectively registered with clinical trials registers, which limited the assessment of reporting bias. 
We judged 68% studies to be at high risk of bias in at least one domain.\nStudy authors reported few deaths (7 per 1000 in both the intervention and control groups), and we found low-certainty evidence that beta-blockers may make little or no difference to all-cause mortality at 30 days (risk ratio (RR) 0.95, 95% confidence interval (CI) 0.47 to 1.90; 29 studies, 4099 participants). For myocardial infarctions, we found no evidence of a difference in events (RR 1.05, 95% CI 0.72 to 1.52; 25 studies, 3946 participants; low-certainty evidence). Few study authors reported cerebrovascular events, and the evidence was uncertain (RR 1.37, 95% CI 0.51 to 3.67; 5 studies, 1471 participants; very low-certainty evidence). Based on a control risk of 54 per 1000, we found low-certainty evidence that beta-blockers may reduce episodes of ventricular arrhythmias by 32 episodes per 1000 (RR 0.40, 95% CI 0.25 to 0.63; 12 studies, 2296 participants). For atrial fibrillation or flutter, there may be 163 fewer incidences with beta-blockers, based on a control risk of 327 incidences per 1000 (RR 0.50, 95% CI 0.42 to 0.59; 40 studies, 5650 participants; low-certainty evidence). However, the evidence for bradycardia and hypotension was less certain. We found that beta-blockers may make little or no difference to bradycardia (RR 1.63, 95% CI 0.92 to 2.91; 12 studies, 1640 participants; low-certainty evidence), or hypotension (RR 1.84, 95% CI 0.89 to 3.80; 10 studies, 1538 participants; low-certainty evidence).\nWe used GRADE to downgrade the certainty of evidence. Owing to studies at high risk of bias in at least one domain, we downgraded each outcome for study limitations. Based on effect size calculations in the previous review, we found an insufficient number of participants in all outcomes (except atrial fibrillation) and, for some outcomes, we noted a wide confidence interval; therefore, we also downgraded outcomes owing to imprecision. The evidence for atrial fibrillation and length of hospital stay had a moderate level of statistical heterogeneity which we could not explain, and we, therefore, downgraded these outcomes for inconsistency.\nAuthors' conclusions\nWe found no evidence of a difference in early all-cause mortality, myocardial infarction, cerebrovascular events, hypotension and bradycardia. However, there may be a reduction in atrial fibrillation and ventricular arrhythmias when beta-blockers are used. A larger sample size is likely to increase the certainty of this evidence. Four studies awaiting classification may alter the conclusions of this review.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Beta-blockers may be beneficial for people who are undergoing cardiac surgery because they may reduce the number of people who experience atrial fibrillation and ventricular arrhythmias. Beta-blockers may make little or no difference to the other outcomes in this review, including death, heart attacks or stroke."
}
]
|
query-laysum | 5552 | [
{
"role": "user",
"content": "Abstract: Background\nThis is the second update of a Cochrane Review first published in 2015 and last updated in 2018.\nAppendectomy, the surgical removal of the appendix, is performed primarily for acute appendicitis. Patients who undergo appendectomy for complicated appendicitis, defined as gangrenous or perforated appendicitis, are more likely to suffer postoperative complications. The routine use of abdominal drainage to reduce postoperative complications after appendectomy for complicated appendicitis is controversial.\nObjectives\nTo assess the safety and efficacy of abdominal drainage to prevent intraperitoneal abscess after appendectomy (irrespective of open or laparoscopic) for complicated appendicitis; to compare the effects of different types of surgical drains; and to evaluate the optimal time for drain removal.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE, Ovid Embase, Web of Science, the World Health Organization International Trials Registry Platform, ClinicalTrials.gov, Chinese Biomedical Literature Database, and three trials registers on 24 February 2020, together with reference checking, citation searching, and contact with study authors to identify additional studies.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared abdominal drainage versus no drainage in people undergoing emergency open or laparoscopic appendectomy for complicated appendicitis. We also included RCTs that compared different types of drains and different schedules for drain removal in people undergoing appendectomy for complicated appendicitis.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently identified the trials for inclusion, collected the data, and assessed the risk of bias. We used the GRADE approach to assess evidence certainty. We included intraperitoneal abscess as the primary outcome. Secondary outcomes were wound infection, morbidity, mortality, hospital stay, hospital costs, pain, and quality of life.\nMain results\nUse of drain versus no drain\nWe included six RCTs (521 participants) comparing abdominal drainage and no drainage in participants undergoing emergency open appendectomy for complicated appendicitis. The studies were conducted in North America, Asia, and Africa. The majority of participants had perforated appendicitis with local or general peritonitis. All participants received antibiotic regimens after open appendectomy. None of the trials was assessed as at low risk of bias.\nThe evidence is very uncertain regarding the effects of abdominal drainage versus no drainage on intraperitoneal abscess at 30 days (risk ratio (RR) 1.23, 95% confidence interval (CI) 0.47 to 3.21; 5 RCTs; 453 participants; very low-certainty evidence) or wound infection at 30 days (RR 2.01, 95% CI 0.88 to 4.56; 5 RCTs; 478 participants; very low-certainty evidence). There were seven deaths in the drainage group (N = 183) compared to one in the no-drainage group (N = 180), equating to an increase in the risk of 30-day mortality from 0.6% to 2.7% (Peto odds ratio 4.88, 95% CI 1.18 to 20.09; 4 RCTs; 363 participants; low-certainty evidence). 
Abdominal drainage may increase 30-day overall complication rate (morbidity; RR 6.67, 95% CI 2.13 to 20.87; 1 RCT; 90 participants; low-certainty evidence) and hospital stay by 2.17 days (95% CI 1.76 to 2.58; 3 RCTs; 298 participants; low-certainty evidence) compared to no drainage.\nThe outcomes hospital costs, pain, and quality of life were not reported in any of the included studies.\nThere were no RCTs comparing the use of drain versus no drain in participants undergoing emergency laparoscopic appendectomy for complicated appendicitis.\nOpen drain versus closed drain\nThere were no RCTs comparing open drain versus closed drain for complicated appendicitis.\nEarly versus late drain removal\nThere were no RCTs comparing early versus late drain removal for complicated appendicitis.\nAuthors' conclusions\nThe certainty of the currently available evidence is low to very low. The effect of abdominal drainage on the prevention of intraperitoneal abscess or wound infection after open appendectomy is uncertain for patients with complicated appendicitis. The increased rates for overall complication rate and hospital stay for the drainage group compared to the no-drainage group are based on low-certainty evidence. Consequently, there is no evidence for any clinical improvement with the use of abdominal drainage in patients undergoing open appendectomy for complicated appendicitis. The increased risk of mortality with drainage comes from eight deaths observed in just under 400 recruited participants. Larger studies are needed to more reliably determine the effects of drainage on morbidity and mortality outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "'Appendicitis' refers to inflammation (the reaction of a part of the body to injury or infection, characterised by swelling, heat, and pain) of the appendix. Appendectomy, the surgical removal of the appendix, is performed primarily in individuals who have acute appendicitis. Individuals undergoing an appendectomy for complicated appendicitis, which is defined as gangrenous (soft-tissue death) or perforated (burst) appendicitis, are more likely to suffer postoperative complications. The routine placement of a surgical drain to prevent intraperitoneal abscess after an appendectomy for complicated appendicitis is controversial and has been questioned."
}
]
|
query-laysum | 5553 | [
{
"role": "user",
"content": "Abstract: Background\nThe impact of exercise-based cardiac rehabilitation (CR) following heart valve surgery is uncertain. We conducted an update of this systematic review and a meta-analysis to assess randomised controlled trial evidence for the use of exercise-based CR following heart valve surgery.\nObjectives\nTo assess the benefits and harms of exercise-based CR compared with no exercise training in adults following heart valve surgery or repair, including both percutaneous and surgical procedures. We considered CR programmes consisting of exercise training with or without another intervention (such as an intervention with a psycho-educational component).\nSearch methods\nWe searched the Cochrane Central Register of Clinical Trials (CENTRAL), in the Cochrane Library; MEDLINE (Ovid); Embase (Ovid); the Cumulative Index to Nursing and Allied Health Literature (CINAHL; EBSCO); PsycINFO (Ovid); Latin American Caribbean Health Sciences Literature (LILACS; Bireme); and Conference Proceedings Citation Index-Science (CPCI-S) on the Web of Science (Clarivate Analytics) on 10 January 2020. We searched for ongoing trials from ClinicalTrials.gov, Clinical-trials.com, and the World Health Organization International Clinical Trials Registry Platform on 15 May 2020.\nSelection criteria\nWe included randomised controlled trials that compared exercise-based CR interventions with no exercise training. Trial participants comprised adults aged 18 years or older who had undergone heart valve surgery for heart valve disease (from any cause) and had received heart valve replacement or heart valve repair. Both percutaneous and surgical procedures were included.\nData collection and analysis\nTwo review authors independently extracted data. We assessed the risk of systematic errors (‘bias’) by evaluating risk domains using the 'Risk of bias' (RoB2) tool. We assessed clinical and statistical heterogeneity. We performed meta-analyses using both fixed-effect and random-effects models. We used the GRADE approach to assess the quality of evidence for primary outcomes (all-cause mortality, all-cause hospitalisation, and health-related quality of life).\nMain results\nWe included six trials with a total of 364 participants who have had open or percutaneous heart valve surgery. For this updated review, we identified four additional trials (216 participants). One trial had an overall low risk of bias, and we classified the remaining five trials as having some concerns.\nFollow-up ranged across included trials from 3 to 24 months. Based on data at longest follow-up, a total of nine participants died: 4 CR versus 5 control (relative risk (RR) 0.83, 95% confidence interval (CI) 0.26 to 2.68; 2 trials, 131 participants; GRADE quality of evidence very low). No trials reported on cardiovascular mortality. One trial reported one cardiac-related hospitalisation in the CR group and none in the control group (RR 2.72, 95% CI 0.11 to 65.56; 1 trial, 122 participants; GRADE quality of evidence very low). 
We are uncertain about health-related quality of life at completion of the intervention in CR compared to control (Short Form (SF)-12/36 mental component: mean difference (MD) 1.28, 95% CI -1.60 to 4.16; 2 trials, 150 participants; GRADE quality of evidence very low; and SF-12/36 physical component: MD 2.99, 95% CI -5.24 to 11.21; 2 trials, 150 participants; GRADE quality of evidence very low), or at longest follow-up (SF-12/36 mental component: MD -1.45, 95% CI -4.70 to 1.80; 2 trials, 139 participants; GRADE quality of evidence very low; and SF-12/36 physical component: MD -0.87, 95% CI -3.57 to 1.83; 2 trials, 139 participants; GRADE quality of evidence very low).\nAuthors' conclusions\nDue to lack of evidence and the very low quality of available evidence, this updated review is uncertain about the impact of exercise-CR in this population in terms of mortality, hospitalisation, and health-related quality of life. High-quality (low risk of bias) evidence on the impact of CR is needed to inform clinical guidelines and routine practice.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cardiac rehabilitation (CR) that includes exercise training has been recommended as treatment for people after heart valve surgery. However, the strength of this evidence is uncertain. This updated review aimed to assess the benefits and harms of exercise-based CR for adults who have undergone heart valve surgery or repair. All types of heart valve surgery were included."
}
]
|
query-laysum | 5554 | [
{
"role": "user",
"content": "Abstract: Background\nIn many low- and middle-income countries (LMICs) morbidity and mortality associated with cardiovascular diseases (CVDs) have grown exponentially over recent years. It is estimated that about 80% of CVD deaths occur in LMICs. People in LMICs are more exposed to cardiovascular risk factors such as tobacco, and often do not have access to effective and equitable healthcare services (including early detection services). Evidence from high-income countries indicates that multiple risk factor intervention programmes do not result in reductions in CVD events. Given the increasing incidence of CVDs and lower CVD health awareness in LMICs it is possible that such programmes may have beneficial effects.\nObjectives\nTo determine the effectiveness of multiple risk factor interventions (with or without pharmacological treatment) aimed at modifying major cardiovascular risk factors for the primary prevention of CVD in LMICs.\nSearch methods\nWe searched (from inception to 27 June 2014) the Cochrane Library (CENTRAL, HTA, DARE, EED), MEDLINE, EMBASE, Global Health and three other databases on 27 June 2014. We also searched two clinical trial registers and conducted reference checking to identify additional studies. We applied no language limits.\nSelection criteria\nWe included randomised controlled trials (RCTs) of health promotion interventions to achieve behaviour change (i.e. smoking cessation, dietary advice, increasing activity levels) with or without pharmacological treatments, which aim to alter more than one cardiovascular risk factor (i.e. diet, reduce blood pressure, smoking, total blood cholesterol or increase physical activity) of at least six months duration of follow-up conducted in LMICs.\nData collection and analysis\nTwo authors independently assessed trial eligibility and risk of bias, and extracted data. We combined dichotomous data using risk ratios (RRs) and continuous data using mean differences (MDs), and presented all results with a 95% confidence interval (CI). The primary outcome was combined fatal and non-fatal cardiovascular disease events.\nMain results\nThirteen trials met the inclusion criteria and are included in the review. All studies had at least one domain with unclear risk of bias. Some studies were at high risk of bias for random sequence generation (two trials), allocation concealment (two trials), blinding of outcome assessors (one trial) and incomplete outcome data (one trial). Duration and content of multiple risk factor interventions varied across the trials. Two trials recruited healthy participants and the other 11 trials recruited people with varying risks of CVD, such as participants with known hypertension and type 2 diabetes. Only one study reported CVD outcomes and multiple risk factor interventions did not reduce the incidence of cardiovascular events (RR 0.57, 95% CI 0.11 to 3.07, 232 participants, low-quality evidence); the result is imprecise (a wide confidence interval and small sample size) and makes it difficult to draw a reliable conclusion. None of the included trials reported all-cause mortality. 
The pooled effect indicated a reduction in systolic blood pressure (MD -6.72 mmHg, 95% CI -9.82 to -3.61, I² = 91%, 4868 participants, low-quality evidence), diastolic blood pressure (MD -4.40 mmHg, 95% CI -6.47 to -2.34, I² = 92%, 4701 participants, low-quality evidence), body mass index (MD -0.76 kg/m², 95% CI -1.29 to -0.22, I² = 80%, 2984 participants, low-quality evidence) and waist circumference (MD -3.31, 95% CI -4.77 to -1.86, I² = 55%, 393 participants, moderate-quality evidence) in favour of multiple risk factor interventions, but there was substantial heterogeneity. There was insufficient evidence to determine the effect of these interventions on consumption of fruit or vegetables, smoking cessation, glycated haemoglobin, fasting blood sugar, high density lipoprotein (HDL) cholesterol, low density lipoprotein (LDL) cholesterol and total cholesterol. None of the included trials reported on adverse events.\nAuthors' conclusions\nDue to the limited evidence currently available, we can draw no conclusions as to the effectiveness of multiple risk factor interventions on combined CVD events and mortality. There is some evidence that multiple risk factor interventions may lower blood pressure levels, body mass index and waist circumference in populations in LMIC settings at high risk of hypertension and diabetes. There was considerable heterogeneity between the trials, the trials were small, and at some risk of bias. Larger studies with longer follow-up periods are required to confirm whether multiple risk factor interventions lead to reduced CVD events and mortality in LMIC settings.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review examines the effectiveness of health promotion interventions that aim to reduce more than one major cardiovascular risk factor (multiple risk factor intervention) for the primary prevention of cardiovascular disease in low- and middle-income countries (LMICs). Such risk factors are overweight/obesity, high blood pressure, smoking, too much bad cholesterol or low physical activity levels."
}
]
|
query-laysum | 5555 | [
{
"role": "user",
"content": "Abstract: Background\nAggression occurs frequently within health and social care settings. It can result in injury to patients and staff and can adversely affect staff performance and well-being. De-escalation is a widely used and recommended intervention for managing aggression, but the efficacy of the intervention as a whole and the specific techniques that comprise it are unclear.\nObjectives\nTo assess the effects of de-escalation techniques for managing non-psychosis-induced aggression in adults in care settings, in both staff and service users.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL and 14 other databases in September 2017, plus three trials registers in October 2017. We also checked references, and contacted study authors and authorities in the field to identify additional published and unpublished studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing de-escalation techniques with standard practice or alternative techniques for managing aggressive behaviour in adult care settings. We excluded studies in which participants had psychosis.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane.\nMain results\nThis review includes just one cluster-randomised study of 306 older people with dementia and an average age of 86 years, conducted across 16 nursing homes in France. The study did not measure any of our primary or secondary outcomes but did measure behavioural change using three measurement scales: the Cohen-Mansfield Agitation Inventory (CMAI; 29-item scale), the Neuropsychiatric Inventory (NPI; 12-item scale), and the Observation Scale (OS; 25-item scale). For the CMAI, the study reports a Global score (29 items rated on a seven-point scale (1 = never occurs to 7 = occurs several times an hour) and summed to give a total score ranging from 29 to 203) and mean scores (evaluable items (rated on the same 7-point scale) divided by the theoretical total number of items) for the following four domains: Physically Non-Aggressive Behaviour, such as pacing (13 items); Verbally Non-Aggressive Behaviour, such as repetition (four items); Physically Aggressive Behaviour, such as hitting (nine items); and Verbally Aggressive Behaviour, such as swearing (three items). Four of the five CMAI scales improved in the intervention group (Global: change mean difference (MD) −5.69 points, 95% confidence interval (CI) −9.59 to −1.79; Physically Non-Aggressive: change MD −0.32 points, 95% CI −0.49 to −0.15; Verbally Non-Aggressive: change MD −0.44 points, 95% CI −0.69 to −0.19; and Verbally Aggressive: change MD −0.16 points, 95% CI −0.31 to −0.01). There was no difference in change scores on the Physically Aggressive scale (MD −0.08 points, 95% CI −0.37 to 0.21). Using GRADE guidelines, we rated the quality of this evidence as very low due to high risk of bias and indirectness of the outcome measures. There were no differences in NPI or OS change scores between groups by the end of the study.\nWe also identified one ongoing study.\nAuthors' conclusions\nThe limited evidence means that uncertainty remains around the effectiveness of de-escalation and the relative efficacy of different techniques. High-quality research on the effectiveness of this intervention is therefore urgently needed.\n\nGiven the provided abstract, please respond to the following query: \"Currentness of evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to September 2017."
}
]
|
query-laysum | 5556 | [
{
"role": "user",
"content": "Abstract: Background\nFebrile seizures occurring in a child older than one month during an episode of fever affect 2-4% of children in Great Britain and the United States and recur in 30%. Rapid-acting antiepileptics and antipyretics given during subsequent fever episodes have been used to avoid the adverse effects of continuous antiepileptic drugs.\nThis is an updated version of a Cochrane Review previously published in 2017.\nObjectives\nTo evaluate primarily the effectiveness and safety of antiepileptic and antipyretic drugs used prophylactically to treat children with febrile seizures; and also to evaluate any other drug intervention where there is a sound biological rationale for its use.\nSearch methods\nFor the latest update we searched the following databases on 3 February 2020: Cochrane Register of Studies (CRS Web), MEDLINE (Ovid, 1946 to 31 January 2020). CRS Web includes randomised or quasi-randomised controlled trials from PubMed, Embase, ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform (ICTRP), the Cochrane Central Register of Controlled Trials (CENTRAL), and the specialised registers of Cochrane Review Groups including the Cochrane Epilepsy Group. We imposed no language restrictions and contacted researchers to identify continuing or unpublished studies.\nSelection criteria\nTrials using randomised or quasi-randomised participant allocation that compared the use of antiepileptics, antipyretics or recognised Central Nervous System active agents with each other, placebo, or no treatment.\nData collection and analysis\nFor the original review, two review authors independently applied predefined criteria to select trials for inclusion and extracted the predefined relevant data, recording methods for randomisation, blinding, and exclusions. For the 2016 update, a third review author checked all original inclusions, data analyses, and updated the search. For the 2020 update, one review author updated the search and performed the data analysis following a peer-review process with the original review authors. We assessed seizure recurrence at 6, 12, 18, 24, 36, 48 months, and where data were available at age 5 to 6 years along with recorded adverse effects. We evaluated the presence of publication bias using funnel plots.\nMain results\nWe included 42 articles describing 32 randomised trials, with 4431 randomised participants used in the analysis of this review. We analysed 15 interventions of continuous or intermittent prophylaxis and their control treatments. Methodological quality was moderate to poor in most studies. 
We found no significant benefit for intermittent phenobarbital, phenytoin, valproate, pyridoxine, ibuprofen, or zinc sulfate versus placebo or no treatment; nor for diclofenac versus placebo followed by ibuprofen, paracetamol, or placebo; nor for continuous phenobarbital versus diazepam, intermittent rectal diazepam versus intermittent valproate, or oral diazepam versus clobazam.\nThere was a significant reduction of recurrent febrile seizures with intermittent diazepam versus placebo or no treatment at six months (risk ratio (RR) 0.64, 95% confidence interval (CI) 0.48 to 0.85; 6 studies, 1151 participants; moderate-certainty evidence), 12 months (RR 0.69, 95% CI 0.56 to 0.84; 8 studies, 1416 participants; moderate-certainty evidence), 18 months (RR 0.37, 95% CI 0.23 to 0.60; 1 study, 289 participants; low-certainty evidence), 24 months (RR 0.73, 95% CI 0.56 to 0.95; 4 studies, 739 participants; high-certainty evidence), 36 months (RR 0.58, 95% CI 0.40 to 0.85; 1 study, 139 participants; low-certainty evidence), 48 months (RR 0.36, 95% CI 0.15 to 0.89; 1 study, 110 participants; moderate-certainty evidence), with no benefit at 60 to 72 months (RR 0.08, 95% CI 0.00 to 1.31; 1 study, 60 participants; very low-certainty evidence).\nPhenobarbital versus placebo or no treatment reduced seizures at six months (RR 0.59, 95% CI 0.42 to 0.83; 6 studies, 833 participants; moderate-certainty evidence), 12 months (RR 0.54, 95% CI 0.42 to 0.70; 7 studies, 807 participants; low-certainty evidence), and 24 months (RR 0.69, 95% CI 0.53 to 0.89; 3 studies, 533 participants; moderate-certainty evidence), but not at 18 months (RR 0.77, 95% CI 0.56 to 1.05; 2 studies, 264 participants) or 60 to 72 months follow-up (RR 1.50, 95% CI 0.61 to 3.69; 1 study, 60 participants; very low-certainty evidence).\nIntermittent clobazam compared to placebo at six months resulted in a RR of 0.36 (95% CI 0.20 to 0.64; 1 study, 60 participants; low-certainty evidence), an effect found against an extremely high (83.3%) recurrence rate in the controls, a result that needs replication.\nWhen compared to intermittent diazepam, intermittent oral melatonin did not significantly reduce seizures at six months (RR 0.45, 95% CI 0.18 to 1.15; 1 study, 60 participants; very-low certainty evidence).\nWhen compared to placebo, intermittent oral levetiracetam significantly reduced recurrent seizures at 12 months (RR 0.27, 95% CI 0.15 to 0.52; 1 study, 115 participants; very low-certainty evidence).\nThe recording of adverse effects was variable. Two studies reported lower comprehension scores in phenobarbital-treated children. Adverse effects were recorded in up to 30% of children in the phenobarbital-treated groups and 36% in benzodiazepine-treated groups. We found evidence of publication bias in the meta-analyses of comparisons for phenobarbital versus placebo (seven studies) at 12 months but not at six months (six studies); and valproate versus placebo (four studies) at 12 months. There were too few studies to identify publication bias for the other comparisons.\nThe methodological quality of most of the included studies was low or very low. Methods of randomisation and allocation concealment often did not meet current standards, and 'treatment versus no treatment' was more commonly seen than 'treatment versus placebo', leading to obvious risks of bias.\nAuthors' conclusions\nWe found reduced recurrence rates for intermittent diazepam and continuous phenobarbital, with adverse effects in up to 30% of children. 
The apparent benefit for clobazam treatment in one trial needs to be replicated. Levetiracetam also shows benefit with a good safety profile; however, further study is required. Given the benign nature of recurrent febrile seizures, and the high prevalence of adverse effects of these drugs, parents and families should be supported with adequate contact details of medical services and information on recurrence, first aid management, and, most importantly, the benign nature of the phenomenon.\n\nGiven the provided abstract, please respond to the following query: \"Objective\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to know in how many children these drugs would prevent a recurrence of febrile seizures or cause unwanted effects."
}
]
|
query-laysum | 5557 | [
{
"role": "user",
"content": "Abstract: Background\nWe know that the brain damage resulting from traumatic and other insults is not due solely to the direct consequences of the primary injury. A significant and potentially preventable contribution to the overall morbidity arises from secondary hypoxic-ischaemic damage. Brain swelling accompanied by raised intracranial pressure (ICP) prevents adequate cerebral perfusion with well-oxygenated blood.\nDetection of raised ICP could be useful in alerting clinicians to the need to improve cerebral perfusion, with consequent reductions in brain injury.\nObjectives\nTo determine whether routine ICP monitoring in severe coma of any cause reduces the risk of all-cause mortality or severe disability at final follow-up.\nSearch methods\nWe searched the Cochrane Injuries Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (OvidSP), EMBASE (OvidSP), CINAHL Plus, ISI Web of Science (SCI-EXPANDED & CPCI-S), clinical trials registries and reference lists. We ran the most recent search on 22 May 2015.\nSelection criteria\nAll randomised controlled studies of real-time ICP monitoring by invasive or semi-invasive means in acute coma (traumatic or non-traumatic aetiology) versus clinical care without ICP monitoring (that is, guided only by clinical or radiological inference of the presence of raised ICP).\nData collection and analysis\nTwo authors (ET and RF) worked independently to identify the one study that met inclusion criteria. JR and RF independently extracted data and assessed risk of bias. We contacted study authors for additional information, including details of methods and outcome data.\nMain results\nOne randomized controlled trial (RCT) meeting the selection criteria has been identified to date.\nThe included study had 324 participants. We judged risk of bias to be low for all categories except blinding of participants and personnel, which is not feasible for this intervention. There were few missing data, and we analysed all on an intention-to-treat basis.\nParticipants could be 13 years of age or older (mean age of sample 29; range 22 to 44), and all had severe traumatic brain injury, mostly due to traffic incidents. All were receiving care within intensive care units (ICUs) at one of six hospitals in either Bolivia or Ecuador. Investigators followed up 92% of participants for six months or until death. The trial excluded patients with a Glasgow Coma Score (GCS) equal to three and fixed dilated pupils on admission on the basis that they had sustained brain injury of an unsalvageable severity.\nThe study compared people managed using either an intracranial monitor or non-invasive monitoring (imaging and clinical examination) to identify potentially harmful raised intracranial pressure. Both study groups used imaging and clinical examination measures.\nMortality at six months was 56/144 (39%) in the ICP-monitored group and 67/153 (44%) in the non-invasive group.\nUnfavourable outcome (defined as death or moderate to severe disability at six months) as assessed by the extended Glasgow Outcome Scale (GOS-E) was 80/144 (56%) in the ICP-monitored group and 93/153 (61%) in the non-invasive group.\nSix percent of participants in the ICP monitoring group had complications related to the monitoring, none of which met criteria for being a serious adverse event. 
There were no complications relating to the non-invasive group.\nOther complications and adverse events were comparable between treatment groups, 70/157 (45%) in the ICP-monitored group and 76/167 (46%) in the non-invasive group.\nLate mortality in both the monitored and non-invasive groups was high, with 35% of deaths occurring > 14 days after injury. The authors comment that this high late mortality may reflect inadequacies in post-ICU services for disabled survivors requiring specialist rehabilitation care.\nAuthors' conclusions\nThe data from the single RCT studying the role of routine ICP monitoring in acute traumatic coma fails to provide evidence to support the intervention.\nResearch in this area is complicated by the fact that RCTs necessarily assess the combined impact of measurement of ICP with the clinical management decisions made in light of this data. Future studies will need to assess the added value of ICP data alongside other information from the multimodal monitoring typically performed in intensive care unit settings. Additionally, even within traumatically acquired brain injury (TBI), there is great heterogeneity in mechanisms, distribution, location and magnitude of injury, and studies within more homogeneous subgroups are likely to be more informative.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The brain is situated in a rigid box (the skull) that cannot expand, so normal swelling from injury cannot occur. When brain swelling does occur, pressure inside the skull rises. This makes it harder for the heart to pump the oxygen-rich blood into the brain needed for recovery. If treating physicians cannot control swelling, the lack of blood supply to the swollen brain can cause further brain damage. Efforts to avoid this damage can include regular monitoring of the pressure inside the skull (intracranial). There are different ways to monitor pressure. One commonly used method is to insert a small probe into the skull. But whenever something is put into the skull, there is a chance it may cause bleeding or an infection."
}
]
|