dataset (stringclasses, 1 value) | id (stringlengths, 1-5) | messages (listlengths, 2-2) |
---|---|---|
query-laysum | 2226 | [
{
"role": "user",
"content": "Abstract: Background\nPersistent postural-perceptual dizziness (PPPD) is a chronic balance disorder, which is characterised by subjective unsteadiness or dizziness that is worse on standing and with visual stimulation. The condition was only recently defined and therefore the prevalence is currently unknown. However, it is likely to include a considerable number of people with chronic balance problems. The symptoms can be debilitating and have a profound impact on quality of life. At present, little is known about the optimal way to treat this condition. A variety of medications may be used, as well as other treatments, such as vestibular rehabilitation.\nObjectives\nTo evaluate the benefits and harms of pharmacological interventions for persistent postural-perceptual dizziness (PPPD).\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 21 November 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with PPPD, which compared selective serotonin reuptake inhibitors (SSRIs) or serotonin and norepinephrine reuptake inhibitors (SNRIs) with either placebo or no treatment. We excluded studies that did not use the Bárány Society criteria to diagnose PPPD and studies that followed up participants for less than three months.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vestibular symptoms (assessed as a dichotomous outcome - improved or not improved), 2) change in vestibular symptoms (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) generic health-related quality of life and 6) other adverse effects. We considered outcomes reported at three time points: 3 to < 6 months, 6 to ≤ 12 months and > 12 months. We planned to use GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe identified no studies that met our inclusion criteria.\nAuthors' conclusions\nAt present, there is no evidence from placebo-controlled randomised trials regarding pharmacological treatments - specifically SSRIs and SNRIs - for PPPD. Consequently, there is great uncertainty over the use of these treatments for this condition. Further work is needed to establish whether any treatments are effective at improving the symptoms of PPPD, and whether their use is associated with any adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"What is PPPD?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "PPPD stands for persistent postural-perceptual dizziness. People with this condition have symptoms of dizziness or unsteadiness, which are worse when standing up or moving around, or with rich visual stimulation (from complicated patterns or busy moving images)."
}
]
|
query-laysum | 2227 | [
{
"role": "user",
"content": "Abstract: Background\nAbdominal aortic graft infections are a major complication following abdominal aortic aneurysm surgery, with high morbidity and mortality rates. They can be treated surgically or conservatively using medical management. The two most common surgical techniques are in situ replacement of the graft and extra-anatomical bypass. Medical management most commonly consists of a course of long-term antibiotics. There is currently no consensus on which intervention (extra-anatomical bypass, in situ replacement, or medical) is the most effective in managing abdominal aortic graft infections. Whilst in emergency or complex situations such as graft rupture surgical management is the only option, in non-emergency situations it is often personal preference that influences the clinician's decision-making.\nObjectives\nTo assess and compare the effects of surgical and medical interventions for abdominal aortic graft infections.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL databases and WHO ICTRP and ClinicalTrials.gov trials registers to 2 December 2019. We also reviewed the bibliographies of the studies identified by the search and contacted specialists in the field and study authors to request information on any possible unpublished data.\nSelection criteria\nWe aimed to include all randomised controlled trials that used surgical or medical interventions to treat abdominal aortic graft infections. The definitions of abdominal aortic graft infections were accepted as presented in the individual studies, and included secondary infection due to aortoenteric fistula. We excluded studies presenting data on prosthetic graft infections in general, unless data specific to abdominal aortic graft infections could be isolated.\nData collection and analysis\nTwo review authors independently assessed all studies identified by the search. We planned to independently assess risk of bias of the included trials and to evaluate the quality of the evidence using the GRADE approach. Our main outcomes were overall mortality, amputation, graft re-infection, overall graft-related complications, graft-related mortality, acute limb ischaemia, and re-intervention.\nMain results\nWe identified no randomised controlled trials to conduct meta-analysis.\nAuthors' conclusions\nThere is currently insufficient evidence to draw conclusions to support any treatment over the other. Multicentre clinical trials are required to compare different treatments for the condition.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Abdominal aortic graft infections are a serious complication following aortic aneurysm surgery. They can be managed in two ways: either with surgery or medical management. Surgery is currently the mainstay of treatment and is usually performed by two methods, one of which involves removing and replacing the infected graft, and the other bypassing the infected graft. Medical management commonly involves a long-term course of antibiotics. Medical management avoids the risk of potential complications of surgery, but requires use of antibiotics for several months, or even lifelong. There is currently no agreement on which intervention (surgical or medical) is the most effective in managing abdominal aortic graft infections. In emergency or complex situations such as graft rupture, surgical management is the only option, whilst for those unfit for surgery, medical management is the preferred option. However, in non-emergency situations, it is often personal preference that influences the healthcare provider's decision-making. This review aimed to assess and compare the effects of surgical and medical interventions for abdominal aortic graft infections."
}
]
|
query-laysum | 2228 | [
{
"role": "user",
"content": "Abstract: Background\nVasomotor symptoms, such as hot flushes and night sweats, are very common during the menopausal transition. Hormone therapy has traditionally been used as a highly effective treatment, but concerns about increased risk of some chronic diseases have markedly increased the interest of women in alternative treatments. Some of the most popular of these treatments are foods or supplements enriched with phytoestrogens—plant-derived chemicals that have estrogenic action.\nObjectives\nTo assess the efficacy, safety and acceptability of food products, extracts and dietary supplements containing high levels of phytoestrogens when compared with no treatment, placebo or hormone therapy for the amelioration of vasomotor menopausal symptoms (such as hot flushes and night sweats) in perimenopausal and postmenopausal women.\nSearch methods\nSearches targeted the following electronic databases: the Cochrane Menstrual Disorders and Subfertility Group Specialised Register of randomised trials (29 July 2013), the Cochrane Register of Controlled Trials (CENTRAL; 29 July 2013), MEDLINE (inception to 29 July 2013), EMBASE (inception to 29 July 2013), AMED (1985 to 29 July 2013), PsycINFO (inception to 29 July 2013) and CINAHL (inception to 29 July 2013). Attempts were made to access grey literature by sending letters to pharmaceutical companies and performing searches of ongoing trial registers. Reference lists of included trials were also searched.\nSelection criteria\nStudies were included if they were randomised, included perimenopausal or postmenopausal participants with vasomotor symptoms (hot flushes or night sweats), lasted at least 12 weeks and provided interventions such as foods or supplements with high levels of phytoestrogens (not combined with other herbal treatments). Trials that included women who had breast cancer or a history of breast cancer were excluded.\nData collection and analysis\nSelection of trials, extraction of data and assessment of quality were undertaken by at least two review authors. Most trials were too dissimilar for their results to be combined in a meta-analysis, so these findings are provided in narrative 'Summary of results' tables. Studies were grouped into broad categories: dietary soy, soy extracts, red clover extracts, genistein extracts and other types of phytoestrogens. Five trials used Promensil, a red clover extract; results of these trials were combined in a meta-analysis, and summary effect measures were calculated.\nMain results\nA total of 43 randomised controlled trials (4,364 participants) were included in this review. Very few trials provided data suitable for inclusion in a meta-analysis. Among the five trials that yielded data assessing the daily frequency of hot flushes suitable for pooling, no significant difference overall was noted in the incidence of hot flushes between participants taking Promensil (a red clover extract) and those given placebo (mean difference (MD) -0.93, 95% confidence interval (CI) -1.95 to 0.10, I2 = 31%). No evidence indicated a difference in percentage reduction in hot flushes in two trials between Promensil and placebo (MD 20.15, 95% CI -12.08 to 52.38, I2 = 82%). Four trials that were not combined in meta-analyses suggested that extracts with high (> 30 mg/d) levels of genistein consistently reduced the frequency of hot flushes. 
Individual results from the remaining trials were compared in broad subgroups such as dietary soy, soy extracts and other types of phytoestrogens that could not be combined. Some of these trials found that phytoestrogen treatments alleviated the frequency and severity of hot flushes and night sweats when compared with placebo, but many trials were small and were determined to be at high risk of bias. A strong placebo effect was noted in most trials, with a reduction in frequency ranging from 1% to 59% with placebo. No indication suggested that discrepant results were due to the amount of isoflavone in the active treatment arm, the severity of vasomotor symptoms or trial quality factors. Also, no evidence indicated that these treatments caused oestrogenic stimulation of the endometrium or the vagina or other adverse effects when used for up to two years.\nAuthors' conclusions\nNo conclusive evidence shows that phytoestrogen supplements effectively reduce the frequency or severity of hot flushes and night sweats in perimenopausal or postmenopausal women, although benefits derived from concentrates of genistein should be further investigated.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Hormone therapy is an effective treatment for controlling the most common menopausal symptoms—hot flushes and night sweats. However, it is now recommended only in low doses given for the shortest possible time because of concerns about increased risk of some chronic diseases. Many women have started to use therapies that they perceive as 'natural' and safe, but they often do not have good information about the potential benefits and risks. Some of these therapies contain phytoestrogens—a group of plant-derived chemicals that are thought to prevent or treat disease. Phytoestrogens are found in a wide variety of plants, some of which are foods, particularly soy, alfalfa and red clover."
}
]
|
query-laysum | 2229 | [
{
"role": "user",
"content": "Abstract: Background\nPulmonary arterial hypertension is a devastating disease that leads to right heart failure and premature death. Endothelin receptor antagonists have shown efficacy in the treatment of pulmonary arterial hypertension.\nObjectives\nTo evaluate the efficacy of endothelin receptor antagonists (ERAs) in pulmonary arterial hypertension.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and the reference sections of retrieved articles. The searches are current as of 4 November 2020.\nSelection criteria\nWe included randomised trials and quasi-randomised trials involving participants with pulmonary arterial hypertension.\nData collection and analysis\nTwo of five review authors selected studies, extracted data and assessed study quality according to established criteria. We used standard methods expected by Cochrane. The primary outcomes were exercise capacity (six-minute walk distance, 6MWD), World Health Organization (WHO) or New York Heart Association (NYHA) functional class, Borg dyspnoea scores and dyspnoea-fatigue ratings, and mortality.\nMain results\nWe included 17 randomised controlled trials involving a total of 3322 participants. Most trials were of relatively short duration (12 weeks to six months). Sixteen trials were placebo-controlled, and of these nine investigated a non-selective ERA and seven a selective ERA.\nWe evaluated two comparisons in the review: ERA versus placebo and ERA versus phosphodiesterase type 5 (PDE5) inhibitor. The abstract focuses on the placebo-controlled trials only and presents the pooled results of selective and non-selective ERAs.\nAfter treatment, participants receiving ERAs could probably walk on average 25.06 m (95% confidence interval (CI) 17.13 to 32.99 m; 2739 participants; 14 studies; I2 = 34%, moderate-certainty evidence) further than those receiving placebo in a 6MWD. Endothelin receptor antagonists probably improved more participants' WHO functional class (odds ratio (OR) 1.41, 95% CI 1.16 to 1.70; participants = 3060; studies = 15; I2 = 5%, moderate-certainty evidence) and probably lowered the odds of functional class deterioration (OR 0.43, 95% CI 0.26 to 0.72; participants = 2347; studies = 13; I2 = 40%, moderate-certainty evidence) compared with placebo. There may be a reduction in mortality with ERAs (OR 0.78, 95% CI 0.58, 1.07; 2889 participants; 12 studies; I2 = 0%, low-certainty evidence), and pooled data suggest that ERAs probably improve cardiopulmonary haemodynamics and may reduce Borg dyspnoea score in symptomatic patients. Hepatic toxicity was not common, but may be increased by ERA treatment from 37 to 67 (95% CI 34 to 130) per 1000 over 25 weeks of treatment (OR 1.88, 95% CI 0.91 to 3.90; moderate-certainty evidence). 
Although ERAs were well tolerated in this population, several cases of irreversible liver failure caused by sitaxsentan have been reported, which led the licence holder for sitaxsentan to withdraw the product from all markets worldwide.\nAs planned, we performed subgroup analyses comparing selective and non-selective ERAs, and with the exception of mean pulmonary artery pressure, did not detect any clear subgroup differences for any outcome.\nAuthors' conclusions\nFor people with pulmonary arterial hypertension with WHO functional class II and III, endothelin receptor antagonists probably increase exercise capacity, improve WHO functional class, prevent WHO functional class deterioration, result in favourable changes in cardiopulmonary haemodynamic variables compared with placebo. However, they are less effective in reducing dyspnoea and mortality. The efficacy data were strongest in those with idiopathic pulmonary hypertension. The irreversible liver failure caused by sitaxsentan and its withdrawal from global markets emphasise the importance of hepatic monitoring in people treated with ERAs. The question of the effects of ERAs on pulmonary arterial hypertension has now likely been answered. The combined use of ERAs and phosphodiesterase inhibitors may provide more benefit in pulmonary arterial hypertension; however, this needs to be confirmed in future studies.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Endothelin receptor antagonists probably increase exercise capacity, improve World Health Organization functional class (a measurement of how severe a person's pulmonary hypertension symptoms are), and may improve death rates and symptoms in people with PAH; however they may also increase the risk of liver damage, although this was rare. The question of the effects of endothelin receptor antagonists on PAH has now likely been answered."
}
]
|
query-laysum | 2230 | [
{
"role": "user",
"content": "Abstract: Background\nFoot wounds in people with diabetes mellitus (DM) are a common and serious global health issue. People with DM are prone to developing foot ulcers and, if these do not heal, they may also undergo foot amputation surgery resulting in postoperative wounds. Negative pressure wound therapy (NPWT) is a technology that is currently used widely in wound care. NPWT involves the application of a wound dressing attached to a vacuum suction machine. A carefully controlled negative pressure (or vacuum) sucks wound and tissue fluid away from the treated area into a canister. A clear and current overview of current evidence is required to facilitate decision-making regarding its use.\nObjectives\nTo assess the effects of negative pressure wound therapy compared with standard care or other therapies in the treatment of foot wounds in people with DM in any care setting.\nSearch methods\nIn January 2018, for this first update of this review, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies, reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting. We identified six additional studies for inclusion in the review.\nSelection criteria\nPublished or unpublished randomised controlled trials (RCTs) that evaluated the effects of any brand of NPWT in the treatment of foot wounds in people with DM, irrespective of date or language of publication. Particular effort was made to identify unpublished studies.\nData collection and analysis\nTwo review authors independently performed study selection, risk of bias assessment and data extraction. Initial disagreements were resolved by discussion, or by including a third review author when necessary. We presented and analysed data separately for foot ulcers and postoperative wounds.\nMain results\nEleven RCTs (972 participants) met the inclusion criteria. Study sample sizes ranged from 15 to 341 participants. One study had three arms, which were all included in the review. The remaining 10 studies had two arms. Two studies focused on postamputation wounds and all other studies included foot ulcers in people with DM. Ten studies compared NPWT with dressings; and one study compared NPWT delivered at 75 mmHg with NPWT delivered at 125 mmHg. Our primary outcome measures were the number of wounds healed and time to wound healing.\nNPWT compared with dressings for postoperative wounds\nTwo studies (292 participants) compared NPWT with moist wound dressings in postoperative wounds (postamputation wounds). Only one study specified a follow-up time, which was 16 weeks. This study (162 participants) reported an increased number of healed wounds in the NPWT group compared with the dressings group (risk ratio (RR) 1.44, 95% confidence interval (CI) 1.03 to 2.01; low-certainty evidence, downgraded for risk of bias and imprecision). This study also reported that median time to healing was 21 days shorter with NPWT compared with moist dressings (hazard ratio (HR) calculated by review authors 1.91, 95% CI 1.21 to 2.99; low-certainty evidence, downgraded for risk of bias and imprecision). 
Data from the two studies suggest that it is uncertain whether there is a difference between groups in amputation risk (RR 0.38, 95% CI 0.14 to 1.02; 292 participants; very low-certainty evidence, downgraded once for risk of bias and twice for imprecision).\nNPWT compared with dressings for foot ulcers\nThere were eight studies (640 participants) in this analysis and follow-up times varied between studies. Six studies (513 participants) reported the proportion of wounds healed and data could be pooled for five studies. Pooled data (486 participants) suggest that NPWT may increase the number of healed wounds compared with dressings (RR 1.40, 95% CI 1.14 to 1.72; I² = 0%; low-certainty evidence, downgraded once for risk of bias and once for imprecision). Three studies assessed time to healing, but only one study reported usable data. This study reported that NPWT reduced the time to healing compared with dressings (hazard ratio (HR) calculated by review authors 1.82, 95% CI 1.27 to 2.60; 341 participants; low-certainty evidence, downgraded once for risk of bias and once for imprecision).\nData from three studies (441 participants) suggest that people allocated to NPWT may be at reduced risk of amputation compared with people allocated to dressings (RR 0.33, 95% CI 0.15 to 0.70; I² = 0%; low-certainty evidence; downgraded once for risk of bias and once for imprecision).\nLow-pressure compared with high-pressure NPWT for foot ulcers\nOne study (40 participants) compared NPWT 75 mmHg and NPWT 125 mmHg. Follow-up time was four weeks. There were no data on primary outcomes. There was no clear difference in the number of wounds closed or covered with surgery between groups (RR 0.83, 95% CI 0.47 to 1.47; very low-certainty evidence, downgraded once for risk of bias and twice for serious imprecision) and adverse events (RR 1.50, 95% CI 0.28 to 8.04; very low-certainty evidence, downgraded once for risk of bias and twice for serious imprecision).\nAuthors' conclusions\nThere is low-certainty evidence to suggest that NPWT, when compared with wound dressings, may increase the proportion of wounds healed and reduce the time to healing for postoperative foot wounds and ulcers of the foot in people with DM. For the comparisons of different pressures of NPWT for treating foot ulcers in people with DM, it is uncertain whether there is a difference in the number of wounds closed or covered with surgery, and adverse events. None of the included studies provided evidence on time to closure or coverage surgery, health-related quality of life or cost-effectiveness. The limitations in current RCT evidence suggest that further trials are required to reduce uncertainty around decision-making regarding the use of NPWT to treat foot wounds in people with DM.\n\nGiven the provided abstract, please respond to the following query: \"What did the review study?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Diabetes mellitus is a common condition that leads to high blood glucose (blood sugar) concentrations, with around 2.8 million people affected in the UK (approximately 4.3% of the population). Some people with diabetes can develop ulcers on their feet. These wounds can take a long time to heal, they can be painful and become infected. Ulceration of the foot in people with diabetes can also lead to a higher risk of amputation of parts of the foot or leg. Generally, people with diabetes are at a higher risk of lower-limb amputation than people without diabetes.\nNPWT is a treatment currently being used for wounds including leg ulcers. NPWT involves the application of a wound dressing attached to a vacuum suction machine which sucks any wound and tissue fluid away from the treated area into a canister. Worldwide, the use of NPWT is increasing. However, it is expensive compared with wound treatments such as dressings.\nWe wanted to find out if NPWT could help foot wounds in people with diabetes to heal more quickly and effectively. We wanted to know if people treated with NPWT experienced any side effects. We were also interested in the impact of NPWT on people's quality of life."
}
]
|
query-laysum | 2231 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review published in 2015.\nEpilepsy is a chronic neurological disorder, characterised by recurring, unprovoked seizures. Vagus nerve stimulation (VNS) is a neuromodulatory treatment that is used as an adjunctive therapy for treating people with drug-resistant epilepsy. VNS consists of chronic, intermittent electrical stimulation of the vagus nerve, delivered by a programmable pulse generator.\nObjectives\nTo evaluate the efficacy and tolerability of VNS when used as add-on treatment for people with drug-resistant focal epilepsy.\nSearch methods\nFor this update, we searched the Cochrane Register of Studies (CRS), and MEDLINE Ovid on 3 March 2022. We imposed no language restrictions. CRS Web includes randomised or quasi-randomised controlled trials from the Specialised Registers of Cochrane Review Groups, including Epilepsy, CENTRAL, PubMed, Embase, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform.\nSelection criteria\nWe considered parallel or cross-over, randomised, double-blind, controlled trials of VNS as add-on treatment, which compared high- and low-level stimulation (including three different stimulation paradigms: rapid, mild, and slow duty-cycle), and VNS stimulation versus no stimulation, or a different intervention. We considered adults or children with drug-resistant focal seizures who were either not eligible for surgery, or who had failed surgery.\nData collection and analysis\nWe followed standard Cochrane methods, assessing the following outcomes:\n1. 50% or greater reduction in seizure frequency\n2. Treatment withdrawal (any reason)\n3. Adverse effects\n4. Quality of life (QoL)\n5. Cognition\n6. Mood\nMain results\nWe did not identify any new studies for this update, therefore, the conclusions are unchanged.\nWe included the five randomised controlled trials (RCT) from the last update, with a total of 439 participants. The baseline phase ranged from 4 to 12 weeks, and double-blind treatment phases from 12 to 20 weeks. We rated two studies at an overall low risk of bias, and three at an overall unclear risk of bias, due to lack of reported information about study design. Effective blinding of studies of VNS is difficult, due to the frequency of stimulation-related side effects, such as voice alteration.\nThe risk ratio (RR) for 50% or greater reduction in seizure frequency was 1.73 (95% confidence interval (CI) 1.13 to 2.64; 4 RCTs, 373 participants; moderate-certainty evidence), showing that high frequency VNS was over one and a half times more effective than low frequency VNS.\nThe RR for treatment withdrawal was 2.56 (95% CI 0.51 to 12.71; 4 RCTs, 375 participants; low-certainty evidence). Results for the top five reported adverse events were: hoarseness RR 2.17 (99% CI 1.49 to 3.17; 3 RCTs, 330 participants; moderate-certainty evidence); cough RR 1.09 (99% CI 0.74 to 1.62; 3 RCTs, 334 participants; moderate-certainty evidence); dyspnoea RR 2.45 (99% CI 1.07 to 5.60; 3 RCTs, 312 participants; low-certainty evidence); pain RR 1.01 (99% CI 0.60 to 1.68; 2 RCTs; 312 participants; moderate-certainty evidence); paraesthesia 0.78 (99% CI 0.39 to 1.53; 2 RCTs, 312 participants; moderate-certainty evidence).\nResults from two studies (312 participants) showed that a small number of favourable QOL effects were associated with VNS stimulation, but results were inconclusive between high- and low-level stimulation groups. 
One study (198 participants) found inconclusive results between high- and low-level stimulation for cognition on all measures used. One study (114 participants) found the majority of participants showed an improvement in mood on the Montgomery–Åsberg Depression Rating Scale compared to baseline, but results between high- and low-level stimulation were inconclusive.\nWe found no important heterogeneity between studies for any of the outcomes.\nAuthors' conclusions\nVNS for focal seizures appears to be an effective and well-tolerated treatment. Results of the overall efficacy analysis show that high-level stimulation reduced the frequency of seizures better than low-level stimulation. There were very few withdrawals, which suggests that VNS is well tolerated.\nAdverse effects associated with implantation and stimulation were primarily hoarseness, cough, dyspnoea, pain, paraesthesia, nausea, and headache, with hoarseness and dyspnoea more likely to occur with high-level stimulation than low-level stimulation.\nHowever, the evidence for these outcomes is limited, and of moderate to low certainty.\nFurther high-quality research is needed to fully evaluate the efficacy and tolerability of VNS for drug-resistant focal seizures.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 3 March 2022."
}
]
|
query-laysum | 2232 | [
{
"role": "user",
"content": "Abstract: Background\nSelf-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving a wide geographic coverage of the target population, dealing with sensitive topics and are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected.\nObjectives\nTo assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects.\nSearch methods\nWe searched MEDLINE, EMBASE, PsycINFO, IEEEXplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform. We also searched the grey literature in OpenGrey, Mobile Active and ProQuest Dissertation & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches up to 12 and 13 April 2015.\nSelection criteria\nWe included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaire, both validated and non-validated. We also included data offered by both healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondent's adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies that were published in 2007 or after, as devices that became available during this time are compatible with the mobile operating system (OS) framework that focuses on apps.\nData collection and analysis\nTwo review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap. They then compared their forms to reach consensus. Through an initial systematic mapping on the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). 
We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate due to high levels of clinical and methodological diversity. We reported our findings for each outcome according to the setting in which the studies were conducted.\nMain results\nWe included 14 studies (15 records) with a total of 2275 participants; although we included only 2272 participants in the final analyses as there were missing data for three participants from one included study.\nRegarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and that all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas the other study did not find a significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were only reported in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire. Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates.\nAuthors' conclusions\nOur results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.\n\nGiven the provided abstract, please respond to the following query: \"Objective\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this Cochrane review, we assessed the impact that using apps to deliver a survey can have on various aspects of the quality of responses. These include response rates, data accuracy, data completeness, time taken to complete a survey questionnaire, and acceptability to respondents."
}
]
|
query-laysum | 2233 | [
{
"role": "user",
"content": "Abstract: Background\nOvarian cancer is the seventh most frequent cancer diagnosis worldwide, and the eighth leading cause of cancer mortality. Epithelial ovarian cancer is the most common kind, accounting for 90% of cases. First-line therapy for women with epithelial ovarian cancer consists of a combination of cytoreductive surgery and platinum and taxane-based chemotherapy. However, more than 50% of women with epithelial ovarian cancer will experience a relapse and require further chemotherapy and at some point develop resistance to platinum-based drugs.\nCurrently, guidance on the use of most chemotherapy drugs, including taxanes, is unclear for women whose epithelial ovarian cancer has recurred. Paclitaxel, topotecan, pegylated liposomal doxorubicin hydrochloride, trabectedin and gemcitabine are all licensed for use in the UK at the discretion of clinicians, following discussion with the women as to potential adverse effects. Taxanes can be given in once-weekly regimens (at a lower dose) or three-weekly regimens (at a higher dose), which may have differences in the severity of side effects and effectiveness. As relapsed disease suggests incurable disease, it is all the more important to consider side effects and the impact of treatment schedules, as well as quality of life, and not only the life-prolonging effects of treatment.\nObjectives\nTo assess the efficacy and toxicity of different taxane monotherapy regimens for women with recurrent epithelial ovarian, tubal or primary peritoneal cancer.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase, up to 22 March 2022. Other related databases and trial registries were searched as well as grey literature and no additional studies were identified. A total of 1500 records were identified.\nSelection criteria\nWe included randomised controlled trials of taxane monotherapy for adult women diagnosed with recurrent epithelial ovarian, tubal or primary peritoneal cancer, previously treated with platinum-based chemotherapy. We included trials comparing two or more taxane monotherapy regimens. Participants could be experiencing their first recurrence of disease or any line of recurrence.\nData collection and analysis\nTwo review authors screened, independently assessed studies, and extracted data from the included studies. The clinical outcomes we examined were overall survival, response rate, progression-free survival, neurotoxicity, neutropenia, alopecia, and quality of life. We performed statistical analyses using fixed-effect and random-effects models following standard Cochrane methodology. We rated the certainty of evidence according to the GRADE approach.\nMain results\nOur literature search yielded 1500 records of 1466 studies; no additional studies were identified by searching grey literature or handsearching. We uploaded the search results into Covidence. After the exclusion of 92 duplicates, we screened titles and abstracts of 1374 records. Of these, we identified 24 studies for full-text screening. We included four parallel-group randomised controlled trials (RCTs). All trials were multicentred and conducted in a hospital setting. The studies included 981 eligible participants with recurrent epithelial ovarian cancer, tubal or primary peritoneal cancer with a median age ranging between 56 to 62 years of age. All participants had a WHO (World Health Organization) performance status of between 0 to 2. The proportion of participants with serous histology ranged between 56% to 85%. 
Participants included women who had platinum-sensitive (71%) and platinum-resistant (29%) relapse. Some participants were taxane pre-treated (5.6%), whilst the majority were taxane-naive (94.4%). No studies were classified as having a high risk of bias for any of the domains in the Cochrane risk of bias tool.\nWe found that there may be little or no difference in overall survival (OS) between weekly paclitaxel and three-weekly paclitaxel, but the evidence is very uncertain (risk ratio (RR) of 0.94, 95% confidence interval (CI) 0.66 to 1.33, two studies, 263 participants, very low-certainty evidence). Similarly, there may be little or no difference in response rate (RR of 1.07, 95% CI 0.78 to 1.48, two studies, 263 participants, very low-certainty evidence) and progression-free survival (PFS) (RR of 0.83, 95% CI 0.46 to 1.52, two studies, 263 participants, very low-certainty evidence) between weekly and three-weekly paclitaxel, but the evidence is very uncertain.\nWe found differences in the chemotherapy-associated adverse events between the weekly and three-weekly paclitaxel regimens. The weekly paclitaxel regimen may result in a reduction in neutropenia (RR 0.51, 95% CI 0.27 to 0.95, two studies, 260 participants, low-certainty evidence) and alopecia (RR 0.58, 95% CI 0.46 to 0.73, one study, 205 participants, low-certainty evidence). There may be little or no difference in neurotoxicity, but the evidence was very low-certainty and we cannot exclude an effect (RR 0.53, 95% CI 0.19 to 1.45, two studies, 260 participants).\nWhen examining the effect of paclitaxel dosage in the three-weekly regimen, the 250 mg/m2 paclitaxel regimen probably causes more neurotoxicity compared to the 175 mg/m2 regimen (RR 0.41, 95% CI 0.21 to 0.80, one study, 330 participants, moderate-certainty evidence).\nQuality-of-life data were not extractable from any of the included studies.\nAuthors' conclusions\nFewer people may experience neutropenia when given weekly rather than three-weekly paclitaxel (low-certainty evidence), although it may make little or no difference to the risk of developing neurotoxicity (very low-certainty evidence). This is based on the participants receiving lower doses of drug more often. However, our confidence in this result is low and the true effect may be substantially different from the estimate of the effect. Weekly paclitaxel probably reduces the risk of alopecia, although the rates in both arms were high (46% versus 79%) (low-certainty evidence). A change to weekly from three-weekly chemotherapy could be considered to reduce the likelihood of toxicity, as it may have little or no negative impact on response rate (very low-certainty evidence), PFS (very low-certainty evidence) or OS (very low-certainty evidence).\nThree-weekly paclitaxel, given at a dose of 175 mg/m2 compared to a higher dose, probably reduces the risk of neurotoxicity. We are moderately confident in this result; the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. A change to 175 mg/m2 paclitaxel (from a higher dose), if a three-weekly regimen is used, probably has little or no negative impact on PFS or OS (very low-certainty evidence).\n\nGiven the provided abstract, please respond to the following query: \"Objectives\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the benefits and side effects of different treatment intervals and different doses of taxane chemotherapy for women with relapsed epithelial ovarian cancer."
}
]
|
query-laysum | 2234 | [
{
"role": "user",
"content": "Abstract: Background\nThe choice of the appropriate perioperative thromboprophylaxis for people with cancer depends on the relative benefits and harms of different anticoagulants.\nObjectives\nTo systematically review the evidence for the relative efficacy and safety of anticoagulants for perioperative thromboprophylaxis in people with cancer.\nSearch methods\nThis update of the systematic review was based on the findings of a comprehensive literature search conducted on 14 June 2018 that included a major electronic search of Cochrane Central Register of Controlled Trials (CENTRAL, 2018, Issue 6), MEDLINE (Ovid), and Embase (Ovid); handsearching of conference proceedings; checking of references of included studies; searching for ongoing studies; and using the 'related citation' feature in PubMed.\nSelection criteria\nRandomized controlled trials (RCTs) that enrolled people with cancer undergoing a surgical intervention and assessed the effects of low-molecular weight heparin (LMWH) to unfractionated heparin (UFH) or to fondaparinux on mortality, deep venous thrombosis (DVT), pulmonary embolism (PE), bleeding outcomes, and thrombocytopenia.\nData collection and analysis\nUsing a standardized form, we extracted data in duplicate on study design, participants, interventions outcomes of interest, and risk of bias. Outcomes of interest included all-cause mortality, PE, symptomatic venous thromboembolism (VTE), asymptomatic DVT, major bleeding, minor bleeding, postphlebitic syndrome, health related quality of life, and thrombocytopenia. We assessed the certainty of evidence for each outcome using the GRADE approach (GRADE Handbook).\nMain results\nOf 7670 identified unique citations, we included 20 RCTs with 9771 randomized people with cancer receiving preoperative prophylactic anticoagulation. We identified seven reports for seven new RCTs for this update.\nThe meta-analyses did not conclusively rule out either a beneficial or harmful effect of LMWH compared with UFH for the following outcomes: mortality (risk ratio (RR) 0.82, 95% confidence interval (CI) 0.63 to 1.07; risk difference (RD) 9 fewer per 1000, 95% CI 19 fewer to 4 more; moderate-certainty evidence), PE (RR 0.49, 95% CI 0.17 to 1.47; RD 3 fewer per 1000, 95% CI 5 fewer to 3 more; moderate-certainty evidence), symptomatic DVT (RR 0.67, 95% CI 0.27 to 1.69; RD 3 fewer per 1000, 95% CI 7 fewer to 7 more; moderate-certainty evidence), asymptomatic DVT (RR 0.86, 95% CI 0.71 to 1.05; RD 11 fewer per 1000, 95% CI 23 fewer to 4 more; low-certainty evidence), major bleeding (RR 1.01, 95% CI 0.69 to 1.48; RD 0 fewer per 1000, 95% CI 10 fewer to 15 more; moderate-certainty evidence), minor bleeding (RR 1.01, 95% CI 0.76 to 1.33; RD 1 more per 1000, 95% CI 34 fewer to 47 more; moderate-certainty evidence), reoperation for bleeding (RR 0.93, 95% CI 0.57 to 1.50; RD 4 fewer per 1000, 95% CI 22 fewer to 26 more; moderate-certainty evidence), intraoperative transfusion (mean difference (MD) -35.36 mL, 95% CI -253.19 to 182.47; low-certainty evidence), postoperative transfusion (MD 190.03 mL, 95% CI -23.65 to 403.72; low-certainty evidence), and thrombocytopenia (RR 3.07, 95% CI 0.32 to 29.33; RD 6 more per 1000, 95% CI 2 fewer to 82 more; moderate-certainty evidence). LMWH was associated with lower incidence of wound hematoma (RR 0.70, 95% CI 0.54 to 0.92; RD 26 fewer per 1000, 95% CI 39 fewer to 7 fewer; moderate-certainty evidence). 
The meta-analyses found the following additional results: outcomes intraoperative blood loss (MD -6.75 mL, 95% CI -85.49 to 71.99; moderate-certainty evidence); and postoperative drain volume (MD 30.18 mL, 95% CI -36.26 to 96.62; moderate-certainty evidence).\nIn addition, the meta-analyses did not conclusively rule out either a beneficial or harmful effect of LMWH compared with fondaparinux for the following outcomes: any VTE (DVT or PE, or both; RR 2.51, 95% CI 0.89 to 7.03; RD 57 more per 1000, 95% CI 4 fewer to 228 more; low-certainty evidence), major bleeding (RR 0.74, 95% CI 0.45 to 1.23; RD 8 fewer per 1000, 95% CI 16 fewer to 7 more; low-certainty evidence), minor bleeding (RR 0.83, 95% CI 0.34 to 2.05; RD 8 fewer per 1000, 95% CI 33 fewer to 52 more; low-certainty evidence), thrombocytopenia (RR 0.35, 95% CI 0.04 to 3.30; RD 14 fewer per 1000, 95% CI 20 fewer to 48 more; low-certainty evidence), any PE (RR 3.13, 95% CI 0.13 to 74.64; RD 2 more per 1000, 95% CI 1 fewer to 78 more; low-certainty evidence) and postoperative drain volume (MD -20.00 mL, 95% CI -114.34 to 74.34; low-certainty evidence).\nAuthors' conclusions\nWe found no difference between perioperative thromboprophylaxis with LMWH versus UFH and LMWH compared with fondaparinux in their effects on mortality, thromboembolic outcomes, major bleeding, or minor bleeding in people with cancer. There was a lower incidence of wound hematoma with LMWH compared to UFH.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 20 studies that included 9771 people with cancer. The evidence did not identify any difference between the effects of LMWH and UFH on death, getting a blood clot, or bleeding. There was less bruising around the wound following the operation with LMWH compared with UFH. Fondaparinux may have reduced the risk of getting a blood clot."
}
]
|
query-laysum | 2235 | [
{
"role": "user",
"content": "Abstract: Background\nEmergency departments (EDs) are facing serious and significant issues in the delivery of effective and efficient care to patients. Acute assessment services have been implemented at many hospitals internationally to assist in maintaining patient flow for identified groups of patients attending the ED. Identifying the risks and benefits, and optimal configurations of these services may be beneficial to those wishing to utilise an acute assessment service to improve patient flow.\nObjectives\nTo assess the effects of acute assessment services on patient flow following attendance at a hospital ED.\nSearch methods\nWe searched MEDLINE, CENTRAL, Embase and two trials registers on 24 September 2022 to identify studies. No restrictions were imposed on publication year, publication type, or publication language.\nSelection criteria\nStudies eligible for inclusion were randomised trials and cluster-randomised trials with at least two intervention and two control sites. Participants were adults (as defined by study authors) receiving care either in the ED or the acute assessment service, where both were based in the hospital setting. The comparison was hospital-based acute assessment services with usual, ED-only care. The outcomes of this review were mortality at time point closest to 30 days, length of stay in the service (in minutes), and waiting time to see a doctor (in minutes).\nData collection and analysis\nWe followed the standard procedures of Cochrane Effective Practice and Organisation of Care for this review (https://epoc.cochrane.org/resources).\nMain results\nWe identified a total of 5754 records in the search. Following assessment of 3609 de-duplicated records, none were found to be eligible for inclusion in this review.\nAuthors' conclusions\nAt present there are no randomised controlled trials exploring the effects of acute assessment services on patient flow in hospital-based emergency departments compared to usual, ED-only care.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Studies published up to September 2022 were included in this review."
}
]
|
query-laysum | 2236 | [
{
"role": "user",
"content": "Abstract: Background\nDuring elective (planned) caesarean sections, some obstetricians routinely dilate the cervix intraoperatively, using sponge forceps, a finger, or other instruments, because the cervix of women not in labour may not be dilated, and this may cause obstruction of blood or lochia drainage. However, mechanical cervical dilatation during caesarean section may result in contamination by vaginal micro-organisms during dilatation, and increase the risk of infection or cervical trauma.\nObjectives\nTo determine the effects of mechanical dilatation of the cervix during elective caesarean section on postoperative morbidity.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) and reference lists of retrieved studies on 20 September 2017.\nSelection criteria\nWe included all randomised, quasi-randomised, and cluster-randomised controlled trials comparing intraoperative cervical dilatation using a finger, sponge forceps, or other instruments during elective caesarean section versus no mechanical dilatation.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We assessed the quality of the evidence using the GRADE approach.\nMain results\nWe included eight studies with a total of 2227 women undergoing elective caesarean section. Of these, 1097 underwent intraoperative cervical dilatation with a double-gloved index finger or Hegar dilator inserted into the cervical canal to dilate, and 1130 did not undergo intraoperative cervical dilatation. Six of the eight included trials had high risk of bias for some of the risk of bias domains.\nVery low-quality evidence suggested it was unclear whether cervical dilatation had any impact on postpartum haemorrhage (estimated blood loss greater than 1000 mL; risk ratio (RR) 1.97, 95% confidence interval (CI) 0.48 to 8.13; 5/205 versus 3/242; one study, 447 women).\nLow- or very low-quality evidence showed no clear difference for the need for blood transfusion (RR 3.54, 95% CI 0.37 to 33.79; two studies, 847 women); postoperative haemoglobin (mean difference (MD -0.05, 95% CI -0.15 to 0.06; three studies, 749 women), or haematocrit (MD 0.01%, 95% CI -0.18 to 0.20; one study, 400 women); the incidence of drop from baseline haemoglobin above 0.5 g/dL (RR 0.92, 95% CI 0.64 to 1.31; two studies, 722 women), or amount of haemoglobin drop (MD -0.01 g/dL, 95% -0.14 to 0.13; three studies, 796 women); the incidence of secondary postpartum haemorrhage within six weeks (RR 1.18, 95% CI 0.07 to 18.76; one study, 447 women); febrile morbidity (RR 1.18, 95% CI 0.76 to 1.85; seven studies, 2126 women); endometritis (RR 0.94, 95% CI 0.35 to 2.52; four studies, 1536 women); or uterine subinvolution (RR 0.34, 95% CI 0.08 to 1.36; two studies, 654 women); the results crossed the line of no effect for all of the outcomes. There were no data for cervical trauma.\nWe found a slight improvement with mechanical dilatation for these secondary outcomes, not prespecified in the protocol: mean blood loss, endometrial cavity thickness, retained products of conception, distortion of uterine incision, and healing ratio. The evidence for these outcomes was based on one or two studies. 
Cervical dilatation did not have a clear effect on these secondary outcomes, not prespecified in the protocol: wound infection, urinary tract infection, operative time, infectious morbidity, and integrity of uterine scar.\nAuthors' conclusions\nAt this time, the evidence does not support or refute the use of mechanical dilatation of the cervix during elective caesarean section for reducing postoperative morbidity.\nFurther large, well-designed studies are required to compare the effect of intraoperative mechanical dilatation of the cervix with no intraoperative mechanical cervical dilatation for reducing postoperative morbidity.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is uncertain whether cervical dilatation has any impact on reducing postoperative problems after caesarean section. This means there is insufficient evidence to encourage or discourage the use of mechanical dilatation of the cervix at elective caesarean section for reducing postoperative ill-health."
}
]
|
query-laysum | 2237 | [
{
"role": "user",
"content": "Abstract: Background\nOlder people with hip fractures are often malnourished at the time of fracture, and subsequently have poor food intake. This is an update of a Cochrane review first published in 2000, and previously updated in 2010.\nObjectives\nTo review the effects (benefits and harms) of nutritional interventions in older people recovering from hip fracture.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, CENTRAL, MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, Embase, CAB Abstracts, CINAHL, trial registers and reference lists. The search was last run in November 2015.\nSelection criteria\nRandomised and quasi-randomised controlled trials of nutritional interventions for people aged over 65 years with hip fracture where the interventions were started within the first month after hip fracture.\nData collection and analysis\nTwo review authors independently selected trials, extracted data and assessed risk of bias. Where possible, we pooled data for primary outcomes which were: all cause mortality; morbidity; postoperative complications (e.g. wound infections, pressure sores, deep venous thromboses, respiratory and urinary infections, cardiovascular events); and 'unfavourable outcome' defined as the number of trial participants who died plus the number of survivors with complications. We also pooled data for adverse events such as diarrhoea.\nMain results\nWe included 41 trials involving 3881 participants. Outcome data were limited and risk of bias assessment showed that trials were often methodologically flawed, with less than half of trials at low risk of bias for allocation concealment, incomplete outcome data, or selective reporting of outcomes. The available evidence was judged of either low or very low quality indicating that we were uncertain or very uncertain about the estimates.\nEighteen trials evaluated oral multinutrient feeds that provided non-protein energy, protein, vitamins and minerals. There was low-quality evidence that oral feeds had little effect on mortality (24/486 versus 31/481; risk ratio (RR) 0.81 favouring supplementation, 95% confidence interval (CI) 0.49 to 1.32; 15 trials). Thirteen trials evaluated the effect of oral multinutrient feeds on complications (e.g. pressure sore, infection, venous thrombosis, pulmonary embolism, confusion). There was low-quality evidence that the number of participants with complications may be reduced with oral multinutrient feeds (123/370 versus 157/367; RR 0.71, 95% CI 0.59 to 0.86; 11 trials). Based on very low-quality evidence from six studies (334 participants), oral supplements may result in lower numbers with 'unfavourable outcome' (death or complications): RR 0.67, 95% CI 0.51 to 0.89. There was very low-quality evidence for six studies (442 participants) that oral supplementation did not result in an increased incidence of vomiting and diarrhoea (RR 0.99, 95% CI 0.47 to 2.05).\nOnly very low-quality evidence was available from the four trials examining nasogastric multinutrient feeding. Pooled data from three heterogeneous trials showed no evidence of an effect of supplementation on mortality (14/142 versus 14/138; RR 0.99, 95% CI 0.50 to 1.97). One trial (18 participants) found no difference in complications. None reported on unfavourable outcome. Nasogastric feeding was poorly tolerated. 
One study reported no cases of aspiration pneumonia.\nThere is very low-quality evidence from one trial (57 participants, mainly men) of no evidence for an effect of tube feeding followed by oral supplementation on mortality or complications. Tube feeding, however, was poorly tolerated.\nThere is very low-quality evidence from one trial (80 participants) that a combination of intravenous feeding and oral supplements may not affect mortality but could reduce complications. However, this expensive intervention is usually reserved for people with non-functioning gastrointestinal tracts, which is unlikely in this trial.\nFour trials tested increasing protein intake in an oral feed. These provided low-quality evidence for no clear effect of increased protein intake on mortality (30/181 versus 21/180; RR 1.42, 95% CI 0.85 to 2.37; 4 trials) or number of participants with complications but very low-quality and contradictory evidence of a reduction in unfavourable outcomes (66/113 versus 82/110; RR 0.78, 95% CI 0.65 to 0.95; 2 trials). There was no evidence of an effect on adverse events such as diarrhoea.\nTrials testing intravenous vitamin B1 and other water soluble vitamins, oral 1-alpha-hydroxycholecalciferol (vitamin D), high dose bolus vitamin D, different oral doses or sources of vitamin D, intravenous or oral iron, ornithine alpha-ketoglutarate versus an isonitrogenous peptide supplement, taurine versus placebo, and a supplement with vitamins, minerals and amino acids, provided low- or very low-quality evidence of no clear effect on mortality or complications, where reported.\nBased on low-quality evidence, one trial evaluating the use of dietetic assistants to help with feeding indicated that this intervention may reduce mortality (19/145 versus 36/157; RR 0.57, 95% CI 0.34 to 0.95) but not the number of participants with complications (79/130 versus 84/125).\nAuthors' conclusions\nThere is low-quality evidence that oral multinutrient supplements started before or soon after surgery may prevent complications within the first 12 months after hip fracture, but that they have no clear effect on mortality. There is very low-quality evidence that oral supplements may reduce 'unfavourable outcome' (death or complications) and that they do not result in an increased incidence of vomiting and diarrhoea. Adequately sized randomised trials with robust methodology are required. In particular, the role of dietetic assistants, and peripheral venous feeding or nasogastric feeding in very malnourished people require further evaluation.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Eighteen studies examined the use of additional oral feeds that provided energy from sources other than protein, protein, some vitamins and minerals. There was low-quality evidence that these multinutrient oral feeds may not reduce mortality but that they may reduce the number of people with complications (e.g. pressure sore, infection, venous thrombosis, pulmonary embolism, confusion). There was very low-quality evidence that oral multinutrient feeds may reduce unfavourable outcome (death or complications) and that they did not result in increased vomiting and diarrhoea.\nFour studies examined nasogastric tube feeding, where liquid food is delivered via a tube inserted into the nose and passed down into the stomach, with non-protein energy, protein, some vitamins and minerals. These studies provided very low-quality evidence that tube feeding, which was poorly tolerated, did not seem to make a difference to mortality or complications. Unfavourable outcome was not recorded and there was insufficient evidence on adverse events.\nOne study provided very low-quality evidence that nasogastric tube feeding followed by oral feeds may not affect mortality or complications. It reported that tube feeding was poorly tolerated.\nOne study provided very low-quality evidence that giving feed into a vein initially and then by mouth may not affect mortality but may reduce complications. However, we were surprised that this intervention was being used in people who seemed to be able to take nutrition orally.\nIncreasing protein intake in an oral feed was tested in four studies. These provided low-quality evidence of no clear effect on mortality or complications and very low-quality evidence for a reduction in unfavourable outcome.\nStudies testing intravenous vitamin B1 and other water soluble vitamins, oral 1-alpha-hydroxycholecalciferol (vitamin D), high dose bolus vitamin D, different oral doses or sources of vitamin D, intravenous or oral iron, ornithine alpha-ketoglutarate versus an isonitrogenous peptide supplement, taurine versus placebo, and a supplement with vitamins, minerals and amino acids, provided low- or very low-quality evidence of no clear effect on mortality or complications, where reported.\nOne study, evaluating the use of dietetic assistants to help with feeding, provided low-quality evidence that this may reduce mortality but not the numbers of people with complications."
}
]
|
query-laysum | 2238 | [
{
"role": "user",
"content": "Abstract: Background\nIt is unclear whether there are differences in benefits and harms between mobile and fixed prostheses for total knee arthroplasty (TKA). The previous Cochrane review published in 2004 included two articles. Many more trials have been performed since then; therefore an update is needed.\nObjectives\nTo assess the benefits and harms of mobile bearing compared with fixed bearing cruciate retaining total knee arthroplasty for functional and clinical outcomes in patients with osteoarthritis (OA) or rheumatoid arthritis (RA).\nSearch methods\nWe searched The Cochrane Library, PubMed, EMBASE, CINAHL and Web of Science up to 27 February 2014, and the trial registers ClinicalTrials.gov, Multiregister, Current Controlled Trials and the World Health Organization (WHO) International Clinical Trials Registry Platform for data from unpublished trials, up to 11 February 2014. We also screened the reference lists of selected articles.\nSelection criteria\nWe selected randomised controlled trials comparing mobile bearing with fixed bearing prostheses in cruciate retaining TKA among patients with osteoarthritis or rheumatoid arthritis, using functional or clinical outcome measures and follow-up of at least six months.\nData collection and analysis\nWe used standard methodological procedures as expected by The Cochrane Collaboration.\nMain results\nWe found 19 studies with 1641 participants (1616 with OA (98.5%) and 25 with RA (1.5%)) and 2247 knees. Seventeen new studies were included in this update.\nQuality of the evidence ranged from moderate (knee pain) to low (other outcomes). Most studies had unclear risk of bias for allocation concealment, blinding of participants and personnel, blinding of outcome assessment and selective reporting, and high risk of bias for incomplete outcome data and other bias.\nKnee pain\nWe calculated the standardised mean difference (SMD) for pain, using the Knee Society Score (KSS) and visual analogue scale (VAS) in 11 studies (58%) and 1531 knees (68%). No statistically significant differences between groups were reported (SMD 0.09, 95% confidence interval (CI) -0.03 to 0.22, P value 0.15). This represents an absolute risk difference of 2.4% points higher (95% CI 0.8% lower to 5.9% higher) on the KSS pain scale and a relative percent change of 0.22% (95% CI 0.07% lower to 0.53% higher). The results were homogeneous.\nClinical and functional scores\nThe KSS clinical score did not differ statistically significantly between groups (14 studies (74%) and 1845 knees (82%)) with a mean difference (MD) of -1.06 points (95% CI -2.87 to 0.74, P value 0.25) and heterogeneous results. KSS function was reported in 14 studies (74%) with 1845 knees (82%) as an MD of -0.10 point (95% CI -1.93 to 1.73, P value 0.91) and homogeneous results. 
In two studies (11%), the KSS total score was favourable for mobile bearing (159 vs 132 for fixed bearing), with MD of -26.52 points (95% CI -45.03 to -8.01, P value 0.005), but with a wide 95% confidence interval indicating uncertainty about the estimate.\nOther reported scoring systems did not show statistically significant differences: Hospital for Special Surgery (HSS) score (seven studies (37%) in 1021 knees (45%)) with an MD of -1.36 (95% CI -4.18 to 1.46, P value 0.35); Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) total score (two studies (11%), 167 knees (7%)) with an MD of -4.46 (95% CI -16.26 to 7.34, P value 0.46); and Oxford total (five studies (26%), 647 knees (29%) with an MD of -0.25 (95% CI -1.41 to 0.91, P value 0.67).\nHealth-related quality of life\nThree studies (16%) with 498 knees (22%) reported on health-related quality of life, and no statistically significant differences were noted between the mobile bearing and fixed bearing groups. The Short Form (SF)-12 Physical Component Summary had an MD of -1.96 (95% CI -4.55 to 0.63, P value 0.14) and heterogeneous results.\nRevision surgery\nTwenty seven revisions (1.3%) were performed in 17 studies (89%) with 2065 knees (92%). In all, 13 knees were revised in the fixed bearing group and 14 knees in the mobile bearing group. No statistically significant differences were found (risk difference 0.00, 95% CI -0.01 to 0.01, P value 0.58), and homogeneous results were reported.\nMortality\nIn seven out of 19 studies, 13 participants (37%) died. Two of these participants had undergone bilateral surgery, and for seven participants, it was unclear which prosthesis they had received; therefore they were excluded from the analyses. Thus our analysis included four out of 191 participants (2.1%) who had died: one in the fixed bearing group and three in the mobile bearing group. No statistically significant differences were found. The risk difference was -0.02 (95% CI -0.06 to 0.03, P value 0.49) and results were homogeneous.\nReoperation rates\nThirty reoperations were performed in 17 studies (89%) with 2065 knees (92%): 18 knees in the fixed bearing group (of the 1031 knees) and 12 knees in the mobile group (of the 1034 knees). No statistically significant differences were found. The risk difference was -0.01 (95% CI -0.01 to 0.01, P value 0.99) with homogeneous results.\nOther serious adverse events\nSixteen studies (84%) reported nine other serious adverse events in 1735 knees (77%): four in the fixed bearing group (of the 862 knees) and five in the mobile bearing group (of the 873 knees). No statistically significant differences were found (risk difference 0.00, 95% CI -0.01 to 0.01, P value 0.88), and results were homogeneous.\nAuthors' conclusions\nModerate- to low-quality evidence suggests that mobile bearing prostheses may have similar effects on knee pain, clinical and functional scores, health-related quality of life, revision surgery, mortality, reoperation rate and other serious adverse events compared with fixed bearing prostheses in posterior cruciate retaining TKA. Therefore we cannot draw firm conclusions. Most (98.5%) participants had OA, so the findings primarily reflect results reported in participants with OA. Future studies should report in greater detail outcomes such as those presented in this systematic review, with sufficient follow-up time to allow gathering of high-quality evidence and to inform clinical practice. 
Large registry-based studies may have added value, but they are subject to treatment-by-indication bias. Therefore, this systematic review of RCTs can be viewed as the best available evidence.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Mobile bearing implants probably cause little or no difference in pain compared with fixed bearing implants (moderate quality).\nMobile bearing implants may cause little or no difference in function, health-related quality of life, revision surgery, mortality, reoperation rates and serious adverse events compared with fixed bearing implants (low quality)."
}
]
|
query-laysum | 2239 | [
{
"role": "user",
"content": "Abstract: Background\nInfantile hypertrophic pyloric stenosis (IHPS) is a disorder of young children (aged one year or less) and can be treated by laparoscopic (LP) or open (OP) longitudinal myotomy of the pylorus. Since the first description in 1990, LP is being performed more often worldwide.\nObjectives\nTo compare the efficacy and safety of open versus laparoscopic pyloromyotomy for IHPS.\nSearch methods\nWe conducted a literature search on 04 February 2021 to identify all randomised controlled trials (RCTs), without any language restrictions. We searched the following electronic databases: MEDLINE (1990 to February 2021), Embase (1990 to February 2021), and the Cochrane Central Register of Controlled Trials (CENTRAL). We also searched the Internet using the Google Search engine (www.google.com) and Google Scholar (scholar.google.com) to identify grey literature not indexed in databases.\nSelection criteria\nWe included RCTs and quasi-randomised trials comparing LP with OP for hypertrophic pyloric stenosis.\nData collection and analysis\nTwo review authors independently screened references and extracted data from trial reports. Where outcomes or study details were not reported, we requested missing data from the corresponding authors of the primary RCTs. We used a random-effects model to calculate risk ratios (RRs) for binary outcomes, and mean differences (MDs) for continuous outcomes. Two review authors independently assessed risks of bias. We used GRADE to assess the certainty of the evidence for all outcomes.\nMain results\nThe electronic database search resulted in a total of 434 records. After de-duplication, we screened 410 independent publications, and ultimately included seven RCTs (reported in 8 reports) in quantitative analysis. The seven included RCTs enrolled 720 participants (357 with open pyloromyotomy and 363 with laparoscopic pyloromyotomy). One study was a multi-country trial, three were carried out in the USA, and one study each was carried out in France, Japan, and Bangladesh.\nThe evidence suggests that LP may result in a small increase in mucosal perforation compared with OP (RR 1.60, 95% CI 0.49 to 5.26; 7 studies, 720 participants; low-certainty evidence). LP may result in up to 5 extra instances of mucosal perforation per 1,000 participants; however, the confidence interval ranges from 4 fewer to 44 more per 1,000 participants.\nFour RCTs with 502 participants reported on incomplete pyloromyotomy. They indicate that LP may increase the risk of incomplete pyloromyotomy compared with OP, but the confidence interval crosses the line of no effect (RR 7.37, 95% CI 0.92 to 59.11; 4 studies, 502 participants; low-certainty evidence). In the LP groups, 6 cases of incomplete pyloromyotomy were reported in 247 participants while no cases of incomplete pyloromyotomy were reported in the OP groups (from 255 participants).\nAll included studies (720 participants) reported on postoperative wound infections or abscess formations. The evidence is very uncertain about the effect of LP on postoperative wound infection or abscess formation compared with OP (RR 0.59, 95% CI 0.24 to 1.45; 7 studies, 720 participants; very low-certainty evidence). The evidence is also very uncertain about the effect of LP on postoperative incisional hernia compared with OP (RR 1.01, 95% CI 0.11 to 9.53; 4 studies, 382 participants; very low-certainty evidence).\nLength of hospital stay was assessed by five RCTs, including 562 participants. 
The evidence is very uncertain about the effect of LP compared to OP (mean difference −3.01 hours, 95% CI −8.39 to 2.37 hours; very low-certainty evidence). Time to full feeds was assessed by six studies, including 622 participants. The evidence is very uncertain about the effect of LP on time to full feeds compared with OP (mean difference −5.86 hours, 95% CI −15.95 to 4.24 hours; very low-certainty evidence). The evidence is also very uncertain about the effect of LP on operating time compared with OP (mean difference 0.53 minutes, 95% CI −3.53 to 4.59 minutes; 6 studies, 622 participants; very low-certainty evidence).\nAuthors' conclusions\nLaparoscopic pyloromyotomy may result in a small increase in mucosal perforation when compared with open pyloromyotomy for IHPS. There may be an increased risk of incomplete pyloromyotomy following LP compared with OP, but the effect estimate is imprecise and includes the possibility of no difference. We do not know about the effect of LP compared with OP on the need for re-operation, postoperative wound infections or abscess formation, postoperative haematoma or seroma formation, incisional hernia occurrence, length of postoperative stay, time to full feeds, or operating time because the certainty of the evidence was very low for these outcomes. We downgraded the certainty of the evidence for most outcomes due to limitations in the study design (most outcomes were susceptible to detection bias) and imprecision. There is limited evidence available comparing LP with OP for IHPS. The included studies did not provide sufficient information to determine the effect of training, experience, or surgeon preferences on the outcomes assessed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases up to February 2021 and included seven studies involving 720 participants with infantile pyloric stenosis. All participants were infants ranging in age from 11 to 108 days, and there were more male than female patients. All seven studies reported on mucosal perforation rates, four out of seven RCTs reported on incomplete pyloromyotomy. Time of enrolment ranged from 16 to 56 months."
}
]
|
query-laysum | 2240 | [
{
"role": "user",
"content": "Abstract: Background\nTreament-related diarrhoea is one of the most common and troublesome adverse effects related to chemotherapy or radiotherapy in people with cancer. Its reported incidence has been as high as 50% to 80%. Severe treatment-related diarrhoea can lead to fluid and electrolyte losses and nutritional deficiencies and could adversely affect quality of life (QoL). It is also associated with increased risk of infection in people with neutropenia due to anticancer therapy and often leads to treatment delays, dose reductions, or treatment discontinuation. Probiotics may be effective in preventing or treating chemotherapy- or radiotherapy-induced diarrhoea.\nObjectives\nTo evaluate the clinical effectiveness and side effects of probiotics used alone or combined with other agents for prevention or treatment of chemotherapy- or radiotherapy-related diarrhoea in people with cancer.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 7), MEDLINE (1946 to July week 2, 2017), and Embase (1980 to 2017, week 30). We also searched prospective clinical trial registers and the reference lists of included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) investigating the effects of probiotics for prevention or treatment of chemotherapy- or radiotherapy-related diarrhoea in people with cancer.\nData collection and analysis\nTwo review authors independently selected studies, extracted data, and assessed risk of bias. We used random-effects models for all meta-analyses. If meta-analysis was not possible, we summarised the results narratively.\nMain results\nWe included 12 studies involving 1554 participants. Eleven studies were prevention studies, of which seven compared probiotics with placebo (887 participants), one compared two doses of probiotics with each other and with placebo (246 participants), and three compared probiotics with another active agent (216 participants).The remaining study assessed the effectiveness of probiotics compared with placebo for treatment of radiotherapy-related diarrhoea (205 participants).\nFor prevention of radiotherapy (with or without chemotherapy)-induced diarrhoea, review authors identified five heterogeneous placebo-controlled studies (with 926 participants analysed). Owing to heterogeneity, we could not carry out a meta-analysis, except for two outcomes. For occurrence of any diarrhoea, risk ratios (RRs) ranged from 0.35 (95% confidence interval (CI) 0.26 to 0.47) to 1.0 (95% CI 0.94 to 1.06) (three studies; low-certainty evidence). A beneficial effect of probiotics on quality of life could neither be demonstrated nor refuted (two studies; low-certainty evidence). For occurrence of grade 2 or higher diarrhoea, the pooled RR was 0.75 (95% CI 0.55 to 1.03; four studies; 420 participants; low-certainty evidence), and for grade 3 or higher diarrhoea, RRs ranged from 0.11 (95% CI 0.06 to 0.23) to 1.24 (95% CI 0.74 to 2.08) (three studies; low-certainty evidence). For probiotic users, time to rescue medication was 36 hours longer in one study (95% CI 34.7 to 37.3), but another study reported no difference (moderate-certainty evidence). For the need for rescue medication, the pooled RR was 0.50 (95% CI 0.15 to 1.66; three studies; 194 participants; very low-certainty evidence). No study reported major differences between groups with respect to adverse effects. 
Although not mentioned explicitly, no studies reported deaths, except one in which one participant in the probiotics group died of myocardial infarction after three sessions of radiotherapy.\nThree placebo-controlled studies, with 128 analysed participants, addressed prevention of chemotherapy-induced diarrhoea. For occurrence of any diarrhoea, the pooled RR was 0.59 (95% CI 0.36 to 0.96; two studies; 106 participants; low-certainty evidence). For all other outcomes, a beneficial effect of probiotics could be neither demonstrated nor refuted (one to two studies; 46 to 106 participants; all low-certainty evidence). Studies did not address quality of life nor time to rescue medication.\nThree studies compared probiotics with another intervention in 213 participants treated with radiotherapy (with or without chemotherapy). One very small study (21 participants) reported less diarrhoea six weeks after treatment when dietary counselling was provided (RR 0.30, 95% CI 0.11 to 0.81; very low-certainty evidence). In another study (148 participants), grade 3 or 4 diarrhoea occurred less often in the probiotics group than in the control group (guar gum containing nutritional supplement) (odds ratio (OR) 0.38, 95% CI 0.16 to 0.89; low-certainty evidence), and two studies (63 participants) found less need for rescue medication of probiotics versus another active treatment (RR 0.44, 95% CI 0.22 to 0.86; very low-certainty evidence). Studies did not address quality of life nor time to rescue medication.\nOne placebo-controlled study with 205 participants addressed treatment for radiotherapy-induced diarrhoea and could not demonstrate or refute a beneficial effect of probiotics on average diarrhoea grade, time to rescue medication for diarrhoea (13 hours longer in the probiotics group; 95% CI -0.9 to 26.9 hours), or need for rescue medication (RR 0.74, 95% CI 0.53 to 1.03; moderate-certainty evidence). This study did not address quality of life.\nNo studies reported serious adverse events or diarrhoea-related deaths.\nAuthors' conclusions\nThis review presents limited low- or very low-certainty evidence supporting the effects of probiotics for prevention and treatment of diarrhoea related to radiotherapy (with or without chemotherapy) or chemotherapy alone, need for rescue medication, or occurrence of adverse events. All studies were underpowered and heterogeneous. Severe side effects were absent from all studies.\nRobust evidence on this topic must be provided by future methodologically well-designed trials.\n\nGiven the provided abstract, please respond to the following query: \"Aim of the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To evaluate the effects of live micro-organisms (probiotics) in preventing the occurrence or reducing the severity of diarrhoea in people with cancer who are receiving chemotherapy or radiotherapy."
}
]
|
query-laysum | 2241 | [
{
"role": "user",
"content": "Abstract: Background\nMyopia is a common refractive error, where elongation of the eyeball causes distant objects to appear blurred. The increasing prevalence of myopia is a growing global public health problem, in terms of rates of uncorrected refractive error and significantly, an increased risk of visual impairment due to myopia-related ocular morbidity. Since myopia is usually detected in children before 10 years of age and can progress rapidly, interventions to slow its progression need to be delivered in childhood.\nObjectives\nTo assess the comparative efficacy of optical, pharmacological and environmental interventions for slowing myopia progression in children using network meta-analysis (NMA). To generate a relative ranking of myopia control interventions according to their efficacy. To produce a brief economic commentary, summarising the economic evaluations assessing myopia control interventions in children. To maintain the currency of the evidence using a living systematic review approach.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), MEDLINE; Embase; and three trials registers. The search date was 26 February 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) of optical, pharmacological and environmental interventions for slowing myopia progression in children aged 18 years or younger. Critical outcomes were progression of myopia (defined as the difference in the change in spherical equivalent refraction (SER, dioptres (D)) and axial length (mm) in the intervention and control groups at one year or longer) and difference in the change in SER and axial length following cessation of treatment ('rebound').\nData collection and analysis\nWe followed standard Cochrane methods. We assessed bias using RoB 2 for parallel RCTs. We rated the certainty of evidence using the GRADE approach for the outcomes: change in SER and axial length at one and two years. Most comparisons were with inactive controls.\nMain results\nWe included 64 studies that randomised 11,617 children, aged 4 to 18 years. Studies were mostly conducted in China or other Asian countries (39 studies, 60.9%) and North America (13 studies, 20.3%). Fifty-seven studies (89%) compared myopia control interventions (multifocal spectacles, peripheral plus spectacles (PPSL), undercorrected single vision spectacles (SVLs), multifocal soft contact lenses (MFSCL), orthokeratology, rigid gas-permeable contact lenses (RGP); or pharmacological interventions (including high- (HDA), moderate- (MDA) and low-dose (LDA) atropine, pirenzipine or 7-methylxanthine) against an inactive control. Study duration was 12 to 36 months. The overall certainty of the evidence ranged from very low to moderate.\nSince the networks in the NMA were poorly connected, most estimates versus control were as, or more, imprecise than the corresponding direct estimates. Consequently, we mostly report estimates based on direct (pairwise) comparisons below.\nAt one year, in 38 studies (6525 participants analysed), the median change in SER for controls was −0.65 D. The following interventions may reduce SER progression compared to controls: HDA (mean difference (MD) 0.90 D, 95% confidence interval (CI) 0.62 to 1.18), MDA (MD 0.65 D, 95% CI 0.27 to 1.03), LDA (MD 0.38 D, 95% CI 0.10 to 0.66), pirenzipine (MD 0.32 D, 95% CI 0.15 to 0.49), MFSCL (MD 0.26 D, 95% CI 0.17 to 0.35), PPSLs (MD 0.51 D, 95% CI 0.19 to 0.82), and multifocal spectacles (MD 0.14 D, 95% CI 0.08 to 0.21). 
By contrast, there was little or no evidence that RGP (MD 0.02 D, 95% CI −0.05 to 0.10), 7-methylxanthine (MD 0.07 D, 95% CI −0.09 to 0.24) or undercorrected SVLs (MD −0.15 D, 95% CI −0.29 to 0.00) reduce progression.\nAt two years, in 26 studies (4949 participants), the median change in SER for controls was −1.02 D. The following interventions may reduce SER progression compared to controls: HDA (MD 1.26 D, 95% CI 1.17 to 1.36), MDA (MD 0.45 D, 95% CI 0.08 to 0.83), LDA (MD 0.24 D, 95% CI 0.17 to 0.31), pirenzipine (MD 0.41 D, 95% CI 0.13 to 0.69), MFSCL (MD 0.30 D, 95% CI 0.19 to 0.41), and multifocal spectacles (MD 0.19 D, 95% CI 0.08 to 0.30). PPSLs (MD 0.34 D, 95% CI −0.08 to 0.76) may also reduce progression, but the results were inconsistent. For RGP, one study found a benefit and another found no difference with control. We found no difference in SER change for undercorrected SVLs (MD 0.02 D, 95% CI −0.05 to 0.09).\nAt one year, in 36 studies (6263 participants), the median change in axial length for controls was 0.31 mm. The following interventions may reduce axial elongation compared to controls: HDA (MD −0.33 mm, 95% CI −0.35 to 0.30), MDA (MD −0.28 mm, 95% CI −0.38 to −0.17), LDA (MD −0.13 mm, 95% CI −0.21 to −0.05), orthokeratology (MD −0.19 mm, 95% CI −0.23 to −0.15), MFSCL (MD −0.11 mm, 95% CI −0.13 to −0.09), pirenzipine (MD −0.10 mm, 95% CI −0.18 to −0.02), PPSLs (MD −0.13 mm, 95% CI −0.24 to −0.03), and multifocal spectacles (MD −0.06 mm, 95% CI −0.09 to −0.04). We found little or no evidence that RGP (MD 0.02 mm, 95% CI −0.05 to 0.10), 7-methylxanthine (MD 0.03 mm, 95% CI −0.10 to 0.03) or undercorrected SVLs (MD 0.05 mm, 95% CI −0.01 to 0.11) reduce axial length.\nAt two years, in 21 studies (4169 participants), the median change in axial length for controls was 0.56 mm. The following interventions may reduce axial elongation compared to controls: HDA (MD −0.47mm, 95% CI −0.61 to −0.34), MDA (MD −0.33 mm, 95% CI −0.46 to −0.20), orthokeratology (MD −0.28 mm, (95% CI −0.38 to −0.19), LDA (MD −0.16 mm, 95% CI −0.20 to −0.12), MFSCL (MD −0.15 mm, 95% CI −0.19 to −0.12), and multifocal spectacles (MD −0.07 mm, 95% CI −0.12 to −0.03). PPSL may reduce progression (MD −0.20 mm, 95% CI −0.45 to 0.05) but results were inconsistent. We found little or no evidence that undercorrected SVLs (MD -0.01 mm, 95% CI −0.06 to 0.03) or RGP (MD 0.03 mm, 95% CI −0.05 to 0.12) reduce axial length.\nThere was inconclusive evidence on whether treatment cessation increases myopia progression. Adverse events and treatment adherence were not consistently reported, and only one study reported quality of life.\nNo studies reported environmental interventions reporting progression in children with myopia, and no economic evaluations assessed interventions for myopia control in children.\nAuthors' conclusions\nStudies mostly compared pharmacological and optical treatments to slow the progression of myopia with an inactive comparator. Effects at one year provided evidence that these interventions may slow refractive change and reduce axial elongation, although results were often heterogeneous. A smaller body of evidence is available at two or three years, and uncertainty remains about the sustained effect of these interventions. Longer-term and better-quality studies comparing myopia control interventions used alone or in combination are needed, and improved methods for monitoring and reporting adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• Medications such as atropine, given as eye drops, can slow the progression of short- or near-sightedness (myopia) in children, and also reduce elongation of the eyeball due to myopia. Higher doses of atropine are most effective. We are uncertain about the effects of lower doses of atropine.\n• Several treatments, including special types of lenses in eye glasses as well as contact lenses, may slow the progression of short-sightedness, but their effect is still uncertain and there is insufficient information on the risk of unwanted effects.\n• It is also unclear whether the reported benefit of medications or lenses on myopia progression is maintained over the years."
}
]
|
query-laysum | 2242 | [
{
"role": "user",
"content": "Abstract: Background\nPanic disorder is common and deleterious to mental well-being. Psychological therapies and pharmacological interventions are both used as treatments for panic disorder with and without agoraphobia. However, there are no up-to-date reviews on the comparative efficacy and acceptability of the two treatment modalities, and such a review is necessary for improved treatment planning for this disorder.\nObjectives\nTo assess the efficacy and acceptability of psychological therapies versus pharmacological interventions for panic disorder, with or without agoraphobia, in adults.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Group Specialised Register on 11 September 2015. This register contains reports of relevant randomised controlled trials from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (1950 to present), Embase (1974 to present), and PsycINFO (1967 to present). We cross-checked reference lists of relevant papers and systematic reviews. We did not apply any restrictions on date, language, or publication status.\nSelection criteria\nWe included all randomised controlled trials comparing psychological therapies with pharmacological interventions for panic disorder with or without agoraphobia as diagnosed by operationalised criteria in adults.\nData collection and analysis\nTwo review authors independently extracted data and resolved any disagreements in consultation with a third review author. For dichotomous data, we calculated risk ratios (RR) with 95% confidence intervals (CI). We analysed continuous data using standardised mean differences (with 95% CI). We used the random-effects model throughout.\nMain results\nWe included 16 studies with a total of 966 participants in the present review. 
Eight of the studies were conducted in Europe, four in the USA, two in the Middle East, and one in Southeast Asia.\nNone of the studies reported long-term remission/response (long term being six months or longer from treatment commencement).\nThere was no evidence of a difference between psychological therapies and selective serotonin reuptake inhibitors (SSRIs) in terms of short-term remission (RR 0.85, 95% CI 0.62 to 1.17; 6 studies; 334 participants) or short-term response (RR 0.97, 95% CI 0.51 to 1.86; 5 studies; 277 participants) (very low-quality evidence), and no evidence of a difference between psychological therapies and SSRIs in treatment acceptability as measured using dropouts for any reason (RR 1.33, 95% CI 0.80 to 2.22; 6 studies; 334 participants; low-quality evidence).\nThere was no evidence of a difference between psychological therapies and tricyclic antidepressants in terms of short-term remission (RR 0.82, 95% CI 0.62 to 1.09; 3 studies; 229 participants), short-term response (RR 0.75, 95% CI 0.51 to 1.10; 4 studies; 270 participants), or dropouts for any reason (RR 0.83, 95% CI 0.53 to 1.30; 5 studies; 430 participants) (low-quality evidence).\nThere was no evidence of a difference between psychological therapies and other antidepressants in terms of short-term remission (RR 0.90, 95% CI 0.48 to 1.67; 3 studies; 135 participants; very low-quality evidence) and evidence that psychological therapies did not significantly increase or decrease the short-term response over other antidepressants (RR 0.96, 95% CI 0.67 to 1.37; 3 studies; 128 participants) or dropouts for any reason (RR 1.55, 95% CI 0.91 to 2.65; 3 studies; 180 participants) (low-quality evidence).\nThere was no evidence of a difference between psychological therapies and benzodiazepines in terms of short-term remission (RR 1.08, 95% CI 0.70 to 1.65; 3 studies; 95 participants), short-term response (RR 1.58, 95% CI 0.70 to 3.58; 2 studies; 69 participants), or dropouts for any reason (RR 1.12, 95% CI 0.54 to 2.36; 3 studies; 116 participants) (very low-quality evidence).\nThere was no evidence of a difference between psychological therapies and either antidepressant alone or antidepressants plus benzodiazepines in terms of short-term remission (RR 0.86, 95% CI 0.71 to 1.05; 11 studies; 663 participants) and short-term response (RR 0.95, 95% CI 0.76 to 1.18; 12 studies; 800 participants) (low-quality evidence), and there was no evidence of a difference between psychological therapies and either antidepressants alone or antidepressants plus benzodiazepines in terms of treatment acceptability as measured by dropouts for any reason (RR 1.08, 95% CI 0.77 to 1.51; 13 studies; 909 participants; very low-quality evidence). The risk of selection bias and reporting bias was largely unclear. Preplanned subgroup and sensitivity analyses limited to trials with longer-term, quality-controlled, or individual psychological therapies suggested that antidepressants might be more effective than psychological therapies for some outcomes.\nThere were no data to contribute to a comparison between psychological therapies and serotonin–norepinephrine reuptake inhibitors (SNRIs) and subsequent adverse effects.\nAuthors' conclusions\nThe evidence in this review was often imprecise. 
The superiority of either therapy over the other is uncertain due to the low and very low quality of the evidence with regard to short-term efficacy and treatment acceptability, and no data were available regarding adverse effects.\nThe sensitivity analysis and investigation of the sources of heterogeneity indicated three possible influential factors: quality control of psychological therapies, the length of intervention, and the individual modality of psychological therapies.\nFuture studies should examine the long-term effects after intervention or treatment continuation and should provide information on risk of bias, especially with regard to selection and reporting biases.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched databases to find all trials published between 1950 and September 2015 comparing psychological therapies with antidepressants or benzodiazepines for the treatment of panic disorder with or without agoraphobia in adults. To be included in the review, the studies had to be randomised controlled trials and had to include people with a clear diagnosis of panic disorder with or without agoraphobia.\nWe included 16 studies with a total of 966 participants in the review. We rated the overall quality of the evidence from these studies as very low to moderate."
}
]
|
query-laysum | 2243 | [
{
"role": "user",
"content": "Abstract: Background\nWhen human milk is not available for feeding preterm infants, protein hydrolysate, rather than standard cow's milk formulas (with intact proteins), is often used because it is perceived as being tolerated better and less likely to lead to complications. However, protein hydrolysate formulas are more expensive than standard formulas, and concern exists that their use in practice is not supported by high-quality evidence.\nObjectives\nTo assess the effects of feeding preterm infants hydrolysed formula (vs standard cow's milk formula) on risk of feed intolerance, necrotising enterocolitis, and other morbidity and mortality.\nSearch methods\nWe used the standard Cochrane Neonatal search strategy including electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 1), in the Cochrane Library; Ovid MEDLINE (1966 to 28 January 2019); Ovid Embase (1980 to 28 January 2019); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (28 January 2019), as well as conference proceedings and previous reviews.\nSelection criteria\nRandomised and quasi-randomised controlled trials that compared feeding preterm infants protein hydrolysate versus standard (non-hydrolysed) cow's milk formula.\nData collection and analysis\nTwo review authors assessed trial eligibility and risk of bias and extracted data independently. We analysed treatment effects as described in the individual trials and reported risk ratios and risk differences for dichotomous data, and mean differences for continuous data, with respective 95% confidence intervals (CIs). We used a fixed-effect model in meta-analyses and explored potential causes of heterogeneity in sensitivity analyses. We assessed quality of evidence at the outcome level using the GRADE approach.\nMain results\nWe identified 11 trials for inclusion in the review. All trials were small (total participants 665) and had various methodological limitations including uncertainty about methods to ensure allocation concealment and blinding. Most participants were clinically stable preterm infants of less than about 34 weeks' gestational age or with birth weight less than about 1750 g. Fewer participants were extremely preterm, extremely low birth weight, or growth restricted. Most trials found no effects on feed intolerance, assessed variously as mean pre-feed gastric residual volume, incidence of abdominal distension or other gastrointestinal signs of concern, or time taken to achieve full enteral feeds (meta-analysis was limited because studies used different measures). Meta-analysis showed no effect on the risk of necrotising enterocolitis (typical risk ratio 1.10, 95% CI 0.36 to 3.34; risk difference 0.00, 95% CI -0.03 to 0.04; 5 trials, 385 infants) (low-certainty evidence; downgraded for imprecision and design weaknesses).\nAuthors' conclusions\nThe identified trials provide only low-certainty evidence about the effects of feeding preterm infants protein hydrolysate versus standard formula. Existing data do not support conclusions that feeding protein hydrolysate affects the risk of feed intolerance or necrotising enterocolitis. Additional large, pragmatic trials are needed to provide more reliable and precise estimates of effectiveness and cost-effectiveness.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
]
|
query-laysum | 2244 | [
{
"role": "user",
"content": "Abstract: Background\nThis is the first update of a review published in 2015, Issue 1. Chronic pain is common during childhood and adolescence and is associated with negative outcomes, such as increased severity of pain, reduced function, and low mood. Psychological therapies, traditionally delivered face-to-face with a therapist, are efficacious at reducing pain intensity and disability. To address barriers to treatment access, such as distance and cost of treatment, technology is being used to deliver these psychological therapies remotely. Therapies delivered remotely, such as via the Internet, computer-based programmes, and smartphone applications, can be used to deliver treatment to children and adolescents with chronic pain.\nObjectives\nTo determine the efficacy of psychological therapies delivered remotely compared to waiting list, treatment as usual, or active control treatments, for the management of chronic pain in children and adolescents.\nSearch methods\nWe searched four databases (CENTRAL, MEDLINE, Embase, and PsycINFO) from inception to May 2018 for randomised controlled trials (RCTs) of remotely-delivered psychological interventions for children and adolescents with chronic pain. We searched for chronic pain conditions including, but not exclusive to, headache, recurrent abdominal pain, musculoskeletal pain, and neuropathic pain. We also searched online trial registries, reference sections, and citations of included studies for potential trials.\nSelection criteria\nWe included RCTs that investigated the efficacy of a psychological therapy delivered remotely via technology in comparison to an active, treatment as usual, or waiting-list control. We considered blended treatments, which used a combination of technology and up to 30% face-to-face interaction. Interventions had to be delivered primarily via technology to be included, and we excluded interventions delivered via telephone. We included studies that delivered interventions to children and adolescents (up to 18 years of age) with a chronic pain condition or where chronic pain was a primary symptom of their condition (e.g. juvenile arthritis). We included studies that reported 10 or more participants in each comparator arm, at each extraction point.\nData collection and analysis\nWe combined all psychological therapies in the analyses. We split pain conditions into headache and mixed (non-headache) pain and analysed them separately. We extracted pain severity/intensity, disability, depression, anxiety, and adverse events as primary outcomes, and satisfaction with treatment as a secondary outcome. We considered outcomes at two time points: first immediately following the end of treatment (known as 'post-treatment'), and second, any follow-up time point post-treatment between three and 12 months (known as 'follow-up'). We assessed risk of bias and all outcomes for quality using the GRADE assessment.\nMain results\nWe found 10 studies with 697 participants (an additional 4 studies with 326 participants since the previous review) that delivered treatment remotely; four studies investigated children with headache conditions, one study was with children with juvenile idiopathic arthritis, one included children with sickle cell disease, one included children with irritable bowel syndrome, and three studies included children with different chronic pain conditions (i.e. headache, recurrent abdominal pain, musculoskeletal pain). 
The average age of children receiving treatment was 13.17 years.\nWe judged selection, detection, and reporting biases to be mostly low risk. However, we judged performance and attrition biases to be mostly unclear. Out of the 16 planned analyses, we were able to conduct 13 meta-analyses. We downgraded outcomes for imprecision, indirectness of evidence, inconsistency of results, or because the analysis only included one study.\nHeadache conditions\nFor headache pain conditions, we found headache severity was reduced post-treatment (risk ratio (RR) 2.02, 95% confidence interval (CI) 1.35 to 3.01); P < 0.001, number needed to treat to benefit (NNTB) = 5.36, 7 studies, 379 participants; very low-quality evidence). No effect was found at follow-up (very low-quality evidence). There were no effects of psychological therapies delivered remotely for disability post-treatment (standardised mean difference (SMD) -0.16, 95% CI -0.46 to 0.13; P = 0.28, 5 studies, 440 participants) or follow-up (both very low-quality evidence). Similarly, no effect was found for the outcomes of depression (SMD -0.04, 95% CI -0.15 to 0.23, P = 0.69, 4 studies, 422 participants) or anxiety (SMD -0.08, 95% CI -0.28 to 0.12; P = 0.45, 3 studies, 380 participants) at post-treatment, or follow-up (both very low-quality evidence).\nMixed chronic pain conditions\nWe did not find any beneficial effects of psychological therapies for reducing pain intensity post-treatment for mixed chronic pain conditions (SMD -0.90, 95% CI -1.95 to 0.16; P = 0.10, 5 studies, 501 participants) or at follow-up. There were no beneficial effects of psychological therapies delivered remotely for disability post-treatment (SMD -0.28, 95% CI -0.74 to 0.18; P = 0.24, 3 studies, 363 participants) and a lack of data at follow-up meant no analysis could be run. We found no beneficial effects for the outcomes of depression (SMD 0.04, 95% CI -0.18 to 0.26; P = 0.73, 2 studies, 317 participants) and anxiety (SMD 0.53, 95% CI -0.63 to 1.68; P = 0.37, 2 studies, 370 participants) post-treatment, however, we are cautious of our findings as we could only include two studies in the analyses. We could not conduct analyses at follow-up. We judged the evidence for all outcomes to be very low quality.\nAll conditions\nAcross all chronic pain conditions, six studies reported minor adverse events which were not attributed to the psychological therapies. Satisfaction with treatment is described qualitatively and was overall positive. However, we judged both these outcomes as very low quality.\nAuthors' conclusions\nThere are currently a small number of trials investigating psychological therapies delivered remotely, primarily via the Internet. We are cautious in our interpretations of analyses. We found one beneficial effect of therapies to reduce headache severity post-treatment. For the remaining outcomes there was either no beneficial effect at post-treatment or follow-up, or lack of evidence to determine an effect. Overall, participant satisfaction with treatment was positive. We judged the quality of the evidence to be very low, meaning we are very uncertain about the estimate. Further studies are needed to increase our confidence in this potentially promising field.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For this update, we conducted the search through to May 2018. We found 10 studies including 697 children and adolescents; four of these studies (326 participants) were new for this update. Four studies treated children with headache, one study treated children with juvenile idiopathic arthritis, one treated children with sickle cell disease, one included children with irritable bowel syndrome, and three studies included mixed samples of children, some who had headache and some with other chronic pain conditions. All studies delivered cognitive behavioural therapy. The average age of children receiving the interventions was 13 years. We looked at six outcomes: pain, physical functioning, depression, anxiety, side effects, and satisfaction with treatment."
}
]
|
query-laysum | 2245 | [
{
"role": "user",
"content": "Abstract: Background\nMultiple sclerosis (MS) is a chronic disease of the central nervous system, affecting approximately 2.5 million people worldwide. People with MS may experience limitations in muscular strength and endurance – including the respiratory muscles, affecting functional performance and exercise capacity. Respiratory muscle weakness can also lead to diminished performance on coughing, which may result in (aspiration) pneumonia or even acute ventilatory failure, complications that frequently cause death in MS. Training of the respiratory muscles might improve respiratory function and cough efficacy.\nObjectives\nTo assess the effects of respiratory muscle training versus any other type of training or no training for respiratory muscle function, pulmonary function and clinical outcomes in people with MS.\nSearch methods\nWe searched the Trials Register of the Cochrane Multiple Sclerosis and Rare Diseases of the Central Nervous System Group (3 February 2017), which contains trials from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, LILACS and the trial registry databases ClinicalTrials.gov and WHO International Clinical Trials Registry Platform. Two authors independently screened records yielded by the search, handsearched reference lists of review articles and primary studies, checked trial registers for protocols, and contacted experts in the field to identify further published or unpublished trials.\nSelection criteria\nWe included randomized controlled trials (RCTs) that investigated the efficacy of respiratory muscle training versus any control in people with MS.\nData collection and analysis\nOne reviewer extracted study characteristics and study data from included RCTs, and two other reviewers independently cross-checked all extracted data. Two review authors independently assessed risk of bias with the Cochrane 'Risk of bias' assessment tool. When at least two RCTs provided data for the same type of outcome, we performed meta-analyses. We assessed the certainty of the evidence according to the GRADE approach.\nMain results\nWe included six RCTs, comprising 195 participants with MS. Two RCTs investigated inspiratory muscle training with a threshold device; three RCTs, expiratory muscle training with a threshold device; and one RCT, regular breathing exercises. Eighteen participants (˜ 10%) dropped out; trials reported no serious adverse events.\nWe pooled and analyzed data of 5 trials (N=137) for both inspiratory and expiratory muscle training, using a fixed-effect model for all but one outcome. Compared to no active control, meta-analysis showed that inspiratory muscle training resulted in no significant difference in maximal inspiratory pressure (mean difference (MD) 6.50 cmH2O, 95% confidence interval (CI) −7.39 to 20.38, P = 0.36, I2 = 0%) or maximal expiratory pressure (MD −8.22 cmH2O, 95% CI −26.20 to 9.77, P = 0.37, I2 = 0%), but there was a significant benefit on the predicted maximal inspiratory pressure (MD 20.92 cmH2O, 95% CI 6.03 to 35.81, P = 0.006, I2 = 18%). Meta-analysis with a random-effects model failed to show a significant difference in predicted maximal expiratory pressure (MD 5.86 cmH2O, 95% CI −10.63 to 22.35, P = 0.49, I2 = 55%). These studies did not report outcomes for health-related quality of life.\nThree RCTS compared expiratory muscle training versus no active control or sham training. 
Under a fixed-effect model, meta-analysis failed to show a significant difference between groups with regard to maximal expiratory pressure (MD 8.33 cmH2O, 95% CI −0.93 to 17.59, P = 0.18, I2 = 42%) or maximal inspiratory pressure (MD 3.54 cmH2O, 95% CI −5.04 to 12.12, P = 0.42, I2 = 41%). One trial assessed quality of life, finding no differences between groups.\nFor all predetermined secondary outcomes, such as forced expiratory volume, forced vital capacity and peak flow, pooling was not possible. However, two trials on inspiratory muscle training assessed fatigue using the Fatigue Severity Scale (range of scores 0-56), finding no difference between groups (MD −0.28 points, 95% CI −0.95 to 0.39, P = 0.42, I2 = 0%). Due to the low number of studies included, we could not perform cumulative meta-analysis or subgroup analyses. It was not possible to perform a meta-analysis for adverse events; no serious adverse events were mentioned in any of the included trials.\nThe quality of evidence was low for all outcomes because of limitations in design and implementation as well as imprecision of results.\nAuthors' conclusions\nThis review provides low-quality evidence that resistive inspiratory muscle training with a resistive threshold device is moderately effective postintervention for improving predicted maximal inspiratory pressure in people with mild to moderate MS, whereas expiratory muscle training showed no significant effects. The sustainability of the favourable effect of inspiratory muscle training is unclear, as is the impact of the observed effects on quality of life.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Multiple sclerosis (MS) is a chronic disease of the central nervous system, affecting approximately 2.5 million people worldwide. Although the exact cause of the disease is unknown, it is generally accepted that MS involves an abnormal immune response within the central nervous system. Depending on the severity of the disease, people with MS may experience varying limitations in, for example muscular strength and endurance, including in the muscles needed to breathe (respiratory muscles). Strength of the respiratory muscles is related to people's ability to function and to exercise, and respiratory muscle weakness can lead to less effective coughing, which may result in aspiration pneumonia (when food, saliva or other liquids are breathed into airways instead of being swallowed) or even acute failure of respiratory function. These pulmonary complications are frequently reported causes of death in people with MS. Training of the respiratory muscles might improve breathing and cough effectiveness."
}
]
|
query-laysum | 2246 | [
{
"role": "user",
"content": "Abstract: Background\nStandard treatment for deep vein thrombosis (DVT) aims to reduce immediate complications. Use of thrombolytic clot removal strategies (i.e. thrombolysis (clot dissolving drugs), with or without additional endovascular techniques), could reduce the long-term complications of post-thrombotic syndrome (PTS) including pain, swelling, skin discolouration, or venous ulceration in the affected leg. This is the fourth update of a Cochrane Review first published in 2004.\nObjectives\nTo assess the effects of thrombolytic clot removal strategies and anticoagulation compared to anticoagulation alone for the management of people with acute deep vein thrombosis (DVT) of the lower limb.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL and AMED and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registries to 21 April 2020. We also checked the references of relevant articles to identify additional studies.\nSelection criteria\nWe considered randomised controlled trials (RCTs) examining thrombolysis (with or without adjunctive clot removal strategies) and anticoagulation versus anticoagulation alone for acute DVT.\nData collection and analysis\nWe used standard methodological procedures as recommended by Cochrane. We assessed the risk of bias in included trials with the Cochrane 'Risk of bias' tool. Certainty of the evidence was evaluated using GRADE. For dichotomous outcomes, we calculated the risk ratio (RR) with the corresponding 95% confidence interval (CI). We pooled data using a fixed-effect model, unless we identified heterogeneity, in which case we used a random-effects model. The primary outcomes of interest were clot lysis, bleeding and post thrombotic syndrome.\nMain results\nTwo new studies were added for this update. Therefore, the review now includes a total of 19 RCTs, with 1943 participants. These studies differed with respect to the thrombolytic agent, the doses of the agent and the techniques used to deliver the agent. Systemic, loco-regional and catheter-directed thrombolysis (CDT) strategies were all included. For this update, CDT interventions also included those involving pharmacomechanical thrombolysis. Three of the 19 included studies reported one or more domain at high risk of bias. We combined the results as any (all) thrombolysis interventions compared to standard anticoagulation.\nComplete clot lysis occurred more frequently in the thrombolysis group at early follow-up (RR 4.75; 95% CI 1.83 to 12.33; 592 participants; eight studies) and at intermediate follow-up (RR 2.42; 95% CI 1.42 to 4.12; 654 participants; seven studies; moderate-certainty evidence). Two studies reported on clot lysis at late follow-up with no clear benefit from thrombolysis seen at this time point (RR 3.25, 95% CI 0.17 to 62.63; two studies). No differences between strategies (e.g. systemic, loco-regional and CDT) were detected by subgroup analysis at any of these time points (tests for subgroup differences: P = 0.41, P = 0.37 and P = 0.06 respectively).\nThose receiving thrombolysis had increased bleeding complications (6.7% versus 2.2%) (RR 2.45, 95% CI 1.58 to 3.78; 1943 participants, 19 studies; moderate-certainty evidence). 
No differences between strategies were detected by subgroup analysis (P = 0.25).\nUp to five years after treatment, slightly fewer cases of PTS occurred in those receiving thrombolysis; 50% compared with 53% in the standard anticoagulation (RR 0.78, 95% CI 0.66 to 0.93; 1393 participants, six studies; moderate-certainty evidence). This was still observed at late follow-up (beyond five years) in two studies (RR 0.56, 95% CI 0.43 to 0.73; 211 participants; moderate-certainty evidence).\nWe used subgroup analysis to investigate if the level of DVT (iliofemoral, femoropopliteal or non-specified) had an effect on the incidence of PTS. No benefit of thrombolysis was seen for either iliofemoral or femoropopliteal DVT (six studies; test for subgroup differences: P = 0.29). Systemic thrombolysis and CDT had similar levels of effectiveness. Studies of CDT included four trials in femoral and iliofemoral DVT, and results from these are consistent with those from trials of systemic thrombolysis in DVT at other levels of occlusion.\nAuthors' conclusions\nComplete clot lysis occurred more frequently after thrombolysis (with or without additional clot removal strategies) and PTS incidence was slightly reduced. Bleeding complications also increased with thrombolysis, but this risk has decreased over time with the use of stricter exclusion criteria of studies. Evidence suggests that systemic administration of thrombolytics and CDT have similar effectiveness. Using GRADE, we judged the evidence to be of moderate-certainty, due to many trials having small numbers of participants or events, or both. Future studies are needed to investigate treatment regimes in terms of agent, dose and adjunctive clot removal methods; prioritising patient-important outcomes, including PTS and quality of life, to aid clinical decision making.\n\nGiven the provided abstract, please respond to the following query: \"How up-to date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to 21 April 2020."
}
]
|
query-laysum | 2247 | [
{
"role": "user",
"content": "Abstract: Background\nAnovulation is a common cause of infertility. Drugs used to treat anovulation include selective oestrogen receptor modulators, aromatase inhibitors and gonadotrophins. Ovulation triggers are used with these drugs, as a surrogate for the hormonal surge seen in spontaneous menstrual cycles, to control the timing of ovulation and the timing of sexual intercourse. Ovulation triggers given without reliable evidence of oocyte maturity could be inappropriately timed; they increase costs, and the need to time intercourse precisely after the ovulation trigger is given adds to psychological stress.\nThis is an update of a Cochrane review first published in Issue 3, 2008, of the Cochrane Database of Systematic Reviews.\nObjectives\nTo determine the benefits and harms of administering an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents in comparison with spontaneous ovulation following ovulation induction.\nSearch methods\nWe updated searches of the Menstrual Disorders and Subfertility Group (MDSG) Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and PsycINFO to November 2013. We checked conference proceedings, trial registries and reference lists and contacted researchers.\nSelection criteria\nParallel-group, randomised, controlled trials (RCTs) evaluating the administration of an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents.\nData collection and analysis\nWe independently assessed trial eligibility and trial quality and extracted data. We calculated odds ratios (ORs) with 95% confidence intervals (CIs) for dichotomous data and used the random-effects model in meta-analyses when significant heterogeneity was present. We assessed overall quality of the evidence by using the GRADE approach.\nMain results\nNo new trials were identified. This review includes two RCTs with low risk of bias that compared urinary human chorionic gonadotrophin (hCG) versus no treatment in anovulatory women receiving clomiphene citrate. Urinary hCG did not result in an increase in live birth rate over no hCG (OR 0.97, 95% CI 0.52 to 1.83; two trials, 305 participants, I2 = 16%; low-quality evidence), but very serious imprecision around the effect estimate reduces our confidence in the apparent lack of effect of hCG as an ovulation trigger in clomiphene-induced cycles in anovulatory women.\nAmong this review's secondary outcomes, urinary hCG may not increase ovulation rate (OR 0.99, 95% CI 0.36 to 2.77; two trials, 305 participants, I2 = 55%; low-quality evidence), clinical pregnancy rate (OR 1.02, 95% CI 0.56 to 1.89; two trials, 305 participants, I2 = 35%; low-quality evidence) or miscarriage rate in pregnant women (OR 1.19, 95% CI 0.17 to 8.23; two trials, 54 participants, I2 = 0%; low-quality evidence). Multiple pregnancies and preterm deliveries were uncommon, and ovarian hyperstimulation syndrome, adverse events and deaths were not reported as outcomes in either trial.\n\nWe found no trials evaluating other ovulation triggers.\nAuthors' conclusions\nEvidence is inadequate to recommend or refute the use of urinary hCG as an ovulation trigger in anovulatory women treated with clomiphene citrate. We found no trials evaluating the use of ovulation triggers in anovulatory women treated with other ovulation-inducing agents.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The two studies included in this review randomly assigned 305 women being treated with clomiphene citrate to help eggs to develop to additionally receive a medicine (urinary hCG) to trigger their release or to receive no additional treatment. We found no trials comparing other ovulation triggers given with other medicines used for ovulation induction. The evidence is current to November 2013."
}
]
|
query-laysum | 2248 | [
{
"role": "user",
"content": "Abstract: Background\nAnaemia is a prevalent health problem worldwide. Some types are preventable or controllable with iron supplementation (pills or drops), fortification (sprinkles or powders containing iron added to food) or improvements to dietary diversity and quality (e.g. education or counselling).\nObjectives\nTo summarise the evidence from systematic reviews regarding the benefits or harms of nutrition-specific interventions for preventing and controlling anaemia in anaemic or non-anaemic, apparently healthy populations throughout the life cycle.\nMethods\nIn August 2020, we searched MEDLINE, Embase and 10 other databases for systematic reviews of randomised controlled trials (RCTs) in anaemic or non-anaemic, apparently healthy populations. We followed Cochrane methodology, extracting GRADE ratings where provided. The primary outcomes were haemoglobin (Hb) concentration, anaemia, and iron deficiency anaemia (IDA); secondary outcomes were iron deficiency (ID), severe anaemia and adverse effects (e.g. diarrhoea, vomiting).\nMain results\nWe included 75 systematic reviews, 33 of which provided GRADE assessments; these varied between high and very low.\nInfants (6 to 23 months; 13 reviews)\nIron supplementation increased Hb levels and reduced the risk of anaemia and IDA in two reviews. Iron fortification of milk or cereals, multiple-micronutrient powder (MMNP), home fortification of complementary foods, and supplementary feeding increased Hb levels and reduced the risk of anaemia in six reviews. In one review, lipid-based nutrient supplementation (LNS) reduced the risk of anaemia. In another, caterpillar cereal increased Hb levels and reduced IDA prevalence. Food-based strategies (red meat and fortified cow's milk, beef) showed no evidence of a difference (1 review).\nPreschool and school-aged children (2 to 10 years; 8 reviews)\nDaily or intermittent iron supplementation increased Hb levels and reduced the risk of anaemia and ID in two reviews. One review found no evidence of difference in Hb levels, but an increased risk of anaemia and ID for the intermittent regime. All suggested that zinc plus iron supplementation versus zinc alone, multiple-micronutrient (MMN)-fortified beverage versus control, and point-of-use fortification of food with iron-containing micronutrient powder (MNP) versus placebo or no intervention may increase Hb levels and reduce the risk of anaemia and ID. Fortified dairy products and cereal food showed no evidence of a difference on the incidence of anaemia (1 review).\nAdolescent children (11 to 18 years; 4 reviews)\nCompared with no supplementation or placebo, five types of iron supplementation may increase Hb levels and reduce the risk of anaemia (3 reviews). One review on prevention found no evidence of a difference in anaemia incidence on iron supplementation with or without folic acid, but Hb levels increased. Another suggested that nutritional supplementation and counselling reduced IDA. One review comparing MMN fortification with no fortification observed no evidence of a difference in Hb levels.\nNon-pregnant women of reproductive age (19 to 49 years; 5 reviews)\nTwo reviews suggested that iron therapy (oral, intravenous (IV), intramuscular (IM)) increased Hb levels; one showed that iron folic acid supplementation reduced anaemia incidence; and another that daily iron supplementation with or without folic acid or vitamin C increased Hb levels and reduced the risk of anaemia and ID. 
No review reported interventions related to fortification or dietary diversity and quality.\nPregnant women of reproductive age (15 to 49 years; 23 reviews)\nOne review apiece suggested that: daily iron supplementation with or without folic acid increased Hb levels in the third trimester or at delivery and in the postpartum period, and reduced the risk of anaemia, IDA and ID in the third trimester or at delivery; intermittent iron supplementation had no effect on Hb levels and IDA, but increased the risk of anaemia at or near term and ID, and reduced the risk of side effects; vitamin A supplementation alone versus placebo, no intervention or other micronutrient might increase maternal Hb levels and reduce the risk of maternal anaemia; MMN with iron and folic acid versus placebo reduced the risk of anaemia; supplementation with oral bovine lactoferrin versus oral ferrous iron preparations increased Hb levels and reduced gastrointestinal side effects; MNP for point-of-use fortification of food versus iron and folic acid supplementation might decrease Hb levels at 32 weeks' gestation and increase the risk of anaemia; and LNS versus iron or folic acid and MMN increased the risk of anaemia.\nMixed population (all ages; 22 reviews)\nIron supplementation versus placebo or control increased Hb levels in healthy children, adults, and elderly people (4 reviews). Hb levels appeared to increase and risk of anaemia and ID decrease in two reviews investigating MMN fortification versus placebo or no treatment, iron fortified flour versus control, double fortified salt versus iodine only fortified salt, and rice fortification with iron alone or in combination with other micronutrients versus unfortified rice or no intervention. Each review suggested that fortified versus non-fortified condiments or noodles, fortified (sodium iron ethylenediaminetetraacetate; NaFeEDTA) versus non-fortified soy sauce, and double-fortified salt versus control salt may increase Hb concentration and reduce the risk of anaemia. One review indicated that Hb levels increased for children who were anaemic or had IDA and received iron supplementation, and decreased for those who received dietary interventions. Another assessed the effects of foods prepared in iron pots, and found higher Hb levels in children with low-risk malaria status in two trials, but no difference when comparing food prepared in non-cast iron pots in a high-risk malaria endemicity mixed population.\nThere was no evidence of a difference for adverse effects. Anaemia and malaria prevalence were rarely reported. No review focused on women aged 50 to 65 years plus or men (19 to 65 years plus).\nAuthors' conclusions\nCompared to no treatment, daily iron supplementation may increase Hb levels and reduce the risk of anaemia and IDA in infants, preschool and school-aged children and pregnant and non-pregnant women. Iron fortification of foods in infants and use of iron pots with children may have prophylactic benefits for malaria endemicity low-risk populations. In any age group, only a limited number of reviews assessed interventions to improve dietary diversity and quality. Future trials should assess the effects of these types of interventions, and consider the requirements of different populations.\n\nGiven the provided abstract, please respond to the following query: \"Non-pregnant women of reproductive age (19 to 49 years)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Two reviews suggested that iron therapy (oral, intravenous, intramuscular) increased Hb levels. One review found that intravenous iron increased Hb levels compared with oral iron, and another that daily iron supplementation with or without folic acid or vitamin C increased Hb levels and reduced the risk of anaemia and ID."
}
]
|
query-laysum | 2249 | [
{
"role": "user",
"content": "Abstract: Background\nClonidine is a presynaptic alpha-2-adrenergic receptor agonist that has been used for many years to treat hypertension and other conditions, including chronic pain. Adverse events associated with systemic use of the drug have limited its application. Topical use of drugs has been gaining interest since the beginning of the century, as it may limit adverse events without loss of analgesic efficacy. Topical clonidine (TC) formulations have been investigated for almost 20 years in clinical trials. This is an update of the original Cochrane Review published in Issue 8, 2015.\nObjectives\nThe objective of this review was to assess the analgesic efficacy and safety of TC compared with placebo or other drugs in adults aged 18 years or above with chronic neuropathic pain.\nSearch methods\nFor this update we searched the Cochrane Register of Studies Online (CRSO), MEDLINE (Ovid), and Embase (Ovid) databases, and reference lists of retrieved papers and trial registries. We also contacted experts in the field. The most recent search was performed on 27 October 2021.\nSelection criteria\nWe included randomised, double-blind studies of at least two weeks' duration comparing TC versus placebo or other active treatment in adults with chronic neuropathic pain.\nData collection and analysis\nTwo review authors independently screened references for eligibility, extracted data, and assessed risk of bias. Any discrepancies were resolved by discussion or by consulting a third review author if necessary. Where required, we contacted trial authors to request additional information.\nWe presented pooled estimates for dichotomous outcomes as risk ratios (RRs) with 95% confidence intervals (CIs), and continuous outcomes as mean differences (MDs) with P values. We used Review Manager Web software to perform the meta-analyses. We used a fixed-effect model if we considered heterogeneity as not important; otherwise, we used a random-effects model.\nThe review primary outcomes were: participant-reported pain relief of 50% or greater; participant-reported pain relief of 30% or greater; much or very much improved on Patient Global Impression of Change scale (PGIC); and very much improved on PGIC. Secondary outcomes included withdrawals due to adverse events; participants experiencing at least one adverse event; and withdrawals due to lack of efficacy. All outcomes were measured at the longest follow-up period.\nWe assessed the certainty of evidence using GRADE and created two summary of findings tables.\nMain results\nWe included four studies in the review (two new in this update), with a total of 743 participants with painful diabetic neuropathy (PDN). TC (0.1% or 0.2%) was applied in gel form to the painful area two to three times daily. The double-blind treatment phase of three studies lasted 8 weeks to 85 days and compared TC versus placebo. In the fourth study, the double-blind treatment phase lasted 12 weeks and compared TC versus topical capsaicin. We assessed the studies as at unclear or high risk of bias for most domains; all studies were at unclear risk of bias for allocation concealment and blinding of outcome assessment; one study was at high risk of bias for blinding of participants and personnel; two studies were at high risk of attrition bias; and three studies were at high risk of bias due to notable funding concerns. 
We judged the certainty of evidence (GRADE) to be moderate to very low, downgrading for study limitations, imprecision of results, and publication bias.\nTC compared to placebo\nThere was no evidence of a difference in number of participants with participant-reported pain relief of 50% or greater during longest follow-up period (12 weeks) between groups (risk ratio (RR) 1.21, 95% confidence interval (CI) 0.78 to 1.86; 179 participants; 1 study; low certainty evidence). However, the number of participants with participant-reported pain relief of 30% or greater during longest follow-up period (8 to 12 weeks) was higher in the TC group compared with placebo (RR 1.35, 95% CI 1.03 to 1.77; 344 participants; 2 studies, very low certainty evidence). The number needed to treat for an additional beneficial outcome (NNTB) for this comparison was 8.33 (95% CI 4.3 to 50.0). Also, there was no evidence of a difference between groups for the outcomes much or very much improved on the PGIC during longest follow-up period (12 weeks) or very much improved on PGIC during the longest follow-up period (12 weeks) (RR 1.06, 95% CI 0.76 to 1.49 and RR 1.82, 95% CI 0.89 to 3.72, respectively; 179 participants; 1 study; low certainty evidence). We observed no evidence of a difference between groups in withdrawals due to adverse events and withdrawals due to lack of efficacy during the longest follow-up period (12 weeks) (RR 0.34, 95% CI 0.04 to 3.18 and RR 1.01, 95% CI 0.06 to 15.92, respectively; 179 participants; 1 study; low certainty evidence) and participants experiencing at least one adverse event during longest follow-up period (12 weeks) (RR 0.65, 95% CI 0.14 to 3.05; 344 participants; 2 studies; low certainty evidence).\nTC compared to active comparator\nThere was no evidence of a difference in the number of participants with participant-reported pain relief of 50% or greater during longest follow-up period (12 weeks) between groups (RR 1.41, 95% CI 0.99 to 2.0; 139 participants; 1 study; low certainty evidence). Other outcomes were not reported.\nAuthors' conclusions\nThis is an update of a review published in 2015, for which our conclusions remain unchanged. Topical clonidine may provide some benefit to adults with painful diabetic neuropathy; however, the evidence is very uncertain. Additional trials are needed to assess TC in other neuropathic pain conditions and to determine whether it is possible to predict who or which groups of people will benefit from TC.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To find out how clonidine applied to the skin (topical clonidine) works in people with neuropathic pain, we searched medical databases and references of retrieved papers and registries or clinical trials. We also contacted experts in the field. Two review authors independently screened references for eligibility, extracted data, and assessed risk of bias. When necessary, we contacted trial authors to request additional information."
}
]
|
query-laysum | 2250 | [
{
"role": "user",
"content": "Abstract: Background\nFollow-up examinations are commonly performed after primary treatment for women with breast cancer. They are used to detect recurrences at an early (asymptomatic) stage. This is an update of a Cochrane review first published in 2000.\nObjectives\nTo assess the effectiveness of different policies of follow-up for distant metastases on mortality, morbidity and quality of life in women treated for stage I, II or III breast cancer.\nSearch methods\nFor this 2014 review update, we searched the Cochrane Breast Cancer Group's Specialised Register (4 July 2014), MEDLINE (4 July 2014), Embase (4 July 2014), CENTRAL (2014, Issue 3), the World Health Organization (WHO) International Clinical Trials Registry Platform (4 July 2014) and ClinicalTrials.gov (4 July 2014). References from retrieved articles were also checked.\nSelection criteria\nAll randomised controlled trials (RCTs) assessing the effectiveness of different policies of follow-up after primary treatment were reviewed for inclusion.\nData collection and analysis\nTwo review authors independently assessed trials for eligibility for inclusion in the review and risk of bias. Data were pooled in an individual patient data meta-analysis for the two RCTs testing the effectiveness of different follow-up schemes. Subgroup analyses were conducted by age, tumour size and lymph node status.\nMain results\nSince 2000, one new trial has been published; the updated review now includes five RCTs involving 4023 women with breast cancer (clinical stage I, II or III).\nTwo trials involving 2563 women compared follow-up based on clinical visits and mammography with a more intensive scheme including radiological and laboratory tests. After pooling the data, no significant differences in overall survival (hazard ratio (HR) 0.98, 95% confidence interval (CI) 0.84 to 1.15, two studies, 2563 participants, high-quality evidence), or disease-free survival (HR 0.84, 95% CI 0.71 to 1.00, two studies, 2563 participants, low-quality evidence) emerged. No differences in overall survival and disease-free survival emerged in subgroup analyses according to patient age, tumour size and lymph node status before primary treatment. In 1999, 10-year follow-up data became available for one trial of these trials, and no significant differences in overall survival were found. No difference was noted in quality of life measures (one study, 639 participants, high-quality evidence).\nThe new included trial, together with a previously included trial involving 1264 women compared follow-up performed by a hospital-based specialist versus follow-up performed by general practitioners. No significant differences were noted in overall survival (HR 1.07, 95% CI 0.64 to 1.78, one study, 968 participants, moderate-quality evidence), time to detection of recurrence (HR 1.06, 95% CI 0.76 to 1.47, two studies, 1264 participants, moderate-quality evidence), and quality of life (one study, 356 participants, high-quality evidence). Patient satisfaction was greater among patients treated by general practitioners. One RCT involving 196 women compared regularly scheduled follow-up visits versus less frequent visits restricted to the time of mammography. 
No significant differences emerged in interim use of the telephone and frequency of general practitioners' consultations.\nAuthors' conclusions\nThis updated review of RCTs conducted almost 20 years ago suggests that follow-up programs based on regular physical examinations and yearly mammography alone are as effective as more intensive approaches based on regular performance of laboratory and instrumental tests in terms of timeliness of recurrence detection, overall survival and quality of life.\nIn two RCTs, follow-up care performed by trained and not trained general practitioners working in an organised practice setting had comparable effectiveness to that delivered by hospital-based specialists in terms of overall survival, recurrence detection, and quality of life.\n\nGiven the provided abstract, please respond to the following query: \"Study Characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A literature search up to July 2014 found five trials (involving 4023 women with a median follow-up variable from 16 to 120 months). Since the previous version of this Cochrane review in 2004, one new study has been published."
}
]
|
query-laysum | 2251 | [
{
"role": "user",
"content": "Abstract: Background\nKeratoconus is a condition of the eye that affects approximately 1 in 2000 people. The disease leads to a gradual increase in corneal curvature and decrease in visual acuity with consequent impact on quality of life. Collagen cross-linking (CXL) with ultraviolet A (UVA) light and riboflavin (vitamin B2) is a relatively new treatment that has been reported to slow or halt the progression of the disease in its early stages.\nObjectives\nThe objective of this review was to assess whether there is evidence that CXL is an effective and safe treatment for halting the progression of keratoconus compared to no treatment.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 7), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2014), EMBASE (January 1980 to August 2014), Latin American and Caribbean Health Sciences Literature Database (LILACS) (1982 to August 2014), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to August 2014), OpenGrey (System for Information on Grey Literature in Europe) (www.opengrey.eu/), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organisation International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We used no date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 28 August 2014.\nSelection criteria\nWe included randomised controlled trials (RCTs) where CXL with UVA light and riboflavin was used to treat people with keratoconus and was compared to no treatment.\nData collection and analysis\nTwo review authors independently screened the search results, assessed trial quality, and extracted data using standard methodological procedures expected by Cochrane. Our primary outcomes were two indicators of progression at 12 months: increase in maximum keratometry of 1.5 dioptres (D) or more and deterioration in uncorrected visual acuity of more than 0.2 logMAR.\nMain results\nWe included three RCTs conducted in Australia, the United Kingdom, and the United States that enrolled a total of 225 eyes and analysed 219 eyes. The total number of people enrolled was not clear in two of the studies. Only adults were enrolled into these studies. Out of the eyes analysed, 119 had CXL (all using the epithelium-off technique) and 100 served as controls. One of these studies only reported comparative data on review outcomes. All three studies were at high risk for performance bias (lack of masking), detection bias (only one trial attempted to mask outcome assessment), and attrition bias (incomplete follow-up). It was not possible to pool data due to differences in measuring and reporting outcomes. We identified a further three unpublished trials that potentially had enrolled a total of 195 participants.\nThere was limited evidence on the risk of progression. Analysis of the first few participants followed up to one year in one study suggested that eyes given CXL were less likely to have an increase in maximum keratometry of 1.5 D or more at 12 months compared to eyes given no treatment, but the confidence intervals (CI) were wide and compatible with no effect or more progression in the CXL group (risk ratio (RR) 0.12, 95% CI 0.01 to 2.00, 19 eyes). 
The same study reported the number of eyes with an increase of 2 D or more at 36 months in the whole cohort with a RR of 0.03 favouring CXL (95% CI 0.00 to 0.43, 94 eyes). Another study reported \"progression\" at 18 months using a different definition; people receiving CXL were less likely to progress, but again the effect was uncertain (RR 0.14, 95% CI 0.01 to 2.61, 44 eyes). We judged this to be very low-quality evidence due to the risk of bias of included studies, imprecision, indirectness and publication bias but noted that the size of the potential effect was large.\nOn average, treated eyes had a less steep cornea (approximately 2 D less steep) (mean difference (MD) -1.92, 95% CI -2.54 to -1.30, 94 eyes, 1 RCT, very low-quality evidence) and better uncorrected visual acuity (approximately 2 lines or 10 letters better) (MD -0.20, 95% CI -0.31 to -0.09, 94 eyes, 1 RCT, very low-quality evidence) at 12 months. None of the studies reported loss of 0.2 logMAR acuity. The data on corneal thickness were inconsistent. There were no data available on quality of life or costs. Adverse effects were not uncommon but mostly transient and of low clinical significance. In one trial, 3 out of 12 participants treated with CXL had an adverse effect including corneal oedema, anterior chamber inflammation, and recurrent corneal erosions. In one trial at 3 years 3 out of 50 participants experienced adverse events including mild diffuse corneal oedema and paracentral infiltrate, peripheral corneal vascularisation, and subepithelial infiltrates and anterior chamber inflammation. No adverse effects were reported in the control groups.\nAuthors' conclusions\nThe evidence for the use of CXL in the management of keratoconus is limited due the lack of properly conducted RCTs.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We judged the quality of the evidence to be very low because of problems in the way the studies were done and reported and the small number of eyes included."
}
]
|
query-laysum | 2252 | [
{
"role": "user",
"content": "Abstract: Background\nAcute liver failure is a rare and serious disease. Acute liver failure may be paracetamol-induced or non-paracetamol-induced. Acute liver failure not caused by paracetamol (acetaminophen) has a poor prognosis with limited treatment options. N-acetylcysteine has been successful in treating paracetamol-induced acute liver failure and reduces the risk of needing to undergo liver transplantation. Recent randomised clinical trials have explored whether the benefit can be extrapolated to treat non-paracetamol-related acute liver failure. The American Association for the Study of Liver Diseases (AASLD) 2011 guideline suggested that N-acetylcysteine could improve spontaneous survival when given during early encephalopathy stages for patients with non-paracetamol-related acute liver failure.\nObjectives\nTo assess the benefits and harms of N-acetylcysteine compared with placebo or no N-acetylcysteine, as an adjunct to usual care, in people with non-paracetamol-related acute liver failure.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register (searched 25 June 2020), Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 6) in The Cochrane Library, MEDLINE Ovid (1946 to 25 June 2020), Embase Ovid (1974 to 25 June 2020), Latin American and Caribbean Health Science Information database (LILACS) (1982 to 25 June 2020), Science Citation Index Expanded (1900 to 25 June 2020), and Conference Proceedings Citation Index – Science (1990 to 25 June 2020).\nSelection criteria\nWe included randomised clinical trials that compared N-acetylcysteine at any dose or route with placebo or no intervention in participants with non-paracetamol-induced acute liver failure.\nData collection and analysis\nWe used standard methodological procedures as described in the Cochrane Handbook for Systematic Reviews of Interventions. We conducted meta-analyses and presented results using risk ratios (RR) with 95% confidence intervals (CIs). We quantified statistical heterogeneity by calculating I2. We assessed bias using the Cochrane risk of bias tool and determined the certainty of the evidence using the GRADE approach.\nMain results\nWe included two randomised clinical trials: one with 183 adults and one with 174 children (birth through age 17 years). We classified both trials at overall high risk of bias. One unregistered study in adults is awaiting classification while we are awaiting responses from study authors for details on trial methodology (e.g. randomisation processes).\nWe did not meta-analyse all-cause mortality because of significant clinical heterogeneity in the two trials. For all-cause mortality at 21 days between adults receiving N-acetylcysteine versus placebo, there was inconclusive evidence of effect (N-acetylcysteine 24/81 (29.6%) versus placebo 31/92 (33.7%); RR 0.88, 95% CI 0.57 to 1.37; low certainty evidence). The certainty of the evidence was low due to risk of bias and imprecision. Similarly, for all-cause mortality at one year between children receiving N-acetylcysteine versus placebo, there was inconclusive evidence of effect (25/92 (27.2%) versus 17/92 (18.5%); RR 1.47, 95% CI 0.85 to 2.53; low certainty evidence). We downgraded the certainty of evidence due to very serious imprecision.\nWe did not meta-analyse serious adverse events and liver transplantation at one year due to incomplete reporting and clinical heterogeneity. 
For liver transplantation at 21 days in the trial with adults, there was inconclusive evidence of effect (RR 0.72, 95% CI 0.49 to 1.06; low certainty evidence). We downgraded the certainty of the evidence due to serious risk of bias and imprecision. For liver transplantation at one year in the trial with children, there was inconclusive evidence of effect (RR 1.23, 95% CI 0.84 to 1.81; low certainty of evidence). We downgraded the certainty of the evidence due to very serious imprecision.\nThere was inconclusive evidence of effect on serious adverse events in the trial with children (RR 1.25, 95% CI 0.35 to 4.51; low certainty evidence). We downgraded the certainty of the evidence due to very serious imprecision.\nWe did not meta-analyse non-serious adverse events due to clinical heterogeneity. There was inconclusive evidence of effect on non-serious adverse events in adults (RR 1.07, 95% CI 0.79 to 1.45; 173 participants; low certainty of evidence) and children (RR 1.19, 95% CI 0.62 to 2.16; 184 participants; low certainty of evidence). None of the trials reported outcomes of proportion of participants with resolution of encephalopathy and coagulopathy or health-related quality of life.\nThe National Institutes of Health in the United States funded both trials through grants. One of the trials received additional funding from two hospital foundations' grants. Pharmaceutical companies provided the study drug and matching placebo, but they did not have input into study design nor involvement in analysis.\nAuthors' conclusions\nThe available evidence is inconclusive regarding the effect of N-acetylcysteine compared with placebo or no N-acetylcysteine, as an adjunct to usual care, on mortality or transplant rate in non-paracetamol-induced acute liver failure. Current evidence does not support the guideline suggestion to use N-acetylcysteine in adults with non-paracetamol-related acute liver failure, nor the rising use observed in clinical practice. The uncertainty based on current scanty evidence warrants additional randomised clinical trials with non-paracetamol-related acute liver failure evaluating N-acetylcysteine versus placebo, as well as investigations to identify predictors of response and the optimal N-acetylcysteine dose and duration.\n\nGiven the provided abstract, please respond to the following query: \"Systematic review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated the effects and safety of using N-acetylcysteine in people with acute liver failure even when it is not related to too much paracetamol ingestion."
}
]
|
query-laysum | 2253 | [
{
"role": "user",
"content": "Abstract: Background\nDiagnosing acute appendicitis (appendicitis) based on clinical evaluation, blood testing, and urinalysis can be difficult. Therefore, in persons with suspected appendicitis, abdominopelvic computed tomography (CT) is often used as an add-on test following the initial evaluation to reduce remaining diagnostic uncertainty. The aim of using CT is to assist the clinician in discriminating between persons who need surgery with appendicectomy and persons who do not.\nObjectives\nPrimary objective\nOur primary objective was to evaluate the accuracy of CT for diagnosing appendicitis in adults with suspected appendicitis.\nSecondary objectives\nOur secondary objectives were to compare the accuracy of contrast-enhanced versus non-contrast-enhanced CT, to compare the accuracy of low-dose versus standard-dose CT, and to explore the influence of CT-scanner generation, radiologist experience, degree of clinical suspicion of appendicitis, and aspects of methodological quality on diagnostic accuracy.\nSearch methods\nWe searched MEDLINE, Embase, and Science Citation Index until 16 June 2017. We also searched references lists. We did not exclude studies on the basis of language or publication status.\nSelection criteria\nWe included prospective studies that compared results of CT versus outcomes of a reference standard in adults (> 14 years of age) with suspected appendicitis. We excluded studies recruiting only pregnant women; studies in persons with abdominal pain at any location and with no particular suspicion of appendicitis; studies in which all participants had undergone ultrasonography (US) before CT and the decision to perform CT depended on the US outcome; studies using a case-control design; studies with fewer than 10 participants; and studies that did not report the numbers of true-positives, false-positives, false-negatives, and true-negatives. Two review authors independently screened and selected studies for inclusion.\nData collection and analysis\nTwo review authors independently collected the data from each study and evaluated methodological quality according to the Quality Assessment of Studies of Diagnostic Accuracy - Revised (QUADAS-2) tool. We used the bivariate random-effects model to obtain summary estimates of sensitivity and specificity.\nMain results\nWe identified 64 studies including 71 separate study populations with a total of 10,280 participants (4583 with and 5697 without acute appendicitis). Estimates of sensitivity ranged from 0.72 to 1.0 and estimates of specificity ranged from 0.5 to 1.0 across the 71 study populations. Summary sensitivity was 0.95 (95% confidence interval (CI) 0.93 to 0.96), and summary specificity was 0.94 (95% CI 0.92 to 0.95). At the median prevalence of appendicitis (0.43), the probability of having appendicitis following a positive CT result was 0.92 (95% CI 0.90 to 0.94), and the probability of having appendicitis following a negative CT result was 0.04 (95% CI 0.03 to 0.05). In subgroup analyses according to contrast enhancement, summary sensitivity was higher for CT with intravenous contrast (0.96, 95% CI 0.92 to 0.98), CT with rectal contrast (0.97, 95% CI 0.93 to 0.99), and CT with intravenous and oral contrast enhancement (0.96, 95% CI 0.93 to 0.98) than for unenhanced CT (0.91, 95% CI 0.87 to 0.93). Summary sensitivity of CT with oral contrast enhancement (0.89, 95% CI 0.81 to 0.94) and unenhanced CT was similar. 
Results show practically no differences in summary specificity, which varied from 0.93 (95% CI 0.90 to 0.95) to 0.95 (95% CI 0.90 to 0.98) between subgroups. Summary sensitivity for low-dose CT (0.94, 95% CI 0.90 to 0.97) was similar to summary sensitivity for standard-dose or unspecified-dose CT (0.95, 95% CI 0.93 to 0.96); summary specificity did not differ between low-dose and standard-dose or unspecified-dose CT. No studies had high methodological quality as evaluated by the QUADAS-2 tool. Major methodological problems were poor reference standards and partial verification primarily due to inadequate and incomplete follow-up in persons who did not have surgery.\nAuthors' conclusions\nThe sensitivity and specificity of CT for diagnosing appendicitis in adults are high. Unenhanced standard-dose CT appears to have lower sensitivity than standard-dose CT with intravenous, rectal, or oral and intravenous contrast enhancement. Use of different types of contrast enhancement or no enhancement does not appear to affect specificity. Differences in sensitivity and specificity between low-dose and standard-dose CT appear to be negligible. The results of this review should be interpreted with caution for two reasons. First, these results are based on studies of low methodological quality. Second, the comparisons between types of contrast enhancement and radiation dose may be unreliable because they are based on indirect comparisons that may be confounded by other factors.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for and included studies published up to 16 June 2017."
}
]
|
query-laysum | 2254 | [
{
"role": "user",
"content": "Abstract: Background\nPrimary angle closure glaucoma (PACG) accounts for 50% of glaucoma blindness worldwide. More than three-quarters of individuals with PACG reside in Asia. In these populations, PACG often develops insidiously leading to chronically raised intraocular pressure and optic nerve damage, which is often asymptomatic. Non-contact tests to identify people at risk of angle closure are relatively quick and can be carried out by appropriately trained healthcare professionals or technicians as a triage test. If the test is positive, the person will be referred for further specialist assessment.\nObjectives\nTo determine the diagnostic accuracy of non-contact tests (limbal anterior chamber depth (LACD) (van Herick test); oblique flashlight test; scanning peripheral anterior chamber depth analyser (SPAC), Scheimpflug photography; anterior segment optical coherence tomography (AS-OCT), for identifying people with an occludable angle.\nSearch methods\nWe searched the following bibliographic databases 3 October 2019: CENTRAL; MEDLINE; Embase; BIOSIS; OpenGrey; ARIF and clinical trials registries. The searches were limited to remove case reports. There were no date or language restrictions in the searches.\nSelection criteria\nWe included prospective and retrospective cross-sectional, cohort and case-control studies conducted in any setting that evaluated the accuracy of one or more index tests for identifying people with an occludable angle compared to a gonioscopic reference standard.\nData collection and analysis\nTwo review authors independently performed data extraction and quality assessment using QUADAS2 for each study. For each test, 2 x 2 tables were constructed and sensitivity and specificity were calculated. When four or more studies provided data at fixed thresholds for each test, we fitted a bivariate model using the METANDI function in STATA to calculate pooled point estimates for sensitivity and specificity. For comparisons between index tests and subgroups, we performed a likelihood ratio test comparing the model with and without the covariate.\nMain results\nWe included 47 studies involving 26,151 participants and analysing data from 23,440. Most studies were conducted in Asia (36, 76.6%). Twenty-seven studies assessed AS-OCT (analysing 15,580 participants), 17 studies LACD (7385 participants), nine studies Scheimpflug photography (1616 participants), six studies SPAC (5239 participants) and five studies evaluated the oblique flashlight test (998 participants). Regarding study quality, 36 of the included studies (76.6%) were judged to have a high risk of bias in at least one domain.The use of a case-control design (13 studies) or inappropriate exclusions (6 studies) raised patient selection concerns in 40.4% of studies and concerns in the index test domain in 59.6% of studies were due to lack of masking or post-hoc determination of optimal thresholds. 
Among studies that did not use a case-control design, 16 studies (20,599 participants) were conducted in a primary care/community setting and 18 studies (2590 participants) in secondary care settings, of which 15 investigated LACD.\nSummary estimates were calculated for commonly reported parameters and thresholds for each test; LACD ≤ 25% (16 studies, 7540 eyes): sensitivity 0.83 (95% confidence interval (CI) 0.74, 0.90), specificity 0.88 (95% CI 0.84, 0.92) (moderate-certainty); flashlight (grade1) (5 studies, 1188 eyes): sensitivity 0.51 (95% CI 0.25, 0.76), specificity 0.92 (95% CI 0.70, 0.98) (low-certainty); SPAC (≤ 5 and/or S or P) (4 studies, 4677 eyes): sensitivity 0.83 (95% CI 0.70, 0.91), specificity 0.78 (95% CI 0.70, 0.83) (moderate-certainty); Scheimpflug photography (central ACD) (9 studies, 1698 eyes): sensitivity 0.92 (95% CI 0.84, 0.96), specificity 0.86 (95% CI 0.76, 0.93) (moderate-certainty); AS-OCT (subjective opinion of occludability) (13 studies, 9242 eyes): sensitivity 0.85 (95% CI 0.76, 0.91); specificity 0.71 (95% CI 0.62, 0.78) (moderate-certainty).\nFor comparisons of sensitivity and specificity between index tests we used LACD (≤ 25%) as the reference category. The flashlight test (grade 1 threshold) showed a statistically significant lower sensitivity than LACD (≤ 25%), whereas AS-OCT (subjective judgement) had a statistically significant lower specificity. There were no statistically significant differences for the other index test comparisons. A subgroup analysis was conducted for LACD (≤ 25%), comparing community (7 studies, 14.4% prevalence) vs secondary care (7 studies, 42% prevalence) settings. We found no evidence of a statistically significant difference in test performance according to setting.\nPerforming LACD on 1000 people at risk of angle closure with a prevalence of occludable angles of 10%, LACD would miss about 17 cases out of the 100 with occludable angles and incorrectly classify 108 out of 900 without angle closure.\nAuthors' conclusions\nThe finding that LACD performed as well as index tests that use sophisticated imaging technologies, confirms the potential for this test for case-detection of occludable angles in high-risk populations. However, methodological issues across studies may have led to our estimates of test accuracy being higher than would be expected in standard clinical practice. There is still a need for high-quality studies to evaluate the performance of non-invasive tests for angle assessment in both community-based and secondary care settings.\n\nGiven the provided abstract, please respond to the following query: \"Why is improving the diagnosis of primary angle closure glaucoma important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Glaucoma is a group of eye diseases that cause damage to the optic nerve at the back of the eye. If untreated, glaucoma can lead to blindness. Primary angle closure glaucoma is a type of glaucoma, where the drainage route for the fluid inside the eye (known as the angle) is narrowed or blocked, leading to raised eye pressure and loss of the field of vision. Primary angle closure glaucoma accounts for a quarter of all cases of glaucoma globally and it is more likely to lead to vision loss than the more common form, primary open angle glaucoma.\nA variety of non-invasive tests are available to identify people at risk of primary angle closure glaucoma in a community or non-specialist clinical setting. Those who test positive are referred for further specialist investigation and possible treatment. Failure to detect this condition (a false negative result) may result in an increased risk of progressive optic nerve damage and blindness. An incorrect diagnosis (a false positive result) could lead to unnecessary and costly investigation."
}
]
|
query-laysum | 2255 | [
{
"role": "user",
"content": "Abstract: Background\nIt is estimated that up to 75% of cancer survivors may experience cognitive impairment as a result of cancer treatment and given the increasing size of the cancer survivor population, the number of affected people is set to rise considerably in coming years. There is a need, therefore, to identify effective, non-pharmacological interventions for maintaining cognitive function or ameliorating cognitive impairment among people with a previous cancer diagnosis.\nObjectives\nTo evaluate the cognitive effects, non-cognitive effects, duration and safety of non-pharmacological interventions among cancer patients targeted at maintaining cognitive function or ameliorating cognitive impairment as a result of cancer or receipt of systemic cancer treatment (i.e. chemotherapy or hormonal therapies in isolation or combination with other treatments).\nSearch methods\nWe searched the Cochrane Centre Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PUBMED, Cumulative Index of Nursing and Allied Health Literature (CINAHL) and PsycINFO databases. We also searched registries of ongoing trials and grey literature including theses, dissertations and conference proceedings. Searches were conducted for articles published from 1980 to 29 September 2015.\nSelection criteria\nRandomised controlled trials (RCTs) of non-pharmacological interventions to improve cognitive impairment or to maintain cognitive functioning among survivors of adult-onset cancers who have completed systemic cancer therapy (in isolation or combination with other treatments) were eligible. Studies among individuals continuing to receive hormonal therapy were included. We excluded interventions targeted at cancer survivors with central nervous system (CNS) tumours or metastases, non-melanoma skin cancer or those who had received cranial radiation or, were from nursing or care home settings. Language restrictions were not applied.\nData collection and analysis\nAuthor pairs independently screened, selected, extracted data and rated the risk of bias of studies. We were unable to conduct planned meta-analyses due to heterogeneity in the type of interventions and outcomes, with the exception of compensatory strategy training interventions for which we pooled data for mental and physical well-being outcomes. We report a narrative synthesis of intervention effectiveness for other outcomes.\nMain results\nFive RCTs describing six interventions (comprising a total of 235 participants) met the eligibility criteria for the review. Two trials of computer-assisted cognitive training interventions (n = 100), two of compensatory strategy training interventions (n = 95), one of meditation (n = 47) and one of physical activity intervention (n = 19) were identified. Each study focused on breast cancer survivors. All five studies were rated as having a high risk of bias. Data for our primary outcome of interest, cognitive function were not amenable to being pooled statistically. Cognitive training demonstrated beneficial effects on objectively assessed cognitive function (including processing speed, executive functions, cognitive flexibility, language, delayed- and immediate- memory), subjectively reported cognitive function and mental well-being. Compensatory strategy training demonstrated improvements on objectively assessed delayed-, immediate- and verbal-memory, self-reported cognitive function and spiritual quality of life (QoL). 
The meta-analyses of two RCTs (95 participants) did not show a beneficial effect from compensatory strategy training on physical well-being immediately (standardised mean difference (SMD) 0.12, 95% confidence interval (CI) -0.59 to 0.83; I² = 67%) or two months post-intervention (SMD -0.21, 95% CI -0.89 to 0.47; I² = 63%) or on mental well-being two months post-intervention (SMD -0.38, 95% CI -1.10 to 0.34; I² = 67%). Lower mental well-being immediately post-intervention appeared to be observed in patients who received compensatory strategy training compared to wait-list controls (SMD -0.57, 95% CI -0.98 to -0.16; I² = 0%). We assessed the assembled studies using GRADE for physical and mental health outcomes and this evidence was rated to be low quality and, therefore, findings should be interpreted with caution. Evidence for physical activity and meditation interventions on cognitive outcomes is unclear.\nAuthors' conclusions\nOverall, the evidence, albeit of low quality, may be interpreted to suggest that non-pharmacological interventions may have the potential to reduce the risk of, or ameliorate, cognitive impairment following systemic cancer treatment. Larger, multi-site studies including an appropriate, active attentional control group, as well as consideration of functional outcomes (e.g. activities of daily living) are required in order to come to firmer conclusions about the benefits or otherwise of this intervention approach. There is also a need to conduct research into cognitive impairment among cancer patient groups other than women with breast cancer.\n\nGiven the provided abstract, please respond to the following query: \"What are the main findings?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified five eligible studies that described six interventions. These included two studies of computerised cognitive skills practice, two cognitive coping skills training programmes, one meditation intervention and one exercise intervention. All five studies included a total of 235 women who had been treated for breast cancer. The findings suggest that cognitive skills practice and cognitive coping skills training may be useful in improving patient reports and formal assessments of cognition, as well as quality of life. There was insufficient evidence to know if meditation and exercise interventions had any effect on cognition."
}
]
|
query-laysum | 2256 | [
{
"role": "user",
"content": "Abstract: Background\nObstructive sleep apnoea syndrome (OSAS) is associated with several chronic diseases, including erectile dysfunction (ED). The association of OSAS and ED is far more common than might be found by chance; the treatment of OSAS with non-invasive positive airway pressure therapy is associated with improvement of respiratory symptoms, and may contribute to the improvement of associated conditions, such as ED.\nObjectives\nTo assess the effectiveness and acceptability of non-invasive positive airway pressure therapy for improving erectile dysfunction in OSAS.\nSearch methods\nWe identified studies from the Cochrane Airways Trials Register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, AMED EBSCO, and LILACS, the US National Institutes of Health ongoing trials register ClinicalTrials.gov, and the World Health Organisation international clinical trials registry platform to 14 June 2021, with no restriction on date, language, or status of publication. We checked the reference lists of all primary studies, and review articles for additional references, and relevant manufacturers' websites for study information. We also searched specific conference proceedings for the British Association of Urological Surgeons; the European Association of Urology; and the American Urological Association to 14 June 2021.\nSelection criteria\nWe considered randomised controlled trials (RCTs) with a parallel or cross-over design, or cluster-RCTs, which included men aged 18 years or older, with OSAS and ED. We considered RCTs comparing any non-invasive positive airway pressure therapy (such as continuous positive airways pressure (CPAP), bilevel positive airway pressure (BiPAP), variable positive airway pressure (VPAP), or similar devices) versus sham, no treatment, waiting list, or pharmacological treatment for ED. The primary outcomes were remission of ED and serious adverse events; secondary outcome were sex-related quality of life, health-related quality of life, and minor adverse events.\nData collection and analysis\nTwo review authors independently conducted study selection, data extraction, and risk of bias assessment. A third review author solved any disagreement. We used the Cochrane RoB 1 tool to assess the risk of bias of the included RCTs. We used the GRADE approach to assess the certainty of the body of evidence. To measure the treatment effect on dichotomous outcomes, we used the risk ratio (RR); for continuous outcomes, we used the mean difference (MD). We calculated 95% confidence intervals (CI) for these measures. When possible (data availability and homogeneous studies), we used a random-effect model to pool data with a meta-analysis.\nMain results\nWe included six RCTs (all assessing CPAP as the non-invasive positive airway pressure therapy device), with a total of 315 men with OSAS and ED. All RCTs presented some important risk of bias related to selection, performance, assessment, or reporting bias. None of included RCTs assessed the ED remission rate, and we used the provided ED mean scores as a proxy.\nCPAP versus no CPAP\nThere is uncertainty about the effect of CPAP on mean ED scores after 4 weeks, using the International index of erectile function (IIEF-5, higher = better; MD 7.50, 95% CI 4.05 to 10.95; 1 RCT; 27 participants; very low-certainty evidence), and after 12 weeks (IIEF-ED, ED domain; MD 2.50, 95% CI -1.10 to 6.10; 1 RCT; 57 participants; very low-certainty evidence, downgraded due to methodological limitations and imprecision). 
There is uncertainty about the effect of CPAP on sex-related quality of life after 12 weeks, using the Self-esteem and relationship test (SEAR, higher = better; MD 1.00, 95% CI -8.09 to 10.09; 1 RCT; 57 participants; very low-certainty evidence, downgraded due to methodological limitations and imprecision); no serious adverse events were reported after 4 weeks (1 RCT; 27 participants; very low-certainty evidence, downgraded due to methodological limitations and imprecision).\nCPAP versus sham CPAP\nOne RCT assessed this comparison (61 participants), but we were unable to extract outcomes for this comparison due to the factorial design and reporting of this trial.\nCPAP versus sildenafil (phosphodiesterase type 5 inhibitors)\nSildenafil may slightly improve erectile function at 12 weeks when compared to CPAP, measured with the IIEF-ED (MD -4.78, 95% CI -6.98 to -2.58; 3 RCTs; 152 participants; I² = 59%; low-certainty evidence, downgraded due to methodological limitations).\nThere is uncertainty about the effect of CPAP on sex-related quality of life after 12 weeks, measured with the Erectile Dysfunction Inventory of Treatment Satisfaction questionnaire (EDITS, higher = better; MD -1.24, 95% CI -1.80 to -0.67; 2 RCTs; 122 participants; I² = 0%; very low-certainty evidence, downgraded due to methodological limitations). No serious adverse events were reported for either group (2 RCTs; 70 participants; very low-certainty evidence, downgraded due to methodological limitations and imprecision). There is uncertainty about the effects of CPAP when compared to sildenafil for the incidence of minor adverse events (RR 1.33, 95% CI 0.34 to 5.21; 1 RCT; 40 participants; very low-certainty evidence, downgraded due to methodological limitations and imprecision). The confidence interval was wide and neither a significant increase nor reduction in the risk of minor adverse events can be ruled out with the use of CPAP (4/20 men complained of nasal dryness in the CPAP group, and 3/20 men complained of transient flushing and mild headache in the sildenafil group).\nAuthors' conclusions\nWhen compared with no CPAP, we are uncertain about the effectiveness and acceptability of CPAP for improving erectile dysfunction in men with obstructive sleep apnoea. When compared with sildenafil, there is some evidence that sildenafil may slightly improve erectile function at 12 weeks.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included six studies, that included 315 men with OSAS and ED. They compared the use of CPAP with either: no CPAP; with sham device (a device similar to CPAP without positive pressure, as a placebo); or with phosphodiesterase type 5 inhibitors (first-line oral medications for the treatment of ED), for at least month. We evaluated the following primary outcomes (remission of ED and serious adverse events); and secondary outcomes (sex-related quality of life, health-related quality of life, and minor adverse events)."
}
]
|
query-laysum | 2257 | [
{
"role": "user",
"content": "Abstract: Background\nPatellar tendinopathy is an overuse condition that commonly affects athletes. Surgery is usually offered if medical and physical therapies fail to treat it effectively. There is variation in the type of surgery performed for the condition.\nObjectives\nTo assess the benefits and harms of surgery for patellar tendinopathy in adults.\nSearch methods\nWe searched the following databases, to 17 July 2018: the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Library, OVID MEDLINE, OVID Embase, clinical trial registries (www.ClinicalTrials.gov) and the WHO trials portal (www.who.int/ictrp/en/).\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared surgical techniques (open or arthroscopic) with non-operative treatment (including placebo surgery, exercise or other non-surgical modalities) in adults with patellar tendinopathy.\nMajor outcomes assessed were knee pain, function, quality of life, participant global assessment of success, withdrawal rate, proportion with adverse events and proportion with tendon rupture.\nData collection and analysis\nTwo review authors selected studies for inclusion, extracted trial characteristics and outcome data, assessed the risk of bias and assessed the quality of the evidence using GRADE.\nMain results\nTwo trials (92 participants) met our inclusion criteria. Participants in both trials were followed for 12 months. Neither trial compared surgery to placebo surgery. One trial (40 randomised participants) compared open surgical excision with eccentric exercises, and the other compared arthroscopic surgery with sclerosing injections (52 randomised participants). Due to the nature of the interventions, neither the participants or the investigators were blinded to the group allocation, resulting in the potential for performance and detection bias. Some outcomes were selectively not recorded, leading to reporting bias. Overall, the certainty of the evidence from these studies was low for all outcomes due to the potential for bias, and imprecision due to small sample sizes.\nCompared with eccentric exercises, low-certainty evidence indicates that open surgical excision provides no clinically important benefits with respect to knee pain, function or global assessment of success. At 12 months, mean knee pain — measured by pain with standing jump on a 10-point scale (lower scores indicating less pain) — was 1.7 points (standard deviation (SD) 1.6) in the eccentric training group and 1.3 (SD 0.8) in the surgical group (one trial, 40 participants). This equates to an absolute pain reduction of 4% (ranging from 4% worse to 12% better, the minimal clinically important difference being 15%) and a relative reduction in pain of 10% better (ranging from 30% better to 10% worse) in the treatment group. At 12 months, function on the zero- to 100-point Victorian Institute of Sport Assessment (VISA) scale was 65.7 (SD 23.8) in the eccentric training group and 72.9 (SD 11.7) in the surgical group (one trial, 40 participants). This equates to an absolute change of 7% better function (ranging from 4% worse to 19% better) and relative change of 25% better (ranging from 15% worse to 65% better, the minimal clinically important difference being 13%). 
Participant global assessment of success was measured by the number of people with no pain at 12 months: 7/20 participants in the eccentric training group reported no pain, compared with 5/20 in the open surgical group (risk ratio (RR) 0.71 (95% CI 0.27 to 1.88); one trial, 40 participants). There were no withdrawals, but five out of 20 people from the eccentric exercise group crossed over to open surgical excision. Quality of life, adverse events and tendon ruptures were not measured.\nCompared with sclerosing injection, low-certainty evidence indicates that arthroscopic surgery may provide a reduction in pain and improvement in participant global assessment of success, however further studies are likely to change these results. At 12 months, mean pain with activities, measured on a 100-point scale (lower scores indicating less pain), was 41.1 (SD 28.5) in the sclerosing injection group and 12.8 (SD 19.3) in the arthroscopic surgery group (one trial, 52 participants). This equates to an absolute pain reduction of 28% better (ranging from 15% to 42% better, the minimal clinically important difference being 15%), and a relative change of 41% better (ranging from 21% to 61% better). At 12 months, the mean participant global assessment of success, measured by satisfaction on a 100-point scale (scale zero to 100, higher scores indicating greater satisfaction), was 52.9 (SD 32.6) in the sclerosing injection group and 86.8 (SD 20.8) in the arthroscopic surgery group (one trial, 52 participants). This equates to an absolute improvement of 34% (ranging from 19% to 49%). In both groups, one participant (4%) withdrew from the study. Functional outcome scores, including the VISA score, were not reported. Quality-of-life assessment, adverse events, and specifically the proportion with a tendon rupture, were not reported.\nWe did not perform subgroup analysis to assess differences in outcome between arthroscopic or open surgical excision, as we did not identify more than one study with a common comparator.\nAuthors' conclusions\nWe are uncertain if surgery is beneficial over other therapeutic interventions, namely eccentric exercises or injectables. Low-certainty evidence shows that surgery for patellar tendinopathy may not provide clinically important benefits over eccentric exercise in terms of pain, function or participant-reported treatment success, but may provide clinically meaningful pain reduction and treatment success when compared with sclerosing injections. However, further research is likely to change these results. The evidence was downgraded two levels due to the small sample sizes and susceptibility to bias. We are uncertain if there are additional risks associated with surgery as study authors failed to report adverse events. Surgery seems to be embedded in clinical practice for late-stage patella tendinopathy, due to exhaustion of other therapeutic methods rather than evidence of benefit.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Compared with eccentric exercises, open surgery offered little benefit at 12 months (results for individual outcomes as follows).\nPain (lower scores mean less pain)\nImproved by 4% (ranging from 4% worse to 12% better) or by 0.4 points on a scale of zero to 10 points.\nPeople who had surgery rated their pain as 1.3 points.\nPeople who had eccentric exercises rated their pain as 1.7 points.\nGlobal assessment of success (those who reported no pain at 12 months)\n10% fewer people had no pain (ranging from 38% less to 18% more), or 10 fewer people out of 100.\nTwenty-five out of 100 people had no pain with surgery.\nThirty-five out of 100 people had no pain with eccentric exercises.\nWithdrawals\nNo participants in either group withdrew from the study.\nThe study did not report on quality-of-life improvements or adverse events (including tendon ruptures).\nCompared with sclerosing injections, arthroscopic (keyhole) surgery offered some reduction in pain and improvement in participant global assessment of success at 12 months (results for individual outcomes as follows; further studies are likely to change these results).\nPain (lower scores mean less pain)\nImproved by 28% (ranging from 15% to 42% better) or by 28 points on a scale of zero to 100 points.\nPeople who had surgery rated their pain as 12.8 points.\nPeople who had sclerosing injection rated their pain as 41.1 points.\nGlobal assessment of success (participant-reported success, higher score is better)\nImproved by 34% (ranging from 19% to 49% better) or by 33.9 points on a scale of zero to 100 points.\nPeople who had surgery rated their pain as 86.8 points.\nPeople who had sclerosing injection rated their pain as 52.9 points.\nWithdrawals\nOne person from each group (4%) withdrew from the study for reasons unrelated to the treatment.\nThe study did not report on quality-of-life improvements, functional score improvements or adverse events (including tendon ruptures)."
}
]
|
query-laysum | 2258 | [
{
"role": "user",
"content": "Abstract: Background\nDialysis treatments weigh heavily on patients' physical and psychosocial health. Multiple studies have assessed the potential for exercise training to improve outcomes in adults undergoing dialysis. However, uncertainties exist in its relevance and sustainable benefits for patient-important outcomes. This is an update of a review first published in 2011.\nObjectives\nTo assess the benefits and safety of regular structured exercise training in adults undergoing dialysis on patient-important outcomes including death, cardiovascular events, fatigue, functional capacity, pain, and depression. We also aimed to define the optimal prescription of exercise in adults undergoing dialysis.\nSearch methods\nIn this update, we conducted a systematic search of the Cochrane Kidney and Transplant Register of Studies up to 23 December 2020. The Register includes studies identified from CENTRAL, MEDLINE, EMBASE, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov as well as kidney-related journals and the proceedings of major kidney conferences.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs of any structured exercise programs of eight weeks or more in adults undergoing maintenance dialysis compared to no exercise or sham exercise.\nData collection and analysis\nTwo authors independently assessed the search results for eligibility, extracted the data and assessed the risk of bias using the Cochrane risk of bias tool. Whenever appropriate, we performed random-effects meta-analyses of the mean difference in outcomes. The primary outcomes were death (any cause), cardiovascular events and fatigue. Secondary outcomes were health-related quality of life (HRQoL), depression, pain, functional capacity, blood pressure, adherence to the exercise program, and intervention-related adverse events.\nMain results\nWe identified 89 studies involving 4291 randomised participants, of which 77 studies (3846 participants) contributed to the meta-analyses. Seven studies included adults undergoing peritoneal dialysis. Fifty-six studies reported aerobic exercise interventions, 21 resistance exercise interventions and 19 combined aerobic and resistance training within the same study arm. The interventions lasted from eight weeks to two years and most often took place thrice weekly during dialysis treatments. A single study reported death and no study reported long-term cardiovascular events. Five studies directly assessed fatigue, 46 reported HRQoL and 16 reported fatigue or pain through their assessment of HRQoL. Thirty-five studies assessed functional capacity, and 21 reported resting peripheral blood pressure. Twelve studies reported adherence to exercise sessions, and nine reported exercise-related adverse events. Overall, the quality of the included studies was low and blinding of the participants was generally not feasible due to the nature of the intervention.\nExercise had uncertain effects on death, cardiovascular events, and the mental component of HRQoL due to the very low certainty of evidence. Compared with sham or no exercise, exercise training for two to 12 months may improve fatigue in adults undergoing dialysis, however, a meta-analysis could not be conducted. Any exercise training for two to 12 months may improve the physical component of HRQoL (17 studies, 656 participants: MD 4.12, 95% CI 1.88 to 6.37 points on 100 points-scale; I² = 49%; low certainty evidence). 
Any exercise training for two to 12 months probably improves depressive symptoms (10 studies, 441 participants: SMD -0.65, 95% CI -1.07 to -0.22; I² = 77%; moderate certainty evidence) and the magnitude of the effect may be greater when maintaining the exercise beyond four months (6 studies, 311 participants: SMD -0.30, 95% CI -0.74 to 0.14; I² = 71%). Any exercise training for three to 12 months may improve pain (15 studies, 872 participants: MD 5.28, 95% CI -0.12 to 10.69 points on 100 points-scale; I² = 63%; low certainty evidence); however, the 95% CI indicates that exercise training may make little or no difference in the level of pain. Any exercise training for two to six months probably improves functional capacity as it increased the distance reached during six minutes of walking (19 studies, 827 participants: MD 49.91 metres, 95% CI 37.22 to 62.59; I² = 34%; moderate certainty evidence) and the number of sit-to-stand cycles performed in 30 seconds (MD 2.33 cycles, 95% CI 1.71 to 2.96; moderate certainty evidence). There was insufficient evidence to assess the safety of exercise training for adults undergoing maintenance dialysis. The results were similar for aerobic exercise, resistance exercise, and a combination of both aerobic and resistance exercise.\nAuthors' conclusions\nIt is uncertain whether exercise training reduces death or cardiovascular events, or improves the mental component of HRQoL in adults undergoing maintenance dialysis. Exercise training probably improves depressive symptoms, particularly when the intervention is maintained beyond four months. Exercise training is also likely to improve functional capacity. Low certainty evidence suggested that exercise training may improve fatigue, the physical component of quality of life, and pain. The safety of exercise training for adults undergoing dialysis remains uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Exercise training for people undergoing maintenance dialysis is likely to improve depression and their capacity to perform activities and tasks. Exercise training may also improve fatigue and pain sightly. Exercise training may improve the physical aspects of quality of life, but it is unclear whether it improves a person's mental well-being. It is unclear whether exercise training reduces the number of deaths or cardiovascular events."
}
]
|
query-laysum | 2259 | [
{
"role": "user",
"content": "Abstract: Background\nTransurethral radiofrequency collagen denaturation is a relatively novel, minimally invasive device-based intervention used to treat individuals with urinary incontinence (UI). No systematic review of the evidence supporting its use has been published to date.\nObjectives\nTo evaluate the efficacy of transurethral radiofrequency collagen denaturation, compared with other interventions, in the treatment of women with UI.\nReview authors sought to compare the following.\n• Transurethral radiofrequency collagen denaturation versus no treatment/sham treatment.\n• Transurethral radiofrequency collagen denaturation versus conservative physical treatment.\n• Transurethral radiofrequency collagen denaturation versus mechanical devices (pessaries for UI).\n• Transurethral radiofrequency collagen denaturation versus drug treatment.\n• Transurethral radiofrequency collagen denaturation versus injectable treatment for UI.\n• Transurethral radiofrequency collagen denaturation versus other surgery for UI.\nSearch methods\nWe conducted a systematic search of the Cochrane Incontinence Group Specialised Register (searched 19 December 2014), EMBASE and EMBASE Classic (January 1947 to 2014 Week 50), Google Scholar and three trials registries in December 2014, along with reference checking. We sought to identify unpublished studies by handsearching abstracts of major gynaecology and urology meetings, and by contacting experts in the field and the device manufacturer.\nSelection criteria\nRandomised and quasi-randomised trials of transurethral radiofrequency collagen denaturation versus no treatment/sham treatment, conservative physical treatment, mechanical devices, drug treatment, injectable treatment for UI or other surgery for UI in women were eligible.\nData collection and analysis\nWe screened search results and selected eligible studies for inclusion. We assessed risk of bias and analysed dichotomous variables as risk ratios (RRs) with 95% confidence intervals (CIs) and continuous variables as mean differences (MDs) with 95% CIs. We rated the quality of evidence using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach.\nMain results\nWe included in the analysis one small sham-controlled randomised trial of 173 women performed in the United States. Participants enrolled in this study had been diagnosed with stress UI and were randomly assigned to transurethral radiofrequency collagen denaturation (treatment) or a sham surgery using a non-functioning catheter (no treatment). Mean age of participants in the 12-month multi-centre trial was 50 years (range 22 to 76 years).\nOf three patient-important primary outcomes selected for this systematic review, the number of women reporting UI symptoms after intervention was not reported. No serious adverse events were reported for the transurethral radiofrequency collagen denaturation arm or the sham treatment arm during the 12-month trial. Owing to high risk of bias and imprecision, we downgraded the quality of evidence for this outcome to low. The effect of transurethral radiofrequency collagen denaturation on the number of women with an incontinence quality of life (I-QOL) score improvement ≥ 10 points at 12 months was as follows: RR 1.11, 95% CI 0.77 to 1.62; participants = 142, but the confidence interval was wide. 
For this outcome, the quality of evidence was also low as the result of high risk of bias and imprecision.\nWe found no evidence on the number of women undergoing repeat continence surgery. The risk of other adverse events (pain/dysuria (RR 5.73, 95% CI 0.75 to 43.70; participants = 173); new detrusor overactivity (RR 1.36, 95% CI 0.63 to 2.93; participants = 173); and urinary tract infection (RR 0.95, 95% CI 0.24 to 3.86; participants = 173) could not be established reliably as the trial was small. Evidence was insufficient for assessment of whether use of transurethral radiofrequency collagen denaturation was associated with an increased rate of urinary retention, haematuria and hesitancy compared with sham treatment in 173 participants. The GRADE quality of evidence for all other adverse events with available evidence was low as the result of high risk of bias and imprecision.\nWe found no evidence to inform comparisons of transurethral radiofrequency collagen denaturation with conservative physical treatment, mechanical devices, drug treatment, injectable treatment for UI or other surgery for UI.\nAuthors' conclusions\nIt is not known whether transurethral radiofrequency collagen denaturation, as compared with sham treatment, improves patient-reported symptoms of UI. Evidence is insufficient to show whether the procedure improves disease-specific quality of life. Evidence is also insufficient to show whether the procedure causes serious adverse events or other adverse events in comparison with sham treatment, and no evidence was found for comparison with any other method of treatment for UI.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach, we found no evidence for the question of whether low-temperature heat via the urethra changed the number of women who leaked. We found low-quality evidence related to serious side effects, minor side effects and quality of life when compared with no treatment because data were limited and the study was poorly conducted. We found no evidence on whether this treatment changed the number of women who underwent another surgery. Because we did not find studies that compared this treatment with other treatments, we do not know whether this treatment results in better or worse outcomes."
}
]
|
query-laysum | 2260 | [
{
"role": "user",
"content": "Abstract: Background\nThe hypertensive disorders of pregnancy include pre-eclampsia, gestational hypertension, chronic hypertension, and undefined hypertension. Pre-eclampsia is considerably more prevalent in low-income than in high-income countries. One possible explanation for this discrepancy is dietary differences, particularly calcium deficiency. Calcium supplementation in the second half of pregnancy reduces the serious consequences of pre-eclampsia, but has limited effect on the overall risk of pre-eclampsia. It is important to establish whether calcium supplementation before, and in early pregnancy (before 20 weeks' gestation) has added benefit. Such evidence could count towards justification of population-level interventions to improve dietary calcium intake, including fortification of staple foods with calcium, especially in contexts where dietary calcium intake is known to be inadequate. This is an update of a review first published in 2017.\nObjectives\nTo determine the effect of calcium supplementation, given before or early in pregnancy and for at least the first half of pregnancy, on pre-eclampsia and other hypertensive disorders, maternal morbidity and mortality, and fetal and neonatal outcomes.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Trials Register (31 July 2018), PubMed (13 July 2018), ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP; 31 July 2018), and reference lists of retrieved studies.\nSelection criteria\nEligible studies were randomised controlled trials (RCT) of calcium supplementation, including women not yet pregnant, or women in early pregnancy. Cluster-RCTs, quasi-RCTs, and trials published as abstracts were eligible, but we did not identify any.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data, and checked them for accuracy. They assessed the quality of the evidence for key outcomes using the GRADE approach.\nMain results\nCalcium versus placebo\nWe included one study (1355 women), which took place across multiple hospital sites in Argentina, South Africa, and Zimbabwe. Most analyses were conducted only on 633 women from this group who were known to have conceived, or on 579 who reached 20 weeks' gestation; the trial was at moderate risk of bias due to high attrition rates pre-conception. Non-pregnant women with previous pre-eclampsia received either calcium 500 mg daily or placebo, from enrolment until 20 weeks' gestation. All participants received calcium 1.5 g daily from 20 weeks until birth.\nPrimary outcomes: calcium supplementation commencing before conception may make little or no difference to the risk of pre-eclampsia (69/296 versus 82/283, risk ratio (RR) 0.80, 95% confidence interval (CI) 0.61 to 1.06; low-quality evidence). For pre-eclampsia or pregnancy loss or stillbirth (or both) at any gestational age, calcium may slightly reduce the risk of this composite outcome, however the 95% CI met the line of no effect (RR 0.82, 95% CI 0.66 to 1.00; low-quality evidence). 
Supplementation may make little or no difference to the severe maternal morbidity and mortality index (RR 0.93, 95% CI 0.68 to 1.26; low-quality evidence), pregnancy loss or stillbirth at any gestational age (RR 0.83, 95% CI 0.61 to 1.14; low-quality evidence), or caesarean section (RR 1.11, 95% CI 0.96 to 1.28; low-quality evidence).\nCalcium supplementation may make little or no difference to the following secondary outcomes: birthweight < 2500 g (RR 1.00, 95% CI 0.76 to 1.30; low-quality evidence), preterm birth < 37 weeks (RR 0.90, 95% CI 0.74 to 1.10), early preterm birth < 32 weeks (RR 0.79, 95% CI 0.56 to 1.12), and pregnancy loss, stillbirth or neonatal death before discharge (RR 0.82, 95% CI 0.61 to 1.10; low-quality evidence), no conception, gestational hypertension, gestational proteinuria, severe gestational hypertension, severe pre-eclampsia, severe pre-eclamptic complications index. There was no clear evidence on whether or not calcium might make a difference to perinatal death, or neonatal intensive care unit admission for > 24h, or both (RR 1.11, 95% CI 0.77 to 1.60; low-quality evidence).\nIt is unclear what impact calcium supplementation has on Apgar score < 7 at five minutes (RR 0.43, 95% CI 0.15 to 1.21; very low-quality evidence), stillbirth, early onset pre-eclampsia, eclampsia, placental abruption, intensive care unit admission > 24 hours, maternal death, hospital stay > 7 days from birth, and pregnancy loss before 20 weeks' gestation.\nAuthors' conclusions\nThe single included study suggested that calcium supplementation before and early in pregnancy may reduce the risk of women experiencing the composite outcome pre-eclampsia or pregnancy loss at any gestational age, but the results are inconclusive for all other outcomes for women and babies. Therefore, current evidence neither supports nor refutes the routine use of calcium supplementation before conception and in early pregnancy.\nTo determine the overall benefit of calcium supplementation commenced before or in early pregnancy, the effects found in the study of calcium supplementation limited to the first half of pregnancy need to be added to the known benefits of calcium supplementation in the second half of pregnancy.\nFurther research is needed to confirm whether initiating calcium supplementation pre- or in early pregnancy is associated with a reduction in adverse pregnancy outcomes for mother and baby. Research could also address the acceptability of the intervention to women, which was not covered by this review update.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We need more research to decide whether or not calcium before pregnancy or during early pregnancy helps women avoid high blood-pressure and other related problems.\nFurther research is needed to confirm whether initiating calcium supplementation pre- or in early pregnancy is associated with a reduction in adverse pregnancy outcomes for mother and baby. Research could also address the acceptability of the intervention to women, which was not covered by this review update."
}
]
|
query-laysum | 2261 | [
{
"role": "user",
"content": "Abstract: Background\nChronic pain, considered to be pain lasting more than three months, is a common and often difficult to treat condition that can significantly impact upon function and quality of life. Treatment typically includes pharmacological and non-pharmacological approaches. Transcutaneous electrical nerve stimulation (TENS) is an adjunct non-pharmacological treatment commonly recommended by clinicians and often used by people with pain.\nObjectives\nTo provide an overview of evidence from Cochrane Reviews of the effectiveness of TENS to reduce pain in adults with chronic pain (excluding headache or migraine).\nTo provide an overview of evidence from Cochrane Reviews of the safety of TENS when used to reduce pain in adults with chronic pain (excluding headache or migraine).\nTo identify possible sources of inconsistency in the approaches taken to evaluating the evidence related to TENS for chronic pain (excluding headache or migraine) in the Cochrane Library with a view to recommending strategies to improve consistency in methodology and reporting.\nTo highlight areas of remaining uncertainty regarding the effectiveness of TENS for chronic pain (excluding headache or migraine) with a view to recommending strategies to reduce any uncertainty.\nMethods\nSearch methods\nWe searched the Cochrane Database of Systematic Reviews (CDSR), in the Cochrane Library, across all years up to Issue 11 of 12, 2018.\nSelection of reviews\nTwo authors independently screened the results of the electronic search by title and abstract against inclusion/exclusion criteria. We included all Cochrane Reviews of randomised controlled trials (RCTs) assessing the effectiveness of TENS in people with chronic pain. We included reviews if they investigated the following: TENS versus sham; TENS versus usual care or no treatment/waiting list control; TENS plus active intervention versus active intervention alone; comparisons between different types of TENS; or TENS delivered using different stimulation parameters.\nData extraction and analysis\nTwo authors independently extracted relevant data, assessed review quality using the AMSTAR checklist and applied GRADE judgements where required to individual reviews. Our primary outcomes included pain intensity and nature/incidence of adverse effects; our secondary outcomes included disability, health-related quality of life, analgesic medication use and participant global impression of change.\nMain results\nWe included nine reviews investigating TENS use in people with defined chronic pain or in people with chronic conditions associated with ongoing pain. One review investigating TENS for phantom or stump-associated pain in people following amputation did not have any included studies. We therefore extracted data from eight reviews which represented 51 TENS-related RCTs representing 2895 TENS-comparison participants entered into the studies.\nThe included reviews followed consistent methods and achieved overall high scores on the AMSTAR checklist. The evidence reported within each review was consistently rated as very low quality. Using review authors' assessment of risk of bias, there were significant methodological limitations in included studies; and for all reviews, sample sizes were consistently small (the majority of studies included fewer than 50 participants per group).\nSix of the eight reviews presented a narrative synthesis of included studies. 
Two reviews reported a pooled analysis.\nPrimary and secondary outcomes\nOne review reported a beneficial effect of TENS versus sham therapy at reducing pain intensity on a 0 to 10 scale (MD −1.58, 95% CI −2.08 to −1.09, P < 0.001, I² = 29%, P = 0.22, 5 studies, 207 participants). However the quality of the evidence was very low due to significant methodological limitations and imprecision. A second review investigating pain intensity performed a pooled analysis by combining studies that compared TENS to sham with studies that compared TENS to no intervention (SMD −0.85, 95% CI −1.36 to −0.34, P = 0.001, I² = 83%, P < 0.001). This pooled analysis was judged as offering very low quality evidence due to significant methodological limitations, large between-trial heterogeneity and imprecision. We considered the approach of combining sham and no intervention data to be problematic since we would predict these different comparisons may be estimating different true effects. All remaining reviews also reported pain intensity as an outcome measure; however the data were presented in narrative review form only.\nDue to methodological limitation and lack of useable data, we were unable to offer any meaningful report on the remaining primary outcome regarding nature/incidence of adverse effects, nor for the remaining secondary outcomes: disability, health-related quality of life, analgesic medication use and participant global impression of change for any comparisons.\nWe found the included reviews had a number of inconsistencies when evaluating the evidence from TENS studies. Approaches to assessing risk of bias around the participant, personnel and outcome-assessor blinding were perhaps the most obvious area of difference across included reviews. We also found wide variability in terms of primary and secondary outcome measures, and inclusion/exclusion criteria for studies varied with respect to including studies which assessed immediate effects of single interventions.\nAuthors' conclusions\nWe found the methodological quality of the reviews was good, but quality of the evidence within them was very low. We were therefore unable to conclude with any confidence that, in people with chronic pain, TENS is harmful, or beneficial for pain control, disability, health-related quality of life, use of pain relieving medicines, or global impression of change. We make recommendations with respect to future TENS study designs which may meaningfully reduce the uncertainty relating to the effectiveness of this treatment in people with chronic pain.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are unable to confidently state whether TENS is effective in relieving pain in people with chronic pain. This is due to the very low quality of the evidence, and the overall small numbers of participants included in studies in the reviews. Issues with quality, study size and lack of data meant we were unable to draw any conclusion on TENS-associated harms or side-effects or the effect of TENS on disability, health-related quality of life, use of pain-relieving medicines or people's impression of how much TENS changed their condition."
}
]
|
query-laysum | 2262 | [
{
"role": "user",
"content": "Abstract: Background\nSquamous cell carcinoma is the most common form of malignancy of the oral cavity, and is often proceeded by oral potentially malignant disorders (OPMD). Early detection of oral cavity squamous cell carcinoma (oral cancer) can improve survival rates. The current diagnostic standard of surgical biopsy with histology is painful for patients and involves a delay in order to process the tissue and render a histological diagnosis; other diagnostic tests are available that are less invasive and some are able to provide immediate results. This is an update of a Cochrane Review first published in 2015.\nObjectives\nPrimary objective: to estimate the diagnostic accuracy of index tests for the detection of oral cancer and OPMD, in people presenting with clinically evident suspicious and innocuous lesions.\nSecondary objective: to estimate the relative accuracy of the different index tests.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: MEDLINE Ovid (1946 to 20 October 2020), and Embase Ovid (1980 to 20 October 2020). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were also searched for ongoing trials to 20 October 2020. No restrictions were placed on the language or date of publication when searching the electronic databases. We conducted citation searches, and screened reference lists of included studies for additional references.\nSelection criteria\nWe selected studies that reported the diagnostic test accuracy of the following index tests when used as an adjunct to conventional oral examination in detecting OPMD or oral cavity squamous cell carcinoma: vital staining (a dye to stain oral mucosa tissues), oral cytology, light-based detection and oral spectroscopy, blood or saliva analysis (which test for the presence of biomarkers in blood or saliva).\nData collection and analysis\nTwo review authors independently screened titles and abstracts for relevance. Eligibility, data extraction and quality assessment were carried out by at least two authors, independently and in duplicate. Studies were assessed for methodological quality using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Meta-analysis was used to combine the results of studies for each index test using the bivariate approach to estimate the expected values of sensitivity and specificity.\nMain results\nThis update included 63 studies (79 datasets) published between 1980 and 2020 evaluating 7942 lesions for the quantitative meta-analysis. These studies evaluated the diagnostic accuracy of conventional oral examination with: vital staining (22 datasets), oral cytology (24 datasets), light-based detection or oral spectroscopy (24 datasets). Nine datasets assessed two combined index tests. There were no eligible diagnostic accuracy studies evaluating blood or salivary sample analysis. 
Two studies were classed as being at low risk of bias across all domains, and 33 studies were at low concern for applicability across the three domains, where patient selection, the index test, and the reference standard used were generalisable across the population attending secondary care.\nThe summary estimates obtained from the meta-analysis were:\n- vital staining: sensitivity 0.86 (95% confidence interval (CI) 0.79 to 0.90) specificity 0.68 (95% CI 0.58 to 0.77), 20 studies, sensitivity low-certainty evidence, specificity very low-certainty evidence;\n- oral cytology: sensitivity 0.90 (95% CI 0.82 to 0.94) specificity 0.94 (95% CI 0.88 to 0.97), 20 studies, sensitivity moderate-certainty evidence, specificity moderate-certainty evidence;\n- light-based: sensitivity 0.87 (95% CI 0.78 to 0.93) specificity 0.50 (95% CI 0.32 to 0.68), 23 studies, sensitivity low-certainty evidence, specificity very low-certainty evidence; and\n- combined tests: sensitivity 0.78 (95% CI 0.45 to 0.94) specificity 0.71 (95% CI 0.53 to 0.84), 9 studies, sensitivity very low-certainty evidence, specificity very low-certainty evidence.\nAuthors' conclusions\nAt present none of the adjunctive tests can be recommended as a replacement for the currently used standard of a surgical biopsy and histological assessment. Given the relatively high values of the summary estimates of sensitivity and specificity for oral cytology, this would appear to offer the most potential. Combined adjunctive tests involving cytology warrant further investigation. Potentially eligible studies of blood and salivary biomarkers were excluded from the review as they were of a case-control design and therefore ineligible. In the absence of substantial improvement in the tests evaluated in this updated review, further research into biomarkers may be warranted.\n\nGiven the provided abstract, please respond to the following query: \"Why is it important to improve the detection of oral cancer?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Persistent abnormal patches or sores in the mouth may represent mouth cancer or oral potentially malignant disorders (OPMDs). OPMDs can sometimes turn into mouth cancer, but if identified early enough patient outcomes can be improved."
}
]
|
query-laysum | 2263 | [
{
"role": "user",
"content": "Abstract: Background\nRespiratory viruses are the leading cause of lower respiratory tract infection (LRTI) and hospitalisation in infants and young children. Respiratory syncytial virus (RSV) is the main infectious agent in this population. Palivizumab is administered intramuscularly every month during five months in the first RSV season to prevent serious RSV LRTI in children. Given its high cost, it is essential to know if palivizumab continues to be effective in preventing severe RSV disease in children.\nObjectives\nTo assess the effects of palivizumab for preventing severe RSV infection in children.\nSearch methods\nWe searched CENTRAL, MEDLINE, three other databases and two trials registers to 14 October 2021, together with reference checking, citation searching and contact with study authors to identify additional studies. We searched Embase to October 2020, as we did not have access to this database for 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs), including cluster-RCTs, comparing palivizumab given at a dose of 15 mg/kg once a month (maximum five doses) with placebo, no intervention or standard care in children 0 to 24 months of age from both genders, regardless of RSV infection history.\nData collection and analysis\nWe used Cochrane’s Screen4Me workflow to help assess the search results. Two review authors screened studies for selection, assessed risk of bias and extracted data. We used standard Cochrane methods. We used GRADE to assess the certainty of the evidence. The primary outcomes were hospitalisation due to RSV infection, all-cause mortality and adverse events. Secondary outcomes were hospitalisation due to respiratory-related illness, length of hospital stay, RSV infection, number of wheezing days, days of supplemental oxygen, intensive care unit length of stay and mechanical ventilation days.\nMain results\nWe included five studies with a total of 3343 participants. All studies were parallel RCTs, assessing the effects of 15 mg/kg of palivizumab every month up to five months compared to placebo or no intervention in an outpatient setting, although one study also included hospitalised infants. Most of the included studies were conducted in children with a high risk of RSV infection due to comorbidities like bronchopulmonary dysplasia and congenital heart disease. The risk of bias of outcomes across all studies was similar and predominately low.\nPalivizumab reduces hospitalisation due to RSV infection at two years' follow-up (risk ratio (RR) 0.44, 95% confidence interval (CI) 0.30 to 0.64; 5 studies, 3343 participants; high certainty evidence). Based on 98 hospitalisations per 1000 participants in the placebo group, this corresponds to 43 (29 to 62) per 1000 participants in the palivizumab group. Palivizumab probably results in little to no difference in mortality at two years' follow-up (RR 0.69, 95% CI 0.42 to 1.15; 5 studies, 3343 participants; moderate certainty evidence). Based on 23 deaths per 1000 participants in the placebo group, this corresponds to 16 (10 to 27) per 1000 participants in the palivizumab group. Palivizumab probably results in little to no difference in adverse events at 150 days' follow-up (RR 1.09, 95% CI 0.85 to 1.39; 3 studies, 2831 participants; moderate certainty evidence). Based on 84 cases per 1000 participants in the placebo group, this corresponds to 91 (71 to 117) per 1000 participants in the palivizumab group. 
Palivizumab probably results in a slight reduction in hospitalisation due to respiratory-related illness at two years' follow-up (RR 0.78, 95% CI 0.62 to 0.97; 5 studies, 3343 participants; moderate certainty evidence). Palivizumab may result in a large reduction in RSV infection at two years' follow-up (RR 0.33, 95% CI 0.20 to 0.55; 3 studies, 554 participants; low certainty evidence). Based on 195 cases of RSV infection per 1000 participants in the placebo group, this corresponds to 64 (39 to 107) per 1000 participants in the palivizumab group. Palivizumab also reduces the number of wheezing days at one year's follow-up (RR 0.39, 95% CI 0.35 to 0.44; 1 study, 429 participants; high certainty evidence).\nAuthors' conclusions\nThe available evidence suggests that prophylaxis with palivizumab reduces hospitalisation due to RSV infection and results in little to no difference in mortality or adverse events. Moreover, palivizumab results in a slight reduction in hospitalisation due to respiratory-related illness and may result in a large reduction in RSV infections. Palivizumab also reduces the number of wheezing days. These results may be applicable to children with a high risk of RSV infection due to comorbidities.\nFurther research is needed to establish the effect of palivizumab on children with other comorbidities known as risk factors for severe RSV disease (e.g. immune deficiencies) and other social determinants of the disease, including children living in low- and middle-income countries, tropical regions, children lacking breastfeeding, living in poverty, or members of families in overcrowded situations.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 14 October 2021."
}
]
|
query-laysum | 2264 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated review originally published in 2004 and first updated in 2007. This version includes substantial changes to bring it in line with current methodological requirements. Methadone is a synthetic opioid that presents some challenges in dose titration and is recognised to cause potentially fatal arrhythmias in some patients. It does have a place in therapy for people who cannot tolerate other opioids but should be initiated only by experienced practitioners. This review is one of a suite of reviews on opioids for cancer pain.\nObjectives\nTo determine the effectiveness and tolerability of methadone as an analgesic in adults and children with cancer pain.\nSearch methods\nFor this update we searched CENTRAL, MEDLINE, Embase, CINAHL, and clinicaltrials.gov, to May 2016, without language restriction. We also checked reference lists in relevant articles.\nSelection criteria\nWe sought randomised controlled trials comparing methadone (any formulation and by any route) with active or placebo comparators in people with cancer pain.\nData collection and analysis\nAll authors agreed on studies for inclusion. We retrieved full texts whenever there was any uncertainty about eligibility. One review author extracted data, which were checked by another review author. There were insufficient comparable data for meta-analysis. We extracted information on the effect of methadone on pain intensity or pain relief, the number or proportion of participants with 'no worse than mild pain'. We looked for data on withdrawal and adverse events. We looked specifically for information about adverse events relating to appetite, thirst, and somnolence. We assessed the evidence using GRADE and created a 'Summary of findings' table.\nMain results\nWe revisited decisions made in the earlier version of this review and excluded five studies that were previously included. We identified one new study for this update. This review includes six studies with 388 participants. We did not identify any studies in children.\nThe included studies differed so much in their methods and comparisons that no synthesis of results was feasible. Only one study (103 participants) specifically reported the number of participants with a given level of pain relief, in this case a reduction of at least 20% - similar in both the methadone and morphine groups. Using an outcome of 'no worse than mild pain', methadone was similar to morphine in effectiveness, and most participants who could tolerate methadone achieved 'no worse than mild pain'. Adverse event withdrawals with methadone were uncommon (12/202) and similar in other groups. Deaths were uncommon except in one study where the majority of participants died, irrespective of treatment group. For specific adverse events, somnolence was more common with methadone than with morphine, while dry mouth was more common with morphine than with methadone. None of the studies reported effects on appetite.\nWe judged the quality of evidence to be low, downgraded due to risk of bias and sparse data. For specific adverse events, we considered the quality of evidence to be very low, downgraded due to risk of bias, sparse data, and indirectness, as surrogates for appetite, thirst and somnolence were used.\nThere were no data on the use of methadone in children.\nAuthors' conclusions\nBased on low-quality evidence, methadone is a drug that has similar analgesic benefits to morphine and has a role in the management of cancer pain in adults. 
Other opioids such as morphine and fentanyl are easier to manage but may be more expensive than methadone in many economies.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One person in two or three who gets cancer will suffer from pain that becomes moderate or severe in intensity. The pain tends to get worse as the cancer progresses. Methadone has been used for many years as one of a number of different pain killers for cancer pain."
}
]
|
query-laysum | 2265 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis (CF) is a multisystem disease and the importance of growth and nutrition has been well established, given its implications for lung function and overall survival. It has been established that intestinal dysbiosis (i.e. microbial imbalance) and inflammation is present in people with CF. Probiotics are commercially available (over-the-counter) and may improve both intestinal and overall health.\nObjectives\nTo assess the efficacy and safety of probiotics for improving health outcomes in children and adults with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. Date of last register search: 20 January 2020.\nWe also searched ongoing trials registries and the reference lists of relevant articles and reviews. Date of last search: 29 January 2019.\nSelection criteria\nRandomised or quasi-randomised controlled trials (RCTs) assessing efficacies and safety of probiotics in children and adults with CF. Cross-over RCTs with a washout phase were included and for those without a washout period, only the first phase of each trial was analysed.\nData collection and analysis\nWe independently extracted data and assessed the risk of bias of the included trials; we used GRADE to assess the certainty of the evidence. We contacted trial authors for additional data. Meta-analyses were undertaken on outcomes at several time points.\nMain results\nWe identified 17 trials and included 12 RCTs (11 completed and one trial protocol - this trial was terminated early) (464 participants). Eight trials included only children, whilst four trials included both children and adults. Trial duration ranged from one to 12 months. Nine trials compared a probiotic (seven single strain and three multistrain preparations) with a placebo preparation, two trials compared a synbiotic (multistrain) with a placebo preparation and one trial compared two probiotic preparations.\nOverall we judged the risk of bias in the 12 trials to be low. Three trials had a high risk of performance bias, two trials a high risk of attrition bias and six trials a high risk of reporting bias. Only two trials were judged to have low or unclear risk of bias for all domains. Four trials were sponsored by grants only, two trials by industry only, two trials by both grants and industry and three trials had an unknown funding source.\nCombined data from four trials (225 participants) suggested probiotics may reduce the number of pulmonary exacerbations during a four to 12 month time-frame, mean difference (MD) -0.32 episodes per participant (95% confidence interval (CI) -0.68 to 0.03; P = 0.07) (low-certainty evidence); however, the 95% CI includes the possibility of both an increased and a reduced number of exacerbations. Additionally, two trials (127 participants) found no evidence of an effect on the duration of antibiotic therapy during the same time period. Combined data from four trials (177 participants) demonstrated probiotics may reduce faecal calprotectin, MD -47.4 µg/g (95% CI -93.28 to -1.54; P = 0.04) (low-certainty evidence), but the results for other biomarkers mainly did not show any difference between probiotics and placebo. Two trials (91 participants) found no evidence of effect on height, weight or body mass index (low-certainty evidence). 
Combined data from five trials (284 participants) suggested there was no difference in lung function (forced expiratory volume at one second (FEV1) % predicted) during a three- to 12-month time frame, MD 1.36% (95% CI -1.20 to 3.91; P = 0.30) (low-certainty evidence). Combined data from two trials (115 participants) suggested there was no difference in hospitalisation rates during a three- to 12-month time frame, MD -0.44 admissions per participant (95% CI -1.41 to 0.54; P = 0.38) (low-certainty evidence). One trial (37 participants) reported health-related quality of life and while the parent report favoured probiotics, SMD 0.87 (95% CI 0.19 to 1.55) the child self-report did not identify any effect, SMD 0.59 (95% CI -0.07 to 1.26) (low-certainty evidence). There were limited results for gastrointestinal symptoms and intestinal microbial profile which were not analysable.\nOnly four trials and one trial protocol (298 participants) reported adverse events as a priori hypotheses. No trials reported any deaths. One terminated trial (12 participants and available as a protocol only) reported a severe allergic reaction (severe urticaria) for one participant in the probiotic group. Two trials reported a single adverse event each (vomiting in one child and diarrhoea in one child). The estimated number needed to harm for any adverse reaction (serious or not) is 52 people (low-certainty evidence).\nAuthors' conclusions\nProbiotics significantly reduce faecal calprotectin (a marker of intestinal inflammation) in children and adults with CF, however the clinical implications of this require further investigation. Probiotics may make little or no difference to pulmonary exacerbation rates, however, further evidence is required before firm conclusions can be made. Probiotics are associated with a small number of adverse events including vomiting, diarrhoea and allergic reactions. In children and adults with CF, probiotics may be considered by patients and their healthcare providers. Given the variability of probiotic composition and dosage, further adequately-powered multicentre RCTs of at least 12 months duration are required to best assess the efficacy and safety of probiotics for children and adults with CF.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were several issues with the overall certainty of the evidence which affects our confidence in the results from these trials. We think the fact that some participants did not complete their treatment or were not included in the reports may affect the results on weight. We think the fact that not all planned outcomes were reported in four trials may affect the results of intestinal inflammation, growth and abdominal symptoms. We think the fact that the pharmaceutical industry sponsored at least four of the trials should be considered when looking at the results of this review. As most trials only included children we are not certain that the results would also apply to adults and there were also some issues related to whether people taking part in the trial knew which treatment they were receiving.\nWe judged the certainty of evidence for changes in pulmonary exacerbations, intestinal inflammation, lung function, hospital admissions, weight and health-related quality of life to be low. Adverse events were only recorded by four trials and the protocol for the terminated trial and the certainty of the evidence was also judged to be low."
}
]
|
query-laysum | 2266 | [
{
"role": "user",
"content": "Abstract: Background\nDemodex blepharitis is a chronic condition commonly associated with recalcitrant dry eye symptoms though many people with Demodex mites are asymptomatic. The primary cause of this condition in humans is two types of Demodex mites: Demodex folliculorum and Demodex brevis. There are varying reports of the prevalence of Demodex blepharitis among adults, and it affects both men and women equally. While Demodex mites are commonly treated with tea tree oil, the effectiveness of tea tree oil for treating Demodex blepharitis is not well documented.\nObjectives\nTo evaluate the effects of tea tree oil on ocular Demodex infestation in people with Demodex blepharitis.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register) (2019, Issue 6); Ovid MEDLINE; Embase.com; PubMed; LILACS; ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We used no date or language restrictions in the electronic search for trials. We last searched the databases on 18 June 2019.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared treatment with tea tree oil (or its components) versus another treatment or no treatment for people with Demodex blepharitis.\nData collection and analysis\nTwo review authors independently screened the titles and abstracts and then full text of records to determine their eligibility. The review authors independently extracted data and assessed risk of bias using Covidence. A third review author resolved any conflicts at all stages.\nMain results\nWe included six RCTs (1124 eyes of 562 participants; 17 to 281 participants per study) from the US, Korea, China, Australia, Ireland, and Turkey. The RCTs compared some formulation of tea tree oil to another treatment or no treatment. Included participants were both men and women, ranging from 39 to 55 years of age. All RCTs were assessed at unclear or high risk of bias in one or more domains. We also identified two RCTs that are ongoing or awaiting publications.\nData from three RCTs that reported a short-term mean change in the number of Demodex mites per eight eyelashes contributed to a meta-analysis. We are uncertain about the mean reduction for the groups that received the tea tree oil intervention (mean difference [MD] 0.70, 95% confidence interval [CI] 0.24 to 1.16) at four to six weeks as compared to other interventions. Only one RCT reported data for long-term changes, which found that the group that received intense pulse light as the treatment had complete eradication of Demodex mites at three months. We graded the certainty of the evidence for this outcome as very low.\nThree RCTs reported no evidence of a difference for participant reported symptoms measured on the Ocular Surface Disease Index (OSDI) between the tea tree oil group and the group receiving other forms of intervention. Mean differences in these studies ranged from -10.54 (95% CI – 24.19, 3.11) to 3.40 (95% CI -0.70 7.50). We did not conduct a meta-analysis for this outcome given substantial statistical heterogeneity and graded the certainty of the evidence as low.\nOne RCT provided information concerning visual acuity but did not provide sufficient data for between-group comparisons. The authors noted that mean habitual LogMAR visual acuity for all study participants improved post-treatment (mean LogMAR 1.16, standard deviation 0.26 at 4 weeks). We graded the certainty of evidence for this outcome as low. 
No RCTs provided data on mean change in number of cylindrical dandruff or the proportion of participants experiencing conjunctival injection or experiencing meibomian gland dysfunction.\nThree RCTs provided information on adverse events. One reported no adverse events. The other two described a total of six participants randomized to treatment with tea tree oil who experienced ocular irritation or discomfort that resolved with re-educating the patient on application techniques and continuing use of the tea tree oil. We graded the certainty of the evidence for this outcome as very low.\nAuthors' conclusions\nThe current review suggests that there is uncertainty related to the effectiveness of 5% to 50% tea tree oil for the short-term treatment of Demodex blepharitis; however, if used, lower concentrations may be preferable in the eye care arena to avoid induced ocular irritation. Future studies should be better controlled, assess outcomes at long term (e.g. 10 to 12 weeks or beyond), account for patient compliance, and study the effects of different tea tree oil concentrations.\n\nGiven the provided abstract, please respond to the following query: \"What was the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To examine the effects of tea tree oil, an essential oil derived from an Australian tree, which can be applied in many different forms (eyelid wipes, eyelid shampoo, oil massages, etc.) on Demodex blepharitis. Demodex blepharitis is an inflammation of the eyelid caused byDemodexmites (frequently referred to as eyelash mites)."
}
]
|
query-laysum | 2267 | [
{
"role": "user",
"content": "Abstract: Background\nTraumatic peripheral nerve injury is common and incurs significant cost to individuals and society. Healing following direct nerve repair or repair with autograft is slow and can be incomplete. Several bioengineered nerve wraps or devices have become available as an alternative to direct repair or autologous nerve graft. Nerve wraps attempt to reduce axonal escape across a direct repair site and nerve devices negate the need for a donor site defect, required by an autologous nerve graft. Comparative evidence to guide clinicians in their potential use is lacking. We collated existing evidence to guide the clinical application of currently available nerve wraps and conduits.\nObjectives\nTo assess and compare the effects and complication rates of licensed bioengineered nerve conduits or wraps for surgical repair of traumatic peripheral nerve injuries of the upper limb.\nTo compare effects and complications against the current gold surgical standard (direct repair or nerve autograft).\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search was 26 January 2022. We searched online and, where not accessible, contacted societies' secretariats to review abstracts from the British Surgical Society of the Hand, International Federation of Surgical Societies of the Hand, Federation of European Surgical Societies of the Hand, and the American Society for Peripheral Nerve from October 2007 to October 2018.\nSelection criteria\nWe included parallel group randomised controlled trials (RCTs) and quasi-RCTs of nerve repair in the upper limb using a bioengineered wrap or conduit, with at least 12 months of follow-up.\nData collection and analysis\nWe used standard Cochrane procedures. Our primary outcomes were 1. muscle strength and 2. sensory recovery at 24 months or more. Our secondary outcomes were 3. British Medical Research Council (BMRC) grading, 4. integrated functional outcome (Rosén Model Instrument (RMI)), 5. touch threshold, 6. two-point discrimination, 7. cold intolerance, 8. impact on daily living measured using the Disability of Arm Shoulder and Hand Patient-Reported Outcome Measure (DASH-PROM), 9. sensory nerve action potential, 10. cost of the device, and 11. adverse events (any and specific serious adverse events (further surgery)). We used GRADE to assess the certainty of the evidence.\nMain results\nFive studies involving 213 participants and 257 nerve injuries reconstructed with wraps or conduits (129 participants) or standard repair (128 participants) met the inclusion criteria. Of those in the standard repair group, 119 nerve injuries were managed with direct epineurial repair, and nine autologous nerve grafts were performed. One study excluded the outcome data for the repair using an autologous nerve graft from their analysis, as it was the only autologous nerve graft in the study, so data were available for 127 standard repairs. There was variation in the functional outcome measures reported and the time postoperatively at which they were recorded.\nMean sensory recovery, assessed with BMRC sensory grading (range S0 to S4, higher score considered better) was 0.03 points higher in the device group (range 0.43 lower to 0.49 higher; 1 RCT, 28 participants; very low-certainty evidence) than in the standard repair group (mean 2.75 points), which suggested little or no difference between the groups, but the evidence is very uncertain. 
There may be little or no difference at 24 months in mean touch thresholds between standard repair (0.81) and repair using devices, which was 0.01 higher but this evidence is also very uncertain (95% confidence interval (CI) 0.06 lower to 0.08 higher; 1 trial, 32 participants; very low-certainty evidence).\nData were not available to assess BMRC motor grading at 24 months or more. Repair using bioengineered devices may not improve integrated functional outcome scores at 24 months more than standard techniques, as assessed by the Rosén Model Instrument (RMI; range 0 to 3, higher scores better); the CIs allow for both no important difference and a better outcome with standard repair (mean RMI 1.875), compared to the device group (0.17 lower, 95% CI 0.38 lower to 0.05 higher; P = 0.13; 2 trials, 60 participants; low-certainty evidence). Data from one study suggested that the five-year postoperative outcome of RMI may be slightly improved after repair using a device (mean difference (MD) 0.23, 95% CI 0.07 to 0.38; 1 trial, 28 participants; low-certainty evidence). No studies measured impact on daily living using DASH-PROM.\nThe proportion of people with adverse events may be greater with nerve wraps or conduits than with standard techniques, but the evidence is very uncertain (risk ratio (RR) 7.15, 95% CI 1.74 to 29.42; 5 RCTs, 213 participants; very low-certainty evidence). This corresponds to 10 adverse events per 1000 people in the standard repair group and 68 per 1000 (95% CI 17 to 280) in the device group. The use of nerve repair devices may be associated with a greater need for revision surgery but this evidence is also very uncertain (12/129 device repairs required revision surgery (removal) versus 0/127 standard repairs; RR 7.61, 95% CI 1.48 to 39.02; 5 RCTs, 256 nerve repairs; very low-certainty evidence).\nAuthors' conclusions\nBased on the available evidence, this review does not support use of currently available nerve repair devices over standard repair. There is significant heterogeneity in participants, injury pattern, repair timing, and outcome measures and their timing across studies of nerve repair using bioengineered devices, which make comparisons unreliable. Studies were generally small and at high or unclear risk of bias. These factors render the overall certainty of evidence for any outcome low or very low. The data reviewed here provide some evidence that more people may experience adverse events with use of currently available bioengineered devices than with standard repair techniques, and the need for revision surgery may also be greater. The evidence for sensory recovery is very uncertain and there are no data for muscle strength at 24 months (our primary outcome measures). We need further trials, adhering to a minimum standard of outcome reporting (with at least 12 months' follow-up, including integrated sensorimotor evaluation and patient-reported outcomes) to provide high-certainty evidence and facilitate more detailed analysis of effectiveness of emerging, increasingly sophisticated, bioengineered repair devices.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We could not reliably compare hand or nerve function between studies, and a major finding of this review is that we need standardised study designs. We will need well planned studies of new nerve repair devices to inform safe future use.\nThe evidence is up to date to January 2022."
}
]
|
query-laysum | 2268 | [
{
"role": "user",
"content": "Abstract: Background\nAcute sinusitis is a common reason for primary care encounters. It causes significant symptoms including facial pain, congested nose, headache, thick nasal mucus, fever, and cough and often results in time off work or school. Sinusitis treatment focuses on eliminating causative factors and controlling the inflammatory and infectious components. The frozen, dried, natural fluid extract of the Cyclamen europaeum plant delivered intranasally is thought to have beneficial effects in relieving congestion by facilitating nasal drainage, and has an anti-inflammatory effect.\nObjectives\nTo assess the effectiveness of topical intranasal Cyclamen europaeum extract on clinical response in adults and children with acute sinusitis.\nSearch methods\nWe searched CENTRAL, which includes the Cochrane Acute Respiratory Infections Group's Specialised Register, MEDLINE, Embase, and trials registers (ClinicalTrials.gov; WHO ICTRP) in January 2018. We also searched the reference lists of included studies and review literature for further relevant studies and contacted trial authors for additional information.\nSelection criteria\nRandomised controlled trials comparing Cyclamen europaeum extract administered intranasally to placebo, antibiotics, intranasal corticosteroids, or no treatment in adults or children, or both, with acute sinusitis. Acute sinusitis was defined by clinical diagnosis and confirmed by nasal endoscopy or by radiological evidence.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included two randomised controlled trials that involved a total of 147 adult outpatients with acute sinusitis confirmed by radiology or nasal endoscopy who were assigned to Cyclamen europaeum nasal spray or placebo study arms for up to 15 days. The risk of selection and detection bias was unclear, as allocation concealment and blinding of outcome assessors were not reported in either study. Attrition was high (60%) in one study, although dropouts were balanced between study arms.\nNeither study reported our two primary outcomes: proportion of participants whose symptoms resolved or improved at 14 days and 30 days. No serious adverse events or complications related to treatment were reported; however, more mild adverse events such as nasal and throat irritation, mild epistaxis, and sneezing occurred in Cyclamen europaeum group participants (50%) compared to placebo group participants (24%) (risk ratio 2.11, 95% confidence interval 1.35 to 3.29); moderate-quality evidence.\nAuthors' conclusions\nThe effectiveness of Cyclamen europaeum for people with acute sinusitis is unknown. Although no serious side effects were observed, 50% of participants who received Cyclamen europaeum reported adverse events compared with 24% of those who received placebo.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We assessed the effectiveness of Cyclamen europaeum frozen, dried, natural fluid extract, an herbal medicine from the tuber of Cyclamen europaeum, given as a nasal spray compared with sham treatment (placebo) to resolve or improve acute sinusitis in adults and children."
}
]
|
query-laysum | 2269 | [
{
"role": "user",
"content": "Abstract: Background\nHalitosis or bad breath is a symptom in which a noticeably unpleasant breath odour is present due to an underlying oral or systemic disease. 50% to 60% of the world population has experienced this problem which can lead to social stigma and loss of self-confidence. Multiple interventions have been tried to control halitosis ranging from mouthwashes and toothpastes to lasers. This new Cochrane Review incorporates Cochrane Reviews previously published on tongue scraping and mouthrinses for halitosis.\nObjectives\nThe objectives of this review were to assess the effects of various interventions used to control halitosis due to oral diseases only. We excluded studies including patients with halitosis secondary to systemic disease and halitosis-masking interventions.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 8 April 2019), the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 3) in the Cochrane Library (searched 8 April 2019), MEDLINE Ovid (1946 to 8 April 2019), and Embase Ovid (1980 to 8 April 2019). We also searched LILACS BIREME (1982 to 19 April 2019), the National Database of Indian Medical Journals (1985 to 19 April 2019), OpenGrey (1992 to 19 April 2019), and CINAHL EBSCO (1937 to 19 April 2019). The US National Institutes of Health Ongoing Trials Register ClinicalTrials.gov (8 April 2019), the World Health Organization International Clinical Trials Registry Platform (8 April 2019), the ISRCTN Registry (19 April 2019), the Clinical Trials Registry - India (19 April 2019), were searched for ongoing trials. We also searched the cross-references of included studies and systematic reviews published on the topic. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials (RCTs) which involved adults over the age of 16, and any intervention for managing halitosis compared to another or placebo, or no intervention. The active interventions or controls were administered over a minimum of one week and with no upper time limit. We excluded quasi-randomised trials, trials comparing the results for less than one week follow-up, and studies including advanced periodontitis.\nData collection and analysis\nTwo pairs of review authors independently selected trials, extracted data, and assessed risk of bias. We estimated mean differences (MDs) for continuous data, with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 44 trials in the review with 1809 participants comparing an intervention with a placebo or a control. The age of participants ranged from 17 to 77 years. Most of the trials reported on short-term follow-up (ranging from one week to four weeks). 
Only one trial reported long-term follow-up (three months).\nThree studies were at low overall risk of bias, 16 at high overall risk of bias, and the remaining 25 at unclear overall risk of bias.\nWe compared different types of interventions which were categorised as mechanical debridement, chewing gums, systemic deodorising agents, topical agents, toothpastes, mouthrinse/mouthwash, tablets, and combination methods.\nMechanical debridement: for mechanical tongue cleaning versus no tongue cleaning, the evidence was very uncertain for the outcome dentist-reported organoleptic test (OLT) scores (MD -0.20, 95% CI -0.34 to -0.07; 2 trials, 46 participants; very low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nChewing gums: for 0.6% eucalyptus chewing gum versus placebo chewing gum, the evidence was very uncertain for the outcome dentist-reported OLT scores (MD -0.10, 95% CI -0.31 to 0.11; 1 trial, 65 participants; very low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nSystemic deodorising agents: for 1000 mg champignon versus placebo, the evidence was very uncertain for the outcome patient-reported visual analogue scale (VAS) scores (MD -1.07, 95% CI -14.51 to 12.37; 1 trial, 40 participants; very low-certainty evidence). No data were reported for dentist-reported OLT score or adverse events.\nTopical agents: for hinokitiol gel versus placebo gel, the evidence was very uncertain for the outcome dentist-reported OLT scores (MD -0.27, 95% CI -1.26 to 0.72; 1 trial, 18 participants; very low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nToothpastes: for 0.3% triclosan toothpaste versus control toothpaste, the evidence was very uncertain for the outcome dentist-reported OLT scores (MD -3.48, 95% CI -3.77 to -3.19; 1 trial, 81 participants; very low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nMouthrinse/mouthwash: for mouthwash containing chlorhexidine and zinc acetate versus placebo mouthwash, the evidence was very uncertain for the outcome dentist-reported OLT scores (MD -0.20, 95% CI -0.58 to 0.18; 1 trial, 44 participants; very low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nTablets: no data were reported on key outcomes for this comparison.\nCombination methods: for brushing plus cetylpyridium mouthwash versus brushing, the evidence was uncertain for the outcome dentist-reported OLT scores (MD -0.48, 95% CI -0.72 to -0.24; 1 trial, 70 participants; low-certainty evidence). No data were reported for patient-reported OLT score or adverse events.\nAuthors' conclusions\nWe found low- to very low-certainty evidence to support the effectiveness of interventions for managing halitosis compared to placebo or control for the OLT and patient-reported outcomes tested. We were unable to draw any conclusions regarding the superiority of any intervention or concentration. Well-planned RCTs need to be conducted by standardising the interventions and concentrations.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Bad breath or halitosis is caused by too much bacteria or small food parts left inside the mouth, most commonly at the back of the tongue. It can be a sign of a disease within the mouth or other body diseases. People with bad breath can have low self-esteem and feel embarrassed. It can affect their personal relationships and work. In this review, we looked at treatments for bad breath due to a disease within the mouth and at treatments that aim to control not just mask bad breath."
}
]
|
query-laysum | 2270 | [
{
"role": "user",
"content": "Abstract: Background\nStaphylococcus aureus causes pulmonary infection in young children with cystic fibrosis. Prophylactic antibiotics are prescribed hoping to prevent such infection and lung damage. Antibiotics have adverse effects and long-term use might lead to infection with Pseudomonas aeruginosa. This is an update of a previously published review.\nObjectives\nTo assess continuous oral antibiotic prophylaxis to prevent the acquisition of Staphylococcus aureus versus no prophylaxis in people with cystic fibrosis, we tested the following hypotheses to investigate whether prophylaxis:\n1. improves clinical status, lung function and survival;\n2. leads to fewer isolates of Staphylococcus aureus;\n3. causes adverse effects (e.g. diarrhoea, skin rash, candidiasis);\n4. leads to fewer isolates of other common pathogens from respiratory secretions;\n5. leads to the emergence of antibiotic resistance and colonisation of the respiratory tract with Pseudomonas aeruginosa.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register, comprising references identified from comprehensive electronic database searches, handsearches of relevant journals and abstract books of conference proceedings. Companies manufacturing anti-staphylococcal antibiotics were contacted.\nMost recent search of the Group's Register: 27 February 2020.\nOnline trials registries were also searched. Most recent search of online trials registries: 15 September 2020.\nSelection criteria\nRandomised trials of continuous oral prophylactic antibiotics (given for at least one year) compared to intermittent antibiotics given 'as required', in people with cystic fibrosis of any disease severity.\nData collection and analysis\nThe authors assessed studies for eligibility and methodological quality and extracted data. The quality of the evidence was assessed using the GRADE criteria. The review's primary outcomes of interest were lung function by spirometry (forced expiratory volume in one second (FEV1)) and the number of people with one or more isolates of Staphylococcus aureus (sensitive strains).\nMain results\nWe included four studies, with a total of 401 randomised participants aged zero to seven years on enrolment; one study is ongoing. The two older included studies generally had a higher risk of bias across all domains, but in particular due to a lack of blinding and incomplete outcome data, than the two more recent studies. We only regarded the most recent study as being generally free of bias, although even here we were not certain of the effect of the per protocol analysis on the study results. Evidence quality was judged to be low for all outcomes assessed after being downgraded based on GRADE assessments. Downgrading decisions were due to limitations in study design (all outcomes), for imprecision and for inconsistency .\nProphylactic anti-staphylococcal antibiotics probably make little or no difference to lung function measured as FEV1 % predicted after six years (mean difference (MD) -2.30, 95% confidence interval (CI) -13.59 to 8.99, one study, n = 119, low-quality evidence); but may reduce the number of children having one or more isolates of Staphylococcus aureus at two years (odds ratio (OR) 0.21, 95% CI 0.13 to 0.35, three studies, n = 315, low-quality evidence). 
At the same time point, there may be little or no effect on nutrition as reported using weight z score (MD 0.06, 95% CI -0.33 to 0.45, two studies, n = 140, low-quality evidence), additional courses of antibiotics (OR 0.18, 95% CI 0.01 to 3.60, one study, n = 119, low-quality evidence) or adverse effects (low-quality evidence). There was no difference in the number of isolates of Pseudomonas aeruginosa between groups at two years (OR 0.74, 95% CI 0.45 to 1.23, three studies, n = 312, low-quality evidence), though there was a trend towards a lower cumulative isolation rate of Pseudomonas aeruginosa in the prophylaxis group at two and three years and towards a higher rate from four to six years. As the studies reviewed lasted six years or less, conclusions cannot be drawn about the long-term effects of prophylaxis.\nAuthors' conclusions\nAnti-staphylococcal antibiotic prophylaxis may lead to fewer children having isolates of Staphylococcus aureus, when commenced early in infancy and continued up to six years of age. The clinical importance of this finding is uncertain. Further research may establish whether the trend towards more children with CF with Pseudomonas aeruginosa, after four to six years of prophylaxis, is a chance finding and whether choice of antibiotic or duration of treatment might influence this.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the quality of the evidence was concerning. All the studies were of variable quality and we judged the quality of the evidence to be low for the outcomes we assessed. We judged that the two older studies had a higher risk of bias overall compared to the two newer studies. In particular this was because those taking part in the studies (or their parents or caregivers) would be able to guess which treatment they were receiving, and also one study did not state if anyone had dropped out and if so what the reasons were. Only the newest study seemed to be free of bias, although even here we were not certain if the study results were distorted by the way the data were analysed. Given these concerns, if results from future trials confirm the current findings, it would increase our certainty in our conclusions."
}
]
|
query-laysum | 2271 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is an update of a review of tramadol for neuropathic pain, published in 2006; updating was to bring the review in line with current standards. Neuropathic pain, which is caused by a lesion or disease affecting the somatosensory system, may be central or peripheral in origin. Peripheral neuropathic pain often includes symptoms such as burning or shooting sensations, abnormal sensitivity to normally painless stimuli, or an increased sensitivity to normally painful stimuli. Neuropathic pain is a common symptom in many diseases of the peripheral nervous system.\nObjectives\nTo assess the analgesic efficacy of tramadol compared with placebo or other active interventions for chronic neuropathic pain in adults, and the adverse events associated with its use in clinical trials.\nSearch methods\nWe searched CENTRAL, MEDLINE, and Embase for randomised controlled trials from inception to January 2017. We also searched the reference lists of retrieved studies and reviews, and online clinical trial registries.\nSelection criteria\nWe included randomised, double-blind trials of two weeks' duration or longer, comparing tramadol (any route of administration) with placebo or another active treatment for neuropathic pain, with subjective pain assessment by the participant.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality and potential bias. Primary outcomes were participants with substantial pain relief (at least 50% pain relief over baseline or very much improved on Patient Global Impression of Change scale (PGIC)), or moderate pain relief (at least 30% pain relief over baseline or much or very much improved on PGIC). Where pooled analysis was possible, we used dichotomous data to calculate risk ratio (RR) and number needed to treat for an additional beneficial outcome (NNT) or harmful outcome (NNH), using standard methods. We assessed the quality of the evidence using GRADE and created 'Summary of findings' tables.\nMain results\nWe identified six randomised, double-blind studies involving 438 participants with suitably characterised neuropathic pain. In each, tramadol was started at a dose of about 100 mg daily and increased over one to two weeks to a maximum of 400 mg daily or the maximum tolerated dose, and then maintained for the remainder of the study. Participants had experienced moderate or severe neuropathic pain for at least three months due to cancer, cancer treatment, postherpetic neuralgia, peripheral diabetic neuropathy, spinal cord injury, or polyneuropathy. The mean age was 50 to 67 years with approximately equal numbers of men and women. Exclusions were typically people with other significant comorbidity or pain from other causes. Study duration for treatments was four to six weeks, and two studies had a cross-over design.\nNot all studies reported all the outcomes of interest, and there were limited data for pain outcomes. At least 50% pain intensity reduction was reported in three studies (265 participants, 110 events). Using a random-effects analysis, 70/132 (53%) had at least 50% pain relief with tramadol, and 40/133 (30%) with placebo; the risk ratio (RR) was 2.2 (95% confidence interval (CI) 1.02 to 4.6). The NNT calculated from these data was 4.4 (95% CI 2.9 to 8.8). 
We downgraded the evidence for this outcome by two levels to low quality because of the small size of studies and of the pooled data set, because there were only 110 actual events, the analysis included different types of neuropathic pain, the studies all had at least one high risk of potential bias, and because of the limited duration of the studies.\nParticipants experienced more adverse events with tramadol than placebo. Report of any adverse event was higher with tramadol (58%) than placebo (34%) (4 studies, 266 participants, 123 events; RR 1.6 (95% CI 1.2 to 2.1); NNH 4.2 (95% CI 2.8 to 8.3)). Adverse event withdrawal was higher with tramadol (16%) than placebo (3%) (6 studies, 485 participants, 45 events; RR 4.1 (95% CI 2.0 to 8.4); NNH 8.2 (95% CI 5.8 to 14)). Only four serious adverse events were reported, without obvious attribution to treatment, and no deaths were reported. We downgraded the evidence for this outcome by two or three levels to low or very low quality because of small study size, because there were few actual events, and because of the limited duration of the studies.\nAuthors' conclusions\nThere is only modest information about the use of tramadol in neuropathic pain, coming from small, largely inadequate studies with potential risk of bias. That bias would normally increase the apparent benefits of tramadol. The evidence of benefit from tramadol was of low or very low quality, meaning that it does not provide a reliable indication of the likely effect, and the likelihood is very high that the effect will be substantially different from the estimate in this systematic review.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In January 2017, we searched for clinical trials in which tramadol was used to treat neuropathic pain in adults. Six studies met the inclusion criteria, randomising 438 participants to treatment with tramadol or placebo. Study duration was between four and six weeks. Not all reported the outcomes of interest.\nOur definition of a good result was someone who had a high level of pain relief and was able to keep taking the medicine without side effects that made them stop treatment."
}
]
|
query-laysum | 2272 | [
{
"role": "user",
"content": "Abstract: Background\nThe prevalence of musculoskeletal symptoms among sedentary workers is high. Interventions that promote occupational standing or walking have been found to reduce occupational sedentary time, but it is unclear whether these interventions ameliorate musculoskeletal symptoms in sedentary workers.\nObjectives\nTo investigate the effectiveness of workplace interventions to increase standing or walking for decreasing musculoskeletal symptoms in sedentary workers.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, OSH UPDATE, PEDro, ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) search portal up to January 2019. We also screened reference lists of primary studies and contacted experts to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), cluster-randomised controlled trials (cluster-RCTs), quasi RCTs, and controlled before-and-after (CBA) studies of interventions to reduce or break up workplace sitting by encouraging standing or walking in the workplace among workers with musculoskeletal symptoms. The primary outcome was self-reported intensity or presence of musculoskeletal symptoms by body region and the impact of musculoskeletal symptoms such as pain-related disability. We considered work performance and productivity, sickness absenteeism, and adverse events such as venous disorders or perinatal complications as secondary outcomes.\nData collection and analysis\nTwo review authors independently screened titles, abstracts, and full-text articles for study eligibility. These review authors independently extracted data and assessed risk of bias. We contacted study authors to request additional data when required. We used GRADE considerations to assess the quality of evidence provided by studies that contributed to the meta-analyses.\nMain results\nWe found ten studies including three RCTs, five cluster RCTs, and two CBA studies with a total of 955 participants, all from high-income countries. Interventions targeted changes to the physical work environment such as provision of sit-stand or treadmill workstations (four studies), an activity tracker (two studies) for use in individual approaches, and multi-component interventions (five studies). We did not find any studies that specifically targeted only the organisational level components. Two studies assessed pain-related disability.\nPhysical work environment\nThere was no significant difference in the intensity of low back symptoms (standardised mean difference (SMD) -0.35, 95% confidence interval (CI) -0.80 to 0.10; 2 RCTs; low-quality evidence) nor in the intensity of upper back symptoms (SMD -0.48, 95% CI -.96 to 0.00; 2 RCTs; low-quality evidence) in the short term (less than six months) for interventions using sit-stand workstations compared to no intervention. No studies examined discomfort outcomes at medium (six to less than 12 months) or long term (12 months and more). 
No significant reduction in pain-related disability was noted when a sit-stand workstation was used compared to when no intervention was provided in the medium term (mean difference (MD) -0.4, 95% CI -2.70 to 1.90; 1 RCT; low-quality evidence).\nIndividual approach\nThere was no significant difference in the intensity or presence of low back symptoms (SMD -0.05, 95% CI -0.87 to 0.77; 2 RCTs; low-quality evidence), upper back symptoms (SMD -0.04, 95% CI -0.92 to 0.84; 2 RCTs; low-quality evidence), neck symptoms (SMD -0.05, 95% CI -0.68 to 0.78; 2 RCTs; low-quality evidence), shoulder symptoms (SMD -0.14, 95% CI -0.63 to 0.90; 2 RCTs; low-quality evidence), or elbow/wrist and hand symptoms (SMD -0.30, 95% CI -0.63 to 0.90; 2 RCTs; low-quality evidence) for interventions involving an activity tracker compared to an alternative intervention or no intervention in the short term. No studies provided outcomes at medium term, and only one study examined outcomes at long term.\nOrganisational level\nNo studies evaluated the effects of interventions solely targeted at the organisational level.\nMulti-component approach\nThere was no significant difference in the proportion of participants reporting low back symptoms (risk ratio (RR) 0.93, 95% CI 0.69 to 1.27; 3 RCTs; low-quality evidence), neck symptoms (RR 1.00, 95% CI 0.76 to 1.32; 3 RCTs; low-quality evidence), shoulder symptoms (RR 0.83, 95% CI 0.12 to 5.80; 2 RCTs; very low-quality evidence), and upper back symptoms (RR 0.88, 95% CI 0.76 to 1.32; 3 RCTs; low-quality evidence) for interventions using a multi-component approach compared to no intervention in the short term. Only one RCT examined outcomes at medium term and found no significant difference in low back symptoms (MD -0.40, 95% CI -1.95 to 1.15; 1 RCT; low-quality evidence), upper back symptoms (MD -0.70, 95% CI -2.12 to 0.72; low-quality evidence), and leg symptoms (MD -0.80, 95% CI -2.49 to 0.89; low-quality evidence). There was no significant difference in the proportion of participants reporting low back symptoms (RR 0.89, 95% CI 0.57 to 1.40; 2 RCTs; low-quality evidence), neck symptoms (RR 0.67, 95% CI 0.41 to 1.08; two RCTs; low-quality evidence), and upper back symptoms (RR 0.52, 95% CI 0.08 to 3.29; 2 RCTs; low-quality evidence) for interventions using a multi-component approach compared to no intervention in the long term. There was a statistically significant reduction in pain-related disability following a multi-component intervention compared to no intervention in the medium term (MD -8.80, 95% CI -17.46 to -0.14; 1 RCT; low-quality evidence).\nAuthors' conclusions\nCurrently available limited evidence does not show that interventions to increase standing or walking in the workplace reduced musculoskeletal symptoms among sedentary workers at short-, medium-, or long-term follow up. The quality of evidence is low or very low, largely due to study design and small sample sizes. Although the results of this review are not statistically significant, some interventions targeting the physical work environment are suggestive of an intervention effect. Therefore, in the future, larger cluster-RCTs recruiting participants with baseline musculoskeletal symptoms and long-term outcomes are needed to determine whether interventions to increase standing or walking can reduce musculoskeletal symptoms among sedentary workers and can be sustained over time.\n\nGiven the provided abstract, please respond to the following query: \"Effects of combining multiple interventions\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Available evidence is insufficient to show the effectiveness of combining multiple interventions in reducing the proportions of people with low back or upper back pain at short-term follow-up (less than six months), medium-term follow-up (between six and 12 months), or long-term follow-up (12 months or longer)."
}
]
|
query-laysum | 2273 | [
{
"role": "user",
"content": "Abstract: Background\nOro/nasopharyngeal suction is a method used to clear secretions from the oropharynx and nasopharynx through the application of negative pressure via a suction catheter or bulb syringe. Traditionally, airway oro/nasopharyngeal suction at birth has been used routinely to remove fluid rapidly from the oropharynx and nasopharynx in vigorous and non-vigorous infants at birth. Concerns relating to the reported adverse effects of oro/nasopharyngeal suctioning led to a practice review and routine oro/nasopharyngeal suctioning is no longer recommended for vigorous infants. However, it is important to know whether there is any clear benefit or harm for infants whose oro/nasopharyngeal airway is suctioned compared to infants who are not suctioned.\nObjectives\nTo evaluate the effect of routine oropharyngeal/nasopharyngeal suction compared to no suction on mortality and morbidity in newly born infants.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 3), MEDLINE via PubMed (1966 to April 18, 2016), Embase (1980 to April 18, 2016), and CINAHL (1982 to April 18, 2016). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nRandomised, quasi-randomised controlled trials and cluster randomised trials that evaluated the effect of routine oropharyngeal/nasopharyngeal suction compared to no suction on mortality and morbidity in newly born infants with and without meconium-stained amniotic fluid.\nData collection and analysis\nThe review authors extracted from the reports of the clinical trials, data regarding clinical outcomes including mortality, need for resuscitation, admission to neonatal intensive care, five minute Apgar score, episodes of apnoea and length of hospital stay.\nMain results\nEight randomised controlled trials met the inclusion criteria and only included term infants (n = 4011). Five studies included infants with no fetal distress and clear amniotic fluid, one large study included vigorous infants with clear or meconium-stained amniotic fluid, and two large studies included infants with thin or thick meconium-stained amniotic fluid. Overall, there was no statistical difference between oro/nasopharyngeal suction and no oro/nasopharyngeal suction for all reported outcomes: mortality (typical RR 2.29, 95% CI 0.94 to 5.53; typical RD 0.01, 95% CI -0.00 to 0.01; I2 = 0%, studies = 2, participants = 3023), need for resuscitation (typical RR 0.85, 95% CI 0.69 to 1.06; typical RD -0.01, 95% CI -0.03 to 0.00; I2 = 0%, studies = 5, participants = 3791), admission to NICU (typical RR 0.82, 95% CI 0.62 to 1.08; typical RD -0.03, 95% CI -0.08 to 0.01; I2 = 27%, studies = 2, participants = 997) and Apgar scores at five minutes (MD -0.03, 95% CI -0.08 to 0.02; I2 not estimated, studies = 3, participants = 330).\nAuthors' conclusions\nThe currently available evidence does not support or refute the benefits or harms of routine oro/nasopharyngeal suction over no suction. Further high-quality studies are required in preterm infants or term newborn infants with thick meconium amniotic fluid. Studies should investigate long-term effects such as neurodevelopmental outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The currently available evidence neither supports nor refutes suctioning as a beneficial therapy for healthy term infants and further quality studies are needed in term and preterm newborns."
}
]
|
query-laysum | 2274 | [
{
"role": "user",
"content": "Abstract: Background\nAnovulation is a common cause of infertility. Drugs used to treat anovulation include selective oestrogen receptor modulators, aromatase inhibitors and gonadotrophins. Ovulation triggers are used with these drugs, as a surrogate for the hormonal surge seen in spontaneous menstrual cycles, to control the timing of ovulation and the timing of sexual intercourse. Ovulation triggers given without reliable evidence of oocyte maturity could be inappropriately timed; they increase costs, and the need to time intercourse precisely after the ovulation trigger is given adds to psychological stress.\nThis is an update of a Cochrane review first published in Issue 3, 2008, of the Cochrane Database of Systematic Reviews.\nObjectives\nTo determine the benefits and harms of administering an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents in comparison with spontaneous ovulation following ovulation induction.\nSearch methods\nWe updated searches of the Menstrual Disorders and Subfertility Group (MDSG) Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and PsycINFO to November 2013. We checked conference proceedings, trial registries and reference lists and contacted researchers.\nSelection criteria\nParallel-group, randomised, controlled trials (RCTs) evaluating the administration of an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents.\nData collection and analysis\nWe independently assessed trial eligibility and trial quality and extracted data. We calculated odds ratios (ORs) with 95% confidence intervals (CIs) for dichotomous data and used the random-effects model in meta-analyses when significant heterogeneity was present. We assessed overall quality of the evidence by using the GRADE approach.\nMain results\nNo new trials were identified. This review includes two RCTs with low risk of bias that compared urinary human chorionic gonadotrophin (hCG) versus no treatment in anovulatory women receiving clomiphene citrate. Urinary hCG did not result in an increase in live birth rate over no hCG (OR 0.97, 95% CI 0.52 to 1.83; two trials, 305 participants, I2 = 16%; low-quality evidence), but very serious imprecision around the effect estimate reduces our confidence in the apparent lack of effect of hCG as an ovulation trigger in clomiphene-induced cycles in anovulatory women.\nAmong this review's secondary outcomes, urinary hCG may not increase ovulation rate (OR 0.99, 95% CI 0.36 to 2.77; two trials, 305 participants, I2 = 55%; low-quality evidence), clinical pregnancy rate (OR 1.02, 95% CI 0.56 to 1.89; two trials, 305 participants, I2 = 35%; low-quality evidence) or miscarriage rate in pregnant women (OR 1.19, 95% CI 0.17 to 8.23; two trials, 54 participants, I2 = 0%; low-quality evidence). Multiple pregnancies and preterm deliveries were uncommon, and ovarian hyperstimulation syndrome, adverse events and deaths were not reported as outcomes in either trial.\n\nWe found no trials evaluating other ovulation triggers.\nAuthors' conclusions\nEvidence is inadequate to recommend or refute the use of urinary hCG as an ovulation trigger in anovulatory women treated with clomiphene citrate. We found no trials evaluating the use of ovulation triggers in anovulatory women treated with other ovulation-inducing agents.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We cannot be certain whether ovulation triggers are better or worse than no ovulation triggers in women undergoing ovulation induction because not enough women were included in the two trials for definite results to be obtained. Larger trials in women undergoing ovulation induction that compare different ovulation triggers versus no additional treatment are needed."
}
]
|
query-laysum | 2275 | [
{
"role": "user",
"content": "Abstract: Background\nNaloxone, a specific opioid antagonist, is available for the treatment of newborn infants with cardiorespiratory or neurological depression that may be due to intrauterine exposure to opioid. It is unclear whether newborn infants may benefit from this therapy and whether naloxone has any harmful effects.\nObjectives\nTo determine the effect of naloxone on the need for and duration of neonatal unit stay in infants of mothers who received opioid analgesia prior to delivery or of mothers who have used a prescribed or non-prescribed opioid during pregnancy.\nSearch methods\nWe searched the following databases in February 2018: the Cochrane Central Register of Controlled Trials (the Cochrane Library 2018, Issue 1), MEDLINE (OvidSP), MEDLINE In process & Other Non-Indexed Citations (OvidSP), Embase (OvidSP), CINAHL (EBSCO), Maternity and Infant Care (OvidSP), and PubMed. We searched for ongoing and completed trials in the WHO International Clinical Trials Registry Platform and the EU Clinical Trials Register. We checked the reference lists of relevant articles to identify further potentially relevant studies.\nSelection criteria\nRandomised controlled trials comparing the administration of naloxone versus placebo, or no drug, or another dose of naloxone to newborn infants with suspected or confirmed in utero exposure to opioid.\nData collection and analysis\nWe extracted data using the standard methods of Cochrane Neonatal with separate evaluation of trial quality and data extraction by two review authors and synthesis of data using risk ratio, risk difference, and mean difference.\nMain results\nWe included nine trials, with 316 participants in total, that compared the effects of naloxone versus placebo or no drug in newborn infants exposed to maternal opioid analgesia prior to delivery. None of the included trials investigated infants born to mothers who had used a prescribed or non-prescribed opioid during pregnancy. None of these trials specifically recruited infants with cardiorespiratory or neurological depression. The main outcomes reported were measures of respiratory function in the first six hours after birth. There is some evidence that naloxone increases alveolar ventilation. The trials did not assess the effect on the primary outcomes of this review (admission to a neonatal unit and failure to establish breastfeeding).\nAuthors' conclusions\nThe existing evidence from randomised controlled trials is insufficient to determine whether naloxone confers any important benefits to newborn infants with cardiorespiratory or neurological depression that may be due to intrauterine exposure to opioid. Given concerns about the safety of naloxone in this context, it may be appropriate to limit its use to randomised controlled trials that aim to resolve these uncertainties.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found nine completed trials that compared giving to newborn babies, whose mothers had received opioids during labour, either naloxone or a placebo ('dummy drug'). These trials were conducted more than 30 years ago and they were generally very small including only about 300 infants in total. Most of the trials did not use reliable methods consistently. Evidence is up-to-date as of February 2018."
}
]
|
query-laysum | 2276 | [
{
"role": "user",
"content": "Abstract: Background\nInfantile hypertrophic pyloric stenosis (IHPS) is a disorder of young children (aged one year or less) and can be treated by laparoscopic (LP) or open (OP) longitudinal myotomy of the pylorus. Since the first description in 1990, LP is being performed more often worldwide.\nObjectives\nTo compare the efficacy and safety of open versus laparoscopic pyloromyotomy for IHPS.\nSearch methods\nWe conducted a literature search on 04 February 2021 to identify all randomised controlled trials (RCTs), without any language restrictions. We searched the following electronic databases: MEDLINE (1990 to February 2021), Embase (1990 to February 2021), and the Cochrane Central Register of Controlled Trials (CENTRAL). We also searched the Internet using the Google Search engine (www.google.com) and Google Scholar (scholar.google.com) to identify grey literature not indexed in databases.\nSelection criteria\nWe included RCTs and quasi-randomised trials comparing LP with OP for hypertrophic pyloric stenosis.\nData collection and analysis\nTwo review authors independently screened references and extracted data from trial reports. Where outcomes or study details were not reported, we requested missing data from the corresponding authors of the primary RCTs. We used a random-effects model to calculate risk ratios (RRs) for binary outcomes, and mean differences (MDs) for continuous outcomes. Two review authors independently assessed risks of bias. We used GRADE to assess the certainty of the evidence for all outcomes.\nMain results\nThe electronic database search resulted in a total of 434 records. After de-duplication, we screened 410 independent publications, and ultimately included seven RCTs (reported in 8 reports) in quantitative analysis. The seven included RCTs enrolled 720 participants (357 with open pyloromyotomy and 363 with laparoscopic pyloromyotomy). One study was a multi-country trial, three were carried out in the USA, and one study each was carried out in France, Japan, and Bangladesh.\nThe evidence suggests that LP may result in a small increase in mucosal perforation compared with OP (RR 1.60, 95% CI 0.49 to 5.26; 7 studies, 720 participants; low-certainty evidence). LP may result in up to 5 extra instances of mucosal perforation per 1,000 participants; however, the confidence interval ranges from 4 fewer to 44 more per 1,000 participants.\nFour RCTs with 502 participants reported on incomplete pyloromyotomy. They indicate that LP may increase the risk of incomplete pyloromyotomy compared with OP, but the confidence interval crosses the line of no effect (RR 7.37, 95% CI 0.92 to 59.11; 4 studies, 502 participants; low-certainty evidence). In the LP groups, 6 cases of incomplete pyloromyotomy were reported in 247 participants while no cases of incomplete pyloromyotomy were reported in the OP groups (from 255 participants).\nAll included studies (720 participants) reported on postoperative wound infections or abscess formations. The evidence is very uncertain about the effect of LP on postoperative wound infection or abscess formation compared with OP (RR 0.59, 95% CI 0.24 to 1.45; 7 studies, 720 participants; very low-certainty evidence). The evidence is also very uncertain about the effect of LP on postoperative incisional hernia compared with OP (RR 1.01, 95% CI 0.11 to 9.53; 4 studies, 382 participants; very low-certainty evidence).\nLength of hospital stay was assessed by five RCTs, including 562 participants. 
The evidence is very uncertain about the effect of LP compared to OP (mean difference −3.01 hours, 95% CI −8.39 to 2.37 hours; very low-certainty evidence). Time to full feeds was assessed by six studies, including 622 participants. The evidence is very uncertain about the effect of LP on time to full feeds compared with OP (mean difference −5.86 hours, 95% CI −15.95 to 4.24 hours; very low-certainty evidence). The evidence is also very uncertain about the effect of LP on operating time compared with OP (mean difference 0.53 minutes, 95% CI −3.53 to 4.59 minutes; 6 studies, 622 participants; very low-certainty evidence).\nAuthors' conclusions\nLaparoscopic pyloromyotomy may result in a small increase in mucosal perforation when compared with open pyloromyotomy for IHPS. There may be an increased risk of incomplete pyloromyotomy following LP compared with OP, but the effect estimate is imprecise and includes the possibility of no difference. We do not know about the effect of LP compared with OP on the need for re-operation, postoperative wound infections or abscess formation, postoperative haematoma or seroma formation, incisional hernia occurrence, length of postoperative stay, time to full feeds, or operating time because the certainty of the evidence was very low for these outcomes. We downgraded the certainty of the evidence for most outcomes due to limitations in the study design (most outcomes were susceptible to detection bias) and imprecision. There is limited evidence available comparing LP with OP for IHPS. The included studies did not provide sufficient information to determine the effect of training, experience, or surgeon preferences on the outcomes assessed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review showed that laparoscopic pyloromyotomy may result in a small increase in mucosal perforation rates compared to the open approach. Laparoscopy may also result in higher rates of incomplete pyloromyotomy. The evidence is very uncertain about the effect of the laparoscopic approach on postoperative surgical complications including surgical site infection and incisional hernia. Similarly, very low certainty of evidence does not allow a final conclusion on the effect of open compared to laparoscopic pyloromyotomy on length of stay, time to full feeds and operating time in infantile pyloric stenosis."
}
]
|
query-laysum | 2277 | [
{
"role": "user",
"content": "Abstract: Background\nChest physiotherapy is widely prescribed to assist the clearance of airway secretions in people with cystic fibrosis. Oscillating devices generate intra- or extra-thoracic oscillations orally or external to the chest wall. Internally they create variable resistances within the airways, generating controlled oscillating positive pressure which mobilises mucus. Extra-thoracic oscillations are generated by forces outside the respiratory system, e.g. high frequency chest wall oscillation. This is an update of a previously published review.\nObjectives\nTo identify whether oscillatory devices, oral or chest wall, are effective for mucociliary clearance and whether they are equivalent or superior to other forms of airway clearance in the successful management of secretions in people with cystic fibrosis.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches and hand searches of relevant journals and abstract books of conference proceedings. Latest search of the Cystic Fibrosis Trials Register: 29 July 2019.\nIn addition we searched the trials databases ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform. Latest search of trials databases: 15 August 2019.\nSelection criteria\nRandomised controlled studies and controlled clinical studies of oscillating devices compared with any other form of physiotherapy in people with cystic fibrosis. Single-treatment interventions (therapy technique used only once in the comparison) were excluded.\nData collection and analysis\nTwo authors independently applied the inclusion criteria to publications, assessed the quality of the included studies and assessed the evidence using GRADE.\nMain results\nThe searches identified 82 studies (330 references); 39 studies (total of 1114 participants) met the inclusion criteria. Studies varied in duration from up to one week to one year; 20 of the studies were cross-over in design. The studies also varied in type of intervention and the outcomes measured, data were not published in sufficient detail in most of these studies, so meta-analysis was limited. Few studies were considered to have a low risk of bias in any domain. It is not possible to blind participants and clinicians to physiotherapy interventions, but 13 studies did blind the outcome assessors. The quality of the evidence across all comparisons ranged from low to very low.\nForced expiratory volume in one second was the most frequently measured outcome and while many of the studies reported an improvement in those people using a vibrating device compared to before the study, there were few differences when comparing the different devices to each other or to other airway clearance techniques. One study identified an increase in frequency of exacerbations requiring antibiotics whilst using high frequency chest wall oscillation when compared to positive expiratory pressure (low-quality evidence). There were some small but significant changes in secondary outcome variables such as sputum volume or weight, but not wholly in favour of oscillating devices and due to the low- or very low-quality evidence, it is not clear whether these were due to the particular intervention. 
Participant satisfaction was reported in 13 studies but again with low- or very low-quality evidence and not consistently in favour of an oscillating device, as some participants preferred breathing techniques or techniques used prior to the study interventions. The results for the remaining outcome measures were not examined or reported in sufficient detail to provide any high-level evidence.\nAuthors' conclusions\nThere was no clear evidence that oscillation was a more or less effective intervention overall than other forms of physiotherapy; furthermore there was no evidence that one device is superior to another. The findings from one study showing an increase in frequency of exacerbations requiring antibiotics whilst using an oscillating device compared to positive expiratory pressure may have significant resource implications. More adequately-powered long-term randomised controlled trials are necessary and outcomes measured should include frequency of exacerbations, individual preference, adherence to therapy and general satisfaction with treatment. Increased adherence to therapy may then lead to improvements in other parameters, such as exercise tolerance and respiratory function. Additional evidence is needed to evaluate whether oscillating devices combined with other forms of airway clearance is efficacious in people with cystic fibrosis.There may also be a requirement to consider the cost implication of devices over other forms of equally advantageous airway clearance techniques. Using the GRADE method to assess the quality of the evidence, we judged this to be low or very low quality, which suggests that further research is very likely to have an impact on confidence in any estimate of effect generated by future interventions.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Given the differences in study design, it was difficult to combine the results from these studies in a useful way.\nWe did not find any clear evidence that vibrating devices were better than any other form of physiotherapy which they were compared to in these studies, or that one device was better than another. One study found that people using an vibrating device needed additional antibiotics for a chest infection more often than those using positive expiratory pressure. When recommending the most suitable method of airway clearance, physiotherapists should consider the needs of the people they are treating.\nFor the future, larger and longer trials are needed to measure the frequency of lung infections, preference, adherence to and general satisfaction with treatment, financial constraints should also be taken into consideration. We think adherence is important, because if people with cystic fibrosis are willing to stick to their physiotherapy regimen, there may be improvements in other outcomes such as exercise tolerance, respiratory function and mortality."
}
]
|
query-laysum | 2278 | [
{
"role": "user",
"content": "Abstract: Background\nUpper gastrointestinal bleeding is typically a mild, self-limiting condition that can affect both preterm and term neonates, although it can be severe particularly when associated with co-morbidities. Pharmacological interventions with a proton pump inhibitor (PPI), H2 receptor antagonist (H2RA), antacid, bismuth and sucralfate may have effects on both the prevention and treatment of upper gastrointestinal bleeding in infants.\nObjectives\nTo assess how different pharmacological interventions (PPIs, H2RAs, antacids, sucralfate or bismuth salts) administered to preterm and term neonates for the prevention or treatment of upper gastrointestinal bleeding to reduce morbidity and mortality compare with placebo or no treatment, supportive care, or each other.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2018, Issue 6), MEDLINE via PubMed (1966 to 12 July 2018), Embase (1980 to 12 July 2018), and CINAHL (1982 to 12 July 2018). We also searched clinical trial databases, conference proceedings, the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials, and online for Chinese literature articles.\nSelection criteria\nWe selected randomised, quasi-randomised and cluster-randomised trials involving preterm and term neonates. Trials were included if they used a proton pump inhibitor, H2 receptor antagonist, antacid, sucralfate or bismuth either for the prevention or treatment of upper gastrointestinal bleeding.\nData collection and analysis\nTwo review authors independently assessed the eligibility of studies for inclusion, extracted data and assessed methodological quality. We conducted meta-analysis using a fixed-effect model. We used the GRADE approach to assess quality of evidence.\nMain results\nEleven studies with 818 infants met the criteria for inclusion in this review.\nFour trials with 329 infants assessed the use of an H2 receptor antagonist for prevention of upper gastrointestinal bleeding in high-risk newborn infants. Meta-analysis of these four trials identified a reduction in any upper gastrointestinal bleeding when using an H2 receptor antagonist (typical risk ratio (RR) 0.36, 95% confidence interval (CI) 0.22 to 0.58; typical risk difference (RD) −0.20, 95% CI −0.28 to −0.11; number needed to treat for an additional beneficial outcome (NNTB) 5, 95% CI 4 to 9). The quality of evidence was moderate. A single trial with 53 infants assessing prevention of upper gastrointestinal bleeding reported no difference in mortality in infants assigned H2 receptor antagonist versus no treatment; however the quality of evidence was very low.\nSeven trials with 489 infants assessed an inhibitor of gastric acid (H2 receptor antagonist or proton pump inhibitor) for treatment of gastrointestinal bleeding in newborn infants. Meta-analysis of two trials (131 infants) showed no difference in mortality from use of a H2 receptor antagonist compared to no treatment. The quality of evidence was low. Meta-analysis of two trials (104 infants) showed a reduction in duration of upper gastrointestinal bleeding from use of an inhibitor of gastric acid compared to no treatment (mean difference −1.06 days, 95% CI −1.28 to −0.84). The quality of evidence was very low. 
Meta-analysis of six trials (451 infants) showed a reduction in continued upper gastrointestinal bleeding from use of any inhibitor of gastric acid compared to no treatment (typical RR 0.36, 95% CI 0.26 to 0.49; typical RD −0.26, 95% CI −0.33, −0.19; NNTB 4, 95% CI 3 to 5). The quality of evidence was low. There were no significant subgroup differences in duration of upper gastrointestinal bleeding or of continued upper gastrointestinal bleeding according to type of inhibitor of gastric acid. A single trial (38 infants) reported no difference in anaemia requiring blood transfusion from use of a H2 receptor antagonist compared to no treatment.\nAlthough no serious adverse events were reported from the use of a H2 receptor antagonist or proton pump inhibitor, some neonatal morbidities — including necrotising enterocolitis, ventilator-associated pneumonia, duration of ventilation and respiratory support, and duration of hospital stay — were not reported. Long-term outcome was not reported.\nAuthors' conclusions\nThere is moderate-quality evidence that use of an H2 receptor antagonist reduces the risk of gastrointestinal bleeding in newborn infants at high risk of gastrointestinal bleeding. There is low-quality evidence that use of an inhibitor of gastric acid (H2 receptor antagonist or proton pump inhibitor) reduces the duration of upper gastrointestinal bleeding and the incidence of continued gastric bleeding in newborn infants with gastrointestinal bleeding. However, there is no evidence that use of an inhibitor of gastric acid in newborn infants affects mortality or the need for blood transfusion. As no study reported the incidence of necrotising enterocolitis, ventilator- or hospital-associated pneumonia, sepsis, or long-term outcome, the safety of inhibitors of gastric acid secretion is unclear.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess how different medications for reducing stomach acidity (proton pump inhibitors, histamine 2 receptor antagonists, antacids) or for protecting the stomach lining (sucralfate or bismuth salts) given to preterm and term infants help to prevent or treat upper gastrointestinal bleeding, reduce suffering of other illnesses and deaths."
}
]
|
query-laysum | 2279 | [
{
"role": "user",
"content": "Abstract: Background\nAcute sinusitis is a common reason for primary care encounters. It causes significant symptoms including facial pain, congested nose, headache, thick nasal mucus, fever, and cough and often results in time off work or school. Sinusitis treatment focuses on eliminating causative factors and controlling the inflammatory and infectious components. The frozen, dried, natural fluid extract of the Cyclamen europaeum plant delivered intranasally is thought to have beneficial effects in relieving congestion by facilitating nasal drainage, and has an anti-inflammatory effect.\nObjectives\nTo assess the effectiveness of topical intranasal Cyclamen europaeum extract on clinical response in adults and children with acute sinusitis.\nSearch methods\nWe searched CENTRAL, which includes the Cochrane Acute Respiratory Infections Group's Specialised Register, MEDLINE, Embase, and trials registers (ClinicalTrials.gov; WHO ICTRP) in January 2018. We also searched the reference lists of included studies and review literature for further relevant studies and contacted trial authors for additional information.\nSelection criteria\nRandomised controlled trials comparing Cyclamen europaeum extract administered intranasally to placebo, antibiotics, intranasal corticosteroids, or no treatment in adults or children, or both, with acute sinusitis. Acute sinusitis was defined by clinical diagnosis and confirmed by nasal endoscopy or by radiological evidence.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included two randomised controlled trials that involved a total of 147 adult outpatients with acute sinusitis confirmed by radiology or nasal endoscopy who were assigned to Cyclamen europaeum nasal spray or placebo study arms for up to 15 days. The risk of selection and detection bias was unclear, as allocation concealment and blinding of outcome assessors were not reported in either study. Attrition was high (60%) in one study, although dropouts were balanced between study arms.\nNeither study reported our two primary outcomes: proportion of participants whose symptoms resolved or improved at 14 days and 30 days. No serious adverse events or complications related to treatment were reported; however, more mild adverse events such as nasal and throat irritation, mild epistaxis, and sneezing occurred in Cyclamen europaeum group participants (50%) compared to placebo group participants (24%) (risk ratio 2.11, 95% confidence interval 1.35 to 3.29); moderate-quality evidence.\nAuthors' conclusions\nThe effectiveness of Cyclamen europaeum for people with acute sinusitis is unknown. Although no serious side effects were observed, 50% of participants who received Cyclamen europaeum reported adverse events compared with 24% of those who received placebo.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out the proportion of participants whose symptoms resolved or improved up to 14 days and 30 days, but did not find any evidence. The included studies provided data for changes in symptom scores for acute sinusitis with treatment. One study reported improvement in facial pain up to seven days. Neither study reported complications or days off school or work.\nPeople who received Cyclamen europaeum rather than a non-active substance reported more side effects like nasal irritation, sneezing, and mild nasal bleeding. No major side effects occurred.\nWe found no evidence as to whether Cyclamen europaeum is effective or not."
}
]
|
query-laysum | 2280 | [
{
"role": "user",
"content": "Abstract: Background\nImpaction of a soft food bolus in the oesophagus causes dysphagia and regurgitation. If the bolus does not pass spontaneously, then the patient is at risk of aspiration, dehydration, perforation, and death. Definitive management is with endoscopic intervention, recommended within 24 hours. Prior to endoscopy, many patients undergo a period of observation, awaiting spontaneous disimpaction, or may undergo enteral or parenteral treatments to attempt to dislodge the bolus. There is little consensus as to which of these conservative strategies is safe and effective to be used in this initial period, before resorting to definitive endoscopic management for persistent impaction.\nObjectives\nTo evaluate the efficacy of non-endoscopic conservative treatments in the management of soft food boluses impacted within the oesophagus.\nSearch methods\nWe searched the following databases, using relevant search terms: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase and CINAHL. The date of the search was 18 August 2019. We screened the reference lists of relevant studies and reviews on the topic to identify any additional studies.\nSelection criteria\nWe included randomised controlled trials of the management of acute oesophageal soft food bolus impaction, in adults and children, reporting the incidence of disimpaction (confirmed radiologically or clinically by return to oral diet) without the need for endoscopic intervention. We did not include studies focusing on sharp or solid object impaction.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane.\nMain results\nWe identified 890 unique records through the electronic searches. We excluded 809 clearly irrelevant records and retrieved 81 records for further assessment. We subsequently included one randomised controlled trial that met the eligibility criteria, which was conducted in four Swedish centres and randomised 43 participants to receive either intravenous diazepam followed by glucagon, or intravenous placebos. The effect of the active substances compared with placebo on rates of disimpaction without intervention is uncertain, as the numbers from this single study were small, and the rates were similar (38% versus 32%; risk ratio 1.19, 95% confidence interval 0.51 to 2.75, P = 0.69). The certainty of the evidence using GRADE for this outcome is low. Data on adverse events were lacking.\nAuthors' conclusions\nThere is currently inadequate data to recommend the use of any enteral or parenteral treatments in the management of acute oesophageal soft food bolus impaction. There is also inadequate data regarding potential adverse events from the use of these treatments, or from potential delays in definitive endoscopic management. Caution should be exercised when using any conservative management strategies in these patients.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Can a period of observation, treatment with swallowed substances, or treatment with substances given via the blood help to dislodge soft foods that are stuck between the throat and the stomach in order to avoid the need for an endoscopic procedure to clear it?"
}
]
|
query-laysum | 2281 | [
{
"role": "user",
"content": "Abstract: Background\nPseudomonas aeruginosa is the most common bacterial pathogen causing lung infections in people with cystic fibrosis and appropriate antibiotic therapy is vital. Antibiotics for pulmonary exacerbations are usually given intravenously, and for long-term treatment, via a nebuliser. Oral anti-pseudomonal antibiotics with the same efficacy and safety as intravenous or nebulised antibiotics would benefit people with cystic fibrosis due to ease of treatment and avoidance of hospitalisation. This is an update of a previous review.\nObjectives\nTo determine the benefit or harm of oral anti-pseudomonal antibiotic therapy for people with cystic fibrosis, colonised with Pseudomonas aeruginosa, in the:\n1. treatment of a pulmonary exacerbation; and\n2. long-term treatment of chronic infection.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings.\nWe contacted pharmaceutical companies and checked reference lists of identified trials.\nDate of last search: 08 July 2016.\nSelection criteria\nRandomised or quasi-randomised controlled trials comparing any dose of oral anti-pseudomonal antibiotics, to other combinations of inhaled, oral or intravenous antibiotics, or to placebo or usual treatment for pulmonary exacerbations and long-term treatment.\nData collection and analysis\nTwo authors independently selected the trials, extracted data and assessed quality. We contacted trial authors to obtain missing information.\nMain results\nWe included three trials examining pulmonary exacerbations (171 participants) and two trials examining long-term therapy (85 participants). We regarded the most important outcomes as quality of life and lung function. The analysis did not identify any statistically significant difference between oral anti-pseudomonal antibiotics and other treatments for these outcome measures for either pulmonary exacerbations or long-term treatment. One of the included trials reported significantly better lung function when treating a pulmonary exacerbation with ciprofloxacin when compared with intravenous treatment; however, our analysis did not confirm this finding. We found no evidence of difference between oral anti-pseudomonal antibiotics and other treatments regarding adverse events or development of antibiotic resistance, but trials were not adequately powered to detect this. None of the studies had a low risk of bias from blinding which may have an impact particularly on subjective outcomes such as quality of life. The risk of bias for other criteria could not be clearly stated across the studies.\nAuthors' conclusions\nWe found no conclusive evidence that an oral anti-pseudomonal antibiotic regimen is more or less effective than an alternative treatment for either pulmonary exacerbations or long-term treatment of chronic infection with P. aeruginosa. Until results of adequately-powered future trials are available, treatment needs to be selected on a pragmatic basis, based upon any available non-randomised evidence, the clinical circumstances of the individual, the known effectiveness of drugs against local strains and upon individual preference.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence we found was limited. The trials were very different in terms of design, drugs used, length of treatment and follow up and the outcomes measured. We judged the trials to be at different risks of bias, but we did not think any of them had a low risk of bias from blinding, which might affect the results of subjective outcomes like quality of life."
}
]
|
query-laysum | 2282 | [
{
"role": "user",
"content": "Abstract: Background\nNon-bismuth quadruple sequential therapy (SEQ) comprising a first induction phase with a dual regimen of amoxicillin and a proton pump inhibitor (PPI) for five days followed by a triple regimen phase with a PPI, clarithromycin and metronidazole for another five days, has been suggested as a new first-line treatment option to replace the standard triple therapy (STT) comprising a proton pump inhibitor (PPI), clarithromycin and amoxicillin, in which eradication proportions have declined to disappointing levels.\nObjectives\nTo conduct a meta-analysis of randomised controlled trials (RCTs) comparing the efficacy of a SEQ regimen with STT for the eradication of H. pylori infection, and to compare the incidence of adverse effects associated with both STT and SEQ H. pylori eradication therapies.\nSearch methods\nWe conducted bibliographical searches in electronic databases, and handsearched abstracts from Congresses up to April 2015.\nSelection criteria\nWe sought randomised controlled trials (RCTs) comparing 10-day SEQ and STT (of at least seven days) for the eradication of H. pylori. Participants were adults and children diagnosed as positive for H. pylori infection and naïve to H. pylori treatment.\nData collection and analysis\nWe used a pre-piloted, tabular summary to collect demographic and medical information of included study participants as well as therapeutic data and information related to the diagnosis and confirmatory tests.\nWe evaluated the difference in intention-to-treat eradication between SEQ and STT regimens across studies, and assessed sources of the heterogeneity of this risk difference (RD) using subgroup analyses.\nWe evaluated the quality of the evidence following Cochrane standards, and summarised it using GRADE methodology.\nMain results\nWe included 44 RCTs with a total of 12,284 participants (6042 in SEQ and 6242 in STT). The overall analysis showed that SEQ was significantly more effective than STT (82% vs 75% in the intention-to-treat analysis; RD 0.09, 95% confidence interval (CI) 0.06 to 0.11; P < 0.001, moderate-quality evidence). Results were highly heterogeneous (I² = 75%), and 20 studies did not demonstrate differences between therapies.\nReporting by geographic region (RD 0.09, 95% CI 0.06 to 0.12; studies = 44; I² = 75%, based on low-quality evidence) showed that differences between SEQ and STT were greater in Europe (RD 0.16, 95% CI 0.14 to 0.19) when compared to Asia, Africa or South America. European studies also showed a tendency towards better efficacy with SEQ; however, this tendency was reversed in 33% of the Asian studies. Africa reported the closest risk difference (RD 0.14 , 95% 0.07 to 0.22) to Europe among studied regions, but confidence intervals were wider and therefore the quality of the evidence showing SEQ to be superior to STT was reduced for this region.\nBased on high-quality evidence, subgroup analyses showed that SEQ and STT therapies were equivalent when STT lasted for 14 days. Although, overall, the mean eradication proportion with SEQ was over 80%, we noted a tendency towards a lower average effect with this regimen in the more recent studies (2008 and after); weighted linear regression showed that the efficacies of both regimens evolved differently over the years, having a higher reduction in the efficacy of SEQ (-1.72% yearly) than in STT (-0.9% yearly). 
In these more recent studies (2008 and after) we were also unable to detect the superiority of SEQ over STT when STT was given for 10 days.\nBased on very low-quality evidence, subgroup analyses on antibiotic resistance showed that the widest difference in efficacy between SEQ and STT was in the subgroup analysis based on clarithromycin-resistant participants, in which SEQ reached a 75% average efficacy versus 43% with STT.\nReporting on adverse events (AEs) (RD 0.00, 95% CI -0.02 to 0.02; participants = 8103; studies = 27; I² = 26%, based on high-quality evidence) showed no significant differences between SEQ and STT (20.4% vs 19.5%, respectively) and results were homogeneous.\nThe quality of the studies was limited due to a lack of systematic reporting of the factors affecting risk of bias. Although randomisation was reported, its methodology (e.g. algorithms, number of blocks) was not specified in several studies. Additionally, the other 'Risk of bias' domains (such as allocation concealment of the sequence randomisation, or blinding during either performance or outcome assessment) were also unreported.\nHowever, subgroup analyses as well as sensitivity analyses or funnel plots indicated that treatment outcomes were not influenced by the quality of the included studies. On the other hand, we rated 'length of STT' and AEs for the main outcome as high-quality according to GRADE classification; but we downgraded 'publication date' quality to moderate, and 'geographic region' and 'antibiotic resistance' to low- and very low-quality, respectively.\nAuthors' conclusions\nOur meta-analysis indicates that prior to 2008 SEQ was more effective than STT, especially when STT was given for only seven days. Nevertheless, the apparent advantage of sequential treatment has decreased over time, and more recent studies do not show SEQ to have a higher efficacy versus STT when STT is given for 10 days.\nBased on the results of this meta-analysis, although SEQ offers an advantage when compared with STT, it cannot be presented as a valid alternative, given that neither SEQ nor STT regimens achieved optimal efficacy ( ≥ 90% eradication rate).\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Gastric ulcer and cancer are mainly caused by infection with the bacteria Helicobacter pylori, a harmful micro-organism able to colonise the human stomach. Published data seem to indicate that this bacteria is present in nearly half of the world's population. The bacterial colonisation leads to a chronic infection that, over time, may alter the stomach's function, tissue structure, and even cell cycle, being able to produce a variety of symptoms and diseases.\nAlthough this micro-organism may respond to traditional antibiotics, it has a strong resistance to treatment, and in a high percentage of cases can survive most single and double therapies. Different combinations of antibiotics have therefore been used, and the best treatment is still unclear. The most commonly recommended one is the standard triple therapy (STT), containing two antibiotics (clarithromycin, and a nitroimidazole or amoxicillin) and a stomach protector (omeprazole). However, several studies have demonstrated that STT fails in more than one in five people, so investigators proposed replacing it with a non-bismuth quadruple sequential sequential (SEQ) treatment, containing a first phase with a dual therapy (amoxicillin and omeprazole), followed by a triple-therapy phase (nitroimidazole, clarithromycin and omeprazole)."
}
]
|
query-laysum | 2283 | [
{
"role": "user",
"content": "Abstract: Background\nMarfan syndrome is a hereditary disorder affecting the connective tissue and is caused by a mutation of the fibrillin-1 (FBN1) gene. It affects multiple systems of the body, most notably the cardiovascular, ocular, skeletal, dural and pulmonary systems. Aortic root dilatation is the most frequent cardiovascular manifestation and its complications, including aortic regurgitation, dissection and rupture are the main cause of morbidity and mortality.\nObjectives\nTo assess the long-term efficacy and safety of beta-blocker therapy as compared to placebo, no treatment or surveillance only in people with Marfan syndrome.\nSearch methods\nWe searched the following databases on 28 June 2017; CENTRAL, MEDLINE, Embase, Science Citation Index Expanded and the Conference Proceeding Citation Index - Science in the Web of Science Core Collection. We also searched the Online Metabolic and Molecular Bases of Inherited Disease (OMMBID), ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) on 30 June 2017. We did not impose any restriction on language of publication.\nSelection criteria\nAll randomised controlled trials (RCTs) of at least one year in duration assessing the effects of beta-blocker monotherapy compared with placebo, no treatment or surveillance only, in people of all ages with a confirmed diagnosis of Marfan syndrome were eligible for inclusion.\nData collection and analysis\nTwo review authors independently screened titles and abstracts for inclusion, extracted data and assessed trial quality. Trial authors were contacted to obtain missing data. Dichotomous outcomes will be reported as relative risk and continuous outcomes as mean differences with 95% confidence intervals. We assessed the quality of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nOne open-label, randomised, single-centre trial including 70 participants with Marfan syndrome (aged 12 to 50 years old) met the inclusion criteria. Participants were randomly assigned to propranolol (N = 32) or no treatment (N = 38) for an average duration of 9.3 years in the control group and 10.7 years in the treatment group. The initial dose of propranolol was 10 mg four times daily and the optimal dose was reached when the heart rate remained below 100 beats per minute during exercise or the systolic time interval increased by 30%. The mean (± standard error (SE)) optimal dose of propranolol was 212 ± 68 mg given in four divided doses daily.\nBeta-blocker therapy did not reduce the incidence of all-cause mortality (RR 0.24, 95% CI 0.01 to 4.75; participants = 70; low-quality evidence). Mortality attributed to Marfan syndrome was not reported. Non-fatal serious adverse events were also not reported. However, study authors report on pre-defined, non-fatal clinical endpoints, which include aortic dissection, aortic regurgitation, cardiovascular surgery and congestive heart failure. 
Their analysis showed no difference between the treatment and control groups in these outcomes (RR 0.79, 95% CI 0.37 to 1.69; participants = 70; low-quality evidence).\nBeta-blocker therapy did not reduce the incidence of aortic dissection (RR 0.59, 95% CI 0.12 to 3.03), aortic regurgitation (RR 1.19, 95% CI 0.18 to 7.96), congestive heart failure (RR 1.19, 95% CI 0.18 to 7.96) or cardiovascular surgery, (RR 0.59, 95% CI 0.12 to 3.03); participants = 70; low-quality evidence.\nThe study reports a reduced rate of aortic dilatation measured by M-mode echocardiography in the treatment group (aortic ratio mean slope: 0.084 (control) vs 0.023 (treatment), P < 0.001). The change in systolic and diastolic blood pressure, total adverse events and withdrawal due to adverse events were not reported in the treatment or control group at study end point.\nWe judged this study to be at high risk of selection (allocation concealment) bias, performance bias, detection bias, attrition bias and selective reporting bias. The overall quality of evidence was low. We do not know whether a statistically significant reduced rate of aortic dilatation translates into clinical benefit in terms of aortic dissection or mortality.\nAuthors' conclusions\nBased on only one, low-quality RCT comparing long-term propranolol to no treatment in people with Marfan syndrome, we could draw no definitive conclusions for clinical practice. High-quality, randomised trials are needed to evaluate the long-term efficacy of beta-blocker treatment in people with Marfan syndrome. Future trials should report on all clinically relevant end points and adverse events to evaluate benefit versus harm of therapy.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included one study of 70 participants aged 12 to 50 years old with Marfan syndrome, who were assigned to either a beta-blocker called propranolol or no treatment for an average duration of 9.3 years in the control (no treatment) group and 10.7 years in the treatment group."
}
]
|
query-laysum | 2284 | [
{
"role": "user",
"content": "Abstract: Background\nDrug- and alcohol-related impairment in the workplace has been linked to an increased risk of injury for workers. Randomly testing populations of workers for these substances has become a practice in many jurisdictions, with the intention of reducing the risk of workplace incidents and accidents. Despite the proliferation of random drug and alcohol testing (RDAT), there is currently a lack of consensus about whether it is effective at preventing workplace injury, or improving other non-injury accident outcomes in the work place.\nObjectives\nTo assess the effectiveness of workplace RDAT to prevent injuries and improve non-injury accident outcomes (unplanned events that result in damage or loss of property) in workers compared with no workplace RDAT.\nSearch methods\nWe conducted a systematic literature search to identify eligible published and unpublished studies. The date of the last search was 1 November 2020. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, two other databases, Google Scholar, and three trials registers. We also screened the reference lists of relevant publications known to us.\nSelection criteria\nStudy designs that were eligible for inclusion in our review included randomised controlled trials (RCTs), cluster-randomised trials (CRTs), interrupted time-series (ITS) studies, and controlled before-after (CBA) studies. Studies needed to evaluate the effectiveness of RDAT in preventing workplace injury or improving other non-injury workplace outcomes. We also considered unpublished data from clinical trial registries. We included employees working in all safety-sensitive occupations, except for commercial drivers, who are the subject of another Cochrane Review.\nData collection and analysis\nIndependently, two review authors used a data collection form to extract relevant characteristics from the included study. They then analysed a line graph included in the study of the prevalence rate of alcohol violations per year. Independently, the review authors completed a GRADE assessment, as a means of rating the quality of the evidence.\nMain results\nAlthough our searching originally identified 4198 unique hits, only one study was eligible for inclusion in this review. This was an ITS study that measured the effect of random alcohol testing (RAT) on the test positivity rate of employees of major airlines in the USA from 1995 to 2002. The study included data from 511,745 random alcohol tests, and reported no information about testing for other substances. The rate of positive results was the only outcome of interest reported by the study.\nThe average rate of positive results found by RAT increased from 0.07% to 0.11% when the minimum percentage of workers who underwent RAT annually was reduced from 25% to 10%. Our analyses found this change to be a statistically significant increase (estimated change in level, where the level reflects the average percentage points of positive tests = 0.040, 95% confidence interval 0.005 to 0.075; P = 0.031). Our GRADE assessment, for the observed effect of lower minimum testing percentages associating with a higher rate of positive test results, found the quality of the evidence to be 'very low' across the five GRADE domains. 
The one included study did not address the following outcomes of interest: fatal injuries; non-fatal injuries; non-injury accidents; absenteeism; and adverse effects associated with RDAT.\nAuthors' conclusions\nIn the aviation industry in the USA, the only setting for which the eligible study reported data, there was a statistically significant increase in the rate of positive RAT results following a reduction in the percentage of workers tested, which we deem to be clinically relevant. This result suggests an inverse relationship between the proportion of positive test results and the rate of testing, which is consistent with a deterrent effect for testing. No data were reported on adverse effects related to RDAT.\nWe could not draw definitive conclusions regarding the effectiveness of RDAT for employees in safety-sensitive occupations (not including commercial driving), or with safety-sensitive job functions. We identified only one eligible study that reflected one industry in one country, was of non-randomised design, and tested only for alcohol, not for drugs or other substances. Our GRADE assessment resulted in a 'very low' rating for the quality of the evidence on the only outcome reported. The paucity of eligible research was a major limitation in our review, and additional studies evaluating the effect of RDAT on safety outcomes are needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Two of our team rated our confidence in the evidence, based on factors such as number of studies, study size and methods. Overall, our confidence in the evidence was very low. This means we cannot rely on what this study reported to make generalisations about the effectiveness of random alcohol testing alone, or random alcohol testing combined with drug testing (RDAT), in the workplace. We need researchers to do more studies to find the answers."
}
]
|
query-laysum | 2285 | [
{
"role": "user",
"content": "Abstract: Background\nViral epidemics or pandemics of acute respiratory infections (ARIs) pose a global threat. Examples are influenza (H1N1) caused by the H1N1pdm09 virus in 2009, severe acute respiratory syndrome (SARS) in 2003, and coronavirus disease 2019 (COVID-19) caused by SARS-CoV-2 in 2019. Antiviral drugs and vaccines may be insufficient to prevent their spread. This is an update of a Cochrane Review last published in 2020. We include results from studies from the current COVID-19 pandemic.\nObjectives\nTo assess the effectiveness of physical interventions to interrupt or reduce the spread of acute respiratory viruses.\nSearch methods\nWe searched CENTRAL, PubMed, Embase, CINAHL, and two trials registers in October 2022, with backwards and forwards citation analysis on the new studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster-RCTs investigating physical interventions (screening at entry ports, isolation, quarantine, physical distancing, personal protection, hand hygiene, face masks, glasses, and gargling) to prevent respiratory virus transmission.\nData collection and analysis\nWe used standard Cochrane methodological procedures.\nMain results\nWe included 11 new RCTs and cluster-RCTs (610,872 participants) in this update, bringing the total number of RCTs to 78. Six of the new trials were conducted during the COVID-19 pandemic; two from Mexico, and one each from Denmark, Bangladesh, England, and Norway. We identified four ongoing studies, of which one is completed, but unreported, evaluating masks concurrent with the COVID-19 pandemic.\nMany studies were conducted during non-epidemic influenza periods. Several were conducted during the 2009 H1N1 influenza pandemic, and others in epidemic influenza seasons up to 2016. Therefore, many studies were conducted in the context of lower respiratory viral circulation and transmission compared to COVID-19. The included studies were conducted in heterogeneous settings, ranging from suburban schools to hospital wards in high-income countries; crowded inner city settings in low-income countries; and an immigrant neighbourhood in a high-income country. Adherence with interventions was low in many studies.\nThe risk of bias for the RCTs and cluster-RCTs was mostly high or unclear.\nMedical/surgical masks compared to no masks\nWe included 12 trials (10 cluster-RCTs) comparing medical/surgical masks versus no masks to prevent the spread of viral respiratory illness (two trials with healthcare workers and 10 in the community). Wearing masks in the community probably makes little or no difference to the outcome of influenza-like illness (ILI)/COVID-19 like illness compared to not wearing masks (risk ratio (RR) 0.95, 95% confidence interval (CI) 0.84 to 1.09; 9 trials, 276,917 participants; moderate-certainty evidence. Wearing masks in the community probably makes little or no difference to the outcome of laboratory-confirmed influenza/SARS-CoV-2 compared to not wearing masks (RR 1.01, 95% CI 0.72 to 1.42; 6 trials, 13,919 participants; moderate-certainty evidence). Harms were rarely measured and poorly reported (very low-certainty evidence).\nN95/P2 respirators compared to medical/surgical masks\nWe pooled trials comparing N95/P2 respirators with medical/surgical masks (four in healthcare settings and one in a household setting). 
We are very uncertain on the effects of N95/P2 respirators compared with medical/surgical masks on the outcome of clinical respiratory illness (RR 0.70, 95% CI 0.45 to 1.10; 3 trials, 7779 participants; very low-certainty evidence). N95/P2 respirators compared with medical/surgical masks may be effective for ILI (RR 0.82, 95% CI 0.66 to 1.03; 5 trials, 8407 participants; low-certainty evidence). Evidence is limited by imprecision and heterogeneity for these subjective outcomes. The use of N95/P2 respirators compared to medical/surgical masks probably makes little or no difference for the objective and more precise outcome of laboratory-confirmed influenza infection (RR 1.10, 95% CI 0.90 to 1.34; 5 trials, 8407 participants; moderate-certainty evidence). Restricting pooling to healthcare workers made no difference to the overall findings. Harms were poorly measured and reported, but discomfort wearing medical/surgical masks or N95/P2 respirators was mentioned in several studies (very low-certainty evidence).\nOne previously reported ongoing RCT has now been published and observed that medical/surgical masks were non-inferior to N95 respirators in a large study of 1009 healthcare workers in four countries providing direct care to COVID-19 patients.\nHand hygiene compared to control\nNineteen trials compared hand hygiene interventions with controls with sufficient data to include in meta-analyses. Settings included schools, childcare centres and homes. Comparing hand hygiene interventions with controls (i.e. no intervention), there was a 14% relative reduction in the number of people with ARIs in the hand hygiene group (RR 0.86, 95% CI 0.81 to 0.90; 9 trials, 52,105 participants; moderate-certainty evidence), suggesting a probable benefit. In absolute terms this benefit would result in a reduction from 380 events per 1000 people to 327 per 1000 people (95% CI 308 to 342). When considering the more strictly defined outcomes of ILI and laboratory-confirmed influenza, the estimates of effect for ILI (RR 0.94, 95% CI 0.81 to 1.09; 11 trials, 34,503 participants; low-certainty evidence), and laboratory-confirmed influenza (RR 0.91, 95% CI 0.63 to 1.30; 8 trials, 8332 participants; low-certainty evidence), suggest the intervention made little or no difference. We pooled 19 trials (71,210 participants) for the composite outcome of ARI or ILI or influenza, with each study only contributing once and the most comprehensive outcome reported. Pooled data showed that hand hygiene may be beneficial with an 11% relative reduction of respiratory illness (RR 0.89, 95% CI 0.83 to 0.94; low-certainty evidence), but with high heterogeneity. In absolute terms this benefit would result in a reduction from 200 events per 1000 people to 178 per 1000 people (95% CI 166 to 188). Few trials measured and reported harms (very low-certainty evidence).\nWe found no RCTs on gowns and gloves, face shields, or screening at entry ports.\nAuthors' conclusions\nThe high risk of bias in the trials, variation in outcome measurement, and relatively low adherence with the interventions during the studies hamper drawing firm conclusions. 
Additional RCTs related to physical interventions were conducted during the pandemic, but relatively few given the importance of the question of masking and its relative effectiveness, and of the concomitant measures of mask adherence, which would be highly relevant to the measurement of effectiveness, especially in the elderly and in young children.\nThere is uncertainty about the effects of face masks. The low to moderate certainty of evidence means that our confidence in the effect estimate is limited and that the true effect may be different from the observed estimate of the effect. The pooled results of RCTs did not show a clear reduction in respiratory viral infection with the use of medical/surgical masks. There were no clear differences between the use of medical/surgical masks compared with N95/P2 respirators in healthcare workers when used in routine care to reduce respiratory viral infection. Hand hygiene is likely to modestly reduce the burden of respiratory illness, and although this effect was also present when ILI and laboratory-confirmed influenza were analysed separately, the effect was not statistically significant for these two outcomes. Harms associated with physical interventions were under-investigated.\nThere is a need for large, well-designed RCTs addressing the effectiveness of many of these interventions in multiple settings and populations, as well as the impact of adherence on effectiveness, especially in those most at risk of ARIs.\n\nGiven the provided abstract, please respond to the following query: \"How do respiratory viruses spread?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Respiratory viruses are viruses that infect the cells in your airways: nose, throat, and lungs. These infections can cause serious problems and affect normal breathing. They can cause flu (influenza), severe acute respiratory syndrome (SARS), and COVID-19.\nPeople infected with a respiratory virus spread virus particles into the air when they cough or sneeze. Other people become infected if they come into contact with these virus particles in the air or on surfaces on which they land. Respiratory viruses can spread quickly through a community, through populations and countries (causing epidemics), and around the world (causing pandemics).\nPhysical measures to try to prevent respiratory viruses spreading between people include:\n· washing hands often;\n· not touching your eyes, nose, or mouth;\n· sneezing or coughing into your elbow;\n· wiping surfaces with disinfectant;\n· wearing masks, eye protection, gloves, and protective gowns;\n· avoiding contact with other people (isolation or quarantine);\n· keeping a certain distance away from other people (distancing); and\n· examining people entering a country for signs of infection (screening)."
}
]
|
query-laysum | 2286 | [
{
"role": "user",
"content": "Abstract: Background\nPremenstrual syndrome (PMS) is a psychological and somatic disorder of unknown aetiology, with symptoms typically including irritability, depression, mood swings, bloating, breast tenderness and sleep disturbances. About 3% to 10% of women who experience these symptoms may also meet criteria for premenstrual dysphoric disorder (PMDD). PMS symptoms recur during the luteal phase of the menstrual cycle and reduce by the end of menstruation. PMS results from ovulation and may be due to ovarian steroid interactions relating to neurotransmitter dysfunction. Premenstrual disorders have a devastating effect on women, their families and their work.\nSeveral treatment options have been suggested for PMS, including pharmacological and surgical interventions. The treatments thought to be most effective tend to fall into one of two categories: suppressing ovulation or correcting a speculated neuroendocrine anomaly.\nTransdermal oestradiol by patch, gel or implant effectively stops ovulation and the cyclical hormonal changes which produce the cyclical symptoms. These preparations are normally used for hormone therapy and contain lower doses of oestrogen than found in oral contraceptive pills. A shortened seven-day course of a progestogen is required each month for endometrial protection but can reproduce premenstrual syndrome-type symptoms in these women.\nObjectives\nTo determine the effectiveness and safety of non-contraceptive oestrogen-containing preparations in the management of PMS.\nSearch methods\nOn 14 March 2016, we searched the following databases: the Cochrane Gynaecology and Fertility Group (CGF) Specialised Register; Cochrane Central Register of Studies (CRSO); MEDLINE; Embase; PsycINFO; CINAHL; ClinicalTrials.gov; metaRegister of Controlled trials (mRCT); and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) Search Portal. In addition, we checked the reference lists of articles retrieved.\nSelection criteria\nWe included published and unpublished randomized placebo or active controlled trials on the efficacy of the use of non-contraceptive oestrogen-containing preparations in the management of premenstrual syndrome in women of reproductive age with PMS diagnosed by at least two prospective cycles without current psychiatric disorder.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias, extracted data on premenstrual symptoms and adverse effects and entered data into Review Manager 5 software. Where possible, intention-to-treat or modified intention-to-treat analysis was used. Studies were pooled using a fixed-effect model, analysing cross-over trials as parallel trials. Standardised mean differences (SMDs) with 95% confidence intervals (CIs) were calculated for premenstrual symptom scores. Risk ratios (RRs) with 95% confidence intervals (CIs) were calculated for dichotomous outcomes. The overall quality of the evidence was assessed using the GRADE working group methods.\nMain results\nThe search resulted in 524 potentially relevant articles. Five eligible randomized controlled trials (RCTs) were identified (305 women). Trials using oral tablets, transdermal patches and implants were identified. No trial used gels.\nOne small cross-over trial (11 women, effective sample size 22 women considering cross-over trials) compared oral luteal-phase oestrogen versus placebo. 
Data were very low quality and unsuitable for analysis, but study authors reported that the intervention was ineffective and might aggravate the symptoms of PMS. They also reported that there were no adverse events.\nThree studies compared continuous oestrogen with progestogen versus placebo (with or without progestogen). These trials were of reasonable quality, although with a high risk of attrition bias and an unclear risk of bias due to potential carry-over effects in two cross-over trials. Continuous oestrogen had a small to moderate positive effect on global symptom scores (SMD −0.34, 95% CI −0.59 to −0.10, P = 0.005, 3 RCTs, 158 women, effective sample size 267 women, I² = 63%, very low quality evidence). The evidence was too imprecise to determine if the groups differed in withdrawal rates due to adverse effects (RR 0.64, 95% CI 0.26 to 1.58, P = 0.33, 3 RCTs, 196 women, effective sample size 284 women, I² = 0%, very low quality evidence). Similarly, the evidence was very imprecise in measures of specific adverse events, with large uncertainties around the true value of the relative risk. None of the studies reported on long-term risks such as endometrial cancer or breast cancer.\nOne study compared patch dosage (100 vs 200 µg oestrogen, with progestogen in both arms) and had a high risk of performance bias, detection bias and attrition bias. The study did not find evidence that dosage affects global symptoms but there was much uncertainty around the effect estimate (SMD −1.55, 95% CI −8.88 to 5.78, P = 0.68, 1 RCT, 98 women, very low quality evidence). The evidence on rates of withdrawal for adverse events was too imprecise to draw any conclusions (RR 0.70, 95% CI 0.34 to 1.46, P = 0.34, 1 RCT, 107 women, low-quality evidence). However, it appeared that the 100 µg dose might be associated with a lower overall risk of adverse events attributed to oestrogen (RR 0.51, 95% Cl 0.26 to 0.99, P = 0.05, 1 RCT, 107 women, very low quality evidence) with a large uncertainty around the effect estimate.\nThe overall quality of the evidence for all comparisons was very low, mainly due to risk of bias (specifically attrition), imprecision, and statistical and clinical heterogeneity.\nAuthors' conclusions\nWe found very low quality evidence to support the effectiveness of continuous oestrogen (transdermal patches or subcutaneous implants) plus progestogen, with a small to moderate effect size. We found very low quality evidence from a study based on 11 women to suggest that luteal-phase oral unopposed oestrogen is probably ineffective and possibly detrimental for controlling the symptoms of PMS. A comparison between 200 µg and 100 µg doses of continuous oestrogen was inconclusive with regard to effectiveness, but suggested that the lower dose was less likely to cause side effects. Uncertainty remains regarding safety, as the identified studies were too small to provide definite answers. Moreover, no included trial addressed adverse effects that might occur beyond the typical trial duration of 2-8 months. This suggests the choice of oestrogen dose and mode of administration could be based on an individual woman’s preference and modified according to the effectiveness and tolerability of the chosen regimen.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Oestrogen is widely used to suppress ovulation, mainly as a contraceptive. This is the first systematic review aiming to evaluate the effectiveness and safety of non-contraceptive oestrogen-containing preparations (oral, patch, implant and gel) in controlling symptoms of premenstrual syndrome (PMS)."
}
]
|
query-laysum | 2287 | [
{
"role": "user",
"content": "Abstract: Background\nElbow supracondylar fractures are common, with treatment decisions based on fracture displacement. However, there remains controversy regarding the best treatments for this injury.\nObjectives\nTo assess the effects (benefits and harms) of interventions for treating supracondylar elbow fractures in children.\nSearch methods\nWe searched CENTRAL, MEDLINE, and Embase in March 2021. We also searched trial registers and reference lists. We applied no language or publication restrictions.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials comparing different interventions for the treatment of supracondylar elbow fractures in children. We included studies investigating surgical interventions (different fixation techniques and different reduction techniques), surgical versus non-surgical treatment, traction types, methods of non-surgical intervention, and timing and location of treatment.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We collected data and conducted GRADE assessment for five critical outcomes: functional outcomes, treatment failure (requiring re-intervention), nerve injury, major complications (pin site infection in most studies), and cosmetic deformity (cubitus varus).\nMain results\nWe included 52 trials with 3594 children who had supracondylar elbow fractures; most were Gartland 2 and 3 fractures. The mean ages of children ranged from 4.9 to 8.4 years and the majority of participants were boys. Most studies (33) were conducted in countries in South-East Asia.\nWe identified 12 different comparisons of interventions: retrograde lateral wires versus retrograde crossed wires; lateral crossed (Dorgan) wires versus retrograde crossed wires; retrograde lateral wires versus lateral crossed (Dorgan) wires; retrograde crossed wires versus posterior intrafocal wires; retrograde lateral wires in a parallel versus divergent configuration; retrograde crossed wires using a mini-open technique or inserted percutaneously; buried versus non-buried wires; external versus internal fixation; open versus closed reduction; surgical fixation versus non-surgical immobilisation; skeletal versus skin traction; and collar and cuff versus backslab.\nWe report here the findings of four comparisons that represent the most substantial body of evidence for the most clinically relevant comparisons.\nAll studies in these four comparisons had unclear risks of bias in at least one domain. We downgraded the certainty of all outcomes for serious risks of bias, for imprecision when evidence was derived from a small sample size or had a wide confidence interval (CI) that included the possibility of benefits or harms for both treatments, and when we detected the possibility of publication bias.\nRetrograde lateral wires versus retrograde crossed wires (29 studies, 2068 children)\nThere was low-certainty evidence of less nerve injury with retrograde lateral wires (RR 0.65, 95% CI 0.46 to 0.90; 28 studies, 1653 children). In a post hoc subgroup analysis, we noted a greater difference in the number of children with nerve injuries when lateral wires were compared to crossed wires inserted with a percutaneous medial wire technique (RR 0.41, 95% CI 0.20 to 0.81, favours lateral wires; 10 studies, 552 children), but little difference when an open technique was used (RR 0.91, 95% CI 0.59 to 1.40, favours lateral wires; 11 studies, 656 children). 
Although we noted a statistically significant difference between these subgroups from the interaction test (P = 0.05), we could not rule out the possibility that other factors could account for this difference.\nWe found little or no difference between the interventions in major complications, which were described as pin site infections in all studies (RR 1.08, 95% CI 0.65 to 1.79; 19 studies, 1126 children; low-certainty evidence). For functional status (1 study, 35 children), treatment failure requiring re-intervention (1 study, 60 children), and cosmetic deformity (2 studies, 95 children), there was very low-certainty evidence showing no evidence of a difference between interventions.\nOpen reduction versus closed reduction (4 studies, 295 children)\nType of reduction method may make little or no difference to nerve injuries (RR 0.30, 95% CI 0.09 to 1.01, favours open reduction; 3 studies, 163 children). However, there may be fewer major complications (pin site infections) when closed reduction is used (RR 4.15, 95% CI 1.07 to 16.20; 4 studies, 253 children). The certainty of the evidence for these outcomes is low. No studies reported functional outcome, treatment failure requiring re-intervention, or cosmetic deformity. The four studies in this comparison used direct visualisation during surgery. One additional study used a joystick technique for reduction, and we did not combine data from this study in analyses.\nSurgical fixation using wires versus non-surgical immobilisation using a cast (3 studies, 140 children)\nThere was very low-certainty evidence showing little or no difference between interventions for treatment failure requiring re-intervention (1 study, 60 children), nerve injury (3 studies, 140 children), major complications (3 studies, 126 children), and cosmetic deformity (2 studies, 80 children). No studies reported functional outcome.\nBackslab versus sling (1 study, 50 children)\nNo nerve injuries or major complications were experienced by children in either group; this evidence is of very low certainty. Functional outcome, treatment failure, and cosmetic deformity were not reported.\nAuthors' conclusions\nWe found insufficient evidence for many treatments of supracondylar fractures. Fixation of displaced supracondylar fractures with retrograde lateral wires compared with crossed wires provided the most substantial body of evidence in this review, and our findings indicate that there may be a lower risk of nerve injury with retrograde lateral wires. In future trials of treatments, we would encourage the adoption of a core outcome set, which includes patient-reported measures. Evaluation of the effectiveness of traction compared with surgical fixation would provide a valuable addition to this clinical field.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out:\n- which types of treatments work best to heal the bone effectively; and\n- whether these treatments are associated with any unwanted effects."
}
]
|
query-laysum | 2288 | [
{
"role": "user",
"content": "Abstract: Background\nMetastatic breast cancer is not a curable disease, but women with metastatic disease are living longer. Surgery to remove the primary tumour is associated with an increased survival in other types of metastatic cancer. Breast surgery is not standard treatment for metastatic disease, however several recent retrospective studies have suggested that breast surgery could increase the women's survival. These studies have methodological limitations including selection bias. A systematic review mapping all randomised controlled trials addressing the benefits and potential harms of breast surgery is ideal to answer this question.\nObjectives\nTo assess the effects of breast surgery in women with metastatic breast cancer.\nSearch methods\nWe conducted searches using the MeSH terms 'breast neoplasms', 'mastectomy', and 'analysis, survival' in the following databases: the Cochrane Breast Cancer Specialised Register, CENTRAL, MEDLINE (by PubMed) and Embase (by OvidSP) on 22 February 2016. We also searched ClinicalTrials.gov (22 February 2016) and the WHO International Clinical Trials Registry Platform (24 February 2016). We conducted an additional search in the American Society of Clinical Oncology (ASCO) conference proceedings in July 2016 that included reference checking, citation searching, and contacting study authors to identify additional studies.\nSelection criteria\nThe inclusion criteria were randomised controlled trials of women with metastatic breast cancer at initial diagnosis comparing breast surgery plus systemic therapy versus systemic therapy alone. The primary outcomes were overall survival and quality of life. Secondary outcomes were progression-free survival (local and distant control), breast cancer-specific survival, and toxicity from local therapy.\nData collection and analysis\nTwo review authors independently conducted trial selection, data extraction, and 'Risk of bias' assessment (using Cochrane's 'Risk of bias' tool), which a third review author checked. We used the GRADE tool to assess the quality of the body of evidence. We used the risk ratio (RR) to measure the effect of treatment for dichotomous outcomes and the hazard ratio (HR) for time-to-event outcomes. We calculated 95% confidence intervals (CI) for these measures. We used the random-effects model, as we expected clinical or methodological heterogeneity, or both, among the included studies.\nMain results\nWe included two trials enrolling 624 women in the review. It is uncertain whether breast surgery improves overall survival as the quality of the evidence has been assessed as very low (HR 0.83, 95% CI 0.53 to 1.31; 2 studies; 624 women). The two studies did not report quality of life. Breast surgery may improve local progression-free survival (HR 0.22, 95% CI 0.08 to 0.57; 2 studies; 607 women; low-quality evidence), while it probably worsened distant progression-free survival (HR 1.42, 95% CI 1.08 to 1.86; 1 study; 350 women; moderate-quality evidence). The two included studies did not measure breast cancer-specific survival. Toxicity from local therapy was reported by 30-day mortality and did not appear to differ between the two groups (RR 0.99, 95% CI 0.14 to 6.90; 1 study; 274 women; low-quality evidence).\nAuthors' conclusions\nBased on existing evidence from two randomised clinical trials, it is not possible to make definitive conclusions on the benefits and risks of breast surgery associated with systemic treatment for women diagnosed with metastatic breast cancer. 
Until the ongoing clinical trials are finalised, the decision to perform breast surgery in these women should be individualised and shared between the physician and the patient considering the potential risks, benefits, and costs of each intervention.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is not possible to make definitive conclusions about the benefits of breast surgery associated with medical treatment for women with metastatic breast cancer. The decision to perform surgery in such cases should be individualised and shared between the physician and the patient, considering the potential risks and benefits involved in this choice. The inclusion of results of ongoing trials involving women with these characteristics in the next update of this review will help to decrease existing uncertainties."
}
]
|
query-laysum | 2289 | [
{
"role": "user",
"content": "Abstract: Background\nThe treatment of choice for moderate to severe obstructive sleep apnoea (OSA) is continuous positive airways pressure (CPAP) applied via a mask during sleep. However, this is not tolerated by all individuals and its role in mild OSA is not proven. Drug therapy has been proposed as an alternative to CPAP in some patients with mild to moderate sleep apnoea and could be of value in patients intolerant of CPAP. A number of mechanisms have been proposed by which drugs could reduce the severity of OSA. These include an increase in tone in the upper airway dilator muscles, an increase in ventilatory drive, a reduction in the proportion of rapid eye movement (REM) sleep, an increase in cholinergic tone during sleep, an increase in arousal threshold, a reduction in airway resistance and a reduction in surface tension in the upper airway.\nObjectives\nTo determine the efficacy of drug therapies in the specific treatment of sleep apnoea.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register of trials. Searches were current as of July 2012.\nSelection criteria\nRandomised, placebo controlled trials involving adult patients with confirmed OSA. We excluded trials if continuous positive airways pressure, mandibular devices or oxygen therapy were used. We excluded studies investigating treatment of associated conditions such as excessive sleepiness, hypertension, gastro-oesophageal reflux disease and obesity.\nData collection and analysis\nWe used standard methodological procedures recommended by The Cochrane Collaboration.\nMain results\nThirty trials of 25 drugs, involving 516 participants, contributed data to the review. Drugs had several different proposed modes of action and the results were grouped accordingly in the review. Each of the studies stated that the participants had OSA but diagnostic criteria were not always explicit and it was possible that some patients with central apnoeas may have been recruited.\nAcetazolamide, eszopiclone, naltrexone, nasal lubricant (phosphocholinamine) and physiostigmine were administered for one to two nights only. Donepezil in patients with and without Alzheimer's disease, fluticasone in patients with allergic rhinitis, combinations of ondansetrone and fluoxetine and paroxetine were trials of one to three months duration, however most of the studies were small and had methodological limitations. The overall quality of the available evidence was low.\nThe primary outcomes for the systematic review were the apnoea hypopnoea index (AHI) and the level of sleepiness associated with OSA, estimated by the Epworth Sleepiness Scale (ESS). AHI was reported in 25 studies and of these 10 showed statistically significant reductions in AHI.\nFluticasone in patients with allergic rhinitis was well tolerated and reduced the severity of sleep apnoea compared with placebo (AHI 23.3 versus 30.3; P < 0.05) and improved subjective daytime alertness. Excessive sleepiness was reported to be altered in four studies, however the only clinically and statistically significant change in ESS of -2.9 (SD 2.9; P = 0.04) along with a small but statistically significant reduction in AHI of -9.4 (SD 17.2; P = 0.03) was seen in patients without Alzheimer's disease receiving donepezil for one month. 
In 23 patients with mild to moderate Alzheimer's disease donepezil led to a significant reduction in AHI (donepezil 20 (SD 15) to 9.9 (SD 11.5) versus placebo 23.2 (SD 26.4) to 22.9 (SD 28.8); P = 0.035) after three months of treatment but no reduction in sleepiness was reported. High dose combined treatment with ondansetron 24 mg and fluoxetine 10 mg showed a 40.5% decrease in AHI from the baseline at treatment day 28. Paroxetine was shown to reduce AHI compared to placebo (-6.10 events/hour; 95% CI -11.00 to -1.20) but failed to improve daytime symptoms.\nPromising results from the preliminary mirtazapine study failed to be reproduced in the two more recent multicentre trials and, moreover, the use of mirtazapine was associated with significant weight gain and sleepiness. Few data were presented on the long-term tolerability of any of the compounds used.\nAuthors' conclusions\nThere is insufficient evidence to recommend the use of drug therapy in the treatment of OSA. Small studies have reported positive effects of certain agents on short-term outcomes. Certain agents have been shown to reduce the AHI in largely unselected populations with OSA by between 24% and 45%. For donepezil and fluticasone, studies of longer duration with a larger population and better matching of groups are required to establish whether the change in AHI and impact on daytime symptoms are reproducible. Individual patients had more complete responses to particular drugs. It is possible that better matching of drugs to patients according to the dominant mechanism of their OSA will lead to better results and this also needs further study.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Of 25 drugs tested, 10 had some impact on the severity of OSA (as measured by the apnoea hypopnoea index or AHI) and four altered symptoms of sleepiness, although in most people changes were only modest.\nEszopiclone, acetazolamide, naltrexone, physiostigmine and nasal lubricant (phosphocholinamine) were trialled for one to two nights only and the long-term effects are therefore unknown. A topical nasal steroid was well tolerated and reduced the severity of sleep apnoea and improved subjective daytime alertness in a specific subgroup of people with both OSA and rhinitis. Acetazolamide reduced the number of respiratory events per hour of sleep but did not reduce daytime sleepiness and was poorly tolerated long term. Paroxetine had only a small effect on the severity of OSA and while it was tolerated there was no useful effect on daytime symptoms. In contrast, participants reported a symptomatic benefit from protriptyline but there was no improvement in OSA, suggesting a different mechanism for their improved sense of well-being. The antidepressant mirtazapine improved the severity of sleep apnoea in older trials, however two more recent, larger trials failed to demonstrated the same results; moreover, participants experienced side effects of sleepiness and weight gain. Donepezil, well established in the treatment of Alzheimer's disease, improved oxygen saturation and the severity of OSA in patients with and without dementia. In people with no dementia it was shown to reduce sleepiness but the study was small and neither the AHI nor ESS were normalised. Further trials are needed to confirm these results. Eszopliclone reduced the severity of OSA in selected patients nonetheless larger scale trials are needed to verify the results."
}
]
|
query-laysum | 2290 | [
{
"role": "user",
"content": "Abstract: Background\nAround 16 million cases of whooping cough (pertussis) occur worldwide each year, mostly in low-income countries. Much of the morbidity of whooping cough in children and adults is due to the effects of the paroxysmal cough. Cough treatments proposed include corticosteroids, beta2-adrenergic agonists, pertussis-specific immunoglobulin, antihistamines and possibly leukotriene receptor antagonists (LTRAs).\nObjectives\nTo assess the effectiveness and safety of interventions to reduce the severity of paroxysmal cough in whooping cough in children and adults.\nSearch methods\nWe updated our searches of the Cochrane Central Register of Controlled Trials (CENTRAL, 2014, Issue 1), which contains the Cochrane Acute Respiratory Infections Group's Specialised Register, the Database of Abstracts of Reviews of Effects (DARE 2014, Issue 2), accessed from The Cochrane Library, MEDLINE (1950 to 30 January 2014), EMBASE (1980 to 30 January 2014), AMED (1985 to 30 January 2014), CINAHL (1980 to 30 January 2014) and LILACS (30 January 2014). We searched Current Controlled Trials to identify trials in progress.\nSelection criteria\nWe selected randomised controlled trials (RCTs) and quasi-RCTs of any intervention (excluding antibiotics and vaccines) to suppress the cough in whooping cough.\nData collection and analysis\nTwo review authors (SB, MT) independently selected trials, extracted data and assessed the quality of each trial for this review in 2009. Two review authors (SB, KW) independently reviewed additional studies identified by the updated searches in 2012 and 2014. The primary outcome was frequency of paroxysms of coughing. Secondary outcomes were frequency of vomiting, frequency of whoop, frequency of cyanosis (turning blue), development of serious complications, mortality from any cause, side effects due to medication, admission to hospital and duration of hospital stay.\nMain results\nWe included 12 trials of varying sample sizes (N = 9 to 135), mainly from high-income countries, including a total of 578 participants. Ten trials recruited children (N = 448 participants). Two trials recruited adolescents and adults (N = 130 participants). We considered only three trials to be of high methodological quality (one trial each of diphenhydramine, pertussis immunoglobulin and montelukast). Included studies did not show a statistically significant benefit for any of the interventions. Only six trials, including a total of 196 participants, reported data in sufficient detail for analysis. Diphenhydramine did not change coughing episodes; the mean difference (MD) of coughing spells per 24 hours was 1.9; 95% confidence interval (CI) -4.7 to 8.5 (N = 49 participants from one trial). One trial on pertussis immunoglobulin reported a possible mean reduction of -3.1 whoops per 24 hours (95% CI -6.2 to 0.02, N = 47 participants) but no change in hospital stay (MD -0.7 days; 95% CI -3.8 to 2.4, N = 46 participants). Dexamethasone did not show a clear decrease in length of hospital stay (MD -3.5 days; 95% CI -15.3 to 8.4, N = 11 participants from one trial) and salbutamol showed no change in coughing paroxysms per day (MD -0.2; 95% CI -4.1 to 3.7, N = 42 participants from two trials). 
Only one trial comparing pertussis immunoglobulin versus placebo (N = 47 participants) reported data on adverse events: 4.3% in the treatment group (rash) versus 5.3% in the placebo group (loose stools, pain and swelling at injection site).\nAuthors' conclusions\nThere is insufficient evidence to draw conclusions about the effectiveness of interventions for the cough in whooping cough. More high-quality trials are needed to assess the effectiveness of potential antitussive treatments in patients with whooping cough.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence from 12 studies about the effect of treatments for cough in patients with whooping cough."
}
]
|
query-laysum | 2291 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with neuromuscular disorders may have a weak, ineffective cough predisposing them to respiratory complications. Cough augmentation techniques aim to improve cough effectiveness and mucous clearance, reduce the frequency and duration of respiratory infections requiring hospital admission, and improve quality of life.\nObjectives\nTo determine the efficacy and safety of cough augmentation techniques in adults and children with chronic neuromuscular disorders.\nSearch methods\nOn 13 April 2020, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and ClinicalTrials.gov for randomised controlled trials (RCTs), quasi-RCTs, and randomised cross-over trials.\nSelection criteria\nWe included trials of cough augmentation techniques compared to no treatment, alternative techniques, or combinations thereof, in adults and children with chronic neuromuscular disorders.\nData collection and analysis\nTwo review authors independently assessed trial eligibility, extracted data, and assessed risk of bias. The primary outcomes were the number and duration of unscheduled hospitalisations for acute respiratory exacerbations. We assessed the certainty of evidence using GRADE.\nMain results\nThe review included 11 studies involving 287 adults and children, aged three to 73 years. Inadequately reported cross-over studies and the limited additional information provided by authors severely restricted the number of analyses that could be performed.\nStudies compared manually assisted cough, mechanical insufflation, manual and mechanical breathstacking, mechanical insufflation-exsufflation, glossopharyngeal breathing, and combination techniques to unassisted cough and alternative or sham interventions. None of the included studies reported on the primary outcomes of this review (number and duration of unscheduled hospital admissions) or listed 'adverse events' as primary or secondary outcome measures.\nThe evidence suggests that a range of cough augmentation techniques may increase peak cough flow compared to unassisted cough (199 participants, 8 RCTs), but the evidence is very uncertain. There may be little to no difference in peak cough flow outcomes between alternative cough augmentation techniques (216 participants, 9 RCTs).\nThere was insufficient evidence to determine the effect of interventions on measures of gaseous exchange, pulmonary function, quality of life, general function, or participant preference and satisfaction.\nAuthors' conclusions\nWe are very uncertain about the safety and efficacy of cough augmentation techniques in adults and children with chronic neuromuscular disorders and further studies are needed.\n\nGiven the provided abstract, please respond to the following query: \"Results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 11 studies with 287 people and several cough augmentation techniques. One study measured the long-term effects of treatment, but was only published as an abstract without enough information to accurately analyse the study findings. Many included studies had problems with how they were performed, how their findings were reported, or both, which made it difficult to fully interpret their results. None of the studies reported on the outcomes we thought were the most important for making decisions about the effectiveness and safety of cough augmentation techniques. For example, the studies did not report on the number or duration of unscheduled hospital admissions for chest infections, survival, functional ability, or quality of life. The safety of cough augmentation techniques could not be determined. Some studies suggested that cough augmentation techniques may be better than an unassisted cough, but the results are very uncertain. There was not enough evidence to show that any one technique was better than another in improving cough effort."
}
]
|
query-laysum | 2292 | [
{
"role": "user",
"content": "Abstract: Background\nIt remains unclear whether people with non-muscle invasive bladder cancer (NMIBC) benefit from intravesical gemcitabine compared to other agents in the primary or recurrent setting following transurethral resection of a bladder tumor. This is an update of a Cochrane Review first published in 2012. Since that time, several randomized controlled trials (RCTs) have been reported, making this update relevant.\nObjectives\nTo assess the comparative effectiveness and toxicity of intravesical gemcitabine instillation for NMIBC.\nSearch methods\nWe performed a comprehensive literature search of the Cochrane Library, MEDLINE, Embase, four other databases, trial registries, and conference proceedings to 11 September 2020, with no restrictions on the language or status of publication.\nSelection criteria\nWe included RCTs in which participants received intravesical gemcitabine for primary or recurrent NMIBC.\nData collection and analysis\nTwo review authors independently assessed the included studies and extracted data for the primary outcomes: time to recurrence, time to progression, grade III to V adverse events determined by the Common Terminology Criteria for Adverse Events version 5.0 (CTCAE v5.0), and the secondary outcomes: time to death from bladder cancer, time to death from any cause, grade I or II adverse events determined by the CTCAE v5.0 and disease-specific quality of life. We performed statistical analyses using a random-effects model and rated the certainty of the evidence using GRADE.\nMain results\nWe included seven studies with 1222 participants with NMIBC across five comparisons. This abstract focuses on the primary outcomes of the three most clinically relevant comparisons.\n1. Gemcitabine versus saline: based on two years' to four years' follow-up, gemcitabine may reduce the risk of recurrence over time compared to saline (39% versus 47% recurrence rate, hazard ratio [HR] 0.77, 95% confidence interval [CI] 0.54 to 1.09; studies = 2, participants = 734; I2 = 49%; low-certainty evidence), but the CI included the possibility of no effect. Gemcitabine may result in little to no difference in the risk of progression over time compared to saline (4.6% versus 4.8% progression rate, HR 0.96, 95% CI 0.19 to 4.71; studies = 2, participants = 654; I2 = 53%; low-certainty evidence). Gemcitabine may result in little to no difference in the CTCAE grade III to V adverse events compared to saline (5.9% versus 4.7% adverse events rate, risk ratio [RR] 1.26, 95% CI 0.58 to 2.75; studies = 2, participants = 668; I2 = 24%; low-certainty evidence).\n2. Gemcitabine versus mitomycin: based on three years' follow-up (studies = 1, participants = 109), gemcitabine may reduce the risk of recurrence over time compared to mitomycin (17% versus 40% recurrence rate, HR 0.36, 95% CI 0.19 to 0.69; low-certainty evidence). Gemcitabine may reduce the risk of progression over time compared to mitomycin (11% versus 18% progression rate, HR 0.57, 95% CI 0.32 to 1.01; low-certainty evidence), but the CI included the possibility of no effect. We are very uncertain about the effect of gemcitabine on the CTCAE grade III to V adverse events compared to mitomycin (RR 0.51, 95% CI 0.13 to 1.93; very low-certainty evidence). The analysis was only based on recurrent NMIBC.\n3. 
Gemcitabine versus Bacillus Calmette-Guérin (BCG) for recurrent (one-course BCG failure) high-risk NMIBC: based on 6 months' to 22 months' follow-up (studies = 1, participants = 80), gemcitabine may reduce the risk of recurrence compared to BCG (41% versus 97% recurrence rate, HR 0.15, 95% CI 0.09 to 0.26; low-certainty evidence) and progression over time (16% versus 33% progression rate, HR 0.45, 95% CI 0.27 to 0.76; low-certainty evidence). We are very uncertain about the effect of gemcitabine on the CTCAE grade III to V adverse events compared to BCG (RR 1.00, 95% CI 0.21 to 4.66; very low-certainty evidence).\nIn addition, the review provides information on the comparison of gemcitabine versus BCG and gemcitabine versus one-third dose BCG.\nAuthors' conclusions\nBased on findings of this review, gemcitabine may have a more favorable impact on recurrence and progression-free survival than mitomycin but we are very uncertain as to how major adverse events compare. The same is true when comparing gemcitabine to BCG in individuals with high risk disease who have previously failed BCG. The underlying low- to very low-certainty evidence indicates that our confidence in these results is limited; the true effects may be substantially different from these findings; therefore, better quality studies are needed.\n\nGiven the provided abstract, please respond to the following query: \"Reliability of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The reliability of the evidence was low or very low for most of the treatments we compared, meaning that we were often uncertain about whether the findings were true. Further research will likely change these findings."
}
]
|
query-laysum | 2293 | [
{
"role": "user",
"content": "Abstract: Background\nAntipsychotic (neuroleptic) medication is used extensively to treat people with serious mental illnesses. However, it is associated with a wide range of adverse effects, including movement disorders. Because of this, many people treated with antipsychotic medication also receive anticholinergic drugs in order to reduce some of the associated movement side-effects. However, there is also a suggestion from animal experiments that the chronic administration of anticholinergics could cause tardive dyskinesia.\nObjectives\nTo determine whether the use or the withdrawal of anticholinergic drugs (benzhexol, benztropine, biperiden, orphenadrine, procyclidine, scopolamine, or trihexylphenidyl) are clinically effective for the treatment of people with both antipsychotic-induced tardive dyskinesia and schizophrenia or other chronic mental illnesses.\nSearch methods\nWe retrieved 712 references from searching the Cochrane Schizophrenia Group's Study-Based Register of Trials including the registries of clinical trials (16 July 2015 and 26 April 2017). We also inspected references of all identified studies for further trials and contacted authors of trials for additional information.\nSelection criteria\nWe included reports identified in the search if they were controlled trials dealing with people with antipsychotic-induced tardive dyskinesia and schizophrenia or other chronic mental illness who had been randomly allocated to (a) anticholinergic medication versus placebo (or no intervention), (b) anticholinergic medication versus any other intervention for the treatment of tardive dyskinesia, or (c) withdrawal of anticholinergic medication versus continuation of anticholinergic medication.\nData collection and analysis\nWe independently extracted data from included trials and we estimated risk ratios (RR) with 95% confidence intervals (CIs). We assumed that people who left early had no improvement. We assessed risk of bias and created a 'Summary of findings' table using GRADE.\nMain results\nThe previous version of this review included no trials. We identified two trials that could be included from the 2015 and 2017 searches. They randomised 30 in- and outpatients with schizophrenia in the USA and Germany. Overall, the risk of bias was unclear, mainly due to poor reporting: allocation concealment was not described; generation of the sequence was not explicit; studies were not clearly blinded; and outcome data were not fully reported.\nFindings were sparse. One study reported on the primary outcomes and found that significantly more participants allocated to procyclidine (anticholinergic) had not improved to a clinically important extent compared with those allocated to isocarboxazid (MAO-inhibitor) after 40 weeks' treatment (1 RCT, n = 20; RR 4.20, 95% CI 1.40 to 12.58; very low quality evidence); that there was no evidence of a difference in the incidence of any adverse effects (1 RCT, n = 20; RR 0.33, 95% CI 0.02 to 7.32; very low quality evidence); or acceptability of treatment (measured by participants leaving the study early) (1 RCT, n = 20; RR 0.33, 95% CI 0.02 to 7.32; very low quality evidence). 
The other trial compared anticholinergic withdrawal with anticholinergic continuation and found no evidence of a difference in the incidence of acceptability of treatment (measured by participants leaving the study early) (1 RCT, n = 10; RR 2.14, 95% CI 0.11 to 42.52; very low quality evidence).\nNo trials reported on social confidence, social inclusion, social networks, or personalised quality of life — outcomes designated important to patients. No studies comparing either i. anticholinergics with placebo or no treatment, or ii. studies of anticholinergic withdrawal, were found that reported on the primary outcome 'no clinically important improvement in TD symptoms and adverse events'.\nAuthors' conclusions\nBased on currently available evidence, no confident statement can be made about the effectiveness of anticholinergics to treat people with antipsychotic-induced tardive dyskinesia. The same applies for the withdrawal of such medications. Whether the withdrawal of anticholinergics may benefit people with antipsychotic-induced TD should be evaluated in a parallel-group, placebo-controlled randomised trial, with adequate sample size and at least 6 weeks of follow-up.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ".\nPeople with schizophrenia often hear voices and see things (hallucinations) and have beliefs which are at odds with those held by people without schizophrenia (delusions). The main treatment of these symptoms are antipsychotic drugs. However, these drugs can have debilitating side-effects. Tardive dyskinesia is an involuntary movement that causes the face, mouth, tongue and jaw to convulse, spasm and grimace. It is caused by long-term use, or high doses, of antipsychotic drugs, is difficult to treat and can be incurable. Many people being treated with antipsychotic medication also receive anticholinergic drugs to try to reduce some of these movement side-effects. There is, however, evidence from animal experiments that anticholinergic drugs could cause tardive dyskinesia."
}
]
|
query-laysum | 2294 | [
{
"role": "user",
"content": "Abstract: Background\nGlobally, about 6% of children are born with a serious birth defect of genetic or partially genetic origin. Carrier screening or testing is one way to identify couples at increased risk of having a child with an autosomal recessive condition. The most common autosomal recessive conditions are thalassaemia, sickle cell disease, cystic fibrosis and Tay-Sachs disease, with higher carrier rates in high-risk populations of specific ancestral backgrounds. Identifying and counselling couples at genetic risk of the conditions before pregnancy enables them to make fully informed reproductive decisions, with some of these choices not being available if testing is only offered in an antenatal setting. This is an update of a previously published review.\nObjectives\nTo assess the effectiveness of systematic preconception genetic risk assessment to enable autonomous reproductive choice and to improve reproductive outcomes in women and their partners who are both identified as carriers of thalassaemia, sickle cell disease, cystic fibrosis and Tay-Sachs disease in healthcare settings when compared to usual care.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Trials Registers. Date of latest search of the registers: 04 August 2021.\nIn addition, we searched for all relevant trials from 1970 (or the date at which the database was first available if after 1970) to date using electronic databases (MEDLINE, Embase, CINAHL, PsycINFO), clinical trial databases (National Institutes of Health, Clinical Trials Search portal of the World Health Organization, metaRegister of controlled clinical trials), and hand searching of key journals and conference abstract books from 1998 to date (European Journal of Human Genetics, Genetics in Medicine, Journal of Community Genetics). We also searched the reference lists of relevant articles, reviews and guidelines and also contacted subject experts in the field to request any unpublished or other published trials. Date of latest search of all these sources: 25 June 2021.\nSelection criteria\nAny randomised controlled trials (RCTs) or quasi-RCTs (published or unpublished) comparing reproductive outcomes of systematic preconception genetic risk assessment for thalassaemia, sickle cell disease, cystic fibrosis and Tay-Sachs disease when compared to usual care.\nData collection and analysis\nWe identified 37 papers, describing 22 unique trials which were potentially eligible for inclusion in the review. However, after assessment, we found no RCTs of preconception genetic risk assessment for thalassaemia, sickle cell disease, cystic fibrosis and Tay-Sachs disease.\nMain results\nNo RCTs of preconception genetic risk assessment for thalassaemia, sickle cell disease, cystic fibrosis and Tay-Sachs disease are included. A trial identified earlier has published its results and has subsequently been listed as excluded in this review.\nAuthors' conclusions\nAs there are no RCTs of preconception genetic risk assessment for thalassaemia, sickle cell disease, cystic fibrosis, or Tay-Sachs disease included in either the earlier or current versions of this review, we recommend considering potential non-RCTs studies (for example prospective cohorts or before-and-after studies) for future reviews. 
While RCTs are desirable to inform evidence-based practice and robust recommendations, the ethical, legal and social implications associated with using this trial design to evaluate the implementation of preconception genetic risk assessment involving carrier testing and reproductive autonomy must also be considered. In addition, rather than focusing on single gene-by-gene carrier testing for specific autosomal-recessive conditions as the intervention being evaluated, preconception expanded genetic screening should also be included in future searches as this has received much attention in recent years as a more pragmatic strategy.\nThe research evidence for current international policy recommendations is limited to non-randomised studies.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We did not find any trials that we could include in this review. In an earlier version of this review, we had already found the protocol for a trial that has now published its results, but we have excluded the trial in this version of the review because it did not look at the right topic after all."
}
]
|
query-laysum | 2295 | [
{
"role": "user",
"content": "Abstract: Background\nGroin dissection is commonly performed for the treatment of a variety of cancers, including melanoma, and squamous cell carcinoma of the skin, penis or vulva. It is uncertain whether insertion of a drain reduces complication rates, and, if used, the optimum time for drain removal after surgery is also unknown.\nObjectives\nTo assess the current level of evidence to determine whether placement of a drain is beneficial after groin dissection in terms of reducing seroma, haematoma, wound dehiscence and wound infection rates, and to determine the optimal type and duration of drainage following groin dissection if it is shown to be beneficial.\nSearch methods\nIn September 2014 we searched the following electronic databases using a pre-designed search strategy: the Cochrane Wounds Group Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library). In November 2013 we searched Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We did not restrict the search and study selection with respect to language, date of publication or study setting.\nSelection criteria\nWe considered all randomised controlled trials (RCTs) comparing wound drainage with no wound drainage in individuals undergoing groin dissection, where the most superior node excised was Cloquet's node (the most superior inguinal lymph node). No limits were applied to language of publication or trial location. Two review authors independently determined the eligibility of each trial.\nData collection and analysis\nTwo review authors, working independently, screened studies identified from the search; there were no disagreements.\nMain results\nWe did not identify any RCTs that met the inclusion criteria for the review.\nAuthors' conclusions\nThere is a need for high quality RCTs to guide clinical practice in this under-researched area.\n\nGiven the provided abstract, please respond to the following query: \"Why remove lymph glands?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Surgical removal of the lymph glands found in the groin (the inguinal lymph nodes) is an important part of the treatment for several types of cancer, including melanoma and other types of skin cancer, as well as squamous cell cancer of the penis, vulva and the surrounding skin. Sometimes complications, such as wound infection, bruising (haematoma) or a collection of lymph fluid in the area (seroma), can occur after removal of these lymph nodes."
}
]
|
query-laysum | 2296 | [
{
"role": "user",
"content": "Abstract: Background\nPostoperative pain is common and may be severe. Postoperative administration of non-steroidal anti-inflammatory drugs (NSAIDs) reduces patient opioid requirements and, in turn, may reduce the incidence and severity of opioid-induced adverse events (AEs).\nObjectives\nTo assess the analgesic efficacy and adverse effects of single-dose intravenous ketorolac, compared with placebo or an active comparator, for moderate to severe postoperative pain in adults.\nSearch methods\nWe searched the following databases without language restrictions: CENTRAL, MEDLINE, Embase and LILACS on 20 April 2020. We checked clinical trials registers and reference lists of retrieved articles for additional studies.\nSelection criteria\nRandomized double-blind trials that compared a single postoperative dose of intravenous ketorolac with placebo or another active treatment, for treating acute postoperative pain in adults following any surgery.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. \nOur primary outcome was the number of participants in each arm achieving at least 50% pain relief over a four- and six-hour period.\nOur secondary outcomes were time to and number of participants using rescue medication; withdrawals due to lack of efficacy, adverse events (AEs), and for any other cause; and number of participants experiencing any AE, serious AEs (SAEs), and NSAID-related or opioid-related AEs.\nFor subgroup analysis, we planned to analyze different doses of parenteral ketorolac separately and to analyze results based on the type of surgery performed.\nWe assessed the certainty of evidence using GRADE.\nMain results\nWe included 12 studies, involving 1905 participants undergoing various surgeries (pelvic/abdominal, dental, and orthopedic), with 17 to 83 participants receiving intravenous ketorolac in each study. Mean study population ages ranged from 22.5 years to 67.4 years. Most studies administered a dose of ketorolac of 30 mg; one study assessed 15 mg, and another administered 60 mg.\nMost studies had an unclear risk of bias for some domains, particularly allocation concealment and blinding, and a high risk of bias due to small sample size. The overall certainty of evidence for each outcome ranged from very low to moderate. Reasons for downgrading certainty included serious study limitations, inconsistency and imprecision.\nKetorolac versus placebo\nVery low-certainty evidence from eight studies (658 participants) suggests that ketorolac results in a large increase in the number of participants achieving at least 50% pain relief over four hours compared to placebo, but the evidence is very uncertain (risk ratio (RR) 2.81, 95% confidence interval (CI) 1.80 to 4.37). The number needed to treat for one additional participant to benefit (NNTB) was 2.4 (95% CI 1.8 to 3.7). Low-certainty evidence from 10 studies (914 participants) demonstrates that ketorolac may result in a large increase in the number of participants achieving at least 50% pain relief over six hours compared to placebo (RR 3.26, 95% CI 1.93 to 5.51). The NNTB was 2.5 (95% CI 1.9 to 3.7).\nAmong secondary outcomes, for time to rescue medication, moderate-certainty evidence comparing intravenous ketorolac versus placebo demonstrated a mean median of 271 minutes for ketorolac versus 104 minutes for placebo (6 studies, 633 participants). 
For the number of participants using rescue medication, very low-certainty evidence from five studies (417 participants) compared ketorolac with placebo. The RR was 0.60 (95% CI 0.36 to 1.00), that is, it did not demonstrate a difference between groups.\nKetorolac probably results in a slight increase in total adverse event rates compared with placebo (74% versus 65%; 8 studies, 810 participants; RR 1.09, 95% CI 1.00 to 1.19; number needed to treat for an additional harmful event (NNTH) 16.7, 95% CI 8.3 to infinite, moderate-certainty evidence). Serious AEs were rare. Low-certainty evidence from eight studies (703 participants) did not demonstrate a difference in rates between ketorolac and placebo (RR 0.62, 95% CI 0.13 to 3.03).\nKetorolac versus NSAIDs \nKetorolac was compared to parecoxib in four studies and diclofenac in two studies. For our primary outcome, over both four and six hours there was no evidence of a difference between intravenous ketorolac and another NSAID (low-certainty and moderate-certainty evidence, respectively). Over four hours, four studies (337 participants) produced an RR of 1.04 (95% CI 0.89 to 1.21) and over six hours, six studies (603 participants) produced an RR of 1.06 (95% CI 0.95 to 1.19).\nFor time to rescue medication, low-certainty evidence from four studies (427 participants) suggested that participants receiving ketorolac waited an extra 35 minutes (mean median 331 minutes versus 296 minutes). For the number of participants using rescue medication, very low-certainty evidence from three studies (260 participants) compared ketorolac with another NSAID. The RR was 0.90 (95% CI 0.58 to 1.40), that is, there may be little or no difference between groups.\nKetorolac probably results in a slight increase in total adverse event rates compared with another NSAID (76% versus 68%, 5 studies, 516 participants; RR 1.11, 95% CI 1.00 to 1.23; NNTH 12.5, 95% CI 6.7 to infinite, moderate-certainty evidence). Serious AEs were rare. Low-certainty evidence from five studies (530 participants) did not demonstrate a difference in rates between ketorolac and another NSAID (RR 3.18, 95% CI 0.13 to 76.99). Only one of the five studies reported a single serious AE.\nAuthors' conclusions\nThe amount and certainty of evidence for the use of intravenous ketorolac as a treatment for postoperative pain varies across efficacy and safety outcomes and amongst comparators, from very low to moderate. The available evidence indicates that postoperative intravenous ketorolac administration may offer substantial pain relief for most patients, but further research may impact this estimate. Adverse events appear to occur at a slightly higher rate in comparison to placebo and to other NSAIDs. Insufficient information is available to assess whether intravenous ketorolac has a different rate of gastrointestinal or surgical-site bleeding, renal dysfunction, or cardiovascular events versus other NSAIDs. There was a lack of studies in cardiovascular surgeries and in elderly populations who may be at increased risk for adverse events.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date to April 2020."
}
]
|
query-laysum | 2297 | [
{
"role": "user",
"content": "Abstract: Background\nOlanzapine as an antiemetic represents a new use of an antipsychotic drug. People with cancer may experience nausea and vomiting whilst receiving chemotherapy or radiotherapy, or whilst in the palliative phase of illness.\nObjectives\nTo assess the efficacy and safety of olanzapine when used as an antiemetic in the prevention and treatment of nausea and vomiting related to cancer in adults.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase for published data on 20th September 2017, as well as ClinicalTrials.gov and World Health Organization International Clinical Trials Registry Platform for unpublished trials. We checked reference lists, and contacted experts in the field and study authors.\nSelection criteria\nWe included randomised controlled trials (RCTs) of olanzapine versus any comparator with or without adjunct therapies for the prevention or treatment, or both, of nausea or vomiting in people with cancer aged 18 years or older, in any setting, of any duration, with at least 10 participants per treatment arm.\nData collection and analysis\nWe used standard Cochrane methodology. We used GRADE to assess quality of evidence for each main outcome. We extracted data for absence of nausea or vomiting and frequency of serious adverse events as primary outcomes. We extracted data for patient perception of treatment, other adverse events, somnolence and fatigue, attrition, nausea or vomiting severity, breakthrough nausea and vomiting, rescue antiemetic use, and nausea and vomiting as secondary outcomes at specified time points.\nMain results\nWe included 14 RCTs (1917 participants) from high-, middle- and low-income countries, representing over 24 different cancers. Thirteen studies were in chemotherapy-induced nausea and vomiting. Oral olanzapine was administered during highly emetogenic (HEC) or moderately emetogenic (MEC) chemotherapy (12 studies); chemoradiotherapy (one study); or palliation (one study). Eight studies await classification and 13 are ongoing.\nThe main comparison was olanzapine versus placebo/no treatment. Other comparisons were olanzapine versus NK1 antagonist, prokinetic, 5-HT3 antagonist or dexamethasone.\nWe assessed all but one study as having one or more domains that were at high risk of bias. Eight RCTs with fewer than 50 participants per treatment arm, and 10 RCTs with issues related to blinding, were at high risk of bias. We downgraded GRADE assessments due to imprecision, inconsistency and study limitations.\nOlanzapine versus placebo/no treatment\nPrimary outcomes\nOlanzapine probably doubles the likelihood of no nausea or vomiting during chemotherapy from 25% to 50% (risk ratio (RR) 1.98, 95% confidence interval (CI) 1.59 to 2.47; 561 participants; 3 studies; solid tumours; HEC or MEC therapy; moderate-quality evidence) when added to standard therapy. Number needed to treat for additional beneficial outcome (NNTB) was 5 (95% CI 3.3 - 6.6).\nIt is uncertain if olanzapine increases the risk of serious adverse events (absolute risk difference 0.7% more, 95% CI 0.2 to 5.2) (RR 2.46, 95% CI 0.48 to 12.55; 7 studies, 889 participants, low-quality evidence).\nSecondary outcomes\nFour studies reported patient perception of treatment. One study (48 participants) reported no difference in patient preference. 
Four reported quality of life but data were insufficient for meta-analysis.\nOlanzapine may increase other adverse events (RR 1.71, 95% CI 0.99 to 2.96; 332 participants; 4 studies; low-quality evidence) and probably increases somnolence and fatigue compared to no treatment or placebo (RR 2.33, 95% CI 1.30 to 4.18; anticipated absolute risk 8.2% more, 95% CI 1.9 to 18.8; 464 participants; 5 studies; moderate-quality evidence). Olanzapine probably does not affect all-cause attrition (RR 0.99, 95% CI 0.57 to 1.73; 943 participants; 8 studies; I² = 0%). We are uncertain if olanzapine increases attrition due to adverse events (RR 3.00, 95% CI 0.13 to 70.16; 422 participants; 6 studies). No participants withdrew due to lack of efficacy.\nWe are uncertain if olanzapine reduces breakthrough nausea and vomiting (RR 0.38, 95% CI 0.10 to 1.47; 501 participants; 2 studies; I² = 54%) compared to placebo or no treatment. No studies reported 50% reduction in severity of nausea or vomiting, use of rescue antiemetics, or attrition.\nWe are uncertain of olanzapine's efficacy in reducing acute nausea or vomiting. Olanzapine probably reduces delayed nausea (RR 1.71, 95% CI 1.40 to 2.09; 585 participants; 3 studies) and vomiting (RR 1.28, 95% CI 1.14 to 1.42; 702 participants; 5 studies).\nSubgroup analysis: 5 mg versus 10 mg\nPlanned subgroup analyses found that it is unclear if 5 mg is as effective an antiemetic as 10 mg. There is insufficient evidence to exclude the possibility that 5 mg may confer a lower risk of somnolence and fatigue than 10 mg.\nOther comparisons\nOne study (20 participants) compared olanzapine versus NK1 antagonists. We observed no difference in any reported outcomes.\nOne study (112 participants) compared olanzapine versus a prokinetic (metoclopramide), reporting that olanzapine may increase freedom from overall nausea (RR 2.95, 95% CI 1.73 to 5.02) and overall vomiting (RR 3.03, 95% CI 1.78 to 5.14).\nOne study (62 participants) examined olanzapine versus 5-HT3 antagonists, reporting olanzapine may increase the likelihood of 50% or greater reduction in nausea or vomiting at 48 hours (RR 1.82, 95% CI 1.11 to 2.97) and 24 hours (RR 1.36, 95% CI 0.80 to 2.34).\nOne study (229 participants) compared olanzapine versus dexamethasone, reporting that olanzapine may reduce overall nausea (RR 1.73, 95% CI 1.37 to 2.18), overall vomiting (RR 1.27, 95% CI 1.10 to 1.48), delayed nausea (RR 1.66, 95% CI 1.33 to 2.08) and delayed vomiting (RR 1.25, 95% CI 1.07 to 1.45).\nAuthors' conclusions\nThere is moderate-quality evidence that oral olanzapine probably increases the likelihood of not being nauseous or vomiting during chemotherapy from 25% to 50% in adults with solid tumours, in addition to standard therapy, compared to placebo or no treatment. There is uncertainty whether it increases serious adverse events. It may increase the likelihood of other adverse events, probably increasing somnolence and fatigue. There is uncertainty about relative benefits and harms of 5 mg versus 10 mg.\nWe identified only RCTs describing oral administration. The findings of this review cannot be extrapolated to provide evidence about the efficacy and safety of any injectable form (intravenous, intramuscular or subcutaneous) of olanzapine.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies in September 2017."
}
]
|
query-laysum | 2298 | [
{
"role": "user",
"content": "Abstract: Background\nDentinal hypersensitivity is characterized by short, sharp pain from exposed dentine that occurs in response to external stimuli such as cold, heat, osmotic, tactile or chemicals, and cannot be explained by any other form of dental defect or pathology. Laser therapy has become a commonly used intervention and might be effective for dentinal hypersensitivity.\nObjectives\nTo assess the effects of in-office employed lasers versus placebo laser, placebo agents or no treatment for relieving pain of dentinal hypersensitivity.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 20 October 2020), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2020, Issue 9), MEDLINE Ovid (1946 to 20 October 2020), Embase Ovid (1980 to 20 October 2020), CINAHL EBSCO (Cumulative Index to Nursing and Allied Health Literature; 1937 to 20 October 2020), and LILACS BIREME Virtual Health Library (Latin American and Caribbean Health Science Information database; from 1982 to 20 October 2020). Conference proceedings were searched via the ISI Web of Science and ZETOC, and OpenGrey was searched for grey literature. The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nRandomized controlled trials (RCTs) in which in-office lasers were compared to placebo or no treatment on patients aged above 12 years with tooth hypersensitivity.\nData collection and analysis\nTwo review authors independently and in duplicate screened the search results, extracted data, and assessed the risk of bias of the included studies. Disagreement was resolved by discussion. For continuous outcomes, we used mean differences (MD) and 95% confidence intervals (CI). We conducted meta-analyses only with studies of similar comparisons reporting the same outcome measures. We assessed the overall certainty of the evidence using GRADE.\nMain results\nWe included a total of 23 studies with 936 participants and 2296 teeth. We assessed five studies at overall low risk of bias, 13 at unclear, and five at high risk of bias. 17 studies contributed data to the meta-analyses. We divided the studies into six subgroups based on the type of laser and the primary outcome measure. We assessed the change in intensity of pain using quantitative pain scale (visual analogue scale (VAS) of 0 to 10 (no pain to worst possible pain)) when tested through air blast and tactile stimuli in three categories of short (0 to 24 hours), medium (more than 24 hours to 2 months), and long term (more than 2 months).\nResults demonstrated that compared to placebo or no treatment the application of all types of lasers combined may reduce pain intensity when tested through air blast stimuli at short term (MD -2.24, 95% CI -3.55 to -0.93; P = 0.0008; 13 studies, 978 teeth; low-certainty evidence), medium term (MD -2.46, 95% CI -3.57 to -1.35; P < 0.0001; 11 studies, 1007 teeth; very low-certainty evidence), and long term (MD -2.60, 95% CI -4.47 to -0.73; P = 0.006; 5 studies, 564 teeth; very low-certainty evidence). 
Similarly, compared to placebo or no treatment the application of all types of lasers combined may reduce pain intensity when tested through tactile stimuli at short term (MD -0.67, 95% CI -1.31 to -0.03; P = 0.04; 8 studies, 506 teeth; low-certainty evidence) and medium term (MD -1.73, 95% CI -3.17 to -0.30; P = 0.02; 9 studies, 591 teeth; very low-certainty evidence). However, there was insufficient evidence of a difference in pain intensity for all types of lasers when tested through tactile stimuli in the long term (MD -3.52, 95% CI -10.37 to 3.33; P = 0.31; 2 studies, 184 teeth; very low-certainty evidence).\nMost included studies assessed adverse events and reported that no obvious adverse events were observed during the trials. No studies investigated the impact of laser treatment on participants' quality of life.\nAuthors' conclusions\nLimited and uncertain evidence from meta-analyses suggests that the application of laser overall may improve pain intensity when tested through air blast or tactile stimuli at short, medium, or long term when compared to placebo/no treatment. Overall, laser therapy appears to be safe. Future studies including well-designed double-blinded RCTs are necessary to further investigate the clinical efficacy of lasers as well as their cost-effectiveness.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- Lasers may slightly reduce pain after 24 hours. They may reduce pain beyond 24 hours but the evidence is very uncertain.\n- Lasers do not appear to cause adverse (unwanted) effects.\n- We need future studies to strengthen the evidence and investigate the impact of laser treatment on quality of life."
}
]
|
query-laysum | 2299 | [
{
"role": "user",
"content": "Abstract: Background\nManaged withdrawal is a necessary step prior to drug-free treatment or as the endpoint of long-term substitution treatment.\nObjectives\nTo assess the effects of opioid antagonists plus minimal sedation for opioid withdrawal. Comparators were placebo as well as more established approaches to detoxification, such as tapered doses of methadone, adrenergic agonists, buprenorphine and symptomatic medications.\nSearch methods\nWe updated our searches of the following databases to December 2016: CENTRAL, MEDLINE, Embase, PsycINFO and Web of Science. We also searched two trials registers and checked the reference lists of included studies for further references to relevant studies.\nSelection criteria\nWe included randomised and quasi-randomised controlled clinical trials along with prospective controlled cohort studies comparing opioid antagonists plus minimal sedation versus other approaches or different opioid antagonist regimens for withdrawal in opioid-dependent participants.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nTen studies (6 randomised controlled trials and 4 prospective cohort studies, involving 955 participants) met the inclusion criteria for the review. We considered 7 of the 10 studies to be at high risk of bias in at least one of the domains we assessed.\nNine studies compared an opioid antagonist-adrenergic agonist combination versus a treatment regimen based primarily on an alpha2-adrenergic agonist (clonidine or lofexidine). Other comparisons (placebo, tapered doses of methadone, buprenorphine) made by included studies were too diverse for any meaningful analysis. This review therefore focuses on the nine studies comparing an opioid antagonist (naltrexone or naloxone) plus clonidine or lofexidine versus treatment primarily based on clonidine or lofexidine.\nFive studies took place in an inpatient setting, two studies were in outpatients with day care, two used day care only for the first day of opioid antagonist administration, and one study described the setting as outpatient without indicating the level of care provided.\nThe included studies were heterogeneous in terms of the type of opioid antagonist treatment regimen, the comparator, the outcome measures assessed, and the means of assessing outcomes. As a result, the validity of any estimates of overall effect is doubtful, therefore we did not calculate pooled results for any of the analyses.\nThe quality of the evidence for treatment with an opioid antagonist-adrenergic agonist combination versus an alpha2-adrenergic agonist is very low. Two studies reported data on peak withdrawal severity, and four studies reported data on the average severity over the period of withdrawal. Peak withdrawal induced by opioid antagonists in combination with an adrenergic agonist appears to be more severe than withdrawal managed with clonidine or lofexidine alone, but the average severity over the withdrawal period is less. In some situations antagonist-induced withdrawal may be associated with significantly higher rates of treatment completion compared to withdrawal managed with adrenergic agonists. However, this result was not consistent across studies, and the extent of any benefit is highly uncertain.\nWe could not extract any data on the occurrence of adverse events, but two studies reported delirium or confusion following the first dose of naltrexone. 
Delirium may be more likely with higher initial doses and with naltrexone rather than naloxone (which has a shorter half-life), but we could not confirm this from the available evidence.\nInsufficient data were available to make any conclusions on the best duration of treatment.\nAuthors' conclusions\nUsing opioid antagonists plus alpha2-adrenergic agonists is a feasible approach for managing opioid withdrawal. However, it is unclear whether this approach reduces the duration of withdrawal or facilitates transfer to naltrexone treatment to a greater extent than withdrawal managed primarily with an adrenergic agonist.\nA high level of monitoring and support is desirable for several hours following administration of opioid antagonists because of the possibility of vomiting, diarrhoea and delirium.\nUsing opioid antagonists to induce and accelerate opioid withdrawal is not currently an active area of research or clinical practice, and the research community should give greater priority to investigating approaches, such as those based on buprenorphine, that facilitate the transition to sustained-release preparations of naltrexone.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to December 2016."
}
]
|
query-laysum | 2300 | [
{
"role": "user",
"content": "Abstract: Background\nThe purpose of low-vision rehabilitation is to allow people to resume or to continue to perform daily living tasks, with reading being one of the most important. This is achieved by providing appropriate optical devices and special training in the use of residual-vision and low-vision aids, which range from simple optical magnifiers to high-magnification video magnifiers.\nObjectives\nTo assess the effects of different visual reading aids for adults with low vision.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 12); MEDLINE Ovid; Embase Ovid; BIREME LILACS, OpenGrey, the ISRCTN registry; ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). The date of the search was 17 January 2018.\nSelection criteria\nThis review includes randomised and quasi-randomised trials that compared any device or aid used for reading to another device or aid in people aged 16 or over with low vision as defined by the study investigators. We did not compare low-vision aids with no low-vision aid since it is obviously not possible to measure reading speed, our primary outcome, in people that cannot read ordinary print. We considered reading aids that maximise the person's visual reading capacity, for example by increasing image magnification (optical and electronic magnifiers), augmenting text contrast (coloured filters) or trying to optimise the viewing angle or gaze position (such as prisms). We have not included studies investigating reading aids that allow reading through hearing, such as talking books or screen readers, or through touch, such as Braille-based devices and we did not consider rehabilitation strategies or complex low-vision interventions.\nData collection and analysis\nWe used standard methods expected by Cochrane. At least two authors independently assessed trial quality and extracted data. The primary outcome of the review was reading speed in words per minute. Secondary outcomes included reading duration and acuity, ease and frequency of use, quality of life and adverse outcomes. We graded the certainty of the evidence using GRADE.\nMain results\nWe included 11 small studies with a cross-over design (435 people overall), one study with two parallel arms (37 participants) and one study with three parallel arms (243 participants). These studies took place in the USA (7 studies), the UK (5 studies) and Canada (1 study). Age-related macular degeneration (AMD) was the most frequent cause of low vision, with 10 studies reporting 50% or more participants with the condition. Participants were aged 9 to 97 years in these studies, but most were older (the median average age across studies was 71 years). None of the studies were masked; otherwise we largely judged the studies to be at low risk of bias. All studies reported the primary outcome: results for reading speed. None of the studies measured or reported adverse outcomes.\nReading speed may be higher with stand-mounted closed circuit television (CCTV) than with optical devices (stand or hand magnifiers) (low-certainty evidence, 2 studies, 92 participants). There was moderate-certainty evidence that reading duration was longer with the electronic devices and that they were easier to use. Similar results were seen for electronic devices with the camera mounted in a 'mouse'. 
Mixed results were seen for head-mounted devices with one study of 70 participants finding a mouse-based head-mounted device to be better than an optical device and another study of 20 participants finding optical devices better (low-certainty evidence). Low-certainty evidence from three studies (93 participants) suggested no important differences in reading speed, acuity or ease of use between stand-mounted and head-mounted electronic devices. Similarly, low-certainty evidence from one study of 100 participants suggested no important differences between a 9.7'' tablet computer and stand-mounted CCTV in reading speed, with imprecise estimates (other outcomes not reported).\nLow-certainty evidence showed little difference in reading speed in one study with 100 participants that added electronic portable devices to preferred optical devices. One parallel-arm study in 37 participants found low-certainty evidence of higher reading speed at one month if participants received a CCTV at the initial rehabilitation consultation instead of a standard low-vision aids prescription alone.\nA parallel-arm study including 243 participants with AMD found no important differences in reading speed, reading acuity and quality of life between prism spectacles and conventional spectacles. One study in 10 people with AMD found that reading speed with several overlay coloured filters was no better and possibly worse than with a clear filter (low-certainty evidence, other outcomes not reported).\nAuthors' conclusions\nThere is insufficient evidence supporting the use of a specific type of electronic or optical device for the most common profiles of low-vision aid users. However, there is some evidence that stand-mounted electronic devices may improve reading speeds compared with optical devices. There is less evidence to support the use of head-mounted or portable electronic devices; however, the technology of electronic devices may have improved since the studies included in this review took place, and modern portable electronic devices have desirable properties such as flexible use of magnification. There is no good evidence to support the use of filters or prism spectacles. Future research should focus on assessing sustained long-term use of each device and the effect of different training programmes on its use, combined with investigation of which patient characteristics predict performance with different devices, including some of the more costly electronic devices.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The number of people with low vision is increasing with the ageing population. Magnifying optical and electronic aids are commonly prescribed to help people maintain the ability to read when their vision starts to fade. Cochrane authors reviewed the evidence for the effect of reading aids on reading ability in people with low vision to find out whether there are differences in reading performance using conventional optical devices, such as hand-held or stand-based microscopic magnifiers, as compared to electronic devices such as stand-based, closed circuit television and hand-held electronic magnifiers.\nCochrane Review authors assessed how certain the evidence was for each review finding. They looked for factors that can make the evidence less certain, such as problems with the way the studies were done, very small studies, and inconsistent findings across studies. They also looked for factors that can make the evidence more certain, including very large effects. They graded each finding as being of very low, low, moderate or high certainty."
}
]
|
query-laysum | 2301 | [
{
"role": "user",
"content": "Abstract: Background\nAcupuncture has been used by rehabilitation specialists as an adjunct therapy for the symptomatic treatment of rheumatoid arthritis (RA). Acupuncture is a traditional Chinese medicine where thin needles are inserted in specific documented points believed to represent concentration of body energies. In some cases a small electrical impulse is added to the needles. Once the needles are inserted in some of the appropriate points, endorphins, morphine-like substances, have been shown to be released in the patient's system, thus inducing local or generalised analgesia (pain relief). This review is an update of the original review published in July 2002.\nObjectives\nTo evaluate the effects of acupuncture or electroacupuncture on the objective and subjective measures of disease activity in patients with RA.\nSearch methods\nA comprehensive search of MEDLINE, EMBASE, PEDro, Current Contents , Sports Discus and CINAHL, initially done in September 2001, was updated in May 2005.The Cochrane Field of Rehabilitation and Related Therapies and the Cochrane Musculoskeletal Review Group were also contacted for a search of their specialized registries. Handsearching was conducted on all retrieved papers and content experts were contacted to identify additional studies.\nSelection criteria\nComparative controlled studies, such as randomized controlled trials and controlled clinical trials in patients with RA were eligible. Trials published in languages other than French and English were not analyzed. Abstracts were excluded unless further data could be obtained from the authors.\nData collection and analysis\nTwo independent reviewers identified potential articles from the literature search and extracted data using pre-defined extraction forms. Consensus was reached on all the extracted data. Quality was assessed by two reviewers using a five point validated tool that measured the quality of randomization, double-blinding and description of withdrawals.\nMain results\nAfter the updated searches were conducted, five further potential articles were identified; however, these did not meet the inclusion criteria. Two studies involving a total of 84 people were included. One study used acupuncture while the other used electroacupuncture. In the acupuncture study, no statistically significant difference was found between groups for erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), visual analogue scale for patient's global assessment (VAS G), number of swollen joints and tender joints, general health questionnaire (GHQ), modified disease activity scale (DAS) or for the decrease in analgesic intake. Although not statistically significant, pain in the treatment group improved by 4 points on a 0-100mm visual analogue scale versus no improvement in the placebo group. \nIn the second study, using electroacupuncture, a significant decrease in knee pain was reported in the experimental group, 24 hours post treatment, when compared to the placebo group (WMD: -2.0 with 95% CI -3.6,-4.0). A significant decrease was found also at four months post-treatment (WMD -0.2, 95% CI: -0.36, -0.04)\nAuthors' conclusions\nAlthough the results of the study on electroacupuncture show that electroacupuncture may be beneficial to reduce symptomatic knee pain in patients with RA 24 hours and 4 months post treatment, the reviewers concluded that the poor quality of the trial, including the small sample size preclude its recommendation. 
The reviewers further conclude that acupuncture has no effect on ESR, CRP, pain, patient's global assessment, number of swollen joints, number of tender joints, general health, disease activity and reduction of analgesics. These conclusions are limited by methodological considerations such as the type of acupuncture (acupuncture vs electroacupuncture), the site of intervention, the low number of clinical trials and the small sample size of the included studies.\n\nGiven the provided abstract, please respond to the following query: \"What causes rheumatoid arthritis and how can acupuncture help?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Rheumatoid arthritis (RA) is a disease in which the body's immune system attacks its own healthy tissues. The attack happens mostly in the joints of the hands and feet and causes redness, pain, swelling and heat around the joints. Drug and non-drug treatments are used to relieve pain and/or swelling.\nAcupuncture is a non-drug therapy being in which thin needles are inserted into the body at specific spots. It is thought that acupuncture works either by releasing chemical compounds in the body that relieve pain, by overriding pain signals in the nerves or by allowing energy (Qi) or blood to flow freely through the body. It is not known whether acupuncture works or is safe."
}
]
|
query-laysum | 2302 | [
{
"role": "user",
"content": "Abstract: Background\nAbout 10% of reproductive-aged women suffer from endometriosis, a costly chronic disease causing pelvic pain and subfertility. Laparoscopy is the gold standard diagnostic test for endometriosis, but is expensive and carries surgical risks. Currently, there are no non-invasive or minimally invasive tests available in clinical practice to accurately diagnose endometriosis. Although other reviews have assessed the ability of blood tests to diagnose endometriosis, this is the first review to use Cochrane methods, providing an update on the rapidly expanding literature in this field.\nObjectives\nTo evaluate blood biomarkers as replacement tests for diagnostic surgery and as triage tests to inform decisions on surgery for endometriosis. Specific objectives include:\n1. To provide summary estimates of the diagnostic accuracy of blood biomarkers for the diagnosis of peritoneal, ovarian and deep infiltrating pelvic endometriosis, compared to surgical diagnosis as a reference standard.\n2. To assess the diagnostic utility of biomarkers that could differentiate ovarian endometrioma from other ovarian masses.\nSearch methods\nWe did not restrict the searches to particular study designs, language or publication dates. We searched CENTRAL to July 2015, MEDLINE and EMBASE to May 2015, as well as these databases to 20 April 2015: CINAHL, PsycINFO, Web of Science, LILACS, OAIster, TRIP, ClinicalTrials.gov, DARE and PubMed.\nSelection criteria\nWe considered published, peer-reviewed, randomised controlled or cross-sectional studies of any size, including prospectively collected samples from any population of reproductive-aged women suspected of having one or more of the following target conditions: ovarian, peritoneal or deep infiltrating endometriosis (DIE). We included studies comparing the diagnostic test accuracy of one or more blood biomarkers with the findings of surgical visualisation of endometriotic lesions.\nData collection and analysis\nTwo authors independently collected and performed a quality assessment of data from each study. For each diagnostic test, we classified the data as positive or negative for the surgical detection of endometriosis, and we calculated sensitivity and specificity estimates. We used the bivariate model to obtain pooled estimates of sensitivity and specificity whenever sufficient datasets were available. The predetermined criteria for a clinically useful blood test to replace diagnostic surgery were a sensitivity of 0.94 and a specificity of 0.79 to detect endometriosis. We set the criteria for triage tests at a sensitivity of ≥ 0.95 and a specificity of ≥ 0.50, which 'rules out' the diagnosis with high accuracy if there is a negative test result (SnOUT test), or a sensitivity of ≥ 0.50 and a specificity of ≥ 0.95, which 'rules in' the diagnosis with high accuracy if there is a positive result (SpIN test).\nMain results\nWe included 141 studies that involved 15,141 participants and evaluated 122 blood biomarkers. All the studies were of poor methodological quality. Studies evaluated the blood biomarkers either in a specific phase of the menstrual cycle or irrespective of the cycle phase, and they tested for them in serum, plasma or whole blood. Included women were a selected population with a high frequency of endometriosis (10% to 85%), in which surgery was indicated for endometriosis, infertility work-up or ovarian mass. 
Seventy studies evaluated the diagnostic performance of 47 blood biomarkers for endometriosis (44 single-marker tests and 30 combined tests of two to six blood biomarkers). These were angiogenesis/growth factors, apoptosis markers, cell adhesion molecules, high-throughput markers, hormonal markers, immune system/inflammatory markers, oxidative stress markers, microRNAs, tumour markers and other proteins. Most of these biomarkers were assessed in small individual studies, often using different cut-off thresholds, and we could only perform meta-analyses on the data sets for anti-endometrial antibodies, interleukin-6 (IL-6), cancer antigen-19.9 (CA-19.9) and CA-125. Diagnostic estimates varied significantly between studies for each of these biomarkers, and CA-125 was the only marker with sufficient data to reliably assess sources of heterogeneity.\nThe mean sensitivities and specificities of anti-endometrial antibodies (4 studies, 759 women) were 0.81 (95% confidence interval (CI) 0.76 to 0.87) and 0.75 (95% CI 0.46 to 1.00). For IL-6, with a cut-off value of > 1.90 to 2.00 pg/ml (3 studies, 309 women), sensitivity was 0.63 (95% CI 0.52 to 0.75) and specificity was 0.69 (95% CI 0.57 to 0.82). For CA-19.9, with a cut-off value of > 37.0 IU/ml (3 studies, 330 women), sensitivity was 0.36 (95% CI 0.26 to 0.45) and specificity was 0.87 (95% CI 0.75 to 0.99).\nStudies assessed CA-125 at different thresholds, demonstrating the following mean sensitivities and specificities: for cut-off > 10.0 to 14.7 U/ml: 0.70 (95% CI 0.63 to 0.77) and 0.64 (95% CI 0.47 to 0.82); for cut-off > 16.0 to 17.6 U/ml: 0.56 (95% CI 0.24, 0.88) and 0.91 (95% CI 0.75, 1.00); for cut-off > 20.0 U/ml: 0.67 (95% CI 0.50 to 0.85) and 0.69 (95% CI 0.58 to 0.80); for cut-off > 25.0 to 26.0 U/ml: 0.73 (95% CI 0.67 to 0.79) and 0.70 (95% CI 0.63 to 0.77); for cut-off > 30.0 to 33.0 U/ml: 0.62 (95% CI 0.45 to 0.79) and 0.76 (95% CI 0.53 to 1.00); and for cut-off > 35.0 to 36.0 U/ml: 0.40 (95% CI 0.32 to 0.49) and 0.91 (95% CI 0.88 to 0.94).\nWe could not statistically evaluate other biomarkers meaningfully, including biomarkers that were assessed for their ability to differentiate endometrioma from other benign ovarian cysts.\nEighty-two studies evaluated 97 biomarkers that did not differentiate women with endometriosis from disease-free controls. Of these, 22 biomarkers demonstrated conflicting results, with some studies showing differential expression and others no evidence of a difference between the endometriosis and control groups.\nAuthors' conclusions\nOf the biomarkers that were subjected to meta-analysis, none consistently met the criteria for a replacement or triage diagnostic test. A subset of blood biomarkers could prove useful either for detecting pelvic endometriosis or for differentiating ovarian endometrioma from other benign ovarian masses, but there was insufficient evidence to draw meaningful conclusions. Overall, none of the biomarkers displayed enough accuracy to be used clinically outside a research setting. We also identified blood biomarkers that demonstrated no diagnostic value in endometriosis and recommend focusing research resources on evaluating other more clinically useful biomarkers.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Generally, the reports were of low methodological quality, and most blood tests were only assessed by a single or a small number of studies. When the same biomarker was studied, there were significant differences in how studies were conducted, the group of women studied and the cut-offs used to determine a positive result."
}
]
|
query-laysum | 2303 | [
{
"role": "user",
"content": "Abstract: Background\nNicotinic acid (niacin) is known to decrease LDL-cholesterol, and triglycerides, and increase HDL-cholesterol levels. The evidence of benefits with niacin monotherapy or add-on to statin-based therapy is controversial.\nObjectives\nTo assess the effectiveness of niacin therapy versus placebo, administered as monotherapy or add-on to statin-based therapy in people with or at risk of cardiovascular disease (CVD) in terms of mortality, CVD events, and side effects.\nSearch methods\nTwo reviewers independently and in duplicate screened records and potentially eligible full texts identified through electronic searches of CENTRAL, MEDLINE, Embase, Web of Science, two trial registries, and reference lists of relevant articles (latest search in August 2016).\nSelection criteria\nWe included all randomised controlled trials (RCTs) that either compared niacin monotherapy to placebo/usual care or niacin in combination with other component versus other component alone. We considered RCTs that administered niacin for at least six months, reported a clinical outcome, and included adults with or without established CVD.\nData collection and analysis\nTwo reviewers used pre-piloted forms to independently and in duplicate extract trials characteristics, risk of bias items, and outcomes data. Disagreements were resolved by consensus or third party arbitration. We conducted random-effects meta-analyses, sensitivity analyses based on risk of bias and different assumptions for missing data, and used meta-regression analyses to investigate potential relationships between treatment effects and duration of treatment, proportion of participants with established coronary heart disease and proportion of participants receiving background statin therapy. We used GRADE to assess the quality of evidence.\nMain results\nWe included 23 RCTs that were published between 1968 and 2015 and included 39,195 participants in total. The mean age ranged from 33 to 71 years. The median duration of treatment was 11.5 months, and the median dose of niacin was 2 g/day. The proportion of participants with prior myocardial infarction ranged from 0% (4 trials) to 100% (2 trials, median proportion 48%); the proportion of participants taking statin ranged from 0% (4 trials) to 100% (12 trials, median proportion 100%).\nUsing available cases, niacin did not reduce overall mortality (risk ratio (RR) 1.05, 95% confidence interval (CI) 0.97 to 1.12; participants = 35,543; studies = 12; I2 = 0%; high-quality evidence), cardiovascular mortality (RR 1.02, 95% CI 0.93 to 1.12; participants = 32,966; studies = 5; I2 = 0%; moderate-quality evidence), non-cardiovascular mortality (RR 1.12, 95% CI 0.98 to 1.28; participants = 32,966; studies = 5; I2 = 0%; high-quality evidence), the number of fatal or non-fatal myocardial infarctions (RR 0.93, 95% CI 0.87 to 1.00; participants = 34,829; studies = 9; I2 = 0%; moderate-quality evidence), nor the number of fatal or non-fatal strokes (RR 0.95, 95% CI 0.74 to 1.22; participants = 33,661; studies = 7; I2 = 42%; low-quality evidence). Participants randomised to niacin were more likely to discontinue treatment due to side effects than participants randomised to control group (RR 2.17, 95% CI 1.70 to 2.77; participants = 33,539; studies = 17; I2 = 77%; moderate-quality evidence). 
The results were robust to sensitivity analyses using different assumptions for missing data.\nAuthors' conclusions\nModerate- to high-quality evidence suggests that niacin does not reduce mortality, cardiovascular mortality, non-cardiovascular mortality, the number of fatal or non-fatal myocardial infarctions, nor the number of fatal or non-fatal strokes but is associated with side effects. Benefits from niacin therapy in the prevention of cardiovascular disease events are unlikely.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Niacin did not reduce the number of deaths, heart attack or stroke. Many people (18%) had to stop taking niacin due to side effects. The results did not differ between participants who had or had not experienced a heart attack before taking niacin. The results did not differ between participants who were or were not taking a statin (another drug that prevents heart attack and stroke). The overall quality of evidence was moderate to high.\nIn summary, we found no evidence of benefits from niacin therapy."
}
]
|
query-laysum | 2304 | [
{
"role": "user",
"content": "Abstract: Background\nPeripheral arterial disease (PAD) affects between 4% and 12% of people aged 55 to 70 years, and 20% of people over 70 years. A common complaint is intermittent claudication (exercise-induced lower limb pain relieved by rest). These patients have a three- to six-fold increase in cardiovascular mortality. Cilostazol is a drug licensed for the use of improving claudication distance and, if shown to reduce cardiovascular risk, could offer additional clinical benefits. This is an update of the review first published in 2007.\nObjectives\nTo determine the effect of cilostazol on initial and absolute claudication distances, mortality and vascular events in patients with stable intermittent claudication.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and AMED databases, and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registries, on 9 November 2020.\nSelection criteria\nWe considered double-blind, randomised controlled trials (RCTs) of cilostazol versus placebo, or versus other drugs used to improve claudication distance in patients with stable intermittent claudication.\nData collection and analysis\nTwo authors independently assessed trials for selection and independently extracted data. Disagreements were resolved by discussion. We assessed the risk of bias with the Cochrane risk of bias tool. Certainty of the evidence was evaluated using GRADE. For dichotomous outcomes, we used odds ratios (ORs) with corresponding 95% confidence intervals (CIs) and for continuous outcomes we used mean differences (MDs) and 95% CIs. We pooled data using a fixed-effect model, or a random-effects model when heterogeneity was identified. Primary outcomes were initial claudication distance (ICD) and quality of life (QoL). Secondary outcomes were absolute claudication distance (ACD), revascularisation, amputation, adverse events and cardiovascular events.\nMain results\nWe included 16 double-blind, RCTs (3972 participants) comparing cilostazol with placebo, of which five studies also compared cilostazol with pentoxifylline. Treatment duration ranged from six to 26 weeks. All participants had intermittent claudication secondary to PAD. Cilostazol dose ranged from 100 mg to 300 mg; pentoxifylline dose ranged from 800 mg to 1200 mg. The certainty of the evidence was downgraded by one level for all studies because publication bias was strongly suspected. Other reasons for downgrading were imprecision, inconsistency and selective reporting.\nCilostazol versus placebo\nParticipants taking cilostazol had a higher ICD compared with those taking placebo (MD 26.49 metres; 95% CI 18.93 to 34.05; 1722 participants; six studies; low-certainty evidence). We reported QoL measures descriptively due to insufficient statistical detail within the studies to combine the results; there was a possible indication in improvement of QoL in the cilostazol treatment groups (low-certainty evidence). Participants taking cilostazol had a higher ACD compared with those taking placebo (39.57 metres; 95% CI 21.80 to 57.33; 2360 participants; eight studies; very-low certainty evidence). The most commonly reported adverse events were headache, diarrhoea, abnormal stools, dizziness, pain and palpitations. 
Participants taking cilostazol had an increased odds of experiencing headache compared to participants taking placebo (OR 2.83; 95% CI 2.26 to 3.55; 2584 participants; eight studies; moderate-certainty evidence).Very few studies reported on other outcomes so conclusions on revascularisation, amputation, or cardiovascular events could not be made.\nCilostazol versus pentoxifylline\nThere was no difference detected between cilostazol and pentoxifylline for improving walking distance, both in terms of ICD (MD 20.0 metres, 95% CI -2.57 to 42.57; 417 participants; one study; low-certainty evidence); and ACD (MD 13.4 metres, 95% CI -43.50 to 70.36; 866 participants; two studies; very low-certainty evidence). One study reported on QoL; the study authors reported no difference in QoL between the treatment groups (very low-certainty evidence). No study reported on revascularisation, amputation or cardiovascular events. Cilostazol participants had an increased odds of experiencing headache compared with participants taking pentoxifylline at 24 weeks (OR 2.20, 95% CI 1.16 to 4.17; 982 participants; two studies; low-certainty evidence).\nAuthors' conclusions\nCilostazol has been shown to improve walking distance in people with intermittent claudication. However, participants taking cilostazol had higher odds of experiencing headache. There is insufficient evidence about the effectiveness of cilostazol for serious events such as amputation, revascularisation, and cardiovascular events. Despite the importance of QoL to patients, meta-analysis could not be undertaken because of differences in measures used and reporting. Very limited data indicated no difference between cilostazol and pentoxifylline for improving walking distance and data were too limited for any conclusions on other outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cilostazol can increase the distance walked both in total and before the onset of pain, compared to placebo. Cilostazol was associated with increased headaches and there was a lack of evidence for other important outcomes such as amputation, revascularisation and cardiovascular events."
}
]
|
query-laysum | 2305 | [
{
"role": "user",
"content": "Abstract: Background\nSchizophrenia is one of the most common and disabling mental disorders. About 20% of people with schizophrenia do not respond to antipsychotics, which are the mainstay of the treatment for schizophrenia today, and need to seek other treatment options. Magnetic seizure therapy (MST) is one of the novel non-invasive brain stimulation techniques that are being investigated in recent years.\nObjectives\nTo evaluate the efficacy and tolerability of MST for people with schizophrenia.\nSearch methods\nOn 6 March 2022, we searched the Cochrane Schizophrenia Group's Study-Based Register of Trials which is based on CENTRAL, CINAHL, ClinicalTrials.Gov, Embase, ISRCTN, MEDLINE, PsycINFO, PubMed, and WHO ICTRP.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing MST alone or plus standard care with ECT or any other interventions for people with schizophrenia.\nData collection and analysis\nWe performed reference screening, study selection, data extraction and risk of bias and quality assessment in duplicate. We calculated the risk ratios (RRs) and their 95% confidence intervals (CIs) for binary outcomes and the mean difference (MD) and their 95% CIs for continuous outcomes. We used the original risk of bias tool for risk of bias assessment and created a Summary of findings table using GRADE.\nMain results\nWe included one four-week study with 79 adults in acute schizophrenia, comparing MST plus standard care to ECT plus standard care in this review. We rated the overall risk of bias as high due to high risk of bias in the domains of selective reporting and other biases (early termination and baseline imbalance) and unclear risk of bias in the domain of blinding of participants and personnel.\nWe found that MST and ECT may not differ in improving the global state (n = 79, risk ratio (RR) 1.12, 95% confidence interval (CI) 0.73 to 1.70), overall (n = 79, mean difference (MD) -0.20, 95% CI -8.08 to 7.68), the positive symptoms (n = 79, MD 1.40, 95% CI -1.97 to 4.77) and the negative symptoms (n = 79, MD -1.00, 95% CI -3.85 to 1.85) in people with schizophrenia.\nWe found that MST compared to ECT may cause less delayed memory deficit and less cognitive deterioration (n = 79, number of people with a delayed memory deficit, RR 0.63, 95% CI 0.41 to 0.96; n = 79, mean change in global cognitive function, MD 5.80, 95% CI 0.80 to 10.80), but also may improve more cognitive function (n = 47, number of people with any cognitive improvement, RR 3.30, 95% CI 1.29 to 8.47).\nWe found that there may be no difference between the two groups in terms of leaving the study early due to any reason (n = 79, RR 2.51, 95% CI 0.73 to 8.59), due to adverse effects (n = 79, RR 3.35, 95% CI 0.39 to 28.64) or due to inefficacy (n = 79, RR 2.52, 95% CI 0.11 to 60.10).\nSince all findings were based on one study with high risk of bias and the confidence in the evidence was very low, we were not sure these comparable or favourable effects of MST over ECT were its true effects.\nAuthors' conclusions\nDue to the paucity of data, we cannot draw any conclusion on the efficacy and tolerability of MST for people with schizophrenia. Well-designed RCTs are warranted to answer the question.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up-to-date to 06 March 2022."
}
]
|
query-laysum | 2306 | [
{
"role": "user",
"content": "Abstract: Background\nAn association has been hypothesized between periodontitis and hypertension. Periodontal therapy is believed to reduce systemic inflammatory mediators and increase endothelial function, thus having the potential to prevent and treat hypertension.\nObjectives\nTo assess the effect and safety of different periodontal treatment modalities on blood pressure (BP) in people with chronic periodontitis.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched for randomized controlled trials (RCTs) up to November 2020 in the Cochrane Hypertension Specialised Register, CENTRAL, MEDLINE, Embase, seven other databases, and two clinical trials registries. We contacted the authors of relevant papers regarding further published and unpublished work.\nSelection criteria\nRCTs and quasi-RCTs aiming to detect the effect of periodontal treatment on BP were eligible. Participants should have been diagnosed with chronic periodontitis and hypertension (or no hypertension if the study explored the preventive effect of periodontal treatment). Participants in the intervention group should have undergone subgingival scaling and root planing (SRP) and any other type of periodontal treatments, compared with either no periodontal treatment or alternative periodontal treatment in the control group.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane for study identification, data extraction, and risk of bias assessment. We used a formal pilot-tested data extraction form for data extraction, and the Cochrane risk of bias tool for risk of bias assessment. We planned the meta-analysis, test for heterogeneity, sensitivity analysis, and subgroup analysis. We assessed the certainty of evidence using GRADE. The primary outcome was change in systolic BP (SBP) and diastolic BP (DBP).\nMain results\nWe included eight RCTs. Five had low risk of bias, one had unclear risk of bias, and two had high risk of bias.\nFour trials compared periodontal treatment with no treatment. We found no evidence of a difference in the short-term change of SBP and DBP for people diagnosed with periodontitis and other cardiovascular diseases except hypertension (very low-certainty evidence). We found no evidence of a difference in long-term changes in SBP (mean difference [MD] −2.25 mmHg, 95% confidence interval [CI] −9.41 to 4.92; P = 0.54; studies = 2, participants = 108; low-certainty evidence) and DBP (MD −2.55 mmHg, 95% CI −6.90 to 1.80; P = 0.25; studies = 2, participants = 103; low-certainty evidence). Concerning people diagnosed with periodontitis, in the short term, two studies of low certainty reported no changes in SBP (MD −0.14 mmHg, 95% CI −4.05 to 3.77; P = 0.94; participants = 294) and DBP (MD −0.15 mmHg, 95% CI −2.47 to 2.17; P = 0.90; participants = 294), and we found no evidence of a difference in SBP and DBP over a long period based on low certainty of evidence.\nThree studies compared intensive periodontal treatment with supra-gingival scaling. 
We found no evidence of a difference in changes in SBP and DBP for any length of time in people diagnosed with periodontitis (very low-certainty evidence).\nIn people diagnosed with periodontitis and hypertension, we found one study reporting a significant reduction in the short term in SBP (MD −11.20 mmHg, 95% CI −15.40 to −7.00; P < 0.001; participants = 101; moderate-certainty evidence) and DBP (MD −8.40 mmHg, 95% CI −12.19 to −4.61; P < 0.0001; participants = 101; moderate-certainty evidence).\nAuthors' conclusions\nWe found no evidence of a difference of an impact of periodontal treatments on BP in most comparisons assessed in this review, and given the low certainty of evidence and the lack of relevant studies we could not draw conclusions about the effect of periodontal treatment on BP in people with chronic periodontitis. We found only one study suggesting that periodontal treatment may reduce SBP and DBP over a short period in people with hypertension and chronic periodontitis, but the certainty of evidence was moderate.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Can periodontal treatment prevent hypertension or lower the blood pressure of people with hypertension?"
}
]
|
query-laysum | 2307 | [
{
"role": "user",
"content": "Abstract: Background\nObesity is a global public health threat. The transtheoretical stages of change (TTM SOC) model has long been considered a useful interventional approach in lifestyle modification programmes, but its effectiveness in producing sustainable weight loss in overweight and obese individuals has been found to vary considerably.\nObjectives\nTo assess the effectiveness of dietary intervention or physical activity interventions, or both, and other interventions based on the transtheoretical model (TTM) stages of change (SOC) to produce sustainable (one year and longer) weight loss in overweight and obese adults.\nSearch methods\nStudies were obtained from searches of multiple electronic bibliographic databases. We searched The Cochrane Library, MEDLINE, EMBASE and PsycINFO. The date of the last search, for all databases, was 17 December 2013.\nSelection criteria\nTrials were included if they fulfilled the criteria of randomised controlled clinical trials (RCTs) using the TTM SOC as a model, that is a theoretical framework or guideline in designing lifestyle modification strategies, mainly dietary and physical activity interventions, versus a comparison intervention of usual care; one of the outcome measures of the study was weight loss, measured as change in weight or body mass index (BMI); participants were overweight or obese adults only; and the intervention was delivered by healthcare professionals or trained lay people at the hospital and community level, including at home.\nData collection and analysis\nTwo review authors independently extracted the data, assessed studies for risk of bias and evaluated overall study quality according to GRADE (Grading of Recommendations Assessment, Development and Evaluation). We resolved disagreements by discussion or consultation with a third party. A narrative, descriptive analysis was conducted for the systematic review.\nMain results\nA total of three studies met the inclusion criteria, allocating 2971 participants to the intervention and control groups. The total number of participants randomised to the intervention groups was 1467, whilst 1504 were randomised to the control groups. The length of intervention was 9, 12 and 24 months in the different trials. The use of TTM SOC in combination with diet or physical activity, or both, and other interventions in the included studies produced inconclusive evidence that TTM SOC interventions led to sustained weight loss (the mean difference between intervention and control groups varied from 2.1 kg to 0.2 kg at 24 months; 2971 participants; 3 trials; low quality evidence). Following application of TTM SOC there were improvements in physical activity and dietary habits, such as increased exercise duration and frequency, reduced dietary fat intake and increased fruit and vegetable consumption (very low quality evidence). Weight gain was reported as an adverse event in one of the included trials. None of the trials reported health-related quality of life, morbidity, or economic costs as outcomes. The small number of studies and their variable methodological quality limit the applicability of the findings to clinical practice. 
The main limitations include inadequate reporting of outcomes and the methods for allocation, randomisation and blinding; extensive use of self-reported measures to estimate the effects of interventions on a number of outcomes, including weight loss, dietary consumption and physical activity levels; and insufficient assessment of sustainability due to lack of post-intervention assessments.\nAuthors' conclusions\nThe evidence to support the use of TTM SOC in weight loss interventions is limited by risk of bias and imprecision, not allowing firm conclusions to be drawn. When combined with diet or physical activity, or both, and other interventions we found very low quality evidence that it might lead to better dietary and physical activity habits. This systematic review highlights the need for well-designed RCTs that apply the principles of the TTM SOC appropriately to produce conclusive evidence about the effect of TTM SOC lifestyle interventions on weight loss and other health outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included three studies in our systematic review. Altogether the studies evaluated 2971 participants, with 1467 participants allocated to the intervention groups and 1504 to the control groups. The studies had a length of intervention of 9, 12 and 24 months.\nThis plain language summary was current as of December 2013."
}
]
|
query-laysum | 2308 | [
{
"role": "user",
"content": "Abstract: Background\nConstruction workers are frequently exposed to various types of injury-inducing hazards. There are a number of injury prevention interventions, yet their effectiveness is uncertain.\nObjectives\nTo assess the effects of interventions for preventing injuries in construction workers.\nSearch methods\nWe searched the Cochrane Injuries Group's specialised register, CENTRAL (issue 3), MEDLINE, Embase and PsycINFO up to April 2017. The searches were not restricted by language or publication status. We also handsearched the reference lists of relevant papers and reviews.\nSelection criteria\nRandomised controlled trials, controlled before-after (CBA) studies and interrupted time-series (ITS) of all types of interventions for preventing fatal and non-fatal injuries among workers at construction sites.\nData collection and analysis\nTwo review authors independently selected studies, extracted data and assessed their risk of bias. For ITS studies, we re-analysed the studies and used an initial effect, measured as the change in injury rate in the year after the intervention, as well as a sustained effect, measured as the change in time trend before and after the intervention.\nMain results\nSeventeen studies (14 ITS and 3 CBA studies) met the inclusion criteria in this updated version of the review. The ITS studies evaluated the effects of: introducing or changing regulations that laid down safety and health requirements for the construction sites (nine studies), a safety campaign (two studies), a drug-free workplace programme (one study), a training programme (one study), and safety inspections (one study) on fatal and non-fatal occupational injuries. One CBA study evaluated the introduction of occupational health services such as risk assessment and health surveillance, one evaluated a training programme and one evaluated the effect of a subsidy for upgrading to safer scaffoldings. The overall risk of bias of most of the included studies was high, as it was uncertain for the ITS studies whether the intervention was independent from other changes and thus could be regarded as the main reason of change in the outcome. Therefore, we rated the quality of the evidence as very low for all comparisons.\nCompulsory interventions\nRegulatory interventions at national or branch level may or may not have an initial effect (effect size (ES) of −0.33; 95% confidence interval (CI) −2.08 to 1.41) and may or may not have a sustained effect (ES −0.03; 95% CI −0.30 to 0.24) on fatal and non-fatal injuries (9 ITS studies) due to highly inconsistent results (I² = 98%). 
Inspections may or may not have an effect on non-fatal injuries (ES 0.07; 95% CI −2.83 to 2.97; 1 ITS study).\nEducational interventions\nSafety training interventions may result in no significant reduction of non-fatal injuries (1 ITS study and 1 CBA study).\nInformational interventions\nWe found no studies that had evaluated informational interventions alone such as campaigns for risk communication.\nPersuasive interventions\nWe found no studies that had evaluated persuasive interventions alone such as peer feedback on workplace actions to increase acceptance of safe working methods.\nFacilitative interventions\nMonetary subsidies to companies may lead to a greater decrease in non-fatal injuries from falls to a lower level than no subsidies (risk ratio (RR) at follow-up: 0.93; 95% CI 0.30 to 2.91 from RR 3.89 at baseline; 1 CBA study).\nMultifaceted interventions\nA safety campaign intervention may result in an initial (ES −1.82; 95% CI −2.90 to −0.74) and sustained (ES −1.30; 95% CI −1.79 to −0.81) decrease in injuries at the company level (1 ITS study), but not at the regional level (1 ITS study). A multifaceted drug-free workplace programme at the company level may reduce non-fatal injuries in the year following implementation by −7.6 per 100 person-years (95% CI −11.2 to −4.0) and in the years thereafter by −2.0 per 100 person-years (95% CI −3.5 to −0.5) (1 ITS study). Introducing occupational health services may result in no decrease in fatal or non-fatal injuries (one CBA study).\nAuthors' conclusions\nThe vast majority of interventions to adopt safety measures recommended by standard texts on safety, consultants and safety courses have not been adequately evaluated. There is very low-quality evidence that introducing regulations as such may or may not result in a decrease in fatal and non-fatal injuries. There is also very low-quality evidence that regionally oriented safety campaigns, training, inspections or the introduction of occupational health services may not reduce non-fatal injuries in construction companies. There is very low-quality evidence that company-oriented safety interventions such as a multifaceted safety campaign, a multifaceted drug workplace programme and subsidies for replacement of scaffoldings may reduce non-fatal injuries among construction workers. More studies, preferably cluster-randomised controlled trials, are needed to evaluate different strategies to increase the employers' and workers' adherence to the safety measures prescribed by regulation.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To find out which interventions are most effective for reducing on-the-job injuries in construction workers."
}
]
|
query-laysum | 2309 | [
{
"role": "user",
"content": "Abstract: Background\nAbout half of patients with Crohn's disease (CD) require surgery within 10 years of diagnosis. Resection of the affected segment is highly effective, however the majority of patients experience clinical recurrence after surgery. Most of these patients have asymptomatic endoscopic recurrence weeks or months before starting with symptoms. This inflammation can be detected by colonoscopy and is a good predictor of poor prognosis.Therapy guided by colonoscopy could tailor the management and improve the prognosis of postoperative CD.\nObjectives\nTo assess the effects of prophylactic therapy guided by colonoscopy in reducing the postoperative recurrence of CD in adults.\nSearch methods\nThe following electronic databases were searched up to 17 December 2019: MEDLINE, Embase, CENTRAL, Clinical Trials.gov, WHO Trial Registry and Cochrane IBD specialized register. Reference lists of included articles, as well as conference proceedings were handsearched.\nSelection criteria\nRandomised controlled trials (RCTs), quasi-RCTs and cohort studies comparing colonoscopy-guided management versus management non-guided by colonoscopy.\nData collection and analysis\nTwo review authors independently considered studies for eligibility, extracted the data and assessed study quality. Methodological quality was assessed using both the Cochrane 'Risk of bias' tool for RCTs and Newcastle-Ottawa scale (NOS) for cohort studies. The primary outcome was clinical recurrence. Secondary outcomes included: endoscopic, surgical recurrence and adverse events. We calculated the risk ratio (RR) for each dichotomous outcome and extracted the hazard ratio (HR) for time-to-event outcomes. All estimates were reported with their corresponding 95% confidence interval (CI). Data were analysed on an intention-to-treat (ITT) basis. The overall quality of the evidence was evaluated using GRADE criteria.\nMain results\nTwo RCTs (237 participants) and five cohort studies (794 participants) met the inclusion criteria. Meta-analysis was not conducted as the studies were highly heterogeneous. We included two comparisons.\nIntensification of prophylactic-therapy guided by colonoscopy versus intensification guided by clinical recurrence\nOne unblinded RCT and four retrospective cohort studies addressed this comparison. All participants received the same prophylactic therapy immediately after surgery. In the colonoscopy-based management group the therapy was intensified in case of endoscopic recurrence; in the control group the therapy was intensified only in case of symptoms.\nIn the RCT, clinical recurrence (defined as Crohn's Disease Activity Index (CDAI) > 150 points) in the colonoscopy-based management group was 37.7% (46/122) compared to 46.1% (21/52) in the control group at 18 months' follow up (RR 0.82, 95% CI: 0.56 to 1.18, 174 participants, low-certainty evidence). There may be a reduction in endoscopic recurrence at 18 months with colonoscopy-based management (RR 0.73, 95% CI 0.56 to 0.95, 1 RCT, 174 participants, low-certainty evidence). 
The certainty of the evidence for surgical recurrence was very low, due to only four cohort studies with inconsistent results reporting this outcome.\nAdverse events at 18 months were similar in both groups, with 82% in the intervention group (100/122) and 86.5% in the control group (45/52) (RR 0.95, 95% CI:0.83 to 1.08, 1 RCT, 174 participants, low-certainty of evidence).The most common adverse events reported were alopecia, wound infection, sensory symptoms, systemic lupus, vasculitis and severe injection site reaction. Perforations or haemorrhages secondary to colonoscopy were not reported.\nInitiation of prophylactic-therapy guided by colonoscopy versus initiation immediately after surgery\nAn unblinded RCT and two retrospective cohort studies addressed this comparison. The control group received prophylactic therapy immediately after surgery, and in the colonoscopy-based management group the therapy was delayed up to detection of endoscopic recurrence.\nThe effects on clinical and endoscopic recurrence are uncertain (clinical recurrence until week 102: RR 1.16, 95% CI 0.73 to 1.84; endoscopic recurrence at week 102: RR 1.16, 95% CI 0.73 to 1.84; 1 RCT, 63 participants, very low-certainty evidence). Results from one cohort study were similarly uncertain (median follow-up 32 months, 199 participants). The effects on surgical recurrence at a median follow-up of 50 to 55 months were also uncertain in one cohort study (RR 0.79, 95% CI 0.38 to 1.62, 133 participants, very low-certainty evidence).\nThere were fewer adverse events with colonoscopy-based management (54.8% (17/31)) compared with the control group (93.8% (30/32)) but the evidence is very uncertain (RR 0.58, 95% CI 0.42 to 0.82; 1 RCT, 63 participants). Common adverse events were infections, gastrointestinal intolerance, leukopenia, pancreatitis and skin lesions. Perforations or haemorrhages secondary to colonoscopy were not reported.\nAuthors' conclusions\nIntensification of prophylactic-therapy guided by colonoscopy may reduce clinical and endoscopic postoperative recurrence of CD compared to intensification guided by symptoms, and there may be little or no difference in adverse effects. We are uncertain whether initiation of therapy guided by colonoscopy impacts postoperative recurrence and adverse events when compared to initiation immediately after surgery, as the certainty of the evidence is very low. Further studies are necessary to improve the certainty of the evidence of this review.\n\nGiven the provided abstract, please respond to the following query: \"How was this study performed?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A systematic review of the current literature was performed to assess the efficacy of prophylactic-therapy guided by colonoscopy in reducing the postoperative recurrence of CD. An electronic search of several databases was performed and studies that met our inclusion criteria were selected for further evaluation."
}
]
|
query-laysum | 2310 | [
{
"role": "user",
"content": "Abstract: Background\nFrom the societal and employers' perspectives, sickness absence has a large economic impact. Internationally, there is variation in sickness certification practices. However, in most countries a physician's certificate of illness or reduced work ability is needed at some point of sickness absence. In many countries, there is a time period of varying length called the 'self-certification period' at the beginning of sickness absence. During that time a worker is not obliged to provide his or her employer a medical certificate and it is usually enough that the employee notifies his or her supervisor when taken ill. Self-certification can be introduced at organisational, regional, or national level.\nObjectives\nTo evaluate the effects of introducing, abolishing, or changing the period of self-certification of sickness absence on: the total or average duration (number of sickness absence days) of short-term sickness absence periods; the frequency of short-term sickness absence periods; the associated costs (of sickness absence and (occupational) health care); and social climate, supervisor involvement, and workload or presenteeism (see Figure 1).\nSearch methods\nWe conducted a systematic literature search to identify all potentially eligible published and unpublished studies. We adapted the search strategy developed for MEDLINE for use in the other electronic databases. We also searched for unpublished trials on ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). We used Google Scholar for exploratory searches.\nSelection criteria\nWe considered randomised controlled trials (RCTs), controlled before-after (CBA) studies, and interrupted time-series (ITS) studies for inclusion. We included studies carried out with individual employees or insured workers. We also included studies in which participants were addressed at the aggregate level of organisations, companies, municipalities, healthcare settings, or general populations. We included studies evaluating the effects of introducing, abolishing, or changing the period of self-certification of sickness absence.\nData collection and analysis\nWe conducted a systematic literature search up to 14 June 2018. We calculated missing data from other data reported by the authors. We intended to perform a random-effects meta-analysis, but the studies were too different to enable meta-analysis.\nMain results\nWe screened 6091 records for inclusion. Five studies fulfilled our inclusion criteria: one is an RCT and four are CBA studies. One study from Sweden changed the period of self-certification in 1985 in two districts for all insured inhabitants. Three studies from Norway conducted between 2001 and 2014 changed the period of self-certification in municipalities for all or part of the workers. 
One study from 1969 introduced self-certification for all manual workers of an oil refinery in the UK.\nLonger compared to shorter self-certificationfor reducing sickness absence in workers\nOutcome: average duration of sickness absence periods\nExtending the period of self-certification from one week to two weeks produced a higher mean duration of sickness absence periods: mean difference in change values between the intervention and control group (MDchange) was 0.67 days/period up to 29 days (95% confidence interval (95% CI) 0.55 to 0.79; 1 RCT; low-certainty evidence).\nThe introduction of self-certification for a maximum of three days produced a lower mean duration of sickness absence up to three days (MDchange −0.32 days/period, 95% CI −0.39 to −0.25; 1 CBA study; very low-certainty evidence). The authors of a different study reported that prolonging self-certification from ≤ 3 days to ≤ 365 days did not lead to a change, but they did not provide numerical data (very low-certainty evidence).\nOutcome: number of sickness absence periods per worker\nExtending the period of self-certification from one week to two weeks resulted in no difference in the number of sickness absence periods in one RCT, but the authors did not report numerical data (low-certainty evidence).\nThe introduction of self-certification for a maximum of three days produced a higher mean number of sickness absence periods lasting up to three days (MDchange 0.48 periods, 95% CI 0.33 to 0.63) in one CBA study (very low-certainty evidence).\nExtending the period of self-certification from three days to up to a year decreased the number of periods in one CBA study, but the authors did not report data (very low-certainty evidence).\nOutcome: average lost work time per 100 person-years\nExtending the period of self-certification from one week to two weeks resulted in an inferred increase in lost work time in one RCT (very low-certainty evidence).\nExtending the period of self-certification (introduction of self-certification for a maximum of three days (from zero to three days) and from three days to five days, respectively) resulted in more work time lost due to sickness absence periods lasting up to three days in two CBA studies that could not be pooled (MDchange 0.54 days/person-year, 95% CI 0.47 to 0.61; and MDchange 1.38 days/person-year, 95% CI 1.16 to 1.60; very low-certainty evidence).\nExtending the period of self-certification from three days up to 50 days led to 0.65 days less lost work time in one CBA study, based on absence periods lasting between four and 16 days. 
Extending the period of self-certification from three days up to 365 days resulted in less work time lost due to sickness absence periods longer than 16 days (MDchange −2.84 days, 95% CI −3.35 to −2.33; 1 CBA study; very low-certainty evidence).\nOutcome: costs of sickness absence and physician certification\nOne RCT reported that the higher costs of sickness absence benefits incurred by extending the period of self-certification far outweighed the possible reduction in costs of fewer physician appointments by almost six to one (low-certainty evidence).\nIn summary, we found very low-certainty evidence that introducing self-certification of sickness absence or prolonging the self-certification period has inconsistent effects on the mean number of sickness absence days, the number of sickness absence periods, and on lost work time due to sickness absence periods.\nAuthors' conclusions\nThere is low- to very low-certainty evidence of inconsistent effects of changing the period of self-certification on the duration or frequency of short-term sickness absence periods or the amount of work time lost due to sickness absence. Because the evidence is of low or very low certainty, more and better studies are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are uncertain whether changing the length of time a worker is allowed to take time off work because of illness without a physician's certificate has any effect on sickness absence, as the included studies reported very different results, and the certainty of the evidence was low to very low."
}
]
|
query-laysum | 2311 | [
{
"role": "user",
"content": "Abstract: Background\nSkeletal muscle cramps are common and often occur in association with pregnancy, advanced age, exercise or motor neuron disorders (such as amyotrophic lateral sclerosis). Typically, such cramps have no obvious underlying pathology, and so are termed idiopathic. Magnesium supplements are marketed for the prophylaxis of cramps but the efficacy of magnesium for this purpose remains unclear.\nThis is an update of a Cochrane Review first published in 2012, and performed to identify and incorporate more recent studies.\nObjectives\nTo assess the effects of magnesium supplementation compared to no treatment, placebo control or other cramp therapies in people with skeletal muscle cramps.\nSearch methods\nOn 9 September 2019, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, LILACS, CINAHL Plus, AMED, and SPORTDiscus. We also searched WHO-ICTRP and ClinicalTrials.gov for registered trials that might be ongoing or unpublished, and ISI Web of Science for studies citing the studies included in this review.\nSelection criteria\nRandomized controlled trials (RCTs) of magnesium supplementation (in any form) to prevent skeletal muscle cramps in any patient group (i.e. all clinical presentations of cramp). We considered comparisons of magnesium with no treatment, placebo control, or other therapy.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted data. Two review authors assessed risk of bias. We attempted to contact all study authors when questions arose and obtained participant-level data for four of the included trials, one of which was unpublished. We collected all data on adverse effects from the included RCTs.\nMain results\nWe identified 11 trials (nine parallel-group, two cross-over) enrolling a total of 735 individuals, amongst whom 118 cross-over participants additionally served as their own controls. Five trials enrolled women with pregnancy-associated leg cramps (408 participants) and five trials enrolled people with idiopathic cramps (271 participants, with 118 additionally crossed over to control). Another study enrolled 29 people with liver cirrhosis, only some of whom suffered muscle cramps. All trials provided magnesium as an oral supplement, except for one trial which provided magnesium as a series of slow intravenous infusions. Nine trials compared magnesium to placebo, one trial compared magnesium to no treatment, calcium carbonate or vitamin B, and another trial compared magnesium to vitamin E or calcium. We judged the single trial in people with liver cirrhosis and all five trials in participants with pregnancy-associated leg cramps to be at high risk of bias. In contrast, we rated the risk of bias high in only one of five trials in participants with idiopathic rest cramps.\nFor idiopathic cramps, largely in older adults (mean age 61.6 to 69.3 years) presumed to have nocturnal leg cramps (the commonest presentation), differences in measures of cramp frequency when comparing magnesium to placebo were small, not statistically significant, and showed minimal heterogeneity (I² = 0% to 12%). 
This includes the primary endpoint, percentage change from baseline in the number of cramps per week at four weeks (mean difference (MD) −9.59%, 95% confidence interval (CI) −23.14% to 3.97%; 3 studies, 177 participants; moderate-certainty evidence); and the difference in the number of cramps per week at four weeks (MD −0.18 cramps/week, 95% CI −0.84 to 0.49; 5 studies, 307 participants; moderate-certainty evidence). The percentage of individuals experiencing a 25% or better reduction in cramp rate from baseline was also no different (RR 1.04, 95% CI 0.84 to 1.29; 3 studies, 177 participants; high-certainty evidence). Similarly, no statistically significant difference was found at four weeks in measures of cramp intensity or cramp duration. This includes the number of participants rating their cramps as moderate or severe at four weeks (RR 1.33, 95% CI 0.81 to 2.21; 2 studies, 91 participants; moderate-certainty evidence); and the percentage of participants with the majority of cramp durations of one minute or more at four weeks (RR 1.83, 95% CI 0.74 to 4.53, 1 study, 46 participants; low-certainty evidence).\nWe were unable to perform meta-analysis for trials of pregnancy-associated leg cramps. The single study comparing magnesium to no treatment failed to find statistically significant benefit on a three-point ordinal scale of overall treatment efficacy. Of the three trials comparing magnesium to placebo, one found no benefit on frequency or intensity measures, another found benefit for both, and a third reported inconsistent results for frequency that could not be reconciled. The single study in people with liver cirrhosis was small and had limited reporting of cramps, but found no difference in terms of cramp frequency or cramp intensity.\nOur analysis of adverse events pooled all studies, regardless of the setting in which cramps occurred. Major adverse events (occurring in 2 out of 72 magnesium recipients and 3 out of 68 placebo recipients), and withdrawals due to adverse events, were not significantly different from placebo. However, in the four studies for which it could be determined, more participants experienced minor adverse events in the magnesium group than in the placebo group (RR 1.51, 95% CI 0.98 to 2.33; 4 studies, 254 participants; low-certainty evidence). Overall, oral magnesium was associated with mostly gastrointestinal adverse events (e.g. diarrhoea), experienced by 11% (10% in control) to 37% (14% in control) of participants.\nAuthors' conclusions\nIt is unlikely that magnesium supplementation provides clinically meaningful cramp prophylaxis to older adults experiencing skeletal muscle cramps. In contrast, for those experiencing pregnancy-associated rest cramps the literature is conflicting and further research in this population is needed. We found no RCTs evaluating magnesium for exercise-associated muscle cramps or disease-state-associated muscle cramps (for example amyotrophic lateral sclerosis/motor neuron disease) other than a single small (inconclusive) study in people with liver cirrhosis, only some of whom suffered cramps.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Muscle cramps are common and occur in a wide range of settings. Older adults and pregnant women commonly complain of leg cramps while they are resting, athletes can cramp when they are pushing the limits of their endurance, and some people develop muscle cramps as a symptom of other medical conditions. One potential treatment that is already being marketed to prevent muscle cramps is magnesium supplementation. Magnesium is a common mineral in our diets and extra oral supplements of this mineral are available either over the Internet or in health food stores and pharmacies (usually in the form of tablets or powders to be dissolved in water). We wanted to combine studies to get the best estimate of magnesium's effect on cramping. We also wanted to examine the effect of magnesium in different categories of cramp sufferers, in case it might work in one setting, and not another."
}
]
|
query-laysum | 2312 | [
{
"role": "user",
"content": "Abstract: Background\nIntrahepatic cholestasis of pregnancy (ICP) is a liver disorder that can develop in pregnancy. It occurs when there is a build-up of bile acids in the maternal blood. It has been linked to adverse maternal and fetal/neonatal outcomes. As the pathophysiology is poorly understood, therapies have been largely empiric. As ICP is an uncommon condition (incidence less than 2% a year), many trials have been small. Synthesis, including recent larger trials, will provide more evidence to guide clinical practice. This review is an update of a review first published in 2001 and last updated in 2013.\nObjectives\nTo assess the effects of pharmacological interventions to treat women with intrahepatic cholestasis of pregnancy, on maternal, fetal and neonatal outcomes.\nSearch methods\nFor this update, we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (13 December 2019), and reference lists of retrieved studies.\nSelection criteria\nRandomised or quasi-randomised controlled trials, including cluster-randomised trials and trials published in abstract form only, that compared any drug with placebo or no treatment, or two drug intervention strategies, for women with a clinical diagnosis of intrahepatic cholestasis of pregnancy.\nData collection and analysis\nThe review authors independently assessed trials for eligibility and risks of bias. We independently extracted data and checked these for accuracy. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 26 trials involving 2007 women. They were mostly at unclear to high risk of bias. They assessed nine different pharmacological interventions, resulting in 14 different comparisons. We judged two placebo-controlled trials of ursodeoxycholic acid (UDCA) in 715 women to be at low risk of bias.\nThe ten different pharmacological interventions were: agents believed to detoxify bile acids (UCDA) and S-adenosylmethionine (SAMe); agents used to bind bile acids in the intestine (activated charcoal, guar gum, cholestyramine); Chinese herbal medicines (yinchenghao decoction (YCHD), salvia, Yiganling and Danxioling pill (DXLP)), and agents aimed to reduce bile acid production (dexamethasone)\nCompared with placebo, UDCA probably results in a small improvement in pruritus score measured on a 100 mm visual analogue scale (VAS) (mean difference (MD) −7.64 points, 95% confidence interval (CI) −9.69 to −5.60 points; 2 trials, 715 women; GRADE moderate certainty), where a score of zero indicates no itch and a score of 100 indicates severe itching. The evidence for fetal distress and stillbirth were uncertain, due to serious limitations in study design and imprecision (risk ratio (RR) 0.70, 95% CI 0.35 to 1.40; 6 trials, 944 women; RR 0.33, 95% CI 0.08 to 1.37; 6 trials, 955 women; GRADE very low certainty).\nWe found very few differences for the other comparisons included in this review.\nThere is insufficient evidence to indicate if SAMe, guar gum, activated charcoal, dexamethasone, cholestyramine, Salvia, Yinchenghao decoction, Danxioling and Yiganling, or Yiganling alone or in combination are effective in treating women with intrahepatic cholestasis of pregnancy.\nAuthors' conclusions\nWhen compared with placebo, UDCA administered to women with ICP probably shows a reduction in pruritus. 
However the size of the effect is small and for most pregnant women and clinicians, the reduction may fall below the minimum clinically worthwhile effect. The evidence was unclear for other adverse fetal outcomes, due to very low-certainty evidence. There is insufficient evidence to indicate that SAMe, guar gum, activated charcoal, dexamethasone, cholestyramine, YCHD, DXLP, Salvia, Yiganling alone or in combination are effective in treating women with cholestasis of pregnancy. There are no trials of the efficacy of topical emollients.\nFurther high-quality trials of other interventions are needed in order to identify effective treatments for maternal itching and preventing adverse perinatal outcomes. It would also be helpful to identify those women who are mostly likely to respond to UDCA (for example, whether bile acid concentrations affect how women with ICP respond to treatment with UDCA).\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence in December 2019, and identified 26 trials involving 2007 women. The trials assessed nine different interventions, but for most of them the trials were small and had a high risk of bias; we were therefore unable to draw firm conclusions. However, the most widely-used treatment, ursodeoxycholic acid (UDCA), for which we identified seven trials (1008 women), included two trials at low risk of bias (755 women). There is now evidence that UDCA probably reduces itching (moderate-certainty evidence). However, the size of the effect is small and for many pregnant women may not be worthwhile. The evidence for an effect of UCDA on stillbirth or fetal distress is unclear, mainly due to limitations in study design and imprecise results (very low-certainty evidence)."
}
]
|
query-laysum | 2313 | [
{
"role": "user",
"content": "Abstract: Background\nBronchopulmonary dysplasia (BPD), defined as oxygen dependence at 36 weeks' postmenstrual age (PMA), remains an important complication of prematurity. Pulmonary inflammation plays a central role in the pathogenesis of BPD. Attenuating pulmonary inflammation with postnatal systemic corticosteroids reduces the incidence of BPD in preterm infants but may be associated with an increased risk of adverse neurodevelopmental outcomes. Local administration of corticosteroids via inhalation may be an effective and safe alternative.\nObjectives\nTo assess the benefits and harms of inhaled corticosteroids versus placebo, initiated between seven days of postnatal life and 36 weeks' postmenstrual age, to preterm infants at risk of developing bronchopulmonary dysplasia.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, and three trials registries to August 2022. We searched conference proceedings and the reference lists of retrieved articles for additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing inhaled corticosteroids to placebo, started between seven days' postnatal age (PNA) and 36 weeks' PMA, in infants at risk of BPD. We excluded trials investigating systemic corticosteroids versus inhaled corticosteroids.\nData collection and analysis\nWe collected data on participant characteristics, trial methodology, and inhalation regimens. The primary outcomes were mortality, BPD, or both at 36 weeks' PMA. Secondary outcomes included short-term respiratory outcomes (mortality or BPD at 28 days' PNA, failure to extubate, total days of mechanical ventilation and oxygen use, and need for systemic corticosteroids) and adverse effects. We contacted the trial authors to verify the validity of extracted data and to request missing data. We analysed all data using Review Manager 5. Where possible, we reported the results of meta-analyses using risk ratios (RRs) and risk differences (RDs) for dichotomous outcomes and mean differences (MDs) for continuous outcomes, along with their 95% confidence intervals (CIs). We analysed ventilated and non-ventilated participants separately. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included seven trials involving 218 preterm infants in this review. We identified no new eligible studies in this update. The evidence is very uncertain regarding whether inhaled corticosteroids affects the combined outcome of mortality or BPD at 36 weeks' PMA (RR 1.10, 95% CI 0.74 to 1.63; RD 0.07, 95% CI −0.21 to 0.34; 1 study, 30 infants; very low-certainty) or its separate components: mortality (RR 3.00, 95% CI 0.35 to 25.78; RD 0.07, 95% CI −0.08 to 0.21; 3 studies, 61 infants; very low-certainty) and BPD (RR 1.00, 95% CI 0.59 to 1.70; RD 0.00, 95% CI −0.31 to 0.31; 1 study, 30 infants; very low-certainty) at 36 weeks' PMA. Inhaled corticosteroids may reduce the need for systemic corticosteroids, but the evidence is very uncertain (RR 0.51, 95% CI 0.26 to 1.00; RD −0.22, 95% CI −0.42 to −0.02; number needed to treat for an additional beneficial outcome 5, 95% CI 2 to 115; 4 studies, 74 infants; very low-certainty). There was a paucity of data on short-term and long-term adverse effects. 
Despite a low risk of bias in the individual studies, we considered the certainty of the evidence for all comparisons discussed above to be very low, because the studies had few participants, there was substantial clinical heterogeneity between studies, and only three studies reported the primary outcome of this review.\nAuthors' conclusions\nBased on the available evidence, we do not know if inhaled corticosteroids initiated from seven days of life in preterm infants at risk of developing BPD reduces mortality or BPD at 36 weeks' PMA. There is a need for larger randomised placebo-controlled trials to establish the benefits and harms of inhaled corticosteroids.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared inhaled corticosteroids to placebo in premature infants who were at risk of BPD. We compared and summarised the results of the included studies, and rated our confidence in the evidence based on factors such as study methods and sizes."
}
]
|
query-laysum | 2314 | [
{
"role": "user",
"content": "Abstract: Background\nThe coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems worldwide. Multiple reports on thromboembolic complications related to COVID-19 have been published, and researchers have described that people with COVID-19 are at high risk for developing venous thromboembolism (VTE). Anticoagulants have been used as pharmacological interventions to prevent arterial and venous thrombosis, and their use in the outpatient setting could potentially reduce the prevalence of vascular thrombosis and associated mortality in people with COVID-19. However, even lower doses used for a prophylactic purpose may result in adverse events such as bleeding. It is important to consider the evidence for anticoagulant use in non-hospitalised people with COVID-19.\nObjectives\nTo evaluate the benefits and harms of prophylactic anticoagulants versus active comparators, placebo or no intervention, or non-pharmacological interventions in non-hospitalised people with COVID-19.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 18 April 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing prophylactic anticoagulants with placebo or no treatment, another active comparator, or non-pharmacological interventions in non-hospitalised people with COVID-19. We included studies that compared anticoagulants with a different dose of the same anticoagulant. We excluded studies with a duration of under two weeks.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Our primary outcomes were all-cause mortality, VTE (deep vein thrombosis (DVT) or pulmonary embolism (PE)), and major bleeding. Our secondary outcomes were DVT, PE, need for hospitalisation, minor bleeding, adverse events, and quality of life. We used GRADE to assess the certainty of the evidence.\nMain results\nWe included five RCTs with up to 90 days of follow-up (short term). Data were available for meta-analysis from 1777 participants.\nAnticoagulant compared to placebo or no treatment\nFive studies compared anticoagulants with placebo or no treatment and provided data for three of our outcomes of interest (all-cause mortality, major bleeding, and adverse events). The evidence suggests that prophylactic anticoagulants may lead to little or no difference in all-cause mortality (risk ratio (RR) 0.36, 95% confidence interval (CI) 0.04 to 3.61; 5 studies; 1777 participants; low-certainty evidence) and probably reduce VTE from 3% in the placebo group to 1% in the anticoagulant group (RR 0.36, 95% CI 0.16 to 0.85; 4 studies; 1259 participants; number needed to treat for an additional beneficial outcome (NNTB) = 50; moderate-certainty evidence). There may be little to no difference in major bleeding (RR 0.36, 95% CI 0.01 to 8.78; 5 studies; 1777 participants; low-certainty evidence). Anticoagulants probably result in little or no difference in DVT (RR 1.02, 95% CI 0.30 to 3.46; 3 studies; 1009 participants; moderate-certainty evidence), but probably reduce the risk of PE from 2.7% in the placebo group to 0.7% in the anticoagulant group (RR 0.25, 95% CI 0.08 to 0.79; 3 studies; 1009 participants; NNTB 50; moderate-certainty evidence). 
Anticoagulants probably lead to little or no difference in reducing hospitalisation (RR 1.01, 95% CI 0.59 to 1.75; 4 studies; 1459 participants; moderate-certainty evidence) and may lead to little or no difference in adverse events (minor bleeding, RR 2.46, 95% CI 0.90 to 6.72; 5 studies, 1777 participants; low-certainty evidence).\nAnticoagulant compared to a different dose of the same anticoagulant\nOne study compared anticoagulant (higher-dose apixaban) with a different (standard) dose of the same anticoagulant and reported five relevant outcomes. No cases of all-cause mortality, VTE, or major bleeding occurred in either group during the 45-day follow-up (moderate-certainty evidence). Higher-dose apixaban compared to standard-dose apixaban may lead to little or no difference in reducing the need for hospitalisation (RR 1.89, 95% CI 0.17 to 20.58; 1 study; 278 participants; low-certainty evidence) or in the number of adverse events (minor bleeding, RR 0.47, 95% CI 0.09 to 2.54; 1 study; 278 participants; low-certainty evidence).\nAnticoagulant compared to antiplatelet agent\nOne study compared anticoagulant (apixaban) with antiplatelet agent (aspirin) and reported five relevant outcomes. No cases of all-cause mortality or major bleeding occurred during the 45-day follow-up (moderate-certainty evidence). Apixaban may lead to little or no difference in VTE (RR 0.36, 95% CI 0.01 to 8.65; 1 study; 279 participants; low-certainty evidence), need for hospitalisation (RR 3.20, 95% CI 0.13 to 77.85; 1 study; 279 participants; low-certainty evidence), or adverse events (minor bleeding, RR 2.13, 95% CI 0.40 to 11.46; 1 study; 279 participants; low-certainty evidence).\nNo included studies reported on quality of life or investigated anticoagulants compared to a different anticoagulant, or anticoagulants compared to non-pharmacological interventions.\nAuthors' conclusions\nWe found low- to moderate-certainty evidence from five RCTs that prophylactic anticoagulants result in little or no difference in major bleeding, DVT, need for hospitalisation, or adverse events when compared with placebo or no treatment in non-hospitalised people with COVID-19. Low-certainty evidence indicates that prophylactic anticoagulants may result in little or no difference in all-cause mortality when compared with placebo or no treatment, but moderate-certainty evidence indicates that prophylactic anticoagulants probably reduce the incidence of VTE and PE.\nLow-certainty evidence suggests that comparing different doses of the same prophylactic anticoagulant may result in little or no difference in need for hospitalisation or adverse events. Prophylactic anticoagulants may result in little or no difference in risk of VTE, hospitalisation, or adverse events when compared with antiplatelet agents (low-certainty evidence). Given that there were only short-term data from one study, these results should be interpreted with caution.\nAdditional trials of sufficient duration are needed to clearly determine any effect on clinical outcomes.\n\nGiven the provided abstract, please respond to the following query: \"How are COVID-19 and VTE related?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "COVID-19 typically affects the lungs and airways; however, in addition to respiratory problems, people with COVID-19 can also experience problems with their blood vessels, leading to blood clots forming in the veins and lungs."
}
]
|
query-laysum | 2315 | [
{
"role": "user",
"content": "Abstract: Background\nDirect laryngoscopy is the method currently used for tracheal intubation in children. It occasionally offers unexpectedly poor laryngeal views. Indirect laryngoscopy involves visualizing the vocal cords by means other than obtaining a direct sight, with the potential to improve outcomes. We reviewed the current available literature and performed a meta-analysis to compare direct versus indirect laryngoscopy, or videolaryngoscopy, with regards to efficacy and adverse effects.\nObjectives\nTo assess the efficacy of indirect laryngoscopy, or videolaryngoscopy, versus direct laryngoscopy for intubation of children with regards to intubation time, number of attempts at intubation, and adverse haemodynamic responses to endotracheal intubation. We also assessed other adverse responses to intubation, such as trauma to oral, pharyngeal, and laryngeal structures, and we assessed vocal cord view scores.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and trial registers (www.clinicaltrials.gov and www.controlledtrials) in November 2015. We reran the search in January 2017. We added new studies of potential interest to a list of ‘Studies awaiting classification' and will incorporate them into formal review findings during the review update. We performed reference checking and citation searching and contacted the authors of unpublished data to ask for more information. We applied no language restrictions.\nSelection criteria\nWe included only randomized controlled trials. Participants were children aged 28 days to 18 years. Investigators performed intubations using any type of indirect laryngoscopes, or videolaryngoscopes, versus direct laryngoscopes.\nData collection and analysis\nWe used Cochrane standard methodological procedures. Two review authors independently reviewed titles, extracted data, and assessed risk of bias.\nMain results\nWe included 12 studies (803 children) in this review and meta-analysis. We identified three studies that are awaiting classification and two ongoing studies.\nTrial results show that a longer intubation time was required when indirect laryngoscopy, or videolaryngoscopy, was used instead of direct laryngoscopy (12 trials; n = 798; mean difference (MD) 5.49 seconds, 95% confidence interval (CI) 1.37 to 9.60; I2 = 90%; very low-quality evidence). Researchers found no significant differences between direct and indirect laryngoscopy on assessment of success of the first attempt at intubation (11 trials; n = 749; risk ratio (RR) 0.96, 95% CI 0.91 to 1.02; I2 = 67%; low-quality evidence) and observed that unsuccessful intubation (five trials; n = 263) was significantly increased in the indirect laryngoscopy, or videolaryngoscopy, group (RR 4.93, 95% CI 1.33 to 18.31; I2 = 0%; low-quality evidence). Five studies reported the effect of intubation on oxygen saturation (n = 272; very low-quality evidence). Five children had desaturation during intubation: one from the direct laryngoscopy group and four from the indirect laryngoscopy, or videolaryngoscopy, group.\nTwo studies (n = 100) reported other haemodynamic responses to intubation (very low-quality evidence). 
One study reported a significant increase in heart rate five minutes after intubation in the indirect laryngoscopy group (P = 0.007); the other study found that the heart rate change in the direct laryngoscopy group was significantly less than the heart rate change in the indirect laryngoscopy, or videolaryngoscopy, group (P < 0.001). A total of five studies (n = 244; very low-quality evidence) looked at evidence of trauma resulting from intubation. Investigators reported that only two children from the direct laryngoscopy group had trauma compared with no children in the indirect laryngoscopy, or videolaryngoscopy, group.\nUse of indirect laryngoscopy, or videolaryngoscopy, improved the percentage of glottic opening (five trials; n = 256). Studies noted no significant difference in Cormack and Lehane score (C&L) grade 1 (three trials; n = 190; RR 1.06, 95% CI 0.93 to 1.21; I2 = 59%).\nAuthors' conclusions\nEvidence suggests that indirect laryngoscopy, or videolaryngoscopy, leads to prolonged intubation time with an increased rate of intubation failure when compared with direct laryngoscopy (very low-quality evidence due to imprecision, inconsistency, and study limitations). Review authors had difficulty reaching conclusions on adverse haemodynamic responses and other adverse effects of intubation, as only a few children were reported to have these outcomes. Use of indirect laryngoscopy, or videolaryngoscopy, might lead to improved vocal cord view, but marked heterogeneity between studies made it difficult for review authors to reach conclusions on this outcome.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Children who need a general anaesthetic sometimes need a breathing tube placed in their throat, known as intubation. Intubations are also performed in emergency situations such as trauma, severe breathing difficulty, and heart dysfunction. Intubation is traditionally performed with a laryngoscope, a device that lifts the tongue to allow a direct view of the vocal cords. This is known as direct laryngoscopy. New devices have been developed that show the vocal cords through a fine video camera placed on the tip of the device; this is known as indirect laryngoscopy, or videolaryngoscopy.\nIndirect laryngoscopes, or videolaryngoscopes, are thought to provide a better view of the vocal cords when compared with direct laryngoscopes, but whether this equipment allows easier placement of the breathing tube remains unclear."
}
]
|
query-laysum | 2316 | [
{
"role": "user",
"content": "Abstract: Background\nObesity is a global public health threat. Chromium picolinate (CrP) is advocated in the medical literature for the reduction of body weight, and preparations are sold as slimming aids in the USA and Europe, and on the Internet.\nObjectives\nTo assess the effects of CrP supplementation in overweight or obese people.\nSearch methods\nWe searched The Cochrane Library, MEDLINE, EMBASE, ISI Web of Knowledge, the Chinese Biomedical Literature Database, the China Journal Fulltext Database and the Chinese Scientific Journals Fulltext Database (all databases to December 2012), as well as other sources (including databases of ongoing trials, clinical trials registers and reference lists).\nSelection criteria\nWe included trials if they were randomised controlled trials (RCT) of CrP supplementation in people who were overweight or obese. We excluded studies including children, pregnant women or individuals with serious medical conditions.\nData collection and analysis\nTwo authors independently screened titles and abstracts for relevance. Screening for inclusion, data extraction and 'Risk of bias' assessment were carried out by one author and checked by a second. We assessed the risk of bias by evaluating the domains selection, performance, attrition, detection and reporting bias. We performed a meta-analysis of included trials using Review Manager 5.\nMain results\nWe evaluated nine RCTs involving a total of 622 participants. The RCTs were conducted in the community setting, with interventions mainly delivered by health professionals, and had a short- to medium-term follow up (up to 24 weeks). Three RCTs compared CrP plus resistance or weight training with placebo plus resistance or weight training, the other RCTs compared CrP alone versus placebo. We focused this review on investigating which dose of CrP would prove most effective versus placebo and therefore assessed the results according to CrP dose. However, in order to find out if CrP works in general, we also analysed the effect of all pooled CrP doses versus placebo on body weight only.\nAcross all CrP doses investigated (200 µg, 400 µg, 500 µg, 1000 µg) we noted an effect on body weight in favour of CrP of debatable clinical relevance after 12 to 16 weeks of treatment: mean difference (MD) -1.1 kg (95% CI -1.7 to -0.4); P = 0.001; 392 participants; 6 trials; low-quality evidence (GRADE)). No firm evidence and no dose gradient could be established when comparing different doses of CrP with placebo for various weight loss measures (body weight, body mass index, percentage body fat composition, change in waist circumference).\nOnly three studies provided information on adverse events (low-quality evidence (GRADE)). There were two serious adverse events and study dropouts in participants taking 1000 µg CrP, and one serious adverse event in an individual taking 400 µg CrP. Two participants receiving placebo discontinued due to adverse events; one event was reported as serious. No study reported on all-cause mortality, morbidity, health-related quality of life or socioeconomic effects.\nAuthors' conclusions\nWe found no current, reliable evidence to inform firm decisions about the efficacy and safety of CrP supplements in overweight or obese adults.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of evidence was considered low and we have inadequate information from which to draw conclusions about the efficacy and safety of chromium picolinate supplementation in overweight or obese adults."
}
]
|
query-laysum | 2317 | [
{
"role": "user",
"content": "Abstract: Background\nChildren with severe global developmental delay (SGDD) have significant intellectual disability and severe motor impairment; they are extremely limited in their functional movement and are dependent upon others for all activities of daily living. SGDD does not directly cause lung dysfunction, but the combination of immobility, weakness, skeletal deformity and parenchymal damage from aspiration can lead to significant prevalence of respiratory illness. Respiratory pathology is a significant cause of morbidity and mortality for children with SGDD; it can result in frequent hospital admissions and impacts upon quality of life. Although many treatment approaches are available, there currently exists no comprehensive review of the literature to inform best practice. A broad range of treatment options exist; to focus the scope of this review and allow in-depth analysis, we have excluded pharmaceutical interventions.\nObjectives\nTo assess the effects of non-pharmaceutical treatment modalities for the management of respiratory morbidity in children with severe global developmental delay.\nSearch methods\nWe conducted comprehensive searches of the following databases from inception to November 2013: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the Allied and Complementary Medicine Database (AMED) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL). We searched the Web of Science and clinical trials registries for grey literature and for planned, ongoing and unpublished trials. We checked the reference lists of all primary included studies for additional relevant references.\nWe updated searches in November 2019 and July 2020 and added six studies to awaiting classification, but these results have not been fully incorporated into the review.\nSelection criteria\nRandomised controlled trials, controlled trials and cohort studies of children up to 18 years of age with a diagnosis of severe neurological impairment and respiratory morbidity were included. Studies of airways clearance techniques, suction, assisted coughing, non-invasive ventilation, tracheostomy and postural management were eligible for inclusion.\nData collection and analysis\nWe used standard methodological procedures as expected by The Cochrane Collaboration. As the result of heterogeneity, we could not perform meta-analysis. We have therefore presented our results using a narrative approach.\nMain results\nFifteen studies were included in the review. Studies included children with a range of severe neurological impairments in differing settings, for example, home and critical care. Several different treatment modalities were assessed, and a wide range of outcome measures were used. Most studies used a non-randomised design and included small sample groups. Only four randomised controlled trials were identified. Non-randomised design, lack of information about how participants were selected and who completed outcome measures and incomplete reporting led to high or unclear risk of bias in many studies. Results from low-quality studies suggest that use of non-invasive ventilation, mechanically assisted coughing, high-frequency chest wall oscillation (HFCWO), positive expiratory pressure and supportive seating may confer potential benefits. No serious adverse effects were reported for ventilatory support or airway clearance interventions other than one incident in a clinically unstable child following mechanically assisted coughing. 
Night-time positioning equipment and spinal bracing were shown to have a potentially negative effect for some participants. However, these findings must be considered as tentative and require testing in future randomised trials.\nAuthors' conclusions\nThis review found no high-quality evidence for any single intervention for the management of respiratory morbidity in children with severe global developmental delay. Our search yielded data on a wide range of interventions of interest. Significant differences in study design and in outcome measures precluded the possibility of meta-analysis. No conclusions on efficacy or safety of interventions for respiratory morbidity in children with severe global developmental delay can be made based upon the findings of this review.\nA co-ordinated approach to future research is vital to ensure that high-quality evidence becomes available to guide treatment for this vulnerable patient group.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The results showed that several different treatments provided potential benefits, and for most interventions no serious adverse effects were reported. However, the quality of the studies was not good enough to inspire confidence in the findings. Night-time positioning equipment and spinal bracing were shown to have a potentially negative effect in some participants. Although some studies looked at the same type of treatment, they used it in different ways or used different measures to assess effectiveness, so we could not put the results together."
}
]
|
query-laysum | 2318 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of our previous 2013 review. Several recent trials and systematic reviews of the impact of exercise on people with dementia are reporting promising findings.\nObjectives\nPrimary objective\nDo exercise programs for older people with dementia improve their cognition, activities of daily living (ADLs), neuropsychiatric symptoms, depression, and mortality?\nSecondary objectives\nDo exercise programs for older people with dementia have an indirect impact on family caregivers’ burden, quality of life, and mortality?\nDo exercise programs for older people with dementia reduce the use of healthcare services (e.g. visits to the emergency department) by participants and their family caregivers?\nSearch methods\nWe identified trials for inclusion in the review by searching ALOIS (www.medicine.ox.ac.uk/alois), the Cochrane Dementia and Cognitive Improvement Group’s Specialised Register, on 4 September 2011, on 13 August 2012, and again on 3 October 2013.\nSelection criteria\nIn this review, we included randomized controlled trials in which older people, diagnosed with dementia, were allocated either to exercise programs or to control groups (usual care or social contact/activities) with the aim of improving cognition, ADLs, neuropsychiatric symptoms, depression, and mortality. Secondary outcomes related to the family caregiver(s) and included caregiver burden, quality of life, mortality, and use of healthcare services.\nData collection and analysis\nIndependently, at least two authors assessed the retrieved articles for inclusion, assessed methodological quality, and extracted data. We analysed data for summary effects. We calculated mean differences or standardized mean difference (SMD) for continuous data, and synthesized data for each outcome using a fixed-effect model, unless there was substantial heterogeneity between studies, when we used a random-effects model. We planned to explore heterogeneity in relation to severity and type of dementia, and type, frequency, and duration of exercise program. We also evaluated adverse events.\nMain results\nSeventeen trials with 1067 participants met the inclusion criteria. However, the required data from three included trials and some of the data from a fourth trial were not published and not made available. The included trials were highly heterogeneous in terms of subtype and severity of participants' dementia, and type, duration, and frequency of exercise. Only two trials included participants living at home.\nOur meta-analysis revealed that there was no clear evidence of benefit from exercise on cognitive functioning. The estimated standardized mean difference between exercise and control groups was 0.43 (95% CI -0.05 to 0.92, P value 0.08; 9 studies, 409 participants). There was very substantial heterogeneity in this analysis (I² value 80%), most of which we were unable to explain, and we rated the quality of this evidence as very low. We found a benefit of exercise programs on the ability of people with dementia to perform ADLs in six trials with 289 participants. The estimated standardized mean difference between exercise and control groups was 0.68 (95% CI 0.08 to 1.27, P value 0.02). However, again we observed considerable unexplained heterogeneity (I² value 77%) in this meta-analysis, and we rated the quality of this evidence as very low. 
This means that there is a need for caution in interpreting these findings.\nIn further analyses, in one trial we found that the burden experienced by informal caregivers providing care in the home may be reduced when they supervise the participation of the family member with dementia in an exercise program. The mean difference between exercise and control groups was -15.30 (95% CI -24.73 to -5.87; 1 trial, 40 participants; P value 0.001). There was no apparent risk of bias in this study. In addition, there was no clear evidence of benefit from exercise on neuropsychiatric symptoms (MD -0.60, 95% CI -4.22 to 3.02; 1 trial, 110 participants; P value .0.75), or depression (SMD 0.14, 95% CI -0.07 to 0.36; 5 trials, 341 participants; P value 0.16). We could not examine the remaining outcomes, quality of life, mortality, and healthcare costs, as either the appropriate data were not reported, or we did not retrieve trials that examined these outcomes.\nAuthors' conclusions\nThere is promising evidence that exercise programs may improve the ability to perform ADLs in people with dementia, although some caution is advised in interpreting these findings. The review revealed no evidence of benefit from exercise on cognition, neuropsychiatric symptoms, or depression. There was little or no evidence regarding the remaining outcomes of interest (i.e., mortality, caregiver burden, caregiver quality of life, caregiver mortality, and use of healthcare services).\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review evaluated the results of 17 trials (search dates August 2012 and October 2013), including 1,067 participants, that tested whether exercise programs could improve cognition (which includes such things as memory, reasoning ability and spatial awareness), activities of daily living, behaviour and psychological symptoms (such as depression, anxiety and agitation) in older people with dementia. We also looked for effects on mortality, quality of life, caregivers' experience and use of healthcare services, and for any adverse effects of exercise."
}
]
|
query-laysum | 2319 | [
{
"role": "user",
"content": "Abstract: Background\nAntibiotics can disturb gastrointestinal microbiota which may lead to reduced resistance to pathogens such as Clostridium difficile (C. difficile). Probiotics are live microbial preparations that, when administered in adequate amounts, may confer a health benefit to the host, and are a potential C. difficile prevention strategy. Recent clinical practice guidelines do not recommend probiotic prophylaxis, even though probiotics have the highest quality evidence among cited prophylactic therapies.\nObjectives\nTo assess the efficacy and safety of probiotics for preventing C.difficile-associated diarrhea (CDAD) in adults and children.\nSearch methods\nWe searched PubMed, EMBASE, CENTRAL, and the Cochrane IBD Group Specialized Register from inception to 21 March 2017. Additionally, we conducted an extensive grey literature search.\nSelection criteria\nRandomized controlled (placebo, alternative prophylaxis, or no treatment control) trials investigating probiotics (any strain, any dose) for prevention of CDAD, or C. difficile infection were considered for inclusion.\nData collection and analysis\nTwo authors (independently and in duplicate) extracted data and assessed risk of bias. The primary outcome was the incidence of CDAD. Secondary outcomes included detection of C. difficile infection in stool, adverse events, antibiotic-associated diarrhea (AAD) and length of hospital stay. Dichotomous outcomes (e.g. incidence of CDAD) were pooled using a random-effects model to calculate the risk ratio (RR) and corresponding 95% confidence interval (95% CI). We calculated the number needed to treat for an additional beneficial outcome (NNTB) where appropriate. Continuous outcomes (e.g. length of hospital stay) were pooled using a random-effects model to calculate the mean difference and corresponding 95% CI. Sensitivity analyses were conducted to explore the impact of missing data on efficacy and safety outcomes. For the sensitivity analyses, we assumed that the event rate for those participants in the control group who had missing data was the same as the event rate for those participants in the control group who were successfully followed. For the probiotic group, we calculated effects using the following assumed ratios of event rates in those with missing data in comparison to those successfully followed: 1.5:1, 2:1, 3:1, and 5:1. To explore possible explanations for heterogeneity, a priori subgroup analyses were conducted on probiotic species, dose, adult versus pediatric population, and risk of bias as well as a post hoc subgroup analysis on baseline risk of CDAD (low 0% to 2%; moderate 3% to 5%; high > 5%). The overall quality of the evidence supporting each outcome was independently assessed using the GRADE criteria.\nMain results\nThirty-nine studies (9955 participants) met the eligibility requirements for our review. Overall, 27 studies were rated as either high or unclear risk of bias. A complete case analysis (i.e. participants who completed the study) among trials investigating CDAD (31 trials, 8672 participants) suggests that probiotics reduce the risk of CDAD by 60%. The incidence of CDAD was 1.5% (70/4525) in the probiotic group compared to 4.0% (164/4147) in the placebo or no treatment control group (RR 0.40, 95% CI 0.30 to 0.52; GRADE = moderate). Twenty-two of 31 trials had missing CDAD data ranging from 2% to 45%. 
Our complete case CDAD results proved robust to sensitivity analyses of plausible and worst-plausible assumptions regarding missing outcome data and results were similar whether considering subgroups of trials in adults versus children, inpatients versus outpatients, different probiotic species, lower versus higher doses of probiotics, or studies at high versus low risk of bias. However, in a post hoc analysis, we did observe a subgroup effect with respect to baseline risk of developing CDAD. Trials with a baseline CDAD risk of 0% to 2% and 3% to 5% did not show any difference in risk but trials enrolling participants with a baseline risk of > 5% for developing CDAD demonstrated a large 70% risk reduction (interaction P value = 0.01). Among studies with a baseline risk > 5%, the incidence of CDAD in the probiotic group was 3.1% (43/1370) compared to 11.6% (126/1084) in the control group (13 trials, 2454 participants; RR 0.30, 95% CI 0.21 to 0.42; GRADE = moderate). With respect to detection of C. difficile in the stool pooled complete case results from 15 trials (1214 participants) did not show a reduction in infection rates. C. difficile infection was 15.5% (98/633) in the probiotics group compared to 17.0% (99/581) in the placebo or no treatment control group (RR 0.86, 95% CI 0.67 to 1.10; GRADE = moderate). Adverse events were assessed in 32 studies (8305 participants) and our pooled complete case analysis indicates probiotics reduce the risk of adverse events by 17% (RR 0.83, 95% CI 0.71 to 0.97; GRADE = very low). In both treatment and control groups the most common adverse events included abdominal cramping, nausea, fever, soft stools, flatulence, and taste disturbance.\nAuthors' conclusions\nBased on this systematic review and meta-analysis of 31 randomized controlled trials including 8672 patients, moderate certainty evidence suggests that probiotics are effective for preventing CDAD (NNTB = 42 patients, 95% CI 32 to 58). Our post hoc subgroup analyses to explore heterogeneity indicated that probiotics are effective among trials with a CDAD baseline risk >5% (NNTB = 12; moderate certainty evidence), but not among trials with a baseline risk ≤5% (low to moderate certainty evidence). Although adverse effects were reported among 32 included trials, there were more adverse events among patients in the control groups. The short-term use of probiotics appears to be safe and effective when used along with antibiotics in patients who are not immunocompromised or severely debilitated. Despite the need for further research, hospitalized patients, particularly those at high risk of CDAD, should be informed of the potential benefits and harms of probiotics.\n\nGiven the provided abstract, please respond to the following query: \"What are probiotics?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Probiotics are live organisms (bacteria or yeast). thought to improve the balance of organisms that populate the gut, counteracting potential disturbances to the gut microbial balance that are associated with antibiotic use, and reducing the risk of colonization by pathogenic bacteria. Probiotics can be found in dietary supplements or yogurts and are becoming increasingly available as capsules sold in health food stores and supermarkets. As 'functional food' or 'good bacteria', probiotics have been suggested as a means of both preventing and treating C. difficile-associated diarrhea (CDAD)."
}
]
|
query-laysum | 2320 | [
{
"role": "user",
"content": "Abstract: Background\nGastrostomy has been established as the standard procedure for administering long-term enteral nutrition in individuals with swallowing disturbances. Percutaneous gastrostomy is a less-invasive approach than open surgical gastrostomy, and can be accomplished via endoscopy (percutaneous endoscopic gastrostomy or PEG) or sonographic or fluoroscopic guidance (percutaneous radiological gastrostomy or PRG). Both techniques have different limitations, advantages, and contraindications. In order to determine the optimal technique for long-term nutritional supplementation many studies have been conducted to compare the outcomes of these two techniques; however, it remains unclear as to which method is superior to the other with respect to both efficacy and safety.\nObjectives\nTo compare the safety and efficacy of PEG and PRG in the treatment of individuals with swallowing disturbances.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, January 2016); MEDLINE (1946 to 22 January 2016); EMBASE (1980 to 22 January 2016); the reference lists of identified articles; databases of ongoing trials, including the Chinese Cochrane Centre Controlled Trials Register; and PubMed. We applied no language restrictions.\nSelection criteria\nRandomised controlled trials (RCTs) comparing PEG with PRG in individuals with swallowing disturbances, regardless of the underlying disease.\nData collection and analysis\nTwo authors independently evaluated the search results and assessed the quality of the studies. Data analyses could not be performed as no RCTs were identified for inclusion in this review.\nMain results\nWe identified no RCTs comparing PEG and PRG for percutaneous gastrostomy in individuals with swallowing disturbances. The large body of evidence in this field comes from retrospective and non-randomised controlled studies and case series. Based on this evidence, both PEG and PRG can be safely performed in selected individuals, although both are associated with major and minor complications. A definitive RCT has yet to be conducted to identify the preferred percutaneous gastrostomy technique.\nAuthors' conclusions\nBoth PEG and PRG are effective for long-term enteral nutritional support in selected individuals, though current evidence is insufficient to recommend one technique over the other. Choice of technique should be based on indications and contraindications, operator experience and the facilities available. Large-scale RCTs are required to compare the two techniques and to determine the optimal approach for percutaneous gastrostomy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This systematic review was conducted to compare two different methods for placing a feeding tube to the stomach via an opening in the skin (known as percutaneous gastrostomy) in order to provide food to an individual with swallowing difficulties; the aim was to find the most effective and safe approach."
}
]
|
query-laysum | 2321 | [
{
"role": "user",
"content": "Abstract: Background\nMagnesium sulphate is the drug of choice for the prevention and treatment of women with eclampsia. Regimens for administration of this drug have evolved over the years, but there is no clarity on the comparative benefits or harm of alternative regimens. This is an update of a review first published in 2010.\nObjectives\nTo assess if one magnesium sulphate regimen is better than another when used for the care of women with pre-eclampsia or eclampsia, or both, to reduce the risk of severe morbidity and mortality for the woman and her baby.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (29 April 2022), and reference lists of retrieved studies.\nSelection criteria\nWe included randomised trials and cluster-randomised trials comparing different regimens for administration of magnesium sulphate used in women with pre-eclampsia or eclampsia, or both. Comparisons included different dose regimens, intramuscular versus intravenous route for maintenance therapy, and different durations of therapy. We excluded studies with quasi-random or cross-over designs. We included abstracts of conference proceedings if compliant with the trustworthiness assessment.\nData collection and analysis\nFor this update, two review authors assessed trials for inclusion, performed risk of bias assessment, and extracted data. We checked data for accuracy. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nFor this update, a total of 16 trials (3020 women) met our inclusion criteria: four trials (409 women) compared regimens for women with eclampsia, and 12 trials (2611 women) compared regimens for women with pre-eclampsia. Most of the included trials had small sample sizes and were conducted in low- and middle-income countries. Eleven trials reported adequate randomisation and allocation concealment. Blinding of participants and clinicians was not possible in most trials. The included studies were for the most part at low risk of attrition and reporting bias.\nTreatment of women with eclampsia (four comparisons)\nOne trial compared a loading dose-alone regimen with a loading dose plus maintenance dose regimen (80 women). It is uncertain whether either regimen has an effect on the risk of recurrence of convulsions or maternal death (very low-certainty evidence).\nOne trial compared a lower-dose regimen with standard-dose regimen over 24 hours (72 women). It is uncertain whether either regimen has an effect on the risk of recurrence of convulsion, severe morbidity, perinatal death, or maternal death (very low-certainty evidence).\nOne trial (137 women) compared intravenous (IV) versus standard intramuscular (IM) maintenance regimen. It is uncertain whether either route has an effect on recurrence of convulsions, death of the baby before discharge (stillbirth and neonatal death), or maternal death (very low-certainty evidence).\nOne trial (120 women) compared a short maintenance regimen with a standard (24 hours after birth) maintenance regimen. It is uncertain whether the duration of the maintenance regimen has an effect on recurrence of convulsions, severe morbidity, or side effects such as nausea and respiratory failure. 
A short maintenance regimen may reduce the risk of flushing when compared to a standard 24 hours maintenance regimen (risk ratio (RR) 0.27, 95% confidence interval (CI) 0.08 to 0.93; 1 trial, 120 women; low-certainty evidence).\nMany of our prespecified critical outcomes were not reported in the included trials.\nPrevention of eclampsia for women with pre-eclampsia (five comparisons)\nTwo trials (462 women) compared loading dose alone with loading dose plus maintenance therapy. Low-certainty evidence suggests an uncertain effect with either regimen on the risk of eclampsia (RR 2.00, 95% CI 0.61 to 6.54; 2 trials, 462 women) or perinatal death (RR 0.50, 95% CI 0.19 to 1.36; 2 trials, 462 women).\nOne small trial (17 women) compared an IV versus IM maintenance regimen for 24 hours. It is uncertain whether IV or IM maintenance regimen has an effect on eclampsia or stillbirth (very low-certainty evidence).\nFour trials (1713 women) compared short postpartum maintenance regimens with continuing for 24 hours after birth. Low-certainty evidence suggests there may be a wide range of benefit or harm between groups regarding eclampsia (RR 1.99, 95% CI 0.18 to 21.87; 4 trials, 1713 women). Low-certainty evidence suggests there may be little or no effect on severe morbidity (RR 0.96, 95% CI 0.71 to 1.29; 2 trials, 1233 women) or side effects such as respiratory depression (RR 0.80, 95% CI 0.25 to 2.61; 2 trials, 1424 women).\nThree trials (185 women) compared a higher-dose maintenance regimen versus a lower-dose maintenance regimen. It is uncertain whether either regimen has an effect on eclampsia (very low-certainty evidence). Low-certainty evidence suggests that a higher-dose maintenance regimen has little or no effect on side effects when compared to a lower-dose regimen (RR 0.79, 95% CI 0.61 to 1.01; 1 trial 62 women).\nOne trial (200 women) compared a maintenance regimen by continuous infusion versus a serial IV bolus regimen. It is uncertain whether the duration of the maintenance regimen has an effect on eclampsia, side effects, perinatal death, maternal death, or other neonatal morbidity (very low-certainty evidence).\nMany of our prespecified critical outcomes were not reported in the included trials.\nAuthors' conclusions\nDespite the number of trials evaluating various magnesium sulphate regimens for eclampsia prophylaxis and treatment, there is still no compelling evidence that one particular regimen is more effective than another. Well-designed randomised controlled trials are needed to answer this question.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Three main factors reduced our confidence in the evidence. Firstly, people in many studies were aware of which treatment they were getting, which could have led to bias; however, this would be very difficult to prevent in future studies due to the characteristics of the intervention. Secondly, some of the studies provided little information on how they were done. Finally, some studies were very small. The results of further research could differ from the results of this review."
}
]
|
query-laysum | 2322 | [
{
"role": "user",
"content": "Abstract: Background\nConcerns regarding the safety and availability of transfused donor blood have prompted research into a range of techniques to minimise allogeneic transfusion requirements. Cell salvage (CS) describes the recovery of blood from the surgical field, either during or after surgery, for reinfusion back to the patient.\nObjectives\nTo examine the effectiveness of CS in minimising perioperative allogeneic red blood cell transfusion and on other clinical outcomes in adults undergoing elective or non-urgent surgery.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, three other databases and two clinical trials registers for randomised controlled trials (RCTs) and systematic reviews from 2009 (date of previous search) to 19 January 2023, without restrictions on language or publication status.\nSelection criteria\nWe included RCTs assessing the use of CS compared to no CS in adults (participants aged 18 or over, or using the study's definition of adult) undergoing elective (non-urgent) surgery only.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 106 RCTs, incorporating data from 14,528 participants, reported in studies conducted in 24 countries. Results were published between 1978 and 2021. We analysed all data according to a single comparison: CS versus no CS. We separated analyses by type of surgery.\nThe certainty of the evidence varied from very low certainty to high certainty. Reasons for downgrading the certainty included imprecision (small sample sizes below the optimal information size required to detect a difference, and wide confidence intervals), inconsistency (high statistical heterogeneity), and risk of bias (high risk from domains including sequence generation, blinding, and baseline imbalances).\nAggregate analysis (all surgeries combined: primary outcome only)\nVery low-certainty evidence means we are uncertain if there is a reduction in the risk of allogeneic transfusion with CS (risk ratio (RR) 0.65, 95% confidence interval (CI) 0.59 to 0.72; 82 RCTs, 12,520 participants).\nCancer: 2 RCTs (79 participants)\nVery low-certainty evidence means we are uncertain whether there is a difference for mortality, blood loss, infection, or deep vein thrombosis (DVT). There were no analysable data reported for the remaining outcomes.\nCardiovascular (vascular): 6 RCTs (384 participants)\nVery low- to low-certainty evidence means we are uncertain whether there is a difference for most outcomes. No data were reported for major adverse cardiovascular events (MACE).\nCardiovascular (no bypass): 6 RCTs (372 participants)\nModerate-certainty evidence suggests there is probably a reduction in risk of allogeneic transfusion with CS (RR 0.82, 95% CI 0.69 to 0.97; 3 RCTs, 169 participants).\nVery low- to low-certainty evidence means we are uncertain whether there is a difference for volume transfused, blood loss, mortality, re-operation for bleeding, infection, wound complication, myocardial infarction (MI), stroke, and hospital length of stay (LOS). 
There were no analysable data reported for thrombosis, DVT, pulmonary embolism (PE), and MACE.\nCardiovascular (with bypass): 29 RCTs (2936 participants)\nLow-certainty evidence suggests there may be a reduction in the risk of allogeneic transfusion with CS, and suggests there may be no difference in risk of infection and hospital LOS.\nVery low- to moderate-certainty evidence means we are uncertain whether there is a reduction in volume transfused because of CS, or if there is any difference for mortality, blood loss, re-operation for bleeding, wound complication, thrombosis, DVT, PE, MACE, and MI, and probably no difference in risk of stroke.\nObstetrics: 1 RCT (1356 participants)\nHigh-certainty evidence shows there is no difference between groups for mean volume of allogeneic blood transfused (mean difference (MD) -0.02 units, 95% CI -0.08 to 0.04; 1 RCT, 1349 participants).\nLow-certainty evidence suggests there may be no difference for risk of allogeneic transfusion. There were no analysable data reported for the remaining outcomes.\nOrthopaedic (hip only): 17 RCTs (2055 participants)\nVery low-certainty evidence means we are uncertain if CS reduces the risk of allogeneic transfusion, and the volume transfused, or if there is any difference between groups for mortality, blood loss, re-operation for bleeding, infection, wound complication, prosthetic joint infection (PJI), thrombosis, DVT, PE, stroke, and hospital LOS. There were no analysable data reported for MACE and MI.\nOrthopaedic (knee only): 26 RCTs (2568 participants)\nVery low- to low-certainty evidence means we are uncertain if CS reduces the risk of allogeneic transfusion, and the volume transfused, and whether there is a difference for blood loss, re-operation for bleeding, infection, wound complication, PJI, DVT, PE, MI, MACE, stroke, and hospital LOS. There were no analysable data reported for mortality and thrombosis.\nOrthopaedic (spine only): 6 RCTs (404 participants)\nModerate-certainty evidence suggests there is probably a reduction in the need for allogeneic transfusion with CS (RR 0.44, 95% CI 0.31 to 0.63; 3 RCTs, 194 participants).\nVery low- to moderate-certainty evidence suggests there may be no difference for volume transfused, blood loss, infection, wound complication, and PE. There were no analysable data reported for mortality, re-operation for bleeding, PJI, thrombosis, DVT, MACE, MI, stroke, and hospital LOS.\nOrthopaedic (mixed): 14 RCTs (4374 participants)\nVery low- to low-certainty evidence means we are uncertain if there is a reduction in the need for allogeneic transfusion with CS, or if there is any difference between groups for volume transfused, mortality, blood loss, infection, wound complication, PJI, thrombosis, DVT, MI, and hospital LOS. There were no analysable data reported for re-operation for bleeding, MACE, and stroke.\nAuthors' conclusions\nIn some types of elective surgery, cell salvage may reduce the need for and volume of allogeneic transfusion, alongside evidence of no difference in adverse events, when compared to no cell salvage. Further research is required to establish why other surgeries show no benefit from CS, through further analysis of the current evidence. More large RCTs in under-reported specialities are needed to expand the evidence base for exploring the impact of CS.\n\nGiven the provided abstract, please respond to the following query: \"What is 'cell salvage' and why is it used?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Some people who have surgery require blood transfusions to compensate for the blood lost during the procedure. 'Blood transfusion' is a routine medical procedure where someone receives blood through a thin tube inserted into a vein, usually in the arm. Often the blood used for the transfusion has been donated by a volunteer. Blood transfusions can save lives, but can also increase the risk of complications from surgery and should be avoided where possible. Hospitals have looked for ways to reduce the need for donor blood by (1) reducing how much blood is lost in the first place, and (2) returning the blood lost back to the patient using 'cell salvage'.\n'Cell salvage' or 'autotransfusion' involves the collection of a patient's own blood from surgical sites which can be transfused back into the same person during or after surgery, as required. This is blood that would otherwise have been discarded."
}
]
|
query-laysum | 2323 | [
{
"role": "user",
"content": "Abstract: Background\nThe optimal rhythm management strategy for people with non-paroxysmal (persistent or long-standing persistent) atrial fibrilation is currently not well defined. Antiarrhythmic drugs have been the mainstay of therapy. But recently, in people who have not responded to antiarrhythmic drugs, the use of ablation (catheter and surgical) has emerged as an alternative to maintain sinus rhythm to avoid long-term atrial fibrillation complications. However, evidence from randomised trials about the efficacy and safety of ablation in non-paroxysmal atrial fibrillation is limited.\nObjectives\nTo determine the efficacy and safety of ablation (catheter and surgical) in people with non-paroxysmal (persistent or long-standing persistent) atrial fibrillation compared to antiarrhythmic drugs.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE Ovid, Embase Ovid, conference abstracts, clinical trial registries, and Health Technology Assessment Database. We searched these databases from their inception to 1 April 2016. We used no language restrictions.\nSelection criteria\nWe included randomised trials evaluating the effect of radiofrequency catheter ablation (RFCA) or surgical ablation compared with antiarrhythmic drugs in adults with non-paroxysmal atrial fibrillation, regardless of any concomitant underlying heart disease, with at least 12 months of follow-up.\nData collection and analysis\nTwo review authors independently selected studies and extracted data. We evaluated risk of bias using the Cochrane 'Risk of bias' tool. We calculated risk ratios (RRs) for dichotomous data with 95% confidence intervals (CIs) a using fixed-effect model when heterogeneity was low (I² <= 40%) and a random-effects model when heterogeneity was moderate or substantial (I² > 40%). Using the GRADE approach, we evaluated the quality of the evidence and used the GRADE profiler (GRADEpro) to import data from Review Manager 5 to create 'Summary of findings' tables.\nMain results\nWe included three randomised trials with 261 participants (mean age: 60 years) comparing RFCA (159 participants) to antiarrhythmic drugs (102) for non-paroxysmal atrial fibrillation. We generally assessed the included studies as having low or unclear risk of bias across multiple domains, with reported outcomes generally lacking precision due to low event rates. Evidence showed that RFCA was superior to antiarrhythmic drugs in achieving freedom from atrial arrhythmias (RR 1.84, 95% CI 1.17 to 2.88; 3 studies, 261 participants; low-quality evidence), reducing the need for cardioversion (RR 0.62, 95% CI 0.47 to 0.82; 3 studies, 261 participants; moderate-quality evidence), and reducing cardiac-related hospitalisation (RR 0.27, 95% CI 0.10 to 0.72; 2 studies, 216 participants; low-quality evidence) at 12 months follow-up. There was substantial uncertainty surrounding the effect of RFCA regarding significant bradycardia (or need for a pacemaker) (RR 0.20, 95% CI 0.02 to 1.63; 3 studies, 261 participants; low-quality evidence), periprocedural complications, and other safety outcomes (RR 0.94, 95% CI 0.16 to 5.68; 3 studies, 261 participants; very low-quality evidence).\nAuthors' conclusions\nIn people with non-paroxysmal atrial fibrillation, evidence suggests a superiority of RFCA to antiarrhythmic drugs in achieving freedom from atrial arrhythmias, reducing the need for cardioversion, and reducing cardiac-related hospitalisations. 
There was uncertainty surrounding the effect of RFCA with significant bradycardia (or need for a pacemaker), periprocedural complications, and other safety outcomes. Evidence should be interpreted with caution, as event rates were low and quality of evidence ranged from moderate to very low.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When compared to participants receiving heart rhythm drugs, those participants receiving catheter ablation were more likely to be free from atrial fibrillation, had reduced risk of being hospitalised due to cardiac causes, and had a reduced risk of needing cardioversion after 12 months. There was uncertainty surrounding the effect of catheter ablation with significant bradycardia (or need for a pacemaker), periprocedural complications, and other safety outcomes."
}
]
|
query-laysum | 2324 | [
{
"role": "user",
"content": "Abstract: Background\nEarly accurate detection of all skin cancer types is essential to guide appropriate management, reduce morbidity and improve survival. Basal cell carcinoma (BCC) is usually localised to the skin but has potential to infiltrate and damage surrounding tissue, while cutaneous squamous cell carcinoma (cSCC) and melanoma have a much higher potential to metastasise and ultimately lead to death. Exfoliative cytology is a non-invasive test that uses the Tzanck smear technique to identify disease by examining the structure of cells obtained from scraped samples. This simple procedure is a less invasive diagnostic test than a skin biopsy, and for BCC it has the potential to provide an immediate diagnosis that avoids an additional clinic visit to receive skin biopsy results. This may benefit patients scheduled for either Mohs micrographic surgery or non-surgical treatments such as radiotherapy. A cytology scrape can never give the same information as a skin biopsy, however, so it is important to better understand in which skin cancer situations it may be helpful.\nObjectives\nTo determine the diagnostic accuracy of exfoliative cytology for detecting basal cell carcinoma (BCC) in adults, and to compare its accuracy with that of standard diagnostic practice (visual inspection with or without dermoscopy). Secondary objectives were: to determine the diagnostic accuracy of exfoliative cytology for detecting cSCC, invasive melanoma and atypical intraepidermal melanocytic variants, and any other skin cancer; and for each of these secondary conditions to compare the accuracy of exfoliative cytology with visual inspection with or without dermoscopy in direct test comparisons; and to determine the effect of observer experience.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We also studied the reference lists of published systematic review articles.\nSelection criteria\nStudies evaluating exfoliative cytology in adults with lesions suspicious for BCC, cSCC or melanoma, compared with a reference standard of histological confirmation.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). Where possible we estimated summary sensitivities and specificities using the bivariate hierarchical model.\nMain results\nWe synthesised the results of nine studies contributing a total of 1655 lesions to our analysis, including 1120 BCCs (14 datasets), 41 cSCCs (amongst 401 lesions in 2 datasets), and 10 melanomas (amongst 200 lesions in 1 dataset). Three of these datasets (one each for BCC, melanoma and any malignant condition) were derived from one study that also performed a direct comparison with dermoscopy. Studies were of moderate to poor quality, providing inadequate descriptions of participant selection, thresholds used to make cytological and histological diagnoses, and blinding. Reporting of participants' prior referral pathways was particularly poor, as were descriptions of the cytodiagnostic criteria used to make diagnoses. 
No studies evaluated the use of exfoliative cytology as a primary diagnostic test for detecting BCC or other skin cancers in lesions suspicious for skin cancer. Pooled data from seven studies using standard cytomorphological criteria (but various stain methods) to detect BCC in participants with a high clinical suspicion of BCC estimated the sensitivity and specificity of exfoliative cytology as 97.5% (95% CI 94.5% to 98.9%) and 90.1% (95% CI 81.1% to 95.1%), respectively. When applied to a hypothetical population of 1000 clinically suspected BCC lesions with a median observed BCC prevalence of 86%, exfoliative cytology would miss 21 BCCs and would lead to 14 false positive diagnoses of BCC. No false positive cases were histologically confirmed to be melanoma. Insufficient data are available to make summary statements regarding the accuracy of exfoliative cytology to detect melanoma or cSCC, or its accuracy compared to dermoscopy.\nAuthors' conclusions\nThe utility of exfoliative cytology for the primary diagnosis of skin cancer is unknown, as all included studies focused on the use of this technique for confirming strongly suspected clinical diagnoses. For the confirmation of BCC in lesions with a high clinical suspicion, there is evidence of high sensitivity and specificity. Since decisions to treat low-risk BCCs are unlikely in practice to require diagnostic confirmation given that clinical suspicion is already high, exfoliative cytology might be most useful for cases of BCC where the treatments being contemplated require a tissue diagnosis (e.g. radiotherapy). The small number of included studies, poor reporting and varying methodological quality prevent us from drawing strong conclusions to guide clinical practice. Despite insufficient data on the use of cytology for cSCC or melanoma, it is unlikely that cytology would be useful in these scenarios since preservation of the architecture of the whole lesion that would be available from a biopsy provides crucial diagnostic information. Given the paucity of good quality data, appropriately designed prospective comparative studies may be required to evaluate both the diagnostic value of exfoliative cytology by comparison to dermoscopy, and its confirmatory value in adequately reported populations with a high probability of BCC scheduled for further treatment requiring a tissue diagnosis.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Exfoliative cytology means scraping the surface of a possible skin cancer with a knife and then spreading a small layer of the scrape onto a glass slide so that the cells in the scrape can be stained and looked at under a microscope. It is less invasive than skin biopsy and quick to perform, with results available immediately. This could save patients an additional clinic visit to receive skin biopsy results."
}
]
|
query-laysum | 2325 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with supraventricular tachycardia (SVT) frequently are symptomatic and present to the emergency department for treatment. Although vagal manoeuvres may terminate SVT, they often fail, and subsequently adenosine or calcium channel antagonists (CCAs) are administered. Both are known to be effective, but both have a significant side effect profile. This is an update of a Cochrane review previously published in 2006.\nObjectives\nTo review all randomised controlled trials (RCTs) that compare effects of adenosine versus CCAs in terminating SVT.\nSearch methods\nWe identified studies by searching CENTRAL, MEDLINE, Embase, and two trial registers in July 2017. We checked bibliographies of identified studies and applied no language restrictions.\nSelection criteria\nWe planned to include all RCTs that compare adenosine versus a CCA for patients of any age presenting with SVT.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane. Two review authors independently checked results of searches to identify relevant studies and resolved differences by discussion with a third review author. At least two review authors independently assessed each included study and extracted study data. We entered extracted data into Review Manager 5. Primary outcomes were rate of reversion to sinus rhythm and major adverse effects of adenosine and CCAs. Secondary outcomes were rate of recurrence, time to reversion, and minor adverse outcomes. We measured outcomes by calculating odds ratios (ORs) and assessed the quality of primary outcomes using the GRADE approach through the GRADEproGDT website.\nMain results\nWe identified two new studies for inclusion in the review update; the review now includes seven trials with 622 participants who presented to an emergency department with SVT. All included studies were RCTs, but only three described the randomisation process, and none had blinded participants, personnel, or outcome assessors to the intervention given. Moderate-quality evidence shows no differences in the number of people reverting to sinus rhythm who were treated with adenosine or CCA (89.7% vs 92.9%; OR 1.51, 95% confidence interval (CI) 0.85 to 2.68; participants = 622; studies = 7; I2 = 36%). Low-quality evidence suggests no appreciable differences in major adverse event rates between CCAs and adenosine. Researchers reported only one case of hypotension in the CCA group and none in the adenosine group (0.66% vs 0%; OR 3.09, 95% CI 0.12 to 76.71; participants = 306; studies = 3; I2 = 0%). Included trials did not report length of stay in hospital nor patient satisfaction.\nAuthors' conclusions\nModerate-quality evidence shows no differences in effects of adenosine and calcium channel antagonists for treatment of SVT on reverting to sinus rhythm, and low-quality evidence suggests no appreciable differences in the incidence of hypotension. A study comparing patient experiences and prospectively studied adverse events would provide evidence on which treatment is preferable for management of SVT.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review compares effectiveness and side effects of adenosine and CCAs in terminating SVT episodes. We included in the review seven trials involving 622 patients. Evidence is current to July 2017."
}
]
|