dataset | id | messages |
---|---|---|
query-laysum | 7983 | [
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a leading cause of blindness worldwide. It results in a progressive loss of peripheral vision and, in late stages, loss of central vision leading to blindness. Early treatment of glaucoma aims to prevent or delay vision loss. Elevated intraocular pressure (IOP) is the main causal modifiable risk factor for glaucoma. Aqueous outflow obstruction is the main cause of IOP elevation, which can be mitigated either by increasing outflow or reducing aqueous humor production. Cyclodestructive procedures use various methods to target and destroy the ciliary body epithelium, the site of aqueous humor production, thereby lowering IOP. The most common approach is laser cyclophotocoagulation.\nObjectives\nTo assess the effectiveness and safety of cyclodestructive procedures for the management of non-refractory glaucoma (i.e. glaucoma in an eye that has not undergone incisional glaucoma surgery). We also aimed to compare the effect of different routes of administration, laser delivery instruments, and parameters of cyclophotocoagulation with respect to IOP control, visual acuity, pain control, and adverse events.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 8); Ovid MEDLINE; Embase.com; LILACS; the metaRegister of Controlled Trials (mRCT) and ClinicalTrials.gov. The date of the search was 7 August 2017. We also searched the reference lists of reports from included studies.\nSelection criteria\nWe included randomized controlled trials of participants who had undergone cyclodestruction as a primary treatment for glaucoma. We included only head-to-head trials that had compared cyclophotocoagulation to other procedural interventions, or compared cyclophotocoagulation using different types of lasers, delivery methods, parameters, or a combination of these factors.\nData collection and analysis\nTwo review authors independently screened search results, assessed risks of bias, extracted data, and graded the certainty of the evidence in accordance with Cochrane standards.\nMain results\nWe included one trial (92 eyes of 92 participants) that evaluated the efficacy of diode transscleral cyclophotocoagulation (TSCPC) as primary surgical therapy. We identified no other eligible ongoing or completed trial. The included trial compared low-energy versus high-energy TSCPC in eyes with primary open-angle glaucoma. The trial was conducted in Ghana and had a mean follow-up period of 13.2 months post-treatment. In this trial, low-energy TSCPC was defined as 45.0 J delivered, high-energy as 65.5 J delivered; it is worth noting that other trials have defined high- and low-energy TSCPC differently. We assessed this trial to have had low risk of selection bias and reporting bias, unclear risk of performance bias, and high risk of detection bias and attrition bias. Trial authors excluded 13 participants with missing follow-up data; the analyses therefore included 40 (85%) of 47 participants in the low-energy group and 39 (87%) of 45 participants in the high-energy group.\nControl of IOP, defined as a decrease in IOP by 20% from baseline value, was achieved in 47% of eyes, at similar rates in the low-energy group and the high-energy groups; the small study size creates uncertainty about the significance of the difference, if any, between energy settings (risk ratio (RR) 1.03, 95% confidence interval (CI) 0.64 to 1.65; 79 participants; low-certainty evidence). 
The difference in effect between energy settings based on mean decrease in IOP, if any exists, also was uncertain (mean difference (MD) -0.50 mmHg, 95% CI -5.79 to 4.79; 79 participants; low-certainty evidence).\nDecreased vision was defined as the proportion of participants with a decrease of 2 or more lines on the Snellen chart or one or more categories of visual acuity when unable to read the eye chart. Twenty-three percent of eyes had a decrease in vision. The size of any difference between the low-energy group and the high-energy group was uncertain (RR 1.22, 95% CI 0.54 to 2.76; 79 participants; low-certainty evidence). Data were not available for mean visual acuity and proportion of participants with vision change defined as greater than 1 line on the Snellen chart.\nThe difference in the mean number of glaucoma medications used after cyclophotocoagulation was similar when comparing treatment groups (MD 0.10, 95% CI -0.43 to 0.63; 79 participants; moderate-certainty evidence). Twenty percent of eyes were retreated; the estimated effect of energy settings on the need for retreatment was inconclusive (RR 0.76, 95% CI 0.31 to 1.84; 79 participants; low-certainty evidence). No data for visual field, cost effectiveness, or quality-of-life outcomes were reported by the trial investigators.\nAdverse events were reported for the total study population, rather than by treatment group. The trial authors stated that most participants reported mild to moderate pain after the procedure, and many had transient conjunctival burns (percentages not reported). Severe iritis occurred in two eyes and hyphema occurred in three eyes. No instances of hypotony or phthisis bulbi were reported. The only adverse outcome that was reported by the treatment group was atonic pupil (RR 0.89 in the low-energy group, 95% CI 0.47 to 1.68; 92 participants; low-certainty evidence).\nAuthors' conclusions\nThere is insufficient evidence to evaluate the relative effectiveness and safety of cyclodestructive procedures for the primary procedural management of non-refractory glaucoma. Results from the one included trial did not compare cyclophotocoagulation to other procedural interventions and yielded uncertainty about any difference in outcomes when comparing low-energy versus high-energy diode TSCPC. Overall, the effect of laser treatment on IOP control was modest and the number of eyes experiencing vision loss was limited. More research is needed specific to the management of non-refractory glaucoma.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Glaucoma is a progressive disease of the optic nerve causing loss of vision. It is a common cause of blindness worldwide. When treated early, vision loss may be delayed or prevented.\nIntraocular pressure (IOP) is the main treatable risk factor for glaucoma. The ciliary body epithelium produces fluid that builds up pressure in the eye. It is thought that procedures that destroy the ciliary body epithelium, known as cyclodestructive procedures, may reduce IOP as a treatment for glaucoma. Different methods of cyclodestructive procedures are available; the most common is laser. The purpose of this review was to assess laser treatments that destroy the ciliary body epithelium. The review focused on the effectiveness and safety of the included procedures by assessing IOP control, vision, pain control, and side effects."
}
] |
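The abstract in the row above defines "control of IOP" as a decrease in intraocular pressure of at least 20% from the baseline value. As a minimal sketch of how such a per-eye responder criterion is applied, assuming nothing beyond that definition (the function name and the example pressures are hypothetical, not trial data):

```python
def iop_controlled(baseline_mmhg: float, follow_up_mmhg: float,
                   threshold: float = 0.20) -> bool:
    """True if IOP fell by at least `threshold` (here 20%) from baseline."""
    if baseline_mmhg <= 0:
        raise ValueError("baseline IOP must be positive")
    relative_drop = (baseline_mmhg - follow_up_mmhg) / baseline_mmhg
    return relative_drop >= threshold

# Hypothetical eye: 30 mmHg at baseline, 23 mmHg at follow-up -> 23.3% drop -> controlled.
print(iop_controlled(30.0, 23.0))  # True
```

The abstract's "47% of eyes" figure is the proportion of analysed eyes meeting this criterion.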
query-laysum | 7984 | [
{
"role": "user",
"content": "Abstract: Background\nNew, minimally invasive surgeries have emerged as alternatives to transurethral resection of the prostate (TURP) for the management of lower urinary tract symptoms (LUTS) in men with benign prostatic hyperplasia (BPH). Aquablation is a novel, minimally invasive, water-based therapy, combining image guidance and robotics for the removal of prostatic tissue.\nObjectives\nTo assess the effects of Aquablation for the treatment of lower urinary tract symptoms in men with benign prostatic hyperplasia.\nSearch methods\nWe performed a comprehensive search using multiple databases (the Cochrane Library, MEDLINE, Embase, Scopus, Web of Science, and LILACS), trials registries, other sources of grey literature, and conference proceedings published up to 11 February 2019, with no restrictions on the language or status of publication.\nSelection criteria\nWe included parallel-group randomised controlled trials (RCTs) and cluster-RCTs, as well as non-randomised observational prospective studies with concurrent comparison groups in which participants with BPH who underwent Aquablation.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion at each stage, and undertook data extraction and 'Risk of bias' and GRADE assessments of the certainty of the evidence. We considered review outcomes measured up to and including 12 months after randomisation as short-term and beyond 12 months as long-term.\nMain results\nWe included one RCT with 184 participants comparing Aquablation to TURP. The mean age and International Prostate Symptom Score were 65.9 years and 22.6, respectively. The mean prostate volume was 53.2 mL. We only found short-term data for all outcomes based on a single randomised trial.\nPrimary outcomes\nUp to 12 months, Aquablation likely results in a similar improvement in urologic symptom scores to TURP (mean difference (MD) −0.06, 95% confidence interval (CI) −2.51 to 2.39; participants = 174; moderate-certainty evidence). We downgraded the evidence certainty by one level due to study limitations. Aquablation may also result in similar quality of life when compared to TURP (MD 0.27, 95% CI −0.24 to 0.78; participants = 174, low-certainty evidence). We downgraded the evidence certainty by two levels due to study limitations and imprecision. Aquablation may result in little to no difference in major adverse events (risk ratio (RR) 0.84, 95% CI 0.31 to 2.26; participants = 181, very low-certainty evidence) but we are very uncertain of this finding. This would correspond to 15 fewer major adverse events per 1000 participants (95% CI 64 fewer to 116 more). We downgraded the evidence certainty by one level for study limitations and two levels for imprecision.\nSecondary outcomes\nUp to 12 months, Aquablation may result in little to no difference in retreatments (RR 1.68, 95% CI 0.18 to 15.83; participants = 181, very low-certainty evidence) but we are very uncertain of this finding. This would correspond to 10 more retreatments per 1000 participants (95% CI 13 fewer to 228 more). 
We downgraded the evidence certainty by one level due to study limitations and two levels for imprecision.\nAquablation may result in little to no difference in erectile function as measured by International Index of Erectile Function questionnaire Erectile Function domain compared to TURP (MD 2.31, 95% CI −0.63 to 5.25; participants = 64, very low-certainty evidence), and may cause slightly less ejaculatory dysfunction than TURP, as measured by Male Sexual Health Questionnaire for Ejaculatory Dysfunction (MD 2.57, 95% CI 0.60 to 4.53; participants = 121, very low-certainty evidence). However, we are very uncertain of both findings. We downgraded the evidence certainty by two levels due to study limitations and one level for imprecision for both outcomes.\nWe did not find other prospective, comparative studies comparing Aquablation to TURP or other procedures such as laser ablation, enucleation, or other minimally invasive therapies.\nAuthors' conclusions\nBased on short-term (up to 12 months) follow-up, the effect of Aquablation on urological symptoms is probably similar to that of TURP (moderate-certainty evidence). The effect on quality of life may also be similar (low-certainty evidence). We are very uncertain whether patients undergoing Aquablation are at higher or lower risk for major adverse events (very low-certainty evidence). We are very uncertain whether Aquablation may result in little to no difference in erectile function but offer a small improvement in preservation of ejaculatory function (both very low-certainty evidence). These conclusions are based on a single study of men with a prostate volume up to 80 mL in size. Longer-term data and comparisons with other modalities appear critical to a more thorough assessment of the role of Aquablation for the treatment of LUTS in men with BPH.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found only one study in which chance decided how men were treated. The study compared Aquablation to transurethral resection of the prostate. On average, men were about 66 years old. We did not find any other studies.\nWe found that Aquablation likely improves urinary symptoms similarly to transurethral resection of the prostate and may also lead to similar quality of life. Rates of unwanted serious effects may also be similar but we are very uncertain about this.\nMen who have Aquablation may have a similar risk of needing a repeat procedure as those having transurethral resection of the prostate but we are very uncertain of this finding.\nAquablation may make little to no difference to erectile function but may have fewer issues with ejaculation, but we are very uncertain of both findings.\nThese findings are based on a single study funded by the company that makes the device used for Aquablation. All data were limited to 12 months' follow-up or less and prostate size was less than or equal to 80 mL."
}
] |
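The Aquablation row above restates relative effects as absolute ones, for example "15 fewer major adverse events per 1000 participants" for a risk ratio of 0.84. A minimal sketch of that conversion follows; the control-group risk of roughly 94 events per 1000 is an illustrative assumption backed out from the quoted figures, not a number reported in the abstract.

```python
def absolute_change_per_1000(risk_ratio: float, control_risk_per_1000: float) -> float:
    """Events per 1000 gained (positive) or avoided (negative) relative to the control group."""
    return control_risk_per_1000 * (risk_ratio - 1.0)

# Risk ratio point estimate and 95% CI limits quoted in the abstract; assumed control risk ~94/1000.
for rr in (0.84, 0.31, 2.26):
    print(round(absolute_change_per_1000(rr, 94)))
# Prints roughly -15, -65, 118; the abstract reports 15 fewer (64 fewer to 116 more) per 1000,
# with small differences due to rounding in the published estimates.
```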
query-laysum | 7985 | [
{
"role": "user",
"content": "Abstract: Background\nDelirium is defined as a disturbance in attention, awareness and cognition with reduced ability to direct, focus, sustain and shift attention, and reduced orientation to the environment. Critically ill patients in the intensive care unit (ICU) frequently develop ICU delirium. It can profoundly affect both them and their families because it is associated with increased mortality, longer duration of mechanical ventilation, longer hospital and ICU stay and long-term cognitive impairment. It also results in increased costs for society.\nObjectives\nTo assess existing evidence for the effect of preventive interventions on ICU delirium, in-hospital mortality, the number of delirium- and coma-free days, ventilator-free days, length of stay in the ICU and cognitive impairment.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, BIOSIS, International Web of Science, Latin American Caribbean Health Sciences Literature, CINAHL from 1980 to 11 April 2018 without any language limits. We adapted the MEDLINE search for searching the other databases. Furthermore, we checked references, searched citations and contacted study authors to identify additional studies. We also checked the following trial registries: Current Controlled Trials; ClinicalTrials.gov; and CenterWatch.com (all on 24 April 2018).\nSelection criteria\nWe included randomized controlled trials (RCTs) of adult medical or surgical ICU patients receiving any intervention for preventing ICU delirium. The control could be standard ICU care, placebo or both. We assessed the quality of evidence with GRADE.\nData collection and analysis\nWe checked titles and abstracts to exclude obviously irrelevant studies and obtained full reports on potentially relevant ones. Two review authors independently extracted data. If possible we conducted meta-analyses, otherwise we synthesized data narratively.\nMain results\nThe electronic search yielded 8746 records. We included 12 RCTs (3885 participants) comparing usual care with the following interventions: commonly used drugs (four studies); sedation regimens (four studies); physical therapy or cognitive therapy, or both (one study); environmental interventions (two studies); and preventive nursing care (one study). We found 15 ongoing studies and five studies awaiting classification. The participants were 48 to 70 years old; 48% to 74% were male; the mean acute physiology and chronic health evaluation (APACHE II) score was 14 to 28 (range 0 to 71; higher scores correspond to more severe disease and a higher risk of death). With the exception of one study, all participants were mechanically ventilated in medical or surgical ICUs or mixed. The studies were overall at low risk of bias. Six studies were at high risk of detection bias due to lack of blinding of outcome assessors. We report results for the two most commonly explored approaches to delirium prevention: pharmacologic and a non-pharmacologic intervention.\nHaloperidol versus placebo (two RCTs, 1580 participants)\nThe event rate of ICU delirium was measured in one study including 1439 participants. No difference was identified between groups, (risk ratio (RR) 1.01, 95% confidence interval (CI) 0.87 to 1.17) (moderate-quality evidence). 
Haloperidol versus placebo neither reduced or increased in-hospital mortality, (RR 0.98, 95% CI 0.80 to 1.22; 2 studies; 1580 participants (moderate-quality evidence)); the number of delirium- and coma-free days, (mean difference (MD) -0.60, 95% CI -1.37 to 0.17; 2 studies, 1580 participants (moderate-quality of evidence)); number of ventilator-free days (mean 23.8 (MD -0.30, 95% CI -0.93 to 0.33) 1 study; 1439 participants, (high-quality evidence)); length of ICU stay, (MD 0.18, 95% CI 0.60 to 0.97); 2 studies, 1580 participants; high-quality evidence). None of the studies measured cognitive impairment. In one study there were three serious adverse events in the intervention group and five in the placebo group; in the other there were five serious adverse events and three patients died, one in each group. None of the serious adverse events were judged to be related to interventions received (moderate-quality evidence).\nPhysical and cognitive therapy interventions (one study, 65 participants)\nThe study did not measure the event rate of ICU delirium. A physical and cognitive therapy intervention versus standard care neither reduced nor increased in-hospital mortality, (RR 0.94, 95% CI 0.40 to 2.20, I² = 0; 1 study, 65 participants; very low-quality evidence); the number of delirium- and coma-free days, (MD -2.8, 95% CI -10.1 to 4.6, I² = 0; 1 study, 65 participants; very low-quality evidence); the number of ventilator-free days (within the first 28/30 days) was median 27.4 (IQR 0 to 29.2) and 25 (IQR 0 to 28.9); 1 study, 65 participants; very low-quality evidence, length of ICU stay, (MD 1.23, 95% CI -0.68 to 3.14, I² = 0; 1 study, 65 participants; very low-quality evidence); cognitive impairment measured by the MMSE: Mini-Mental State Examination with higher scores indicating better function, (MD 0.97, 95% CI -0.19 to 2.13, I² = 0; 1 study, 30 participants; very low-quality evidence); or measured by the Dysexecutive questionnaire (DEX) with lower scores indicating better function (MD -8.76, 95% CI -19.06 to 1.54, I² = 0; 1 study, 30 participants; very low-quality evidence). One patient experienced acute back pain accompanied by hypotensive urgency during physical therapy.\nAuthors' conclusions\nThere is probably little or no difference between haloperidol and placebo for preventing ICU delirium but further studies are needed to increase our confidence in the findings. There is insufficient evidence to determine the effects of physical and cognitive intervention on delirium. The effects of other pharmacological interventions, sedation, environmental, and preventive nursing interventions are unclear and warrant further investigation in large multicentre studies. Five studies are awaiting classification and we identified 15 ongoing studies, evaluating pharmacological interventions, sedation regimens, physical and occupational therapy combined or separately, and environmental interventions, that may alter the conclusions of the review in future.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 12 randomized controlled trials (3885 participants) in our review. The studies included adults aged 48 to 70 years from surgical and medical ICUs. The studies compared different drugs (three studies) various approaches to sedation (five studies), physical or cognitive therapy or both (one study), noise and light reduction in the ICU (two studies), and preventive nursing care (one study). The studies had mostly small numbers of participants and did not blind the researchers who assessed effects on outcomes. We report the findings regarding the effect of the two most commonly explored approaches for preventing delirium, drug and non-drug interventions, haloperidol versus a sham drug, and early physical and cognitive therapy versus usual care."
}
] |
query-laysum | 7986 | [
{
"role": "user",
"content": "Abstract: Background\n'Blue-light filtering', or 'blue-light blocking', spectacle lenses filter ultraviolet radiation and varying portions of short-wavelength visible light from reaching the eye. Various blue-light filtering lenses are commercially available. Some claims exist that they can improve visual performance with digital device use, provide retinal protection, and promote sleep quality. We investigated clinical trial evidence for these suggested effects, and considered any potential adverse effects.\nObjectives\nTo assess the effects of blue-light filtering lenses compared with non-blue-light filtering lenses, for improving visual performance, providing macular protection, and improving sleep quality in adults.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; containing the Cochrane Eyes and Vision Trials Register; 2022, Issue 3); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and WHO ICTRP, with no date or language restrictions. We last searched the electronic databases on 22 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs), involving adult participants, where blue-light filtering spectacle lenses were compared with non-blue-light filtering spectacle lenses.\nData collection and analysis\nPrimary outcomes were the change in visual fatigue score and critical flicker-fusion frequency (CFF), as continuous outcomes, between baseline and one month of follow-up. Secondary outcomes included best-corrected visual acuity (BCVA), contrast sensitivity, discomfort glare, proportion of eyes with a pathological macular finding, colour discrimination, proportion of participants with reduced daytime alertness, serum melatonin levels, subjective sleep quality, and patient satisfaction with their visual performance. We evaluated findings related to ocular and systemic adverse effects.\nWe followed standard Cochrane methods for data extraction and assessed risk of bias using the Cochrane Risk of Bias 1 (RoB 1) tool. We used GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe included 17 RCTs, with sample sizes ranging from five to 156 participants, and intervention follow-up periods from less than one day to five weeks. About half of included trials used a parallel-arm design; the rest adopted a cross-over design. A variety of participant characteristics was represented across the studies, ranging from healthy adults to individuals with mental health and sleep disorders.\nNone of the studies had a low risk of bias in all seven Cochrane RoB 1 domains. We judged 65% of studies to have a high risk of bias due to outcome assessors not being masked (detection bias) and 59% to be at high risk of bias of performance bias as participants and personnel were not masked. Thirty-five per cent of studies were pre-registered on a trial registry. We did not perform meta-analyses for any of the outcome measures, due to lack of available quantitative data, heterogenous study populations, and differences in intervention follow-up periods.\nThere may be no difference in subjective visual fatigue scores with blue-light filtering lenses compared to non-blue-light filtering lenses, at less than one week of follow-up (low-certainty evidence). One RCT reported no difference between intervention arms (mean difference (MD) 9.76 units (indicating worse symptoms), 95% confidence interval (CI) -33.95 to 53.47; 120 participants). 
Further, two studies (46 participants, combined) that measured visual fatigue scores reported no significant difference between intervention arms.\nThere may be little to no difference in CFF with blue-light filtering lenses compared to non-blue-light filtering lenses, measured at less than one day of follow-up (low-certainty evidence). One study reported no significant difference between intervention arms (MD - 1.13 Hz lower (indicating poorer performance), 95% CI - 3.00 to 0.74; 120 participants). Another study reported a less negative change in CFF (indicating less visual fatigue) with high- compared to low-blue-light filtering and no blue-light filtering lenses.\nCompared to non-blue-light filtering lenses, there is probably little or no effect with blue-light filtering lenses on visual performance (BCVA) (MD 0.00 logMAR units, 95% CI -0.02 to 0.02; 1 study, 156 participants; moderate-certainty evidence), and unknown effects on daytime alertness (2 RCTs, 42 participants; very low-certainty evidence); uncertainty in these effects was due to lack of available data and the small number of studies reporting these outcomes. We do not know if blue-light filtering spectacle lenses are equivalent or superior to non-blue-light filtering spectacle lenses with respect to sleep quality (very low-certainty evidence). Inconsistent findings were evident across six RCTs (148 participants); three studies reported a significant improvement in sleep scores with blue-light filtering lenses compared to non-blue-light filtering lenses, and the other three studies reported no significant difference between intervention arms. We noted differences in the populations across studies and a lack of quantitative data.\nDevice-related adverse effects were not consistently reported (9 RCTs, 333 participants; low-certainty evidence). Nine studies reported on adverse events related to study interventions; three studies described the occurrence of such events. Reported adverse events related to blue-light filtering lenses were infrequent, but included increased depressive symptoms, headache, discomfort wearing the glasses, and lower mood. Adverse events associated with non-blue-light filtering lenses were occasional hyperthymia, and discomfort wearing the spectacles.\nWe were unable to determine whether blue-light filtering lenses affect contrast sensitivity, colour discrimination, discomfort glare, macular health, serum melatonin levels or overall patient visual satisfaction, compared to non-blue-light filtering lenses, as none of the studies evaluated these outcomes.\nAuthors' conclusions\nThis systematic review found that blue-light filtering spectacle lenses may not attenuate symptoms of eye strain with computer use, over a short-term follow-up period, compared to non-blue-light filtering lenses. Further, this review found no clinically meaningful difference in changes to CFF with blue-light filtering lenses compared to non-blue-light filtering lenses. Based on the current best available evidence, there is probably little or no effect of blue-light filtering lenses on BCVA compared with non-blue-light filtering lenses. Potential effects on sleep quality were also indeterminate, with included trials reporting mixed outcomes among heterogeneous study populations. There was no evidence from RCT publications relating to the outcomes of contrast sensitivity, colour discrimination, discomfort glare, macular health, serum melatonin levels, or overall patient visual satisfaction. 
Future high-quality randomised trials are required to define more clearly the effects of blue-light filtering lenses on visual performance, macular health and sleep, in adult populations.\n\nGiven the provided abstract, please respond to the following query: \"Key message\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Blue-light filtering lenses may not reduce short-term eyestrain associated with computer work, compared to non-blue-light filtering lenses. Potential harmful effects were temporary and generally mild, and mostly thought to be related to the glasses more generally rather than specifically the lenses themselves.\nThere is a need for future research to provide evidence for the effects of blue-light filtering lenses on multiple aspects of visual performance and sleep, including vision level (best corrected visual acuity), ability to detect differences in shading and patterns (contrast sensitivity), colour discrimination, reducing glare due to bright light (discomfort glare), the health of the retina at the back of the eye (macular health), sleep measures (including blood melatonin levels and sleep quality), and patient satisfaction."
}
] |
query-laysum | 7987 | [
{
"role": "user",
"content": "Abstract: Background\nShoulder replacement surgery is an established treatment for patients with end-stage glenohumeral osteoarthritis or rotator cuff tear arthropathy who have not improved with non-operative treatment. Different types of shoulder replacement are commonly used, but their relative benefits and risks compared versus one another and versus other treatments are uncertain. This expanded scope review is an update of a Cochrane Review first published in 2010.\nObjectives\nTo determine the benefits and harms of shoulder replacement surgery in adults with osteoarthritis (OA) of the shoulder, including rotator cuff tear arthropathy (RCTA).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, SportDiscus, and Web of Science up to January 2019. We also searched clinical trial registers, conference proceedings, and reference lists from previous systematic reviews and included studies.\nSelection criteria\nWe included randomised studies comparing any type of shoulder replacement surgery versus any other surgical or non-surgical treatment, no treatment, or placebo. We also included randomised studies comparing any type of shoulder replacement or technique versus another. Study participants were adults with osteoarthritis of the glenohumeral joint or rotator cuff tear arthropathy.\nWe assessed the following major outcomes: pain, function, participant-rated global assessment of treatment success, quality of life, adverse events, serious adverse events, and risk of revision or re-operation or treatment failure.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We collected trial data on benefits and harms.\nMain results\nWe included 20 studies involving 1083 participants (1105 shoulders). We found five studies comparing one type of shoulder replacement surgery to another type of shoulder replacement surgery, including three studies comparing conventional stemmed total shoulder replacement (TSR) surgery to stemmed humeral hemiarthroplasty. The remaining 15 studies compared one type of shoulder replacement to the same type of replacement performed with a technical modification or a different prosthetic component. We found no studies comparing shoulder replacement surgery to any other type of surgical treatment or to any type of non-surgical treatment. We found no studies comparing reverse total shoulder replacement surgery to any other type of treatment or to any type of replacement.\n\nTrial size varied from 16 to 161 participants. Participant mean age ranged from 63 to 81 years. 47% of participants were male. Sixteen trials reported participants with a diagnosis of osteoarthritis and intact rotator cuff tendons. Four trials reported patients with osteoarthritis and a rotator cuff tear or rotator cuff tear arthropathy.\n\nAll studies were at unclear or high risk of bias for at least two domains, and only one study was free from high risk of bias (included in the main comparison). The most common sources of bias were lack of blinding of participants and assessors, attrition, and major baseline imbalance.\n\nThree studies allowed a comparison of conventional stemmed TSR surgery versus stemmed humeral hemiarthroplasty in people with osteoarthritis. At two years, low-quality evidence from two trials (downgraded for bias and imprecision) suggested there may be a small but clinically uncertain improvement in pain and function. 
On a scale of 0 to 10 (0 is no pain), mean pain was 2.78 points after stemmed humeral hemiarthroplasty and 1.49 points lower (0.1 lower to 2.88 lower) after conventional stemmed TSR. On a scale of 0 to 100 (100 = normal function), the mean function score was 72.8 points after stemmed humeral hemiarthroplasty and 10.57 points higher (2.11 higher to 19.02 higher) after conventional stemmed TSR. There may be no difference in quality of life based on low-quality evidence, downgraded for risk of bias and imprecision. On a scale of 0 to 100 (100 = normal), mean mental quality of life was rated as 57.4 points after stemmed humeral hemiarthroplasty and 1.0 point higher (5.1 lower to 7.1 higher) after conventional stemmed TSR.\nWe are uncertain whether there is any difference in the rate of adverse events or the rate of revision, re-operation, or treatment failure based on very low-quality evidence (downgraded three levels for risk of bias and serious imprecision). The rate of any adverse event following stemmed humeral hemiarthroplasty was 286 per 1000, and following conventional stemmed TSR 143 per 1000, for an absolute difference of 14% fewer events (25% fewer to 21% more). Adverse events included fractures, dislocations, infections, and rotator cuff failure. The rate of revision, re-operation, or treatment failure was 103 per 1000, and following conventional stemmed TSR 77 per 1000, for an absolute difference of 2.6% fewer events (8% fewer to 15% more).\nParticipant-rated global assessment of treatment success was not reported.\nAuthors' conclusions\nAlthough it is an established procedure, no high-quality randomised trials have been conducted to determine whether shoulder replacement might be more effective than other treatments for osteoarthritis or rotator cuff tear arthropathy of the shoulder. We remain uncertain about which type or technique of shoulder replacement surgery is most effective in different situations. When humeral hemiarthroplasty was compared to TSR surgery for osteoarthritis, low-quality evidence led to uncertainty about whether there is a clinically important benefit for patient-reported pain or function and suggested there may be little or no difference in quality of life. Evidence is insufficient to show whether TSR is associated with greater or less risk of harm than humeral hemiarthroplasty. Available randomised studies did not provide sufficient data to reliably inform conclusions about adverse events and harm. Although reverse TSR is now the most commonly performed type of shoulder replacement, we found no studies comparing reverse TSR to any other type of treatment.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Three trials (126 participants) met our inclusion criteria for our main comparison of conventional stemmed total shoulder replacement (TSR) versus stemmed humeral hemiarthroplasty (HA) for treatment of osteoarthritis. TSR may result in less pain and better function compared to HA at two-year follow-up, but this may not be noticeable. We are very uncertain whether there are any differences in the frequency of adverse events and further operations.\nTSR resulted in 15% less pain (1% less to 29% less).\n• People who had HA rated their pain as 2.8 points (0 to 10 scale).\n• People who had TSR rated their pain as 1.29 points.\nTSR resulted in 11% better function (2% better to 19% better).\n• People who had HA rated their function as 72.8 points (0 to 100 scale).\n• People who had TSR rated their function as 83.4 points.\nTSR resulted in similar quality of life to HA (5% lower to 7% higher, 5 points lower to 7 points higher (0 to 100 scale)).\n• People who had HA rated their quality of life as 57.4 points.\n• People who had TSR rated their quality of life as 58.4 points.\nTSR resulted in a similar number of adverse events (25% fewer to 21% more) and a similar number of further operations on the same shoulder (8% fewer to 15% more) compared to HA.\n• Following HA, 286 per 1000 people experienced an adverse event and 103 per 1000 required further operations.\n• Following TSR, 143 per 1000 people experienced an adverse event and 77 per 1000 required further operations."
}
] |
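The plain-language answer in the row above re-expresses the abstract's per-1000 rates and 0-to-10 / 0-to-100 scores as percentages. A short sketch of that arithmetic, using only the figures quoted in the row:

```python
# Adverse events: 286 per 1000 after hemiarthroplasty (HA) versus 143 per 1000 after TSR.
events_ha, events_tsr = 286, 143
print((events_ha - events_tsr) / 1000 * 100)  # 14.3 -> "about 14% fewer events"

# Pain: 1.49 points lower on a 0-10 scale is about 15% of the scale.
print(1.49 / 10 * 100)  # 14.9 -> "15% less pain"

# Function: 10.57 points higher on a 0-100 scale is about 11% of the scale.
print(10.57 / 100 * 100)  # 10.57 -> "11% better function"
```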
query-laysum | 7988 | [
{
"role": "user",
"content": "Abstract: Background\nZinc deficiency is prevalent in low- and middle-income countries, and is considered a significant risk factor for morbidity, mortality, and linear growth failure. The effectiveness of preventive zinc supplementation in reducing prevalence of zinc deficiency needs to be assessed.\nObjectives\nTo assess the effects of zinc supplementation for preventing mortality and morbidity, and for promoting growth, in children aged 6 months to 12 years.\nSearch methods\nA previous version of this review was published in 2014. In this update, we searched CENTRAL, MEDLINE, Embase, five other databases, and one trials register up to February 2022, together with reference checking and contact with study authors to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs) of preventive zinc supplementation in children aged 6 months to 12 years compared with no intervention, a placebo, or a waiting list control. We excluded hospitalized children and children with chronic diseases or conditions. We excluded food fortification or intake, sprinkles, and therapeutic interventions.\nData collection and analysis\nTwo review authors screened studies, extracted data, and assessed the risk of bias. We contacted study authors for missing information and used GRADE to assess the certainty of evidence. The primary outcomes of this review were all-cause mortality; and cause-specific mortality, due to all-cause diarrhea, lower respiratory tract infection (LRTI, including pneumonia), and malaria. We also collected information on a number of secondary outcomes, such as those related to diarrhea and LRTI morbidity, growth outcomes and serum levels of micronutrients, and adverse events.\nMain results\nWe included 16 new studies in this review, resulting in a total of 96 RCTs with 219,584 eligible participants. The included studies were conducted in 34 countries; 87 of them in low- or middle-income countries. Most of the children included in this review were under five years of age. The intervention was delivered most commonly in the form of syrup as zinc sulfate, and the most common dose was between 10 mg and 15 mg daily. The median duration of follow-up was 26 weeks. 
We did not consider that the evidence for the key analyses of morbidity and mortality outcomes was affected by risk of bias.\nHigh-certainty evidence showed little to no difference in all-cause mortality with preventive zinc supplementation compared to no zinc (risk ratio (RR) 0.93, 95% confidence interval (CI) 0.84 to 1.03; 16 studies, 17 comparisons, 143,474 participants).\nModerate-certainty evidence showed that preventive zinc supplementation compared to no zinc likely results in little to no difference in mortality due to all-cause diarrhea (RR 0.95, 95% CI 0.69 to 1.31; 4 studies, 132,321 participants); but probably reduces mortality due to LRTI (RR 0.86, 95% CI 0.64 to 1.15; 3 studies, 132,063 participants) and mortality due to malaria (RR 0.90, 95% CI 0.77 to 1.06; 2 studies, 42,818 participants); however, the confidence intervals around the summary estimates for these outcomes were wide, and we could not rule out a possibility of increased risk of mortality.\nPreventive zinc supplementation likely reduces the incidence of all-cause diarrhea (RR 0.91, 95% CI 0.90 to 0.93; 39 studies, 19,468 participants; moderate-certainty evidence) but results in little to no difference in morbidity due to LRTI (RR 1.01, 95% CI 0.95 to 1.08; 19 studies, 10,555 participants; high-certainty evidence) compared to no zinc.\nThere was moderate-certainty evidence that preventive zinc supplementation likely leads to a slight increase in height (standardized mean difference (SMD) 0.12, 95% CI 0.09 to 0.14; 74 studies, 20,720 participants).\nZinc supplementation was associated with an increase in the number of participants with at least one vomiting episode (RR 1.29, 95% CI 1.14 to 1.46; 5 studies, 35,192 participants; high-certainty evidence). We report a number of other outcomes, including the effect of zinc supplementation on weight and serum markers such as zinc, hemoglobin, iron, copper, etc. We also performed a number of subgroup analyses and there was a consistent finding for a number of outcomes that co-supplementation of zinc with iron decreased the beneficial effect of zinc.\nAuthors' conclusions\nEven though we included 16 new studies in this update, the overall conclusions of the review remain unchanged. Zinc supplementation might help prevent episodes of diarrhea and improve growth slightly, particularly in children aged 6 months to 12 years of age. The benefits of preventive zinc supplementation may outweigh the harms in regions where the risk of zinc deficiency is relatively high.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 96 studies in the review, with 219,584 children. Studies took place in 34, mainly low- to middle-income countries. Most of the children in the studies were under five years old. Zinc was most commonly given as a syrup (zinc sulfate), and the most common dose was between 10 mg and 15 mg daily.\nWe found that giving children zinc supplementation might lead to a small to no reduction in the risk of death for any reason and the risk of death due to diarrhea. The risk of death due to lower respiratory tract infections or malaria may be reduced. Children given zinc experienced less disease due to diarrhea than children not given zinc; however, zinc does not seem to reduce children's risk of respiratory infection. Zinc supplementation may have a small positive effect on growth. Children who take zinc supplementation may experience vomiting as an unwanted effect."
}
] |
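The zinc row above summarises the growth effect as a standardized mean difference (SMD 0.12), i.e. a difference in means expressed in pooled standard-deviation units. The sketch below shows the simplest (Cohen's d) form of that statistic; Cochrane reviews typically apply a small-sample correction (Hedges' g), and the example numbers are made up to land near 0.12, not data from the included trials.

```python
import math

def standardized_mean_difference(mean_t: float, mean_c: float,
                                 sd_t: float, sd_c: float,
                                 n_t: int, n_c: int) -> float:
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical heights: a 0.3 cm difference with SD 2.5 cm in both groups of 100 children.
print(round(standardized_mean_difference(92.3, 92.0, 2.5, 2.5, 100, 100), 2))  # 0.12
```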
query-laysum | 7990 | [
{
"role": "user",
"content": "Abstract: Background\nKangaroo mother care (KMC), originally defined as skin-to-skin contact between a mother and her newborn, frequent and exclusive or nearly exclusive breastfeeding, and early discharge from hospital, has been proposed as an alternative to conventional neonatal care for low birthweight (LBW) infants.\nObjectives\nTo determine whether evidence is available to support the use of KMC in LBW infants as an alternative to conventional neonatal care before or after the initial period of stabilization with conventional care, and to assess beneficial and adverse effects.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group. This included searches in CENTRAL (Cochrane Central Register of Controlled Trials; 2016, Issue 6), MEDLINE, Embase, CINAHL (Cumulative Index to Nursing and Allied Health Literature), LILACS (Latin American and Caribbean Health Science Information database), and POPLINE (Population Information Online) databases (all from inception to June 30, 2016), as well as the WHO (World Health Organization) Trial Registration Data Set (up to June 30, 2016). In addition, we searched the web page of the Kangaroo Foundation, conference and symposia proceedings on KMC, and Google Scholar.\nSelection criteria\nRandomized controlled trials comparing KMC versus conventional neonatal care, or early-onset KMC versus late-onset KMC, in LBW infants.\nData collection and analysis\nData collection and analysis were performed according to the methods of the Cochrane Neonatal Review Group.\nMain results\nTwenty-one studies, including 3042 infants, fulfilled inclusion criteria. Nineteen studies evaluated KMC in LBW infants after stabilization, one evaluated KMC in LBW infants before stabilization, and one compared early-onset KMC with late-onset KMC in relatively stable LBW infants. Sixteen studies evaluated intermittent KMC, and five evaluated continuous KMC.\nKMC versus conventional neonatal care: At discharge or 40 to 41 weeks' postmenstrual age, KMC was associated with a statistically significant reduction in the risk of mortality (risk ratio [RR] 0.60, 95% confidence interval [CI] 0.39 to 0.92; eight trials, 1736 infants), nosocomial infection/sepsis (RR 0.35, 95% CI 0.22 to 0.54; five trials, 1239 infants), and hypothermia (RR 0.28, 95% CI 0.16 to 0.49; nine trials, 989 infants; moderate-quality evidence). At latest follow-up, KMC was associated with a significantly decreased risk of mortality (RR 0.67, 95% CI 0.48 to 0.95; 12 trials, 2293 infants; moderate-quality evidence) and severe infection/sepsis (RR 0.50, 95% CI 0.36 to 0.69; eight trials, 1463 infants; moderate-quality evidence). 
Moreover, KMC was found to increase weight gain (mean difference [MD] 4.1 g/d, 95% CI 2.3 to 5.9; 11 trials, 1198 infants; moderate-quality evidence), length gain (MD 0.21 cm/week, 95% CI 0.03 to 0.38; three trials, 377 infants) and head circumference gain (MD 0.14 cm/week, 95% CI 0.06 to 0.22; four trials, 495 infants) at latest follow-up, exclusive breastfeeding at discharge or 40 to 41 weeks' postmenstrual age (RR 1.16, 95% CI 1.07 to 1.25; six studies, 1453 mothers) and at one to three months' follow-up (RR 1.20, 95% CI 1.01 to 1.43; five studies, 600 mothers), any (exclusive or partial) breastfeeding at discharge or at 40 to 41 weeks' postmenstrual age (RR 1.20, 95% CI 1.07 to 1.34; 10 studies, 1696 mothers; moderate-quality evidence) and at one to three months' follow-up (RR 1.17, 95% CI 1.05 to 1.31; nine studies, 1394 mothers; low-quality evidence), and some measures of mother-infant attachment and home environment. No statistically significant differences were found between KMC infants and controls in Griffith quotients for psychomotor development at 12 months’ corrected age (low-quality evidence). Sensitivity analysis suggested that inclusion of studies with high risk of bias did not affect the general direction of findings nor the size of the treatment effect for main outcomes.\nEarly-onset KMC versus late-onset KMC in relatively stable infants: One trial compared early-onset continuous KMC (within 24 hours post birth) versus late-onset continuous KMC (after 24 hours post birth) in 73 relatively stable LBW infants. Investigators reported no significant differences between the two study groups in mortality, morbidity, severe infection, hypothermia, breastfeeding, and nutritional indicators. Early-onset KMC was associated with a statistically significant reduction in length of hospital stay (MD 0.9 days, 95% CI 0.6 to 1.2).\nAuthors' conclusions\nEvidence from this updated review supports the use of KMC in LBW infants as an alternative to conventional neonatal care, mainly in resource-limited settings. Further information is required concerning the effectiveness and safety of early-onset continuous KMC in unstabilized or relatively stabilized LBW infants, as well as long-term neurodevelopmental outcomes and costs of care.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 21 randomized controlled trials (3042 infants) for inclusion in this review by searching medical databases in June 2016."
}
] |
query-laysum | 7991 | [
{
"role": "user",
"content": "Abstract: Background\nNewborn infants are more prone to seizures than older children and adults. The neuronal injury caused by seizures in neonates often results in long-term neurodevelopmental sequelae. There are several options for anti-seizure medications (ASMs) in neonates. However, the ideal choice of first-, second- and third-line ASM is still unclear. Further, many other aspects of seizure management such as whether ASMs should be initiated for only-electrographic seizures and how long to continue the ASM once seizure control is achieved are elusive.\nObjectives\n1. To assess whether any ASM is more or less effective than an alternative ASM (both ASMs used as first-, second- or third-line treatment) in achieving seizure control and improving neurodevelopmental outcomes in neonates with seizures. We analysed EEG-confirmed seizures and clinically-diagnosed seizures separately.\n2. To assess maintenance therapy with ASM versus no maintenance therapy after achieving seizure control. We analysed EEG-confirmed seizures and clinically-diagnosed seizures separately.\n3. To assess treatment of both clinical and electrographic seizures versus treatment of clinical seizures alone in neonates.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, Epistemonikos and three databases in May 2022 and June 2023. These searches were not limited other than by study design to trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) that included neonates with EEG-confirmed or clinically diagnosed seizures and compared (1) any ASM versus an alternative ASM, (2) maintenance therapy with ASM versus no maintenance therapy, and (3) treatment of clinical or EEG seizures versus treatment of clinical seizures alone.\nData collection and analysis\nTwo review authors assessed trial eligibility, risk of bias and independently extracted data. We analysed treatment effects in individual trials and reported risk ratio (RR) for dichotomous data, and mean difference (MD) for continuous data, with respective 95% confidence interval (CI). We used GRADE to assess the certainty of evidence.\nMain results\nWe included 18 trials (1342 infants) in this review.\nPhenobarbital versus levetiracetam as first-line ASM in EEG-confirmed neonatal seizures (one trial)\nPhenobarbital is probably more effective than levetiracetam in achieving seizure control after first loading dose (RR 2.32, 95% CI 1.63 to 3.30; 106 participants; moderate-certainty evidence), and after maximal loading dose (RR 2.83, 95% CI 1.78 to 4.50; 106 participants; moderate-certainty evidence). However, we are uncertain about the effect of phenobarbital when compared to levetiracetam on mortality before discharge (RR 0.30, 95% CI 0.04 to 2.52; 106 participants; very low-certainty evidence), requirement of mechanical ventilation (RR 1.21, 95% CI 0.76 to 1.91; 106 participants; very low-certainty evidence), sedation/drowsiness (RR 1.74, 95% CI 0.68 to 4.44; 106 participants; very low-certainty evidence) and epilepsy post-discharge (RR 0.92, 95% CI 0.48 to 1.76; 106 participants; very low-certainty evidence). The trial did not report on mortality or neurodevelopmental disability at 18 to 24 months.\nPhenobarbital versus phenytoin as first-line ASM in EEG-confirmed neonatal seizures (one trial)\nWe are uncertain about the effect of phenobarbital versus phenytoin on achieving seizure control after maximal loading dose of ASM (RR 0.97, 95% CI 0.54 to 1.72; 59 participants; very low-certainty evidence). 
The trial did not report on mortality or neurodevelopmental disability at 18 to 24 months.\nMaintenance therapy with ASM versus no maintenance therapy in clinically diagnosed neonatal seizures (two trials)\nWe are uncertain about the effect of short-term maintenance therapy with ASM versus no maintenance therapy during the hospital stay (but discontinued before discharge) on the risk of repeat seizures before hospital discharge (RR 0.76, 95% CI 0.56 to 1.01; 373 participants; very low-certainty evidence). Maintenance therapy with ASM compared to no maintenance therapy may have little or no effect on mortality before discharge (RR 0.69, 95% CI 0.39 to 1.22; 373 participants; low-certainty evidence), mortality at 18 to 24 months (RR 0.94, 95% CI 0.34 to 2.61; 111 participants; low-certainty evidence), neurodevelopmental disability at 18 to 24 months (RR 0.89, 95% CI 0.13 to 6.12; 108 participants; low-certainty evidence) and epilepsy post-discharge (RR 3.18, 95% CI 0.69 to 14.72; 126 participants; low-certainty evidence).\nTreatment of both clinical and electrographic seizures versus treatment of clinical seizures alone in neonates (two trials)\nTreatment of both clinical and electrographic seizures when compared to treating clinical seizures alone may have little or no effect on seizure burden during hospitalisation (MD -1871.16, 95% CI -4525.05 to 782.73; 68 participants; low-certainty evidence), mortality before discharge (RR 0.59, 95% CI 0.28 to 1.27; 68 participants; low-certainty evidence) and epilepsy post-discharge (RR 0.75, 95% CI 0.12 to 4.73; 35 participants; low-certainty evidence). The trials did not report on mortality or neurodevelopmental disability at 18 to 24 months.\nWe report data from the most important comparisons here; readers are directed to Results and Summary of Findings tables for all comparisons.\nAuthors' conclusions\nPhenobarbital as a first-line ASM is probably more effective than levetiracetam in achieving seizure control after the first loading dose and after the maximal loading dose of ASM (moderate-certainty evidence). Phenobarbital + bumetanide may have little or no difference in achieving seizure control when compared to phenobarbital alone (low-certainty evidence). Limited data and very low-certainty evidence preclude us from drawing any reasonable conclusion on the effect of using one ASM versus another on other short- and long-term outcomes.\nIn neonates who achieve seizure control after the first loading dose of phenobarbital, maintenance therapy compared to no maintenance ASM may have little or no effect on all-cause mortality before discharge, mortality by 18 to 24 months, neurodevelopmental disability by 18 to 24 months and epilepsy post-discharge (low-certainty evidence).\nIn neonates with hypoxic-ischaemic encephalopathy, treatment of both clinical and electrographic seizures when compared to treating clinical seizures alone may have little or no effect on seizure burden during hospitalisation, all-cause mortality before discharge and epilepsy post-discharge (low-certainty evidence).\nAll findings of this review apply only to term and late preterm neonates.\nWe need well-designed RCTs for each of the three objectives of this review to improve the precision of the results. These RCTs should use EEG to diagnose seizures and should be adequately powered to assess long-term neurodevelopmental outcomes. 
We need separate RCTs evaluating the choice of ASM in preterm infants.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Newborns are more prone to develop seizures when compared to older children and adults. The brain damage caused by seizures in newborns is associated with cerebral palsy, intellectual disability, learning problems and a tendency to develop epilepsy in the future. There are only a few options for medications to treat seizures in newborns, and we do not know which is the ideal medication to use first, second or third. Similarly, whether to treat the seizures that are seen only on EEG and how long to continue the anti-seizure medication is also not clear."
}
] |
query-laysum | 7992 | [
{
"role": "user",
"content": "Abstract: Background\nBronchiectasis is being increasingly diagnosed and recognised as an important contributor to chronic lung disease in both adults and children in high- and low-income countries. It is characterised by irreversible dilatation of airways and is generally associated with airway inflammation and chronic bacterial infection. Medical management largely aims to reduce morbidity by controlling the symptoms, reduce exacerbation frequency, improve quality of life and prevent the progression of bronchiectasis. This is an update of a review first published in 2000.\nObjectives\nTo evaluate the efficacy and safety of inhaled corticosteroids (ICS) in children and adults with stable state bronchiectasis, specifically to assess whether the use of ICS: (1) reduces the severity and frequency of acute respiratory exacerbations; or (2) affects long-term pulmonary function decline.\nSearch methods\nWe searched the Cochrane Register of Controlled Trials (CENTRAL), the Cochrane Airways Group Register of trials, MEDLINE and Embase databases. We ran the latest literature search in June 2017.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing ICS with a placebo or no medication. We included children and adults with clinical or radiographic evidence of bronchiectasis, but excluded people with cystic fibrosis.\nData collection and analysis\nWe reviewed search results against predetermined criteria for inclusion. In this update, two independent review authors assessed methodological quality and risk of bias in trials using established criteria and extracted data using standard pro forma. We analysed treatment as 'treatment received' and performed sensitivity analyses.\nMain results\nThe review included seven studies, involving 380 adults. Of the 380 randomised participants, 348 completed the studies.\nDue to differences in outcomes reported among the seven studies, we could only perform limited meta-analysis for both the short-term ICS use (6 months or less) and the longer-term ICS use (> 6 months).\nDuring stable state in the short-term group (ICS for 6 months or less), based on the two studies from which data could be included, there were no significant differences from baseline values in the forced expiratory volume in the first second (FEV1) at the end of the study (mean difference (MD) -0.09, 95% confidence interval (CI) -0.26 to 0.09) and forced vital capacity (FVC) (MD 0.01 L, 95% CI -0.16 to 0.17) in adults on ICS (compared to no ICS). Similarly, we did not find any significant difference in the average exacerbation frequency (MD 0.09, 95% CI -0.61 to 0.79) or health-related quality of life (HRQoL) total scores in adults on ICS when compared with no ICS, though data available were limited. Based on a single non-placebo controlled study from which we could not extract clinical data, there was marginal, though statistically significant improvement in sputum volume and dyspnoea scores on ICS.\nThe single study on long-term outcomes (over 6 months) that examined lung function and other clinical outcomes, showed no significant effect of ICS on any of the outcomes. We could not draw any conclusion on adverse effects due to limited available data.\nDespite the authors of all seven studies stating they were double-blind, we judged one study (in the short duration ICS) as having a high risk of bias based on blinding, attrition and reporting of outcomes. 
The GRADE quality of evidence was low for all outcomes (due to non-placebo controlled trial, indirectness and imprecision with small numbers of participants and studies).\nAuthors' conclusions\nThis updated review indicates that there is insufficient evidence to support the routine use of ICS in adults with stable state bronchiectasis. Further, we cannot draw any conclusion for the use of ICS in adults during an acute exacerbation or in children (for any state), as there were no studies.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review found that there is not enough evidence for the routine use of ICS in adults with stable bronchiectasis. We can make no conclusions about the use of ICS for flare-ups of bronchiectasis, or about their use in children, because we did not find any studies."
}
] |
query-laysum
|
7993
|
[
{
"role": "user",
"content": "Abstract: Background\nWe performed a systematic review, a standard meta-analysis and network meta-analysis (NMA), which updates the 2009 Cochrane Overview, 'Biologics for rheumatoid arthritis (RA)'. This review is focused on biologic monotherapy in people with RA in whom treatment with traditional disease-modifying anti-rheumatic drugs (DMARDs) including methotrexate (MTX) had failed (MTX/other DMARD-experienced).\nObjectives\nTo assess the benefits and harms of biologic monotherapy (includes anti-tumor necrosis factor (TNF) (adalimumab, certolizumab pegol, etanercept, golimumab, infliximab) or non-TNF (abatacept, anakinra, rituximab, tocilizumab)) or tofacitinib monotherapy (oral small molecule) versus comparator (placebo or MTX/other DMARDs) in adults with RA who were MTX/other DMARD-experienced.\nMethods\nWe searched for randomized controlled trials (RCTs) in the Cochrane Central Register of Controlled Trials (CENTRAL; The Cochrane Library 2015, Issue 6, June), MEDLINE (via OVID 1946 to June 2015), and Embase (via OVID 1947 to June 2015). Article selection, data extraction and risk of bias and GRADE assessments were done in duplicate. We calculated direct estimates with 95% confidence intervals (CI) using standard meta-analysis. We used a Bayesian mixed treatment comparisons (MTC) approach for NMA estimates with 95% credible intervals (CrI). We converted odds ratios (OR) to risk ratios (RR) for ease of understanding. We calculated absolute measures as risk difference (RD) and number needed to treat for an additional beneficial outcome (NNTB).\nMain results\nThis update includes 40 new RCTs for a total of 46 RCTs, of which 41 studies with 14,049 participants provided data. The comparator was placebo in 16 RCTs (4,532 patients), MTX or other DMARD in 13 RCTs (5,602 patients), and another biologic in 12 RCTs (3,915 patients).\nMonotherapy versus placebo\nBased on moderate-quality direct evidence, biologic monotherapy (without concurrent MTX/other DMARDs) was associated with a clinically meaningful and statistically significant improvement in American College of Rheumatology score (ACR50) and physical function, as measured by the Health Assessment Questionnaire (HAQ) versus placebo. RR was 4.68 for ACR50 (95% CI, 2.93 to 7.48); absolute benefit RD 23% (95% CI, 18% to 29%); and NNTB = 5 (95% CI, 3 to 8). The mean difference (MD) was -0.32 for HAQ (95% CI, -0.42 to -0.23; a negative sign represents greater HAQ improvement); absolute benefit of -10.7% (95% CI, -14% to -7.7%); and NNTB = 4 (95% CI, 3 to 5). Direct and NMA estimates for TNF biologic, non-TNF biologic or tofacitinib monotherapy showed similar results for ACR50 , downgraded to moderate-quality evidence. Direct and NMA estimates for TNF biologic, anakinra or tofacitinib monotherapy showed a similar results for HAQ versus placebo with mostly moderate quality evidence.\nBased on moderate-quality direct evidence, biologic monotherapy was associated with a clinically meaningful and statistically significant greater proportion of disease remission versus placebo with RR 1.12 (95% CI 1.03 to 1.22); absolute benefit 10% (95% CI, 3% to 17%; NNTB = 10 (95% CI, 8 to 21)).\nBased on low-quality direct evidence, results for biologic monotherapy for withdrawals due to adverse events and serious adverse events were inconclusive, with wide confidence intervals encompassing the null effect and evidence of an important increase. 
The direct estimate for TNF monotherapy for withdrawals due to adverse events showed a clinically meaningful and statistically significant result with RR 2.02 (95% CI, 1.08 to 3.78), absolute benefit RD 3% (95% CI,1% to 4%), based on moderate-quality evidence. The NMA estimates for TNF biologic, non-TNF biologic, anakinra, or tofacitinib monotherapy for withdrawals due to adverse events and for serious adverse events were all inconclusive and downgraded to low-quality evidence.\nMonotherapy versus active comparator (MTX/other DMARDs)\nBased on direct evidence of moderate quality, biologic monotherapy (without concurrent MTX/other DMARDs) was associated with a clinically meaningful and statistically significant improvement in ACR50 and HAQ scores versus MTX/other DMARDs with a RR of 1.54 (95% CI, 1.14 to 2.08); absolute benefit 13% (95% CI, 2% to 23%), NNTB = 7 (95% CI, 4 to 26) and a mean difference in HAQ of -0.27 (95% CI, -0.40 to -0.14); absolute benefit of -9% (95% CI, -13.3% to -4.7%), NNTB = 2 (95% CI, 2 to 4). Direct and NMA estimates for TNF monotherapy and NMA estimate for non-TNF biologic monotherapy for ACR50 showed similar results, based on moderate-quality evidence. Direct and NMA estimates for non-TNF biologic monotherapy, but not TNF monotherapy, showed similar HAQ improvements , based on mostly moderate-quality evidence.\nThere were no statistically significant or clinically meaningful differences for direct estimates of biologic monotherapy versus active comparator for RA disease remission. NMA estimates showed a statistically significant and clinically meaningful difference versus active comparator for TNF monotherapy (absolute improvement 7% (95% CI, 2% to 14%)) and non-TNF monotherapy (absolute improvement 19% (95% CrI, 7% to 36%)), both downgraded to moderate quality.\nBased on moderate-quality direct evidence from a single study, radiographic progression (scale 0 to 448) was statistically significantly reduced in those on biologic monotherapy versus active comparator, MD -4.34 (95% CI, -7.56 to -1.12), though the absolute reduction was small, -0.97% (95% CI, -1.69% to -0.25%). We are not sure of the clinical relevance of this reduction.\nDirect and NMA evidence (downgraded to low quality), showed inconclusive results for withdrawals due to adverse events, serious adverse events and cancer, with wide confidence intervals encompassing the null effect and evidence of an important increase.\nAuthors' conclusions\nBased mostly on RCTs of six to 12-month duration in people with RA who had previously experienced and failed treatment with MTX/other DMARDs, biologic monotherapy improved ACR50, function and RA remission rates compared to placebo or MTX/other DMARDs.\nRadiographic progression was reduced versus active comparator, although the clinical significance was unclear.\nResults were inconclusive for whether biologic monotherapy was associated with an increased risk of withdrawals due to adverse events, serious adverse events or cancer, versus placebo (no data on cancer) or MTX/other DMARDs.\n\nGiven the provided abstract, please respond to the following query: \"Remission\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "21 people out of 100 had disappearance of all symptoms of their RA compared to 15 people out of 100 who were on MTX/other DMARDs, that is 6 more people out of 100 (6% absolute improvement)."
}
] |
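A brief worked example (illustrative only, using the standard definition of the number needed to treat rather than anything stated in the record above) of how an NNTB follows from an absolute risk difference such as the 23% reported for ACR50 versus placebo:

$$
\mathrm{NNTB} = \frac{1}{\mathrm{RD}} = \frac{1}{0.23} \approx 4.3,
$$

which is rounded up to the next whole person, giving the reported NNTB of 5 (that is, about 5 people would need to receive biologic monotherapy for one additional person to achieve an ACR50 response).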
query-laysum
|
7994
|
[
{
"role": "user",
"content": "Abstract: Background\nFrom the societal and employers' perspectives, sickness absence has a large economic impact. Internationally, there is variation in sickness certification practices. However, in most countries a physician's certificate of illness or reduced work ability is needed at some point of sickness absence. In many countries, there is a time period of varying length called the 'self-certification period' at the beginning of sickness absence. During that time a worker is not obliged to provide his or her employer a medical certificate and it is usually enough that the employee notifies his or her supervisor when taken ill. Self-certification can be introduced at organisational, regional, or national level.\nObjectives\nTo evaluate the effects of introducing, abolishing, or changing the period of self-certification of sickness absence on: the total or average duration (number of sickness absence days) of short-term sickness absence periods; the frequency of short-term sickness absence periods; the associated costs (of sickness absence and (occupational) health care); and social climate, supervisor involvement, and workload or presenteeism (see Figure 1).\nSearch methods\nWe conducted a systematic literature search to identify all potentially eligible published and unpublished studies. We adapted the search strategy developed for MEDLINE for use in the other electronic databases. We also searched for unpublished trials on ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). We used Google Scholar for exploratory searches.\nSelection criteria\nWe considered randomised controlled trials (RCTs), controlled before-after (CBA) studies, and interrupted time-series (ITS) studies for inclusion. We included studies carried out with individual employees or insured workers. We also included studies in which participants were addressed at the aggregate level of organisations, companies, municipalities, healthcare settings, or general populations. We included studies evaluating the effects of introducing, abolishing, or changing the period of self-certification of sickness absence.\nData collection and analysis\nWe conducted a systematic literature search up to 14 June 2018. We calculated missing data from other data reported by the authors. We intended to perform a random-effects meta-analysis, but the studies were too different to enable meta-analysis.\nMain results\nWe screened 6091 records for inclusion. Five studies fulfilled our inclusion criteria: one is an RCT and four are CBA studies. One study from Sweden changed the period of self-certification in 1985 in two districts for all insured inhabitants. Three studies from Norway conducted between 2001 and 2014 changed the period of self-certification in municipalities for all or part of the workers. 
One study from 1969 introduced self-certification for all manual workers of an oil refinery in the UK.\nLonger compared to shorter self-certification for reducing sickness absence in workers\nOutcome: average duration of sickness absence periods\nExtending the period of self-certification from one week to two weeks produced a higher mean duration of sickness absence periods: mean difference in change values between the intervention and control group (MDchange) was 0.67 days/period up to 29 days (95% confidence interval (95% CI) 0.55 to 0.79; 1 RCT; low-certainty evidence).\nThe introduction of self-certification for a maximum of three days produced a lower mean duration of sickness absence up to three days (MDchange −0.32 days/period, 95% CI −0.39 to −0.25; 1 CBA study; very low-certainty evidence). The authors of a different study reported that prolonging self-certification from ≤ 3 days to ≤ 365 days did not lead to a change, but they did not provide numerical data (very low-certainty evidence).\nOutcome: number of sickness absence periods per worker\nExtending the period of self-certification from one week to two weeks resulted in no difference in the number of sickness absence periods in one RCT, but the authors did not report numerical data (low-certainty evidence).\nThe introduction of self-certification for a maximum of three days produced a higher mean number of sickness absence periods lasting up to three days (MDchange 0.48 periods, 95% CI 0.33 to 0.63) in one CBA study (very low-certainty evidence).\nExtending the period of self-certification from three days to up to a year decreased the number of periods in one CBA study, but the authors did not report data (very low-certainty evidence).\nOutcome: average lost work time per 100 person-years\nExtending the period of self-certification from one week to two weeks resulted in an inferred increase in lost work time in one RCT (very low-certainty evidence).\nExtending the period of self-certification (introduction of self-certification for a maximum of three days (from zero to three days) and from three days to five days, respectively) resulted in more work time lost due to sickness absence periods lasting up to three days in two CBA studies that could not be pooled (MDchange 0.54 days/person-year, 95% CI 0.47 to 0.61; and MDchange 1.38 days/person-year, 95% CI 1.16 to 1.60; very low-certainty evidence).\nExtending the period of self-certification from three days up to 50 days led to 0.65 days less lost work time in one CBA study, based on absence periods lasting between four and 16 days. 
Extending the period of self-certification from three days up to 365 days resulted in less work time lost due to sickness absence periods longer than 16 days (MDchange −2.84 days, 95% CI −3.35 to −2.33; 1 CBA study; very low-certainty evidence).\nOutcome: costs of sickness absence and physician certification\nOne RCT reported that the higher costs of sickness absence benefits incurred by extending the period of self-certification far outweighed the possible reduction in costs of fewer physician appointments by almost six to one (low-certainty evidence).\nIn summary, we found very low-certainty evidence that introducing self-certification of sickness absence or prolonging the self-certification period has inconsistent effects on the mean number of sickness absence days, the number of sickness absence periods, and on lost work time due to sickness absence periods.\nAuthors' conclusions\nThere is low- to very low-certainty evidence of inconsistent effects of changing the period of self-certification on the duration or frequency of short-term sickness absence periods or the amount of work time lost due to sickness absence. Because the evidence is of low or very low certainty, more and better studies are needed.\n\nGiven the provided abstract, please respond to the following query: \"Why is this topic important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sickness absence is costly to society and to employers. Employers may have to continue paying the sick employee’s salary. After the employer’s obligation to pay has ended, insurance covers sickness benefits. Changing the practice of sickness certification in short sickness absences is expected to change employees' attitudes and behaviour, co-operation and climate at the workplace, and diminish sickness absence. Self-certification makes more healthcare resources available for other purposes."
}
] |
query-laysum
|
7995
|
[
{
"role": "user",
"content": "Abstract: Background\nOral mucositis is a side effect of chemotherapy, head and neck radiotherapy, and targeted therapy, affecting over 75% of high-risk patients. Ulceration can lead to severe pain and difficulty with eating and drinking, which may necessitate opioid analgesics, hospitalisation and supplemental nutrition. These complications may disrupt cancer therapy, which may reduce survival. There is also a risk of death from sepsis if pathogens enter the ulcers of immunocompromised patients. Ulcerative oral mucositis can be costly to healthcare systems, yet there are few preventive interventions proven to be beneficial. Cytokines and growth factors may help the regeneration of cells lining of the mouth, thus preventing or reducing oral mucositis and its negative effects.\nObjectives\nTo assess the effects of cytokines and growth factors for preventing oral mucositis in patients with cancer who are receiving treatment.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (searched 10 May 2017); the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 4) in the Cochrane Library (searched 10 May 2017); MEDLINE Ovid (1946 to 10 May 2017); Embase Ovid (7 December 2015 to 10 May 2017); CINAHL EBSCO (Cumulative Index to Nursing and Allied Health Literature; 1937 to 10 May 2017); and CANCERLIT PubMed (1950 to 10 May 2017). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials.\nSelection criteria\nWe included parallel-design randomised controlled trials (RCTs) assessing the effects of cytokines and growth factors in patients with cancer receiving treatment.\nData collection and analysis\nTwo review authors independently screened the results of electronic searches, extracted data and assessed risk of bias. For dichotomous outcomes, we reported risk ratios (RR) and 95% confidence intervals (CI). For continuous outcomes, we reported mean differences (MD) and 95% CIs. We pooled similar studies in random-effects meta-analyses. We reported adverse effects in a narrative format.\nMain results\nWe included 35 RCTs analysing 3102 participants. Thirteen studies were at low risk of bias, 12 studies were at unclear risk of bias, and 10 studies were at high risk of bias.\nOur main findings were regarding keratinocyte growth factor (KGF) and are summarised as follows.\nThere might be a reduction in the risk of moderate to severe oral mucositis in adults receiving bone marrow/stem cell transplantation after conditioning therapy for haematological cancers (RR 0.89, 95% CI 0.80 to 0.99; 6 studies; 852 participants; low-quality evidence). We would need to treat 11 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 6 to 112). There might be a reduction in the risk of severe oral mucositis in this population, but there is also some possibility of an increase in risk (RR 0.85, 95% CI 0.65 to 1.11; 6 studies; 852 participants; low-quality evidence). 
We would need to treat 10 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 5 to prevent the outcome to 14 to cause the outcome).\nThere is probably a reduction in the risk of moderate to severe oral mucositis in adults receiving radiotherapy to the head and neck with cisplatin or fluorouracil (RR 0.91, 95% CI 0.83 to 1.00; 3 studies; 471 participants; moderate-quality evidence). We would need to treat 12 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 7 to infinity). It is very likely that there is a reduction in the risk of severe oral mucositis in this population (RR 0.79, 95% CI 0.69 to 0.90; 3 studies; 471 participants; high-quality evidence). We would need to treat 7 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 5 to 15).\nIt is likely that there is a reduction in the risk of moderate to severe oral mucositis in adults receiving chemotherapy alone for mixed solid and haematological cancers (RR 0.56, 95% CI 0.45 to 0.70; 4 studies; 344 participants; moderate-quality evidence). We would need to treat 4 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 3 to 6). There might be a reduction in the risk of severe oral mucositis in this population (RR 0.30, 95% CI 0.14 to 0.65; 3 studies; 263 participants; low -quality evidence). We would need to treat 10 adults with KGF in order to prevent one additional adult from developing this outcome (95% CI 8 to 19).\nDue to the low volume of evidence, single-study comparisons and insufficient sample sizes, we found no compelling evidence of a benefit for any other cytokines or growth factors and there was no evidence on children. There did not appear to be any serious adverse effects of any of the interventions assessed in this review.\nAuthors' conclusions\nWe are confident that KGF is beneficial in the prevention of oral mucositis in adults who are receiving: a) radiotherapy to the head and neck with cisplatin or fluorouracil; or b) chemotherapy alone for mixed solid and haematological cancers. We are less confident about a benefit for KGF in adults receiving bone marrow/stem cell transplant after conditioning therapy for haematological cancers because of multiple factors involved in that population, such as whether or not they received total body irradiation (TBI) and whether the transplant was autologous (the patients' own cells) or allogeneic (cells from a donor). KGF appears to be a relatively safe intervention.\nDue to limited research, we are not confident that there are any beneficial effects of other cytokines and growth factors. There is currently insufficient evidence to draw any conclusions about the use of cytokines and growth factors in children.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For reducing oral mucositis in adults receiving radiotherapy to the head and neck with chemotherapy, review authors rated the evidence for KGF as moderate to high quality. For reducing oral mucositis in adults receiving chemotherapy alone for mixed solid and blood cancers, they rated the evidence for KGF as low to moderate quality. This evidence was downgraded due to there not being enough data and because some results have not yet been published. For reducing oral mucositis in adults receiving bone marrow/stem cell transplant after conditioning therapy for blood cancers, they rated the evidence for KGF as low quality because results were not similar across the studies and some results have not yet been published. Evidence on side effects of KGF was poorly reported and inconsistent."
}
] |
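The abstract above reports an NNTB whose confidence interval runs "from 5 to prevent the outcome to 14 to cause the outcome". A short, hedged note on how such an interval arises (standard NNT arithmetic; the risk-difference bounds below are implied, not quoted from the review): the NNT is the reciprocal of the absolute risk difference, so when the 95% CI of the risk difference crosses zero, the NNT interval passes through infinity. For example, an implied risk-difference CI of roughly $-0.20$ to $+0.07$ would give

$$
\mathrm{NNTB} = \frac{1}{0.20} = 5 \quad\text{and}\quad \mathrm{NNTH} = \frac{1}{0.07} \approx 14,
$$

so the interval is read as "NNTB 5 to infinity to NNTH 14".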
query-laysum
|
7996
|
[
{
"role": "user",
"content": "Abstract: Background\nMost people who stop smoking gain weight. This can discourage some people from making a quit attempt and risks offsetting some, but not all, of the health advantages of quitting. Interventions to prevent weight gain could improve health outcomes, but there is a concern that they may undermine quitting.\nObjectives\nTo systematically review the effects of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation (referred to as 'Part 1') and (2) interventions designed to aid smoking cessation that plausibly affect post-cessation weight gain (referred to as 'Part 2').\nSearch methods\nPart 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL; latest search 16 October 2020.\nPart 2 - We searched included studies in the following 'parent' Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, e-cigarettes, and exercise interventions for smoking cessation published in Issue 10, 2020 of the Cochrane Library. We updated register searches for the review of nicotine receptor partial agonists.\nSelection criteria\nPart 1 - trials of interventions that targeted post-cessation weight gain and had measured weight at any follow-up point or smoking cessation, or both, six or more months after quit day.\nPart 2 - trials included in the selected parent Cochrane reviews reporting weight change at any time point.\nData collection and analysis\nScreening and data extraction followed standard Cochrane methods. Change in weight was expressed as difference in weight change from baseline to follow-up between trial arms and was reported only in people abstinent from smoking. Abstinence from smoking was expressed as a risk ratio (RR). Where appropriate, we performed meta-analysis using the inverse variance method for weight, and Mantel-Haenszel method for smoking.\nMain results\nPart 1: We include 37 completed studies; 21 are new to this update. We judged five studies to be at low risk of bias, 17 to be at unclear risk and the remainder at high risk.\nAn intermittent very low calorie diet (VLCD) comprising full meal replacement provided free of charge and accompanied by intensive dietitian support significantly reduced weight gain at end of treatment compared with education on how to avoid weight gain (mean difference (MD) −3.70 kg, 95% confidence interval (CI) −4.82 to −2.58; 1 study, 121 participants), but there was no evidence of benefit at 12 months (MD −1.30 kg, 95% CI −3.49 to 0.89; 1 study, 62 participants). The VLCD increased the chances of abstinence at 12 months (RR 1.73, 95% CI 1.10 to 2.73; 1 study, 287 participants). However, a second study found that no-one completed the VLCD intervention or achieved abstinence.\nInterventions aimed at increasing acceptance of weight gain reported mixed effects at end of treatment, 6 months and 12 months with confidence intervals including both increases and decreases in weight gain compared with no advice or health education. Due to high heterogeneity, we did not combine the data. These interventions increased quit rates at 6 months (RR 1.42, 95% CI 1.03 to 1.96; 4 studies, 619 participants; I2 = 21%), but there was no evidence at 12 months (RR 1.25, 95% CI 0.76 to 2.06; 2 studies, 496 participants; I2 = 26%).\nSome pharmacological interventions tested for limiting post-cessation weight gain (PCWG) reduced weight gain at the end of treatment (dexfenfluramine, phenylpropanolamine, naltrexone). 
The effects of ephedrine and caffeine combined, lorcaserin, and chromium were too imprecise to give useful estimates of treatment effects. There was very low-certainty evidence that personalized weight management support reduced weight gain at end of treatment (MD −1.11 kg, 95% CI −1.93 to −0.29; 3 studies, 121 participants; I2 = 0%), but no evidence in the longer-term 12 months (MD −0.44 kg, 95% CI −2.34 to 1.46; 4 studies, 530 participants; I2 = 41%). There was low to very low-certainty evidence that detailed weight management education without personalized assessment, planning and feedback did not reduce weight gain and may have reduced smoking cessation rates (12 months: MD −0.21 kg, 95% CI −2.28 to 1.86; 2 studies, 61 participants; I2 = 0%; RR for smoking cessation 0.66, 95% CI 0.48 to 0.90; 2 studies, 522 participants; I2 = 0%).\nPart 2: We include 83 completed studies, 27 of which are new to this update.\nThere was low certainty that exercise interventions led to minimal or no weight reduction compared with standard care at end of treatment (MD −0.25 kg, 95% CI −0.78 to 0.29; 4 studies, 404 participants; I2 = 0%). However, weight was reduced at 12 months (MD −2.07 kg, 95% CI −3.78 to −0.36; 3 studies, 182 participants; I2 = 0%).\nBoth bupropion and fluoxetine limited weight gain at end of treatment (bupropion MD −1.01 kg, 95% CI −1.35 to −0.67; 10 studies, 1098 participants; I2 = 3%); (fluoxetine MD −1.01 kg, 95% CI −1.49 to −0.53; 2 studies, 144 participants; I2 = 38%; low- and very low-certainty evidence, respectively). There was no evidence of benefit at 12 months for bupropion, but estimates were imprecise (bupropion MD −0.26 kg, 95% CI −1.31 to 0.78; 7 studies, 471 participants; I2 = 0%). No studies of fluoxetine provided data at 12 months.\nThere was moderate-certainty that NRT reduced weight at end of treatment (MD −0.52 kg, 95% CI −0.99 to −0.05; 21 studies, 2784 participants; I2 = 81%) and moderate-certainty that the effect may be similar at 12 months (MD −0.37 kg, 95% CI −0.86 to 0.11; 17 studies, 1463 participants; I2 = 0%), although the estimates are too imprecise to assess long-term benefit.\nThere was mixed evidence of the effect of varenicline on weight, with high-certainty evidence that weight change was very modestly lower at the end of treatment (MD −0.23 kg, 95% CI −0.53 to 0.06; 14 studies, 2566 participants; I2 = 32%); a low-certainty estimate gave an imprecise estimate of higher weight at 12 months (MD 1.05 kg, 95% CI −0.58 to 2.69; 3 studies, 237 participants; I2 = 0%).\nAuthors' conclusions\nOverall, there is no intervention for which there is moderate certainty of a clinically useful effect on long-term weight gain. There is also no moderate- or high-certainty evidence that interventions designed to limit weight gain reduce the chances of people achieving abstinence from smoking.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are certain that there is no difference in weight gain at the end of treatment with varenicline, and further studies are unlikely to change this result. However, our confidence in all of the other evidence is limited, mainly because of small numbers of studies that could be compared, and small numbers of people taking part in them. The results varied widely, and there were not enough studies for us to be sure of the results. Our confidence is likely to change if further evidence becomes available."
}
] |
query-laysum
|
7997
|
[
{
"role": "user",
"content": "Abstract: Background\nCurrent guidelines recommend screening of people with oesophageal varices via oesophago-gastro-duodenoscopy at the time of diagnosis of hepatic cirrhosis. This requires that people repeatedly undergo unpleasant invasive procedures with their attendant risks, although half of these people have no identifiable oesophageal varices 10 years after the initial diagnosis of cirrhosis. Platelet count, spleen length, and platelet count-to-spleen length ratio are non-invasive tests proposed as triage tests for the diagnosis of oesophageal varices.\nObjectives\nPrimary objectives\nTo determine the diagnostic accuracy of platelet count, spleen length, and platelet count-to-spleen length ratio for the diagnosis of oesophageal varices of any size in paediatric or adult patients with chronic liver disease or portal vein thrombosis, irrespective of aetiology. To investigate the accuracy of these non-invasive tests as triage or replacement of oesophago-gastro-duodenoscopy.\nSecondary objectives\nTo compare the diagnostic accuracy of these same tests for the diagnosis of high-risk oesophageal varices in paediatric or adult patients with chronic liver disease or portal vein thrombosis, irrespective of aetiology.\nWe aimed to perform pair-wise comparisons between the three index tests, while considering predefined cut-off values.\nWe investigated sources of heterogeneity.\nSearch methods\nThe Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Hepato-Biliary Group Diagnostic Test Accuracy Studies Register, the Cochrane Library, MEDLINE (OvidSP), Embase (OvidSP), and Science Citation Index - Expanded (Web of Science) (14 June 2016). We applied no language or document-type restrictions.\nSelection criteria\nStudies evaluating the diagnostic accuracy of platelet count, spleen length, and platelet count-to-spleen length ratio for the diagnosis of oesophageal varices via oesophago-gastro-duodenoscopy as the reference standard in children or adults of any age with chronic liver disease or portal vein thrombosis, who did not have variceal bleeding.\nData collection and analysis\nStandard Cochrane methods as outlined in the Cochrane Handbook for Diagnostic Test of Accuracy Reviews.\nMain results\nWe included 71 studies, 67 of which enrolled only adults and four only children. All included studies were cross-sectional and were undertaken at a tertiary care centre. Eight studies reported study results in abstracts or letters. We considered all but one of the included studies to be at high risk of bias. We had major concerns about defining the cut-off value for the three index tests; most included studies derived the best cut-off values a posteriori, thus overestimating accuracy; 16 studies were designed to validate the 909 (n/mm3)/mm cut-off value for platelet count-to-spleen length ratio. Enrolment of participants was not consecutive in six studies and was unclear in 31 studies. Thirty-four studies assessed enrolment consecutively. Eleven studies excluded some included participants from the analyses, and in only one study, the time interval between index tests and the reference standard was longer than three months.\nDiagnosis of varices of any size. Platelet count showed sensitivity of 0.71 (95% confidence interval (CI) 0.63 to 0.77) and specificity of 0.80 (95% CI 0.69 to 0.88) (cut-off value of around 150,000/mm3 from 140,000 to 150,000/mm3; 10 studies, 2054 participants). 
When examining potential sources of heterogeneity, we found that of all predefined factors, only aetiology had a role: studies including participants with chronic hepatitis C reported different results when compared with studies including participants with mixed aetiologies (P = 0.036). Spleen length showed sensitivity of 0.85 (95% CI 0.75 to 0.91) and specificity of 0.54 (95% CI 0.46 to 0.62) (cut-off values of around 110 mm, from 110 to 112.5 mm; 13 studies, 1489 participants). For platelet count-to-spleen length ratio, summary estimates for detection of varices of any size showed sensitivity of 0.93 (95% CI 0.83 to 0.97) and specificity of 0.84 (95% CI 0.75 to 0.91) (cut-off value of 909 (n/mm3)/mm; 17 studies, 2637 participants). We found no effect of predefined sources of heterogeneity. An overall indirect comparison of the HSROCs of the three index tests showed that platelet count-to-spleen length ratio was the most accurate index test when compared with platelet count (P < 0.001) and spleen length (P < 0.001).\nDiagnosis of varices at high risk of bleeding. Platelet count showed sensitivity of 0.80 (95% CI 0.73 to 0.85) and specificity of 0.68 (95% CI 0.57 to 0.77) (cut-off value of around 150,000/mm3 from 140,000 to 160,000/mm3; seven studies, 1671 participants). For spleen length, we obtained only a summary ROC curve as we found no common cut-off between studies (six studies, 883 participants). Platelet count-to-spleen length ratio showed sensitivity of 0.85 (95% CI 0.72 to 0.93) and specificity of 0.66 (95% CI 0.52 to 0.77) (cut-off value of around 909 (n/mm3)/mm; from 897 to 921 (n/mm3)/mm; seven studies, 642 participants). An overall indirect comparison of the HSROCs of the three index tests showed that platelet count-to-spleen length ratio was the most accurate index test when compared with platelet count (P = 0.003) and spleen length (P < 0.001).\nDiagnosis of varices of any size in children. We found four studies including 277 children with different liver diseases and/or portal vein thrombosis. Platelet count showed sensitivity of 0.71 (95% CI 0.60 to 0.80) and specificity of 0.83 (95% CI 0.70 to 0.91) (cut-off value of around 115,000/mm3; four studies, 277 participants). Platelet count-to-spleen length z-score ratio showed sensitivity of 0.74 (95% CI 0.65 to 0.81) and specificity of 0.64 (95% CI 0.36 to 0.84) (cut-off value of 25; two studies, 197 participants).\nAuthors' conclusions\nPlatelet count-to-spleen length ratio could be used to stratify the risk of oesophageal varices. This test can be used as a triage test before endoscopy, thus ruling out adults without varices. In the case of a ratio > 909 (n/mm3)/mm, the presence of oesophageal varices of any size can be excluded and only 7% of adults with varices of any size would be missed, allowing investigators to reduce the number of oesophago-gastro-duodenoscopy examinations needed. This test is not accurate enough for identification of oesophageal varices at high risk of bleeding that require primary prophylaxis. Future studies should assess the diagnostic accuracy of this test in specific subgroups of patients, as well as its ability to predict variceal bleeding. New non-invasive tests should be examined.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 25 studies with 5096 participants assessing the use of platelet count to diagnose the presence of varices and grade the risk of bleeding, and comparing platelet count versus oesophago-gastro-duodenoscopy in adults with cirrhosis: 13 studies with 1489 participants assessed the diagnostic ability of spleen length, and 38 studies with 5235 participants assessed the diagnostic ability of platelet count-to-spleen length ratio. Platelet count-to-spleen length ratio was the most accurate and could be used to identify people with liver disease who were at high risk of having oesophageal varices. Particularly, in people with hepatic cirrhosis among whom 580 out of 1000 people are expected to have oesophageal varices, only 41 (7% of 580) people will be missed as having varices and will have no appropriate preventive treatment or follow-up. Thus, if platelet count-to-spleen length ratio is lower than 909 (n/mm3)/mm (the most used threshold), the presence of oesophageal varices can be excluded. Thus, it is possible to reduce the number of endoscopic examinations needed to find a person with oesophageal varices. On the contrary, this ratio is not accurate enough to replace endoscopy for identification of high risk of bleeding oesophageal varices."
}
] |
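A worked check (using only the figures quoted above) of the "41 people missed" statement: with a pretest prevalence of 580 per 1000 adults with cirrhosis and a summary sensitivity of 0.93 for the platelet count-to-spleen length ratio,

$$
\text{missed cases} = (1 - 0.93) \times 580 \approx 41,
$$

i.e. about 7% of the 580 people with varices would have a falsely reassuring ratio above the 909 (n/mm³)/mm threshold.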
query-laysum
|
7998
|
[
{
"role": "user",
"content": "Abstract: Background\nChest physiotherapy is widely prescribed to assist the clearance of airway secretions in people with cystic fibrosis. Oscillating devices generate intra- or extra-thoracic oscillations orally or external to the chest wall. Internally they create variable resistances within the airways, generating controlled oscillating positive pressure which mobilises mucus. Extra-thoracic oscillations are generated by forces outside the respiratory system, e.g. high frequency chest wall oscillation. This is an update of a previously published review.\nObjectives\nTo identify whether oscillatory devices, oral or chest wall, are effective for mucociliary clearance and whether they are equivalent or superior to other forms of airway clearance in the successful management of secretions in people with cystic fibrosis.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches and hand searches of relevant journals and abstract books of conference proceedings. Latest search of the Cystic Fibrosis Trials Register: 29 July 2019.\nIn addition we searched the trials databases ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform. Latest search of trials databases: 15 August 2019.\nSelection criteria\nRandomised controlled studies and controlled clinical studies of oscillating devices compared with any other form of physiotherapy in people with cystic fibrosis. Single-treatment interventions (therapy technique used only once in the comparison) were excluded.\nData collection and analysis\nTwo authors independently applied the inclusion criteria to publications, assessed the quality of the included studies and assessed the evidence using GRADE.\nMain results\nThe searches identified 82 studies (330 references); 39 studies (total of 1114 participants) met the inclusion criteria. Studies varied in duration from up to one week to one year; 20 of the studies were cross-over in design. The studies also varied in type of intervention and the outcomes measured, data were not published in sufficient detail in most of these studies, so meta-analysis was limited. Few studies were considered to have a low risk of bias in any domain. It is not possible to blind participants and clinicians to physiotherapy interventions, but 13 studies did blind the outcome assessors. The quality of the evidence across all comparisons ranged from low to very low.\nForced expiratory volume in one second was the most frequently measured outcome and while many of the studies reported an improvement in those people using a vibrating device compared to before the study, there were few differences when comparing the different devices to each other or to other airway clearance techniques. One study identified an increase in frequency of exacerbations requiring antibiotics whilst using high frequency chest wall oscillation when compared to positive expiratory pressure (low-quality evidence). There were some small but significant changes in secondary outcome variables such as sputum volume or weight, but not wholly in favour of oscillating devices and due to the low- or very low-quality evidence, it is not clear whether these were due to the particular intervention. 
Participant satisfaction was reported in 13 studies but again with low- or very low-quality evidence and not consistently in favour of an oscillating device, as some participants preferred breathing techniques or techniques used prior to the study interventions. The results for the remaining outcome measures were not examined or reported in sufficient detail to provide any high-level evidence.\nAuthors' conclusions\nThere was no clear evidence that oscillation was a more or less effective intervention overall than other forms of physiotherapy; furthermore, there was no evidence that one device is superior to another. The findings from one study showing an increase in frequency of exacerbations requiring antibiotics whilst using an oscillating device compared to positive expiratory pressure may have significant resource implications. More adequately-powered long-term randomised controlled trials are necessary, and outcomes measured should include frequency of exacerbations, individual preference, adherence to therapy and general satisfaction with treatment. Increased adherence to therapy may then lead to improvements in other parameters, such as exercise tolerance and respiratory function. Additional evidence is needed to evaluate whether oscillating devices combined with other forms of airway clearance are efficacious in people with cystic fibrosis. There may also be a requirement to consider the cost implication of devices over other forms of equally advantageous airway clearance techniques. Using the GRADE method to assess the quality of the evidence, we judged this to be low or very low quality, which suggests that further research is very likely to have an impact on confidence in any estimate of effect generated by future interventions.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of vibrating devices (e.g. Flutter, acapella, cornet, Quake®, intrapulmonary percussive ventilation, high frequency chest wall oscillators (e.g. Vest®), VibraLung®, MetaNeb®, and Aerobika® to help people with cystic fibrosis clear their airways of mucus. This is an update of a previously published review."
}
] |
query-laysum
|
7999
|
[
{
"role": "user",
"content": "Abstract: Background\nKnee osteoarthritis (OA) is a major public health issue because it causes chronic pain, reduces physical function and diminishes quality of life. Ageing of the population and increased global prevalence of obesity are anticipated to dramatically increase the prevalence of knee OA and its associated impairments. No cure for knee OA is known, but exercise therapy is among the dominant non-pharmacological interventions recommended by international guidelines.\nObjectives\nTo determine whether land-based therapeutic exercise is beneficial for people with knee OA in terms of reduced joint pain or improved physical function and quality of life.\nSearch methods\nFive electronic databases were searched, up until May 2013.\nSelection criteria\nAll randomised controlled trials (RCTs) randomly assigning individuals and comparing groups treated with some form of land-based therapeutic exercise (as opposed to exercise conducted in the water) with a non-exercise group or a non-treatment control group.\nData collection and analysis\nThree teams of two review authors independently extracted data, assessed risk of bias for each study and assessed the quality of the body of evidence for each outcome using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. We conducted analyses on continuous outcomes (pain, physical function and quality of life) immediately after treatment and on dichotomous outcomes (proportion of study withdrawals) at the end of the study; we also conducted analyses on the sustained effects of exercise on pain and function (two to six months, and longer than six months).\nMain results\nIn total, we extracted data from 54 studies. Overall, 19 (20%) studies reported adequate random sequence generation and allocation concealment and adequately accounted for incomplete outcome data; we considered these studies to have an overall low risk of bias. Studies were largely free from selection bias, but research results may be vulnerable to performance and detection bias, as only four of the RCTs reported blinding of participants to treatment allocation, and, although most RCTs reported blinded outcome assessment, pain, physical function and quality of life were participant self-reported.\nHigh-quality evidence from 44 trials (3537 participants) indicates that exercise reduced pain (standardised mean difference (SMD) -0.49, 95% confidence interval (CI) -0.39 to -0.59) immediately after treatment. Pain was estimated at 44 points on a 0 to 100-point scale (0 indicated no pain) in the control group; exercise reduced pain by an equivalent of 12 points (95% CI 10 to 15 points). Moderate-quality evidence from 44 trials (3913 participants) showed that exercise improved physical function (SMD -0.52, 95% CI -0.39 to -0.64) immediately after treatment. Physical function was estimated at 38 points on a 0 to 100-point scale (0 indicated no loss of physical function) in the control group; exercise improved physical function by an equivalent of 10 points (95% CI 8 to 13 points). High-quality evidence from 13 studies (1073 participants) revealed that exercise improved quality of life (SMD 0.28, 95% CI 0.15 to 0.40) immediately after treatment. 
Quality of life was estimated at 43 points on a 0 to 100-point scale (100 indicated best quality of life) in the control group; exercise improved quality of life by an equivalent of 4 points (95% CI 2 to 5 points).\nHigh-quality evidence from 45 studies (4607 participants) showed a comparable likelihood of withdrawal from exercise allocation (event rate 14%) compared with the control group (event rate 15%), and this difference was not significant: odds ratio (OR) 0.93 (95% CI 0.75 to 1.15). Eight studies reported adverse events, all of which were related to increased knee or low back pain attributed to the exercise intervention provided. No study reported a serious adverse event.\nIn addition, 12 included studies provided two to six-month post-treatment sustainability data on 1468 participants for knee pain and on 1279 (10 studies) participants for physical function. These studies indicated sustainability of treatment effect for pain (SMD -0.24, 95% CI -0.35 to -0.14), with an equivalent reduction of 6 (3 to 9) points on 0 to 100-point scale, and of physical function (SMD -0.15 95% CI -0.26 to -0.04), with an equivalent improvement of 3 (1 to 5) points on 0 to 100-point scale.\nMarked variability was noted across included studies among participants recruited, symptom duration, exercise interventions assessed and important aspects of study methodology. Individually delivered programmes tended to result in greater reductions in pain and improvements in physical function, compared to class-based exercise programmes or home-based programmes; however between-study heterogeneity was marked within the individually provided treatment delivery subgroup.\nAuthors' conclusions\nHigh-quality evidence indicates that land-based therapeutic exercise provides short-term benefit that is sustained for at least two to six months after cessation of formal treatment in terms of reduced knee pain, and moderate-quality evidence shows improvement in physical function among people with knee OA. The magnitude of the treatment effect would be considered moderate (immediate) to small (two to six months) but comparable with estimates reported for non-steroidal anti-inflammatory drugs. Confidence intervals around demonstrated pooled results for pain reduction and improvement in physical function do not exclude a minimal clinically important treatment effect. Since the participants in most trials were aware of their treatment, this may have contributed to their improvement. Despite the lack of blinding we did not downgrade the quality of evidence for risk of performance or detection bias. This reflects our belief that further research in this area is unlikely to change the findings of our review.\n\nGiven the provided abstract, please respond to the following query: \"Background: What is OA of the knee, and what is exercise?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Osteoarthritis (OA) is a disease of joints, such as the hip. When the joint loses cartilage, the bone grows to try to repair the damage. However, instead of making things better, the bone grows abnormally and makes things worse. For example, the bone can become misshapen and make the joint painful and unstable. Doctors used to think that OA simply resulted in thinning of the cartilage. However, it is now known that OA is a disease of the whole joint.\nExercise can be any activity that enhances or maintains muscle strength, physical fitness and overall health. People exercise for many reasons; they may exercise to lose weight, to strengthen muscles or to relieve the symptoms of OA."
}
] |
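A brief note (an inference from the numbers quoted above, not a figure reported by the review) on how a standardised mean difference is translated back into scale points: the conversion multiplies the SMD by the standard deviation of the control group on the original 0 to 100 pain scale. Working backwards from the reported values,

$$
\text{points} = \mathrm{SMD} \times \mathrm{SD}, \qquad 12 \approx 0.49 \times \mathrm{SD} \;\Rightarrow\; \mathrm{SD} \approx 24 \text{ points},
$$

so the pooled SMD of $-0.49$ corresponds to the stated 12-point pain reduction under an implied control-group SD of roughly 24 points.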
query-laysum
|
8000
|
[
{
"role": "user",
"content": "Abstract: Background\nAttention deficit hyperactivity disorder (ADHD) is one of the most commonly diagnosed and treated psychiatric disorders in childhood. Typically, children and adolescents with ADHD find it difficult to pay attention and they are hyperactive and impulsive. Methylphenidate is the psychostimulant most often prescribed, but the evidence on benefits and harms is uncertain. This is an update of our comprehensive systematic review on benefits and harms published in 2015.\nObjectives\nTo assess the beneficial and harmful effects of methylphenidate for children and adolescents with ADHD.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, three other databases and two trials registers up to March 2022. In addition, we checked reference lists and requested published and unpublished data from manufacturers of methylphenidate.\nSelection criteria\nWe included all randomised clinical trials (RCTs) comparing methylphenidate versus placebo or no intervention in children and adolescents aged 18 years and younger with a diagnosis of ADHD. The search was not limited by publication year or language, but trial inclusion required that 75% or more of participants had a normal intellectual quotient (IQ > 70). We assessed two primary outcomes, ADHD symptoms and serious adverse events, and three secondary outcomes, adverse events considered non-serious, general behaviour, and quality of life.\nData collection and analysis\nTwo review authors independently conducted data extraction and risk of bias assessment for each trial. Six review authors including two review authors from the original publication participated in the update in 2022. We used standard Cochrane methodological procedures. Data from parallel-group trials and first-period data from cross-over trials formed the basis of our primary analyses. We undertook separate analyses using end-of-last period data from cross-over trials. We used Trial Sequential Analyses (TSA) to control for type I (5%) and type II (20%) errors, and we assessed and downgraded evidence according to the GRADE approach.\nMain results\nWe included 212 trials (16,302 participants randomised); 55 parallel-group trials (8104 participants randomised), and 156 cross-over trials (8033 participants randomised) as well as one trial with a parallel phase (114 participants randomised) and a cross-over phase (165 participants randomised). The mean age of participants was 9.8 years ranging from 3 to 18 years (two trials from 3 to 21 years). The male-female ratio was 3:1. Most trials were carried out in high-income countries, and 86/212 included trials (41%) were funded or partly funded by the pharmaceutical industry. Methylphenidate treatment duration ranged from 1 to 425 days, with a mean duration of 28.8 days. Trials compared methylphenidate with placebo (200 trials) and with no intervention (12 trials). Only 165/212 trials included usable data on one or more outcomes from 14,271 participants.\nOf the 212 trials, we assessed 191 at high risk of bias and 21 at low risk of bias. If, however, deblinding of methylphenidate due to typical adverse events is considered, then all 212 trials were at high risk of bias.\nPrimary outcomes: methylphenidate versus placebo or no intervention may improve teacher-rated ADHD symptoms (standardised mean difference (SMD) −0.74, 95% confidence interval (CI) −0.88 to −0.61; I² = 38%; 21 trials; 1728 participants; very low-certainty evidence). 
This corresponds to a mean difference (MD) of −10.58 (95% CI −12.58 to −8.72) on the ADHD Rating Scale (ADHD-RS; range 0 to 72 points). The minimal clinically relevant difference is considered to be a change of 6.6 points on the ADHD-RS. Methylphenidate may not affect serious adverse events (risk ratio (RR) 0.80, 95% CI 0.39 to 1.67; I² = 0%; 26 trials, 3673 participants; very low-certainty evidence). The TSA-adjusted intervention effect was RR 0.91 (CI 0.31 to 2.68).\nSecondary outcomes: methylphenidate may cause more adverse events considered non-serious versus placebo or no intervention (RR 1.23, 95% CI 1.11 to 1.37; I² = 72%; 35 trials 5342 participants; very low-certainty evidence). The TSA-adjusted intervention effect was RR 1.22 (CI 1.08 to 1.43). Methylphenidate may improve teacher-rated general behaviour versus placebo (SMD −0.62, 95% CI −0.91 to −0.33; I² = 68%; 7 trials 792 participants; very low-certainty evidence), but may not affect quality of life (SMD 0.40, 95% CI −0.03 to 0.83; I² = 81%; 4 trials, 608 participants; very low-certainty evidence).\nAuthors' conclusions\nThe majority of our conclusions from the 2015 version of this review still apply. Our updated meta-analyses suggest that methylphenidate versus placebo or no-intervention may improve teacher-rated ADHD symptoms and general behaviour in children and adolescents with ADHD. There may be no effects on serious adverse events and quality of life. Methylphenidate may be associated with an increased risk of adverse events considered non-serious, such as sleep problems and decreased appetite. However, the certainty of the evidence for all outcomes is very low and therefore the true magnitude of effects remain unclear.\nDue to the frequency of non-serious adverse events associated with methylphenidate, the blinding of participants and outcome assessors is particularly challenging. To accommodate this challenge, an active placebo should be sought and utilised. It may be difficult to find such a drug, but identifying a substance that could mimic the easily recognised adverse effects of methylphenidate would avert the unblinding that detrimentally affects current randomised trials.\nFuture systematic reviews should investigate the subgroups of patients with ADHD that may benefit most and least from methylphenidate. This could be done with individual participant data to investigate predictors and modifiers like age, comorbidity, and ADHD subtypes.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if methylphenidate improves children's ADHD symptoms (attention, hyperactivity) based mainly on teachers' ratings using various scales, and whether it causes serious unwanted effects, like death, hospitalisation, or disability. We were also interested in less serious unwanted effects like sleep problems and loss of appetite, and its effects on children's general behaviour and quality of life."
}
] |
query-laysum
|
8001
|
[
{
"role": "user",
"content": "Abstract: Background\nPlacing a small volume of colostrum directly onto the buccal mucosa of preterm infants during the early neonatal period may provide immunological and growth factors that stimulate the immune system and enhance intestinal growth. These benefits could potentially reduce the risk of infection and necrotising enterocolitis (NEC) and improve survival and long-term outcome.\nObjectives\nTo determine if early (within the first 48 hours of life) oropharyngeal administration of mother’s own fresh or frozen/thawed colostrum can reduce rates of NEC, late-onset invasive infection, and/or mortality in preterm infants compared with controls. To assess trials for evidence of safety and harm (e.g. aspiration pneumonia). To compare effects of early oropharyngeal colostrum (OPC) versus no OPC, placebo, late OPC, and nasogastric colostrum.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 8), MEDLINE via PubMed (1966 to August 2017), Embase (1980 to August 2017), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to August 2017). We also searched clinical trials registries for ongoing and recently completed trials (clinicaltrials.gov; the World Health Organization International Trials Registry (www.whoint/ictrp/search/en/), and the ISRCTN Registry), conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials. We performed the last search in August 2017. We contacted trial investigators regarding unpublished studies and data.\nSelection criteria\nWe searched for published and unpublished randomised controlled trials comparing early administration of oropharyngeal colostrum (OPC) versus sham administration of water, oral formula, or donor breast milk, or versus no intervention. We also searched for studies comparing early OPC versus early nasogastric or nasojejunal administration of colostrum. We considered only trials that included preterm infants at < 37 weeks' gestation. We did not limit the review to any particular region or language.\nData collection and analysis\nTwo review authors independently screened retrieved articles for inclusion and independently conducted data extraction, data analysis, and assessments of 'Risk of bias' and quality of evidence. We graded evidence quality using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We contacted study authors for additional information or clarification when necessary.\nMain results\nWe included six studies that compared early oropharyngeal colostrum versus water, saline, placebo, or donor, or versus no intervention, enrolling 335 preterm infants with gestational ages ranging from 25 to 32 weeks' gestation and birth weights of 410 to 2500 grams. Researchers found no significant differences between OPC and control for primary outcomes - incidence of NEC (typical risk ratio (RR) 1.42, 95% confidence interval (CI) 0.50 to 4.02; six studies, 335 infants; P = 0.51; I² = 0%; very low-quality evidence), incidence of late-onset infection (typical RR 0.86, 95% CI 0.56 to 1.33; six studies, 335 infants; P = 0.50; I² = 0%; very low-quality evidence), and death before hospital discharge (typical RR 0.76, 95% CI 0.34 to 1.71; six studies, 335 infants; P = 0.51; I² = 0%; very low-quality evidence). 
Similarly, meta-analysis showed no difference in length of hospital stay between OPC and control groups (mean difference (MD) 0.81, 95% CI -5.87 to 7.5; four studies, 293 infants; P = 0.65; I² = 49%). Days to full enteral feeds were reduced in the OPC group with MD of -2.58 days (95% CI -4.01 to -1.14; six studies, 335 infants; P = 0.0004; I² = 28%; very low-quality evidence).\nThe effect of OPC was uncertain because of small sample sizes and imprecision in study results (very low-quality evidence).\nNo adverse effects were associated with OPC; however, data on adverse effects were insufficient, and no numerical data were available from the included studies.\nOverall the quality of included studies was low to very low across all outcomes. We downgraded GRADE outcomes because of concerns about allocation concealment and blinding, reporting bias, small sample sizes with few events, and wide confidence intervals.\nAuthors' conclusions\nLarge, well-designed trials would be required to evaluate more precisely and reliably the effects of oropharyngeal colostrum on important outcomes for preterm infants.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Placing a small volume of colostrum - the first milk produced by the mother during the first few days of life - directly onto the inside of the cheeks of preterm infants may provide immunological and growth factors that stimulate the immune system and enhance growth of the intestine. These benefits could potentially reduce infections, including severe infections in the intestine known as necrotising enterocolitis (NEC), thereby improving survival and long-term outcomes."
}
] |
query-laysum
|
8002
|
[
{
"role": "user",
"content": "Abstract: Background\nMany survivors of stroke report attentional impairments, such as diminished concentration and distractibility. However, the effectiveness of cognitive rehabilitation for improving these impairments is uncertain.This is an update of the Cochrane Review first published in 2000 and previously updated in 2013.\nObjectives\nTo determine whether people receiving cognitive rehabilitation for attention problems 1. show better outcomes in their attentional functions than those given no treatment or treatment as usual, and 2. have a better functional recovery, in terms of independence in activities of daily living, mood, and quality of life, than those given no treatment or treatment as usual.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register, CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, PsycBITE, REHABDATA and ongoing trials registers up to February 2019. We screened reference lists and tracked citations using Scopus.\nSelection criteria\nWe included controlled clinical trials (CCTs) and randomised controlled trials (RCTs) of cognitive rehabilitation for impairments of attention for people with stroke. We did not consider listening to music, meditation, yoga, or mindfulness to be a form of cognitive rehabilitation. We only considered trials that selected people with demonstrable or self-reported attentional deficits. The primary outcomes were measures of global attentional functions, and secondary outcomes were measures of attentional domains (i.e. alertness, selective attention, sustained attention, divided attention), functional abilities, mood, and quality of life.\nData collection and analysis\nTwo review authors independently selected trials, extracted data, and assessed the risk of bias. We used the GRADE approach to assess the certainty of evidence for each outcome.\nMain results\nWe included no new trials in this update. The results are unchanged from the previous review and are based on the data of six RCTs with 223 participants. All six RCTs compared cognitive rehabilitation with a usual care control.\nMeta-analyses demonstrated no convincing effect of cognitive rehabilitation on subjective measures of attention either immediately after treatment (standardised mean difference (SMD) 0.53, 95% confidence interval (CI) –0.03 to 1.08; P = 0.06; 2 studies, 53 participants; very low-quality evidence) or at follow-up (SMD 0.16, 95% CI –0.23 to 0.56; P = 0.41; 2 studies, 99 participants; very low-quality evidence).\nPeople receiving cognitive rehabilitation (when compared with control) showed that measures of divided attention recorded immediately after treatment may improve (SMD 0.67, 95% CI 0.35 to 0.98; P < 0.0001; 4 studies, 165 participants; low-quality evidence), but it is uncertain that these effects persisted (SMD 0.36, 95% CI –0.04 to 0.76; P = 0.08; 2 studies, 99 participants; very low-quality evidence). There was no evidence for immediate or persistent effects of cognitive rehabilitation on alertness, selective attention, and sustained attention.\nThere was no convincing evidence for immediate or long-term effects of cognitive rehabilitation for attentional problems on functional abilities, mood, and quality of life after stroke.\nAuthors' conclusions\nThe effectiveness of cognitive rehabilitation for attention deficits following stroke remains unconfirmed. 
The results suggest there may be an immediate effect after treatment on attentional abilities, but future studies need to assess what helps this effect persist and generalise to attentional skills in daily life. Trials also need to have higher methodological quality and better reporting.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified six studies that compared cognitive rehabilitation with a control group who received their usual care (but not cognitive rehabilitation) for people with attention problems following stroke. We did not consider listening to music, meditation, yoga, or mindfulness to be a form of cognitive rehabilitation. The six studies involved 223 participants who demonstrated attentional problems or reported having such problems following stroke. The evidence is current to February 2019."
}
] |
query-laysum
|
8003
|
[
{
"role": "user",
"content": "Abstract: Background\nEarly accurate detection of all skin cancer types is important to guide appropriate management, to reduce morbidity and to improve survival. Basal cell carcinoma (BCC) is almost always a localised skin cancer with potential to infiltrate and damage surrounding tissue, whereas a minority of cutaneous squamous cell carcinomas (cSCCs) and invasive melanomas are higher-risk skin cancers with the potential to metastasise and cause death. Dermoscopy has become an important tool to assist specialist clinicians in the diagnosis of melanoma, and is increasingly used in primary-care settings. Dermoscopy is a precision-built handheld illuminated magnifier that allows more detailed examination of the skin down to the level of the superficial dermis. Establishing the value of dermoscopy over and above visual inspection for the diagnosis of BCC or cSCC in primary- and secondary-care settings is critical to understanding its potential contribution to appropriate skin cancer triage, including referral of higher-risk cancers to secondary care, the identification of low-risk skin cancers that might be treated in primary care and to provide reassurance to those with benign skin lesions who can be safely discharged.\nObjectives\nTo determine the diagnostic accuracy of visual inspection and dermoscopy, alone or in combination, for the detection of (a) BCC and (b) cSCC, in adults. We separated studies according to whether the diagnosis was recorded face-to-face (in person) or based on remote (image-based) assessment.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design that evaluated visual inspection or dermoscopy or both in adults with lesions suspicious for skin cancer, compared with a reference standard of either histological confirmation or clinical follow-up.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic thresholds were missing. We estimated accuracy using hierarchical summary ROC methods. We undertook analysis of studies allowing direct comparison between tests. To facilitate interpretation of results, we computed values of sensitivity at the point on the SROC curve with 80% fixed specificity and values of specificity with 80% fixed sensitivity. We investigated the impact of in-person test interpretation; use of a purposely-developed algorithm to assist diagnosis; and observer expertise.\nMain results\nWe included 24 publications reporting on 24 study cohorts, providing 27 visual inspection datasets (8805 lesions; 2579 malignancies) and 33 dermoscopy datasets (6855 lesions; 1444 malignancies). The risk of bias was mainly low for the index test (for dermoscopy evaluations) and reference standard domains, particularly for in-person evaluations, and high or unclear for participant selection, application of the index test for visual inspection and for participant flow and timing. 
We scored concerns about the applicability of study findings as of ‘high’ or 'unclear' concern for almost all studies across all domains assessed. Selective participant recruitment, lack of reproducibility of diagnostic thresholds and lack of detail on observer expertise were particularly problematic.\nThe detection of BCC was reported in 28 datasets; 15 on an in-person basis and 13 image-based. Analysis of studies by prior testing of participants and according to observer expertise was not possible due to lack of data. Studies were primarily conducted in participants referred for specialist assessment of lesions with available histological classification. We found no clear differences in accuracy between dermoscopy studies undertaken in person and those which evaluated images. The lack of effect observed may be due to other sources of heterogeneity, including variations in the types of skin lesion studied, in dermatoscopes used, or in the use of algorithms and varying thresholds for deciding on a positive test result.\nMeta-analysis found in-person evaluations of dermoscopy (7 evaluations; 4683 lesions and 363 BCCs) to be more accurate than visual inspection alone for the detection of BCC (8 evaluations; 7017 lesions and 1586 BCCs), with a relative diagnostic odds ratio (RDOR) of 8.2 (95% confidence interval (CI) 3.5 to 19.3; P < 0.001). This corresponds to predicted differences in sensitivity of 14% (93% versus 79%) at a fixed specificity of 80% and predicted differences in specificity of 22% (99% versus 77%) at a fixed sensitivity of 80%. We observed very similar results for the image-based evaluations.\nWhen applied to a hypothetical population of 1000 lesions, of which 170 are BCC (based on median BCC prevalence across studies), an increased sensitivity of 14% from dermoscopy would lead to 24 fewer BCCs missed, assuming 166 false positive results from both tests. A 22% increase in specificity from dermoscopy with sensitivity fixed at 80% would result in 183 fewer unnecessary excisions, assuming 34 BCCs missed for both tests. There was not enough evidence to assess the use of algorithms or structured checklists for either visual inspection or dermoscopy.\nInsufficient data were available to draw conclusions on the accuracy of either test for the detection of cSCCs.\nAuthors' conclusions\nDermoscopy may be a valuable tool for the diagnosis of BCC as an adjunct to visual inspection of a suspicious skin lesion following a thorough history-taking including assessment of risk factors for keratinocyte cancer. The evidence primarily comes from secondary-care (referred) populations and populations with pigmented lesions or mixed lesion types. There is no clear evidence supporting the use of currently-available formal algorithms to assist dermoscopy diagnosis.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for and used studies published up to August 2016.\n*In these studies biopsy, clinical follow-up or specialist clinician diagnosis were the reference standards (means of establishing the final diagnosis)."
}
] |
query-laysum
|
8004
|
[
{
"role": "user",
"content": "Abstract: Background\nAortic valve stenosis is the most common type of valvular heart disease in the USA and Europe. Aortic valve stenosis is considered similar to atherosclerotic disease. Some studies have evaluated statins for aortic valve stenosis.\nObjectives\nTo evaluate the effectiveness and safety of statins in aortic valve stenosis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS - IBECS, Web of Science and CINAHL Plus. These databases were searched from their inception to 24 November 2015. We also searched trials in registers for ongoing trials. We used no language restrictions.\nSelection criteria\nRandomised controlled clinical trials (RCTs) comparing statins alone or in association with other systemic drugs to reduce cholesterol levels versus placebo or usual care.\nData collection and analysis\nPrimary outcomes were severity of aortic valve stenosis (evaluated by echocardiographic criteria: mean pressure gradient, valve area and aortic jet velocity), freedom from valve replacement and death from cardiovascular cause. Secondary outcomes were hospitalisation for any reason, overall mortality, adverse events and patient quality of life.\nTwo review authors independently selected trials for inclusion, extracted data and assessed the risk of bias. The GRADE methodology was employed to assess the quality of result findings and the GRADE profiler (GRADEPRO) was used to import data from Review Manager 5.3 to create a 'Summary of findings' table.\nMain results\nWe included four RCTs with 2360 participants comparing statins (1185 participants) with placebo (1175 participants). We found low-quality evidence for our primary outcome of severity of aortic valve stenosis, evaluated by mean pressure gradient (mean difference (MD) -0.54, 95% confidence interval (CI) -1.88 to 0.80; participants = 1935; studies = 2), valve area (MD -0.07, 95% CI -0.28 to 0.14; participants = 127; studies = 2), and aortic jet velocity (MD -0.06, 95% CI -0.26 to 0.14; participants = 155; study = 1). Moderate-quality evidence showed no effect on freedom from valve replacement with statins (risk ratio (RR) 0.93, 95% CI 0.81 to 1.06; participants = 2360; studies = 4), and no effect on muscle pain as an adverse event (RR 0.91, 95% CI 0.75 to 1.09; participants = 2204; studies = 3; moderate-quality evidence). Low- and very low-quality evidence showed uncertainty around the effect of statins on death from cardiovascular cause (RR 0.80, 95% CI 0.56 to 1.15; participants = 2297; studies = 3; low-quality evidence) and hospitalisation for any reason (RR 0.84, 95% CI 0.39 to 1.84; participants = 155; study = 1; very low-quality evidence). None of the four included studies reported on overall mortality and patient quality of life.\nAuthors' conclusions\nResult findings showed uncertainty surrounding the effect of statins for aortic valve stenosis.The quality of evidence from the reported outcomes ranged from moderate to very low. These results give support to European and USA guidelines (2012 and 2014, respectively) that so far there is no clinical treatment option for aortic valve stenosis.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Based on the evidence in this review, there is uncertainty surrounding the effect of statins for aortic valve stenosis. These results give support to European and USA guidelines (2012 and 2014, respectively) that so far there is no clinical treatment option for aortic valve stenosis. An alternative might be to broaden the knowledge of the pathophysiology of this disease and include risk factors such as calcium, heredity, vitamin D, inflammation, oxidative stress, diabetes, hypertension among others that could contribute to aortic valve stenosis. High-quality randomised trials to include risk factors for aortic valve stenosis are needed."
}
] |
query-laysum
|
8005
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatocellular carcinoma is the sixth most common cancer worldwide. Hepatic resection is regarded as the curative therapy for hepatocellular carcinoma. However, only about 20% of people with hepatocellular carcinoma are candidates for resection, which highlights the importance of effective nonsurgical therapies. Until now, transcatheter arterial chemoembolisation (TACE) is the most common palliative therapy for hepatocellular carcinoma, but its clinical benefits remain unsatisfactory. During recent years, some studies have reported that the combination of TACE plus thermal ablation can confer a more favourable prognosis than TACE alone. However, clear and compelling evidence to prove the beneficial or harmful effects of the combination of TACE and thermal ablation therapy is lacking.\nObjectives\nTo assess the beneficial and harmful effects of the combination of thermal ablation with TACE versus TACE alone in people with hepatocellular carcinoma.\nSearch methods\nWe performed searches in the Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials in the Cochrane Library, MEDLINE, Embase, LILACS, Science Citation Index Expanded, and Conference Proceedings Citation Index-Science. We endeavoured to identify relevant randomised clinical trials also in the China National Knowledge Infrastructure (CNKI) and Wanfang databases. We searched trial registration websites for ongoing studies. We also handsearched grey literature sources. The date of last search was 22 December 2020.\nSelection criteria\nWe planned to include all randomised clinical trials comparing the combination of TACE plus thermal ablation versus TACE alone for hepatocellular carcinoma, no matter the language, year of publication, publication status, and reported outcomes.\nData collection and analysis\nWe planned to use standard methodological procedures expected by Cochrane. We planned to calculate risk ratios (RRs) with the corresponding 95% confidence intervals (CIs). For time-to-event variables, we planned to use the methods of survival analysis and express the intervention effect as a hazard ratio (HR) with 95% Cl. If the log HR and the variance were not directly reported in reports, we planned to calculate them indirectly, following methods for incorporating summary time-to-event data into meta-analysis. We planned to assess the risk of bias of the included studies using the RoB 2 tool. We planned to assess the certainty of evidence with GRADE and present the evidence in a summary of findings table.\nMain results\nOut of 2224 records retrieved with the searches, we considered 135 records eligible for full-text screening. We excluded 21 of these records because the interventions used were outside the scope of our review or the studies were not randomised clinical trials. We listed the remaining 114 records, reporting on 114 studies, under studies awaiting classification because we could not be sure that these were randomised clinical trials from the information in the study paper. We could not obtain information on the registration of the study protocol for any of the 114 studies. We could not obtain information on study approval by regional research ethics committees, either from the study authors or through our own searches of trial registries. Corresponding authors did not respond to our enquiries about the design and conduct of the studies, except for one from whom we did not receive a satisfactory response. 
We also raised awareness of our concerns to editors of the journals that published the 114 studies, and we did not hear back with useful information. Moreover, there seemed to be inappropriate inclusion of trial participants, based on cancer stage and severity of liver disease, who should have obtained other interventions according to guidelines from learned societies.\nAccordingly, we found no confirmed randomised clinical trials evaluating the combination of TACE plus thermal ablation versus TACE alone for people with hepatocellular carcinoma for inclusion in our review.\nWe identified five ongoing trials by handsearching clinical trial websites.\nAuthors' conclusions\nWe could not find for inclusion any confirmed randomised clinical trials assessing the beneficial or harmful effects of the combination of TACE plus thermal ablation versus TACE alone in people with hepatocellular carcinoma. Therefore, our results could neither confirm nor refute the efficacy of the combination of TACE plus thermal ablation versus TACE alone for people with hepatocellular carcinoma.\nWe need trials that compare the beneficial and harmful effects of the combination of TACE plus thermal ablation versus TACE alone in people with hepatocellular carcinoma who are not eligible for treatments with curative intent (liver transplantation, ablation, surgical resection), who have sufficient liver reserve, as assessed by the Child-Pugh score, and who do not have extrahepatic metastases. Therefore, future trial participants must be classified at Barcelona Clinic Liver Cancer Stage B (intermediate stage) (BCLC-B) or an equivalent stage in other staging systems.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Hepatocellular carcinoma (a common kind of liver cancer) is the sixth most common cancer in the world. Transcatheter arterial chemoembolisation (TACE) (injecting agents into the feeding vessels of the tumour to reduce the blood supply to the tumour and kill the tumour) is the most common therapy for hepatocellular carcinoma, but the clinical outcome is poor. In recent years, the combination of TACE plus thermal ablation (killing the tumour cell by producing heat or cold) has shown better efficacy than TACE alone. However, evidence to prove the beneficial or harmful effect of the combination of TACE with ablation for people with hepatocellular carcinoma is still lacking."
}
] |
query-laysum
|
8006
|
[
{
"role": "user",
"content": "Abstract: Background\nReceiving a diagnosis of cancer and the subsequent related treatments can have a significant impact on an individual's physical and psychosocial well-being. To ensure that cancer care addresses all aspects of well-being, systematic screening for distress and supportive care needs is recommended. Appropriate screening could help support the integration of psychosocial approaches in daily routines in order to achieve holistic cancer care and ensure that the specific care needs of people with cancer are met and that the organisation of such care is optimised.\nObjectives\nTo examine the effectiveness and safety of screening of psychosocial well-being and care needs of people with cancer. To explore the intervention characteristics that contribute to the effectiveness of these screening interventions.\nSearch methods\nWe searched five electronic databases in January 2018: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO, and CINAHL. We also searched five trial registers and screened the contents of relevant journals, citations, and references to find published and unpublished trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) and non-randomised controlled trials (NRCTs) that studied the effect of screening interventions addressing the psychosocial well-being and care needs of people with cancer compared to usual care. These screening interventions could involve self-reporting of people with a patient-reported outcome measures (PROMs) or a semi-structured interview with a screening interventionist, and comprise a solitary screening intervention or screening with guided actions. We excluded studies that evaluated screening integrated as an element in more complex interventions (e.g. therapy, coaching, full care pathways, or care programmes).\nData collection and analysis\nTwo review authors independently extracted the data and assessed methodological quality for each included study using the Cochrane tool for RCTs and the Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tool for NRCTs. Due to the high level of heterogeneity in the included studies, only three were included in meta-analysis. Results of the remaining 23 studies were analysed narratively.\nMain results\nWe included 26 studies (18 RCTs and 8 NRCTs) with sample sizes of 41 to 1012 participants, involving a total of 7654 adults with cancer. Two studies included only men or women; all other studies included both sexes. For most studies people with breast, lung, head and neck, colorectal, prostate cancer, or several of these diagnoses were included; some studies included people with a broader range of cancer diagnosis. Ten studies focused on a solitary screening intervention, while the remaining 16 studies evaluated a screening intervention combined with guided actions. A broad range of intervention instruments was used, and were described by study authors as a screening of health-related quality of life (HRQoL), distress screening, needs assessment, or assessment of biopsychosocial symptoms or overall well-being. In 13 studies, the screening was a self-reported questionnaire, while in the remaining 13 studies an interventionist conducted the screening by interview or paper-pencil assessment. The interventional screenings in the studies were applied 1 to 12 times, without follow-up or from 4 weeks to 18 months after the first interventional screening. 
We assessed risk of bias as high for eight RCTs, low for five RCTs, and unclear for the five remaining RCTs. There were further concerns about the NRCTs (1 = critical risk study; 6 = serious risk studies; 1 = risk unclear).\nDue to considerable heterogeneity in several intervention and study characteristics, we have reported the results narratively for the majority of the evidence.\nIn the narrative synthesis of all included studies, we found very low-certainty evidence for the effect of screening on HRQoL (20 studies). Of these studies, eight found beneficial effects of screening for several subdomains of HRQoL, and 10 found no effects of screening. One study found adverse effects, and the last study did not report quantitative results. We found very low-certainty evidence for the effect of screening on distress (16 studies). Of these studies, two found beneficial effects of screening, and 14 found no effects of screening. We judged the overall certainty of the evidence for the effect of screening on HRQoL to be very low. We found very low-certainty evidence for the effect of screening on care needs (seven studies). Of these studies, three found beneficial effects of screening for several subdomains of care needs, and two found no effects of screening. One study found adverse effects, and the last study did not report quantitative results. We judged the overall level of evidence for the effect of screening on HRQoL to be very low. None of the studies specifically evaluated or reported adverse effects of screening. However, three studies reported unfavourable effects of screening, including lower QoL, more unmet needs, and lower satisfaction.\nThree studies could be included in a meta-analysis. The meta-analysis revealed no beneficial effect of the screening intervention on people with cancer HRQoL (mean difference (MD) 1.65, 95% confidence interval (CI) −4.83 to 8.12, 2 RCTs, 6 months follow-up); distress (MD 0.0, 95% CI −0.36 to 0.36, 1 RCT, 3 months follow-up); or care needs (MD 2.32, 95% CI −7.49 to 12.14, 2 RCTs, 3 months follow-up). However, these studies all evaluated one specific screening intervention (CONNECT) in people with colorectal cancer.\nIn the studies where some effects could be identified, no recurring relationships were found between intervention characteristics and the effectiveness of screening interventions.\nAuthors' conclusions\nWe found low-certainty evidence that does not support the effectiveness of screening of psychosocial well-being and care needs in people with cancer. Studies were heterogeneous in population, intervention, and outcome assessment.\nThe results of this review suggest a need for more uniformity in outcomes and reporting; for the use of intervention description guidelines; for further improvement of methodological certainty in studies and for combining subjective patient-reported outcomes with objective outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our results do not support the screening of psychosocial well-being and care needs in people with cancer. The certainty of the evidence was low, which means that we are uncertain about the results of the review due to variations in characteristics, and results of the studies and study designs."
}
] |
query-laysum
|
8007
|
[
{
"role": "user",
"content": "Abstract: Background\nWithdrawal is a necessary step prior to drug-free treatment or as the endpoint of long-term substitution treatment.\nObjectives\nTo assess the effectiveness of interventions involving the use of alpha2-adrenergic agonists compared with placebo, reducing doses of methadone, symptomatic medications, or an alpha2-adrenergic agonist regimen different to the experimental intervention, for the management of the acute phase of opioid withdrawal. Outcomes included the withdrawal syndrome experienced, duration of treatment, occurrence of adverse effects, and completion of treatment.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (1946 to November week 2, 2015), EMBASE (January 1985 to November week 2, 2015), PsycINFO (1806 to November week 2, 2015), Web of Science, and reference lists of articles.\nSelection criteria\nRandomised controlled trials comparing alpha2-adrenergic agonists (clonidine, lofexidine, guanfacine, tizanidine) with reducing doses of methadone, symptomatic medications or placebo, or comparing different alpha2-adrenergic agonists to modify the signs and symptoms of withdrawal in participants who were opioid dependent.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included 26 randomised controlled trials involving 1728 participants. Six studies compared an alpha2-adrenergic agonist with placebo, 12 with reducing doses of methadone, four with symptomatic medications, and five compared different alpha2-adrenergic agonists. We assessed 10 studies as having a high risk of bias in at least one of the methodological domains that were considered.\nWe found moderate-quality evidence that alpha2-adrenergic agonists were more effective than placebo in ameliorating withdrawal in terms of the likelihood of severe withdrawal (risk ratio (RR) 0.32, 95% confidence interval (CI) 0.18 to 0.57; 3 studies; 148 participants). We found moderate-quality evidence that completion of treatment was significantly more likely with alpha2-adrenergic agonists compared with placebo (RR 1.95, 95% CI 1.34 to 2.84; 3 studies; 148 participants).\nPeak withdrawal severity may be greater with alpha2-adrenergic agonists than with reducing doses of methadone, as measured by the likelihood of severe withdrawal (RR 1.18, 95% CI 0.81 to 1.73; 5 studies; 340 participants; low quality), and peak withdrawal score (standardised mean difference (SMD) 0.22, 95% CI -0.02 to 0.46; 2 studies; 263 participants; moderate quality), but these differences were not significant and there is no significant difference in severity when considered over the entire duration of the withdrawal episode (SMD 0.13, 95% CI -0.24 to 0.49; 3 studies; 119 participants; moderate quality). The signs and symptoms of withdrawal occurred and resolved earlier with alpha2-adrenergic agonists. The duration of treatment was significantly longer with reducing doses of methadone (SMD -1.07, 95% CI -1.31 to -0.83; 3 studies; 310 participants; low quality). Hypotensive or other adverse effects were significantly more likely with alpha2-adrenergic agonists (RR 1.92, 95% CI 1.19 to 3.10; 6 studies; 464 participants; low quality), but there was no significant difference in rates of completion of withdrawal treatment (RR 0.85, 95% CI 0.69 to 1.05; 9 studies; 659 participants; low quality).\nThere were insufficient data for quantitative comparison of different alpha2-adrenergic agonists. 
Available data suggest that lofexidine does not reduce blood pressure to the same extent as clonidine, but is otherwise similar to clonidine.\nAuthors' conclusions\nClonidine and lofexidine are more effective than placebo for the management of withdrawal from heroin or methadone. We detected no significant difference in efficacy between treatment regimens based on clonidine or lofexidine and those based on reducing doses of methadone over a period of around 10 days, but methadone was associated with fewer adverse effects than clonidine, and lofexidine has a better safety profile than clonidine.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Opioid withdrawal was similar with alpha2-adrenergic agonists and reducing doses of methadone, but the duration of treatment was longer and there were fewer adverse effects with methadone. Withdrawal signs and symptoms occurred earlier with alpha2-adrenergic agonists, within a few days of cessation of the opioid drugs. The chances of completing withdrawal treatment were similar.\nClonidine and lofexidine were more effective than placebo in managing withdrawal from heroin or methadone, and were associated with higher chances of completing treatment.\nLofexidine had less effect on blood pressure than clonidine."
}
] |
query-laysum
|
8008
|
[
{
"role": "user",
"content": "Abstract: Background\nHospital charges for lumbar spinal stenosis have increased significantly worldwide in recent times, with great variation in the costs and rates of different surgical procedures. There have also been significant increases in the rate of complex fusion and the use of spinal spacer implants compared to that of traditional decompression surgery, even though the former is known to incur costs up to three times higher. Moreover, the superiority of these new surgical procedures over traditional decompression surgery is still unclear.\nObjectives\nTo determine the efficacy of surgery in the management of patients with symptomatic lumbar spinal stenosis and the comparative effectiveness between commonly performed surgical techniques to treat this condition on patient-related outcomes. We also aimed to investigate the safety of these surgical interventions by including perioperative surgical data and reoperation rates.\nSearch methods\nReview authors performed electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, AMED, Web of Science, LILACS and three trials registries from their inception to 16 June 2016. Authors also conducted citation tracking on the reference lists of included trials and relevant systematic reviews.\nSelection criteria\nThis review included only randomised controlled trials that investigated the efficacy and safety of surgery compared with no treatment, placebo or sham surgery, or with another surgical technique in patients with lumbar spinal stenosis.\nData collection and analysis\nTwo reviewers independently assessed the studies for inclusion and performed the 'Risk of bias' assessment, using the Cochrane Back and Neck Review Group criteria. Reviewers also extracted demographics, surgery details, and types of outcomes to describe the characteristics of included studies. Primary outcomes were pain intensity, physical function or disability status, quality of life, and recovery. The secondary outcomes included measurements related to surgery, such as perioperative blood loss, operation time, length of hospital stay, reoperation rates, and costs. We grouped trials according to the types of surgical interventions being compared and categorised follow-up times as short-term when less than 12 months and long-term when 12 months or more. Pain and disability scores were converted to a common 0 to 100 scale. We calculated mean differences for continuous outcomes and relative risks for dichotomous outcomes. We pooled data using the random-effects model in Review Manager 5.3, and used the GRADE approach to assess the quality of the evidence.\nMain results\nWe included a total of 24 randomised controlled trials (reported in 39 published research articles or abstracts) in this review. The trials included 2352 participants with lumbar spinal stenosis with symptoms of neurogenic claudication. None of the included trials compared surgery with no treatment, placebo or sham surgery. Therefore, all included studies compared two or more surgical techniques. We judged all trials to be at high risk of bias for the blinding of care provider domain, and most of the trials failed to adequately conceal the randomisation process, blind the participants or use intention-to-treat analysis. Five trials compared the effects of fusion in addition to decompression surgery. Our results showed no significant differences in pain relief at long-term (mean difference (MD) -0.29, 95% confidence interval (CI) -7.32 to 6.74). 
Similarly, we found no between-group differences in disability reduction in the long-term (MD 3.26, 95% CI -6.12 to 12.63). Participants who received decompression alone had significantly less perioperative blood loss (MD -0.52 L, 95% CI -0.70 L to -0.34 L) and required shorter operations (MD -107.94 minutes, 95% CI -161.65 minutes to -54.23 minutes) compared with those treated with decompression plus fusion, though we found no difference in the number of reoperations (risk ratio (RR) 1.25, 95% CI 0.81 to 1.92). Another three trials investigated the effects of interspinous process spacer devices compared with conventional bony decompression. These spacer devices resulted in similar reductions in pain (MD -0.55, 95% CI -8.08 to 6.99) and disability (MD 1.25, 95% CI -4.48 to 6.98). The spacer devices required longer operation time (MD 39.11 minutes, 95% CI 19.43 minutes to 58.78 minutes) and were associated with higher risk of reoperation (RR 3.95, 95% CI 2.12 to 7.37), but we found no difference in perioperative blood loss (MD 144.00 mL, 95% CI -209.74 mL to 497.74 mL). Two trials compared interspinous spacer devices with decompression plus fusion. Although we found no difference in pain relief (MD 5.35, 95% CI -1.18 to 11.88), the spacer devices revealed a small but significant effect in disability reduction (MD 5.72, 95% CI 1.28 to 10.15). They were also superior to decompression plus fusion in terms of operation time (MD 78.91 minutes, 95% CI 30.16 minutes to 127.65 minutes) and perioperative blood loss (MD 238.90 mL, 95% CI 182.66 mL to 295.14 mL), however, there was no difference in rate of reoperation (RR 0.70, 95% CI 0.32 to 1.51). Overall there were no differences for the primary or secondary outcomes when different types of surgical decompression techniques were compared among each other. The quality of evidence varied from 'very low quality' to 'high quality'.\nAuthors' conclusions\nThe results of this Cochrane review show a paucity of evidence on the efficacy of surgery for lumbar spinal stenosis, as to date no trials have compared surgery with no treatment, placebo or sham surgery. Placebo-controlled trials in surgery are feasible and needed in the field of lumbar spinal stenosis. Our results demonstrate that at present, decompression plus fusion and interspinous process spacers have not been shown to be superior to conventional decompression alone. More methodologically rigorous studies are needed in this field to confirm our results.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Twenty-four randomised controlled trials were included with a total of 2352 people. We did not find trials that compared surgery with no treatment or placebo surgery, so all included trials compared different surgical techniques. The quality of the evidence from these studies varied from very low quality to high quality. This large variation was mainly due to different study protocols, surgical techniques and quality of reporting according to the 'Risk of bias' assessment. We found that patients who had decompression plus fusion fared no better than those who underwent decompression surgery alone. In fact, decompression plus fusion resulted in more blood loss during surgery than decompression alone. Although the spinal spacers were slightly better than decompression plus fusion in terms of improvements on daily activities, there were no differences when they were compared with decompression alone. Finally, we found no differences between different forms of decompression."
}
] |
query-laysum
|
8009
|
[
{
"role": "user",
"content": "Abstract: Background\nNasal continuous positive airway pressure (NCPAP) is a useful method for providing respiratory support after extubation. Nasal intermittent positive pressure ventilation (NIPPV) can augment NCPAP by delivering ventilator breaths via nasal prongs.\nObjectives\nPrimary objective\nTo determine the effects of management with NIPPV versus NCPAP on the need for additional ventilatory support in preterm infants whose endotracheal tube was removed after a period of intermittent positive pressure ventilation.\nSecondary objectives\nTo compare rates of abdominal distension, gastrointestinal perforation, necrotising enterocolitis, chronic lung disease, pulmonary air leak, mortality, duration of hospitalisation, rates of apnoea and neurodevelopmental status at 18 to 24 months for NIPPV and NCPAP.\nTo compare the effect of NIPPV versus NCPAP delivered via ventilators versus bilevel devices, and assess the effects of the synchronisation of ventilation, and the strength of interventions in different economic settings.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was January 2023.\nSelection criteria\nWe included randomised and quasi-randomised trials of ventilated preterm infants (less than 37 weeks' gestational age (GA)) ready for extubation to non-invasive respiratory support. Interventions were NIPPV and NCPAP.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcome was 1. respiratory failure. Our secondary outcomes were 2. endotracheal reintubation, 3. abdominal distension, 4. gastrointestinal perforation, 5. necrotising enterocolitis (NEC), 6. chronic lung disease, 7. pulmonary air leak, 8. mortality, 9. hospitalisation, 10. apnoea and bradycardia, and 11. neurodevelopmental status.\nWe used GRADE to assess the certainty of evidence.\nMain results\nWe included 19 trials (2738 infants). Compared to NCPAP, NIPPV likely reduces the risk of respiratory failure postextubation (risk ratio (RR) 0.75, 95% confidence interval (CI) 0.67 to 0.84; number needed to treat for an additional beneficial outcome (NNTB) 11, 95% CI 8 to 17; 19 trials, 2738 infants; moderate-certainty evidence) and endotracheal reintubation (RR 0.78, 95% CI 0.70 to 0.87; NNTB 12, 95% CI 9 to 25; 17 trials, 2608 infants, moderate-certainty evidence), and may reduce pulmonary air leaks (RR 0.57, 95% CI 0.37 to 0.87; NNTB 50, 95% CI 33 to infinite; 13 trials, 2404 infants; low-certainty evidence). 
NIPPV likely results in little to no difference in gastrointestinal perforation (RR 0.89, 95% CI 0.58 to 1.38; 8 trials, 1478 infants; low-certainty evidence), NEC (RR 0.86, 95% CI 0.65 to 1.15; 10 trials, 2069 infants; moderate-certainty evidence), chronic lung disease defined as oxygen requirement at 36 weeks (RR 0.93, 95% CI 0.84 to 1.05; 9 trials, 2001 infants; moderate-certainty evidence) and mortality prior to discharge (RR 0.81, 95% CI 0.61 to 1.07; 11 trials, 2258 infants; low-certainty evidence).\nWhen considering subgroup analysis, ventilator-generated NIPPV likely reduces respiratory failure postextubation (RR 0.49, 95% CI 0.40 to 0.62; 1057 infants; I² = 47%; moderate-certainty evidence), while bilevel devices (RR 0.95, 95% CI 0.77 to 1.17; 716 infants) or a mix of both ventilator-generated and bilevel devices likely results in little to no difference (RR 0.87, 95% CI 0.73 to 1.02; 965 infants).\nAuthors' conclusions\nNIPPV likely reduces the incidence of extubation failure and the need for reintubation within 48 hours to one week postextubation more effectively than NCPAP in very preterm infants (GA 28 weeks and above). There is a paucity of data for infants less than 28 weeks' gestation. Pulmonary air leaks were also potentially reduced in the NIPPV group. However, it has no effect on other clinically relevant outcomes such as gastrointestinal perforation, NEC, chronic lung disease or mortality. Ventilator-generated NIPPV appears superior to bilevel devices in reducing the incidence of respiratory failure postextubation and the need for reintubation. Synchronisation used to deliver NIPPV may be important; however, data are insufficient to support strong conclusions.\nFuture trials should enrol a sufficient number of infants, particularly those less than 28 weeks' GA, to detect differences in death or chronic lung disease and should compare different categories of devices, establish the impact of synchronisation of NIPPV on safety and efficacy of the technique as well as the best combination of settings for NIPPV (rate, peak pressure and positive end-expiratory pressure). Trials should strive to match the mean airway pressure between the intervention groups to allow a better comparison. Neurally adjusted ventilatory assist needs further assessment with properly powered randomised trials.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "– Compared to NCPAP, NIPPV likely reduces the risk of respiratory failure after extubation and reintubation.\n– Compared to NCPAP, NIPPV may reduce leaks of air from the air spaces in the lungs."
}
] |
query-laysum
|
8010
|
[
{
"role": "user",
"content": "Abstract: Background\nAbout one-third of women have urinary incontinence (UI) and up to one-tenth have faecal incontinence (FI) after childbirth. Pelvic floor muscle training (PFMT) is commonly recommended during pregnancy and after birth for both preventing and treating incontinence.\nThis is an update of a Cochrane Review previously published in 2017.\nObjectives\nTo assess the effects of PFMT for preventing or treating urinary and faecal incontinence in pregnant or postnatal women, and summarise the principal findings of relevant economic evaluations.\nSearch methods\nWe searched the Cochrane Incontinence Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, CINAHL, ClinicalTrials.gov, WHO ICTRP, and handsearched journals and conference proceedings (searched 7 August 2019), and the reference lists of retrieved studies.\nSelection criteria\nWe included randomised or quasi-randomised trials in which one arm included PFMT. Another arm was no PFMT, usual antenatal or postnatal care, another control condition, or an alternative PFMT intervention.\nPopulations included women who, at randomisation, were continent (PFMT for prevention) or incontinent (PFMT for treatment), and a mixed population of women who were one or the other (PFMT for prevention or treatment).\nData collection and analysis\nWe independently assessed trials for inclusion and risk of bias. We extracted data and assessed the quality of evidence using GRADE.\nMain results\nWe included 46 trials involving 10,832 women from 21 countries. Overall, trials were small to moderately-sized. The PFMT programmes and control conditions varied considerably and were often poorly described. Many trials were at moderate to high risk of bias. Two participants in a study of 43 pregnant women performing PFMT for prevention of incontinence withdrew due to pelvic floor pain. No other trials reported any adverse effects of PFMT.\nPrevention of UI: compared with usual care, continent pregnant women performing antenatal PFMT probably have a lower risk of reporting UI in late pregnancy (62% less; risk ratio (RR) 0.38, 95% confidence interval (CI) 0.20 to 0.72; 6 trials, 624 women; moderate-quality evidence). Antenatal PFMT slightly decreased the risk of UI in the mid-postnatal period (more than three to six months' postpartum) (29% less; RR 0.71, 95% CI 0.54 to 0.95; 5 trials, 673 women; high-quality evidence). There was insufficient information available for the late postnatal period (more than six to 12 months) to determine effects at this time point (RR 1.20, 95% CI 0.65 to 2.21; 1 trial, 44 women; low-quality evidence).\nTreatment of UI: compared with usual care, there is no evidence that antenatal PFMT in incontinent women decreases incontinence in late pregnancy (very low-quality evidence), or in the mid-(RR 0.94, 95% CI 0.70 to 1.24; 1 trial, 187 women; low-quality evidence), or late postnatal periods (very low-quality evidence). 
Similarly, in postnatal women with persistent UI, there is no evidence that PFMT results in a difference in UI at more than six to 12 months postpartum (RR 0.55, 95% CI 0.29 to 1.07; 3 trials; 696 women; low-quality evidence).\nMixed prevention and treatment approach to UI: antenatal PFMT in women with or without UI probably decreases UI risk in late pregnancy (22% less; RR 0.78, 95% CI 0.64 to 0.94; 11 trials, 3307 women; moderate-quality evidence), and may reduce the risk slightly in the mid-postnatal period (RR 0.73, 95% CI 0.55 to 0.97; 5 trials, 1921 women; low-quality evidence). There was no evidence that antenatal PFMT reduces the risk of UI at late postpartum (RR 0.85, 95% CI 0.63 to 1.14; 2 trials, 244 women; moderate-quality evidence). For PFMT started after delivery, there was uncertainty about the effect on UI risk in the late postnatal period (RR 0.88, 95% CI 0.71 to 1.09; 3 trials, 826 women; moderate-quality evidence).\nFaecal incontinence: eight trials reported FI outcomes. In postnatal women with persistent FI, it was uncertain whether PFMT reduced incontinence in the late postnatal period compared to usual care (very low-quality evidence). In women with or without FI, there was no evidence that antenatal PFMT led to a difference in the prevalence of FI in late pregnancy (RR 0.64, 95% CI 0.36 to 1.14; 3 trials, 910 women; moderate-quality evidence). Similarly, for postnatal PFMT in a mixed population, there was no evidence that PFMT reduces the risk of FI in the late postnatal period (RR 0.73, 95% CI 0.13 to 4.21; 1 trial, 107 women, low-quality evidence).\nThere was little evidence about effects on UI or FI beyond 12 months' postpartum. There were few incontinence-specific quality of life data and little consensus on how to measure it.\nAuthors' conclusions\nThis review provides evidence that early, structured PFMT in early pregnancy for continent women may prevent the onset of UI in late pregnancy and postpartum. Population approaches (recruiting antenatal women regardless of continence status) may have a smaller effect on UI, although the reasons for this are unclear. A population-based approach for delivering postnatal PFMT is not likely to reduce UI. Uncertainty surrounds the effects of PFMT as a treatment for UI in antenatal and postnatal women, which contrasts with the more established effectiveness in mid-life women.\nIt is possible that the effects of PFMT might be greater with targeted rather than mixed prevention and treatment approaches, and in certain groups of women. Hypothetically, for instance, women with a high body mass index (BMI) are at risk of UI. Such uncertainties require further testing and data on duration of effect are also needed. The physiological and behavioural aspects of exercise programmes must be described for both PFMT and control groups, and how much PFMT women in both groups do, to increase understanding of what works and for whom.\nFew data exist on FI and it is important that this is included in any future trials. It is essential that future trials use valid measures of incontinence-specific quality of life for both urinary and faecal incontinence. In addition to further clinical studies, economic evaluations assessing the cost-effectiveness of different management strategies for FI and UI are needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, studies were small and most had design problems, including limited details on how women were randomly allocated into groups and poor reporting of measurements. Some of the problems were expected because it was impossible to blind health professionals or women to whether they were exercising or not. The PFMT differed considerably between studies and was often poorly described. The quality of the evidence was generally low to moderate."
}
] |
query-laysum
|
8011
|
[
{
"role": "user",
"content": "Abstract: Background\nTrigger finger is a common clinical disorder, characterised by pain and catching as the patient flexes and extends digits because of disproportion between the diameter of flexor tendons and the A1 pulley. The treatment approach may include non-surgical or surgical treatments. Currently there is no consensus about the best surgical treatment approach (open, percutaneous or endoscopic approaches).\nObjectives\nTo evaluate the effectiveness and safety of different methods of surgical treatment for trigger finger (open, percutaneous or endoscopic approaches) in adults at any stage of the disease.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and LILACS up to August 2017.\nSelection criteria\nWe included randomised or quasi-randomised controlled trials that assessed adults with trigger finger and compared any type of surgical treatment with each other or with any other non-surgical intervention. The major outcomes were the resolution of trigger finger, pain, hand function, participant-reported treatment success or satisfaction, recurrence of triggering, adverse events and neurovascular injury.\nData collection and analysis\nTwo review authors independently selected the trial reports, extracted the data and assessed the risk of bias. Measures of treatment effect for dichotomous outcomes calculated risk ratios (RRs), and mean differences (MDs) or standardised mean differences (SMD) for continuous outcomes, with 95% confidence intervals (CIs). When possible, the data were pooled into meta-analysis using the random-effects model. GRADE was used to assess the quality of evidence for each outcome.\nMain results\nFourteen trials were included, totalling 1260 participants, with 1361 trigger fingers. The age of participants included in the studies ranged from 16 to 88 years; and the majority of participants were women (approximately 70%). The average duration of symptoms ranged from three to 15 months, and the follow-up after the procedure ranged from eight weeks to 23 months.\nThe studies reported nine types of comparisons: open surgery versus steroid injections (two studies); percutaneous surgery versus steroid injection (five studies); open surgery versus steroid injection plus ultrasound-guided hyaluronic acid injection (one study); percutaneous surgery plus steroid injection versus steroid injection (one study); percutaneous surgery versus open surgery (five studies); endoscopic surgery versus open surgery (one study); and three comparisons of types of incision for open surgery (transverse incision of the skin in the distal palmar crease, transverse incision of the skin about 2–3 mm distally from distal palmar crease, and longitudinal incision of the skin) (one study).\nMost studies had significant methodological flaws and were considered at high or unclear risk of selection bias, performance bias, detection bias and reporting bias. The primary comparison was open surgery versus steroid injections, because open surgery is the oldest and the most widely used treatment method and considered as standard surgery, whereas steroid injection is the least invasive control treatment method as reported in the studies in this review and is often used as first-line treatment in clinical practice.\nCompared with steroid injection, there was low-quality evidence that open surgery provides benefits with respect to less triggering recurrence, although it has the disadvantage of being more painful. 
Evidence was downgraded due to study design flaws and imprecision.\nBased on two trials (270 participants) with follow-up from six up to 12 months, 50/130 (or 385 per 1000) individuals had recurrence of trigger finger in the steroid injection group compared with 8/140 (or 65 per 1000; range 35 to 127) in the open surgery group, RR 0.17 (95% CI 0.09 to 0.33), for an absolute risk difference that 29% fewer people had recurrence of symptoms with open surgery (60% fewer to 3% more individuals); the relative change translates to an improvement of 83% in the open surgery group (67% to 91% better).\nAt one week, 9/49 (184 per 1000) people had pain on the palm of the hand in the steroid injection group compared with 38/56 (or 678 per 1000; ranging from 366 to 1000) in the open surgery group, RR 3.69 (95% CI 1.99 to 6.85), for an absolute risk difference that 49% more had pain with open surgery (33% to 66% more); the relative change translates to a worsening of 269% (585% to 99% worse) (one trial, 105 participants).\nBecause of very low-quality evidence from two trials, we are uncertain whether open surgery improves resolution of trigger finger at six to 12 months of follow-up, when compared with steroid injection (131/140 observed in the open surgery group compared with 80/130 in the control group; RR 1.48, 95% CI 0.79 to 2.76); evidence was downgraded due to study design flaws, inconsistency and imprecision. Based on low-quality evidence from two trials with few events (270 participants) and six up to 12 months of follow-up, we are uncertain whether open surgery increased the risk of adverse events (incidence of infection, tendon injury, flare, cutaneous discomfort and fat necrosis) (18/140 observed in the open surgery group compared with 17/130 in the control group; RR 1.02, 95% CI 0.57 to 1.84) and neurovascular injury (9/140 observed in the open surgery group compared with 4/130 in the control group; RR 2.17, 95% CI 0.7 to 6.77). Twelve participants (8 versus 4) did not complete the follow-up, and it was considered that they did not have a positive outcome in the data analysis. We are uncertain whether open surgery was more effective than steroid injection in improving hand function or participant satisfaction, as studies did not report these outcomes.\nAuthors' conclusions\nLow-quality evidence indicates that, compared with steroid injection, open surgical treatment in people with trigger finger may result in a lower recurrence rate from six up to 12 months following the treatment, although it increases the incidence of pain during the first follow-up week. We are uncertain about the effect of open surgery with regard to the resolution rate at six to 12 months of follow-up, compared with steroid injections, due to high heterogeneity and the few events that occurred in the trials; we are also uncertain about the risk of adverse events and neurovascular injury because few events occurred in the studies. Hand function and participant satisfaction were not reported.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane Review is current to August 2017. We included 14 randomised controlled trials involving 1260 participants, totalling 1361 trigger fingers. Two studies compared open surgery versus steroid injections, five studies compared percutaneous surgery versus steroid injection, one study compared open surgery versus steroid injection plus hyaluronic acid injection, one study compared percutaneous surgery plus steroid injection versus steroid injection, five studies compared percutaneous surgery versus open surgery, one study compared endoscopic surgery versus open surgery and one study compared three types of skin incision to open surgery. The majority of participants were female (about 70%); they were aged between 16 and 88 years; and the mean follow-up of participants after the procedure was eight weeks to 23 months. Due to space constraints, the reporting of all results was limited to the main comparison — open surgery versus steroid injection — because open surgery is the oldest and the most widely used treatment method and considered as standard surgery, whereas steroid injection is the least invasive control treatment method as reported in the studies in this review and is often used as first-line treatment in clinical practice."
}
] |
query-laysum
|
8012
|
[
{
"role": "user",
"content": "Abstract: Background\nDiabetes is the leading cause of end-stage kidney disease (ESKD) around the world. Blood pressure lowering and glucose control are used to reduce diabetes-associated disability including kidney failure. However there is a lack of an overall evidence summary of the optimal target range for blood glucose control to prevent kidney failure.\nObjectives\nTo evaluate the benefits and harms of intensive (HbA1c < 7% or fasting glucose levels < 120 mg/dL versus standard glycaemic control (HbA1c ≥ 7% or fasting glucose levels ≥ 120 mg/dL for preventing the onset and progression of kidney disease among adults with diabetes.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Specialised Register up to 31 March 2017 through contact with the Information Specialist using search terms relevant to this review. Studies contained in the Specialised Register are identified through search strategies specifically designed for CENTRAL, MEDLINE, and EMBASE; handsearching conference proceedings; and searching the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nRandomised controlled trials evaluating glucose-lowering interventions in which people (aged 14 year or older) with type 1 or 2 diabetes with and without kidney disease were randomly allocated to tight glucose control or less stringent blood glucose targets.\nData collection and analysis\nTwo authors independently assessed studies for eligibility and risks of bias, extracted data and checked the processes for accuracy. Outcomes were mortality, cardiovascular complications, doubling of serum creatinine (SCr), ESKD and proteinuria. Confidence in the evidence was assessing using GRADE. Summary estimates of effect were obtained using a random-effects model, and results were expressed as risk ratios (RR) and their 95% confidence intervals (CI) for dichotomous outcomes, and mean difference (MD) and 95% CI for continuous outcomes.\nMain results\nFourteen studies involving 29,319 people with diabetes were included and 11 studies involving 29,141 people were included in our meta-analyses. Treatment duration was 56.7 months on average (range 6 months to 10 years). Studies included people with a range of kidney function. Incomplete reporting of key methodological details resulted in uncertain risks of bias in many studies. Using GRADE assessment, we had moderate confidence in the effects of glucose lowering strategies on ESKD, all-cause mortality, myocardial infarction, and progressive protein leakage by kidney disease and low or very low confidence in effects of treatment on death related to cardiovascular complications and doubling of serum creatinine (SCr).\nFor the primary outcomes, tight glycaemic control may make little or no difference to doubling of SCr compared with standard control (4 studies, 26,874 participants: RR 0.84, 95% CI 0.64 to 1.11; I2= 73%, low certainty evidence), development of ESKD (4 studies, 23,332 participants: RR 0.62, 95% CI 0.34 to 1.12; I2= 52%; low certainty evidence), all-cause mortality (9 studies, 29,094 participants: RR 0.99, 95% CI 0.86 to 1.13; I2= 50%; moderate certainty evidence), cardiovascular mortality (6 studies, 23,673 participants: RR 1.19, 95% CI 0.73 to 1.92; I2= 85%; low certainty evidence), or sudden death (4 studies, 5913 participants: RR 0.82, 95% CI 0.26 to 2.57; I2= 85%; very low certainty evidence). 
People who received treatment to achieve tighter glycaemic control probably experienced lower risks of non-fatal myocardial infarction (5 studies, 25,596 participants: RR 0.82, 95% CI 0.67 to 0.99; I2= 46%, moderate certainty evidence), onset of microalbuminuria (4 studies, 19,846 participants: RR 0.82, 95% CI 0.71 to 0.93; I2= 61%, moderate certainty evidence), and progression of microalbuminuria (5 studies, 13,266 participants: RR 0.59, 95% CI 0.38 to 0.93; I2= 75%, moderate certainty evidence). In absolute terms, tight versus standard glucose control treatment in 1,000 adults would lead to between zero and two people avoiding non-fatal myocardial infarction, while seven adults would avoid experiencing new-onset albuminuria and two would avoid worsening albuminuria.\nAuthors' conclusions\nThis review suggests that people who receive intensive glycaemic control for treatment of diabetes had comparable risks of kidney failure, death and major cardiovascular events as people who received less stringent blood glucose control, while experiencing small clinical benefits on the onset and progression of microalbuminuria and myocardial infarction. The adverse effects of glycaemic management are uncertain. Based on absolute treatment effects, the clinical impact of targeting an HbA1c < 7% or blood glucose < 6.6 mmol/L is unclear and the potential harms of this treatment approach are largely unmeasured.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked at the evidence for tighter blood glucose control (lower blood glucose in the long term, that is HbA1c < 7% ) compared with less tight blood glucose control (HbA1c > 7%) in people who have either type 1 or type 2 diabetes. Blood glucose was achieved by any sort of treatment including pills or insulin."
}
] |
query-laysum
|
8013
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic respiratory conditions are major causes of mortality and morbidity. Children with chronic health conditions have increased morbidity associated with their physical, emotional, and general well-being. Acute respiratory exacerbations (AREs) are common in children with chronic respiratory disease, often requiring admission to hospital. Reducing the frequency of AREs and recurrent hospitalisations is therefore an important goal in the individual and public health management of chronic respiratory illnesses in children. Discharge planning is used to decide what a person needs for transition from one level of care to another and is usually considered in the context of discharge from hospital to the home. Discharge planning from hospital for ongoing management of an illness has historically been referral to a general practitioner or allied health professional or self management by the individual and their family with limited communication between the hospital and patient once discharged. Effective discharge planning can decrease the risk of recurrent AREs requiring medical care. An individual caseworker-assigned discharge plan may further decrease exacerbations.\nObjectives\nTo evaluate the efficacy of individual caseworker-assigned discharge plans, as compared to non-caseworker-assigned plans, in preventing hospitalisation for AREs in children with chronic lung diseases such as asthma and bronchiectasis.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register of Trials, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, trials registries, and reference lists of articles. The latest searches were undertaken in November 2017.\nSelection criteria\nAll randomised controlled trials comparing individual caseworker-assigned discharge planning compared to traditional discharge-planning approaches (including self management), and their effectiveness in reducing the subsequent need for emergency care for AREs (hospital admissions, emergency department visits, and/or unscheduled general practitioner visits) in children hospitalised with an acute exacerbation of chronic respiratory disease. We excluded studies that included children with cystic fibrosis.\nData collection and analysis\nWe used standard Cochrane Review methodological approaches. Relevant studies were independently selected in duplicate. Two review authors independently assessed trial quality and extracted data. We contacted the authors of one study for further information.\nMain results\nWe included four studies involving a total of 773 randomised participants aged between 14 months and 16 years. All four studies involved children with asthma, with the case-planning undertaken by a trained nurse educator. However, the discharge planning/education differed among the studies. We could include data from only two studies (361 children) in the meta-analysis. Two further studies enrolled children in both inpatient and outpatient settings, and one of these studies also included children with acute wheezing illness (no previous asthma diagnosis); the data specific to this review could not be obtained. For the primary outcome of exacerbations requiring hospitalisation, those in the intervention group were significantly less likely to be rehospitalised (odds ratio (OR) 0.29, 95% confidence interval (CI) 0.16 to 0.50) compared to controls. This equates to 189 (95% CI 124 to 236) fewer admissions per 1000 children. No adverse events were reported in any study. 
In the context of substantial statistical heterogeneity between the two studies, there were no statistically significant effects on emergency department (OR 0.37, 95% CI 0.04 to 3.05) or general practitioner (OR 0.87, 95% CI 0.22 to 3.44) presentations. There were no data on cost-effectiveness, length of stay of subsequent hospitalisations, or adherence to medications. One study reported quality of life, with no significant differences observed between the intervention and control groups.\nWe considered three of the studies to have an unclear risk of bias, primarily due to inadequate description of the blinding of participants and investigators. The fourth study was assessed as at high risk of bias as a single unblinded investigator was used. Using the GRADE system, we assessed the quality of the evidence as moderate for the outcome of hospitalisation and low for the outcomes of emergency department visits and general practitioner consultations.\nAuthors' conclusions\nCurrent evidence suggests that individual caseworker-assigned discharge plans, as compared to non-caseworker-assigned plans, may be beneficial in preventing hospital readmissions for acute exacerbations in children with asthma. There was no clear indication that the intervention reduces emergency department and general practitioner attendances for asthma, and there is an absence of data for children with other chronic respiratory conditions. Given the potential benefit and cost savings to the healthcare sector and families if hospitalisations and outpatient attendances can be reduced, there is a need for further randomised controlled trials encompassing different chronic respiratory illnesses, ethnicity, socio-economic settings, and cost-effectiveness, as well as defining the essential components of a complex intervention.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Acute exacerbations (flare-ups) of long-term breathing diseases in children leads to high use of health resources and poor quality of life for children and their families. Providing extra support and education to a child and his/her family during a hospitalisation for an acute flare-up may improve their quality of life and reduce future healthcare visits. A caseworker assigned to each child during a hospital admission may help provide individual education and discharge planning and ongoing support once the child has been discharged. We reviewed whether individualised case management support services during and following hospitalisation for acute flare-ups of long-term breathing diseases in children is beneficial. Specifically, we wanted to know if caseworkers can help prevent further hospital admissions and reduce the number of visits to other health services such as emergency departments and general practitioners."
}
] |
query-laysum
|
8014
|
[
{
"role": "user",
"content": "Abstract: Background\nOrgan injury is a common and severe complication of cardiac surgery that contributes to the majority of deaths. There are no effective treatment or prevention strategies. It has been suggested that innate immune system activation may have a causal role in organ injury. A wide range of organ protection interventions targeting the innate immune response have been evaluated in randomised controlled trials (RCTs) in adult cardiac surgery patients, with inconsistent results in terms of effectiveness.\nObjectives\nThe aim of the review was to summarise the results of RCTs of organ protection interventions targeting the innate immune response in adult cardiac surgery. The review considered whether the interventions had a treatment effect on inflammation, important clinical outcomes, or both.\nSearch methods\nCENTRAL, MEDLINE, Embase, conference proceedings and two trial registers were searched on October 2022 together with reference checking to identify additional studies.\nSelection criteria\nRCTs comparing organ protection interventions targeting the innate immune response versus placebo or no treatment in adult patients undergoing cardiac surgery where the treatment effect on innate immune activation and on clinical outcomes of interest were reported.\nData collection and analysis\nSearches, study selection, quality assessment, and data extractions were performed independently by pairs of authors. The primary inflammation outcomes were peak IL-6 and IL-8 concentrations in blood post-surgery. The primary clinical outcome was in-hospital or 30-day mortality. Treatment effects were expressed as risk ratios (RR) and standardised mean difference (SMD) with 95% confidence intervals (CI). Meta-analyses were performed using random effects models, and heterogeneity was assessed using I2.\nMain results\nA total of 40,255 participants from 328 RCTs were included in the synthesis. The effects of treatments on IL-6 (SMD -0.77, 95% CI -0.97 to -0.58, I2 = 92%) and IL-8 (SMD -0.92, 95% CI -1.20 to -0.65, I2 = 91%) were unclear due to heterogeneity. Heterogeneity for inflammation outcomes persisted across multiple sensitivity and moderator analyses. The pooled treatment effect for in-hospital or 30-day mortality was RR 0.78, 95% CI 0.68 to 0.91, I2 = 0%, suggesting a significant clinical benefit. There was little or no treatment effect on mortality when analyses were restricted to studies at low risk of bias. Post hoc analyses failed to demonstrate consistent treatment effects on inflammation and clinical outcomes. Levels of certainty for pooled treatment effects on the primary outcomes were very low.\nAuthors' conclusions\nA systematic review of RCTs of organ protection interventions targeting innate immune system activation did not resolve uncertainty as to the effectiveness of these treatments, or the role of innate immunity in organ injury following cardiac surgery.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Results from 328 studies including 40,255 patients are hereby presented. The effect on inflammatory activity (measured via the level of the main signals circulated in our blood) was not clearly changed. Mortality, on the other hand, was reduced in these studies, although this effect could not be confirmed when we repeated our calculations in the studies conducted at the highest quality standard exclusively. Other signs of inflammation and organ damage were not clearly changed.\nIn conclusion, further studies are needed before we can understand whether, by targeting inflammation, we can prevent organ damage."
}
] |
query-laysum
|
8015
|
[
{
"role": "user",
"content": "Abstract: Background\nCannabis use disorder is the most commonly reported illegal substance use disorder in the general population; although demand for assistance from health services is increasing internationally, only a minority of those with the disorder seek professional assistance. Treatment studies have been published, but pressure to establish public policy requires an updated systematic review of cannabis-specific treatments for adults.\nObjectives\nTo evaluate the efficacy of psychosocial interventions for cannabis use disorder (compared with inactive control and/or alternative treatment) delivered to adults in an out-patient or community setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 6), MEDLINE, EMBASE, PsycINFO, the Cumulaive Index to Nursing and Allied Health Literature (CINAHL) and reference lists of articles. Searched literature included all articles published before July 2015.\nSelection criteria\nAll randomised controlled studies examining a psychosocial intervention for cannabis use disorder (without pharmacological intervention) in comparison with a minimal or inactive treatment control or alternative combinations of psychosocial interventions.\nData collection and analysis\nWe used standard methodological procedures as expected by The Cochrane Collaboration.\nMain results\nWe included 23 randomised controlled trials involving 4045 participants. A total of 15 studies took place in the United States, two in Australia, two in Germany and one each in Switzerland, Canada, Brazil and Ireland. Investigators delivered treatments over approximately seven sessions (range, one to 14) for approximately 12 weeks (range, one to 56).\nOverall, risk of bias across studies was moderate, that is, no trial was at high risk of selection bias, attrition bias or reporting bias. Further, trials included a large total number of participants, and each trial ensured the fidelity of treatments provided. In contrast, because of the nature of the interventions provided, participant blinding was not possible, and reports of researcher blinding often were unclear or were not provided. Half of the reviewed studies included collateral verification or urinalysis to confirm self report data, leading to concern about performance and detection bias. Finally, concerns of other bias were based on relatively consistent lack of assessment of non-cannabis substance use or use of additional treatments before or during the trial period.\nA subset of studies provided sufficient detail for comparison of effects of any intervention versus inactive control on primary outcomes of interest at early follow-up (median, four months). Results showed moderate-quality evidence that approximately seven out of 10 intervention participants completed treatment as intended (effect size (ES) 0.71, 95% confidence interval (CI) 0.63 to 0.78, 11 studies, 1424 participants), and that those receiving psychosocial intervention used cannabis on fewer days compared with those given inactive control (mean difference (MD) 5.67, 95% CI 3.08 to 8.26, six studies, 1144 participants). 
In addition, low-quality evidence revealed that those receiving intervention were more likely to report point-prevalence abstinence (risk ratio (RR) 2.55, 95% CI 1.34 to 4.83, six studies, 1166 participants) and reported fewer symptoms of dependence (standardised mean difference (SMD) 4.15, 95% CI 1.67 to 6.63, four studies, 889 participants) and cannabis-related problems compared with those given inactive control (SMD 3.34, 95% CI 1.26 to 5.42, six studies, 2202 participants). Finally, very low-quality evidence indicated that those receiving intervention reported using fewer joints per day compared with those given inactive control (SMD 3.55, 95% CI 2.51 to 4.59, eight studies, 1600 participants). Notably, subgroup analyses found that interventions of more than four sessions delivered over longer than one month (high intensity) produced consistently improved outcomes (particularly in terms of cannabis use frequency and severity of dependence) in the short term as compared with low-intensity interventions.\nThe most consistent evidence supports the use of cognitive-behavioural therapy (CBT), motivational enhancement therapy (MET) and particularly their combination for assisting with reduction of cannabis use frequency at early follow-up (MET: MD 4.45, 95% CI 1.90 to 7.00, four studies, 612 participants; CBT: MD 10.94, 95% CI 7.44 to 14.44, one study, 134 participants; MET + CBT: MD 7.38, 95% CI 3.18 to 11.57, three studies, 398 participants) and severity of dependence (MET: SMD 4.07, 95% CI 1.97 to 6.17, two studies, 316 participants; MET + CBT: SMD 7.89, 95% CI 0.93 to 14.85, three studies, 573 participants), although no particular intervention was consistently effective at nine-month follow-up or later. In addition, data from five out of six studies supported the utility of adding voucher-based incentives for cannabis-negative urines to enhance treatment effect on cannabis use frequency. A single study found contrasting results throughout a 12-month follow-up period, as post-treatment outcomes related to overall reduction in cannabis use frequency favoured CBT alone without the addition of abstinence-based or treatment adherence-based contingency management. In contrast, evidence of drug counselling, social support, relapse prevention and mindfulness meditation was weak because identified studies were few, information on treatment outcomes insufficient and rates of treatment adherence low. In line with treatments for other substance use, abstinence rates were relatively low overall, with approximately one-quarter of participants abstinent at final follow-up. Finally, three studies found that intervention was comparable with treatment as usual among participants in psychiatric clinics and reported no between-group differences in any of the included outcomes.\nAuthors' conclusions\nIncluded studies were heterogeneous in many aspects, and important questions regarding the most effective duration, intensity and type of intervention were raised and partially resolved. Generalisability of findings was unclear, most notably because of the limited number of localities and homogeneous samples of treatment seekers. The rate of abstinence was low and unstable although comparable with treatments for other substance use. Psychosocial intervention was shown, in comparison with minimal treatment controls, to reduce frequency of use and severity of dependence in a fairly durable manner, at least in the short term. 
Among the included intervention types, an intensive intervention provided over more than four sessions based on the combination of MET and CBT with abstinence-based incentives was most consistently supported for treatment of cannabis use disorder.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Similar to other illicit drug disorders, cannabis use disorder is not easily treated by psychosocial interventions provided in out-patient and community settings. CBT in individual and group sessions and MET in individual sessions were the most consistently explored treatments; they have demonstrated effectiveness over control conditions. In particular, psychosocial treatment was consistently effective over no treatment in reducing the frequency of cannabis use (with nine studies showing superior outcomes and four showing comparable outcomes), quantity used per occasion (seven studies showing superior outcomes and two showing comparable outcomes) and severity of dependence (with seven studies showing superior outcomes and two showing comparable outcomes). In contrast, treatment was not likely to be more effective than no treatment in improving cannabis-related problems (with four studies showing superior outcomes and seven showing comparable outcomes), motivation to quit (with no studies showing superior outcomes and three showing comparable outcomes), other substance use (with no studies showing superior outcomes and seven showing comparable outcomes) or mental health (with no studies showing superior outcomes and five showing comparable outcomes). Comparison of studies reporting treatment gains was possible for a subset of studies with short-term follow-up of approximately four months. This analysis found that those receiving any intervention reported fewer days of cannabis use, used fewer joints per day and reported fewer symptoms of dependence and fewer cannabis-related problems. High-intensity interventions of more than four sessions and those delivered over longer than one month, particularly MET + CBT interventions, were most effective. In addition, interventions were completed as intended by most participants. Notably, three studies investigated the effectiveness of psychosocial intervention compared with treatment as usual delivered at psychiatric out-patient centres and reported little evidence of significant group differences in treatment outcomes. Finally, results from six studies, which included contingency management adjunct treatments, were mixed but suggested that improvements in cannabis use frequency and severity of dependence were likely when combined with CBT or with MET + CBT. Invesigators reported no adverse effects."
}
] |
query-laysum
|
8016
|
[
{
"role": "user",
"content": "Abstract: Background\nLower urinary tract symptoms caused by benign prostatic obstruction (LUTS/BPO) represents one of the most common clinical complaints in men. Physical activity might represent a viable first-line intervention for treating LUTS/BPO.\nObjectives\nTo assess the effects of physical activity for lower urinary tract symptoms caused by benign prostatic obstruction (LUTS/BPO).\nSearch methods\nWe performed a comprehensive search of multiple databases (CENTRAL, MEDLINE, Embase, Web of Science, LILACS, ClinicalTrials.gov, and WHO ICTRP); checked the reference lists of retrieved articles; and handsearched abstract proceedings of conferences with no restrictions on the language of publication or publication status from database inception to 6 November 2018.\nSelection criteria\nWe included published and unpublished randomised controlled and controlled clinical trials that included men diagnosed with LUTS/BPO. We excluded studies in which medical history suggested non-BPO causes of LUTS or prior invasive therapies to physical activity or that used electrical stimulation.\nData collection and analysis\nTwo review authors independently assessed study eligibility, extracted data, and assessed the risk of bias of included studies. We assessed primary outcomes (symptom score for LUTS; response rate, defined as 20% improvement in symptom score; withdrawal due to adverse events) and secondary outcomes (change of medication use; need for an invasive procedure; postvoid residual urine). We assessed the quality of the evidence using the GRADE approach.\nMain results\nWe included six studies that randomised 652 men over 40 years old with moderate or severe LUTS. The four different comparisons were as follows:\nPhysical activity versus watchful waiting\nTwo RCTs randomised 119 participants. The interventions included tai chi and pelvic floor exercise. The evidence was overall of very low quality, and we are uncertain about the effects of physical activity on symptom score for LUTS (mean difference (MD) -8.1, 95% confidence interval (CI) -13.2 to -3.1); response rate (risk ratio (RR) 1.80, 95% CI 0.81 to 4.02; 286 more men per 1000, 95% CI 68 fewer to 1079 more); and withdrawal due to adverse events (RR 1.00, 95% CI 0.59 to 1.69; 0 fewer men per 1000, 95% CI 205 fewer to 345 more).\nPhysical activity as part of self-management programme versus watchful waiting\nTwo RCTs randomised 362 participants. Pelvic floor exercise was one of multiple intervention components. The evidence was of very low quality, and we are uncertain about the effects of physical activity for symptom score for LUTS (MD -6.2, 95% CI -9.9 to -2.5); response rate (RR 2.36, 95% CI 1.32 to 4.21; 424 more men per 1000, 95% CI 100 more to 1000 more); and withdrawal due to adverse events (risk difference 0.00, 95% CI -0.05 to 0.06; 65 fewer men per 1000, 95% CI 65 fewer to 65 fewer).\nPhysical activity as part of weight reduction programme versus watchful waiting\nOne RCT randomised 130 participants. An unclear type of intense exercise was one of multiple intervention components. 
The evidence was of very low quality, and we are uncertain about the effects for symptom score for LUTS (MD -1.1, 95% CI -3.5 to 1.3); response rate (RR 1.20, 95% CI 0.74 to 1.94; 67 more men per 1000, 95% CI 87 fewer to 313 more); and withdrawal due to adverse events (RR 1.63, 95% CI 1.03 to 2.57; 184 more men per 1000, 95% CI 9 more to 459 more).\nPhysical activity versus alpha-blockers\nOne RCT randomised 41 participants to pelvic floor exercise or alpha-blockers. The evidence was of very low quality, and we are uncertain about the effects for symptom score for LUTS (MD 2.8, 95% CI -0.9 to 6.4) and response rate (RR 0.80, 95% CI 0.55 to 1.15; 167 fewer men per 1000, 95% CI 375 fewer to 125 more). The evidence was of low quality for withdrawal due to adverse events; the effects for this outcome may be similar between interventions (RR 0.86, 95% CI 0.06 to 12.89; 7 fewer men per 1000, 95% CI 49 fewer to 626 more).\nAuthors' conclusions\nWe rated the quality of the evidence for most of the effects of physical activity for LUTS/BPO as very low. We are therefore uncertain whether physical activity affects symptom scores for LUTS, response rate, and withdrawal due to adverse events. Our confidence in the estimates was lowered due to study limitations, inconsistency, indirectness, and imprecision. Additional high-quality research is necessary.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "As men grow older, many develop bothersome urinary problems, such as a weak stream and frequent urination during the day or nightime. A common reason for these symptoms is enlargement of the prostate. Treatment for this problem includes lifestyle changes (like drinking less fluid), medications, and surgical procedures. Physical activity, defined as movement produced by muscles, may also improve these symptoms. We conducted this review to compare how physical activity compares to other methods, in treating such urinary tract symptoms."
}
] |
query-laysum
|
8017
|
[
{
"role": "user",
"content": "Abstract: Background\nHead and neck cancer treatment has developed over the last decade, with improved mortality and survival rates, but the treatments often result in dysphagia (a difficulty in swallowing) as a side effect. This may be acute, resolving after treatment, or remain as a long-term negative sequela of head and neck cancer (HNC) treatment. Interventions to counteract the problems associated with dysphagia include swallowing exercises or modification of diet (bolus texture, size), or both.\nObjectives\nTo determine the effects of therapeutic exercises, undertaken before, during and/or immediately after HNC treatment, on swallowing, aspiration and adverse events such as chest infections, aspiration pneumonia and profound weight loss, in people treated curatively for advanced-stage (stage III, stage IV) squamous cell carcinoma of the head and neck.\nSearch methods\nThe Cochrane ENT Information Specialist searched the ENT Trials Register; Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 6); MEDLINE; PubMed; Embase; CINAHL; LILACS; KoreaMed; IndMed; PakMediNet; Web of Science; ClinicalTrials.gov; ICTRP; speechBITE; Google Scholar; Google and additional sources for published and unpublished trials. The date of the search was 1 July 2016.\nSelection criteria\nWe selected randomised controlled trials (RCTs) of adults with head and neck cancer (stage III, stage IV) who underwent therapeutic exercises for swallowing before, during and/or immediately after HNC treatment to help produce safe and efficient swallowing. The main comparison was therapeutic exercises versus treatment as usual (TAU). Other possible comparison pairs included: therapeutic exercises versus sham exercises and therapeutic exercises plus TAU versus TAU. TAU consisted of reactive management of a patient's dysphagia, when this occurred. When severe, this included insertion of either a percutaneous endoscopic gastroscopy or nasogastric tube for non-oral feeding.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were: safety and efficiency of oral swallowing, as measured by reduced/no aspiration; oropharyngeal swallowing efficiency (OPSE) measures, taken from videofluoroscopy swallowing studies; and adverse events, such as chest infections, aspiration pneumonia and profound weight loss. Secondary outcomes were time to return to function (swallowing); self-reported changes to quality of life; changes to psychological well-being - depression, anxiety and stress; patient satisfaction with the intervention; patient compliance with the intervention; and cost-effectiveness of the intervention.\nMain results\nWe included six studies (reported as seven papers) involving 326 participants whose ages ranged from 39 to 83 years, with a gender bias towards men (73% to 95% across studies), reflecting the characteristics of patients with HNC. The risk of bias in the studies was generally high.\nWe did not pool data from studies because of significant differences in the interventions and outcomes evaluated. 
We found a lack of standardisation and consistency in the outcomes measured and the endpoints at which they were evaluated.\nWe found no evidence that therapeutic exercises were better than TAU, or any other treatment, in improving the safety and efficiency of oral swallowing (our primary outcome) or in improving any of the secondary outcomes.\nUsing the GRADE system, we classified the overall quality of the evidence for each outcome as very low, due to the limited number of trials and their low quality. There were no adverse events reported that were directly attributable to the intervention (swallowing exercises).\nAuthors' conclusions\nWe found no evidence that undertaking therapeutic exercises before, during and/or immediately after HNC treatment leads to improvement in oral swallowing. This absence of evidence may be due to the small participant numbers in trials, resulting in insufficient power to detect any difference. Data from the identified trials could not be combined due to differences in the choice of primary outcomes and in the measurement tools used to assess them, and the differing baseline and endpoints across studies.\nDesigning and implementing studies with stronger methodological rigour is essential. There needs to be agreement about the key primary outcomes, the choice of validated assessment tools to measure them and the time points at which those measurements are made.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A swallowing impairment (dysphagia) commonly occurs as a result of head and neck cancer treatment. It may be temporary, resulting from a dry mouth or irritation of the lining of the mouth during treatment, or permanent due to a narrowing (stricture) of the throat after surgery and/or radiotherapy. Undertaking swallowing exercises before, during and/or immediately after HNC treatment may prevent dysphagia occurring, or may reduce its severity.\nClinicians who are treating dysphagia in head and neck cancer patients lack evidence-based guidelines so it is challenging to determine which interventions are suitable, but many speech and language therapists encourage patients to undertake exercises intensively throughout head and neck cancer treatment, based on a 'use it or lose it' principle."
}
] |
query-laysum
|
8018
|
[
{
"role": "user",
"content": "Abstract: Background\nCardiovascular disease, which includes coronary artery disease, stroke and peripheral vascular disease, is a leading cause of death worldwide. Homocysteine is an amino acid with biological functions in methionine metabolism. A postulated risk factor for cardiovascular disease is an elevated circulating total homocysteine level. The impact of homocysteine-lowering interventions, given to patients in the form of vitamins B6, B9 or B12 supplements, on cardiovascular events has been investigated. This is an update of a review previously published in 2009, 2013, and 2015.\nObjectives\nTo determine whether homocysteine-lowering interventions, provided to patients with and without pre-existing cardiovascular disease are effective in preventing cardiovascular events, as well as reducing all-cause mortality, and to evaluate their safety.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2017, Issue 5), MEDLINE (1946 to 1 June 2017), Embase (1980 to 2017 week 22) and LILACS (1986 to 1 June 2017). We also searched Web of Science (1970 to 1 June 2017). We handsearched the reference lists of included papers. We also contacted researchers in the field. There was no language restriction in the search.\nSelection criteria\nWe included randomised controlled trials assessing the effects of homocysteine-lowering interventions for preventing cardiovascular events with a follow-up period of one year or longer. We considered myocardial infarction and stroke as the primary outcomes. We excluded studies in patients with end-stage renal disease.\nData collection and analysis\nWe performed study selection, 'Risk of bias' assessment and data extraction in duplicate. We estimated risk ratios (RR) for dichotomous outcomes. We calculated the number needed to treat for an additional beneficial outcome (NNTB). We measured statistical heterogeneity using the I2 statistic. We used a random-effects model. We conducted trial sequential analyses, Bayes factor, and fragility indices where appropriate.\nMain results\nIn this third update, we identified three new randomised controlled trials, for a total of 15 randomised controlled trials involving 71,422 participants. Nine trials (60%) had low risk of bias, length of follow-up ranged from one to 7.3 years. Compared with placebo, there were no differences in effects of homocysteine-lowering interventions on myocardial infarction (homocysteine-lowering = 7.1% versus placebo = 6.0%; RR 1.02, 95% confidence interval (CI) 0.95 to 1.10, I2 = 0%, 12 trials; N = 46,699; Bayes factor 1.04, high-quality evidence), death from any cause (homocysteine-lowering = 11.7% versus placebo = 12.3%, RR 1.01, 95% CI 0.96 to 1.06, I2 = 0%, 11 trials, N = 44,817; Bayes factor = 1.05, high-quality evidence), or serious adverse events (homocysteine-lowering = 8.3% versus comparator = 8.5%, RR 1.07, 95% CI 1.00 to 1.14, I2 = 0%, eight trials, N = 35,788; high-quality evidence). Compared with placebo, homocysteine-lowering interventions were associated with reduced stroke outcome (homocysteine-lowering = 4.3% versus comparator = 5.1%, RR 0.90, 95% CI 0.82 to 0.99, I2 = 8%, 10 trials, N = 44,224; high-quality evidence). 
Compared with low doses, there were uncertain effects of high doses of homocysteine-lowering interventions on stroke (high = 10.8% versus low = 11.2%, RR 0.90, 95% CI 0.66 to 1.22, I2 = 72%, two trials, N = 3929; very low-quality evidence).\nWe found no evidence of publication bias.\nAuthors' conclusions\nIn this third update of the Cochrane review, there were no differences in effects of homocysteine-lowering interventions in the form of supplements of vitamins B6, B9 or B12, given alone or in combination, compared with placebo on myocardial infarction, death from any cause or adverse events. In terms of stroke, this review found a small difference in effect favouring homocysteine-lowering interventions in the form of supplements of vitamins B6, B9 or B12, given alone or in combination, compared with placebo.\nThere were uncertain effects of enalapril plus folic acid compared with enalapril on stroke; approximately 143 (95% CI 85 to 428) people would need to be treated for 5.4 years to prevent 1 stroke; this evidence emerged from one mega-trial.\nTrial sequential analyses showed that additional trials are unlikely to increase the certainty about the findings regarding homocysteine-lowering interventions versus placebo. There is a need for additional trials comparing homocysteine-lowering interventions combined with antihypertensive medication versus antihypertensive medication, and homocysteine-lowering interventions at high doses versus homocysteine-lowering interventions at low doses. Potential trials should be large and co-operative.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cardiovascular disease, which includes heart attacks and strokes, is the number one cause of death worldwide. Many people with cardiovascular disease may not have symptoms, but be at high risk. Diabetes mellitus, high blood pressure, smoking and a high cholesterol, as well as a family history of cardiovascular disease are well known risk factors. Elevated total homocysteine levels have recently been identified as a risk factor for cardiovascular disease. Homocysteine is an amino acid, its levels in the blood are influenced by blood levels of B vitamins: cyanocobalamin (B12), folic acid (B9) and pyridoxine (B6). This report is an update from a previous review published in 2015."
}
] |
query-laysum
|
8019
|
[
{
"role": "user",
"content": "Abstract: Background\nLow-back pain (LBP) is one of the most common and costly musculoskeletal problems in modern society. It is experienced by 70% to 80% of adults at some time in their lives. Massage therapy has the potential to minimize pain and speed return to normal function.\nObjectives\nTo assess the effects of massage therapy for people with non-specific LBP.\nSearch methods\nWe searched PubMed to August 2014, and the following databases to July 2014: MEDLINE, EMBASE, CENTRAL, CINAHL, LILACS, Index to Chiropractic Literature, and Proquest Dissertation Abstracts. We also checked reference lists. There were no language restrictions used.\nSelection criteria\nWe included only randomized controlled trials of adults with non-specific LBP classified as acute, sub-acute or chronic. Massage was defined as soft-tissue manipulation using the hands or a mechanical device. We grouped the comparison groups into two types: inactive controls (sham therapy, waiting list, or no treatment), and active controls (manipulation, mobilization, TENS, acupuncture, traction, relaxation, physical therapy, exercises or self-care education).\nData collection and analysis\nWe used standard Cochrane methodological procedures and followed CBN guidelines. Two independent authors performed article selection, data extraction and critical appraisal.\nMain results\nIn total we included 25 trials (3096 participants) in this review update. The majority was funded by not-for-profit organizations. One trial included participants with acute LBP, and the remaining trials included people with sub-acute or chronic LBP (CLBP). In three trials massage was done with a mechanical device, and the remaining trials used only the hands. The most common type of bias in these studies was performance and measurement bias because it is difficult to blind participants, massage therapists and the measuring outcomes. We judged the quality of the evidence to be \"low\" to \"very low\", and the main reasons for downgrading the evidence were risk of bias and imprecision. There was no suggestion of publication bias. For acute LBP, massage was found to be better than inactive controls for pain ((SMD -1.24, 95% CI -1.85 to -0.64; participants = 51; studies = 1)) in the short-term, but not for function ((SMD -0.50, 95% CI -1.06 to 0.06; participants = 51; studies = 1)). For sub-acute and chronic LBP, massage was better than inactive controls for pain ((SMD -0.75, 95% CI -0.90 to -0.60; participants = 761; studies = 7)) and function (SMD -0.72, 95% CI -1.05 to -0.39; 725 participants; 6 studies; ) in the short-term, but not in the long-term; however, when compared to active controls, massage was better for pain, both in the short ((SMD -0.37, 95% CI -0.62 to -0.13; participants = 964; studies = 12)) and long-term follow-up ((SMD -0.40, 95% CI -0.80 to -0.01; participants = 757; studies = 5)), but no differences were found for function (both in the short and long-term). There were no reports of serious adverse events in any of these trials. Increased pain intensity was the most common adverse event reported in 1.5% to 25% of the participants.\nAuthors' conclusions\nWe have very little confidence that massage is an effective treatment for LBP. Acute, sub-acute and chronic LBP had improvements in pain outcomes with massage only in the short-term follow-up. Functional improvement was observed in participants with sub-acute and chronic LBP when compared with inactive controls, but only for the short-term follow-up. 
There were only minor adverse effects with massage.\n\nGiven the provided abstract, please respond to the following query: \"Study funding sources\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Seven studies did not report the sources of funding, Sixteen studies were funded by not-for-profit organizations. One study reported not receiving any funding, and one study was funded by a College of Massage Therapists."
}
] |
query-laysum
|
8020
|
[
{
"role": "user",
"content": "Abstract: Background\nRoot caries is a well-recognised disease, with increasing prevalence as populations age and retain more of their natural teeth into later life. Like coronal caries, root caries can be associated with pain, discomfort, tooth loss, and contribute significantly to poorer oral health-related quality of life in the elderly. Supplementing the visual-tactile examination could prove beneficial in improving the accuracy of early detection and diagnosis. The detection of root caries lesions at an early stage in the disease continuum can inform diagnosis and lead to targeted preventive therapies and lesion arrest.\nObjectives\nTo assess the diagnostic test accuracy of index tests for the detection and diagnosis of root caries in adults, used alone or in combination with other tests.\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 31 December 2018); Embase Ovid (1980 to 31 December 2018); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 31 December 2018); and the World Health Organization International Clinical Trials Registry Platform (to 31 December 2018). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared one or more index tests (laser fluorescence, radiographs, visual examination, electronic caries monitor (ECM), transillumination), either independently or in combination, with a reference standard. This included prospective studies that evaluated the diagnostic accuracy of single index tests and studies that directly compared two or more index tests. In vitro and in vivo studies were eligible for inclusion but studies that artificially created carious lesions were excluded.\nData collection and analysis\nTwo review authors extracted data independently and in duplicate using a standardised data extraction and quality assessment form based on the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) specific to the review context. Estimates of diagnostic test accuracy were expressed as sensitivity and specificity with 95% confidence intervals (CI) for each dataset. We planned to use hierarchical models for data synthesis and explore potential sources of heterogeneity through meta-regression.\nMain results\nFour cross-sectional diagnostic test accuracy studies providing eight datasets with data from 4997 root surfaces were analysed. Two in vitro studies evaluated secondary root caries lesions on extracted teeth and two in vivo studies evaluated primary root caries lesions within the oral cavity. Four studies evaluated laser fluorescence and reported estimates of sensitivity ranging from 0.50 to 0.81 and specificity ranging from 0.40 to 0.80. Two studies evaluated radiographs and reported estimates of sensitivity ranging from 0.40 to 0.63 and specificity ranging from 0.31 to 0.80. One study evaluated visual examination and reported sensitivity of 0.75 (95% CI 0.48 to 0.93) and specificity of 0.38 (95% CI 0.14 to 0.68). One study evaluated the accuracy of radiograph and visual examination in combination and reported sensitivity of 0.81 (95% CI 0.54 to 0.96) and specificity of 0.54 (95% CI 0.25 to 0.81). Given the small number of studies and important differences in the clinical and methodological characteristics of the studies we were unable to pool the results. 
Consequently, we were unable to formally evaluate the comparative accuracy of the different tests considered in this review. Using QUADAS-2 we judged all four studies to be at overall high risk of bias, but only two to have applicability concerns (patient selection domain). Reasons included bias in the selection process, use of post hoc (data driven) positivity thresholds, use of an imperfect reference standard, and use of extracted teeth.\nWe downgraded the certainty of the evidence due to study limitations and serious imprecision of the results (downgraded two levels), and judged the certainty of the evidence to be very low.\nAuthors' conclusions\nVisual-tactile examination is the mainstay of root caries detection and diagnosis; however, due to the paucity of the evidence base and the very low certainty of the evidence we were unable to determine the additional benefit of adjunctive diagnostic tests for the detection and diagnosis of root caries.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out whether any diagnostic tools could be used to support the general dentist to correctly identify root caries in adults.\nResearchers in Cochrane included four studies to answer this question."
}
] |
query-laysum
|
8021
|
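The root caries review above reports each index test's accuracy as sensitivity and specificity with 95% confidence intervals, computed from 2 x 2 tables of test results against the reference standard. A minimal Python sketch of that arithmetic follows; the counts are invented, and the Wilson score interval is one common choice, since the review does not state which interval method the included studies used.

```python
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """Wilson score 95% CI for a proportion (one common choice of interval)."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

def accuracy_from_2x2(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
    return (sens, sens_ci), (spec, spec_ci)

# Hypothetical 2 x 2 counts for one index test (not data from any included study).
(sens, s_ci), (spec, p_ci) = accuracy_from_2x2(tp=30, fp=25, fn=10, tn=35)
print(f"sensitivity {sens:.2f} (95% CI {s_ci[0]:.2f} to {s_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {p_ci[0]:.2f} to {p_ci[1]:.2f})")
```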
[
{
"role": "user",
"content": "Abstract: Background\nIn vitro maturation (IVM) is a fertility treatment that involves the transvaginal retrieval of immature oocytes, and their subsequent maturation and fertilisation. Although the live birth rate is lower than conventional in vitro fertilisation (IVF) with ovarian stimulation, it is a useful treatment, as it avoids the risk of ovarian hyperstimulation syndrome (OHSS). Women with polycystic ovaries (PCO) or polycystic ovarian syndrome (PCOS) are at an increased risk of OHSS. Thus, IVM may be a more useful treatment in this patient group.\nStrategies to maximise the maturation rates of the immature oocytes are important. This review focuses on the administration of human chorionic gonadotrophin (hCG) prior to immature oocyte retrieval.\nObjectives\nTo determine the effectiveness and safety of hCG priming in subfertile women who are undergoing IVM treatment in the context of assisted reproduction.\nSearch methods\nWe searched the following electronic databases up to 29 August 2016: Cochrane Gynaecology and Fertility Group Specialised Register of controlled trials, CENTRAL, MEDLINE, Embase, PsycINFO, and CINAHL. We also searched the trial registries ClinicalTrials.gov and WHO ICTPR to identify ongoing and registered trials. We sought recently published papers not yet indexed in the major databases, and reviewed the reference lists of reviews and retrieved studies as sources of potentially relevant studies. There were no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared hCG priming with placebo or no priming in women undergoing IVM. We also included RCTs that compared different doses of hCG, or the timing of oocyte retrieval. The primary outcomes were live birth rate and miscarriage rate per woman randomised.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, and with a third author, assessed risk of bias and extracted data. We contacted the original authors where data were missing. For dichotomous outcomes, we used the Mantel-Haenszel method to calculate odds ratios (OR). For continuous outcomes, we calculated the mean differences (MD) between treatment groups. We assessed statistical heterogeneity using the I² statistic. We assessed the overall quality of the evidence using GRADE methods.\nMain results\nWe included four studies, with a total of 522 women, in the review. One of these studies did not report outcomes per woman randomised, and so was not included in formal analysis. Three studies investigated 10,000 units hCG priming compared to no priming. One study investigated 20,000 units hCG compared to 10,000 units hCG priming. Three studies only included women with PCOS (N = 122), while this was an exclusion criteria in the fourth study (N = 400).\nWe rated all four studies as having an unclear risk of bias in more than one of the seven domains assessed. The quality of the evidence was low, the main limitations being lack of blinding and imprecision.\nWhen 10,000 units hCG priming was compared to no priming, we found no evidence of a difference in the live birth rates per woman randomised (OR 0.65, 95% confidence intervals (CI) 0.24 to 1.74; one RCT; N = 82; low quality evidence); miscarriage rate (OR 0.60, 95% CI 0.21 to 1.72; two RCTs; N = 282; I² statistic = 21%; low quality evidence), or clinical pregnancy rate (OR 0.52, 95% CI 0.26 to 1.03; two RCTs, N = 282, I² statistic = 0%, low quality evidence). 
Though inconclusive, our findings suggested that hCG may be associated with a reduction in clinical pregnancy rates; 22% of women who received no priming achieved pregnancy, while between 7% and 23% of women who received hCG priming did so.\nThe study comparing 20,000 units hCG with 10,000 units hCG did not report sufficient data to enable us to calculate odds ratios.\nNo studies reported on adverse events (other than miscarriage) or drug reactions.\nAuthors' conclusions\nThis review found no conclusive evidence that hCG priming had an effect on live birth, pregnancy, or miscarriage rates in IVM. There was low quality evidence that suggested that hCG priming may reduce clinical pregnancy rates, however, these findings were limited by the small number of data included. As no data were available on adverse events (other than miscarriage) or on drug reactions, we could not adequately assess the safety of hCG priming. We need further evidence from well-designed RCTs before we can come to definitive conclusions about the role of hCG priming, and the optimal dose and timing.\n\nGiven the provided abstract, please respond to the following query: \"Study Characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review included four randomised controlled trials, with a total of 522 women. One study investigated the use of 20,000 units hCG priming compared to 10,000 units. The remaining studies investigated 10,000 units hCG priming compared to no priming. The main outcomes were live birth rate and miscarriage rate per woman randomised. Evidence published up to 29 August 2016 was examined."
}
] |
query-laysum
|
8022
|
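The hCG-priming review pools dichotomous outcomes with the Mantel-Haenszel method and gauges heterogeneity with the I² statistic. The sketch below shows the core arithmetic of a fixed-effect Mantel-Haenszel pooled odds ratio and a simple inverse-variance I², using two invented trials rather than the trials included in the review; RevMan's exact implementation differs in detail (for example, zero-cell handling and random-effects options).

```python
import math

# Each hypothetical trial: (events_treated, n_treated, events_control, n_control).
trials = [(5, 40, 9, 42), (7, 100, 10, 100)]

def mantel_haenszel_or(trials):
    """Fixed-effect Mantel-Haenszel pooled odds ratio."""
    num = den = 0.0
    for e_t, n_t, e_c, n_c in trials:
        a, b = e_t, n_t - e_t        # treated: events, non-events
        c, d = e_c, n_c - e_c        # control: events, non-events
        n = n_t + n_c
        num += a * d / n
        den += b * c / n
    return num / den

def i_squared(trials):
    """Cochran's Q on log odds ratios with inverse-variance weights,
    then I² = max(0, (Q - df) / Q), expressed as a percentage."""
    logs, weights = [], []
    for e_t, n_t, e_c, n_c in trials:
        a, b, c, d = e_t, n_t - e_t, e_c, n_c - e_c
        logs.append(math.log((a * d) / (b * c)))
        weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(trials) - 1
    return 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"MH pooled OR ~ {mantel_haenszel_or(trials):.2f}, I² ~ {i_squared(trials):.0f}%")
```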
[
{
"role": "user",
"content": "Abstract: Background\nImproving mobility outcomes after hip fracture is key to recovery. Possible strategies include gait training, exercise and muscle stimulation. This is an update of a Cochrane Review last published in 2011.\nObjectives\nTo evaluate the effects (benefits and harms) of interventions aimed at improving mobility and physical functioning after hip fracture surgery in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, trial registers and reference lists, to March 2021.\nSelection criteria\nAll randomised or quasi-randomised trials assessing mobility strategies after hip fracture surgery. Eligible strategies aimed to improve mobility and included care programmes, exercise (gait, balance and functional training, resistance/strength training, endurance, flexibility, three-dimensional (3D) exercise and general physical activity) or muscle stimulation. Intervention was compared with usual care (in-hospital) or with usual care, no intervention, sham exercise or social visit (post-hospital).\nData collection and analysis\nMembers of the review author team independently selected trials for inclusion, assessed risk of bias and extracted data. We used standard methodological procedures expected by Cochrane. We used the assessment time point closest to four months for in-hospital studies, and the time point closest to the end of the intervention for post-hospital studies. Critical outcomes were mobility, walking speed, functioning, health-related quality of life, mortality, adverse effects and return to living at pre-fracture residence.\nMain results\nWe included 40 randomised controlled trials (RCTs) with 4059 participants from 17 countries. On average, participants were 80 years old and 80% were women. The median number of study participants was 81 and all trials had unclear or high risk of bias for one or more domains. Most trials excluded people with cognitive impairment (70%), immobility and/or medical conditions affecting mobility (72%).\nIn-hospital setting, mobility strategy versus control\nEighteen trials (1433 participants) compared mobility strategies with control (usual care) in hospitals. Overall, such strategies may lead to a moderate, clinically-meaningful increase in mobility (standardised mean difference (SMD) 0.53, 95% confidence interval (CI) 0.10 to 0.96; 7 studies, 507 participants; low-certainty evidence) and a small, clinically meaningful improvement in walking speed (CI crosses zero so does not rule out a lack of effect (SMD 0.16, 95% CI -0.05 to 0.37; 6 studies, 360 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to short-term (risk ratio (RR) 1.06, 95% CI 0.48 to 2.30; 6 studies, 489 participants; low-certainty evidence) or long-term mortality (RR 1.22, 95% CI 0.48 to 3.12; 2 studies, 133 participants; low-certainty evidence), adverse events measured by hospital re-admission (RR 0.70, 95% CI 0.44 to 1.11; 4 studies, 322 participants; low-certainty evidence), or return to pre-fracture residence (RR 1.07, 95% CI 0.73 to 1.56; 2 studies, 240 participants; low-certainty evidence). 
We are uncertain whether mobility strategies improve functioning or health-related quality of life as the certainty of evidence was very low.\nGait, balance and functional training probably causes a moderate improvement in mobility (SMD 0.57, 95% CI 0.07 to 1.06; 6 studies, 463 participants; moderate-certainty evidence). There was little or no difference in effects on mobility for resistance training. No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nPost-hospital setting, mobility strategy versus control\nTwenty-two trials (2626 participants) compared mobility strategies with control (usual care, no intervention, sham exercise or social visit) in the post-hospital setting. Mobility strategies lead to a small, clinically meaningful increase in mobility (SMD 0.32, 95% CI 0.11 to 0.54; 7 studies, 761 participants; high-certainty evidence) and a small, clinically meaningful improvement in walking speed compared to control (SMD 0.16, 95% CI 0.04 to 0.29; 14 studies, 1067 participants; high-certainty evidence). Mobility strategies lead to a small, non-clinically meaningful increase in functioning (SMD 0.23, 95% CI 0.10 to 0.36; 9 studies, 936 participants; high-certainty evidence), and probably lead to a slight increase in quality of life that may not be clinically meaningful (SMD 0.14, 95% CI -0.00 to 0.29; 10 studies, 785 participants; moderate-certainty evidence). Mobility strategies probably make little or no difference to short-term mortality (RR 1.01, 95% CI 0.49 to 2.06; 8 studies, 737 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to long-term mortality (RR 0.73, 95% CI 0.39 to 1.37; 4 studies, 588 participants; low-certainty evidence) or adverse events measured by hospital re-admission (95% CI includes a large reduction and large increase, RR 0.86, 95% CI 0.52 to 1.42; 2 studies, 206 participants; low-certainty evidence).\nTraining involving gait, balance and functional exercise leads to a small, clinically meaningful increase in mobility (SMD 0.20, 95% CI 0.05 to 0.36; 5 studies, 621 participants; high-certainty evidence), while training classified as being primarily resistance or strength exercise may lead to a clinically meaningful increase in mobility measured using distance walked in six minutes (mean difference (MD) 55.65, 95% CI 28.58 to 82.72; 3 studies, 198 participants; low-certainty evidence). Training involving multiple intervention components probably leads to a substantial, clinically meaningful increase in mobility (SMD 0.94, 95% CI 0.53 to 1.34; 2 studies, 104 participants; moderate-certainty evidence). We are uncertain of the effect of aerobic training on mobility (very low-certainty evidence). No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nAuthors' conclusions\nInterventions targeting improvement in mobility after hip fracture may cause clinically meaningful improvement in mobility and walking speed in hospital and post-hospital settings, compared with conventional care. Interventions that include training of gait, balance and functional tasks are particularly effective. There was little or no between-group difference in the number of adverse events reported. Future trials should include long-term follow-up and economic outcomes, determine the relative impact of different types of exercise and establish effectiveness in emerging economies.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Eighteen studies evaluated mobility strategies that started in the hospital within a week after hip fracture surgery. Mobility treatment undertaken in hospital may moderately increase people's mobility four months after their fracture and probably increases walking speed to a small but meaningful degree. Mobility treatment probably makes little or no difference to re-admission to hospital, return to living at home, or death. We are not certain if mobility treatment affects physical functioning (the ability to move around and function in one’s environment) or well-being.\nTwenty-two trials evaluated longer-duration mobility strategies that started after discharge from hospital and were undertaken in homes, retirement villages and outpatient clinics. In these settings, mobility treatment increases mobility to a small but meaningful degree, meaningfully increases walking speed, and leads to a small but non-meaningful increase in functioning. Compared to no treatment, social visits or usual care, mobility treatment probably slightly improves people's well-being but not to a meaningful level. Mobility treatment probably makes little or no difference to re-admission to hospital or death.\nThe types of treatment that appear effective in improving people’s mobility are exercises in additional to standard physiotherapy. Both in the hospital and after discharge from hospital, the helpful exercises target balance, walking and functional tasks. After discharge from hospital, extra strength or endurance training may also improve mobility. The effect of electrical stimulation was not clear.\nOverall, the review found that both in hospital and after discharge, there is enough evidence to say that treatment targeting mobility is probably better than no extra treatment in helping people get people safely back on their feet, moving and walking again after hip fracture surgery."
}
] |
query-laysum
|
8023
|
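The mobility review expresses most continuous outcomes as standardised mean differences (SMDs): the between-group difference in means divided by a pooled standard deviation, so that trials using different mobility scales can be combined. A rough sketch of that calculation is given below with invented group summaries, not data from any included trial; Cochrane reviews typically use Hedges' adjusted g, approximated here with its small-sample correction.

```python
from math import sqrt

def standardised_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Between-group difference in means divided by the pooled SD,
    with Hedges' small-sample correction applied."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' g adjustment
    return d * correction

# Hypothetical mobility scores (higher = better); not data from any included trial.
smd = standardised_mean_difference(mean_t=62.0, sd_t=12.0, n_t=40,
                                   mean_c=56.0, sd_c=13.0, n_c=42)
print(f"SMD ~ {smd:.2f}")  # values around 0.2 / 0.5 / 0.8 read as small / moderate / large
```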
[
{
"role": "user",
"content": "Abstract: Background\nDiscoid lupus erythematosus (DLE) is a chronic form of cutaneous lupus, which can cause scarring. Many drugs have been used to treat this disease and some (such as thalidomide, cyclophosphamide and azathioprine) are potentially toxic. This is an update of a Cochrane Review first published in 2000, and previously updated in 2009. We wanted to update the review to assess whether any new information was available to treat DLE, as we were still unsure of the effectiveness of available drugs and how to select the most appropriate treatment for an individual with DLE.\nObjectives\nTo assess the effects of drugs for discoid lupus erythematosus.\nSearch methods\nWe updated our searches of the following databases to 22 September 2016: the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase and LILACS. We also searched five trials databases, and checked the reference lists of included studies for further references to relevant trials. Index Medicus (1956 to 1966) was handsearched and we approached authors for information about unpublished trials.\nSelection criteria\nWe included all randomised controlled trials (RCTs) of drugs to treat people with DLE in any population group and of either gender. Comparisons included any drug used for DLE against either another drug or against placebo cream. We excluded laser treatment, surgery, phototherapy, other forms of physical therapy, and photoprotection as we did not consider them drug treatments.\nData collection and analysis\nAt least two reviewers independently extracted data onto a data extraction sheet, resolving disagreements by discussion. We used standard methods to assess risk of bias, as expected by Cochrane.\nMain results\nFive trials involving 197 participants were included. Three new trials were included in this update. None of the five trials were of high quality.\n'Risk of bias' assessments identified potential sources of bias in each study. One study used an inappropriate randomisation method, and incomplete outcome data were a concern in another as 15 people did not complete the trial. We found most of the trials to be at low risk in terms of blinding, but three of the five did not describe allocation concealment.\nThe included trials inadequately addressed the primary outcome measures of this review (percentage with complete resolution of skin lesions, percentage with clearing of erythema in at least 50% of lesions, and improvement in patient satisfaction/quality of life measures).\nOne study of fluocinonide cream 0.05% (potent steroid) compared with hydrocortisone cream 1% (low-potency steroid) in 78 people reported complete resolution of skin lesions in 27% (10/37) of participants in the fluocinonide cream group and in 10% (4/41) in the hydrocortisone group, giving a 17% absolute benefit in favour of fluocinonide (risk ratio (RR) 2.77, 95% CI 0.95 to 8.08, 1 study, n = 78, low-quality evidence). The other primary outcome measures were not reported. Adverse events did not require discontinuation of the drug. Skin irritation occurred in three people using hydrocortisone, and one person developed acne. 
Burning occurred in two people using fluocinonide (moderate-quality evidence).\nA comparative trial of two oral agents, acitretin (50 mg daily) and hydroxychloroquine (400 mg daily), reported two of the outcomes of interest: complete resolution was seen in 13 of 28 participants (46%) on acitretin and 15 of 30 participants (50%) on hydoxychloroquine (RR 0.93, 95% CI 0.54 to 1.59, 1 study, n = 58, low-quality evidence). Clearing of erythema in at least 50% of lesions was reported in 10 of 24 participants (42%) on acitretin and 17 of 25 (68%) on hydroxychloroquine (RR 0.61, 95% CI 0.36 to 1.06, 1 study, n = 49, low-quality evidence). This comparison did not assess improvement in patient satisfaction/quality of life measures. Participants taking acitretin showed a small increase in serum triglyceride, not sufficient to require withdrawal of the drug. The main adverse effects were dry lips (93% of the acitretin group and 20% of the hydroxychloroquine group) and gastrointestinal disturbance (11% of the acitretin group and 17% of the hydroxychloroquine group). Four participants on acitretin withdrew due to gastrointestinal events or dry lips (moderate-quality evidence).\nOne trial randomised 10 people with DLE to apply a calcineurin inhibitor, pimecrolimus 1% cream, or a potent steroid, betamethasone 17-valerate 0.1% cream, for eight weeks. The study reported none of the primary outcome measures, nor did it present data on adverse events.\nA trial of calcineurin inhibitors compared tacrolimus cream 0.1% with placebo (vehicle) over 12 weeks in 14 people, but reported none of our primary outcome measures. In the tacrolimus group, five participants complained of slight burning and itching, and for one participant, a herpes simplex infection was reactivated (moderate-quality evidence).\nTopical R-salbutamol 0.5% cream was compared with placebo (vehicle) over eight weeks in one trial of 37 people with DLE. There was a significant improvement in pain and itch in the salbutamol group at two, four, six, and eight weeks compared to placebo, but the trial did not record a formal measure of quality of life. None of the primary outcome measures were reported. Changes in erythema did not show benefit of salbutamol over placebo, but we could not obtain from the trial report the number of participants with clearing of erythema in at least 50% of lesions. There were 15 events in the placebo group (experienced by 12 participants) and 24 in the salbutamol group (experienced by nine participants). None of the adverse events were considered serious (moderate-quality evidence).\nAuthors' conclusions\nFluocinonide cream may be more effective than hydrocortisone in clearing DLE skin lesions. Hydroxychloroquine and acitretin appear to be of equal efficacy in terms of complete resolution, although adverse effects might be more frequent with acitretin, and clearing of erythema in at least 50% of lesions occurred less often in participants applying acitretin. Moderate-quality evidence found adverse events were minor on the whole. There is not enough reliable evidence about other drugs used to treat DLE. Overall, the quality of the trials and levels of uncertainty were such that there is a need for further trials of sufficient duration comparing, in particular, topical steroids with other agents.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified five studies, involving 197 people (aged between 17 and 82). Participants were recruited from Europe, Scandinavia, Iran, and the United States. Most of the skin lesions were recorded on the face, ear, and scalp. The duration of disease ranged from one month to 16 years. Treatments included steroid creams of different potencies (fluocinonide and betamethasone cream, both potent steroids; and hydrocortisone, a low-potency steroid); oral hydroxychloroquine; oral acitretin; tacrolimus cream; pimecrolimus cream; and salbutamol cream. The tacrolimus and salbutamol trials used placebo in the control arm."
}
] |
query-laysum
|
8024
|
[
{
"role": "user",
"content": "Abstract: Background\nCataract is the leading cause of blindness in the world and, as such, cataract surgery is one of the most commonly performed operations globally. Surgical techniques have changed dramatically over the past half century with associated improvements in outcomes and safety. Femtosecond lasers can be used to perform the key steps in cataract surgery, such as corneal incisions, lens capsulotomy and fragmentation. The potential advantage of femtosecond laser-assisted cataract surgery (FLACS) is greater precision and reproducibility of these steps compared to manual techniques. The disadvantages are the costs associated with FLACS technology.\nObjectives\nTo compare the effectiveness and safety of FLACS with standard ultrasound phacoemulsification cataract surgery (PCS) by gathering evidence from randomised controlled trials (RCTs).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; which contains the Cochrane Eyes and Vision Trials Register; 2022, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov; the WHO ICTRP and the US Food and Drug Administration (FDA) website. We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 10 May 2022.\nSelection criteria\nWe included RCTs where FLACS was compared to PCS.\nData collection and analysis\nThree review authors independently screened the search results, assessed risk of bias and extracted data using the standard methodological procedures expected by Cochrane. The primary outcome for this review was intraoperative complications in the operated eye, namely anterior capsule, and posterior capsule tears. The secondary outcomes included corrected distance visual acuity (CDVA), quality of vision (as measured by any validated patient-reported outcome measure (PROM)), postoperative cystoid macular oedema complications, endothelial cell loss and cost-effectiveness. We assessed the certainty of the evidence using GRADE.\nMain results\nWe included 42 RCTs conducted in Europe, North America, South America and Asia, which enrolled a total of 7298 eyes of 5831 adult participants. Overall, the studies were at unclear or high risk of bias. In 16 studies the authors reported financial links with the manufacturer of the laser platform evaluated in their studies. Thirteen of the studies were within-person (paired-eye) studies with one eye allocated to one procedure and the other eye allocated to the other procedure. These studies were reported ignoring the paired nature of the data.\nThere was low-certainty evidence of little or no difference in the odds of developing anterior capsular tears when comparing FLACS and PCS (Peto odds ratio (OR) 0.83, 95% confidence interval (CI) 0.40 to 1.72; 5835 eyes, 27 studies) There was one fewer anterior capsule tear per 1000 operations in the FLACS group compared with the PCS group (95% CI 4 fewer to 3 more).\nThere was low-certainty evidence of lower odds of developing posterior capsular tears with FLACS compared to PCS (Peto OR 0.50, 95% CI 0.25 to 1.00; 5767 eyes, 26 studies). There were four fewer posterior capsule tears per 1000 operations in the FLACS group compared with the PCS group (95% CI 6 fewer to same).\nThere was moderate-certainty evidence of a very small advantage for the FLACS arm with regard to CDVA at six months or more follow-up, (mean difference (MD) -0.01 logMAR, 95% CI -0.02 to 0.00; 1323 eyes, 7 studies). 
This difference is equivalent to 1 logMAR letter between groups and is not thought to be clinically important.\nFrom the three studies (1205 participants) reporting a variety of PROMs (Cat-PROMS, EQ-5D, EQ-SD-3L, Catquest9-SF and patient survey) up to three months following surgery, there was moderate-certainty evidence of little or no difference in the various parameters between the two treatment arms.\nThere was low-certainty evidence of little or no difference in the odds of developing cystoid macular oedema when comparing FLACS and PCS (Peto OR 0.84, 95% CI 0.56 to 1.28; 4441 eyes, 18 studies). There were three fewer cystoid macular oedema cases per 1000 operations in the FLACS group compared with the PCS group (95% CI 10 fewer to 6 more).\nIn one study the incremental cost-effectiveness ratio (ICER) (cost difference divided by quality-adjusted life year (QALY) difference) was GBP £167,620 when comparing FLACS to PCS. In another study, the ICER was EUR €10,703 saved per additional patient who had treatment success with PCS compared to FLACS. Duration ranged from three minutes in favour of FLACS to eight minutes in favour of PCS (I2 = 100%, 11 studies) (low-certainty evidence).\nThere was low-certainty evidence of little or no important difference in endothelial cell loss when comparing FLACS with PCS (MD 12 cells per mm2 in favour of FLACS, 95% CI -40 to 64; 1512 eyes, 10 studies).\nAuthors' conclusions\nThis review of 42 studies provides evidence that there is probably little or no difference between FLACS and PCS in terms of intraoperative and postoperative complications, postoperative visual acuity and quality of life. Evidence from two studies suggests that FLACS may be the less cost-effective option. Many of the included studies only investigated very specific outcome measures such as effective phacoemulsification time, endothelial cell count change or aqueous flare, rather than those directly related to patient outcomes. Standardised reporting of complications and visual and refractive outcomes for cataract surgery would facilitate future synthesis, and guidance on this has been recently published.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared PCS to FLACS in people with age-related cataracts. We compared and summarised the results of the studies and rated our confidence in the evidence, based on factors such as study methods and study size."
}
] |
query-laysum
|
8025
|
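The FLACS review reports Peto odds ratios and then re-expresses them as absolute differences ('x fewer per 1000 operations') and numbers needed to treat. The sketch below shows the usual form of that conversion, applying the pooled odds ratio to an assumed control-group risk. With an assumed 0.8% risk of posterior capsule tear under PCS it roughly reproduces the 'four fewer per 1000' figure quoted above; the 0.8% baseline is an assumption for illustration, not a figure taken from the review.

```python
def absolute_effect(odds_ratio, control_risk, per=1000):
    """Convert an odds ratio plus an assumed control-group risk into an
    intervention-group risk, an absolute difference per `per` patients,
    and a number needed to treat (NNT)."""
    control_odds = control_risk / (1 - control_risk)
    treated_odds = control_odds * odds_ratio
    treated_risk = treated_odds / (1 + treated_odds)
    diff = treated_risk - control_risk
    nnt = float("inf") if diff == 0 else abs(1 / diff)
    return treated_risk, diff * per, nnt

# Hypothetical 0.8% baseline risk combined with the pooled Peto OR of 0.50.
risk, diff_per_1000, nnt = absolute_effect(odds_ratio=0.50, control_risk=0.008)
print(f"risk with FLACS ~ {risk:.3%}, difference ~ {diff_per_1000:.1f} per 1000, NNT ~ {nnt:.0f}")
```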
[
{
"role": "user",
"content": "Abstract: Background\nConcerns regarding the safety and availability of transfused donor blood have prompted research into a range of techniques to minimise allogeneic transfusion requirements. Cell salvage (CS) describes the recovery of blood from the surgical field, either during or after surgery, for reinfusion back to the patient.\nObjectives\nTo examine the effectiveness of CS in minimising perioperative allogeneic red blood cell transfusion and on other clinical outcomes in adults undergoing elective or non-urgent surgery.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, three other databases and two clinical trials registers for randomised controlled trials (RCTs) and systematic reviews from 2009 (date of previous search) to 19 January 2023, without restrictions on language or publication status.\nSelection criteria\nWe included RCTs assessing the use of CS compared to no CS in adults (participants aged 18 or over, or using the study's definition of adult) undergoing elective (non-urgent) surgery only.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 106 RCTs, incorporating data from 14,528 participants, reported in studies conducted in 24 countries. Results were published between 1978 and 2021. We analysed all data according to a single comparison: CS versus no CS. We separated analyses by type of surgery.\nThe certainty of the evidence varied from very low certainty to high certainty. Reasons for downgrading the certainty included imprecision (small sample sizes below the optimal information size required to detect a difference, and wide confidence intervals), inconsistency (high statistical heterogeneity), and risk of bias (high risk from domains including sequence generation, blinding, and baseline imbalances).\nAggregate analysis (all surgeries combined: primary outcome only)\nVery low-certainty evidence means we are uncertain if there is a reduction in the risk of allogeneic transfusion with CS (risk ratio (RR) 0.65, 95% confidence interval (CI) 0.59 to 0.72; 82 RCTs, 12,520 participants).\nCancer: 2 RCTs (79 participants)\nVery low-certainty evidence means we are uncertain whether there is a difference for mortality, blood loss, infection, or deep vein thrombosis (DVT). There were no analysable data reported for the remaining outcomes.\nCardiovascular (vascular): 6 RCTs (384 participants)\nVery low- to low-certainty evidence means we are uncertain whether there is a difference for most outcomes. No data were reported for major adverse cardiovascular events (MACE).\nCardiovascular (no bypass): 6 RCTs (372 participants)\nModerate-certainty evidence suggests there is probably a reduction in risk of allogeneic transfusion with CS (RR 0.82, 95% CI 0.69 to 0.97; 3 RCTs, 169 participants).\nVery low- to low-certainty evidence means we are uncertain whether there is a difference for volume transfused, blood loss, mortality, re-operation for bleeding, infection, wound complication, myocardial infarction (MI), stroke, and hospital length of stay (LOS). 
There were no analysable data reported for thrombosis, DVT, pulmonary embolism (PE), and MACE.\nCardiovascular (with bypass): 29 RCTs (2936 participants)\nLow-certainty evidence suggests there may be a reduction in the risk of allogeneic transfusion with CS, and suggests there may be no difference in risk of infection and hospital LOS.\nVery low- to moderate-certainty evidence means we are uncertain whether there is a reduction in volume transfused because of CS, or if there is any difference for mortality, blood loss, re-operation for bleeding, wound complication, thrombosis, DVT, PE, MACE, and MI, and probably no difference in risk of stroke.\nObstetrics: 1 RCT (1356 participants)\nHigh-certainty evidence shows there is no difference between groups for mean volume of allogeneic blood transfused (mean difference (MD) -0.02 units, 95% CI -0.08 to 0.04; 1 RCT, 1349 participants).\nLow-certainty evidence suggests there may be no difference for risk of allogeneic transfusion. There were no analysable data reported for the remaining outcomes.\nOrthopaedic (hip only): 17 RCTs (2055 participants)\nVery low-certainty evidence means we are uncertain if CS reduces the risk of allogeneic transfusion, and the volume transfused, or if there is any difference between groups for mortality, blood loss, re-operation for bleeding, infection, wound complication, prosthetic joint infection (PJI), thrombosis, DVT, PE, stroke, and hospital LOS. There were no analysable data reported for MACE and MI.\nOrthopaedic (knee only): 26 RCTs (2568 participants)\nVery low- to low-certainty evidence means we are uncertain if CS reduces the risk of allogeneic transfusion, and the volume transfused, and whether there is a difference for blood loss, re-operation for bleeding, infection, wound complication, PJI, DVT, PE, MI, MACE, stroke, and hospital LOS. There were no analysable data reported for mortality and thrombosis.\nOrthopaedic (spine only): 6 RCTs (404 participants)\nModerate-certainty evidence suggests there is probably a reduction in the need for allogeneic transfusion with CS (RR 0.44, 95% CI 0.31 to 0.63; 3 RCTs, 194 participants).\nVery low- to moderate-certainty evidence suggests there may be no difference for volume transfused, blood loss, infection, wound complication, and PE. There were no analysable data reported for mortality, re-operation for bleeding, PJI, thrombosis, DVT, MACE, MI, stroke, and hospital LOS.\nOrthopaedic (mixed): 14 RCTs (4374 participants)\nVery low- to low-certainty evidence means we are uncertain if there is a reduction in the need for allogeneic transfusion with CS, or if there is any difference between groups for volume transfused, mortality, blood loss, infection, wound complication, PJI, thrombosis, DVT, MI, and hospital LOS. There were no analysable data reported for re-operation for bleeding, MACE, and stroke.\nAuthors' conclusions\nIn some types of elective surgery, cell salvage may reduce the need for and volume of allogeneic transfusion, alongside evidence of no difference in adverse events, when compared to no cell salvage. Further research is required to establish why other surgeries show no benefit from CS, through further analysis of the current evidence. More large RCTs in under-reported specialities are needed to expand the evidence base for exploring the impact of CS.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
query-laysum
|
8026
|
[
{
"role": "user",
"content": "Abstract: Background\nOsteoarthritis affecting of the big toe joint of the foot (hallux limitus or rigidus) is a common and painful condition. Although several treatments have been proposed, few have been adequately evaluated.\nObjectives\nTo identify controlled trials evaluating interventions for osteoarthritis of the big toe joint and to determine the optimum intervention(s).\nSearch methods\nLiterature searches were conducted across the following electronic databases: CENTRAL; MEDLINE; EMBASE; CINAHL; and PEDro (to 14th January 2010). No language restrictions were applied.\nSelection criteria\nRandomised controlled trials, quasi-randomised trials, or controlled clinical trials that assessed treatment outcomes for osteoarthritis of the big toe joint. Participants of any age or gender with osteoarthritis of the big toe joint (defined either radiographically or clinically) were included.\nData collection and analysis\nTwo authors examined the list of titles and abstracts identified by the literature searches. One content area expert and one methodologist independently applied the pre-determined inclusion and exclusion criteria to the full text of identified trials. To minimise error and reduce potential bias, data were extracted independently by two content experts.\nMain results\nOnly one trial satisfactorily fulfilled the inclusion criteria and was included in this review. This trial evaluated the effectiveness of two physical therapy programs in 20 individuals with osteoarthritis of the big toe joint. Assessment outcomes included pain levels, big toe joint range of motion and plantar flexion strength of the hallux. Mean differences at four weeks follow up were 3.80 points (95% CI 2.74 to 4.86) for self reported pain, 28.30 ° (95% CI 21.37 to 35.23) for big toe joint range of motion, and 2.80 kg (95% CI 2.13 to 3.47) for muscle strength. Although differences in outcomes between treatment and control groups were reported, the risk of bias was high. The trial failed to employ appropriate randomisation or adequate allocation concealment, used a relatively small sample and incorporated a short follow up (four weeks). No adverse reactions were reported.\nAuthors' conclusions\nThe reviewed trial presented a high risk of bias, which limited conclusions that could be drawn from the presented data. The inclusion of only one trial indicates the need for more robust randomised controlled trials to determine the efficacy of interventions for this condition.\n\nGiven the provided abstract, please respond to the following query: \"What is osteoarthritis and how is it treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Osteoarthritis (OA) is a disease of the joints. When your joint loses cartilage, the bone grows to try and repair the damage. Instead of making things better, however, the bone grows abnormally and makes things worse. For example, the bone can become misshapen and make the joint painful.\nDoctors used to think that osteoarthritis was caused by wear and tear on the cartilage. However, it's now thought that osteoarthritis is a disease of the whole joint. Many factors may increase your risk of getting osteoarthritis in the big toe joint, such as particular foot structure, trauma, family history of the disease, joint disease, and gait abnormalities.\nOA is one of the most common forms of arthritis and affects men and women equally. OA is one of the main causes of disability as people grow older.\nInterventions such as physical therapy, including exercises aim to enhance or maintain muscle strength, physical fitness and overall health. People exercise for many different reasons including weight loss, strengthening muscles and to relieve the symptoms of OA."
}
] |
query-laysum
|
8027
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the original Cochrane Review published in Cochrane Library, Issue 10, 2012.\nHydatidiform mole (HM), also called a molar pregnancy, is characterised by an overgrowth of foetal chorionic tissue within the uterus. HMs may be partial (PM) or complete (CM) depending on their gross appearance, histopathology and karyotype. PMs usually have a triploid karyotype, derived from maternal and paternal origins, whereas CMs are diploid and have paternal origins only. Most women with HM can be cured by evacuation of retained products of conception (ERPC) and their fertility preserved. However, in some women the growth persists and develops into gestational trophoblastic neoplasia (GTN), a malignant form of the disease that requires treatment with chemotherapy. CMs have a higher rate of malignant transformation than PMs. It may be possible to reduce the risk of GTN in women with HM by administering prophylactic chemotherapy (P-Chem). However, P-Chem given before or after evacuation of HM to prevent malignant sequelae remains controversial, as the risks and benefits of this practice are unclear.\nObjectives\nTo evaluate the effectiveness and safety of P-Chem to prevent GTN in women with a molar pregnancy. To investigate whether any subgroup of women with HM may benefit more from P-Chem than others.\nSearch methods\nFor the original review we performed electronic searches in the Cochrane Gynaecological Cancer Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 2, 2012), MEDLINE (1946 to February week 4, 2012) and Embase (1980 to 2012, week 9). We developed the search strategy using free text and MeSH. For this update we searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 5, 2017), MEDLINE (February 2012 to June week 1, 2017) and Embase (February 2012 to 2017, week 23). We also handsearched reference lists of relevant literature to identify additional studies and searched trial registries.\nSelection criteria\nWe included randomised controlled trials (RCTs) of P-Chem for HM.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion in the review and extracted data using a specifically designed data collection form. Meta-analyses were performed by pooling data from individual trials using Review Manager 5 (RevMan 5) software in line with standard methodological procedures expected by Cochrane methodology.\nMain results\nThe searches identified 161 records; after de-duplication and title and abstract screening 90 full-text articles were retrieved. From these we included three RCTs with a combined total of 613 participants. One study compared prophylactic dactinomycin to no prophylaxis (60 participants); the other two studies compared prophylactic methotrexate to no prophylaxis (420 and 133 participants). All participants were diagnosed with CMs. We considered the latter two studies to be of poor methodological quality.\nP-Chem reduced the risk of GTN occurring in women following a CM (3 studies, 550 participants; risk ratio (RR) 0.37, 95% confidence interval (CI) 0.24 to 0.57; I² = 0%; P < 0.00001; low-quality evidence). However, owing to the poor quality (high risk of bias) of two of the included studies, we performed sensitivity analyses excluding these two studies. 
This left only one small study of high-risk women to contribute data for this primary outcome (59 participants; RR 0.28, 95% CI 0.10 to 0.73; P = 0.01); therefore we consider this evidence to be of low quality.\nThe time to diagnosis was longer in the P-Chem group than in the control group (2 studies, 33 participants; mean difference (MD) 28.72, 95% CI 13.19 to 44.24; P = 0.0003; low-quality evidence); and the P-Chem group required more courses to cure subsequent GTN (1 poor-quality study, 14 participants; MD 1.10, 95% CI 0.52 to 1.68; P = 0.0002; very low quality evidence).\nThere were insufficient data to perform meta-analyses for toxicity, overall survival, drug resistance and reproductive outcomes.\nAuthors' conclusions\nP-Chem may reduce the risk of progression to GTN in women with CMs who are at a high risk of malignant transformation; however, current evidence in favour of P-Chem is limited by the poor methodological quality and small size of the included studies. As P-Chem may increase drug resistance, delays treatment of GTN, and may expose women to toxic side effects, this practice cannot currently be recommended.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We consider this evidence to be of a low to very low quality. This conclusion is based on our assessment that two of the included studies were of poor methodological quality and at a high risk of bias; the third study was of a good quality but consisted of only 60 participants."
}
] |
query-laysum
|
8028
|
[
{
"role": "user",
"content": "Abstract: Background\nGender dysphoria is described as a mismatch between an individual's experienced or expressed gender and their assigned gender, based on primary or secondary sexual characteristics. Gender dysphoria can be associated with clinically significant psychological distress and may result in a desire to change sexual characteristics. The process of adapting a person's sexual characteristics to their desired sex is called ‘transition.'\nCurrent guidelines suggest hormonal and, if needed, surgical intervention to aid transition in transgender women, i.e. persons who aim to transition from male to female. In adults, hormone therapy aims to reverse the body's male attributes and to support the development of female attributes. It usually includes estradiol, antiandrogens, or a combination of both. Many individuals first receive hormone therapy alone, without surgical interventions. However, this is not always sufficient to change such attributes as facial bone structure, breasts, and genitalia, as desired. For these transgender women, surgery may then be used to support transition.\nObjectives\nWe aimed to assess the efficacy and safety of hormone therapy with antiandrogens, estradiol, or both, compared to each other or placebo, in transgender women in transition.\nSearch methods\nWe searched MEDLINE, the Cochrane Central Register of Controlled Trials (CENTRAL), Embase, Biosis Preview, PsycINFO, and PSYNDEX. We carried out our final searches on 19 December 2019.\nSelection criteria\nWe aimed to include randomised controlled trials (RCTs), quasi-RCTs, and cohort studies that enrolled transgender women, age 16 years and over, in transition from male to female. Eligible studies investigated antiandrogen and estradiol hormone therapies alone or in combination, in comparison to another form of the active intervention, or placebo control.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane to establish study eligibility.\nMain results\nOur database searches identified 1057 references, and after removing duplicates we screened 787 of these. We checked 13 studies for eligibility at the full text screening stage. We excluded 12 studies and identified one as an ongoing study. We did not identify any completed studies that met our inclusion criteria. The single ongoing study is an RCT conducted in Thailand, comparing estradiol valerate plus cyproterone treatment with estradiol valerate plus spironolactone treatment. The primary outcome will be testosterone level at three month follow-up.\nAuthors' conclusions\nWe found insufficient evidence to determine the efficacy or safety of hormonal treatment approaches for transgender women in transition. This lack of studies shows a gap between current clinical practice and clinical research. Robust RCTs and controlled cohort studies are needed to assess the benefits and harms of hormone therapy (used alone or in combination) for transgender women in transition. Studies should specifically focus on short-, medium-, and long-term adverse effects, quality of life, and participant satisfaction with the change in male to female body characteristics of antiandrogen and estradiol therapy alone, and in combination. They should also focus on the relative effects of these hormones when administered orally, transdermally, and intramuscularly. 
We will include non-controlled cohort studies in the next iteration of this review, as our review has shown that such studies provide the highest quality evidence currently available in the field. We will take into account methodological limitations when doing so.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked for randomised controlled trials (RCTs) that included transgender women (age 16 and over) in transition from male to female. RCTs are a type of research study that can reduce the possibility of several types of bias. To be included in this review, studies needed to compare different hormone treatments used to support transgender women to transition (oestrogen alone, testosterone blockers alone, or oestrogen in combination with testosterone blockers), or compare these hormone treatments to placebos (fake or dummy treatments that appear to be the same as the actual treatment, but have no medical effects). We wanted to see whether hormone treatments help transgender women to make a transition that they are happy with. We also wanted to look at whether there were any health risks of the treatment."
}
] |
query-laysum
|
8029
|
[
{
"role": "user",
"content": "Abstract: Background\nTesting for carcino-embryonic antigen (CEA) in the blood is a recommended part of follow-up to detect recurrence of colorectal cancer following primary curative treatment. There is substantial clinical variation in the cut-off level applied to trigger further investigation.\nObjectives\nTo determine the diagnostic performance of different blood CEA levels in identifying people with colorectal cancer recurrence in order to inform clinical practice.\nSearch methods\nWe conducted all searches to January 29 2014. We applied no language limits to the searches, and translated non-English manuscripts. We searched for relevant reviews in the MEDLINE, EMBASE, MEDION and DARE databases. We searched for primary studies (including conference abstracts) in the Cochrane Central Register of Controlled Trials (CENTRAL), in MEDLINE, EMBASE, and the Science Citation Index & Conference Proceedings Citation Index – Science. We identified ongoing studies by searching WHO ICTRP and the ASCO meeting library.\nSelection criteria\nWe included cross-sectional diagnostic test accuracy studies, cohort studies, and randomised controlled trials (RCTs) of post-resection colorectal cancer follow-up that compared CEA to a reference standard. We included studies only if we could extract 2 x 2 accuracy data. We excluded case-control studies, as the ratio of cases to controls is determined by the study design, making the data unsuitable for assessing test accuracy.\nData collection and analysis\nTwo review authors (BDN, IP) assessed the quality of all articles independently, discussing any disagreements. Where we could not reach consensus, a third author (BS) acted as moderator. We assessed methodological quality against QUADAS-2 criteria. We extracted binary diagnostic accuracy data from all included studies as 2 x 2 tables. We conducted a bivariate meta-analysis. We used the xtmelogit command in Stata to produce the pooled estimates of sensitivity and specificity and we also produced hierarchical summary ROC plots.\nMain results\nIn the 52 included studies, sensitivity ranged from 41% to 97% and specificity from 52% to 100%. In the seven studies reporting the impact of applying a threshold of 2.5 µg/L, pooled sensitivity was 82% (95% confidence interval (CI) 78% to 86%) and pooled specificity 80% (95% CI 59% to 92%). In the 23 studies reporting the impact of applying a threshold of 5 µg/L, pooled sensitivity was 71% (95% CI 64% to 76%) and pooled specificity 88% (95% CI 84% to 92%). In the seven studies reporting the impact of applying a threshold of 10 µg/L, pooled sensitivity was 68% (95% CI 53% to 79%) and pooled specificity 97% (95% CI 90% to 99%).\nAuthors' conclusions\nCEA is insufficiently sensitive to be used alone, even with a low threshold. It is therefore essential to augment CEA monitoring with another diagnostic modality in order to avoid missed cases. Trying to improve sensitivity by adopting a low threshold is a poor strategy because of the high numbers of false alarms generated. We therefore recommend monitoring for colorectal cancer recurrence with more than one diagnostic modality but applying the highest CEA cut-off assessed (10 µg/L).\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After surgery for cancer in the colon or rectum (colorectal cancer), most people are intensively followed up for at least five years to monitor for signs of the cancer returning. When this occurs, it usually causes a rise in a blood protein called CEA (carcino-embryonic antigen). An increased level of CEA can be picked up by a blood test, which is normally done every three to six months after colorectal cancer surgery. Those people with raised CEA levels are further investigated by x-ray imaging (usually a scan of the chest, abdomen and pelvis). We conducted this review to help decide what level of blood CEA should lead to further investigation."
}
] |
query-laysum
|
8030
|
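The CEA review's conclusion rests on the trade-off between sensitivity (missed recurrences) and specificity (false alarms) at different cut-off levels. As a rough illustration, not an analysis performed in the review, the pooled estimates quoted above can be translated into expected missed cases and false alarms per 1000 people monitored, for an assumed recurrence prevalence of 30% (the prevalence is a hypothetical figure chosen only for illustration).

```python
def consequences_per_1000(sensitivity, specificity, prevalence, n=1000):
    """Expected missed recurrences (false negatives) and false alarms
    (false positives) per n people monitored, given an assumed prevalence."""
    with_recurrence = n * prevalence
    without_recurrence = n - with_recurrence
    missed = with_recurrence * (1 - sensitivity)
    false_alarms = without_recurrence * (1 - specificity)
    return round(missed), round(false_alarms)

# Pooled sensitivity/specificity at each threshold are taken from the abstract;
# the 30% recurrence prevalence is a hypothetical figure for illustration only.
for label, sens, spec in [("2.5 µg/L", 0.82, 0.80), ("5 µg/L", 0.71, 0.88), ("10 µg/L", 0.68, 0.97)]:
    missed, alarms = consequences_per_1000(sens, spec, prevalence=0.30)
    print(f"cut-off {label}: ~{missed} missed recurrences, ~{alarms} false alarms per 1000 monitored")
```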
[
{
"role": "user",
"content": "Abstract: Background\nPeople with chronic heart failure (HF) are at risk of thromboembolic events, including stroke, pulmonary embolism, and peripheral arterial embolism; coronary ischaemic events also contribute to the progression of HF. The use of long-term oral anticoagulation is established in certain populations, including people with HF and atrial fibrillation (AF), but there is wide variation in the indications and use of oral anticoagulation in the broader HF population.\nObjectives\nTo determine whether long-term oral anticoagulation reduces total deaths and stroke in people with heart failure in sinus rhythm.\nSearch methods\nWe updated the searches in CENTRAL, MEDLINE, and Embase in March 2020. We screened reference lists of papers and abstracts from national and international cardiovascular meetings to identify unpublished studies. We contacted relevant authors to obtain further data. We did not apply any language restrictions.\nSelection criteria\nRandomised controlled trials (RCT) comparing oral anticoagulants with placebo or no treatment in adults with HF, with treatment duration of at least one month. We made inclusion decisions in duplicate, and resolved any disagreements between review authors by discussion, or a third party.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, and assessed the risks and benefits of antithrombotic therapy by calculating odds ratio (OR), accompanied by the 95% confidence intervals (CI).\nMain results\nWe identified three RCTs (5498 participants). One RCT compared warfarin, aspirin, and no antithrombotic therapy, the second compared warfarin with placebo in participants with idiopathic dilated cardiomyopathy, and the third compared rivaroxaban with placebo in participants with HF and coronary artery disease.\nWe pooled data from the studies that compared warfarin with a placebo or no treatment. We are uncertain if there is an effect on all-cause death (OR 0.66, 95% CI 0.36 to 1.18; 2 studies, 324 participants; low-certainty evidence); warfarin may increase the risk of major bleeding events (OR 5.98, 95% CI 1.71 to 20.93; number needed to treat for an additional harmful outcome (NNTH) 17; 2 studies, 324 participants; low-certainty evidence). None of the studies reported stroke as an individual outcome.\nRivaroxaban makes little to no difference to all-cause death compared with placebo (OR 0.99, 95% CI 0.87 to 1.13; 1 study, 5022 participants; high-certainty evidence). Rivaroxaban probably reduces the risk of stroke compared to placebo (OR 0.67, 95% CI 0.47 to 0.95; number needed to treat for an additional beneficial outcome (NNTB) 101; 1 study, 5022 participants; moderate-certainty evidence), and probably increases the risk of major bleeding events (OR 1.65, 95% CI 1.17 to 2.33; NNTH 79; 1 study, 5008 participants; moderate-certainty evidence).\nAuthors' conclusions\nBased on the three RCTs, there is no evidence that oral anticoagulant therapy modifies mortality in people with HF in sinus rhythm. The evidence is uncertain if warfarin has any effect on all-cause death compared to placebo or no treatment, but it may increase the risk of major bleeding events. There is no evidence of a difference in the effect of rivaroxaban on all-cause death compared to placebo. 
It probably reduces the risk of stroke, but probably increases the risk of major bleeding events.\nThe available evidence does not support the routine use of anticoagulation in people with HF who remain in sinus rhythm.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When the heart’s ability to pump blood is reduced, it is called heart failure (HF). When this happens, some people will develop severe clotting problems (called thromboembolism) in the lungs, legs, and brain. This happens because the blood is flowing more slowly, inflammation is increasing, and there is an overproduction of clotting molecules. These clots can cause stroke, lung or leg damage, or even death."
}
] |
query-laysum
|
8031
|
[
{
"role": "user",
"content": "Abstract: Background\nPhysical inactivity is a leading cause of preventable death and morbidity in developed countries. In addition physical activity can potentially be an effective treatment for various medical conditions (e.g. cardiovascular disease, osteoarthritis). Many types of physical activity programs exist ranging from simple home exercise programs to intense highly supervised hospital (center) based programs.\nObjectives\nTo assess the effectiveness of 'home based' versus 'center based' physical activity programs on the health of older adults.\nSearch methods\nThe reviewers searched the Cochrane Central Register of Controlled Trials (CENTRAL) (1991-present), MEDLINE (1966-Sept 2002), EMBASE (1988 to Sept 2002), CINAHL (1982-Sept 2002), Health Star (1975-Sept 2002), Dissertation Abstracts (1980 to Sept 2002), Sport Discus (1975-Sept 2002) and Science Citation Index (1975-Sept 2002), reference lists of relevant articles and contacted principal authors where possible.\nSelection criteria\nRandomised or quasi-randomised controlled trials of different physical activity interventions in older adults (50 years or older) comparing a 'home based' to a 'center based' exercise program. Study participants had to have either a recognised cardiovascular risk factor, or existing cardiovascular disease, or chronic obstructive airways disease (COPD) or osteoarthritis. Cardiac and post-operative programs within one year of the event were excluded.\nData collection and analysis\nThree reviewers selected and appraised the identified studies independently. Data from studies that then met the inclusion/exclusion criteria were extracted by two additional reviewers.\nMain results\nSix trials including 224 participants who received a 'home based' exercise program and 148 who received a 'center based' exercise program were included in this review. Five studies were of medium quality and one poor. A meta-analysis was not undertaken given the heterogeneity of these studies.\nCardiovascular \nThe largest trial (accounting for approximately 60% of the participants) looked at sedentary older adults. Three trials looked at patients with peripheral vascular disease (intermittent claudication). In patients with peripheral vascular disease center based programs were superior to home at improving distance walked and time to claudication pain at up to 6 months. However the risk of a training effect may be high. There are no longer term studies in this population.\nNotably home based programs appeared to have a significantly higher adherence rate than center based programs. However this was based primarily on the one study (with the highest quality rating of the studies found) of sedentary older adults. This showed an adherence rate of 68% in the home based program at two year follow-up compared with a 36% adherence in the center based group. There was essentially no difference in terms of treadmill performance or cardiovascular risk factors between groups.\nChronic Obstructive Pulmonary Disease (COPD) \nTwo trials looked at older adults with COPD. In patients with COPD the evidence is conflicting. One study showed similar changes in various physiological measures at 3 months that persisted in the home based group up to 18 months but not in the center based group. 
The other study showed significantly better improvements in physiological measures in the center based group after 8 weeks, but again the possibility of a training effect is high.\nOsteoarthritis \nNo studies were found.\nNone of the studies dealt with measures of cost or health service utilization.\nAuthors' conclusions\nIn the short-term, center based programs are superior to home based programs in patients with PVD. There is a high possibility of a training effect, however, as the center based groups were trained primarily on treadmills (and the home based groups were not) and the outcome measures were treadmill based. There is conflicting evidence as to which is better in patients with COPD. Home based programs appear to be superior to center based programs in terms of adherence to exercise (especially in the long term).\n\nGiven the provided abstract, please respond to the following query: \"What is the bottom line?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is 'silver-level' evidence (www.cochranemsk.org) that both exercising at home or at a center improves the health and physical function of older adults. But, people tend to stick with exercising at home more than in a center.\nPeople with heart disease or a high risk of heart disease may show more improvements exercising at a center than at home in the short-term (3 months). In people with COPD, it is still not clear whether exercising at home or at a center is better.\nMore research is still needed to test which type of programme might be better for people with osteoarthritis and what the costs are in general."
}
] |
query-laysum
|
8032
|
[
{
"role": "user",
"content": "Abstract: Background\nSelf-harm (SH; intentional self-poisoning or self-injury regardless of degree of suicidal intent or other types of motivation) is a growing problem in most countries, often repeated, and associated with suicide. Evidence assessing the effectiveness of interventions in the treatment of SH in children and adolescents is lacking, especially when compared with the evidence for psychosocial interventions in adults. This review therefore updates a previous Cochrane Review (last published in 2015) on the role of interventions for SH in children and adolescents.\nObjectives\nTo assess the effects of psychosocial interventions or pharmacological agents or natural products for SH compared to comparison types of care (e.g. treatment-as-usual, routine psychiatric care, enhanced usual care, active comparator, placebo, alternative pharmacological treatment, or a combination of these) for children and adolescents (up to 18 years of age) who engage in SH.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Specialized Register, the Cochrane Library (Central Register of Controlled Trials [CENTRAL] and Cochrane Database of Systematic Reviews [CDSR]), together with MEDLINE, Ovid Embase, and PsycINFO (to 4 July 2020).\nSelection criteria\nWe included all randomised controlled trials (RCTs) comparing specific psychosocial interventions or pharmacological agents or natural products with treatment-as-usual (TAU), routine psychiatric care, enhanced usual care (EUC), active comparator, placebo, alternative pharmacological treatment, or a combination of these, in children and adolescents with a recent (within six months of trial entry) episode of SH resulting in presentation to hospital or clinical services. The primary outcome was the occurrence of a repeated episode of SH over a maximum follow-up period of two years. Secondary outcomes included treatment adherence, depression, hopelessness, general functioning, social functioning, suicidal ideation, and suicide.\nData collection and analysis\nWe independently selected trials, extracted data, and appraised trial quality. For binary outcomes, we calculated odds ratios (ORs) and their 95% confidence internals (CIs). For continuous outcomes, we calculated the mean difference (MD) or standardised mean difference (SMD) and 95% CIs. The overall quality of evidence for the primary outcome (i.e. repetition of SH at post-intervention) was appraised for each intervention using the GRADE approach.\nMain results\nWe included data from 17 trials with a total of 2280 participants. Participants in these trials were predominately female (87.6%) with a mean age of 14.7 years (standard deviation (SD) 1.5 years). The trials included in this review investigated the effectiveness of various forms of psychosocial interventions. None of the included trials evaluated the effectiveness of pharmacological agents in this clinical population. There was a lower rate of SH repetition for DBT-A (30%) as compared to TAU, EUC, or alternative psychotherapy (43%) on repetition of SH at post-intervention in four trials (OR 0.46, 95% CI 0.26 to 0.82; N = 270; k = 4; high-certainty evidence). There may be no evidence of a difference for individual cognitive behavioural therapy (CBT)-based psychotherapy and TAU for repetition of SH at post-intervention (OR 0.93, 95% CI 0.12 to 7.24; N = 51; k = 2; low-certainty evidence). 
We are uncertain whether mentalisation based therapy for adolescents (MBT-A) reduces repetition of SH at post-intervention as compared to TAU (OR 0.70, 95% CI 0.06 to 8.46; N = 85; k = 2; very low-certainty evidence). Heterogeneity for this outcome was substantial ( I² = 68%). There is probably no evidence of a difference between family therapy and either TAU or EUC on repetition of SH at post-intervention (OR 1.00, 95% CI 0.49 to 2.07; N = 191; k = 2; moderate-certainty evidence). However, there was no evidence of a difference for compliance enhancement approaches on repetition of SH by the six-month follow-up assessment, for group-based psychotherapy at the six- or 12-month follow-up assessments, for a remote contact intervention (emergency cards) at the 12-month assessment, or for therapeutic assessment at the 12- or 24-month follow-up assessments.\nAuthors' conclusions\nGiven the moderate or very low quality of the available evidence, and the small number of trials identified, there is only uncertain evidence regarding a number of psychosocial interventions in children and adolescents who engage in SH. Further evaluation of DBT-A is warranted. Given the evidence for its benefit in adults who engage in SH, individual CBT-based psychotherapy should also be further developed and evaluated in children and adolescents.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Self-harm (SH), which includes intentional self-poisoning/overdose and self-injury, is a major problem in many countries and is strongly linked with suicide. It is therefore important that effective treatments for SH patients are developed. There has been an increase in the use of interventions for SH in children and adolescents. It is therefore important to assess the evidence for their effectiveness."
}
] |
query-laysum
|
8033
|
[
{
"role": "user",
"content": "Abstract: Background\nVestibular migraine is a form of migraine where one of the main features is recurrent attacks of vertigo. These episodes are often associated with other features of migraine, including headache and sensitivity to light or sound. These unpredictable and severe attacks of vertigo can lead to a considerable reduction in quality of life. The condition is estimated to affect just under 1% of the population, although many people remain undiagnosed. A number of interventions have been used, or proposed to be used, as prophylaxis for this condition, to help reduce the frequency of the attacks. Many of these interventions include dietary, lifestyle or behavioural changes, rather than medication.\nObjectives\nTo assess the benefits and harms of non-pharmacological treatments used for prophylaxis of vestibular migraine.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 23 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with definite or probable vestibular migraine comparing dietary modifications, sleep improvement techniques, vitamin and mineral supplements, herbal supplements, talking therapies, mind-body interventions or vestibular rehabilitation with either placebo or no treatment. We excluded studies with a cross-over design, unless data from the first phase of the study could be identified.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) improvement in headache, 6) improvement in other migrainous symptoms and 7) other adverse effects. We considered outcomes reported at three time points: < 3 months, 3 to < 6 months, > 6 to 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included three studies in this review with a total of 319 participants. Each study addressed a different comparison and these are outlined below. We did not identify any evidence for the remaining comparisons of interest in this review.\nDietary interventions (probiotics) versus placebo\nWe identified one study with 218 participants (85% female). The use of a probiotic supplement was compared to a placebo and participants were followed up for two years. Some data were reported on the change in vertigo frequency and severity over the duration of the study. However, there were no data regarding improvement of vertigo or serious adverse events.\nCognitive behavioural therapy (CBT) versus no intervention\nOne study compared CBT to no treatment in 61 participants (72% female). Participants were followed up for eight weeks. 
Data were reported on the change in vertigo over the course of the study, but no information was reported on the proportion of people whose vertigo improved, or on the occurrence of serious adverse events.\nVestibular rehabilitation versus no intervention\nThe third study compared the use of vestibular rehabilitation to no treatment in a group of 40 participants (90% female) and participants were followed up for six months. Again, this study reported some data on change in the frequency of vertigo during the study, but no information on the proportion of participants who experienced an improvement in vertigo or the number who experienced serious adverse events.\nWe are unable to draw meaningful conclusions from the numerical results of these studies, as the data for each comparison of interest come from single, small studies and the certainty of the evidence was low or very low.\nAuthors' conclusions\nThere is a paucity of evidence for non-pharmacological interventions that may be used for prophylaxis of vestibular migraine. Only a limited number of interventions have been assessed by comparing them to no intervention or a placebo treatment, and the evidence from these studies is all of low or very low certainty. We are therefore unsure whether any of these interventions may be effective at reducing the symptoms of vestibular migraine and we are also unsure whether they have the potential to cause harm.\n\nGiven the provided abstract, please respond to the following query: \"What is vestibular migraine?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Migraine (sometimes known as 'headache migraine') is a common condition that causes recurrent headaches. Vestibular migraine is a related condition where the main symptoms are recurring episodes of severe dizziness or vertigo (a spinning sensation). These episodes are often associated with headache, or other migraine-like symptoms (such as sensitivity to light or sound, nausea or vomiting). It is a relatively common condition, which affects up to 1 in every 100 people, and can have severe effects on day-to-day life."
}
] |
query-laysum
|
8034
|
[
{
"role": "user",
"content": "Abstract: Background\nAntipsychotic drugs are the mainstay treatment for schizophrenia, yet they are associated with diverse and potentially dose-related side effects which can reduce quality of life. For this reason, the lowest possible doses of antipsychotics are generally recommended, but higher doses are often used in clinical practice. It is still unclear if and how antipsychotic doses could be reduced safely in order to minimise the adverse-effect burden without increasing the risk of relapse.\nObjectives\nTo assess the efficacy and safety of reducing antipsychotic dose compared to continuing the current dose for people with schizophrenia.\nSearch methods\nWe conducted a systematic search on 10 February 2021 at the Cochrane Schizophrenia Group's Study-Based Register of Trials, which is based on CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, PubMed, ClinicalTrials.gov, ISRCTN, and WHO ICTRP. We also inspected the reference lists of included studies and previous reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing any dose reduction against continuation in people with schizophrenia or related disorders who were stabilised on their current antipsychotic treatment.\nData collection and analysis\nAt least two review authors independently screened relevant records for inclusion, extracted data from eligible studies, and assessed the risk of bias using RoB 2. We contacted study authors for missing data and additional information. Our primary outcomes were clinically important change in quality of life, rehospitalisations and dropouts due to adverse effects; key secondary outcomes were clinically important change in functioning, relapse, dropouts for any reason, and at least one adverse effect. We also examined scales measuring symptoms, quality of life, and functioning as well as a comprehensive list of specific adverse effects. We pooled outcomes at the endpoint preferably closest to one year. We evaluated the certainty of the evidence using the GRADE approach.\nMain results\nWe included 25 RCTs, of which 22 studies provided data with 2635 participants (average age 38.4 years old). The median study sample size was 60 participants (ranging from 18 to 466 participants) and length was 37 weeks (ranging from 12 weeks to 2 years). There were variations in the dose reduction strategies in terms of speed of reduction (i.e. gradual in about half of the studies (within 2 to 16 weeks) and abrupt in the other half), and in terms of degree of reduction (i.e. median planned reduction of 66% of the dose up to complete withdrawal in three studies). We assessed risk of bias across outcomes predominantly as some concerns or high risk.\nNo study reported data on the number of participants with a clinically important change in quality of life or functioning, and only eight studies reported continuous data on scales measuring quality of life or functioning. 
There was no difference between dose reduction and continuation on scales measuring quality of life (standardised mean difference (SMD) −0.01, 95% confidence interval (CI) −0.17 to 0.15, 6 RCTs, n = 719, I2 = 0%, moderate certainty evidence) and scales measuring functioning (SMD 0.03, 95% CI −0.10 to 0.17, 6 RCTs, n = 966, I2 = 0%, high certainty evidence).\nDose reduction in comparison to continuation may increase the risk of rehospitalisation based on data from eight studies with estimable effect sizes; however, the 95% CI does not exclude the possibility of no difference (risk ratio (RR) 1.53, 95% CI 0.84 to 2.81, 8 RCTs, n = 1413, I2 = 59% (moderate heterogeneity), very low certainty evidence). Similarly, dose reduction increased the risk of relapse based on data from 20 studies (RR 2.16, 95% CI 1.52 to 3.06, 20 RCTs, n = 2481, I2 = 70% (substantial heterogeneity), low certainty evidence).\nMore participants in the dose reduction group in comparison to the continuation group left the study early due to adverse effects (RR 2.20, 95% CI 1.39 to 3.49, 6 RCTs with estimable effect sizes, n = 1079, I2 = 0%, moderate certainty evidence) and for any reason (RR 1.38, 95% CI 1.05 to 1.81, 12 RCTs, n = 1551, I2 = 48% (moderate heterogeneity), moderate certainty evidence).\nLastly, there was no difference between the dose reduction and continuation groups in the number of participants with at least one adverse effect based on data from four studies with estimable effect sizes (RR 1.03, 95% CI 0.94 to 1.12, 5 RCTs, n = 998 (4 RCTs, n = 980 with estimable effect sizes), I2 = 0%, moderate certainty evidence).\nAuthors' conclusions\nThis review synthesised the latest evidence on the reduction of antipsychotic doses for stable individuals with schizophrenia. There was no difference between dose reduction and continuation groups in quality of life, functioning, and number of participants with at least one adverse effect. However, there was a higher risk for relapse and dropouts, and potentially for rehospitalisations, with dose reduction. Of note, the majority of the trials focused on relapse prevention rather than potential beneficial outcomes on quality of life, functioning, and adverse effects, and in some studies there was rapid and substantial reduction of doses. Further well-designed RCTs are therefore needed to provide more definitive answers.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to February 2021."
}
] |
query-laysum
|
8035
|
[
{
"role": "user",
"content": "Abstract: Background\nAllopurinol, a xanthine oxidase inhibitor, is considered one of the most effective urate-lowering drugs and is frequently used in the treatment of chronic gout.\nObjectives\nTo assess the efficacy and safety of allopurinol compared with placebo and other urate-lowering therapies for treating chronic gout.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE on 14 January 2014. We also handsearched the 2011 to 2012 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) abstracts, trial registers and regulatory agency drug safety databases.\nSelection criteria\nAll randomised controlled trials (RCTs) or quasi-randomised controlled clinical trials (CCTs) that compared allopurinol with a placebo or an active therapy in adults with chronic gout.\nData collection and analysis\nWe extracted and analysed data using standard methods for Cochrane reviews. The major outcomes of interest were frequency of acute gout attacks, serum urate normalisation, pain, function, tophus regression, study participant withdrawal due to adverse events (AE) and serious adverse events (SAE). We assessed the quality of the body of evidence for these outcomes using the GRADE approach.\nMain results\nWe included 11 trials (4531 participants) that compared allopurinol (various doses) with placebo (two trials); febuxostat (four trials); benzbromarone (two trials); colchicine (one trial); probenecid (one trial); continuous versus intermittent allopurinol (one trial) and different doses of allopurinol (one trial). Only one trial was at low risk of bias in all domains. We deemed allopurinol versus placebo the main comparison, and allopurinol versus febuxostat and versus benzbromarone as the most clinically relevant active comparisons and restricted reporting to these comparisons here.\nModerate-quality evidence from one trial (57 participants) indicated allopurinol 300 mg daily probably does not reduce the rate of gout attacks (2/26 with allopurinol versus 3/25 with placebo; risk ratio (RR) 0.64, 95% confidence interval (CI) 0.12 to 3.52) but increases the proportion of participants achieving a target serum urate over 30 days (25/26 with allopurinol versus 0/25 with placebo, RR 49.11, 95% CI 3.15 to 765.58; number needed to treat for an additional beneficial outcome (NNTB) 1). In two studies (453 participants), there was no significant increase in withdrawals due to AE (6% with allopurinol versus 4% with placebo, RR 1.36, 95% CI 0.61 to 3.08) or SAE (2% with allopurinol versus 1% with placebo, RR 1.93, 95% CI 0.48 to 7.80). One trial reported no difference in pain reduction or tophus regression, but did not report outcome data or measures of variance sufficiently and we could not calculate the differences between groups. Neither trial reported function.\nLow-quality evidence from three trials (1136 participants) indicated there may be no difference in the incidence of acute gout attacks with allopurinol up to 300 mg daily versus febuxostat 80 mg daily over eight to 24 weeks (21% with allopurinol versus 23% with febuxostat, RR 0.89, 95% CI 0.71 to 1.1); however more participants may achieve target serum urate level (four trials; 2618 participants) with febuxostat 80 mg daily versus allopurinol 300 mg daily (38% with allopurinol versus 70% with febuxostat, RR 0.56, 95% CI 0.48 to 0.65, NNTB with febuxostat 4). 
Two trials reported no difference in tophus regression between allopurinol and febuxostat over a 28- to 52-week period; but as the trialists did not provide variance, we could not calculate the mean difference between groups. The trials did not report pain reduction or function. Moderate-quality evidence from pooled data from three trials (2555 participants) comparing allopurinol up to 300 mg daily versus febuxostat 80 mg daily indicated no difference in the number of withdrawals due to AE (7% with allopurinol versus 8% with febuxostat, RR 0.89, 95% CI 0.62 to 1.26) or SAE (4% with allopurinol versus 4% with febuxostat, RR 1.13, 95% CI 0.71 to 1.82) over a 24- to 52-week period.\nLow-quality evidence from one trial (65 participants) indicated there may be no difference in the incidence of acute gout attacks with allopurinol up to 600 mg daily compared with benzbromarone up to 200 mg daily over a four-month period (0/30 with allopurinol versus 1/25 with benzbromarone, RR 0.28, 95% CI 0.01 to 6.58). Based on the pooled results of two trials (102 participants), there was moderate-quality evidence of no probable difference in the proportion of participants achieving a target serum urate level with allopurinol versus benzbromarone (58% with allopurinol versus 74% with benzbromarone, RR 0.79, 95% CI 0.56 to 1.11). Low-quality evidence from two studies indicated there may be no difference in the number of participants who withdrew due to AE with allopurinol versus benzbromarone over a four- to nine-month period (6% with allopurinol versus 7% with benzbromarone, pooled RR 0.80, 95% CI 0.18 to 3.58). There were no SAEs. They did not report tophi regression, pain and function.\nAll other comparisons were supported by small, single studies only, limiting conclusions.\nAuthors' conclusions\nOur review found low- to moderate-quality evidence indicating similar effects on withdrawals due to AEs and SAEs and incidence of acute gout attacks when allopurinol (100 to 600 mg daily) was compared with placebo, benzbromarone (100 to 200 mg daily) or febuxostat (80 mg daily). There was moderate-quality evidence of little or no difference in the proportion of participants achieving target serum urate when allopurinol was compared with benzbromarone. However, allopurinol seemed more successful than placebo and may be less successful than febuxostat (80 mg daily) in achieving a target serum urate level (6 mg/dL or less; 0.36 mmol/L or less) based on moderate- to low-quality evidence. Single studies reported no difference in pain reduction when allopurinol (300 mg daily) was compared with placebo over 10 days, and no difference in tophus regression when allopurinol (200 to 300 mg daily) was compared with febuxostat (80 mg daily). None of the trials reported on function, health-related quality of life or participant global assessment of treatment success, where further research would be useful.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In people with chronic gout, moderate-quality evidence indicated that, compared with placebo, allopurinol (100 to 300 mg daily) probably does not reduce the number of acute gout attacks, but does increase the proportion achieving target serum urate levels, without increasing withdrawals due to AEs or serious adverse event rates. Further research may change the estimates. Pain and tophus regression were not reported sufficiently to calculate group differences. Function was not measured.\nIn people with chronic gout, compared with febuxostat (80 mg daily), low-quality evidence indicated that allopurinol (100 to 300 mg daily) may not reduce the number of acute gout attacks, and may be less effective in achieving target serum urate levels, without increasing withdrawals due to AEs, or serious adverse event rates. Further research is likely to change the estimates. Tophus regression was not reported sufficiently to calculate group differences. Pain and function were not measured.\nIn people with chronic gout, compared with benzbromarone (up to 200 mg daily), allopurinol (up to 600 mg daily) may not reduce the number of acute gout attacks (low-quality evidence), probably does not increase the proportion achieving target serum urate levels, or the withdrawals due to AEs or serious adverse event rates (moderate-quality evidence). Further research may change the estimates. Tophus regression was not reported sufficiently to calculate group differences and pain and function were not measured.\nThe evidence was downgraded due to limitations in study design indicating potential bias, and possible imprecision. All other comparisons were supported by small, single studies only, limiting conclusions."
}
] |
query-laysum
|
8036
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a review first published in 2003 and updated in 2012.\nKetamine is a commonly used anaesthetic agent, and in subanaesthetic doses is also given as an adjuvant to opioids for the treatment of refractory cancer pain, when opioids alone or in combination with appropriate adjuvant analgesics prove to be ineffective. Ketamine is known to have psychomimetic (including hallucinogenic), urological, and hepatic adverse effects.\nObjectives\nTo determine the effectiveness and adverse effects of ketamine as an adjuvant to opioids for refractory cancer pain in adults.\nSearch methods\nFor this update, we searched MEDLINE (OVID) to December 2016. We searched CENTRAL (CRSO), Embase (OVID) and two clinical trial registries to January 2017.\nSelection criteria\nThe intervention considered by this review was the addition of ketamine, given by any route of administration, in any dose, to pre-existing opioid treatment given by any route and in any dose, compared with placebo or active control. We included studies with a group size of at least 10 participants who completed the trial.\nData collection and analysis\nTwo review authors independently assessed the search results and performed 'Risk of bias' assessments. We aimed to extract data on patient-reported pain intensity, total opioid consumption over the study period; use of rescue medication; adverse events; measures of patient satisfaction/preference; function; and distress. We also assessed participant withdrawal (dropout) from trial. We assessed the quality of the evidence using GRADE (Grading of Recommendations Assessment, Development and Evaluation).\nMain results\nOne new study (185 participants) was identified by the updated search and included in the review. We included a total of three studies in this update.\nTwo small studies, both with cross-over design, with 20 and 10 participants respectively, were eligible for inclusion in the original review. One study with 20 participants examined the addition of intrathecal ketamine to intrathecal morphine, compared with intrathecal morphine alone. The second study with 10 participants examined the addition of intravenous ketamine bolus in two different doses to ongoing morphine therapy, compared with placebo. Both of these studies reported reduction in pain intensity and reduction in morphine requirements when ketamine was added to opioid for refractory cancer pain. The new study identified by the updated search had a parallel group design and 185 participants. This placebo-controlled study examined rapid titration of subcutaneous ketamine to high dose (500 mg) in participants who were using different opioids. There were no differences between groups for patient-reported pain intensity.\nPooling of the data from the three included trials was not appropriate because of clinical heterogeneity.\nThe study examining intrathecal drug administration reported no adverse events related to ketamine. In the study using intravenous bolus administration, ketamine caused hallucinations in four of 10 participants. In the rapid dose escalation/high-dose subcutaneous ketamine study, there was almost twice the incidence of adverse events in the ketamine group, compared to the placebo group, with the most common adverse events being needle site irritation and cognitive disturbance. 
Two serious adverse events (bradyarrhythmia and cardiac arrest) thought to be related to ketamine were also reported in this trial.\nFor all three studies there was an unclear risk of bias overall. Using GRADE, we judged the quality of the evidence to be very low due to study limitations and imprecision due to the small number of participants in all comparisons.\nAuthors' conclusions\nCurrent evidence is insufficient to assess the benefits and harms of ketamine as an adjuvant to opioids for the relief of refractory cancer pain. The evidence was of very low quality, meaning that it does not provide a reliable indication of the likely effect, and the likelihood that the effect will be substantially different is high. Rapid dose escalation of ketamine to high dose (500 mg) does not appear to have clinical benefit and may be associated with serious adverse events. More randomised controlled trials (RCTs) examining specific low-dose ketamine clinical regimens in current use are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The two smallest studies reported that the addition of ketamine to morphine reduced pain intensity and morphine requirements. The third study which used high doses of ketamine reported no clinical benefit of adding ketamine to different opioids. Increased doses of ketamine in some participants caused side effects such as hallucinations. The study which examined high doses of ketamine reported two serious adverse events, which may have been related to ketamine. Although two out of three studies reported reduction in pain, this could be due to chance in such small studies."
}
] |
query-laysum
|
8037
|
[
{
"role": "user",
"content": "Abstract: Background\nChildren and adolescents who have experienced trauma are at high risk of developing post-traumatic stress disorder (PTSD) and other negative emotional, behavioural and mental health outcomes, all of which are associated with high personal and health costs. A wide range of psychological treatments are used to prevent negative outcomes associated with trauma in children and adolescents.\nObjectives\nTo assess the effects of psychological therapies in preventing PTSD and associated negative emotional, behavioural and mental health outcomes in children and adolescents who have undergone a traumatic event.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Group's Specialised Register to 29 May 2015. This register contains reports of relevant randomised controlled trials from The Cochrane Library (all years), EMBASE (1974 to date), MEDLINE (1950 to date) and PsycINFO (1967 to date). We also checked reference lists of relevant studies and reviews. We did not restrict the searches by date, language or publication status.\nSelection criteria\nAll randomised controlled trials of psychological therapies compared with a control such as treatment as usual, waiting list or no treatment, pharmacological therapy or other treatments in children or adolescents who had undergone a traumatic event.\nData collection and analysis\nTwo members of the review group independently extracted data. We calculated odds ratios for binary outcomes and standardised mean differences for continuous outcomes using a random-effects model. We analysed data as short-term (up to and including one month after therapy), medium-term (one month to one year after therapy) and long-term (one year or longer).\nMain results\nInvestigators included 6201 participants in the 51 included trials. Twenty studies included only children, two included only preschool children and ten only adolescents; all others included both children and adolescents. Participants were exposed to sexual abuse in 12 trials, to war or community violence in ten, to physical trauma and natural disaster in six each and to interpersonal violence in three; participants had suffered a life-threatening illness and had been physically abused or maltreated in one trial each. Participants in remaining trials were exposed to a range of traumas.\nMost trials compared a psychological therapy with a control such as treatment as usual, wait list or no treatment. Seventeen trials used cognitive-behavioural therapy (CBT); four used family therapy; three required debriefing; two trials each used eye movement desensitisation and reprocessing (EMDR), narrative therapy, psychoeducation and supportive therapy; and one trial each provided exposure and CBT plus narrative therapy. Eight trials compared CBT with supportive therapy, two compared CBT with EMDR and one trial each compared CBT with psychodynamic therapy, exposure plus supportive therapy with supportive therapy alone and narrative therapy plus CBT versus CBT alone. 
Four trials compared individual delivery of psychological therapy to a group model of the same therapy, and one compared CBT for children versus CBT for both mothers and children.\nThe likelihood of being diagnosed with PTSD in children and adolescents who received a psychological therapy was significantly reduced compared to those who received no treatment, treatment as usual or were on a waiting list for up to a month following treatment (odds ratio (OR) 0.51, 95% confidence interval (CI) 0.34 to 0.77; number needed to treat for an additional beneficial outcome (NNTB) 6.25, 95% CI 3.70 to 16.67; five studies; 874 participants). However the overall quality of evidence for the diagnosis of PTSD was rated as very low. PTSD symptoms were also significantly reduced for a month after therapy (standardised mean difference (SMD) -0.42, 95% CI -0.61 to -0.24; 15 studies; 2051 participants) and the quality of evidence was rated as low. These effects of psychological therapies were not apparent over the longer term.\nCBT was found to be no more or less effective than EMDR and supportive therapy in reducing diagnosis of PTSD in the short term (OR 0.74, 95% CI 0.29 to 1.91; 2 studies; 160 participants), however this was considered very low quality evidence. For reduction of PTSD symptoms in the short term, there was a small effect favouring CBT over EMDR, play therapy and supportive therapies (SMD -0.24, 95% CI -0.42 to -0.05; 7 studies; 466 participants). The quality of evidence for this outcome was rated as moderate.\nWe did not identify any studies that compared pharmacological therapies with psychological therapies.\nAuthors' conclusions\nThe meta-analyses in this review provide some evidence for the effectiveness of psychological therapies in prevention of PTSD and reduction of symptoms in children and adolescents exposed to trauma for up to a month. However, our confidence in these findings is limited by the quality of the included studies and by substantial heterogeneity between studies. Much more evidence is needed to demonstrate the relative effectiveness of different psychological therapies for children exposed to trauma, particularly over the longer term. High-quality studies should be conducted to compare these therapies.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Children and adolescents who have experienced trauma are at high risk of developing post-traumatic stress disorder (PTSD) and negative psychological and social outcomes."
}
] |
query-laysum
|
8038
|
[
{
"role": "user",
"content": "Abstract: Background\nGestational diabetes (GDM) is glucose intolerance, first recognised in pregnancy and usually resolving after birth. GDM is associated with both short- and long-term adverse effects for the mother and her infant. Lifestyle interventions are the primary therapeutic strategy for many women with GDM.\nObjectives\nTo evaluate the effects of combined lifestyle interventions with or without pharmacotherapy in treating women with gestational diabetes.\nSearch methods\nWe searched the Pregnancy and Childbirth Group's Trials Register (14 May 2016), ClinicalTrials.gov, WHO International Clinical Trials Registry Platform (ICTRP) (14th May 2016) and reference lists of retrieved studies.\nSelection criteria\nWe included only randomised controlled trials comparing a lifestyle intervention with usual care or another intervention for the treatment of pregnant women with GDM. Quasi-randomised trials were excluded. Cross-over trials were not eligible for inclusion. Women with pre-existing type 1 or type 2 diabetes were excluded.\nData collection and analysis\nWe used standard methodological procedures expected by the Cochrane Collaboration. All selection of studies, data extraction was conducted independently by two review authors.\nMain results\nFifteen trials (in 45 reports) are included in this review (4501 women, 3768 infants). None of the trials were funded by a conditional grant from a pharmaceutical company. The lifestyle interventions included a wide variety of components such as education, diet, exercise and self-monitoring of blood glucose. The control group included usual antenatal care or diet alone. Using GRADE methodology, the quality of the evidence ranged from high to very low quality. The main reasons for downgrading evidence were inconsistency and risk of bias. We summarised the following data from the important outcomes of this review.\nLifestyle intervention versus control group\nFor the mother:\nThere was no clear evidence of a difference between lifestyle intervention and control groups for the risk of hypertensive disorders of pregnancy (pre-eclampsia) (average risk ratio (RR) 0.70; 95% confidence interval (CI) 0.40 to 1.22; four trials, 2796 women; I2 = 79%, Tau2 = 0.23; low-quality evidence); caesarean section (average RR 0.90; 95% CI 0.78 to 1.05; 10 trials, 3545 women; I2 = 48%, Tau2 = 0.02; low-quality evidence); development of type 2 diabetes(up to a maximum of 10 years follow-up) (RR 0.98, 95% CI 0.54 to 1.76; two trials, 486 women; I2 = 16%; low-quality evidence); perineal trauma/tearing (RR 1.04, 95% CI 0.93 to 1.18; one trial, n = 1000 women; moderate-quality evidence) or induction of labour (average RR 1.20, 95% CI 0.99 to 1.46; four trials, n = 2699 women; I2 = 37%; high-quality evidence).\nMore women in the lifestyle intervention group had met postpartum weight goals one year after birth than in the control group (RR 1.75, 95% CI 1.05 to 2.90; 156 women; one trial, low-quality evidence). Lifestyle interventions were associated with a decrease in the risk of postnatal depression compared with the control group (RR 0.49, 95% CI 0.31 to 0.78; one trial, n = 573 women; low-quality evidence).\nFor the infant/child/adult:\nLifestyle interventions were associated with a reduction in the risk of being born large-for-gestational age (LGA) (RR 0.60, 95% CI 0.50 to 0.71; six trials, 2994 infants; I2 = 4%; moderate-quality evidence). 
Birthweight and the incidence of macrosomia were lower in the lifestyle intervention group.\nExposure to the lifestyle intervention was associated with decreased neonatal fat mass compared with the control group (mean difference (MD) -37.30 g, 95% CI -63.97 to -10.63; one trial, 958 infants; low-quality evidence). In childhood, there was no clear evidence of a difference between groups for body mass index (BMI) ≥ 85th percentile (RR 0.91, 95% CI 0.75 to 1.11; three trials, 767 children; I2 = 4%; moderate-quality evidence).\nThere was no clear evidence of a difference between lifestyle intervention and control groups for the risk of perinatal death (RR 0.09, 95% CI 0.01 to 1.70; two trials, 1988 infants; low-quality evidence). Of 1988 infants, only five events were reported in total in the control group and there were no events in the lifestyle group. There was no clear evidence of a difference between lifestyle intervention and control groups for a composite of serious infant outcome/s (average RR 0.57, 95% CI 0.21 to 1.55; two trials, 1930 infants; I2 = 82%, Tau2 = 0.44; very low-quality evidence) or neonatal hypoglycaemia (average RR 0.99, 95% CI 0.65 to 1.52; six trials, 3000 infants; I2 = 48%, Tau2 = 0.12; moderate-quality evidence).\nDiabetes and adiposity in adulthood and neurosensory disability in later childhood were not prespecified or reported as outcomes for any of the trials included in this review.\nAuthors' conclusions\nLifestyle interventions are the primary therapeutic strategy for women with GDM. Women receiving lifestyle interventions were less likely to have postnatal depression and were more likely to achieve postpartum weight goals. Exposure to lifestyle interventions was associated with a decreased risk of the baby being born LGA and decreased neonatal adiposity. Long-term maternal and childhood/adulthood outcomes were poorly reported.\nThe value of lifestyle interventions in low- and middle-income countries or for different ethnicities remains unclear. The longer-term benefits or harms of lifestyle interventions remain unclear due to limited reporting.\nThe contribution of individual components of lifestyle interventions could not be assessed. Ten per cent of participants also received some form of pharmacological therapy. Lifestyle interventions are useful as the primary therapeutic strategy and most commonly include healthy eating, physical activity and self-monitoring of blood glucose concentrations.\nFuture research could focus on which specific interventions are most useful (as the sole intervention without pharmacological treatment), which health professionals should give them and the optimal format for providing the information. Evaluation of long-term outcomes for the mother and her child should be a priority when planning future trials. There has been no in-depth exploration of the costs ‘saved’ from reduction in risk of LGA/macrosomia and potential longer-term risks for the infants.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Lifestyle interventions provide benefits to women with GDM and their babies. The interventions are useful as the primary therapeutic strategy and generally include, as a minimum, healthy eating, physical activity and self-monitoring of blood sugar levels.\nFurture research could focus on the effective components of lifestyle interventions and the use of lifestyle interventions as the sole intervention without pharmacological treatment. Future studies also need to consider long-term outcomes for the mother and her child as a priority when planning future trials."
}
] |
query-laysum
|
8039
|
[
{
"role": "user",
"content": "Abstract: Background\nObstructive sleep-disordered breathing (oSDB) is a condition that encompasses breathing problems when asleep, due to an obstruction of the upper airways, ranging in severity from simple snoring to obstructive sleep apnoea syndrome (OSAS). It affects both children and adults. In children, hypertrophy of the tonsils and adenoid tissue is thought to be the commonest cause of oSDB. As such, tonsillectomy - with or without adenoidectomy - is considered an appropriate first-line treatment for most cases of paediatric oSDB.\nObjectives\nTo assess the benefits and harms of tonsillectomy with or without adenoidectomy compared with non-surgical management of children with oSDB.\nSearch methods\nWe searched the Cochrane Register of Studies Online, PubMed, EMBASE, CINAHL, Web of Science, Clinicaltrials.gov, ICTRP and additional sources for published and unpublished trials. The date of the search was 5 March 2015.\nSelection criteria\nRandomised controlled trials comparing the effectiveness and safety of (adeno)tonsillectomy with non-surgical management in children with oSDB aged 2 to 16 years.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nThree trials (562 children) met our inclusion criteria. Two were at moderate to high risk of bias and one at low risk of bias. We did not pool the results because of substantial clinical heterogeneity. They evaluated three different groups of children: those diagnosed with mild to moderate OSAS by polysomnography (PSG) (453 children aged five to nine years; low risk of bias; CHAT trial), those with a clinical diagnosis of oSDB but with negative PSG recordings (29 children aged two to 14 years; moderate to high risk of bias; Goldstein) and children with Down syndrome or mucopolysaccharidosis (MPS) diagnosed with mild to moderate OSAS by PSG (80 children aged six to 12 years; moderate to high risk of bias; Sudarsan). Moreover, the trials included two different comparisons: adenotonsillectomy versus no surgery (CHAT trial and Goldstein) or versus continuous positive airway pressure (CPAP) (Sudarsan).\nDisease-specific quality of life and/or symptom score (using a validated instrument): first primary outcome\nIn the largest trial with lowest risk of bias (CHAT trial), at seven months, mean scores for those instruments measuring disease-specific quality of life and/or symptoms were lower (that is, better quality of life or fewer symptoms) in children receiving adenotonsillectomy than in those managed by watchful waiting:\n- OSA-18 questionnaire (scale 18 to 126): 31.8 versus 49.5 (mean difference (MD) -17.7, 95% confidence interval (CI) -21.2 to -14.2);\n- PSQ-SRBD questionnaire (scale 0 to 1): 0.2 versus 0.5 (MD -0.3, 95% CI -0.31 to -0.26);\n- Modified Epworth Sleepiness Scale (scale 0 to 24): 5.1 versus 7.1 (MD -2.0, 95% CI -2.9 to -1.1).\nNo data on this primary outcome were reported in the Goldstein trial.\nIn the Sudarsan trial, the mean OSA-18 score at 12 months did not significantly differ between the adenotonsillectomy and CPAP groups. 
The mean modified Epworth Sleepiness Scale scores did not differ at six months, but were lower in the surgery group at 12 months: 5.5 versus 7.9 (MD -2.4, 95% CI -3.1 to -1.7).\nAdverse events: second primary outcome\nIn the CHAT trial, 15 children experienced a serious adverse event: 6/194 (3%) in the adenotonsillectomy group and 9/203 (4%) in the control group (RD -1%, 95% CI -5% to 2%).\nNo major complications were reported in the Goldstein trial.\nIn the Sudarsan trial, 2/37 (5%) developed a secondary haemorrhage after adenotonsillectomy, while 1/36 (3%) developed a rash on the nasal dorsum secondary to the CPAP mask (RD -3%, 95% CI -6% to 12%).\nSecondary outcomes\nIn the CHAT trial, at seven months, mean scores for generic caregiver-rated quality of life were higher in children receiving adenotonsillectomy than in those managed by watchful waiting. No data on this outcome were reported by Sudarsan and Goldstein.\nIn the CHAT trial, at seven months, more children in the surgery group had normalisation of respiratory events during sleep as measured by PSG than those allocated to watchful waiting: 153/194 (79%) versus 93/203 (46%) (RD 33%, 95% CI 24% to 42%). In the Goldstein trial, at six months, PSG recordings were similar between groups and in the Sudarsan trial resolution of OSAS (Apnoea/Hypopnoea Index score below 1) did not significantly differ between the adenotonsillectomy and CPAP groups.\nIn the CHAT trial, at seven months, neurocognitive performance and attention and executive function had not improved with surgery: scores were similar in both groups. In the CHAT trial, at seven months, mean scores for caregiver-reported ratings of behaviour were lower (that is, better behaviour) in children receiving adenotonsillectomy than in those managed by watchful waiting, however, teacher-reported ratings of behaviour did not significantly differ.\nNo data on these outcomes were reported by Goldstein and Sudarsan.\nAuthors' conclusions\nIn otherwise healthy children, without a syndrome, of older age (five to nine years), and diagnosed with mild to moderate OSAS by PSG, there is moderate quality evidence that adenotonsillectomy provides benefit in terms of quality of life, symptoms and behaviour as rated by caregivers and high quality evidence that this procedure is beneficial in terms of PSG parameters. At the same time, high quality evidence indicates no benefit in terms of objective measures of attention and neurocognitive performance compared with watchful waiting. Furthermore, PSG recordings of almost half of the children managed non-surgically had normalised by seven months, indicating that physicians and parents should carefully weigh the benefits and risks of adenotonsillectomy against watchful waiting in these children. This is a condition that may recover spontaneously over time.\nFor non-syndromic children classified as having oSDB on purely clinical grounds but with negative PSG recordings, the evidence on the effects of adenotonsillectomy is of very low quality and is inconclusive.\nLow-quality evidence suggests that adenotonsillectomy and CPAP may be equally effective in children with Down syndrome or MPS diagnosed with mild to moderate OSAS by PSG.\nWe are unable to present data on the benefits of adenotonsillectomy in children with oSDB aged under five, despite this being a population in whom this procedure is often performed for this purpose.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In the largest trial with lowest risk of bias (CHAT trial), at seven months, mean scores for disease-specific quality of life and/or symptoms were lower (meaning better quality of life or fewer symptoms) in children receiving adenotonsillectomy than in those managed by watchful waiting.\nIn the Sudarsan trial, the mean OSAS quality of life score at 12 months did not differ significantly between the adenotonsillectomy and CPAP groups. The mean modified Epworth Sleepiness Scale score did not differ at six months, but was lower in the surgery group at 12 months.\nAdverse events\nIn the CHAT trial, 15 children experienced a serious adverse event: 6/194 (3%) in the adenotonsillectomy group and 9/203 (4%) in the control group. No major complications were reported by Goldstein. In the Sudarsan trial, 2/37 children (5%) developed a postoperative bleed in the surgery group and 1/36 (3%) in the CPAP group developed a rash due to the breathing mask.\nSecondary outcomes\nIn the CHAT trial, at seven months, mean scores for general quality of life were higher in children receiving adenotonsillectomy than those managed by watchful waiting.\nIn the CHAT trial, at seven months, more children in the surgery group had normalisation of overnight sleep study findings than those assigned to watchful waiting. At six months, sleep study recordings were similar between groups in the Goldstein trial and resolution of OSAS based on overnight sleep study findings did not significantly differ between the adenotonsillectomy and CPAP groups in the Sudarsan trial.\nIn the CHAT trial, at seven months, neurocognitive performance and attention and executive function scores were similar in both groups.\nIn the CHAT trial, at seven months, mean scores for caregiver-reported ratings of behaviour were lower (meaning better behaviour) in children receiving adenotonsillectomy than in those managed by watchful waiting. However, teacher-reported ratings of behaviour did not significantly differ."
}
] |
query-laysum
|
8040
|
[
{
"role": "user",
"content": "Abstract: Background\nBiologic disease-modifying anti-rheumatic drugs (biologics) are highly effective in treating rheumatoid arthritis (RA), however there are few head-to-head biologic comparison studies. We performed a systematic review, a standard meta-analysis and a network meta-analysis (NMA) to update the 2009 Cochrane Overview. This review is focused on the adults with RA who are naive to methotrexate (MTX) that is, receiving their first disease-modifying agent.\nObjectives\nTo compare the benefits and harms of biologics (abatacept, adalimumab, anakinra, certolizumab pegol, etanercept, golimumab, infliximab, rituximab, tocilizumab) and small molecule tofacitinib versus comparator (methotrexate (MTX)/other DMARDs) in people with RA who are naive to methotrexate.\nMethods\nIn June 2015 we searched for randomized controlled trials (RCTs) in CENTRAL, MEDLINE and Embase; and trials registers. We used standard Cochrane methods. We calculated odds ratios (OR) and mean differences (MD) along with 95% confidence intervals (CI) for traditional meta-analyses and 95% credible intervals (CrI) using a Bayesian mixed treatment comparisons approach for network meta-analysis (NMA). We converted OR to risk ratios (RR) for ease of interpretation. We also present results in absolute measures as risk difference (RD) and number needed to treat for an additional beneficial or harmful outcome (NNTB/H).\nMain results\nNineteen RCTs with 6485 participants met inclusion criteria (including five studies from the original 2009 review), and data were available for four TNF biologics (adalimumab (six studies; 1851 participants), etanercept (three studies; 678 participants), golimumab (one study; 637 participants) and infliximab (seven studies; 1363 participants)) and two non-TNF biologics (abatacept (one study; 509 participants) and rituximab (one study; 748 participants)).\nLess than 50% of the studies were judged to be at low risk of bias for allocation sequence generation, allocation concealment and blinding, 21% were at low risk for selective reporting, 53% had low risk of bias for attrition and 89% had low risk of bias for major baseline imbalance. Three trials used biologic monotherapy, that is, without MTX. There were no trials with placebo-only comparators and no trials of tofacitinib. Trial duration ranged from 6 to 24 months. Half of the trials contained participants with early RA (less than two years' duration) and the other half included participants with established RA (2 to 10 years).\nBiologic + MTX versus active comparator (MTX (17 trials (6344 participants)/MTX + methylprednisolone 2 trials (141 participants))\nIn traditional meta-analyses, there was moderate-quality evidence downgraded for inconsistency that biologics with MTX were associated with statistically significant and clinically meaningful benefit versus comparator as demonstrated by ACR50 (American College of Rheumatology scale) and RA remission rates. For ACR50, biologics with MTX showed a risk ratio (RR) of 1.40 (95% CI 1.30 to 1.49), absolute difference of 16% (95% CI 13% to 20%) and NNTB = 7 (95% CI 6 to 8). For RA remission rates, biologics with MTX showed a RR of 1.62 (95% CI 1.33 to 1.98), absolute difference of 15% (95% CI 11% to 19%) and NNTB = 5 (95% CI 6 to 7). 
Biologics with MTX were also associated with a statistically significant, but not clinically meaningful, benefit in physical function (moderate-quality evidence downgraded for inconsistency), with an improvement of HAQ scores of -0.10 (95% CI -0.16 to -0.04 on a 0 to 3 scale), absolute difference -3.3% (95% CI -5.3% to -1.3%) and NNTB = 4 (95% CI 2 to 15).\nWe did not observe evidence of differences between biologics with MTX compared to MTX for radiographic progression (low-quality evidence, downgraded for imprecision and inconsistency) or serious adverse events (moderate-quality evidence, downgraded for imprecision). Based on low-quality evidence, results were inconclusive for withdrawals due to adverse events (RR of 1.32, but 95% confidence interval included possibility of important harm, 0.89 to 1.97). Results for cancer were also inconclusive (Peto OR 0.71, 95% CI 0.38 to 1.33) and downgraded to low-quality evidence for serious imprecision.\nBiologic without MTX versus active comparator (MTX) 3 trials (866 participants)\nThere was no evidence of statistically significant or clinically important differences for ACR50, HAQ, remission (moderate-quality evidence for these benefits, downgraded for imprecision), withdrawals due to adverse events, and serious adverse events (low-quality evidence for these harms, downgraded for serious imprecision). All studies were for TNF biologic monotherapy and none for non-TNF biologic monotherapy. Radiographic progression was not measured.\nAuthors' conclusions\nIn MTX-naive RA participants, there was moderate-quality evidence that, compared with MTX alone, biologics with MTX were associated with absolute and relative clinically meaningful benefits in three of the efficacy outcomes (ACR50, HAQ scores, and RA remission rates). A benefit regarding less radiographic progression with biologics with MTX was not evident (low-quality evidence). We found moderate- to low-quality evidence that biologic therapy with MTX was not associated with any higher risk of serious adverse events compared with MTX, but results were inconclusive for withdrawals due to adverse events and cancer to 24 months.\nTNF biologic monotherapy did not differ statistically significantly or clinically meaningfully from MTX for any of the outcomes (moderate-quality evidence), and no data were available for non-TNF biologic monotherapy.\nWe conclude that the use of biologics with MTX in MTX-naive populations is beneficial and that there is little/inconclusive evidence of harms. More data are needed for tofacitinib, radiographic progression and harms in this patient population to fully assess comparative efficacy and safety.\n\nGiven the provided abstract, please respond to the following query: \"Serious adverse events\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
query-laysum
|
8041
|
[
{
"role": "user",
"content": "Abstract: Background\nCommunication and language development are areas of particular weakness for young children with Down syndrome. Caregivers' interaction with children influences language development, so many early interventions involve training parents how best to respond to their children and provide appropriate language stimulation. Thus, these interventions are mediated through parents, who in turn are trained and coached in the implementation of interventions by clinicians. As the interventions involve a considerable commitment from clinicians and families, we undertook this review to synthesise the evidence of their effectiveness.\nObjectives\nTo assess the effects of parent-mediated interventions for improving communication and language development in young children with Down syndrome. Other outcomes are parental behaviour and responsivity, parental stress and satisfaction, and children's non-verbal means of communicating, socialisation and behaviour.\nSearch methods\nIn January 2018 we searched CENTRAL, MEDLINE, Embase and 14 other databases. We also searched three trials registers, checked the reference lists of relevant reports identified by the electronic searches, searched the websites of professional organizations, and contacted their staff and other researchers working in the field to identify other relevant published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs that compared parent-mediated interventions designed to improve communication and language versus teaching/treatment as usual (TAU) or no treatment or delayed (wait-listed) treatment, in children with Down syndrome aged between birth and six years. We included studies delivering the parent-mediated intervention in conjunction with a clinician-mediated intervention, as long as the intervention group was the only group to receive the former and both groups received the latter.\nData collection and analysis\nWe used standard Cochrane methodological procedures for data collection and analysis.\nMain results\nWe included three studies involving 45 children aged between 29 months and six years with Down syndrome. Two studies compared parent-mediated interventions versus TAU; the third compared a parent-mediated plus clinician-mediated intervention versus a clinician-mediated intervention alone. Treatment duration varied from 12 weeks to six months. One study provided nine group sessions and four individualised home-based sessions over a 13-week period. Another study provided weekly, individual clinic-based or home-based sessions lasting 1.5 to 2 hours, over a six-month period. The third study provided one 2- to 3-hour group session followed by bi-weekly, individual clinic-based sessions plus once-weekly home-based sessions for 12 weeks. Because of the different study designs and outcome measures used, we were unable to conduct a meta-analysis.\nWe judged all three studies to be at high risk of bias in relation to blinding of participants (not possible due to the nature of the intervention) and blinding of outcome assessors, and at an unclear risk of bias for allocation concealment. 
We judged one study to be at unclear risk of selection bias, as authors did not report the methods used to generate the random sequence; at high risk of reporting bias, as they did not report on one assessed outcome; and at high risk of detection bias, as the control group had a cointervention and only parents in the intervention group were made aware of the target words for their children. The sample sizes of each included study were very small, meaning that they are unlikely to be representative of the target population.\nThe findings from the three included studies were inconsistent. Two studies found no differences in expressive or receptive language abilities between the groups, whether measured by direct assessment or parent reports. However, they did find that children in the intervention group could use more targeted vocabulary items or utterances with language targets in certain contexts postintervention, compared to those in the control group; this was not maintained 12 months later. The third study found gains for the intervention group on total-language measures immediately postintervention.\nOne study did not find any differences in parental stress scores between the groups at any time point up to 12 months postintervention. All three studies noted differences in most measures of how the parents talked to and interacted with their children postintervention, and in one study most strategies were maintained in the intervention group at 12 months postintervention. No study reported evidence of language attrition following the intervention in either group, while one study found positive outcomes on children's socialisation skills in the intervention group. One study looked at adherence to the treatment through attendance data, finding that mothers in the intervention group attended seven out of nine group sessions and were present for four home visits. No study measured parental use of the strategies outside of the intervention sessions.\nA grant from the Hospital for Sick Children Foundation (Toronto, Ontario, Canada) funded one study. Another received partial funding from the National Institute of Child Health and Human Development and the Department of Education in the USA. The remaining study did not specify any funding sources.\nIn light of the serious limitations in methodology, and the small number of studies included, we considered the overall quality of the evidence, as assessed by GRADE, to be very low. This means that we have very little confidence in the results, and further research is very likely to have an important impact on our confidence in the estimate of treatment effect.\nAuthors' conclusions\nThere is currently insufficient evidence to determine the effects of parent-mediated interventions for improving the language and communication of children with Down syndrome. We found only three small studies of very low quality. This review highlights the need for well-designed studies, including RCTs, to evaluate the effectiveness of parent-mediated interventions. Trials should use valid, reliable and similar measures of language development, and they should include measures of secondary outcomes more distal to the intervention, such as family well-being. Treatment fidelity, in particular parental dosage of the intervention outside of prescribed sessions, also needs to be documented.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We rated the quality of the evidence in this review as very low, as only three studies fulfilled the criteria for inclusion, and all had small sizes and serious methodological limitations. There is currently insufficient evidence to determine the effect of parent-mediated interventions for improving the communication and language development in young children with Down syndrome."
}
] |
query-laysum
|
8042
|
[
{
"role": "user",
"content": "Abstract: Background\nPalatally displaced canines or PDCs are upper permanent canines, commonly known as 'eye' teeth, that are displaced in the roof of the mouth. This can leave unsightly gaps, cause damage to the surrounding roots (which can be so severe that neighbouring teeth are lost or have to be removed) and, occasionally, result in the development of cysts. PDCs are a frequent dental anomaly, present in 2% to 3% of young people.\nManagement of this problem is both time consuming and expensive. It involves surgical exposure (uncovering) followed by fixed braces for two to three years to bring the canine into alignment within the dental arch. Two techniques for exposing palatal canines are routinely used in the UK: the closed technique and the open technique. The closed technique involves uncovering the canine, attaching an eyelet and gold chain and then suturing the palatal mucosa back over the tooth. The tooth is then moved into position covered by the palatal mucosa. The open technique involves uncovering the canine tooth and removing the overlying palatal tissue to leave it uncovered. The orthodontist can then see the crown of the canine to align it.\nObjectives\nTo assess the effects of using either an open or closed surgical method to expose canines that have become displaced in the roof of the mouth, in terms of success and other clinical and patient-reported outcomes.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health’s Trials Register (to 24 February 2017), the Cochrane Central Register of Controlled Trials (CENTRAL) (in the Cochrane Library, 2017, Issue 1), MEDLINE Ovid (1946 to 24 February 2017), and Embase Ovid (1980 to 24 February 2017). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials assessing young people receiving surgical treatment to correct upper PDCs. There was no restriction on age, presenting malocclusion or type of active orthodontic treatment undertaken. We included unilaterally and bilaterally displaced canines.\nData collection and analysis\nTwo review authors independently screened the results of the electronic searches, extracted data and assessed the risk of bias in the included studies. We attempted to contact study authors for missing data or clarification where feasible. We followed statistical guidelines from the Cochrane Handbook for Systematic Reviews of Interventions for data synthesis.\nMain results\nWe included three studies, involving 146 participants. Two studies were assessed as being at high risk of bias.\nThe main finding of the review was that the two techniques may be equally successful at exposing PDCs (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.93 to 1.06; three studies, 141 participants analysed, low-quality evidence).\nOne surgical failure was due to detachment of the gold chain (closed group). 
One study reported on complications following surgery and found two in the closed group: a post-operative infection requiring antibiotics and pain during alignment of the canine as the gold chain penetrated through the gum tissue of the palate.\nWe were unable to pool data for dental aesthetics, patient-reported pain and discomfort, periodontal health and treatment time; however, individual studies did not find any differences between the surgical techniques (low- to very low-quality evidence).\nAuthors' conclusions\nCurrently, the evidence suggests that neither the open or closed surgical technique for exposing palatally displaced maxillary canine teeth is superior for any of the outcomes included in this review; however, we considered the evidence to be low quality, with two of the three included studies being at high risk of bias. This suggests the need for more high-quality studies. Three ongoing clinical trials have been identified and it is hoped that these will produce data that can be pooled to increase the degree of certainty in these findings.\n\nGiven the provided abstract, please respond to the following query: \"Author conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It does not seem that one surgical technique is better than the other for moving displaced eye teeth into the correct position, or for other outcomes, but this finding is uncertain because the quality of the evidence is low. This suggests the need for more high-quality studies. Three studies are currently in process. When they are completed, we will include them in an update of this review and may be able to reach firmer conclusions."
}
] |
query-laysum
|
8043
|
[
{
"role": "user",
"content": "Abstract: Background\nCardiovascular disease is the most common cause of death globally. Traditionally, centre-based cardiac rehabilitation programmes are offered to individuals after cardiac events to aid recovery and prevent further cardiac illness. Home-based cardiac rehabilitation programmes have been introduced in an attempt to widen access and participation. This is an update of a review previously published in 2009 and 2015.\nObjectives\nTo compare the effect of home-based and supervised centre-based cardiac rehabilitation on mortality and morbidity, exercise-capacity, health-related quality of life, and modifiable cardiac risk factors in patients with heart disease.\nSearch methods\nWe updated searches from the previous Cochrane Review by searching the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), Embase (Ovid), PsycINFO (Ovid) and CINAHL (EBSCO) on 21 September 2016. We also searched two clinical trials registers as well as previous systematic reviews and reference lists of included studies. No language restrictions were applied.\nSelection criteria\nWe included randomised controlled trials, including parallel group, cross-over or quasi-randomised designs) that compared centre-based cardiac rehabilitation (e.g. hospital, gymnasium, sports centre) with home-based programmes in adults with myocardial infarction, angina, heart failure or who had undergone revascularisation.\nData collection and analysis\nTwo review authors independently screened all identified references for inclusion based on pre-defined inclusion criteria. Disagreements were resolved through discussion or by involving a third review author. Two authors independently extracted outcome data and study characteristics and assessed risk of bias. Quality of evidence was assessed using GRADE principles and a Summary of findings table was created.\nMain results\nWe included six new studies (624 participants) for this update, which now includes a total of 23 trials that randomised a total of 2890 participants undergoing cardiac rehabilitation. Participants had an acute myocardial infarction, revascularisation or heart failure. A number of studies provided insufficient detail to enable assessment of potential risk of bias, in particular, details of generation and concealment of random allocation sequencing and blinding of outcome assessment were poorly reported.\nNo evidence of a difference was seen between home- and centre-based cardiac rehabilitation in clinical primary outcomes up to 12 months of follow up: total mortality (relative risk (RR) = 1.19, 95% CI 0.65 to 2.16; participants = 1505; studies = 11/comparisons = 13; very low quality evidence), exercise capacity (standardised mean difference (SMD) = -0.13, 95% CI -0.28 to 0.02; participants = 2255; studies = 22/comparisons = 26; low quality evidence), or health-related quality of life up to 24 months (not estimable). Trials were generally of short duration, with only three studies reporting outcomes beyond 12 months (exercise capacity: SMD 0.11, 95% CI -0.01 to 0.23; participants = 1074; studies = 3; moderate quality evidence). 
However, there was evidence of marginally higher levels of programme completion (RR 1.04, 95% CI 1.00 to 1.08; participants = 2615; studies = 22/comparisons = 26; low quality evidence) by home-based participants.\nAuthors' conclusions\nThis update supports previous conclusions that home- and centre-based forms of cardiac rehabilitation seem to be similarly effective in improving clinical and health-related quality of life outcomes in patients after myocardial infarction or revascularisation, or with heart failure. This finding supports the continued expansion of evidence-based, home-based cardiac rehabilitation programmes. The choice of participating in a more traditional and supervised centre-based programme or a home-based programme may reflect local availability and consider the preference of the individual patient. Further data are needed to determine whether the effects of home- and centre-based cardiac rehabilitation reported in the included short-term trials can be confirmed in the longer term and need to consider adequately powered non-inferiority or equivalence study designs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that home- and centre-based cardiac rehabilitation programmes are similar in benefits measured in terms of numbers of deaths, exercise capacity and health-related quality of life. Further data are needed to confirm if these short-term effects of home- and centre-based cardiac rehabilitation can be sustained over time."
}
] |
query-laysum
|
8044
|
[
{
"role": "user",
"content": "Abstract: Background\nThe post-COVID-19 condition (PCC) consists of a wide array of symptoms including fatigue and impaired daily living. People seek a wide variety of approaches to help them recover.\nA new belief, arising from a few laboratory studies, is that 'microclots' cause the symptoms of PCC. This belief has been extended outside these studies, suggesting that to recover people need plasmapheresis (an expensive process where blood is filtered outside the body). We appraised the laboratory studies, and it was clear that the term 'microclots' is incorrect to describe the phenomenon being described. The particles are amyloid and include fibrin(ogen); amyloid is not a part of a thrombus which is a mix of fibrin mesh and platelets. Initial acute COVID-19 infection is associated with clotting abnormalities; this review concerns amyloid fibrin(ogen) particles in PCC only.\nWe have reported here our appraisal of laboratory studies investigating the presence of amyloid fibrin(ogen) particles in PCC, and of evidence that plasmapheresis may be an effective therapy to remove amyloid fibrin(ogen) particles for treating PCC.\nObjectives\nLaboratory studies review\nTo summarize and appraise the research reports on amyloid fibrin(ogen) particles related to PCC.\nRandomized controlled trials review\nTo assess the evidence of the safety and efficacy of plasmapheresis to remove amyloid fibrin(ogen) particles in individuals with PCC from randomized controlled trials.\nSearch methods\nLaboratory studies review\nWe searched for all relevant laboratory studies up to 27 October 2022 using a comprehensive search strategy which included the search terms ‘COVID’, ‘amyloid’, ‘fibrin’, ‘fibrinogen’.\nRandomized controlled trials review\nWe searched the following databases on 21 October 2022: Cochrane COVID-19 Study Register; MEDLINE (Ovid); Embase (Ovid); and BIOSIS Previews (Web of Science). We also searched the WHO International Clinical Trials Registry Platform and ClinicalTrials.gov for trials in progress.\nSelection criteria\nLaboratory studies review\nLaboratory studies that investigate the presence of amyloid fibrin(ogen) particles in plasma samples from patients with PCC were eligible. This included studies with or without controls.\nRandomized controlled trials review\nStudies were eligible if they were of randomized controlled design and investigated the effectiveness or safety of plasmapheresis for removing amyloid fibrin(ogen) particles for treating PCC.\nData collection and analysis\nTwo review authors applied study inclusion criteria to identify eligible studies and extracted data.\nLaboratory studies review\nWe assessed the risk of bias of included studies using pre-developed methods for laboratory studies. We planned to perform synthesis without meta-analysis (SWiM) as described in our protocol.\nRandomized controlled trials review\nWe planned that if we identified any eligible studies, we would assess risk of bias and report results with 95% confidence intervals. The primary outcome was recovery, measured using the Post-COVID-19 Functional Status Scale (absence of symptoms related to the illness, ability to do usual daily activities, and a return to a previous state of health and mind).\nMain results\nLaboratory studies review\nWe identified five laboratory studies. Amyloid fibrin(ogen) particles were identified in participants across all studies, including those with PCC, healthy individuals, and those with diabetes. 
The results of three studies were based on visual images of amyloid fibrin(ogen) particles, which did not quantify the amount or size of the particles identified. Formal risk of bias assessment showed concerns in how the studies were conducted and reported. This means the results were insufficient to support the belief that amyloid fibrin(ogen) particles are associated with PCC, or to determine whether there is a difference in the amount or size of amyloid fibrin(ogen) particles in the plasma of people with PCC compared to healthy controls.\nRandomized controlled trials review\nWe identified no trials meeting our inclusion criteria.\nAuthors' conclusions\nIn the absence of reliable research showing that amyloid fibrin(ogen) particles contribute to the pathophysiology of PCC, there is no rationale for plasmapheresis to remove amyloid fibrin(ogen) particles in PCC. Plasmapheresis for this indication should not be used outside the context of a well-conducted randomized controlled trial.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "1. The term 'microclots' is not the correct term for the particles being investigated in people with post-COVID-19 syndrome, as they are not clots. The term 'amyloid fibrin(ogen) particles' is more appropriate.\n2. The evidence shows that amyloid fibrin(ogen) particles are found in healthy people and those with other diseases, so they are not unique to post-COVID-19 condition.\n3. Patients should not receive plasmapheresis for this indication outside the context of a properly conducted placebo (dummy)-controlled randomized clinical trial (a type of study where participants are randomly assigned to one of two or more treatment groups)."
}
] |
query-laysum
|
8045
|
[
{
"role": "user",
"content": "Abstract: Background\nThe prevalence of overweight and obesity is increasing globally, an increase which has major implications for both population health and costs to health services. This is an update of a Cochrane Review.\nObjectives\nTo assess the effects of strategies to change the behaviour of health professionals or the organisation of care compared to standard care, to promote weight reduction in children and adults with overweight or obesity.\nSearch methods\nWe searched the following databases for primary studies up to September 2016: CENTRAL, MEDLINE, Embase, CINAHL, DARE and PsycINFO. We searched the reference lists of included studies and two trial registries.\nSelection criteria\nWe considered randomised trials that compared routine provision of care with interventions aimed either at changing the behaviour of healthcare professionals or the organisation of care to promote weight reduction in children and adults with overweight or obesity.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane when conducting this review. We report the results for the professional interventions and the organisational interventions in seven 'Summary of findings' tables.\nMain results\nWe identified 12 studies for inclusion in this review, seven of which evaluated interventions targeting healthcare professional and five targeting the organisation of care. Eight studies recruited adults with overweight or obesity and four recruited children with obesity. Eight studies had an overall high risk of bias, and four had a low risk of bias. In total, 139 practices provided care to 89,754 people, with a median follow-up of 12 months.\nProfessional interventions\nEducational interventions aimed at general practitioners (GPs), may slightly reduce the weight of participants (mean difference (MD) -1.24 kg, 95% confidence interval (CI) -2.84 to 0.37; 3 studies, N = 1017 adults; low-certainty evidence).\nTailoring interventions to improve GPs' compliance with obesity guidelines probably leads to little or no difference in weight loss (MD 0.05 (kg), 95% CI -0.32 to 0.41; 1 study, N = 49,807 adults; moderate-certainty evidence).\nIt is uncertain if providing doctors with reminders results in a greater weight reduction than standard care (men: MD -11.20 kg, 95% CI -20.66 kg to -1.74 kg, and women: MD -1.30 kg, 95% CI [-7.34, 4.74] kg; 1 study, N = 90 adults; very low-certainty evidence).\nProviding clinicians with a clinical decision support (CDS) tool to assist with obesity management at the point of care leads to little or no difference in the body mass index (BMI) z-score of children (MD -0.08, 95% CI -0.15 to -0.01 in 378 children; moderate-certainty evidence), CDS tools may lead to little or no difference in weight loss in adults: MD -0.095 kg (-0.21 lbs), P = 0.47; 1 study, N = 35,665; low-certainty evidence.\nOrganisational interventions\nAdults with overweight or obesity may lose more weight if the care was provided by a dietitian (by -5.60 kg, 95% CI -4.83 kg to -6.37 kg) or by a doctor-dietitian team (by -6.70 kg, 95% CI -7.52 kg to -5.88 kg; 1 study, N = 270 adults; low-certainty evidence). Shared care leads to little or no difference in the BMI z-score of children with obesity (adjusted MD -0.05, 95% CI -0.14 to 0.03; 1 study, N = 105 children; low-certainty evidence).\nOrganisational restructuring of the delivery of primary care (i.e. 
introducing the chronic care model) may result in a slightly lower increase in the BMI of children who received care at intervention clinics (BMI change: adjusted MD -0.21, 95% CI -0.50 to 0.07; 1 study, unadjusted MD -0.18, 95% CI -0.20 to -0.16; N=473 participants; moderate-certainty evidence).\nMail and phone interventions probably lead to little or no difference in weight loss in adults (mean weight change (kg) using mail: -0.36, 95% CI -1.18 to 0.46; phone: -0.44, 95% CI -1.26 to 0.38; 1 study, N = 1801 adults; moderate-certainty evidence). Care delivered by a nurse at a primary care clinic may lead to little or no difference in the BMI z-score in children (MD -0.02, 95% CI -0.16 to 0.12; 1 study, N = 52 children; very low-certainty evidence).\nTwo studies reported data on cost effectiveness: one study favoured mail and standard care over telephone consultations, and the other study achieved weight loss at a modest cost in both intervention groups (doctor and doctor-dietitian). One study of shared care reported similar adverse effects in both groups.\nAuthors' conclusions\nWe found little convincing evidence for a clinically-important effect on participants' weight or BMI of any of the evaluated interventions. While pooled results from three studies indicate that educational interventions targeting healthcare professionals may lead to a slight weight reduction in adults, the certainty of these results is low. Two trials evaluating CDS tools (unpooled results) for improved weight management suggest little or no effect on weight or BMI change in adults or children with overweight or obesity. Evidence for all the other interventions evaluated came mostly from single studies. The certainty of the included evidence varied from moderate to very low for the main outcomes (weight and BMI). All of the evaluated interventions would need further investigation to ascertain their strengths and limitations as effective strategies to change the behaviour of healthcare professionals or the organisation of care. As only two studies reported on cost, we know little about cost effectiveness across the evaluated interventions.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 12 studies, eight in adults and four in children. One hundred and thirty-nine family practices were included, providing care to 89,754 people who were followed for 12 months. Seven studies evaluated the effects of various interventions directed at healthcare professionals (i.e. education, reminders, and decision support tools), and the other five evaluated different organisational interventions (i.e. changes in who delivers the health care, how and where it is delivered, etc.). The comparison intervention was standard care, or the opportunity to seek it. The main outcomes assessed were weight or weight change for adults, and how their weight compared with their peers for children.\nProfessional interventions\nBrief education of primary care physicians in weight management may slightly decrease the weight of their patients, .\nTailoring the education to the healthcare professional to improve how closely they follow guidelines probably led to little or no difference in obesity management or weight loss at study end.\nWe are uncertain whether issuing doctors with printed reminders about weight management strategies helped to reduce their patients' weight, compared to standard care.\nTwo studies reported that providing doctors with a clinical decision support tool within the practice may lead to little or no difference in the BMI of children with obesity or in the weight of adults with overweight or obesity, compared to patients receiving standard care.\nOrganisational interventions\nTwo studies assessed the effect of multidisciplinary teams. Weight-loss programmes led by a dietitian or by a doctor plus a dietitian may lead to greater weight loss in adult patients than standard care. Shared care (between family practice and hospital doctors and dietitians) probably leads to little or no difference in the BMI of children with obesity, compared to standard care.\nOrganisational restructuring of the delivery of family practice care (i.e. introducing the chronic care model: training of the whole practice team, enhanced electronic medical record system, the paediatric nurse practitioners playing a key role in delivering the intervention) led to a slightly lower increase in the BMI of children with obesity at intervention clinics, compared to standard care.\nTwo studies assessed changes in the setting of service delivery. The use of both mail and phone interventions to promote weight loss probably led to little or no difference in weight loss of adults with overweight or obesity, compared to standard care. Family practice weight management programmes conducted by nurses may lead to little or no difference in BMI in children with obesity, as compared to specialist obesity hospital clinics run by consultants."
}
] |
query-laysum
|
8046
|
[
{
"role": "user",
"content": "Abstract: Background\nItch in patients with chronic kidney disease (CKD) is common, often very distressing and associated with depression, reduced quality of life, and increased death. The most common first-line treatment has been the use of antihistamines despite the lack of substantial evidence for its use for uraemic itch. Few recommendations and guidelines exist for treatment.\nObjectives\nWe aimed to determine: 1) the benefits and harms (both absolute and relative) of all topical and systemic interventions for the treatment of uraemic itch, either alone or in combination, when compared with placebo or standard care; and, 2) the dose strength or frequency, stage of kidney disease or method of dialysis used (where applicable) in cases where the effects of these interventions vary depending on co-interventions.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 17 December 2019 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nRandomised controlled trials (RCTs) in adults with CKD stages 4 or 5 comparing treatments (pharmacological, topical, exposure, dialysis modality) for CKD associated itch to either placebo or other established treatments.\nData collection and analysis\nTwo authors independently abstracted study data and assessed study quality. Data were analysed using a random effects meta-analysis design estimating the relative effects of treatment versus placebo. Estimates of the relative effects between treatments are included where possible. For continuous measures of severity of itch up to three months, mean difference (MD) or standardised mean difference (SMD) were used. When reported, adverse effects were tabulated. The certainty of the evidence was estimated using GRADE.\nMain results\nNinety-two RCTs, randomising 4466 participants were included. Fifty-eight studies (3285 participants) provided sufficient data to be meta-analysed. Of these, 30 compared an intervention to a placebo or control. The 10 cm Visual Analogue Scale (VAS) was the dominant instrument utilized for itch reporting and the Duo score was used in a minority of studies.\nGABA analogues including, gabapentin and pregabalin, reduce itch in patients with CKD (5 studies, 297 participants: 4.95 cm reduction, 95% CI 5.46 to 4.44 lower in VAS compared to placebo; high certainty evidence). Kappa opioid agonists, including nalfurafine also reduced itch in this population (6 studies, 661 participants: 1.05 cm reduction, 95% CI 1.40 to 0.71 lower in VAS compared to placebo; high certainty evidence). Ondansetron had little or no effect on itch scores (3 studies, 183 participants: 0.38 cm reduction, 95% CI 1.04 lower to 0.29 higher in VAS compared to placebo; high certainty evidence). Reduction in the severity of itch was reported with oral montelukast, turmeric, zinc sulfate and topical capsaicin. For all other interventions, the certainty of the evidence was low to moderate, and the interventions had uncertain effects on uraemic pruritus.\nSix studies have disclosed significant financial support from their respective manufacturers, six were affected by lack of blinding, and 11 studies have 15 participants or less. 
Older, smaller RCTs often failed to follow intention-to-treat protocols with unexplained dropouts after randomisation.\nAdverse effects were generally poorly and inconsistently reported across all RCTs. No severe adverse events were reported for any intervention.\nAuthors' conclusions\nThe RCTs of this meta-analysis contain a large array of interventions with a diverse set of comparators. For many interventions, trials are sparse. This served to make informative meta-analysis challenging.\nOf all treatments for uraemic pruritus, gabapentinoids (gabapentin and pregabalin) were the most studied and show the greatest reduction in itch scores. Further RCTs, even of the scale of the largest trials included in this review, are unlikely to significantly change this finding. Kappa-opioid agonists (mainly nalfurafine) also may reduce itch, but indirect comparison suggests a much more modest effect in comparison to GABA analogues.\nEvidence for oral montelukast, turmeric, zinc sulfate, and topical capsaicin also showed an itch score reduction. However, these reductions were reported in small studies, and warrant further investigation. Ondansetron did not reduce itch. It is somewhat unlikely that a further study of ondansetron will change this result.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Drugs that work like neurotransmitters (gabapentin and pregabalin) reduce itch in patients with CKD. Other intervention either do not work, do not work as well, or need further study to make a conclusion."
}
] |
query-laysum
|
8047
|
[
{
"role": "user",
"content": "Abstract: Background\nChinese herbal medicines have been used for a long time to treat osteoporosis. The evidence of their benefits and harms needs to be systematically reviewed.\nObjectives\nTo assess the beneficial and harmful effects of Chinese herbal medicines as a general experimental intervention for treating primary osteoporosis by comparing herbal treatments with placebo, no intervention and conventional medicine.\nSearch methods\nWe searched the following electronic databases to January 2013: the Specialised Register of the Cochrane Complementary Medicine Field, CENTRAL, MEDLINE, EMBASE, LILACS, JICST-E, AMED, Chinese Biomedical Database and CINAHL.\nSelection criteria\nRandomised controlled trials of Chinese herbal medicines compared with placebo, no intervention or conventional medicine were included.\nData collection and analysis\nTwo authors extracted data and assessed risk of bias independently. Disagreement was resolved by discussion.\nMain results\nOne hundred and eight randomised trials involving 10,655 participants were included. Ninety-nine different Chinese herbal medicines were tested and compared with placebo (three trials), no intervention (five trials) or conventional medicine (61 trials), or Chinese herbal medicines plus western medicine were compared with western medicine (47 trials). The risk of bias across all studies was unclear for most domains primarily due to inadequate reporting of study design. Although we rated the risk of selective reporting for all studies as unclear, only a few studies contributed numerical data to the key outcomes.\nSeven trials reported fracture incidence, but they were small in sample size, suffered from various biases and tested different Chinese herbal medicines. These trials compared Kanggusong capsules versus placebo, Kanggusong granule versus Caltrate or ipriflavone plus Caltrate, Yigu capsule plus calcium versus placebo plus calcium, Xianlinggubao capsule plus Caltrate versus placebo plus Caltrate, Bushen Zhuanggu granules plus Caltrate versus placebo granules plus Caltrate, Kanggusong soup plus Caltrate versus Caltrate, Zhuangguqiangjin tablets and Shujinbogu tablets plus calcitonin ampoule versus calcitonin ampoule. The results were inconsistent.\nOne trial showed that Bushenhuoxue therapy plus calcium carbonate tablets and alfacalcidol had a better effect on quality of life score (scale 0 to 100, higher is better) than calcium carbonate tablets and alfacalcidol (mean difference (MD) 5.30; 95% confidence interval (CI) 3.67 to 6.93).\nCompared with placebo in three separate trials, Chinese herbal medicines (Migu decoction, Bushen Yigu soft extract, Kanggusong capsules) showed a statistically significant increase in bone mineral density (BMD) (e.g. Kanggusong capsules, MD 0.06 g/cm3; 95% CI 0.02 to 0.10). Compared with no intervention in five trials, only two showed that Chinese herbal medicines had a statistically significant effect on increase in BMD (e.g. Shigu yin, MD 0.08 g/cm3; 95% CI 0.03 to 0.13). Compared with conventional medicine in 61 trials, 23 showed that Chinese herbal medicines had a statistically significant effect on increase in BMD. 
In 48 trials evaluating Chinese herbal medicines plus western medication against western medication, 26 showed better effects of the combination therapy on increase in BMD.\nNo trial reported death or serious adverse events of Chinese herbal medicines, while some trials reported minor adverse effects such as nausea, diarrhoea, etc.\nAuthors' conclusions\nCurrent findings suggest that the beneficial effect of Chinese herbal medicines in improving BMD is still uncertain and more rigorous studies are warranted.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We conducted a review of the effects of Chinese herbal medicines for people with primary osteoporosis. We found 108 studies with 10,655 people."
}
] |
query-laysum
|
8048
|
[
{
"role": "user",
"content": "Abstract: Background\nSystemic lupus erythematosus (SLE) is a rare, chronic autoimmune inflammatory disease with a prevalence varying from 4.3 to 150 people in 100,000, or approximately five million people worldwide. Systemic manifestations frequently include internal organ involvement, a characteristic malar rash on the face, pain in joints and muscles, and profound fatigue. Exercise is purported to be beneficial for people with SLE. For this review, we focused on studies that examined all types of structured exercise as an adjunctive therapy in the management of SLE.\nObjectives\nTo evaluate the benefits and harms of structured exercise as adjunctive therapy for adults with SLE compared with usual pharmacological care, usual pharmacological care plus placebo and usual pharmacological care plus non-pharmacological care.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 30 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) of exercise as an adjunct to usual pharmacological treatment in SLE compared with placebo, usual pharmacological care alone and another non-pharmacological treatment. Major outcomes were fatigue, functional capacity, disease activity, quality of life, pain, serious adverse events, and withdrawals due to any reason, including any adverse events.\nData collection and analysis\nWe used standard Cochrane methods. Our major outcomes were 1. fatigue, 2. functional capacity, 3. disease activity, 4. quality of life, 5. pain, 6. serious adverse events, and 7. withdrawals due to any reason. Our minor outcomes were 8. responder rate, 9. aerobic fitness, 10. depression, and 11. anxiety. We used GRADE to assess certainty of evidence. The primary comparison was exercise compared with placebo.\nMain results\nWe included 13 studies (540 participants) in this review. Studies compared exercise as an adjunct to usual pharmacological care (antimalarials, immunosuppressants, and oral glucocorticoids) with usual pharmacological care plus placebo (one study); usual pharmacological care (six studies); and another non-pharmacological treatment such as relaxation therapy (seven studies). Most studies had selection bias, and all studies had performance and detection bias. We downgraded the evidence for all comparisons because of a high risk of bias and imprecision.\nExercise plus usual pharmacological care versus placebo plus usual pharmacological care\nEvidence from a single small study (17 participants) that compared whole body vibration exercise to whole body placebo vibration exercise (vibrations switched off) indicated that exercise may have little to no effect on fatigue, functional capacity, and pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). The study did not report disease activity, quality of life, and serious adverse events.\nThe study measured fatigue using the self-reported Functional Assessment of Chronic Illness Therapy – Fatigue (FACIT-Fatigue), scale 0 to 52; lower score means less fatigue. People who did not exercise rated their fatigue at 38 points and those who did exercise rated their fatigue at 33 points (mean difference (MD) 5 points lower, 95% confidence interval (CI) 13.29 lower to 3.29 higher).\nThe study measured functional capacity using the self-reported 36-item Short Form health questionnaire (SF-36) Physical Function domain, scale 0 to 100; higher score means better function. 
People who did not exercise rated their functional capacity at 70 points and those who did exercise rated their functional capacity at 67.5 points (MD 2.5 points lower, 95% CI 23.78 lower to 18.78 higher).\nThe study measured pain using the SF-36 Pain domain, scale 0 to 100; lower scores mean less pain. People who did not exercise rated their pain at 43 points and those who did exercise rated their pain at 34 points (MD 9 points lower, 95% CI 28.88 lower to 10.88 higher).\nMore participants from the exercise group (3/11, 27%) withdrew from the study than the placebo group (1/10, 10%) (risk ratio (RR) 2.73, 95% CI 0.34 to 22.16).\nExercise plus usual pharmacological care versus usual pharmacological care alone\nThe addition of exercise to usual pharmacological care may have little to no effect on fatigue, functional capacity, and disease activity (low-certainty evidence). We are uncertain whether the addition of exercise improves pain (very low-certainty evidence), or results in fewer or more withdrawals (very low-certainty evidence). Serious adverse events and quality of life were not reported.\nExercise plus usual care versus another non-pharmacological intervention such as receiving information about the disease or relaxation therapy\nCompared with education or relaxation therapy, exercise may reduce fatigue slightly (low-certainty evidence), may improve functional capacity (low-certainty evidence), probably results in little to no difference in disease activity (moderate-certainty evidence), and may result in little to no difference in pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). Quality of life and serious adverse events were not reported.\nAuthors' conclusions\nDue to low- to very low-certainty evidence, we are not confident on the benefits of exercise on fatigue, functional capacity, disease activity, and pain, compared with placebo, usual care, or advice and relaxation therapy. Harms data were not well reported.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that investigated structured exercise programmes such as aerobic exercise, resistance, stretching or combinations of these (including a specific dosage of exercise, e.g. frequency, intensity, time, type) in addition to usual care compared with placebo (pretend medicine), usual care alone, or another non-medicine intervention (e.g. relaxation therapy) in people with SLE.\nWe compared and summarised the results of the studies and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
] |
query-laysum
|
8049
|
[
{
"role": "user",
"content": "Abstract: Background\nDecisions in clinical medicine can be associated with ethical challenges. Ethical case interventions (e.g. ethics committee, moral case deliberation) identify and analyse ethical conflicts which occur within the context of care for patients. Ethical case interventions involve ethical experts, different health professionals as well as the patient and his/her family. The aim is to support decision-making in clinical practice. This systematic review gathered and critically appraised the available evidence of controlled studies on the effectiveness of ethical case interventions.\nObjectives\nTo determine whether ethical case interventions result in reduced decisional conflict or moral distress of those affected by an ethical conflict in clinical practice; improved patient involvement in decision-making and a higher quality of life in adult patients. To determine the most effective models of ethical case interventions and to analyse the use and appropriateness of the outcomes in experimental studies.\nSearch methods\nWe searched the following electronic databases for primary studies to September 2018: CENTRAL, MEDLINE, Embase, CINAHL and PsycINFO. We also searched CDSR and DARE for related reviews. Furthermore, we searched Clinicaltrials.gov, International Clinical Trials Registry Platform Search Portal and conducted a cited reference search for all included studies in ISI WEB of Science. We also searched the references of the included studies.\nSelection criteria\nWe included randomised trials, non-randomised trials, controlled before-after studies and interrupted time series studies which compared ethical case interventions with usual care or an active control in any language. The included population were adult patients. However, studies with mixed populations consisting of adults and children were included, if a subgroup or sensitivity analysis (or both) was performed for the adult population.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane and the Effective Practice and Organisation of Care review group. We used meta-analysis based on a random-effects model for treatment costs and structured analysis for the remaining outcomes, because these were heterogeneously reported. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included four randomised trials published in six articles. The publication dates ranged from 2000 to 2014. Three studies were conducted in the USA, and one study in Taiwan. All studies were conducted on intensive care units and included 1165 patients. We judged the included studies to be of moderate or high risk of bias. It was not possible to compare different models of the intervention regarding effectiveness due to the diverse character of the interventions and the small number of studies. Included studies did not directly measure the main outcomes. All studies received public funding and one received additional funding from private sources.\nWe identified two models of ethical case interventions: proactive and request-based ethics consultation. Three studies evaluated proactive ethics consultation (n = 1103) of which one study reported findings on one key outcome criterion. The studies did not report data on decisional conflict, moral distress of participants of ethical case interventions, patient involvement in decision-making, quality of life or ethical competency for proactive ethics consultation. 
One study assessed satisfaction with care on a 5-point Likert scale (1 = lowest rating, 5 = highest rating). The healthcare providers (nurses and physicians, n = 365) scored a value of 4 or 5 for 81.4% in the control group and 86.1% in the intervention group (P > 0.05). The patients or their surrogates (n = 275) scored a value of 4 or 5 for 83.6% in the control group and for 74.8% in the intervention group (P > 0.05). It was uncertain whether proactive ethics consultation led to high satisfaction with care, because the certainty of evidence was very low.\nOne study evaluated request-based ethics consultation (n = 62). The study indirectly measured decisional conflict by assessing consensus regarding patient care. The risk (increase in consensus, reduction in decisional conflict) increased by 80% as a result of the intervention. The risk ratio was 0.20 (95% confidence interval 0.09 to 0.46; P < 0.01). It was uncertain whether request-based ethics consultation reduced decisional conflict, because the certainty of evidence was very low. The study did not report data on moral distress of participants of ethical case interventions, patient involvement in decision-making, quality of life, or ethical competency or satisfaction with care for request-based ethics consultation.\nAuthors' conclusions\nIt is not possible to determine the effectiveness of ethical case interventions with certainty due to the low certainty of the evidence of included studies in this review. The effectiveness of ethical case interventions should be investigated in light of the outcomes reported in this systematic review. In addition, there is need for further research to identify and measure outcomes which reflect the goals of different types of ethical case intervention.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review ?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We chose the decisional conflict of participants affected by the decision, reduction of moral distress, patient involvement in decision-making and quality of life of patients as main outcome criteria to assess whether ethical case interventions improve health care."
}
] |
query-laysum
|
8050
|
[
{
"role": "user",
"content": "Abstract: Background\nPlatinum-based therapy, including cisplatin, carboplatin or oxaliplatin, or a combination of these, is used to treat a variety of paediatric malignancies. Unfortunately, one of the most important adverse effects is the occurrence of hearing loss or ototoxicity. In an effort to prevent this ototoxicity, different platinum infusion durations have been studied. This review is the third update of a previously published Cochrane Review.\nObjectives\nTo assess the effects of different durations of platinum infusion to prevent hearing loss or tinnitus, or both, in children with cancer. Secondary objectives were to assess possible effects of these infusion durations on: a) anti-tumour efficacy of platinum-based therapy, b) adverse effects other than hearing loss or tinnitus, and c) quality of life.\nSearch methods\nWe searched the electronic databases Cochrane Central Register of Controlled Trials (CENTRAL; the Cochrane Library 14 November 2019), MEDLINE (PubMed) (1945 to 14 November 2019) and Embase (Ovid) (1980 to 14 November 2019). In addition, we handsearched reference lists of relevant articles and we assessed the conference proceedings of the International Society for Paediatric Oncology (2009 up to and including 2019) and the American Society of Pediatric Hematology/Oncology (2014 up to and including 2019). We scanned ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP; apps.who.int/trialsearch) for ongoing trials (both searched on 4 November 2019).\nSelection criteria\nRandomised controlled trials (RCTs) or controlled clinical trials (CCTs) comparing different platinum infusion durations in children with cancer. Only the platinum infusion duration could differ between the treatment groups.\nData collection and analysis\nTwo review authors independently performed the study selection, 'Risk of bias' assessment and GRADE assessment of included studies, and data extraction including adverse effects. Analyses were performed according to the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nWe identified one RCT and no CCTs; in this update no additional eligible studies were identified. The RCT (total number of children = 91) evaluated the use of a continuous cisplatin infusion (N = 43) versus a one-hour bolus cisplatin infusion (N = 48) in children with neuroblastoma. For the continuous infusion, cisplatin was administered on days one to five of the cycle, but it is unclear if the infusion duration was a total of five days. Risk of bias was present. Only results from shortly after induction therapy were provided. No clear evidence of a difference in hearing loss (defined as asymptomatic and symptomatic disease combined) between the different infusion durations was identified as results were imprecise (risk ratio (RR) 1.39, 95% confidence interval (CI) 0.47 to 4.13, low-quality evidence). Although the numbers of children were not provided, it was stated that tumour response was equivalent in both treatment arms. With regard to adverse effects other than ototoxicity, we were only able to assess toxic deaths. Again, the confidence interval of the estimated effect was too wide to exclude differences between the treatment groups (RR 1.12, 95% CI 0.07 to 17.31, low-quality evidence). No data were available for the other outcomes of interest (i.e. 
tinnitus, overall survival, event-free survival and quality of life) or for other (combinations of) infusion durations or other platinum analogues.\nAuthors' conclusions\nSince only one eligible RCT evaluating the use of a continuous cisplatin infusion versus a one-hour bolus cisplatin infusion was found, and that had methodological limitations, no definitive conclusions can be made. It should be noted that 'no evidence of effect', as identified in this review, is not the same as 'evidence of no effect'. For other (combinations of) infusion durations and other platinum analogues no eligible studies were identified. More high-quality research is needed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to November 2019.\nWe found one study (91 participants) comparing a continuous cisplatin infusion with a one-hour cisplatin bolus infusion in children with neuroblastoma. For the continuous infusion, cisplatin was administered on days one to five of the treatment cycle but it is not clear if the infusion duration was a total of five days. Only results from shortly after induction therapy were available."
}
] |
query-laysum
|
8051
|
[
{
"role": "user",
"content": "Abstract: Background\nBehavioural activation is a brief psychotherapeutic approach that seeks to change the way a person interacts with their environment. Behavioural activation is increasingly receiving attention as a potentially cost-effective intervention for depression, which may require less resources and may be easier to deliver and implement than other types of psychotherapy.\nObjectives\nTo examine the effects of behavioural activation compared with other psychological therapies for depression in adults.\nTo examine the effects of behavioural activation compared with medication for depression in adults.\nTo examine the effects of behavioural activation compared with treatment as usual/waiting list/placebo no treatment for depression in adults.\nSearch methods\nWe searched CCMD-CTR (all available years), CENTRAL (current issue), Ovid MEDLINE (1946 onwards), Ovid EMBASE (1980 onwards), and Ovid PsycINFO (1806 onwards) on the 17 January 2020 to identify randomised controlled trials (RCTs) of 'behavioural activation', or the main elements of behavioural activation for depression in participants with clinically diagnosed depression or subthreshold depression. We did not apply any restrictions on date, language or publication status to the searches. We searched international trials registries via the World Health Organization's trials portal (ICTRP) and ClinicalTrials.gov to identify unpublished or ongoing trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) of behavioural activation for the treatment of depression or symptoms of depression in adults aged 18 or over. We excluded RCTs conducted in inpatient settings and with trial participants selected because of a physical comorbidity. Studies were included regardless of reported outcomes.\nData collection and analysis\nTwo review authors independently screened all titles/abstracts and full-text manuscripts for inclusion. Data extraction and 'Risk of bias' assessments were also performed by two review authors in duplicate. Where necessary, we contacted study authors for more information.\nMain results\nFifty-three studies with 5495 participants were included; 51 parallel group RCTs and two cluster-RCTs.\nWe found moderate-certainty evidence that behavioural activation had greater short-term efficacy than treatment as usual (risk ratio (RR) 1.40, 95% confidence interval (CI) 1.10 to 1.78; 7 RCTs, 1533 participants), although this difference was no longer evident in sensitivity analyses using a worst-case or intention-to-treat scenario. Compared with waiting list, behavioural activation may be more effective, but there were fewer data in this comparison and evidence was of low certainty (RR 2.14, 95% CI 0.90 to 5.09; 1 RCT, 26 participants). No evidence on treatment efficacy was available for behavioural activation versus placebo and behavioural activation versus no treatment.\nWe found moderate-certainty evidence suggesting no evidence of a difference in short-term treatment efficacy between behavioural activation and CBT (RR 0.99, 95% CI 0.92 to 1.07; 5 RCTs, 601 participants). Fewer data were available for other comparators. No evidence of a difference in short term-efficacy was found between behavioural activation and third-wave CBT (RR 1.10, 95% CI 0.91 to 1.33; 2 RCTs, 98 participants; low certainty), and psychodynamic therapy (RR 1.21, 95% CI 0.74 to 1.99; 1 RCT,60 participants; very low certainty). 
Behavioural activation was more effective than humanistic therapy (RR 1.84, 95% CI 1.15 to 2.95; 2 RCTs, 46 participants; low certainty) and medication (RR 1.77, 95% CI 1.14 to 2.76; 1 RCT; 141 participants; moderate certainty), but both of these results were based on a small number of trials and participants. No evidence on treatment efficacy was available for comparisons between behavioural activation versus interpersonal, cognitive analytic, and integrative therapies.\nThere was moderate-certainty evidence that behavioural activation might have lower treatment acceptability (based on dropout rate) than treatment as usual in the short term, although the data did not confirm a difference and results lacked precision (RR 1.64, 95% CI 0.81 to 3.31; 14 RCTs, 2518 participants). Moderate-certainty evidence did not suggest any difference in short-term acceptability between behavioural activation and waiting list (RR 1.17, 95% CI 0.70 to 1.93; 8 RCTs. 359 participants), no treatment (RR 0.97, 95% CI 0.45 to 2.09; 3 RCTs, 187 participants), medication (RR 0.52, 95% CI 0.23 to 1.16; 2 RCTs, 243 participants), or placebo (RR 0.72, 95% CI 0.31 to 1.67; 1 RCT; 96 participants; low-certainty evidence). No evidence on treatment acceptability was available comparing behavioural activation versus psychodynamic therapy.\nLow-certainty evidence did not show a difference in short-term treatment acceptability (dropout rate) between behavioural activation and CBT (RR 1.03, 95% CI 0.85 to 1.25; 12 RCTs, 1195 participants), third-wave CBT (RR 0.84, 95% CI 0.33 to 2.10; 3 RCTs, 147 participants); humanistic therapy (RR 1.06, 95% CI 0.20 to 5.55; 2 RCTs, 96 participants) (very low certainty), and interpersonal, cognitive analytic, and integrative therapy (RR 0.84, 95% CI 0.32 to 2.20; 4 RCTs, 123 participants).\nResults from medium- and long-term primary outcomes, secondary outcomes, subgroup analyses, and sensitivity analyses are summarised in the text.\nAuthors' conclusions\nThis systematic review suggests that behavioural activation may be more effective than humanistic therapy, medication, and treatment as usual, and that it may be no less effective than CBT, psychodynamic therapy, or being placed on a waiting list. However, our confidence in these findings is limited due to concerns about the certainty of the evidence.\nWe found no evidence of a difference in short-term treatment acceptability (based on dropouts) between behavioural activation and most comparison groups (CBT, humanistic therapy, waiting list, placebo, medication, no treatment or treatment as usual). Again, our confidence in all these findings is limited due to concerns about the certainty of the evidence.\nNo data were available about the efficacy of behavioural activation compared with placebo, or about treatment acceptability comparing behavioural activation and psychodynamic therapy, interpersonal, cognitive analytic and integrative therapies.\nThe evidence could be strengthened by better reporting and better quality RCTs of behavioural activation and by assessing working mechanisms of behavioural activation.\n\nGiven the provided abstract, please respond to the following query: \"Why this is important\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Depression is a common mental health problem that can cause a persistent feeling of sadness and loss of interest in people, activities, and things that were once enjoyable. A person with depression may feel tearful, irritable, or tired most of the time, and may have problems with sleep, concentration, and memory. These and other symptoms can make daily life more difficult than usual.\nTreatments for depression include medications (antidepressants) and psychological therapies (talking therapies). Behavioural activation is a type of psychological therapy that encourages a person to develop or get back into activities which are meaningful to them. The therapy involved scheduling activities and monitoring behaviours and looking at specific situations where changing these behaviours and activities may be helpful. A therapist may support people in person, over the phone, or online, usually over multiple sessions.\nIt is important to know whether behavioural activation could be an effective and acceptable treatment to offer to people with depression."
}
] |
query-laysum
|
8052
|
[
{
"role": "user",
"content": "Abstract: Background\nIndividuals with osteoarthritis (OA) of the knee can be treated with a knee brace or a foot/ankle orthosis. The main purpose of these aids is to reduce pain, improve physical function and, possibly, slow disease progression. This is the second update of the original review published in Issue 1, 2005, and first updated in 2007.\nObjectives\nTo assess the benefits and harms of braces and foot/ankle orthoses in the treatment of patients with OA of the knee.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and EMBASE (current contents, HealthSTAR) up to March 2014. We screened reference lists of identified trials and clinical trial registers for ongoing studies.\nSelection criteria\nRandomised and controlled clinical trials investigating all types of braces and foot/ankle orthoses for OA of the knee compared with an active control or no treatment.\nData collection and analysis\nTwo review authors independently selected trials and extracted data. We assessed risk of bias using the 'Risk of bias' tool of The Cochrane Collaboration. We analysed the quality of the results by performing an overall grading of evidence by outcome using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. As a result of heterogeneity of studies, pooling of outcome data was possible for only three insole studies.\nMain results\nWe included 13 studies (n = 1356): four studies in the first version, three studies in the first update and six additional studies (n = 529 participants) in the second update. We included studies that reported results when study participants with early to severe knee OA (Kellgren & Lawrence grade I-IV) were treated with a knee brace (valgus knee brace, neutral brace or neoprene sleeve) or an orthosis (laterally or medially wedged insole, neutral insole, variable or constant stiffness shoe) or were given no treatment. The main comparisons included (1) brace versus no treatment; (2) foot/ankle orthosis versus no treatment or other treatment; and (3) brace versus foot/ankle orthosis. Seven studies had low risk, two studies had high risk and four studies had unclear risk of selection bias. Five studies had low risk, three studies had high risk and five studies had unclear risk of detection bias. Ten studies had high risk and three studies had low risk of performance bias. Nine studies had low risk and four studies had high risk of reporting bias.\nFour studies compared brace versus no treatment, but only one provided useful data for meta-analysis at 12-month follow-up. One study (n = 117, low-quality evidence) showed lack of evidence of an effect on visual analogue scale (VAS) pain scores (absolute percent change 0%, mean difference (MD) 0.0, 95% confidence interval (CI) -0.84 to 0.84), function scores (absolute percent change 1%, MD 1.0, 95% CI -2.98 to 4.98) and health-related quality of life scores (absolute percent change 4%, MD -0.04, 95% CI -0.12 to 0.04) after 12 months. Many participants stopped their initial treatment because of lack of effect (24 of 60 participants in the brace group and 14 of 57 participants in the no treatment group; absolute percent change 15%, risk ratio (RR) 1.63, 95% CI 0.94 to 2.82). The other studies reported some improvement in pain, function and health-related quality of life (P value ≤ 0.001). 
Stiffness and treatment failure (need for surgery) were not reported in the included studies.\nFor the comparison of laterally wedged insole versus no insole, one study (n = 40, low-quality evidence) showed a lower VAS pain score in the laterally wedged insole group (absolute percent change 16%, MD -1.60, 95% CI -2.31 to -0.89) after nine months. Function, stiffness, health-related quality of life, treatment failure and adverse events were not reported in the included study.\nFor the comparison of laterally wedged versus neutral insole after pooling of three studies (n = 358, moderate-quality evidence), little evidence was found of an effect on numerical rating scale (NRS) pain scores (absolute percent change 1.0%, MD 0.1, 95% CI -0.45 to 0.65), Western Ontario-McMaster Osteoarthritis Scale (WOMAC) stiffness scores (absolute percent change 0.1%, MD 0.07, 95% CI -4.96 to 5.1) and WOMAC function scores (absolute percent change 0.9%, MD 0.94, 95% CI - 2.98 to 4.87) after 12 months. Evidence of an effect on health-related quality of life scores (absolute percent change 1.0%, MD 0.01, 95% CI -0.05 to 0.03) was lacking in one study (n = 179, moderate-quality evidence). Treatment failure and adverse events were not studied for this comparison in the included studies.\nData for the comparison of laterally wedged insole versus valgus knee brace could not be pooled. After six months' follow-up, no statistically significant difference was noted in VAS pain scores (absolute percent change -2.0%, MD -0.2, 95% CI -1.15 to 0.75) and WOMAC function scores (absolute percent change 0.1%, MD 0.1, 95% CI -7.26 to 0.75) in one study (n = 91, low-quality evidence); however both groups showed improvement. Stiffness, health-related quality of life, treatment failure and adverse events were not reported in the included studies for this comparison.\nAuthors' conclusions\nEvidence was inconclusive for the benefits of bracing for pain, stiffness, function and quality of life in the treatment of patients with medial compartment knee OA. On the basis of one laterally wedged insole versus no treatment study, we conclude that evidence of an effect on pain in patients with varus knee OA is lacking. Moderate-quality evidence shows lack of an effect on improvement in pain, stiffness and function between patients treated with a laterally wedged insole and those treated with a neutral insole. Low-quality evidence shows lack of an effect on improvement in pain, stiffness and function between patients treated with a valgus knee brace and those treated with a laterally wedged insole. The optimal choice for an orthosis remains unclear, and long-term implications are lacking.\n\nGiven the provided abstract, please respond to the following query: \"Wearing a laterally wedged insole compared with no insole\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• may result in little or no difference in reducing pain (low-quality evidence).\nFunction, stiffness, health-related quality of life, treatment failure and side effects were not reported."
}
] |
query-laysum
|
8053
|
[
{
"role": "user",
"content": "Abstract: Background\nReducing saturated fat reduces serum cholesterol, but effects on other intermediate outcomes may be less clear. Additionally, it is unclear whether the energy from saturated fats eliminated from the diet are more helpfully replaced by polyunsaturated fats, monounsaturated fats, carbohydrate or protein.\nObjectives\nTo assess the effect of reducing saturated fat intake and replacing it with carbohydrate (CHO), polyunsaturated (PUFA), monounsaturated fat (MUFA) and/or protein on mortality and cardiovascular morbidity, using all available randomised clinical trials.\nSearch methods\nWe updated our searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid) and Embase (Ovid) on 15 October 2019, and searched Clinicaltrials.gov and WHO International Clinical Trials Registry Platform (ICTRP) on 17 October 2019.\nSelection criteria\nIncluded trials fulfilled the following criteria: 1) randomised; 2) intention to reduce saturated fat intake OR intention to alter dietary fats and achieving a reduction in saturated fat; 3) compared with higher saturated fat intake or usual diet; 4) not multifactorial; 5) in adult humans with or without cardiovascular disease (but not acutely ill, pregnant or breastfeeding); 6) intervention duration at least 24 months; 7) mortality or cardiovascular morbidity data available.\nData collection and analysis\nTwo review authors independently assessed inclusion, extracted study data and assessed risk of bias. We performed random-effects meta-analyses, meta-regression, subgrouping, sensitivity analyses, funnel plots and GRADE assessment.\nMain results\nWe included 15 randomised controlled trials (RCTs) (16 comparisons, 56,675 participants), that used a variety of interventions from providing all food to advice on reducing saturated fat. The included long-term trials suggested that reducing dietary saturated fat reduced the risk of combined cardiovascular events by 17% (risk ratio (RR) 0.83; 95% confidence interval (CI) 0.70 to 0.98, 12 trials, 53,758 participants of whom 8% had a cardiovascular event, I² = 67%, GRADE moderate-quality evidence). Meta-regression suggested that greater reductions in saturated fat (reflected in greater reductions in serum cholesterol) resulted in greater reductions in risk of CVD events, explaining most heterogeneity between trials. The number needed to treat for an additional beneficial outcome (NNTB) was 56 in primary prevention trials, so 56 people need to reduce their saturated fat intake for ~four years for one person to avoid experiencing a CVD event. In secondary prevention trials, the NNTB was 53. Subgrouping did not suggest significant differences between replacement of saturated fat calories with polyunsaturated fat or carbohydrate, and data on replacement with monounsaturated fat and protein was very limited.\nWe found little or no effect of reducing saturated fat on all-cause mortality (RR 0.96; 95% CI 0.90 to 1.03; 11 trials, 55,858 participants) or cardiovascular mortality (RR 0.95; 95% CI 0.80 to 1.12, 10 trials, 53,421 participants), both with GRADE moderate-quality evidence.\nThere was little or no effect of reducing saturated fats on non-fatal myocardial infarction (RR 0.97, 95% CI 0.87 to 1.07) or CHD mortality (RR 0.97, 95% CI 0.82 to 1.16, both low-quality evidence), but effects on total (fatal or non-fatal) myocardial infarction, stroke and CHD events (fatal or non-fatal) were all unclear as the evidence was of very low quality. 
There was little or no effect on cancer mortality, cancer diagnoses, diabetes diagnosis, HDL cholesterol, serum triglycerides or blood pressure, and small reductions in weight, serum total cholesterol, LDL cholesterol and BMI. There was no evidence of harmful effects of reducing saturated fat intakes.\nAuthors' conclusions\nThe findings of this updated review suggest that reducing saturated fat intake for at least two years causes a potentially important reduction in combined cardiovascular events. Replacing the energy from saturated fat with polyunsaturated fat or carbohydrate appear to be useful strategies, while effects of replacement with monounsaturated fat are unclear. The reduction in combined cardiovascular events resulting from reducing saturated fat did not alter by study duration, sex or baseline level of cardiovascular risk, but greater reduction in saturated fat caused greater reductions in cardiovascular events.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is a large body of evidence assessing effects of reducing saturated fat for at least two years. These studies provide moderate-quality evidence that reducing saturated fat reduces our risk of cardiovascular disease."
}
] |
query-laysum
|
8054
|
[
{
"role": "user",
"content": "Abstract: Background\nThe common cold is a frequent illness, which, although benign and self limiting, results in many consultations to primary care and considerable loss of school or work days. Current symptomatic treatments have limited benefit. Corticosteroids are an effective treatment in other upper respiratory tract infections and their anti-inflammatory effects may also be beneficial in the common cold. This updated review has included one additional study.\nObjectives\nTo compare corticosteroids versus usual care for the common cold on measures of symptom resolution and improvement in children and adults.\nSearch methods\nWe searched Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 4), which includes the Acute Respiratory Infections (ARI) Group's Specialised Register, the Database of Reviews of Effects (DARE) (2015, Issue 2), NHS Health Economics Database (2015, Issue 2), MEDLINE (1948 to May week 3, 2015) and EMBASE (January 2010 to May 2015).\nSelection criteria\nRandomised, double-blind, controlled trials comparing corticosteroids to placebo or to standard clinical management.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality. We were unable to perform meta-analysis and instead present a narrative description of the available evidence.\nMain results\nWe included three trials (353 participants). Two trials compared intranasal corticosteroids to placebo and one trial compared intranasal corticosteroids to usual care; no trials studied oral corticosteroids. In the two placebo-controlled trials, no benefit of intranasal corticosteroids was demonstrated for duration or severity of symptoms. The risk of bias overall was low or unclear in these two trials. In a trial of 54 participants, the mean number of symptomatic days was 10.3 in the placebo group, compared to 10.7 in those using intranasal corticosteroids (P value = 0.72). A second trial of 199 participants reported no significant differences in the duration of symptoms. The single-blind trial in children aged two to 14 years, who were also receiving oral antibiotics, had inadequate reporting of outcome measures regarding symptom resolution. The overall risk of bias was high for this trial. Mean symptom severity scores were significantly lower in the group receiving intranasal steroids in addition to oral amoxicillin. One placebo-controlled trial reported the presence of rhinovirus in nasal aspirates and found no differences. Only one of the three trials reported on adverse events; no differences were found. Two trials reported secondary bacterial infections (one case of sinusitis, one case of acute otitis media; both in the corticosteroid groups). A lack of comparable outcome measures meant that we were unable to combine the data.\nAuthors' conclusions\nCurrent evidence does not support the use of intranasal corticosteroids for symptomatic relief from the common cold. However, there were only three trials, one of which was very poor quality, and there was limited statistical power overall. Further large, randomised, double-blind, placebo-controlled trials in adults and children are required to answer this question.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence for using steroid medications to improve symptoms in patients who have a common cold."
}
] |
query-laysum
|
8055
|
[
{
"role": "user",
"content": "Abstract: Background\nApproximately 40% to 95% of people with cirrhosis have oesophageal varices. About 15% to 20% of oesophageal varices bleed in about one to three years of diagnosis. Several different treatments are available, which include endoscopic sclerotherapy, variceal band ligation, beta-blockers, transjugular intrahepatic portosystemic shunt (TIPS), and surgical portocaval shunts, among others. However, there is uncertainty surrounding their individual and relative benefits and harms.\nObjectives\nTo compare the benefits and harms of different initial treatments for secondary prevention of variceal bleeding in adults with previous oesophageal variceal bleeding due to decompensated liver cirrhosis through a network meta-analysis and to generate rankings of the different treatments for secondary prevention according to their safety and efficacy.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, World Health Organization International Clinical Trials Registry Platform, and trials registers until December 2019 to identify randomised clinical trials in people with cirrhosis and a previous history of bleeding from oesophageal varices.\nSelection criteria\nWe included only randomised clinical trials (irrespective of language, blinding, or status) in adults with cirrhosis and previous history of bleeding from oesophageal varices. We excluded randomised clinical trials in which participants had no previous history of bleeding from oesophageal varices, previous history of bleeding only from gastric varices, those who failed previous treatment (refractory bleeding), those who had acute bleeding at the time of treatment, and those who had previously undergone liver transplantation.\nData collection and analysis\nWe performed a network meta-analysis with OpenBUGS using Bayesian methods and calculated the differences in treatments using hazard ratios (HR), odds ratios (OR) and rate ratios with 95% credible intervals (CrI) based on an available-case analysis, according to National Institute of Health and Care Excellence Decision Support Unit guidance.\nMain results\nWe included a total of 48 randomised clinical trials (3526 participants) in the review. Forty-six trials (3442 participants) were included in one or more comparisons. The trials that provided the information included people with cirrhosis due to varied aetiologies. The follow-up ranged from two months to 61 months. All the trials were at high risk of bias. A total of 12 interventions were compared in these trials (sclerotherapy, beta-blockers, variceal band ligation, beta-blockers plus sclerotherapy, no active intervention, TIPS (transjugular intrahepatic portosystemic shunt), beta-blockers plus nitrates, portocaval shunt, sclerotherapy plus variceal band ligation, beta-blockers plus nitrates plus variceal band ligation, beta-blockers plus variceal band ligation, sclerotherapy plus nitrates).\nOverall, 22.5% of the trial participants who received the reference treatment (chosen because this was the commonest treatment compared in the trials) of sclerotherapy died during the follow-up period ranging from two months to 61 months. There was considerable uncertainty in the effects of interventions on mortality. Accordingly, none of the interventions showed superiority over another. None of the trials reported health-related quality of life. 
Based on low-certainty evidence, variceal band ligation may result in fewer serious adverse events (number of people) than sclerotherapy (OR 0.19; 95% CrI 0.06 to 0.54; 1 trial; 100 participants).\nBased on low or very low-certainty evidence, the adverse events (number of participants) and adverse events (number of events) may be different across many comparisons; however, these differences are due to very small trials at high risk of bias showing large differences in some comparisons leading to many differences despite absence of direct evidence.\nBased on low-certainty evidence, TIPS may result in large decrease in symptomatic rebleed than variceal band ligation (HR 0.12; 95% CrI 0.03 to 0.41; 1 trial; 58 participants). Based on moderate-certainty evidence, any variceal rebleed was probably lower in sclerotherapy than in no active intervention (HR 0.62; 95% CrI 0.35 to 0.99, direct comparison HR 0.66; 95% CrI 0.11 to 3.13; 3 trials; 296 participants), beta-blockers plus sclerotherapy than sclerotherapy alone (HR 0.60; 95% CrI 0.37 to 0.95; direct comparison HR 0.50; 95% CrI 0.07 to 2.96; 4 trials; 231 participants); TIPS than sclerotherapy (HR 0.18; 95% CrI 0.08 to 0.38; direct comparison HR 0.22; 95% CrI 0.01 to 7.51; 2 trials; 109 participants), and in portocaval shunt than sclerotherapy (HR 0.21; 95% CrI 0.05 to 0.77; no direct comparison) groups.\nBased on low-certainty evidence, beta-blockers alone and TIPS might result in more, other compensation, events than sclerotherapy (rate ratio 2.37; 95% CrI 1.35 to 4.67; 1 trial; 65 participants and rate ratio 2.30; 95% CrI 1.20 to 4.65; 2 trials; 109 participants; low-certainty evidence).\nThe evidence indicates considerable uncertainty about the effect of the interventions including those related to beta-blockers plus variceal band ligation in the remaining comparisons.\nAuthors' conclusions\nThe evidence indicates considerable uncertainty about the effect of the interventions on mortality. Variceal band ligation might result in fewer serious adverse events than sclerotherapy. TIPS might result in a large decrease in symptomatic rebleed than variceal band ligation. Sclerotherapy probably results in fewer 'any' variceal rebleeding than no active intervention. Beta-blockers plus sclerotherapy and TIPS probably result in fewer 'any' variceal rebleeding than sclerotherapy. Beta-blockers alone and TIPS might result in more other compensation events than sclerotherapy. The evidence indicates considerable uncertainty about the effect of the interventions in the remaining comparisons. Accordingly, high-quality randomised comparative clinical trials are needed.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this Cochrane Review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To find out the best available preventive treatment for repeated bleeding from oesophageal varices (enlarged veins in the food pipe) in people with advanced liver disease (liver cirrhosis, or late-stage scarring of the liver with complications). People with cirrhosis who had previously bled from oesophageal varices are at significant risk of death from another episode of bleeding. Therefore, it is important to provide preventive treatment to prevent rebleeding in such people, but the benefits and harms of different treatments available are currently unclear. The authors of this review collected and analysed all relevant randomised clinical trials with the aim of finding out the best treatment. They found 48 randomised clinical trials (studies where participants are randomly assigned to one of two treatment groups). During analysis of data, authors used standard Cochrane methods, which allow comparison of only two treatments at a time. Authors also used advanced techniques that allow comparison of multiple treatments at the same time (usually referred as 'network (or indirect) meta-analysis')."
}
] |
query-laysum
|
8056
|
[
{
"role": "user",
"content": "Abstract: Background\nSeveral studies have demonstrated that the use of pancreatic duct stents following pancreaticoduodenectomy is associated with a lower risk of pancreatic fistula. However, to date there is a lack of accord in the literature on whether the use of stents is beneficial and, if so, whether internal or external stenting, with or without replacement, is preferable. This is an update of a systematic review.\nObjectives\nTo determine the efficacy of pancreatic stents in preventing pancreatic fistula after pancreaticoduodenectomy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science, and four major Chinese biomedical databases up to November 2015. We also searched several major trials registers.\nSelection criteria\nRandomized controlled trials (RCTs) comparing the use of stents (either internal or external) versus no stents, and comparing internal stents versus external stents, replacement versus no replacement following pancreaticoduodenectomy.\nData collection and analysis\nTwo review authors independently extracted the data. The outcomes studied were incidence of pancreatic fistula, need for reoperation, length of hospital stay, overall complications, and in-hospital mortality. We showed the results as risk ratio (RR) or mean difference (MD), with 95% confidence interval (CI). We assessed the quality of evidence using GRADE (http://www.gradeworkinggroup.org/).\nMain results\nWe included eight studies (1018 participants). The average age of the participants ranged from 56 to 68 years. Most of the studies were conducted in single centers in Japan (four studies), China (two studies), France (one study), and the USA (one study). The risk of bias was low or unclear for most domains across the studies.\nStents versus no stents\nThe effect of stents on reducing pancreatic fistula in people undergoing pancreaticoduodenectomy was uncertain due to the low quality of the evidence (RR 0.67, 95% CI 0.39 to 1.14; 605 participants; 4 studies). The risk of in-hospital mortality was 3% in people who did receive stents compared with 2% (95% CI 1% to 6%) in people who had stents (RR 0.73, 0.28 to 1.94; 605 participants; 4 studies; moderate-quality evidence). The effect of stents on reoperation was uncertain due to wide confidence intervals (RR 0.67, 0.36 to 1.22; 512 participants; 3 studies; moderate-quality evidence). We found moderate-quality evidence that using stents reduces total hospital stay by just under four days (mean difference (MD) -3.68, 95% CI -6.52 to -0.84; 605 participants; 4 studies). The risk of delayed gastric emptying, wound infection, and intra-abdominal abscess was uncertain (gastric emptying: RR 0.75, 95% CI 0.24 to 2.35; moderate-quality evidence) (wound infection: RR 0.73, 95% CI 0.40 to 1.32; moderate-quality evidence) (abscess: RR 1.38, 0.49 to 3.85; low-quality evidence). 
Subgroup analysis by type of stent provided limited evidence that external stents lead to lower risk of fistula compared with internal stents.\nExternal versus internal stents\nThe effect of external stents on the risk of pancreatic fistula, reoperation, delayed gastric emptying, and intra-abdominal abscess compared with internal stents was uncertain due to low-quality evidence (fistula: RR 1.44, 0.94 to 2.21; 362 participants; 3 studies) (reoperation: RR 2.02, 95% CI 0.38 to 10.79; 319 participants; 3 studies) (gastric emptying: RR 1.65, 0.66 to 4.09; 362 participants; 3 studies) (abscess: RR 1.91, 95% CI 0.80 to 4.58; 362 participants; 3 studies). The rate of in-hospital mortality was lower in studies comparing internal and external stents than in those comparing stents with no stents. One death occurred in the external-stent group (RR 0.33, 0.01 to 7.99; low-quality evidence). There were no cases of pancreatitis in participants who had internal stents compared with three in those who had external stents (RR 0.15, 0.01 to 2.73; low-quality evidence). The difference between internal and external stents on total hospital stay was uncertain due to the wide confidence intervals around the average effect of 1.7 days fewer with internal stents (9.18 days fewer to 5.84 days longer; 262 participants; 2 studies; low-quality evidence). The analysis of wound infection could not exclude a protective effect with either approach (RR 1.41, 0.44 to 4.48; 319 participants; 2 studies; moderate-quality evidence).\nOperative replacement of pancreatic juice versus not replacing pancreatic juice\nThere was insufficient evidence available from a small trial to ascertain the effect of replacing pancreatic juice.\nAuthors' conclusions\nThis systematic review has identified limited evidence on the effects of stents. We have not been able to identify convincing direct evidence of superiority of external over internal stents. We found a limited number of RCTs with small sample sizes. Further RCTs on the use of stents after pancreaticoduodenectomy are warranted.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no accord in studies on whether the use of stents is beneficial and, if so, whether the use of internal or external stents, with or without replacement is preferable."
}
] |
query-laysum
|
8057
|
[
{
"role": "user",
"content": "Abstract: Background\nFamilial Mediterranean fever (FMF), a hereditary auto-inflammatory disease, mainly affects ethnic groups living in the Mediterranean region. Early studies reported colchicine may potentially prevent FMF attacks. For people who are colchicine-resistant or intolerant, drugs such as anakinra, rilonacept, canakinumab, etanercept, infliximab or adalimumab might be beneficial. This is an update of the review last published in 2018.\nObjectives\nTo evaluate the efficacy and safety of interventions for reducing inflammation in people with FMF.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and four Chinese databases on in August 2021. We searched clinical trials registries and references listed in relevant reports.\nThe last search was 17 August 2021.\nSelection criteria\nWe included randomized controlled trials (RCTs) of people with FMF, comparing active interventions (including colchicine, anakinra, rilonacept, canakinumab, etanercept, infliximab, adalimumab, thalidomide, tocilizumab, interferon-α and ImmunoGuard (herbal dietary supplement)) with placebo or no treatment, or comparing active drugs to each other.\nData collection and analysis\nWe used standard Cochrane methodology. We assessed certainty of the evidence using GRADE.\nMain results\nWe included 10 RCTs with 312 participants (aged three to 53 years), including five parallel and five cross-over designed studies. Six studies used oral colchicine, one used oral ImmunoGuard, and the remaining three used rilonacept, anakinra or canakinumab as a subcutaneous injection. The duration of each study arm ranged from one to eight months.\nThere were inadequacies in the design of the four older colchicine studies and the two studies comparing a single to a divided dose of colchicine. However, the four studies of ImmunoGuard, rilonacept, anakinra and canakinumab were generally well-designed.\nWe aimed to report on the number of participants experiencing an attack, the timing of attacks, the prevention of amyloid A amyloidosis, adverse drug reactions and the response of a number of biochemical markers from the acute phase of an attack; but no study reported on the prevention of amyloid A amyloidosis.\nColchicine (oral) versus placebo\nAfter three months, colchicine 0.6 mg three times daily may reduce the number of people experiencing attacks (risk ratio (RR) 0.21, 95% confidence interval (CI) 0.05 to 0.95; 1 study, 10 participants; low-certainty evidence). 
One study (20 participants) of colchicine 0.5 mg twice daily showed there may be no difference in the number of participants experiencing attacks at two months (RR 0.78, 95% CI 0.49 to 1.23; low-certainty evidence).\nThere may be no differences in the duration of attacks (narrative summary; very low-certainty evidence), or in the number of days between attacks: (narrative summary; very low-certainty evidence).\nRegarding adverse drug reactions, one study reported loose stools and frequent bowel movements and a second reported diarrhea (narrative summary; both very low-certainty evidence).\nThere were no data on acute-phase response.\nRilonacept versus placebo\nThere is probably no difference in the number of people experiencing attacks at three months (RR 0.87, 95% CI 0.59 to 1.26; moderate-certainty evidence).\nThere may be no differences in the duration of attacks (narrative summary; low-certainty evidence) or in the number of days between attacks (narrative summary; low-certainty evidence).\nRegarding adverse drug reactions, the rilonacept study reported there may be no differences in gastrointestinal symptoms, hypertension, headache, respiratory tract infections, injection site reactions and herpes, compared to placebo (narrative summary; low-certainty evidence).\nThe study narratively reported there may be no differences in acute-phase response indicators after three months (low-certainty evidence).\nImmunoGuard versus placebo\nThe ImmunoGuard study observed there are probably no differences in adverse effects (moderate-certainty evidence) or in acute-phase response indicators after one month of treatment (moderate-certainty evidence).\nNo data were reported for the number of people experiencing an attack, duration of attacks or days between attacks.\nAnakinra versus placebo\nA study of anakinra given to 25 colchicine-resistant participants found there is probably no difference in the number of participants experiencing an attack at four months (RR 0.76, 95% CI 0.54 to 1.07; moderate-certainty evidence).\nThere were no data for duration of attacks or days between attacks.\nThere are probably no differences between anakinra and placebo with regards to injection site reaction, headache, presyncope, dyspnea and itching (narrative summary; moderate-certainty evidence).\nFor acute-phase response, anakinra probably reduced C-reactive protein (CRP) after four months (narrative summary; moderate-certainty evidence).\nCanakinumab versus placebo\nCanakinumab probably reduces the number of participants experiencing an attack at 16 weeks (RR 0.41, 95% CI 0.26 to 0.65; 1 study, 63 colchicine-resistant participants; moderate-certainty evidence).\nThere were no data for the duration of attacks or days between attacks.\nThe included study reported the number of serious adverse events per 100 patient-years was probably 42.7 with canakinumab versus 97.4 with placebo among people with colchicine-resistant FMF (moderate-certainty evidence).\nFor acute-phase response, canakinumab probably caused a higher proportion of participants to have a CRP level of 10 mg/L or less compared to placebo (68% with canakinumab versus 6% with placebo; 1 study, 63 participants; moderate-certainty evidence).\nColchicine single dose versus divided dose\nThere is probably no difference in the duration of attacks at three months (MD −0.04 hours, 95% CI −10.91 to 10.83) or six months (MD 2.80 hours, 95% CI −5.39 to 10.99; moderate-certainty evidence).\nThere were no data for the number of participants experiencing an attack or 
days between attacks.\nThere is probably no difference in adverse events (including anorexia, nausea, diarrhea, abdominal pain, vomiting and elevated liver enzymes) between groups (narrative summary; moderate-certainty evidence).\nFor acute-phase response, there may be no evidence of a difference between groups (narrative summary; low- to moderate-certainty evidence).\nAuthors' conclusions\nThere were limited RCTs assessing interventions for people with FMF. Based on the evidence, three times daily colchicine may reduce the number of people experiencing attacks, colchicine single dose and divided dose may not be different for children with FMF, canakinumab probably reduces the number of people experiencing attacks, and anakinra or canakinumab probably reduce CRP in colchicine-resistant participants; however, only a few RCTs contributed data for analysis. Further RCTs examining active interventions, not only colchicine, are necessary before a comprehensive conclusion regarding the efficacy and safety of interventions for reducing inflammation in FMF can be drawn.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 17 August 2021."
}
] |
query-laysum
|
8058
|
[
{
"role": "user",
"content": "Abstract: Background\nOcrelizumab is a humanised anti-CD20 monoclonal antibody developed for the treatment of multiple sclerosis (MS). It was approved by the Food and Drug Administration (FDA) in March 2017 for using in adults with relapsing-remitting multiple sclerosis (RRMS) and primary progressive multiple sclerosis (PPMS). Ocrelizumab is the only disease-modifying therapy (DMT) approved for PPMS. In November 2017, the European Medicines Agency (EMA) also approved ocrelizumab as the first drug for people with early PPMS. Therefore, it is important to evaluate the benefits, harms, and tolerability of ocrelizumab in people with MS.\nObjectives\nTo assess the benefits, harms, and tolerability of ocrelizumab in people with RRMS and PPMS.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, and two trials registers on 8 October 2021. We screened reference lists, contacted experts, and contacted the main authors of studies.\nSelection criteria\nAll randomised controlled trials (RCTs) involving adults diagnosed with RRMS or PPMS according to the McDonald criteria, comparing ocrelizumab alone or associated with other medications, at the approved dose of 600 mg every 24 weeks for any duration, versus placebo or any other active drug therapy.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nFour RCTs met our selection criteria. The overall population included 2551 participants; 1370 treated with ocrelizumab 600 mg and 1181 controls. Among the controls, 298 participants received placebo and 883 received interferon beta-1a. The treatment duration was 24 weeks in one study, 96 weeks in two studies, and at least 120 weeks in one study. One study was at high risk of allocation concealment and blinding of participants and personnel; all four studies were at high risk of bias for incomplete outcome data.\nFor RRMS, compared with interferon beta-1a, ocrelizumab was associated with: 1. lower relapse rate (risk ratio (RR) 0.61, 95% confidence interval (CI) 0.52 to 0.73; 2 studies, 1656 participants; moderate-certainty evidence); 2. a lower number of participants with disability progression (hazard ratio (HR) 0.60, 95% CI 0.43 to 0.84; 2 studies, 1656 participants; low-certainty evidence); 3. little to no difference in the number of participants with any adverse event (RR 1.00, 95% CI 0.96 to 1.04; 2 studies, 1651 participants; moderate-certainty evidence); 4. little to no difference in the number of participants with any serious adverse event (RR 0.79, 95% CI 0.57 to 1.11; 2 studies, 1651 participants; low-certainty evidence); 5. a lower number of participants experiencing treatment discontinuation caused by adverse events (RR 0.58, 95% CI 0.37 to 0.91; 2 studies, 1651 participants; low-certainty evidence); 6. a lower number of participants with gadolinium-enhancing T1 lesions on magnetic resonance imaging (MRI) (RR 0.27, 95% CI 0.22 to 0.35; 2 studies, 1656 participants; low-certainty evidence); 7. a lower number of participants with new or enlarging T2-hyperintense lesions on MRI (RR 0.63, 95% CI 0.57 to 0.69; 2 studies, 1656 participants; low-certainty evidence) at 96 weeks.\nFor PPMS, compared with placebo, ocrelizumab was associated with: 1. a lower number of participants with disability progression (HR 0.75, 95% CI 0.58 to 0.98; 1 study, 731 participants; low-certainty evidence); 2. 
a higher number of participants with any adverse events (RR 1.06, 95% CI 1.01 to 1.11; 1 study, 725 participants; moderate-certainty evidence); 3. little to no difference in the number of participants with any serious adverse event (RR 0.92, 95% CI 0.68 to 1.23; 1 study, 725 participants; low-certainty evidence); 4. little to no difference in the number of participants experiencing treatment discontinuation caused by adverse events (RR 1.23, 95% CI 0.55 to 2.75; 1 study, 725 participants; low-certainty evidence) for at least 120 weeks. There were no data for number of participants with gadolinium-enhancing T1 lesions on MRI and number of participants with new or enlarging T2-hyperintense lesions on MRI.\nAuthors' conclusions\nFor people with RRMS, ocrelizumab probably results in a large reduction in relapse rate and little to no difference in adverse events when compared with interferon beta-1a at 96 weeks (moderate-certainty evidence). Ocrelizumab may result in a large reduction in disability progression, treatment discontinuation caused by adverse events, number of participants with gadolinium-enhancing T1 lesions on MRI, and number of participants with new or enlarging T2-hyperintense lesions on MRI, and may result in little to no difference in serious adverse events (low-certainty evidence).\nFor people with PPMS, ocrelizumab probably results in a higher rate of adverse events when compared with placebo for at least 120 weeks (moderate-certainty evidence). Ocrelizumab may result in a reduction in disability progression and little to no difference in serious adverse events and treatment discontinuation caused by adverse events (low-certainty evidence).\nOcrelizumab was well tolerated clinically; the most common adverse events were infusion-related reactions and nasopharyngitis, and urinary tract and upper respiratory tract infections.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "– Ocrelizumab is a recently approved medicine to treat people with multiple sclerosis (MS). In relapsing-remitting MS (where people experience flare-ups of symptoms), ocrelizumab probably substantially reduces flare-ups, may substantially reduce worsening of symptoms, and probably makes little or no difference to unwanted effects compared with interferon beta-1a (a standard treatment for MS), 96 weeks after treatment starts.\n– Compared to placebo (a dummy medicine) after 120 weeks of treatment for primary progressive MS (where people's symptoms worsen gradually), ocrelizumab may reduce worsening of symptoms. Ocrelizumab probably increases unwanted effects but makes little or no difference to the number of serious unwanted effects.\n– We need more, better-designed studies to test the effectiveness of ocrelizumab and measure unwanted effects."
}
] |
query-laysum
|
8059
|
[
{
"role": "user",
"content": "Abstract: Background\nCorticosteroid treatment is considered the 'gold standard' for Duchenne muscular dystrophy (DMD); however, it is also known to induce osteoporosis and thus increase the risk of vertebral fragility fractures. Good practice in the care of those with DMD requires prevention of these adverse effects. Treatments to increase bone mineral density include bisphosphonates and vitamin D and calcium supplements, and in adolescents with pubertal delay, testosterone. Bone health management is an important part of lifelong care for patients with DMD.\nObjectives\nTo assess the effects of interventions to prevent or treat osteoporosis in children and adults with DMD taking long-term corticosteroids; to assess the effects of these interventions on the frequency of vertebral fragility fractures and long-bone fractures, and on quality of life; and to assess adverse events.\nSearch methods\nOn 12 September 2016, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL Plus to identify potentially eligible trials. We also searched the Web of Science ISI Proceedings (2001 to September 2016) and three clinical trials registries to identify unpublished studies and ongoing trials. We contacted correspondence authors of the included studies in the review to obtain information on unpublished studies or work in progress.\nSelection criteria\nWe considered for inclusion in the review randomised controlled trials (RCTs) and quasi-RCTs involving any bone health intervention for corticosteroid-induced osteoporosis and fragility fractures in children, adolescents, and adults with a confirmed diagnosis of DMD. The interventions might have included oral and intravenous bisphosphonates, vitamin D supplements, calcium supplements, dietary calcium, testosterone, and weight-bearing activity.\nData collection and analysis\nTwo review authors independently assessed reports and selected potential studies for inclusion, following standard Cochrane methodology. We contacted study authors to obtain further information for clarification on published work, unpublished studies, and work in progress.\nMain results\nWe identified 18 potential studies, of which two, currently reported only as abstracts, met the inclusion criteria for this review. Too little information was available for us to present full results or adequately assess risk of bias. The participants were children aged five to 15 years with DMD, ambulant and non-ambulant. The interventions were risedronate versus no treatment in one trial (13 participants) and whole-body vibration versus a placebo device in the second (21 participants). Both studies reported improved bone mineral density with the active treatments, with no improvement in the control groups, but the abstracts did not compare treatment and control conditions. All children tolerated whole-body vibration treatment. No study provided information on adverse events. Two studies are ongoing: one investigating whole-body vibration, the other investigating zoledronic acid.\nAuthors' conclusions\nWe know of no high-quality evidence from RCTs to guide use of treatments to prevent or treat corticosteroid-induced osteoporosis and reduce the risk of fragility fractures in children and adults with DMD; only limited results from two trials reported in abstracts were available. We await formal trial reports. 
Findings from two ongoing relevant studies and two trials, for which only abstracts are available, will be important in future updates of this review.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "What are the effects of treatments to prevent or treat thinning of bones (osteoporosis) and prevent fragile bones from breaking (fragility fractures) in adults and children with Duchenne muscular dystrophy (DMD) who are taking long-term corticosteroids?"
}
] |
query-laysum
|
8060
|
[
{
"role": "user",
"content": "Abstract: Background\nThe detection and diagnosis of caries at the initial (non-cavitated) and moderate (enamel) levels of severity is fundamental to achieving and maintaining good oral health and prevention of oral diseases. An increasing array of methods of early caries detection have been proposed that could potentially support traditional methods of detection and diagnosis. Earlier identification of disease could afford patients the opportunity of less invasive treatment with less destruction of tooth tissue, reduce the need for treatment with aerosol-generating procedures, and potentially result in a reduced cost of care to the patient and to healthcare services.\nObjectives\nTo determine the diagnostic accuracy of different visual classification systems for the detection and diagnosis of non-cavitated coronal dental caries for different purposes (detection and diagnosis) and in different populations (children or adults).\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 30 April 2020); Embase Ovid (1980 to 30 April 2020); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 30 April 2020); and the World Health Organization International Clinical Trials Registry Platform (to 30 April 2020). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared a visual classification system (index test) with a reference standard (histology, excavation, radiographs). This included cross-sectional studies that evaluated the diagnostic accuracy of single index tests and studies that directly compared two or more index tests. Studies reporting at both the patient or tooth surface level were included. In vitro and in vivo studies were considered. Studies that explicitly recruited participants with caries into dentine or frank cavitation were excluded. We also excluded studies that artificially created carious lesions and those that used an index test during the excavation of dental caries to ascertain the optimum depth of excavation.\nData collection and analysis\nWe extracted data independently and in duplicate using a standardised data extraction and quality assessment form based on QUADAS-2 specific to the review context. Estimates of diagnostic accuracy were determined using the bivariate hierarchical method to produce summary points of sensitivity and specificity with 95% confidence intervals (CIs) and regions, and 95% prediction regions. The comparative accuracy of different classification systems was conducted based on indirect comparisons. Potential sources of heterogeneity were pre-specified and explored visually and more formally through meta-regression.\nMain results\nWe included 71 datasets from 67 studies (48 completed in vitro) reporting a total of 19,590 tooth sites/surfaces. The most frequently reported classification systems were the International Caries Detection and Assessment System (ICDAS) (36 studies) and Ekstrand-Ricketts-Kidd (ERK) (15 studies). In reporting the results, no distinction was made between detection and diagnosis. Only two studies were at low risk of bias across all four domains, and 15 studies were at low concern for applicability across all three domains. The patient selection domain had the highest proportion of high risk of bias studies (49 studies). 
Four studies were assessed at high risk of bias for the index test domain, nine for the reference standard domain, and seven for the flow and timing domain. Due to the high number of studies on extracted teeth concerns regarding applicability were high for the patient selection and index test domains (49 and 46 studies respectively).\nStudies were synthesised using a hierarchical bivariate method for meta-analysis. There was substantial variability in the results of the individual studies: sensitivities ranged from 0.16 to 1.00 and specificities from 0 to 1.00. For all visual classification systems the estimated summary sensitivity and specificity point was 0.86 (95% CI 0.80 to 0.90) and 0.77 (95% CI 0.72 to 0.82) respectively, diagnostic odds ratio (DOR) 20.38 (95% CI 14.33 to 28.98). In a cohort of 1000 tooth surfaces with 28% prevalence of enamel caries, this would result in 40 being classified as disease free when enamel caries was truly present (false negatives), and 163 being classified as diseased in the absence of enamel caries (false positives). The addition of test type to the model did not result in any meaningful difference to the sensitivity or specificity estimates (Chi2(4) = 3.78, P = 0.44), nor did the addition of primary or permanent dentition (Chi2(2) = 0.90, P = 0.64). The variability of results could not be explained by tooth surface (occlusal or approximal), prevalence of dentinal caries in the sample, nor reference standard. Only one study intentionally included restored teeth in its sample and no studies reported the inclusion of sealants.\nWe rated the certainty of the evidence as low, and downgraded two levels in total for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the in vitro studies, and inconsistency of results.\nAuthors' conclusions\nWhilst the confidence intervals for the summary points of the different visual classification systems indicated reasonable performance, they do not reflect the confidence that one can have in the accuracy of assessment using these systems due to the considerable unexplained heterogeneity evident across the studies. The prediction regions in which the sensitivity and specificity of a future study should lie are very broad, an important consideration when interpreting the results of this review. Should treatment be provided as a consequence of a false-positive result then this would be non-invasive, typically the application of fluoride varnish where it was not required, with low potential for an adverse event but healthcare resource and finance costs.\nDespite the robust methodology applied in this comprehensive review, the results should be interpreted with some caution due to shortcomings in the design and execution of many of the included studies. Studies to determine the diagnostic accuracy of methods to detect and diagnose caries in situ are particularly challenging. Wherever possible future studies should be carried out in a clinical setting, to provide a realistic assessment of performance within the oral cavity with the challenges of plaque, tooth staining, and restorations, and consider methods to minimise bias arising from the use of imperfect reference standards in clinical studies.\n\nGiven the provided abstract, please respond to the following query: \"What are the implications of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We observed substantial variation in the results, which is perhaps unsurprising as the use of these classification systems involve interpretation by the user. There is considerable uncertainty in the likely performance of a future study. Further research studies should be carried out in a clinical setting."
}
] |
query-laysum
|
8061
|
[
{
"role": "user",
"content": "Abstract: Background\nScreening hysteroscopy in infertile women with unexplained infertility, or prior to intrauterine insemination (IUI) or in vitro fertilisation (IVF) may reveal intrauterine pathology that may not be detected by routine transvaginal ultrasound. Hysteroscopy, whether purely diagnostic or operative may improve reproductive outcomes.\nObjectives\nTo assess the effectiveness and safety of screening hysteroscopy in subfertile women undergoing evaluation for infertility, and subfertile women undergoing IUI or IVF.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register, CENTRAL CRSO, MEDLINE, Embase, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (September 2018). We searched reference lists of relevant articles and handsearched relevant conference proceedings.\nSelection criteria\nRandomised controlled trials comparing screening hysteroscopy versus no intervention in subfertile women wishing to conceive spontaneously, or before undergoing IUI or IVF.\nData collection and analysis\nWe independently screened studies, extracted data, and assessed the risk of bias. The primary outcomes were live birth rate and complications following hysteroscopy. We analysed data using risk ratio (RR) and a fixed-effect model. We assessed the quality of the evidence by using GRADE criteria.\nMain results\nWe retrieved 11 studies. We included one trial that evaluated screening hysteroscopy versus no hysteroscopy, in women with unexplained subfertility, who were trying to conceive spontaneously. We are uncertain whether ongoing pregnancy rate improves following a screening hysteroscopy in women with at least two years of unexplained subfertility (RR 4.30, 95% CI 2.29 to 8.07; 1 RCT; participants = 200; very low-quality evidence). For a typical clinic with a 10% ongoing pregnancy rate without hysteroscopy, performing a screening hysteroscopy would be expected to result in ongoing pregnancy rates between 23% and 81%. The included study reported no adverse events in either treatment arm. We are uncertain whether clinical pregnancy rate is improved (RR 3.80, 95% CI 2.31 to 6.24; 1 RCT; participants = 200; very low-quality evidence), or miscarriage rate increases (RR 2.80, 95% CI 1.05 to 7.48; 1 RCT; participants = 200; very low-quality evidence), following screening hysteroscopy in women with at least two years of unexplained subfertility.\nWe included ten trials that included 1836 women who had a screening hysteroscopy and 1914 women who had no hysteroscopy prior to IVF. Main limitations in the quality of evidence were inadequate reporting of study methods and higher statistical heterogeneity. Eight of the ten trials had unclear risk of bias for allocation concealment.\nPerforming a screening hysteroscopy before IVF may increase live birth rate (RR 1.26, 95% CI 1.11 to 1.43; 6 RCTs; participants = 2745; I² = 69 %; low-quality evidence). For a typical clinic with a 22% live birth rate, performing a screening hysteroscopy would be expected to result in live birth rates between 25% and 32%. However, sensitivity analysis done by pooling results from trials at low risk of bias showed no increase in live birth rate following a screening hysteroscopy (RR 0.99, 95% CI 0.82 to 1.18; 2 RCTs; participants = 1452; I² = 0%).\nOnly four trials reported complications following hysteroscopy; of these, three trials recorded no events in either group. 
We are uncertain whether a screening hysteroscopy is associated with higher adverse events (Peto odds ratio 7.47, 95% CI 0.15 to 376.42; 4 RCTs; participants = 1872; I² = not applicable; very low-quality evidence).\nPerforming a screening hysteroscopy before IVF may increase clinical pregnancy rate (RR 1.32, 95% CI 1.20 to 1.45; 10 RCTs; participants = 3750; I² = 49%; low-quality evidence). For a typical clinic with a 28% clinical pregnancy rate, performing a screening hysteroscopy would be expected to result in clinical pregnancy rates between 33% and 40%.\nThere may be little or no difference in miscarriage rate following screening hysteroscopy (RR 1.01, 95% CI 0.67 to 1.50; 3 RCTs; participants = 1669; I² = 0%; low-quality evidence).\nWe found no trials that compared a screening hysteroscopy versus no hysteroscopy before IUI.\nAuthors' conclusions\nAt present, there is no high-quality evidence to support the routine use of hysteroscopy as a screening tool in the general population of subfertile women with a normal ultrasound or hysterosalpingogram in the basic fertility work-up for improving reproductive success rates.\nIn women undergoing IVF, low-quality evidence, including all of the studies reporting these outcomes, suggests that performing a screening hysteroscopy before IVF may increase live birth and clinical pregnancy rates. However, pooled results from the only two trials with a low risk of bias did not show a benefit of screening hysteroscopy before IVF.\nSince the studies showing an effect are those with unclear allocation concealment, we are uncertain whether a routine screening hysteroscopy increases live birth and clinical pregnancy, be it for all women, or those with two or more failed IVF attempts. There is insufficient data to draw conclusions about the safety of screening hysteroscopy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the safety and usefulness of performing a screening hysteroscopy on reproductive outcomes in women trying to conceive spontaneously, and those undergoing in vitro fertilisation (IVF)."
}
] |
query-laysum
|
8062
|
[
{
"role": "user",
"content": "Abstract: Background\nTransitional care provides for the continuity of care as patients move between different stages and settings of care. Medication discrepancies arising at care transitions have been reported as prevalent and are linked with adverse drug events (ADEs) (e.g. rehospitalisation).\nMedication reconciliation is a process to prevent medication errors at transitions. Reconciliation involves building a complete list of a person's medications, checking them for accuracy, reconciling and documenting any changes. Despite reconciliation being recognised as a key aspect of patient safety, there remains a lack of consensus and evidence about the most effective methods of implementing reconciliation and calls have been made to strengthen the evidence base prior to widespread adoption.\nObjectives\nTo assess the effect of medication reconciliation on medication discrepancies, patient-related outcomes and healthcare utilisation in people receiving this intervention during care transitions compared to people not receiving medication reconciliation.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, seven other databases and two trials registers on 18 January 2018 together with reference checking, citation searching, grey literature searches and contact with study authors to identify additional studies.\nSelection criteria\nWe included only randomised trials. Eligible studies described interventions fulfilling the Institute for Healthcare Improvement definition of medication reconciliation aimed at all patients experiencing a transition of care as compared to standard care in that institution. Included studies had to report on medication discrepancies as an outcome.\nData collection and analysis\nTwo review authors independently screened titles and abstracts, assessed studies for eligibility, assessed risk of bias and extracted data. Study-specific estimates were pooled, using a random-effects model to yield summary estimates of effect and 95% confidence intervals (CI). We used the GRADE approach to assess the overall certainty of evidence for each pooled outcome.\nMain results\nWe identified 25 randomised trials involving 6995 participants. All studies were conducted in hospital or immediately related settings in eight countries. Twenty-three studies were provider orientated (pharmacist mediated) and two were structural (an electronic reconciliation tool and medical record changes). A pooled result of 20 studies comparing medication reconciliation interventions to standard care of participants with at least one medication discrepancy showed a risk ratio (RR) of 0.53 (95% CI 0.42 to 0.67; 4629 participants). The certainty of the evidence on this outcome was very low and therefore the effect of medication reconciliation to reduce discrepancies was uncertain. Similarly, reconciliation's effect on the number of reported discrepancies per participant was also uncertain (mean difference (MD) –1.18, 95% CI –2.58 to 0.23; 4 studies; 1963 participants), as well as its effect on the number of medication discrepancies per participant medication (RR 0.13, 95% CI 0.01 to 1.29; 2 studies; 3595 participants) as the certainty of the evidence for both outcomes was very low.\nReconciliation may also have had little or no effect on preventable adverse drug events (PADEs) due to the very low certainty of the available evidence (RR 0.37. 
95% CI 0.09 to 1.57; 3 studies; 1253 participants), with again uncertainty on its effect on ADE (RR 1.09, 95% CI 0.91 to 1.30; 4 studies; 1363 participants; low-certainty evidence). Evidence of the effect of the interventions on healthcare utilisation was conflicting; it probably made little or no difference on unplanned rehospitalisation when reported alone (RR 0.72, 95% CI 0.44 to 1.18; 5 studies; 1206 participants; moderate-certainty evidence), and had an uncertain effect on a composite measure of hospital utilisation (emergency department, rehospitalisation RR 0.78, 95% CI 0.50 to 1.22; 4 studies; 597 participants; very low-certainty evidence).\nAuthors' conclusions\nThe impact of medication reconciliation interventions, in particular pharmacist-mediated interventions, on medication discrepancies is uncertain due to the certainty of the evidence being very low. There was also no certainty of the effect of the interventions on the secondary clinical outcomes of ADEs, PADEs and healthcare utilisation.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors found 25 studies conducted in eight different countries in hospital or immediately related settings. Twenty-three studies were primarily pharmacist delivered, one was an electronic reconciliation tool and one medical record changes. Studies mainly included older people prescribed multiple medications.\nWhile many studies reduced the presence of at least one medication discrepancy in people receiving the intervention, we were uncertain whether reconciliation reduced discrepancies as the reliability of the evidence was very low. The evidence for the intervention's effect on the number of discrepancies and on clinical outcomes such as actual and preventable medication side effects, combined measures of healthcare utilisation and unplanned readmissions to hospital itself was varying with evidence ranging from moderate to low or very low reliability."
}
] |
query-laysum
|
8063
|
[
{
"role": "user",
"content": "Abstract: Background\nAutism spectrum disorder (ASD) is a behaviourally diagnosed condition. It is defined by impairments in social communication or the presence of restricted or repetitive behaviours, or both. Diagnosis is made according to existing classification systems. In recent years, especially following publication of the Diagnostic and Statistical Manual of Mental Disorders - Fifth Edition (DSM-5; APA 2013), children are given the diagnosis of ASD, rather than subclassifications of the spectrum such as autistic disorder, Asperger syndrome, or pervasive developmental disorder - not otherwise specified. Tests to diagnose ASD have been developed using parent or carer interview, child observation, or a combination of both.\nObjectives\nPrimary objectives\n1. To identify which diagnostic tools, including updated versions, most accurately diagnose ASD in preschool children when compared with multi-disciplinary team clinical judgement.\n2. To identify how the best of the interview tools compare with CARS, then how CARS compares with ADOS.\na. Which ASD diagnostic tool - among ADOS, ADI-R, CARS, DISCO, GARS, and 3di - has the best diagnostic test accuracy?\nb. Is the diagnostic test accuracy of any one test sufficient for that test to be suitable as a sole assessment tool for preschool children?\nc. Is there any combination of tests that, if offered in sequence, would provide suitable diagnostic test accuracy and enhance test efficiency?\nd. If data are available, does the combination of an interview tool with a structured observation test have better diagnostic test accuracy (i.e. fewer false-positives and fewer false-negatives) than either test alone?\nAs only one interview tool was identified, we modified the first three aims to a single aim (Differences between protocol and review): This Review evaluated diagnostic tests in terms of sensitivity and specificity. Specificity is the most important factor for diagnosis; however, both sensitivity and specificity are of interest in this Review because there is an inherent trade-off between these two factors.\nSecondary objectives\n1. To determine whether any diagnostic test has greater diagnostic test accuracy for age-specific subgroups within the preschool age range.\nSearch methods\nIn July 2016, we searched CENTRAL, MEDLINE, Embase, PsycINFO, 10 other databases, and the reference lists of all included publications.\nSelection criteria\nPublications had to: \n1. report diagnostic test accuracy for any of the following six included diagnostic tools: Autism Diagnostic Interview - Revised (ADI-R), Gilliam Autism Rating Scale (GARS), Diagnostic Interview for Social and Communication Disorder (DISCO), Developmental, Dimensional, and Diagnostic Interview (3di), Autism Diagnostic Observation Schedule - Generic (ADOS), and Childhood Autism Rating Scale (CARS); \n2. include children of preschool age (under six years of age) suspected of having an ASD; and \n3. have a multi-disciplinary assessment, or similar, as the reference standard.\nEligible studies included cohort, cross-sectional, randomised test accuracy, and case-control studies. The target condition was ASD.\nData collection and analysis\nTwo review authors independently assessed all studies for inclusion and extracted data using standardised forms. A third review author settled disagreements. We assessed methodological quality using the QUADAS-2 instrument (Quality Assessment of Studies of Diagnostic Accuracy - Revised). 
We conducted separate univariate random-effects logistic regressions for sensitivity and specificity for CARS and ADI-R. We conducted meta-analyses of pairs of sensitivity and specificity using bivariate random-effects methods for ADOS.\nMain results\nIn this Review, we included 21 sets of analyses reporting different tools or cohorts of children from 13 publications, many with high risk of bias or potential conflicts of interest or a combination of both. Overall, the prevalence of ASD for children in the included analyses was 74%.\nFor versions and modules of ADOS, there were 12 analyses with 1625 children. Sensitivity of ADOS ranged from 0.76 to 0.98, and specificity ranged from 0.20 to 1.00. The summary sensitivity was 0.94 (95% confidence interval (CI) 0.89 to 0.97), and the summary specificity was 0.80 (95% CI 0.68 to 0.88).\nFor CARS, there were four analyses with 641 children. Sensitivity of CARS ranged from 0.66 to 0.89, and specificity ranged from 0.21 to 1.00. The summary sensitivity for CARS was 0.80 (95% CI 0.61 to 0.91), and the summary specificity was 0.88 (95% CI 0.64 to 0.96).\nFor ADI-R, there were five analyses with 634 children. Sensitivity for ADI-R ranged from 0.19 to 0.75, and specificity ranged from 0.63 to 1.00. The summary sensitivity for the ADI-R was 0.52 (95% CI 0.32 to 0.71), and the summary specificity was 0.84 (95% CI 0.61 to 0.95).\nStudies that compared tests were few and too small to allow clear conclusions.\nIn two studies that included analyses for both ADI-R and ADOS, tests scored similarly for sensitivity, but ADOS scored higher for specificity. In two studies that included analyses for ADI-R, ADOS, and CARS, ADOS had the highest sensitivity and CARS the highest specificity.\nIn one study that explored individual and additive sensitivity and specificity of ADOS and ADI-R, combining the two tests did not increase the sensitivity nor the specificity of ADOS used alone.\nPerformance for all tests was lower when we excluded studies at high risk of bias.\nAuthors' conclusions\nWe observed substantial variation in sensitivity and specificity of all tests, which was likely attributable to methodological differences and variations in the clinical characteristics of populations recruited.\nWhen we compared summary statistics for ADOS, CARS, and ADI-R, we found that ADOS was most sensitive. All tools performed similarly for specificity. In lower prevalence populations, the risk of falsely identifying children who do not have ASD would be higher.\nNow available are new versions of tools that require diagnostic test accuracy assessment, ideally in clinically relevant situations, with methods at low risk of bias and in children of varying abilities.\n\nGiven the provided abstract, please respond to the following query: \"What are the implications of this Review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Current findings suggest that ADOS is best for not missing children who have ASD and is similar to CARS and ADI-R in not falsely diagnosing ASD in a child who does not have ASD. ADOS has acceptable accuracy in populations with a high prevalence of ASD. However, overdiagnosis is likely if the tool is used in populations with a lower prevalence of ASD. This finding supports current recommended practice for ASD diagnostic tools to be used as part of a multi-disciplinary assessment, rather than as stand-alone diagnostic instruments."
}
] |
query-laysum
|
8064
|
[
{
"role": "user",
"content": "Abstract: Background\nVascular endothelial growth factor (VEGF) plays a key role in angiogenesis in foetal life. Researchers have recently attempted to use anti-VEGF agents for the treatment of retinopathy of prematurity (ROP), a vasoproliferative disorder. The safety and efficacy of these agents in preterm infants with ROP is currently uncertain.\nObjectives\nTo evaluate the efficacy and safety of anti-VEGF drugs when used either as monotherapy, that is without concomitant cryotherapy or laser therapy, or in combination with planned cryo/laser therapy in preterm infants with type 1 ROP (defined as zone I any stage with plus disease, zone I stage 3 with or without plus disease, or zone II stage 2 or 3 with plus disease).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 11), MEDLINE (1966 to 11 December 2016), Embase (1980 to 11 December 2016), CINAHL (1982 to 11 December 2016), and conference proceedings.\nSelection criteria\nRandomised or quasi-randomised controlled trials that evaluated the efficacy or safety of administration, or both, of anti-VEGF agents compared with conventional therapy in preterm infants with ROP.\nData collection and analysis\nWe used standard Cochrane and Cochrane Neonatal methods for data collection and analysis. We used the GRADE approach to assess the quality of the evidence.\nMain results\nSix trials involving a total of 383 infants fulfilled the inclusion criteria. Five trials compared intravitreal bevacizumab (n = 4) or ranibizumab (n = 1) with conventional laser therapy (monotherapy), while the sixth study compared intravitreal pegaptanib plus conventional laser therapy with laser/cryotherapy (combination therapy).\nWhen used as monotherapy, bevacizumab/ranibizumab did not reduce the risk of complete or partial retinal detachment (3 studies; 272 infants; risk ratio (RR) 1.04, 95% confidence interval (CI) 0.21 to 5.13; risk difference (RD) 0.00, 95% CI -0.04 to 0.04; very low-quality evidence), mortality before discharge (2 studies; 229 infants; RR 1.50, 95% CI 0.26 to 8.75), corneal opacity requiring corneal transplant (1 study; 286 eyes; RR 0.34, 95% CI 0.01 to 8.26), or lens opacity requiring cataract removal (3 studies; 544 eyes; RR 0.15, 95% CI 0.01 to 2.79). The risk of recurrence of ROP requiring retreatment also did not differ between groups (2 studies; 193 infants; RR 0.88, 95% CI 0.47 to 1.63; RD -0.02, 95% CI -0.12 to 0.07; very low-quality evidence). Subgroup analysis showed a significant reduction in the risk of recurrence in infants with zone I ROP (RR 0.15, 95% CI 0.04 to 0.62), but an increased risk of recurrence in infants with zone II ROP (RR 2.53, 95% CI 1.01 to 6.32). Pooled analysis of studies that reported eye-level outcomes also revealed significant increase in the risk of recurrence of ROP in the eyes that received bevacizumab (RR 5.36, 95% CI 1.22 to 23.50; RD 0.10, 95% CI 0.03 to 0.17). Infants who received intravitreal bevacizumab had a significantly lower risk of refractive errors (very high myopia) at 30 months of age (1 study; 211 eyes; RR 0.06, 95% CI 0.02 to 0.20; RD -0.40, 95% CI -0.50 to -0.30; low-quality evidence).\nWhen used in combination with laser therapy, intravitreal pegaptanib was found to reduce the risk of retinal detachment when compared to laser/cryotherapy alone (152 eyes; RR 0.26, 95% CI 0.12 to 0.55; RD -0.29, 95% CI -0.42 to -0.16; low-quality evidence). 
The incidence of recurrence of ROP by 55 weeks' postmenstrual age was also lower in the pegaptanib + laser therapy group (76 infants; RR 0.29, 95% CI 0.12 to 0.7; RD -0.35, 95% CI -0.55 to -0.16; low-quality evidence). There was no difference in the risk of perioperative retinal haemorrhages between the two groups (152 eyes; RR 0.62, 95% CI 0.24 to 1.56; RD -0.05, 95% CI -0.16 to 0.05; very low-quality evidence). However, the risk of delayed systemic adverse effects with any of the three anti-VEGF drugs is not known.\nAuthors' conclusions\nImplications for practice: Intravitreal bevacizumab/ranibizumab, when used as monotherapy, reduces the risk of refractive errors during childhood but does not reduce the risk of retinal detachment or recurrence of ROP in infants with type 1 ROP. While the intervention might reduce the risk of recurrence of ROP in infants with zone I ROP, it can potentially result in higher risk of recurrence requiring retreatment in those with zone II ROP. Intravitreal pegaptanib, when used in conjunction with laser therapy, reduces the risk of retinal detachment as well as the recurrence of ROP in infants with type 1 ROP. However, the quality of the evidence was very low to low for most outcomes due to risk of detection bias and other biases. The effects on other critical outcomes and, more importantly, the long-term systemic adverse effects of the drugs are not known. Insufficient data precludes strong conclusions favouring routine use of intravitreal anti-VEGF agents - either as monotherapy or in conjunction with laser therapy - in preterm infants with type 1 ROP.\nImplications for research: Further studies are needed to evaluate the effect of anti-VEGF agents on structural and functional outcomes in childhood and delayed systemic effects including adverse neurodevelopmental outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Setting\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Neonatal units in China, Czech Republic, Italy, Iran, Ireland, and USA."
}
] |
query-laysum
|
8065
|
[
{
"role": "user",
"content": "Abstract: Background\nTobacco use is the largest single preventable cause of death and disease worldwide. Standardised tobacco packaging is an intervention intended to reduce the promotional appeal of packs and can be defined as packaging with a uniform colour (and in some cases shape and size) with no logos or branding, apart from health warnings and other government-mandated information, and the brand name in a prescribed uniform font, colour and size. Australia was the first country to implement standardised tobacco packaging between October and December 2012, France implemented standardised tobacco packaging on 1 January 2017 and several other countries are implementing, or intending to implement, standardised tobacco packaging.\nObjectives\nTo assess the effect of standardised tobacco packaging on tobacco use uptake, cessation and reduction.\nSearch methods\nWe searched MEDLINE, Embase, PsycINFO and six other databases from 1980 to January 2016. We checked bibliographies and contacted study authors to identify additional peer-reviewed studies.\nSelection criteria\nPrimary outcomes included changes in tobacco use prevalence incorporating tobacco use uptake, cessation, consumption and relapse prevention. Secondary outcomes covered intermediate outcomes that can be measured and are relevant to tobacco use uptake, cessation or reduction. We considered multiple study designs: randomised controlled trials, quasi-experimental and experimental studies, observational cross-sectional and cohort studies. The review focused on all populations and people of any age; to be included, studies had to be published in peer-reviewed journals. We examined studies that assessed the impact of changes in tobacco packaging such as colour, design, size and type of health warnings on the packs in relation to branded packaging. In experiments, the control condition was branded tobacco packaging but could include variations of standardised packaging.\nData collection and analysis\nScreening and data extraction followed standard Cochrane methods. We used different 'Risk of bias' domains for different study types. We have summarised findings narratively.\nMain results\nFifty-one studies met our inclusion criteria, involving approximately 800,000 participants. The studies included were diverse, including observational studies, between- and within-participant experimental studies, cohort and cross-sectional studies, and time-series analyses. Few studies assessed behavioural outcomes in youth and non-smokers. Five studies assessed the primary outcomes: one observational study assessed smoking prevalence among 700,000 participants until one year after standardised packaging in Australia; four studies assessed consumption in 9394 participants, including a series of Australian national cross-sectional surveys of 8811 current smokers, in addition to three smaller studies. No studies assessed uptake, cessation, or relapse prevention. Two studies assessed quit attempts. Twenty studies examined other behavioural outcomes and 45 studies examined non-behavioural outcomes (e.g. appeal, perceptions of harm). In line with the challenges inherent in evaluating standardised tobacco packaging, a number of methodological imitations were apparent in the included studies and overall we judged most studies to be at high or unclear risk of bias in at least one domain. 
The one included study assessing the impact of standardised tobacco packaging on smoking prevalence in Australia found a 3.7% reduction in odds when comparing before to after the packaging change, or a 0.5 percentage point drop in smoking prevalence, when adjusting for confounders. Confidence in this finding is limited, due to the nature of the evidence available, and is therefore rated low by GRADE standards. Findings were mixed amongst the four studies assessing consumption, with some studies finding no difference and some studies finding evidence of a decrease; certainty in this outcome was rated very low by GRADE standards due to the limitations in study design. One national study of Australian adult smoker cohorts (5441 participants) found that quit attempts increased from 20.2% prior to the introduction of standardised packaging to 26.6% one year post-implementation. A second study of calls to quitlines provides indirect support for this finding, with a 78% increase observed in the number of calls after the implementation of standardised packaging. Here again, certainty is low. Studies of other behavioural outcomes found evidence of increased avoidance behaviours when using standardised packs, reduced demand for standardised packs and reduced craving. Evidence from studies measuring eye-tracking showed increased visual attention to health warnings on standardised compared to branded packs. Corroborative evidence for the latter finding came from studies assessing non-behavioural outcomes, which in general found greater warning salience when viewing standardised, than branded packs. There was mixed evidence for quitting cognitions, whereas findings with youth generally pointed towards standardised packs being less likely to motivate smoking initiation than branded packs. We found the most consistent evidence for appeal, with standardised packs rating lower than branded packs. Tobacco in standardised packs was also generally perceived as worse-tasting and lower quality than tobacco in branded packs. Standardised packaging also appeared to reduce misperceptions that some cigarettes are less harmful than others, but only when dark colours were used for the uniform colour of the pack.\nAuthors' conclusions\nThe available evidence suggests that standardised packaging may reduce smoking prevalence. Only one country had implemented standardised packaging at the time of this review, so evidence comes from one large observational study that provides evidence for this effect. A reduction in smoking behaviour is supported by routinely collected data by the Australian government. Data on the effects of standardised packaging on non-behavioural outcomes (e.g. appeal) are clearer and provide plausible mechanisms of effect consistent with the observed decline in prevalence. As standardised packaging is implemented in different countries, research programmes should be initiated to capture long term effects on tobacco use prevalence, behaviour, and uptake. We did not find any evidence suggesting standardised packaging may increase tobacco use.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tobacco use kills more people worldwide than any other preventable cause of death. The best way to reduce tobacco use is by stopping people from starting to use tobacco and encouraging and helping existing users to stop. This can be done by introducing policies that can reach a wide number of people in a country, together with offering individual treatment and support to individuals who are already using tobacco to help them to stop. Many countries have introduced bans on tobacco advertising but have not controlled the look of the tobacco pack itself. Tobacco packs can be colourful and attractive, with exciting new shapes and sizes. Standardised tobacco packaging is a government policy which removes these bright designs by, for example, only allowing tobacco packs to be in one colour, shape or size. Standardised packaging generally involves the use of the same uniform colour on all tobacco packs, with no brand imagery, and the brand name written in a specified font, colour and size. Health warnings and other information that governments wish to put on the packs can remain. Australia was the first country to introduce standardised tobacco packaging by December 2012. France was the second by January 2017. Several other countries are introducing standardised packaging or planning to do so. We examined whether standardised packaging reduces tobacco use."
}
] |
query-laysum
|
8066
|
[
{
"role": "user",
"content": "Abstract: Background\nDental professionals are well placed to help their patients stop using tobacco products. Large proportions of the population visit the dentist regularly. In addition, the adverse effects of tobacco use on oral health provide a context that dental professionals can use to motivate a quit attempt.\nObjectives\nTo assess the effectiveness, adverse events and oral health effects of tobacco cessation interventions offered by dental professionals.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group's Specialised Register up to February 2020.\nSelection criteria\nWe included randomised and quasi-randomised clinical trials assessing tobacco cessation interventions conducted by dental professionals in the dental practice or community setting, with at least six months of follow-up.\nData collection and analysis\nTwo review authors independently reviewed abstracts for potential inclusion and extracted data from included trials. We resolved disagreements by consensus. The primary outcome was abstinence from all tobacco use (e.g. cigarettes, smokeless tobacco) at the longest follow-up, using the strictest definition of abstinence reported. Individual study effects and pooled effects were summarised as risk ratios (RR) and 95% confidence intervals (CI), using Mantel-Haenszel random-effects models to combine studies where appropriate. We assessed statistical heterogeneity with the I2 statistic. We summarised secondary outcomes narratively.\nMain results\nTwenty clinical trials involving 14,897 participants met the criteria for inclusion in this review. Sixteen studies assessed the effectiveness of interventions for tobacco-use cessation in dental clinics and four assessed this in community (school or college) settings. Five studies included only smokeless tobacco users, and the remaining studies included either smoked tobacco users only, or a combination of both smoked and smokeless tobacco users. All studies employed behavioural interventions, with four offering nicotine treatment (nicotine replacement therapy (NRT) or e-cigarettes) as part of the intervention. We judged three studies to be at low risk of bias, one to be at unclear risk of bias, and the remaining 16 studies to be at high risk of bias.\nCompared with usual care, brief advice, very brief advice, or less active treatment, we found very low-certainty evidence of benefit from behavioural support provided by dental professionals, comprising either one session (RR 1.86, 95% CI 1.01 to 3.41; I2 = 66%; four studies, n = 6328), or more than one session (RR 1.90, 95% CI 1.17 to 3.11; I2 = 61%; seven studies, n = 2639), on abstinence from tobacco use at least six months from baseline. We found moderate-certainty evidence of benefit from behavioural interventions provided by dental professionals combined with the provision of NRT or e-cigarettes, compared with no intervention, usual care, brief, or very brief advice only (RR 2.76, 95% CI 1.58 to 4.82; I2 = 0%; four studies, n = 1221). We did not detect a benefit from multiple-session behavioural support provided by dental professionals delivered in a high school or college, instead of a dental setting (RR 1.51, 95% CI 0.86 to 2.65; I2 = 83%; three studies, n = 1020; very low-certainty evidence). 
Only one study reported adverse events or oral health outcomes, making it difficult to draw any conclusions.\nAuthors' conclusions\nThere is very low-certainty evidence that quit rates increase when dental professionals offer behavioural support to promote tobacco cessation. There is moderate-certainty evidence that tobacco abstinence rates increase in cigarette smokers if dental professionals offer behavioural support combined with pharmacotherapy. Further evidence is required to be certain of the size of the benefit and whether adding pharmacological interventions is more effective than behavioural support alone. Future studies should use biochemical validation of abstinence so as to preclude the risk of detection bias. There is insufficient evidence on whether these interventions lead to adverse effects, but no reasons to suspect that these effects would be specific to interventions delivered by dental professionals. There was insufficient evidence that interventions affected oral health.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that tested whether advice and support from dental professionals helped people to stop smoking, chewing or sniffing tobacco.\nWe looked for randomised controlled studies, in which the people taking part were assigned to different treatment groups using chance to decide which people received support to stop using tobacco. This type of study usually gives reliable evidence about the effects of a treatment."
}
] |
query-laysum
|
8067
|
[
{
"role": "user",
"content": "Abstract: Background\nHelminth infections, such as soil-transmitted helminths, schistosomiasis, onchocerciasis, and lymphatic filariasis, are prevalent in many countries where human immunodeficiency virus (HIV) infection is also common. There is some evidence from observational studies that HIV and helminth co-infection may be associated with higher viral load and lower CD4+ cell counts. Treatment of helminth infections with antihelminthics (deworming drugs) may have benefits for people living with HIV beyond simply clearance of worm infections.\nThis is an update of a Cochrane Review published in 2009 and we have expanded it to include outcomes of anaemia and adverse events.\nObjectives\nTo evaluate the effects of deworming drugs (antihelminthic therapy) on markers of HIV disease progression, anaemia, and adverse events in children and adults.\nSearch methods\nIn this review update, we searched online for published and unpublished studies in the Cochrane Library, MEDLINE, EMBASE, CENTRAL, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICRTP), ClinicalTrials.gov, and the WHO Global Health Library up to 29 September 2015. We also searched databases listing conference abstracts, scanned reference lists of articles, and contacted the authors of included studies.\nSelection criteria\nWe searched for randomized controlled trials (RCTs) that compared antihelminthic drugs with placebo or no intervention in HIV-positive people.\nData collection and analysis\nTwo review authors independently extracted data and assessed trials for eligibility and risk of bias. The primary outcomes were changes in HIV viral load and CD4+ cell count, and secondary outcomes were anaemia, iron deficiency, adverse events, and mortality events. We compared the effects of deworming using mean differences, risk ratios (RR), and 95% confidence intervals (CIs). We assessed the quality of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nEight trials met the inclusion criteria of this review, enrolling a total of 1612 participants. Three trials evaluated the effect of providing antihelminthics to all adults with HIV without knowledge of their helminth infection status, and five trials evaluated the effects of providing deworming drugs to HIV-positive individuals with confirmed helminth infections. 
Seven trials were conducted in sub-Saharan Africa and one in Thailand.\nAntihelminthics for people with unknown helminth infection status\nProviding antihelminthics (albendazole and praziquantel together or separately) to HIV-positive adults with unknown helminth infection status may have a small suppressive effect on mean viral load at six weeks but the 95% CI includes the possibility of no effect (difference in mean change −0.14 log10 viral RNA/mL, 95% CI −0.35 to 0.07, P = 0.19; one trial, 166 participants, low quality evidence).\nRepeated dosing with deworming drugs over two years (albendazole every three months plus annual praziquantel), probably has little or no effect on mean viral load (difference in mean change 0.01 log10 viral RNA, 95% CI: −0.03 to −0.05; one trial, 917 participants, moderate quality evidence), and little or no effect on mean CD4+ count (difference in mean change 2.60 CD4+ cells/µL, 95% CI −10.15 to 15.35; P = 0.7; one trial, 917 participants, low quality evidence).\nAntihelminthics for people with confirmed helminth infections\nTreating confirmed helminth infections in HIV-positive adults may have a small suppressive effect on mean viral load at six to 12 weeks following deworming (difference in mean change −0.13 log10 viral RNA, 95% CI −0.26 to −0.00; P = 0.04; four trials, 445 participants, low quality evidence). However, this finding is strongly influenced by a single study of praziquantel treatment for schistosomiasis. There may also be a small favourable effect on mean CD4+ cell count at 12 weeks after deworming in HIV-positive populations with confirmed helminth infections (difference in mean change 37.86 CD4+ cells/µL, 95% CI 7.36 to 68.35; P = 0.01; three trials, 358 participants, low quality evidence).\nAdverse events and mortality\nThere is no indication that antihelminthic drugs impart additional risks in HIV-positive populations. However, adverse events were not well reported (very low quality evidence) and trials were underpowered to evaluate effects on mortality (low quality evidence).\nAuthors' conclusions\nThere is low quality evidence that treating confirmed helminth infections in HIV-positive adults may have small, short-term favourable effects on markers of HIV disease progression. Further studies are required to confirm this finding. Current evidence suggests that deworming with antihelminthics is not harmful, and this is reassuring for the routine treatment of confirmed or suspected helminth infections in people living with HIV in co-endemic areas.\nFurther long-term studies are required to make confident conclusions regarding the impact of presumptively deworming all HIV-positive individuals irrespective of helminth infection status, as the only long-term trial to date did not demonstrate an effect.\n\nGiven the provided abstract, please respond to the following query: \"What the evidence in this review suggests\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Treating all HIV-positive adults with deworming drugs without knowledge of their helminth infection status may have a small suppressive effect on viral load at six weeks (low quality evidence), but repeated dosing over two years appears to have little or no effect on either viral load (moderate quality evidence) or CD4+ cell count (low quality evidence). These findings are based on two included studies.\nProviding deworming drugs to HIV-positive adults with diagnosed helminth infection may result in a small suppressive effect on mean viral load at six to 12 weeks (low quality evidence) and a small favourable effect on mean CD4+ cell count at 12 weeks (low quality evidence). However, these findings are based on small studies and are strongly influenced by a single study of praziquantel for schistosomiasis. Further studies from different settings and populations are needed for confirmation.\nAdverse events were not well reported (very low quality evidence), and trials were too small to evaluate the effects on mortality (low quality evidence). However there is no suggestion that deworming drugs are harmful for HIV-positive individuals."
}
] |
query-laysum
|
8068
|
[
{
"role": "user",
"content": "Abstract: Background\nMany surgical approaches are available to treat varicose veins secondary to chronic venous insufficiency. One of the least invasive techniques is the ambulatory conservative hemodynamic correction of venous insufficiency method (in French 'cure conservatrice et hémodynamique de l'insuffisance veineuse en ambulatoire' (CHIVA)), an approach based on venous hemodynamics with deliberate preservation of the superficial venous system. This is the second update of the review first published in 2013.\nObjectives\nTo compare the efficacy and safety of the CHIVA method with alternative therapeutic techniques to treat varicose veins.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, AMED, and the World Health Organisation International Clinical Trials Registry Platform and ClinicalTrials.gov trials registries to 19 October 2020. We also searched PUBMED to 19 October 2020 and checked the references of relevant articles to identify additional studies.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared CHIVA to other therapeutic techniques to treat varicose veins.\nData collection and analysis\nTwo review authors independently assessed and selected studies, extracted data, and performed quantitative analysis from the selected papers. A third author solved any disagreements. We assessed the risk of bias in included trials with the Cochrane risk of bias tool. We calculated the risk ratio (RR), mean difference (MD), number of people needed to treat for an additional beneficial outcome (NNTB), and the number of people needed to treat for an additional harmful outcome (NNTH), with 95% confidence intervals (CI). We evaluated the certainty of the evidence using GRADE. The main outcomes of interest were the recurrence of varicose veins and side effects.\nMain results\nFor this update, we identified two new additional studies. In total, we included six RCTs with 1160 participants (62% women) and collected from them eight comparisons. Three RCTs compared CHIVA with vein stripping. One RCT compared CHIVA with compression dressings in people with venous ulcers. The new studies included three comparisons, one compared CHIVA with vein stripping and radiofrequency ablation (RFA), and one compared CHIVA with vein stripping and endovenous laser therapy. We judged the certainty of the evidence for our outcomes as low to very low due to inconsistency, imprecision caused by the low number of events and risk of bias. The overall risk of bias across studies was high because neither participants nor personnel were blinded to the interventions. Two studies attempted to blind outcome assessors, but the characteristics of the surgery limited concealment.\nFive studies reported the outcome clinical recurrence of varicose veins with a follow-up of 18 months to 10 years. CHIVA may make little or no difference to the recurrence of varicose veins in the lower limb compared to stripping (RR 0.74, 95% CI 0.46 to 1.20; 5 studies, 966 participants; low-certainty evidence). We are uncertain whether CHIVA reduced recurrence compared to compression dressing (RR 0.23, 95% CI 0.06 to 0.96; 1 study, 47 participants; very low-certainty evidence). 
CHIVA may make little or no difference to clinical recurrence compared to RFA (RR 2.02, 95% CI 0.74 to 5.53; 1 study, 146 participants; low-certainty evidence) and endovenous laser (RR 0.20, 95% CI 0.01 to 4.06; 1 study, 100 participants; low-certainty evidence).\nWe found no clear difference between CHIVA and stripping for the side effects of limb infection (RR 0.83, 95% CI 0.33 to 2.10; 3 studies, 746 participants; low-certainty evidence), and superficial vein thrombosis (RR 1.05, 95% CI 0.51 to 2.17; 4 studies, 846 participants; low-certainty evidence). CHIVA may reduce slightly nerve injury (RR 0.14, 95% CI 0.02 to 0.98; NNTH 9, 95% CI 5 to 100; 4 studies, 846 participants; low-certainty evidence) and hematoma compared to stripping (RR 0.59, 95% CI 0.37 to 0.97; NNTH 11, 95% CI 5 to 100; 2 studies, 245 participants; low-certainty evidence). For bruising, one study found no differences between groups while another study found reduced rates of bruising in the CHIVA group compared to the stripping group. Compared to RFA, CHIVA may make little or no difference to rates of limb infection, superficial vein thrombosis, nerve injury or hematoma, but may cause more bruising (RR 1.15, 95% CI 1.04 to 1.28; NNTH 8, CI 95% 5 to 25; 1 study, 144 participants; low-certainty evidence). Compared to endovenous laser, CHIVA may make little or no difference to rates of limb infection, superficial vein thrombosis, nerve injury or hematoma. The study comparing CHIVA versus compression did not report side effects.\nAuthors' conclusions\nThere may be little or no difference in the recurrence of varicose veins when comparing CHIVA to stripping (low-certainty evidence), but CHIVA may slightly reduce nerve injury and hematoma in the lower limb (low-certainty evidence). Very limited evidence means we are uncertain of any differences in recurrence when comparing CHIVA with compression (very low-certainty evidence). CHIVA may make little or no difference to recurrence compared to RFA (low-certainty evidence), but may result in more bruising (low-certainty evidence). CHIVA may make little or no difference to recurrence and side effects compared to endovenous laser therapy (low-certainty evidence). However, we based these conclusions on a small number of trials with a high risk of bias as the effects of surgery could not be concealed, and the results were imprecise due to the low number of events. New RCTs are needed to confirm these results and to compare CHIVA with approaches other than open surgery.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic venous insufficiency (CVI) is a disorder in which veins fail to pump blood back to the heart adequately. As a result, it can cause varicose veins, skin ulcers, and superficial or deep vein thrombosis in the legs, and symptoms such as pain, fatigue, heaviness, warmth and swelling of the leg. The ambulatory conservative hemodynamic correction of venous insufficiency (CHIVA) method is a minimally invasive surgical technique to treat varicose veins. The CHIVA treatment aims to eliminate venous-venous shunts (abnormal connections between veins) by disconnecting the escape points, preserving the saphenous vein (a vein from the top of the foot to the upper thigh area) and normal venous drainage of the limb's superficial tissues."
}
] |
query-laysum
|
8069
|
[
{
"role": "user",
"content": "Abstract: Background\nMcArdle disease (Glycogen Storage Disease type V) is caused by an absence of muscle phosphorylase leading to exercise intolerance, myoglobinuria rhabdomyolysis and acute renal failure. This is an update of a review first published in 2004.\nObjectives\nTo review systematically the evidence from randomised controlled trials (RCTs) of pharmacological or nutritional treatments for improving exercise performance and quality of life in McArdle disease.\nSearch methods\nWe searched the Cochrane Neuromuscular Disease Group Specialized Register, CENTRAL, MEDLINE and EMBASE on 11 August 2014.\nSelection criteria\nWe included RCTs (including cross-over studies) and quasi-RCTs. We included unblinded open trials and individual patient studies in the discussion. Interventions included any pharmacological agent or nutritional supplement. Primary outcome measures included any objective assessment of exercise endurance (for example aerobic capacity (VO2) max, walking speed, muscle force or power and fatigability). Secondary outcome measures included metabolic changes (such as reduced plasma creatine kinase and a reduction in the frequency of myoglobinuria), subjective measures (including quality of life scores and indices of disability) and serious adverse events.\nData collection and analysis\nThree review authors checked the titles and abstracts identified by the search and reviewed the manuscripts. Two review authors independently assessed the risk of bias of relevant studies, with comments from a third author. Two authors extracted data onto a specially designed form.\nMain results\nWe identified 31 studies, and 13 fulfilled the criteria for inclusion. We described trials that were not eligible for the review in the Discussion. The included studies involved a total of 85 participants, but the number in each individual trial was small; the largest treatment trial included 19 participants and the smallest study included only one participant. There was no benefit with: D-ribose, glucagon, verapamil, vitamin B6, branched chain amino acids, dantrolene sodium, and high-dose creatine. Minimal subjective benefit was found with low dose creatine and ramipril only for patients with a polymorphism known as the D/D angiotensin converting enzyme (ACE) phenotype. A carbohydrate-rich diet resulted in better exercise performance compared with a protein-rich diet. Two studies of oral sucrose given at different times and in different amounts before exercise showed an improvement in exercise performance. Four studies reported adverse effects. Oral ribose caused diarrhoea and symptoms suggestive of hypoglycaemia including light-headedness and hunger. In one study, branched chain amino acids caused a deterioration of functional outcomes. Dantrolene was reported to cause a number of adverse effects including tiredness, somnolence, dizziness and muscle weakness. Low dose creatine (60 mg/kg/day) did not cause side-effects but high-dose creatine (150 mg/kg/day) worsened the symptoms of myalgia.\nAuthors' conclusions\nAlthough there was low quality evidence of improvement in some parameters with creatine, oral sucrose, ramipril and a carbohydrate-rich diet, none was sufficiently strong to indicate significant clinical benefit.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "McArdle disease (also known as glycogen storage disease type V) is a disorder affecting muscle metabolism. The condition is caused by the lack of an enzyme called muscle phosphorylase. This results in an inability to break down glycogen 'fuel' stores. McArdle disease leads to pain and fatigue with strenuous exercise. Sometimes severe muscle damage develops and occasionally this results in acute reversible kidney failure."
}
] |
query-laysum
|
8070
|
[
{
"role": "user",
"content": "Abstract: Background\nDifferent forms of vaccines have been developed to prevent the SARS-CoV-2 virus and subsequent COVID-19 disease. Several are in widespread use globally.\nObjectives\nTo assess the efficacy and safety of COVID-19 vaccines (as a full primary vaccination series or a booster dose) against SARS-CoV-2.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register and the COVID-19 L·OVE platform (last search date 5 November 2021). We also searched the WHO International Clinical Trials Registry Platform, regulatory agency websites, and Retraction Watch.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing COVID-19 vaccines to placebo, no vaccine, other active vaccines, or other vaccine schedules.\nData collection and analysis\nWe used standard Cochrane methods. We used GRADE to assess the certainty of evidence for all except immunogenicity outcomes.\nWe synthesized data for each vaccine separately and presented summary effect estimates with 95% confidence intervals (CIs).\nMain results\nWe included and analyzed 41 RCTs assessing 12 different vaccines, including homologous and heterologous vaccine schedules and the effect of booster doses. Thirty-two RCTs were multicentre and five were multinational. The sample sizes of RCTs were 60 to 44,325 participants. Participants were aged: 18 years or older in 36 RCTs; 12 years or older in one RCT; 12 to 17 years in two RCTs; and three to 17 years in two RCTs. Twenty-nine RCTs provided results for individuals aged over 60 years, and three RCTs included immunocompromized patients. No trials included pregnant women. Sixteen RCTs had two-month follow-up or less, 20 RCTs had two to six months, and five RCTs had greater than six to 12 months or less. Eighteen reports were based on preplanned interim analyses.\nOverall risk of bias was low for all outcomes in eight RCTs, while 33 had concerns for at least one outcome.\nWe identified 343 registered RCTs with results not yet available.\nThis abstract reports results for the critical outcomes of confirmed symptomatic COVID-19, severe and critical COVID-19, and serious adverse events only for the 10 WHO-approved vaccines. For remaining outcomes and vaccines, see main text. 
The evidence for mortality was generally sparse and of low or very low certainty for all WHO-approved vaccines, except AD26.COV2.S (Janssen), which probably reduces the risk of all-cause mortality (risk ratio (RR) 0.25, 95% CI 0.09 to 0.67; 1 RCT, 43,783 participants; high-certainty evidence).\nConfirmed symptomatic COVID-19\nHigh-certainty evidence found that BNT162b2 (BioNTech/Fosun Pharma/Pfizer), mRNA-1273 (ModernaTx), ChAdOx1 (Oxford/AstraZeneca), Ad26.COV2.S, BBIBP-CorV (Sinopharm-Beijing), and BBV152 (Bharat Biotech) reduce the incidence of symptomatic COVID-19 compared to placebo (vaccine efficacy (VE): BNT162b2: 97.84%, 95% CI 44.25% to 99.92%; 2 RCTs, 44,077 participants; mRNA-1273: 93.20%, 95% CI 91.06% to 94.83%; 2 RCTs, 31,632 participants; ChAdOx1: 70.23%, 95% CI 62.10% to 76.62%; 2 RCTs, 43,390 participants; Ad26.COV2.S: 66.90%, 95% CI 59.10% to 73.40%; 1 RCT, 39,058 participants; BBIBP-CorV: 78.10%, 95% CI 64.80% to 86.30%; 1 RCT, 25,463 participants; BBV152: 77.80%, 95% CI 65.20% to 86.40%; 1 RCT, 16,973 participants).\nModerate-certainty evidence found that NVX-CoV2373 (Novavax) probably reduces the incidence of symptomatic COVID-19 compared to placebo (VE 82.91%, 95% CI 50.49% to 94.10%; 3 RCTs, 42,175 participants).\nThere is low-certainty evidence for CoronaVac (Sinovac) for this outcome (VE 69.81%, 95% CI 12.27% to 89.61%; 2 RCTs, 19,852 participants).\nSevere or critical COVID-19\nHigh-certainty evidence found that BNT162b2, mRNA-1273, Ad26.COV2.S, and BBV152 result in a large reduction in incidence of severe or critical disease due to COVID-19 compared to placebo (VE: BNT162b2: 95.70%, 95% CI 73.90% to 99.90%; 1 RCT, 46,077 participants; mRNA-1273: 98.20%, 95% CI 92.80% to 99.60%; 1 RCT, 28,451 participants; AD26.COV2.S: 76.30%, 95% CI 57.90% to 87.50%; 1 RCT, 39,058 participants; BBV152: 93.40%, 95% CI 57.10% to 99.80%; 1 RCT, 16,976 participants).\nModerate-certainty evidence found that NVX-CoV2373 probably reduces the incidence of severe or critical COVID-19 (VE 100.00%, 95% CI 86.99% to 100.00%; 1 RCT, 25,452 participants).\nTwo trials reported high efficacy of CoronaVac for severe or critical disease with wide CIs, but these results could not be pooled.\nSerious adverse events (SAEs)\nmRNA-1273, ChAdOx1 (Oxford-AstraZeneca)/SII-ChAdOx1 (Serum Institute of India), Ad26.COV2.S, and BBV152 probably result in little or no difference in SAEs compared to placebo (RR: mRNA-1273: 0.92, 95% CI 0.78 to 1.08; 2 RCTs, 34,072 participants; ChAdOx1/SII-ChAdOx1: 0.88, 95% CI 0.72 to 1.07; 7 RCTs, 58,182 participants; Ad26.COV2.S: 0.92, 95% CI 0.69 to 1.22; 1 RCT, 43,783 participants; BBV152: 0.65, 95% CI 0.43 to 0.97; 1 RCT, 25,928 participants). 
In each of these, the likely absolute difference in effects was fewer than 5/1000 participants.\nEvidence for SAEs is uncertain for BNT162b2, CoronaVac, BBIBP-CorV, and NVX-CoV2373 compared to placebo (RR: BNT162b2: 1.30, 95% CI 0.55 to 3.07; 2 RCTs, 46,107 participants; CoronaVac: 0.97, 95% CI 0.62 to 1.51; 4 RCTs, 23,139 participants; BBIBP-CorV: 0.76, 95% CI 0.54 to 1.06; 1 RCT, 26,924 participants; NVX-CoV2373: 0.92, 95% CI 0.74 to 1.14; 4 RCTs, 38,802 participants).\nFor the evaluation of heterologous schedules, booster doses, and efficacy against variants of concern, see main text of review.\nAuthors' conclusions\nCompared to placebo, most vaccines reduce, or likely reduce, the proportion of participants with confirmed symptomatic COVID-19, and for some, there is high-certainty evidence that they reduce severe or critical disease. There is probably little or no difference between most vaccines and placebo for serious adverse events. Over 300 registered RCTs are evaluating the efficacy of COVID-19 vaccines, and this review is updated regularly on the COVID-NMA platform (covid-nma.com).\nImplications for practice\nDue to the trial exclusions, these results cannot be generalized to pregnant women, individuals with a history of SARS-CoV-2 infection, or immunocompromized people. Most trials had a short follow-up and were conducted before the emergence of variants of concern.\nImplications for research\nFuture research should evaluate the long-term effect of vaccines, compare different vaccines and vaccine schedules, assess vaccine efficacy and safety in specific populations, and include outcomes such as preventing long COVID-19. Ongoing evaluation of vaccine efficacy and effectiveness against emerging variants of concern is also vital.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out how well each vaccine works in reducing SARS-CoV-2 infection, COVID-19 disease with symptoms, severe COVID-19 disease, and total number of deaths (including any death, not only those related to COVID-19).\nWe wanted to find out about serious adverse events that might require hospitalization, be life-threatening, or both; systemic reactogenicity events (immediate short-term reactions to vaccines mainly due to immunological responses; e.g. fever, headache, body aches, fatigue); and any adverse events (which include non-serious adverse events)."
}
] |
query-laysum
|
8071
|
[
{
"role": "user",
"content": "Abstract: Background\nInduction of labour is carried out for a variety of indications and using a range of methods. For women at low risk of pregnancy complications, some methods of induction of labour or cervical ripening may be suitable for use in outpatient settings.\nObjectives\nTo examine pharmacological and mechanical interventions to induce labour or ripen the cervix in outpatient settings in terms of effectiveness, maternal satisfaction, healthcare costs and, where information is available, safety.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group’s Trials Register (30 November 2016) and reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials examining outpatient cervical ripening or induction of labour with pharmacological agents or mechanical methods. Cluster trials were eligible for inclusion.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We assessed evidence using the GRADE approach.\nMain results\nThis updated review included 34 studies of 11 different methods for labour induction with 5003 randomised women, where women received treatment at home or were sent home after initial treatment and monitoring in hospital.\nStudies examined vaginal and intracervical prostaglandin E₂ (PGE₂), vaginal and oral misoprostol, isosorbide mononitrate, mifepristone, oestrogens, amniotomy and acupuncture, compared with placebo, no treatment, or routine care. Trials generally recruited healthy women with a term pregnancy. The risk of bias was mostly low or unclear, however, in 16 trials blinding was unclear or not attempted. In general, limited data were available on the review's main and additional outcomes. Evidence was graded low to moderate quality.\n1. Vaginal PGE₂ versus expectant management or placebo (5 studies)\nFewer women in the vaginal PGE₂ group needed additional induction agents to induce labour, however, confidence intervals were wide (risk ratio (RR) 0.52, 95% confidence interval (CI) 0.27 to 0.99; 150 women; 2 trials). There were no clear differences between groups in uterine hyperstimulation (with or without fetal heart rate (FHR) changes) (RR 3.76, 95% CI 0.64 to 22.24; 244 women; 4 studies; low-quality evidence), caesarean section (RR 0.80, 95% CI 0.49 to 1.31; 288 women; 4 studies; low-quality evidence), or admission to a neonatal intensive care unit (NICU) (RR 0.32, 95% CI 0.10 to 1.03; 230 infants; 3 studies; low-quality evidence).\nThere was no information on vaginal birth within 24, 48 or 72 hours, length of hospital stay, use of emergency services or maternal or caregiver satisfaction. Serious maternal and neonatal morbidity or deaths were not reported.\n2. Intracervical PGE₂ versus expectant management or placebo (7 studies)\nThere was no clear difference between women receiving intracervical PGE₂ and no treatment or placebo in terms of need for additional induction agents (RR 0.98, 95% CI 0.74 to 1.32; 445 women; 3 studies), vaginal birth not achieved within 48 to 72 hours (RR 0.83, 95% CI 0.68 to 1.02; 43 women; 1 study; low-quality evidence), uterine hyperstimulation (with FHR changes) (RR 2.66, 95% CI 0.63 to 11.25; 488 women; 4 studies; low-quality evidence), caesarean section (RR 0.90, 95% CI 0.72 to 1.12; 674 women; 7 studies; moderate-quality evidence), or babies admitted to NICU (RR 1.61, 95% CI 0.43 to 6.05; 215 infants; 3 studies; low-quality evidence). 
There were no uterine ruptures in either the PGE₂ group or placebo group.\nThere was no information on vaginal birth not achieved within 24 hours, length of hospital stay, use of emergency services, mother or caregiver satisfaction, or serious morbidity or neonatal morbidity or perinatal death.\n3. Vaginal misoprostol versus placebo (4 studies)\nOne small study reported on the rate of perinatal death with no clear differences between groups; there were no deaths in the treatment group compared with one stillbirth (reason not reported) in the control group (RR 0.34, 95% CI 0.01 to 8.14; 77 infants; 1 study; low-quality evidence).\nThere was no clear difference between groups in rates of uterine hyperstimulation with FHR changes (RR 1.97, 95% CI 0.43 to 9.00; 265 women; 3 studies; low-quality evidence), caesarean section (RR 0.94, 95% CI 0.61 to 1.46; 325 women; 4 studies; low-quality evidence), and babies admitted to NICU (RR 0.89, 95% CI 0.54 to 1.47; 325 infants; 4 studies; low-quality evidence).\nThere was no information on vaginal birth not achieved within 24, 48 or 72 hours, additional induction agents required, length of hospital stay, use of emergency services, mother or caregiver satisfaction, serious maternal, and other neonatal, morbidity or death.\nNo substantive differences were found for other comparisons. One small study found that women who received oral misoprostol were more likely to give birth within 24 hours (RR 0.65, 95% CI 0.48 to 0.86; 87 women; 1 study) and were less likely to require additional induction agents (RR 0.60, 95% CI 0.37 to 0.97; 127 women; 2 studies). Women who received mifepristone were also less likely to require additional induction agents (average RR 0.59, 95% CI 0.37 to 0.95; 311 women; 4 studies; I² = 74%); however, this result should be interpreted with caution due to high heterogeneity. One trial each of acupuncture and outpatient amniotomy were included, but few review outcomes were reported.\nAuthors' conclusions\nInduction of labour in outpatient settings appears feasible and important adverse events seem rare, however, in general there is insufficient evidence to detect differences. There was no strong evidence that agents used to induce labour in outpatient settings had an impact (positive or negative) on maternal or neonatal health. There was some evidence that compared to placebo or no treatment, induction agents administered on an outpatient basis reduced the need for further interventions to induce labour, and shortened the interval from intervention to birth.\nWe do not have sufficient evidence to know which induction methods are preferred by women, the interventions that are most effective and safe to use in outpatient settings, or their cost effectiveness. Further studies where various women-friendly outpatient protocols are compared head-to-head are required. As part of such work, women should be consulted on what sort of management they would prefer.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For healthy, low-risk pregnant women at term, outpatient induction and enabling women to return home to wait for labour to start appears to be feasible. Outpatient induction treatments may reduce both need for further drugs and time from treatment to birth. It does not appear to increase the likelihood of needing other interventions in labour. However, there is insufficient evidence to say definitively whether outpatient induction is safe. Future research should focus on which methods women prefer, and are most effective and safe."
}
] |
query-laysum
|
8072
|
[
{
"role": "user",
"content": "Abstract: Background\nConversion and dissociative disorders are conditions where people experience unusual neurological symptoms or changes in awareness or identity. However, symptoms and clinical signs cannot be explained by a neurological disease or other medical condition. Instead, a psychological stressor or trauma is often present. The symptoms are real and can cause significant distress or problems with functioning in everyday life for the people experiencing them.\nObjectives\nTo assess the beneficial and harmful effects of psychosocial interventions of conversion and dissociative disorders in adults.\nSearch methods\nWe conducted database searches between 16 July and 16 August 2019. We searched Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and eight other databases, together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nWe included all randomised controlled trials that compared psychosocial interventions for conversion and dissociative disorders with standard care, wait list or other interventions (pharmaceutical, somatic or psychosocial).\nData collection and analysis\nWe selected, quality assessed and extracted data from the identified studies. Two review authors independently performed all tasks. We used standard Cochrane methodology. For continuous data, we calculated mean differences (MD) and standardised mean differences (SMD) with 95% confidence interval (CI). For dichotomous outcomes, we calculated risk ratio (RR) with 95% CI. We assessed and downgraded the evidence according to the GRADE system for risk of bias, imprecision, indirectness, inconsistency and publication bias.\nMain results\nWe included 17 studies (16 with parallel-group designs and one with a cross-over design), with 894 participants aged 18 to 80 years (female:male ratio 3:1).\nThe data were separated into 12 comparisons based on the different interventions and comparators. Studies were pooled into the same comparison when identical interventions and comparisons were evaluated. The certainty of the evidence was downgraded as a consequence of potential risk of bias, as many of the studies had unclear or inadequate allocation concealment. 
Further downgrading was performed due to imprecision, few participants and inconsistency.\nThere were 12 comparisons for the primary outcome of reduction in physical signs.\nInpatient paradoxical intention therapy compared with outpatient diazepam: inpatient paradoxical intention therapy did not reduce conversive symptoms compared with outpatient diazepam at the end of treatment (RR 1.44, 95% CI 0.91 to 2.28; 1 study, 30 participants; P = 0.12; very low-quality evidence).\nInpatient treatment programme plus hypnosis compared with inpatient treatment programme: inpatient treatment programme plus hypnosis did not reduce severity of impairment compared with inpatient treatment programme at the end of treatment (MD –0.49 (negative value better), 95% CI –1.28 to 0.30; 1 study, 45 participants; P = 0.23; very low-quality evidence).\nOutpatient hypnosis compared with wait list: outpatient hypnosis might reduce severity of impairment compared with wait list at the end of treatment (MD 2.10 (higher value better), 95% CI 1.34 to 2.86; 1 study, 49 participants; P < 0.00001; low-quality evidence).\nBehavioural therapy plus routine clinical care compared with routine clinical care: behavioural therapy plus routine clinical care might reduce the number of weekly seizures compared with routine clinical care alone at the end of treatment (MD –21.40 (negative value better), 95% CI –27.88 to –14.92; 1 study, 18 participants; P < 0.00001; very low-quality evidence).\nCognitive behavioural therapy (CBT) compared with standard medical care: CBT did not reduce monthly seizure frequency compared to standard medical care at end of treatment (RR 1.56, 95% CI 0.39 to 6.19; 1 study, 16 participants; P = 0.53; very low-quality evidence). CBT did not reduce physical signs compared to standard medical care at the end of treatment (MD –4.75 (negative value better), 95% CI –18.73 to 9.23; 1 study, 61 participants; P = 0.51; low-quality evidence). 
CBT did not improve seizure freedom compared to standard medical care at end of treatment (RR 2.33, 95% CI 0.30 to 17.88; 1 trial, 16 participants; P = 0.41; very low-quality evidence).\nPsychoeducational follow-up programmes compared with treatment as usual (TAU): no study measured reduction in physical signs at end of treatment.\nSpecialised CBT-based physiotherapy inpatient programme compared with wait list: no study measured reduction in physical signs at end of treatment.\nSpecialised CBT-based physiotherapy outpatient intervention compared with TAU: no study measured reduction in physical signs at end of treatment.\nBrief psychotherapeutic intervention (psychodynamic interpersonal treatment approach) compared with standard care: brief psychotherapeutic interventions did not reduce conversion symptoms compared to standard care at end of treatment (RR 0.12, 95% CI 0.01 to 2.00; 1 study, 19 participants; P = 0.14; very low-quality evidence).\nCBT plus adjunctive physical activity (APA) compared with CBT alone: CBT plus APA did not reduce overall physical impacts compared to CBT alone at end of treatment (MD 5.60 (negative value better), 95% CI –15.48 to 26.68; 1 study, 21 participants; P = 0.60; very low-quality evidence).\nHypnosis compared to diazepam: hypnosis did not reduce symptoms compared to diazepam at end of treatment (RR 0.69, 95% CI 0.39 to 1.24; 1 study, 40 participants; P = 0.22; very low-quality evidence).\nOutpatient motivational interviewing (MI) and mindfulness-based psychotherapy compared with psychotherapy alone: psychotherapy preceded by MI might decrease seizure frequency compared with psychotherapy alone at end of treatment (MD 41.40 (negative value better), 95% CI 4.92 to 77.88; 1 study, 54 participants; P = 0.03; very low-quality evidence).\nThe effect on the secondary outcomes was reported in 16/17 studies. None of the studies reported results on adverse effects. In the studies reporting on level of functioning and quality of life at end of treatment, the effects ranged from small to no effect.\nAuthors' conclusions\nThe results of the meta-analysis and reporting of single studies suggest there is a lack of evidence regarding the effects of any psychosocial intervention on conversion and dissociative disorders in adults. It is not possible to draw any conclusions about potential benefits or harms from the included studies.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most of the included studies had methodological flaws and the quality of evidence used to assess the effectiveness of the different treatments was judged as low or very low. Due to this low-quality evidence, we cannot say how reliable the results are."
}
] |
query-laysum
|
8073
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane Review published in 2018, Issue 5.\nEpilepsy affects over 70 million people worldwide, and nearly a quarter of patients with seizures have drug-resistant epilepsy. People with drug-resistant epilepsy have increased risks of premature death, injuries, psychosocial dysfunction, and a reduced quality of life.\nObjectives\nTo assess the efficacy and tolerability of clonazepam when used as an add-on therapy for adults and children with drug-resistant focal onset or generalised onset epileptic seizures, when compared with placebo or another antiepileptic agent.\nSearch methods\nFor the latest update we searched the following databases on 4 June 2019: Cochrane Register of Studies (CRS Web), MEDLINE (Ovid) 1946 to 3 June, 2019. The Cochrane Register of Studies (CRS Web) includes the Cochrane Epilepsy Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), and randomised or quasi-randomised, controlled trials from Embase, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (ICTRP).\nSelection criteria\nDouble-blind randomised controlled studies of add-on clonazepam in people with resistant focal or generalised onset seizures, with a minimum treatment period of eight weeks. The studies could be of parallel or cross-over design.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, extracted relevant data, and assessed trial quality. We contacted study authors for additional information.\nMain results\nWe found no double-blind randomised controlled trials which met the inclusion criteria.\nAuthors' conclusions\nThere is no evidence from double-blind randomised controlled trials for or against the use of clonazepam as an add-on therapy for adults and children with drug-resistant focal or generalised onset epileptic seizures. Since the last version of this review no new studies have been found.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review aimed to evaluate the efficacy and tolerability of clonazepam when used as an add-on therapy for adults and children with drug-resistant focal onset or generalised onset epileptic seizures, when compared with placebo or another antiepileptic agent."
}
] |
query-laysum
|
8074
|
[
{
"role": "user",
"content": "Abstract: Background\nIt is unclear whether blood pressure should be altered actively during the acute phase of stroke. This is an update of a Cochrane review first published in 1997, and previously updated in 2001 and 2008.\nObjectives\nTo assess the clinical effectiveness of altering blood pressure in people with acute stroke, and the effect of different vasoactive drugs on blood pressure in acute stroke.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched in February 2014), the Cochrane Database of Systematic reviews (CDSR) and the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 2), MEDLINE (Ovid) (1966 to May 2014), EMBASE (Ovid) (1974 to May 2014), Science Citation Index (ISI, Web of Science, 1981 to May 2014) and the Stroke Trials Registry (searched May 2014).\nSelection criteria\nRandomised controlled trials of interventions that aimed to alter blood pressure compared with control in participants within one week of acute ischaemic or haemorrhagic stroke.\nData collection and analysis\nTwo review authors independently applied the inclusion criteria, assessed trial quality and extracted data. The review authors cross-checked data and resolved discrepancies by discussion to reach consensus. We obtained published and unpublished data where available.\nMain results\nWe included 26 trials involving 17,011 participants (8497 participants were assigned active therapy and 8514 participants received placebo/control). Not all trials contributed to each outcome. Most data came from trials that had a wide time window for recruitment; four trials gave treatment within six hours and one trial within eight hours. The trials tested alpha-2 adrenergic agonists (A2AA), angiotensin converting enzyme inhibitors (ACEI), angiotensin receptor antagonists (ARA), calcium channel blockers (CCBs), nitric oxide (NO) donors, thiazide-like diuretics, and target-driven blood pressure lowering. One trial tested phenylephrine.\nAt 24 hours after randomisation oral ACEIs reduced systolic blood pressure (SBP, mean difference (MD) -8 mmHg, 95% confidence interval (CI) -17 to 1) and diastolic blood pressure (DBP, MD -3 mmHg, 95% CI -9 to 2), sublingual ACEIs reduced SBP (MD -12.00 mm Hg, 95% CI -26 to 2) and DBP (MD -2, 95%CI -10 to 6), oral ARA reduced SBP (MD -1 mm Hg, 95% CI -3 to 2) and DBP (MD -1 mm Hg, 95% CI -3 to 1), oral beta blockers reduced SBP (MD -14 mm Hg; 95% CI -27 to -1) and DBP (MD -1 mm Hg, 95% CI -9 to 7), intravenous (iv) beta blockers reduced SBP (MD -5 mm Hg, 95% CI -18 to 8) and DBP (-5 mm Hg, 95% CI -13 to 3), oral CCBs reduced SBP (MD -13 mmHg, 95% CI -43 to 17) and DBP (MD -6 mmHg, 95% CI -14 to 2), iv CCBs reduced SBP (MD -32 mmHg, 95% CI -65 to 1) and DBP (MD -13, 95% CI -31 to 6), NO donors reduced SBP (MD -12 mmHg, 95% CI -19 to -5) and DBP (MD -3, 95% CI -4 to -2) while phenylephrine, non-significantly increased SBP (MD 21 mmHg, 95% CI -13 to 55) and DBP (MD 1 mmHg, 95% CI -15 to 16).\nBlood pressure lowering did not reduce death or dependency either by drug class (OR 0.98, 95% CI 0.92 to 1.05), stroke type (OR 0.98, 95% CI 0.92 to 1.05) or time to treatment (OR 0.98, 95% CI 0.92 to 1.05). Treatment within six hours of stroke appeared effective in reducing death or dependency (OR 0.86, 95% CI 0.76 to 0.99) but not death (OR 0.70, 95% CI 0.38 to 1.26) at the end of the trial. 
Although death or dependency did not differ between people who continued pre-stroke antihypertensive treatment versus those who stopped it temporarily (worse outcome with continuing treatment, OR 1.06, 95% CI 0.91 to 1.24), disability scores at the end of the trial were worse in participants randomised to continue treatment (Barthel Index, MD -3.2, 95% CI -5.8 to -0.6).\nAuthors' conclusions\nThere is insufficient evidence that lowering blood pressure during the acute phase of stroke improves functional outcome. It is reasonable to withhold blood pressure-lowering drugs until patients are medically and neurologically stable, and have suitable oral or enteral access, after which drugs can then be reintroduced. In people with acute stroke, CCBs, ACEI, ARA, beta blockers and NO donors each lower blood pressure while phenylephrine probably increases blood pressure. Further trials are needed to identify which people are most likely to benefit from early treatment, in particular whether treatment started very early is beneficial.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review is up-to-date to May 2014. We included 26 trials involving 17,011 participants: 24 trials assessed lowering blood pressure, one trial tested raising blood pressure, and two trials assessed what to do with drugs taken before stroke. All studies took place in hospitals that were used to treating people with stroke. Not all trials contributed information to all outcomes, and we have used data that were available in publications."
}
] |
query-laysum
|
8075
|
[
{
"role": "user",
"content": "Abstract: Background\nDuring intensive care unit (ICU) admission, patients and their carers experience physical and psychological stressors that may result in psychological conditions including anxiety, depression, and post-traumatic stress disorder (PTSD). Improving communication between healthcare professionals, patients, and their carers may alleviate these disorders. Communication may include information or educational interventions, in different formats, aiming to improve knowledge of the prognosis, treatment, or anticipated challenges after ICU discharge.\nObjectives\nTo assess the effects of information or education interventions for improving outcomes in adult ICU patients and their carers.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, and PsycINFO from database inception to 10 April 2017. We searched clinical trials registries and grey literature, and handsearched reference lists of included studies and related reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs), and planned to include quasi-RCTs, comparing information or education interventions presented to participants versus no information or education interventions, or comparing information or education interventions as part of a complex intervention versus a complex intervention without information or education. We included participants who were adult ICU patients, or their carers; these included relatives and non-relatives, including significant representatives of patients.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, assessed risk of bias, and applied GRADE criteria to assess certainty of the evidence.\nMain results\nWe included eight RCTs with 1157 patient participants and 943 carer participants. We found no quasi-RCTs. We identified seven studies that await classification, and three ongoing studies.\nThree studies designed an intervention targeted at patients, four at carers, and one at both patients and carers. Studies included varied information: standardised or tailored, presented once or several times, and that included verbal or written information, audio recordings, multimedia information, and interactive information packs. Five studies reported robust methods of randomisation and allocation concealment. We noted high attrition rates in five studies. It was not feasible to blind participants, and we rated all studies as at high risk of performance bias, and at unclear risk of detection bias because most outcomes required self reporting.\nWe attempted to pool data statistically, however this was not always possible due to high levels of heterogeneity. We calculated mean differences (MDs) using data reported from individual study authors where possible, and narratively synthesised the results. We reported the following two comparisons.\nInformation or education intervention versus no information or education intervention (4 studies)\nFor patient anxiety, we did not pool data from three studies (332 participants) owing to unexplained substantial statistical heterogeneity and possible clinical or methodological differences between studies. One study reported less anxiety when an intervention was used (MD -3.20, 95% confidence interval (CI) -3.38 to -3.02), and two studies reported little or no difference between groups (MD -0.40, 95% CI -4.75 to 3.95; MD -1.00, 95% CI -2.94 to 0.94). 
Similarly, for patient depression, we did not pool data from two studies (160 patient participants). These studies reported less depression when an information or education intervention was used (MD -2.90, 95% CI -4.00 to -1.80; MD -1.27, 95% CI -1.47 to -1.07). However, it is uncertain whether information or education interventions reduce patient anxiety or depression due to very low-certainty evidence.\nIt is uncertain whether information or education interventions improve health-related quality of life due to very low-certainty evidence from one study reporting little or no difference between intervention groups (MD -1.30, 95% CI -4.99 to 2.39; 143 patient participants). No study reported adverse effects, knowledge acquisition, PTSD severity, or patient or carer satisfaction.\nWe used the GRADE approach and downgraded certainty of the evidence owing to study limitations, inconsistencies between results, and limited data from few small studies.\nInformation or education intervention as part of a complex intervention versus a complex intervention without information or education (4 studies)\nOne study (three comparison groups; 38 participants) reported little or no difference between groups in patient anxiety (tailored information pack versus control: MD 0.09, 95% CI -3.29 to 3.47; standardised general ICU information versus control: MD -0.25, 95% CI -4.34 to 3.84), and little or no difference in patient depression (tailored information pack versus control: MD -1.26, 95% CI -4.48 to 1.96; standardised general ICU information versus control: MD -1.47, 95% CI -6.37 to 3.43). It is uncertain whether information or education interventions as part of a complex intervention reduce patient anxiety and depression due to very low-certainty evidence.\nOne study (175 carer participants) reported fewer carer participants with poor comprehension among those given information (risk ratio 0.28, 95% CI 0.15 to 0.53), but again this finding is uncertain due to very low-certainty evidence.\nTwo studies (487 carer participants) reported little or no difference in carer satisfaction; it is uncertain whether information or education interventions as part of a complex intervention increase carer satisfaction due to very low-certainty evidence. Adverse effects were reported in only one study: one participant withdrew because of deterioration in mental health on completion of anxiety and depression questionnaires, but the study authors did not report whether this participant was from the intervention or comparison group.\nWe downgraded certainty of the evidence owing to study limitations, and limited data from few small studies.\nNo studies reported severity of PTSD, or health-related quality of life.\nAuthors' conclusions\nWe are uncertain of the effects of information or education interventions given to adult ICU patients and their carers, as the evidence in all cases was of very low certainty, and our confidence in the evidence was limited. Ongoing studies may contribute more data and introduce more certainty when incorporated into future updates of the review.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "During intensive care unit (ICU) admission, patients and their carers experience physical and psychological stressors that may lead to increased anxiety, depression, and post-traumatic stress disorder (PTSD). Improving communication among patients, their carers, and doctors, nurses, and other ICU staff may improve these outcomes. Communication may include information or educational interventions, in different formats, which aim to improve knowledge of the patient's condition, their treatment plan, or challenges they may face after ICU discharge."
}
] |
query-laysum
|
8076
|
[
{
"role": "user",
"content": "Abstract: Background\nPolycystic ovarian syndrome (PCOS) is characterised by the clinical signs of oligo-amenorrhoea, infertility and hirsutism. Conventional treatment of PCOS includes a range of oral pharmacological agents, lifestyle changes and surgical modalities. Beta-endorphin is present in the follicular fluid of both normal and polycystic ovaries. It was demonstrated that the beta-endorphin levels in ovarian follicular fluid of otherwise healthy women who were undergoing ovulation were much higher than the levels measured in plasma. Given that acupuncture impacts on beta-endorphin production, which may affect gonadotropin-releasing hormone (GnRH) secretion, it is postulated that acupuncture may have a role in ovulation induction via increased beta-endorphin production effecting GnRH secretion. This is an update of our previous review published in 2016.\nObjectives\nTo assess the effectiveness and safety of acupuncture treatment for oligo/anovulatory women with polycystic ovarian syndrome (PCOS) for both fertility and symptom control.\nSearch methods\nWe identified relevant studies from databases including the Gynaecology and Fertility Group Specialised Register, CENTRAL, MEDLINE, Embase, PsycINFO, CNKI, CBM and VIP. We also searched trial registries and reference lists from relevant papers. CENTRAL, MEDLINE, Embase, PsycINFO, CNKI and VIP searches are current to May 2018. CBM database search is to November 2015.\nSelection criteria\nWe included randomised controlled trials (RCTs) that studied the efficacy of acupuncture treatment for oligo/anovulatory women with PCOS. We excluded quasi- or pseudo-RCTs.\nData collection and analysis\nTwo review authors independently selected the studies, extracted data and assessed risk of bias. We calculated risk ratios (RR), mean difference (MD), standardised mean difference (SMD) and 95% confidence intervals (CIs). Primary outcomes were live birth rate, multiple pregnancy rate and ovulation rate, and secondary outcomes were clinical pregnancy rate, restored regular menstruation period, miscarriage rate and adverse events. We assessed the quality of the evidence using GRADE methods.\nMain results\nWe included eight RCTs with 1546 women. Five RCTs were included in our previous review and three new RCTs were added in this update of the review. They compared true acupuncture versus sham acupuncture (three RCTs), true acupuncture versus relaxation (one RCT), true acupuncture versus clomiphene (one RCT), low-frequency electroacupuncture versus physical exercise or no intervention (one RCT) and true acupuncture versus Diane-35 (two RCTs). 
Studies that compared true acupuncture versus Diane-35 did not measure fertility outcomes as they were focused on symptom control.\nSeven of the studies were at high risk of bias in at least one domain.\nFor true acupuncture versus sham acupuncture, we could not exclude clinically relevant differences in live birth (RR 0.97, 95% CI 0.76 to 1.24; 1 RCT, 926 women; low-quality evidence); multiple pregnancy rate (RR 0.89, 95% CI 0.33 to 2.45; 1 RCT, 926 women; low-quality evidence); ovulation rate (SMD 0.02, 95% CI –0.15 to 0.19, I2 = 0%; 2 RCTs, 1010 women; low-quality evidence); clinical pregnancy rate (RR 1.03, 95% CI 0.82 to 1.29; I2 = 0%; 3 RCTs, 1117 women; low-quality evidence) and miscarriage rate (RR 1.10, 95% CI 0.77 to 1.56; 1 RCT, 926 women; low-quality evidence).\nNumber of intermenstrual days may have improved in participants receiving true acupuncture compared to sham acupuncture (MD –312.09 days, 95% CI –344.59 to –279.59; 1 RCT, 141 women; low-quality evidence).\nTrue acupuncture probably worsens adverse events compared to sham acupuncture (RR 1.16, 95% CI 1.02 to 1.31; I2 = 0%; 3 RCTs, 1230 women; moderate-quality evidence).\nNo studies reported data on live birth rate and multiple pregnancy rate for the other comparisons: physical exercise or no intervention, relaxation and clomiphene. Studies including Diane-35 did not measure fertility outcomes.\nWe were uncertain whether acupuncture improved ovulation rate (measured by ultrasound three months post treatment) compared to relaxation (MD 0.35, 95% CI 0.14 to 0.56; 1 RCT, 28 women; very low-quality evidence) or Diane-35 (RR 1.45, 95% CI 0.87 to 2.42; 1 RCT, 58 women; very low-quality evidence).\nOverall evidence ranged from very low quality to moderate quality. The main limitations were failure to report important clinical outcomes and very serious imprecision.\nAuthors' conclusions\nFor true acupuncture versus sham acupuncture we cannot exclude clinically relevant differences in live birth rate, multiple pregnancy rate, ovulation rate, clinical pregnancy rate or miscarriage. Number of intermenstrual days may improve in participants receiving true acupuncture compared to sham acupuncture. True acupuncture probably worsens adverse events compared to sham acupuncture.\nNo studies reported data on live birth rate and multiple pregnancy rate for the other comparisons: physical exercise or no intervention, relaxation and clomiphene. Studies including Diane-35 did not measure fertility outcomes as the women in these trials did not seek fertility.\nWe are uncertain whether acupuncture improves ovulation rate (measured by ultrasound three months post treatment) compared to relaxation or Diane-35. The other comparisons did not report on this outcome.\nAdverse events were recorded in the acupuncture group for the comparisons physical exercise or no intervention, clomiphene and Diane-35. These included dizziness, nausea and subcutaneous haematoma. Evidence was very low quality with very wide CIs and very low event rates.\nThere are only a limited number of RCTs in this area, limiting our ability to determine effectiveness of acupuncture for PCOS.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence ranged from very low to moderate quality, the main limitations were not reporting important clinical results and not enough data."
}
] |
query-laysum
|
8077
|
[
{
"role": "user",
"content": "Abstract: Background\nHip fractures are a major healthcare problem, presenting a considerable challenge and burden to individuals and healthcare systems. The number of hip fractures globally is rising rapidly. The majority of intracapsular hip fractures are treated surgically.\nObjectives\nTo assess the relative effects (benefits and harms) of all surgical treatments used in the management of intracapsular hip fractures in older adults, using a network meta-analysis of randomised trials, and to generate a hierarchy of interventions according to their outcomes.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Web of Science, and five other databases in July 2020. We also searched clinical trials databases, conference proceedings, reference lists of retrieved articles and conducted backward-citation searches.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing different treatments for fragility intracapsular hip fractures in older adults. We included total hip arthroplasties (THAs), hemiarthroplasties (HAs), internal fixation, and non-operative treatments. We excluded studies of people with hip fracture with specific pathologies other than osteoporosis or resulting from high-energy trauma.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion. One review author completed data extraction which was checked by a second review author. We collected data for three outcomes at different time points: mortality and health-related quality of life (HRQoL) - both reported within 4 months, at 12 months, and after 24 months of surgery, and unplanned return to theatre (at end of study follow-up).\nWe performed a network meta-analysis (NMA) with Stata software, using frequentist methods, and calculated the differences between treatments using risk ratios (RRs) and standardised mean differences (SMDs) and their corresponding 95% confidence intervals (CIs). We also performed direct comparisons using the same codes.\nMain results\nWe included 119 studies (102 RCTS, 17 quasi-RCTs) with 17,653 participants with 17,669 intracapsular fractures in the review; 83% of fractures were displaced. The mean participant age ranged from 60 to 87 years and 73% were women.\nAfter discussion with clinical experts, we selected 12 nodes that represented the best balance between clinical plausibility and efficiency of the networks: cemented modern unipolar HA, dynamic fixed angle plate, uncemented first-generation bipolar HA, uncemented modern bipolar HA, cemented modern bipolar HA, uncemented first-generation unipolar HA, uncemented modern unipolar HA, THA with single articulation, dual-mobility THA, pins, screws, and non-operative treatment. Seventy-five studies (with 11,855 participants) with data for at least two of these treatments contributed to the NMA.\nWe selected cemented modern unipolar HA as a reference treatment against which other treatments were compared. This was a common treatment in the networks, providing a clinically appropriate comparison. In order to provide a concise summary of the results, we report only network estimates when there was evidence of difference between treatments.\nWe downgraded the certainty of the evidence for serious and very serious risks of bias and when estimates included possible transitivity, particularly for internal fixation which included more undisplaced fractures. We also downgraded for incoherence, or inconsistency in indirect estimates, although this affected few estimates. 
Most estimates included the possibility of benefits and harms, and we downgraded the evidence for these treatments for imprecision.\nWe found that cemented modern unipolar HA, dynamic fixed angle plate and pins seemed to have the greatest likelihood of reducing mortality at 12 months. Overall, 23.5% of participants who received the reference treatment died within 12 months of surgery. Uncemented modern bipolar HA had higher mortality than the reference treatment (RR 1.37, 95% CI 1.02 to 1.85; derived only from indirect evidence; low-certainty evidence), and THA with single articulation also had higher mortality (network estimate RR 1.62, 95% CI 1.13 to 2.32; derived from direct evidence from 2 studies with 225 participants, and indirect evidence; very low-certainty evidence). In the remaining treatments, the certainty of the evidence ranged from low to very low, and we noted no evidence of any differences in mortality at 12 months.\nWe found that THA (single articulation), cemented modern bipolar HA and uncemented modern bipolar HA seemed to have the greatest likelihood of improving HRQoL at 12 months. This network was comparatively sparse compared to other outcomes and the certainty of the evidence of differences between treatments was very low. We noted no evidence of any differences in HRQoL at 12 months, although estimates were imprecise.\nWe found that arthroplasty treatments seemed to have a greater likelihood of reducing unplanned return to theatre than internal fixation and non-operative treatment. We estimated that 4.3% of participants who received the reference treatment returned to theatre during the study follow-up. Compared to this treatment, we found low-certainty evidence that more participants returned to theatre if they were treated with a dynamic fixed angle plate (network estimate RR 4.63, 95% CI 2.94 to 7.30; from direct evidence from 1 study with 190 participants, and indirect evidence). We found very low-certainty evidence that more participants returned to theatre when treated with pins (RR 4.16, 95% CI 2.53 to 6.84; only from indirect evidence), screws (network estimate RR 5.04, 95% CI 3.25 to 7.82; from direct evidence from 2 studies with 278 participants, and indirect evidence), and non-operative treatment (RR 5.41, 95% CI 1.80 to 16.26; only from indirect evidence). There was very low-certainty evidence of a tendency for an increased risk of unplanned return to theatre for all of the arthroplasty treatments, and in particular for THA, compared with cemented modern unipolar HA, with little evidence to suggest the size of this difference varied strongly between the arthroplasty treatments.\nAuthors' conclusions\nThere was considerable variability in the ranking of each treatment such that there was no one outstanding, or subset of outstanding, superior treatments. However, cemented modern arthroplasties tended to more often yield better outcomes than alternative treatments and may be a more successful approach than internal fixation. There is no evidence of a difference between THA (single articulation) and cemented modern unipolar HA in the outcomes measured in this review. THA may be an appropriate treatment for a subset of people with intracapsular fracture but we have not explored this further.\n\nGiven the provided abstract, please respond to the following query: \"Why is this question important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A hip fracture is a break at the top of the leg bone. We included people with a break just below the ball and socket joint. These types of broken hip are common in older adults whose bones may be fragile because of a condition called osteoporosis. They often happen after a fall from a standing or sitting position. The broken hip can be treated in different ways, and we don't know whether some treatments are better than others."
}
] |
query-laysum
|
8078
|
[
{
"role": "user",
"content": "Abstract: Background\nRehabilitation is effective for recovery after stroke and other non-progressive brain injuries but it is unclear if the rehabilitation environment itself, outside of limited therapy hours, is maximally conducive to recovery. Environmental enrichment is a relatively new concept within rehabilitation for humans. In this review, this is defined as an intervention designed to facilitate physical (motor and sensory), cognitive and social activity by the provision of equipment and organisation of a structured, stimulating environment. The environment should be designed to encourage (but not force) activities without additional specialised rehabilitation input.\nObjectives\nTo assess the effects of environmental enrichment on well-being, functional recovery, activity levels and quality of life in people who have stroke or non-progressive brain injury.\nSearch methods\nWe conducted the search on 26 October 2020. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library; MEDLINE (from 1950); Embase (from 1980); the Cumulative Index to Nursing and Allied Health Literature (CINAHL; from 1982); the Allied and Complementary Medicine Database (AMED; from 1985); PsycINFO (from 1806); the Physiotherapy Evidence Database (PEDro; from 1999); and 10 additional bibliographic databases and ongoing trial registers.\nSelection criteria\nWe planned to include randomised controlled trials (RCTs) that compared environmental enrichment with standard services.\nData collection and analysis\nTwo review authors independently assessed eligible studies, extracted data, and assessed study quality. Any disagreements were resolved through discussion with a third review author. We determined the risk of bias for the included study and performed a 'best evidence' synthesis using the GRADE approach.\nMain results\nWe identified one RCT, involving 53 participants with stroke, comparing environmental enrichment (which included physical, cognitive and social activities such as reading material, board and card games, gaming technology, music, artwork, and computer with Internet) with standard services in an inpatient rehabilitation setting. We excluded five studies, found two studies awaiting classification and one ongoing study which described environmental enrichment in their interventions. Of the excluded studies, three were non-RCTs and two described co-interventions with a significant component of rehabilitation. Based on the single small included RCT at high risk of bias, data are insufficient to provide any reliable indication of benefit or risk to guide clinical practice in terms of the provision of environmental enrichment.\nAuthors' conclusions\nThe gap in current research should not, however, be interpreted as proof that environmental enrichment is ineffective.\nFurther research is needed with robust study designs, such as cluster RCTs, and consistent outcome measurement evaluating the effectiveness of environmental enrichment in different settings (inpatient versus outpatient), the relative effectiveness of various components of environmental enrichment, cost-effectiveness, and safety of the intervention in people following stroke or other non-progressive brain injuries. It should be noted, however, that it is challenging to randomise or double-blind trials of environmental enrichment given the nature of the intervention.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Population: we planned to include studies in which participants were adults who had had a stroke or a non-progressive brain injury (such as traumatic brain injury but not dementia, Alzheimer's disease, or multiple sclerosis).\nIntervention: environmental enrichment interventions will usually include multiple activities, such as computers plus gaming technology plus music and reading.\nComparison: we planned to compare environmental interventions with usual care (regular physiotherapy, speech therapy, occupational therapy) or alternative treatment.\nOutcomes: we divided outcomes into primary and secondary outcomes. Primary outcomes focused on psychological well-being (anxiety, depression, stress) and coping. Secondary outcomes focused on quality of life, physical function, communication and cognitive function, and activity levels. We also planned to report adverse events."
}
] |
query-laysum | 8079 |
[
{
"role": "user",
"content": "Abstract: Background\nSymptoms of anxiety and depression are common in inflammatory bowel disease (IBD). Antidepressants are taken by approximately 30% of people with IBD. However, there are no current guidelines on treating co-morbid anxiety and depression in people with IBD with antidepressants, nor are there clear data on the role of antidepressants in managing physical symptoms of IBD.\nObjectives\nThe objectives were to assess the efficacy and safety of antidepressants for treating anxiety and depression in IBD, and to assess the effects of antidepressants on quality of life (QoL) and managing disease activity in IBD.\nSearch methods\nWe searched MEDLINE; Embase, CINAHL, PsycINFO, CENTRAL, and the Cochrane IBD Group Specialized Register from inception to 23 August 2018. Reference lists, trials registers, conference proceedings and grey literature were also searched.\nSelection criteria\nRandomised controlled trials (RCTs) and observational studies comparing any type of antidepressant to placebo, no treatment or an active therapy for IBD were included.\nData collection and analysis\nTwo authors independently screened search results, extracted data and assessed bias using the Cochrane risk of bias tool. We used the Newcastle-Ottawa Scale to assess quality of observational studies. GRADE was used to evaluate the certainty of the evidence supporting the outcomes. Primary outcomes included anxiety and depression. Anxiety was assessed using the Hospital Anxiety and Depression Scale (HADS) or the Hamilton Anxiety Rating Scale (HARS). Depression was assessed using HADS or the Beck Depression Inventory. Secondary outcomes included adverse events (AEs), serious AEs, withdrawal due to AEs, quality of life (QoL), clinical remission, relapse, pain, hospital admissions, surgery, and need for steroid treatment. QoL was assessed using the WHO-QOL-BREF questionnaire. We calculated the risk ratio (RR) and corresponding 95% confidence intervals (CI) for dichotomous outcomes. For continuous outcomes, we calculated the mean difference (MD) with 95% CI. A fixed-effect model was used for analysis.\nMain results\nWe included four studies (188 participants). Two studies were double-blind RCTs, one was a non-randomised controlled trial, and one was an observational retrospective case-matched study. The age of participants ranged from 27 to 37.8 years. In three studies participants had quiescent IBD and in one study participants had active or quiescent IBD. Participants in one study had co-morbid anxiety or depression. One study used duloxetine (60 mg daily), one study used fluoxetine (20 mg daily), one study used tianeptine (36 mg daily), and one study used various antidepressants in clinical ranges. Three studies had placebo controls and one study had a no treatment control group. One RCT was rated as low risk of bias and the other was rated as high risk of bias (incomplete outcome data). The non-randomised controlled trial was rated as high risk of bias (random sequence generation, allocation concealment, blinding). The observational study was rated as high methodological quality, but is still considered to be at high risk of bias given its observational design.\nThe effect of antidepressants on anxiety and depression is uncertain. At 12 weeks, the mean anxiety score in antidepressant participants was 6.11 ± 3 compared to 8.5 ± 3.45 in placebo participants (MD -2.39, 95% CI -4.30 to -0.48, 44 participants, low certainty evidence). 
At 12 months, the mean anxiety score in antidepressant participants was 3.8 ± 2.5 compared to 4.2 ± 4.9 in placebo participants (MD -0.40, 95% CI -3.47 to 2.67, 26 participants; low certainty evidence). At 12 weeks, the mean depression score in antidepressant participants was 7.47 ± 2.42 compared to 10.5 ± 3.57 in placebo participants (MD -3.03, 95% CI -4.83 to -1.23, 44 participants; low certainty evidence). At 12 months, the mean depression score in antidepressant participants was 2.9 ± 2.8 compared to 3.1 ± 3.4 in placebo participants (MD -0.20, 95% CI -2.62 to 2.22, 26 participants; low certainty evidence).\nThe effect of antidepressants on AEs is uncertain. Fifty-seven per cent (8/14) of participants in the antidepressant group reported AEs versus 25% (3/12) of placebo participants (RR 2.29, 95% CI 0.78 to 6.73, low certainty evidence). Commonly reported AEs include nausea, headache, dizziness, drowsiness, sexual problems, insomnia, fatigue, low mood/anxiety, dry mouth, muscle spasms and hot flushes. None of the included studies reported any serious AEs. None of the included studies reported on pain.\nOne study (44 participants) reported on QoL at 12 weeks and another study (26 participants) reported on QoL at 12 months. Physical, Psychological, Social and Environmental QoL were improved at 12 weeks compared to placebo (all low certainty evidence). There were no group differences in QoL at 12 months (all low certainty evidence). The effect of antidepressants on maintenance of clinical remission and endoscopic relapse is uncertain. At 12 months, 64% (9/14) of participants in the antidepressant group maintained clinical remission compared to 67% (8/12) of placebo participants (RR 0.96, 95% CI 0.55 to 1.69; low certainty evidence). At 12 months, none (0/30) of participants in the antidepressant group had endoscopic relapse compared to 10% (3/30) of placebo participants (RR 0.14, 95% CI 0.01 to 2.65; very low certainty evidence).\nAuthors' conclusions\nThe results for the outcomes assessed in this review are uncertain and no firm conclusions regarding the efficacy and safety of antidepressants in IBD can be drawn. Future studies should employ RCT designs, with a longer follow-up and develop solutions to address attrition. Inclusion of objective markers of disease activity is strongly recommended as is testing antidepressants from different classes, as at present it is unclear if any antidepressant (or class thereof) has differential efficacy.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers investigate?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Previously conducted studies of antidepressant therapy in IBD were reviewed. The data from some of these studies were combined using a method called a meta-analysis. During the analysis, people who took antidepressants were compared with those who did not take antidepressants with regard to rates of anxiety and depression and also other measures such as quality of life, side effects and IBD disease activity."
}
] |
query-laysum | 8080 |
[
{
"role": "user",
"content": "Abstract: Background\nDespite conflicting evidence, chest physiotherapy has been widely used as an adjunctive treatment for adults with pneumonia. This is an update of a review first published in 2010 and updated in 2013.\nObjectives\nTo assess the effectiveness and safety of chest physiotherapy for pneumonia in adults.\nSearch methods\nWe updated our searches in the following databases to May 2022: the Cochrane Central Register of Controlled Trials (CENTRAL) via OvidSP, MEDLINE via OvidSP (from 1966), Embase via embase.com (from 1974), Physiotherapy Evidence Database (PEDro) (from 1929), CINAHL via EBSCO (from 2009), and the Chinese Biomedical Literature Database (CBM) (from 1978).\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs assessing the efficacy of chest physiotherapy for treating pneumonia in adults.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included two new trials in this update (540 participants), for a total of eight RCTs (974 participants). Four RCTs were conducted in the United States, two in Sweden, one in China, and one in the United Kingdom. The studies looked at five types of chest physiotherapy: conventional chest physiotherapy; osteopathic manipulative treatment (OMT, which includes paraspinal inhibition, rib raising, and myofascial release); active cycle of breathing techniques (which includes active breathing control, thoracic expansion exercises, and forced expiration techniques); positive expiratory pressure; and high-frequency chest wall oscillation.\nWe assessed four trials as at unclear risk of bias and four trials as at high risk of bias.\nConventional chest physiotherapy (versus no physiotherapy) may have little to no effect on improving mortality, but the certainty of evidence is very low (risk ratio (RR) 1.03, 95% confidence interval (CI) 0.15 to 7.13; 2 trials, 225 participants; I² = 0%). OMT (versus placebo) may have little to no effect on improving mortality, but the certainty of evidence is very low (RR 0.43, 95% CI 0.12 to 1.50; 3 trials, 327 participants; I² = 0%). Similarly, high-frequency chest wall oscillation (versus no physiotherapy) may also have little to no effect on improving mortality, but the certainty of evidence is very low (RR 0.75, 95% CI 0.17 to 3.29; 1 trial, 286 participants).\nConventional chest physiotherapy (versus no physiotherapy) may have little to no effect on improving cure rate, but the certainty of evidence is very low (RR 0.93, 95% CI 0.56 to 1.55; 2 trials, 225 participants; I² = 85%). Active cycle of breathing techniques (versus no physiotherapy) may have little to no effect on improving cure rate, but the certainty of evidence is very low (RR 0.60, 95% CI 0.29 to 1.23; 1 trial, 32 participants). OMT (versus placebo) may improve cure rate, but the certainty of evidence is very low (RR 1.59, 95% CI 1.01 to 2.51; 2 trials, 79 participants; I² = 0%).\nOMT (versus placebo) may have little to no effect on mean duration of hospital stay, but the certainty of evidence is very low (mean difference (MD) −1.08 days, 95% CI −2.39 to 0.23; 3 trials, 333 participants; I² = 50%). Conventional chest physiotherapy (versus no physiotherapy, MD 0.7 days, 95% CI −1.39 to 2.79; 1 trial, 54 participants) and active cycle of breathing techniques (versus no physiotherapy, MD 1.4 days, 95% CI −0.69 to 3.49; 1 trial, 32 participants) may also have little to no effect on duration of hospital stay, but the certainty of evidence is very low. 
Positive expiratory pressure (versus no physiotherapy) may reduce the mean duration of hospital stay by 1.4 days, but the certainty of evidence is very low (MD −1.4 days, 95% CI −2.77 to −0.03; 1 trial, 98 participants).\nPositive expiratory pressure (versus no physiotherapy) may reduce the duration of fever by 0.7 days, but the certainty of evidence is very low (MD −0.7 days, 95% CI −1.36 to −0.04; 1 trial, 98 participants). Conventional chest physiotherapy (versus no physiotherapy, MD 0.4 days, 95% CI −1.01 to 1.81; 1 trial, 54 participants) and OMT (versus placebo, MD 0.6 days, 95% CI −1.60 to 2.80; 1 trial, 21 participants) may have little to no effect on duration of fever, but the certainty of evidence is very low.\nOMT (versus placebo) may have little to no effect on the mean duration of total antibiotic therapy, but the certainty of evidence is very low (MD −1.07 days, 95% CI −2.37 to 0.23; 3 trials, 333 participants; I² = 61%). Active cycle of breathing techniques (versus no physiotherapy) may have little to no effect on duration of total antibiotic therapy, but the certainty of evidence is very low (MD 0.2 days, 95% CI −4.39 to 4.69; 1 trial, 32 participants).\nHigh-frequency chest wall oscillation plus fibrobronchoscope alveolar lavage (versus fibrobronchoscope alveolar lavage alone) may reduce the MD of intensive care unit (ICU) stay by 3.8 days (MD −3.8 days, 95% CI −5.00 to −2.60; 1 trial, 286 participants) and the MD of mechanical ventilation by three days (MD −3 days, 95% CI −3.68 to −2.32; 1 trial, 286 participants), but the certainty of evidence is very low.\nOne trial reported transient muscle tenderness emerging after OMT in two participants. In another trial, three serious adverse events led to early withdrawal after OMT. One trial reported no adverse events after positive expiratory pressure treatment.\nLimitations of this review were the small sample size and unclear or high risk of bias of the included trials.\nAuthors' conclusions\nThe inclusion of two new trials in this update did not change the main conclusions of the original review. The current evidence is very uncertain about the effect of chest physiotherapy on improving mortality and cure rate in adults with pneumonia. Some physiotherapies may slightly shorten hospital stays, fever duration, and ICU stays, as well as mechanical ventilation. However, all of these findings are based on very low certainty evidence and need to be further validated.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Pneumonia is one of the most common health problems affecting all age groups around the world. Antibiotics represent the mainstay of pneumonia treatment, whilst some other supportive therapies, such as supplementary oxygen, might also be beneficial in improving patient outcomes. Chest physiotherapy, an airway clearance technique, has been widely used as a supportive therapy for pneumonia in adults without reliable evidence."
}
] |
query-laysum | 8081 |
[
{
"role": "user",
"content": "Abstract: Background\nGlobally, cannabis use is prevalent and widespread. There are currently no pharmacotherapies approved for treatment of cannabis use disorders.\nThis is an update of a Cochrane Review first published in the Cochrane Library in Issue 12, 2014.\nObjectives\nTo assess the effectiveness and safety of pharmacotherapies as compared with each other, placebo or no pharmacotherapy (supportive care) for reducing symptoms of cannabis withdrawal and promoting cessation or reduction of cannabis use.\nSearch methods\nWe updated our searches of the following databases to March 2018: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO and Web of Science.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs involving the use of medications to treat cannabis withdrawal or to promote cessation or reduction of cannabis use, or both, in comparison with other medications, placebo or no medication (supportive care) in people diagnosed as cannabis dependent or who were likely to be dependent.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 21 RCTs involving 1755 participants: 18 studies recruited adults (mean age 22 to 41 years); three studies targeted young people (mean age 20 years). Most (75%) participants were male. The studies were at low risk of performance, detection and selective outcome reporting bias. One study was at risk of selection bias, and three studies were at risk of attrition bias.\nAll studies involved comparison of active medication and placebo. The medications were diverse, as were the outcomes reported, which limited the extent of analysis.\nAbstinence at end of treatment was no more likely with Δ9-tetrahydrocannabinol (THC) preparations than with placebo (risk ratio (RR) 0.98, 95% confidence interval (CI) 0.64 to 1.52; 305 participants; 3 studies; moderate-quality evidence). For selective serotonin reuptake inhibitor (SSRI) antidepressants, mixed action antidepressants, anticonvulsants and mood stabilisers, buspirone and N-acetylcysteine, there was no difference in the likelihood of abstinence at end of treatment compared to placebo (low- to very low-quality evidence).\nThere was qualitative evidence of reduced intensity of withdrawal symptoms with THC preparations compared to placebo. For other pharmacotherapies, this outcome was either not examined, or no significant differences was reported.\nAdverse effects were no more likely with THC preparations (RR 1.02, 95% CI 0.89 to 1.17; 318 participants; 3 studies) or N-acetylcysteine (RR 0.94, 95% CI 0.71 to 1.23; 418 participants; 2 studies) compared to placebo (moderate-quality evidence). 
For SSRI antidepressants, mixed action antidepressants, buspirone and N-acetylcysteine, there was no difference in adverse effects compared to placebo (low- to very low-quality evidence).\nThere was no difference in the likelihood of withdrawal from treatment due to adverse effects with THC preparations, SSRIs antidepressants, mixed action antidepressants, anticonvulsants and mood stabilisers, buspirone and N-acetylcysteine compared to placebo (low- to very low-quality evidence).\nThere was no difference in the likelihood of treatment completion with THC preparations, SSRI antidepressants, mixed action antidepressants and buspirone compared to placebo (low- to very low-quality evidence) or with N-acetylcysteine compared to placebo (RR 1.06, 95% CI 0.93 to 1.21; 418 participants; 2 studies; moderate-quality evidence). Anticonvulsants and mood stabilisers appeared to reduce the likelihood of treatment completion (RR 0.66, 95% CI 0.47 to 0.92; 141 participants; 3 studies; low-quality evidence).\nAvailable evidence on gabapentin (anticonvulsant), oxytocin (neuropeptide) and atomoxetine was insufficient for estimates of effectiveness.\nAuthors' conclusions\nThere is incomplete evidence for all of the pharmacotherapies investigated, and for many outcomes the quality of the evidence was low or very low. Findings indicate that SSRI antidepressants, mixed action antidepressants, bupropion, buspirone and atomoxetine are probably of little value in the treatment of cannabis dependence. Given the limited evidence of efficacy, THC preparations should be considered still experimental, with some positive effects on withdrawal symptoms and craving. The evidence base for the anticonvulsant gabapentin, oxytocin, and N-acetylcysteine is weak, but these medications are also worth further investigation.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence for many of the outcomes in this review was low or very low because each medicine was investigated by a small number of studies (ranging from one to four), each study involved small numbers of participants, there was some inconsistency in the findings and there was a risk of bias due to study participants dropping out of treatment."
}
] |
query-laysum | 8082 |
[
{
"role": "user",
"content": "Abstract: Background\nFemale pattern hair loss (FPHL), or androgenic alopecia, is the most common type of hair loss affecting women. It is characterised by progressive shortening of the duration of the growth phase of the hair with successive hair cycles, and progressive follicular miniaturisation with conversion of terminal to vellus hair follicles (terminal hairs are thicker and longer, while vellus hairs are soft, fine, and short). The frontal hair line may or may not be preserved. Hair loss can have a serious psychological impact on women.\nObjectives\nTo determine the efficacy and safety of the available options for the treatment of female pattern hair loss in women.\nSearch methods\nWe updated our searches of the following databases to July 2015: the Cochrane Skin Group Specialised Register, CENTRAL in the Cochrane Library (2015, Issue 6), MEDLINE (from 1946), EMBASE (from 1974), PsycINFO (from 1872), AMED (from 1985), LILACS (from 1982), PubMed (from 1947), and Web of Science (from 1945). We also searched five trial registries and checked the reference lists of included and excluded studies.\nSelection criteria\nWe included randomised controlled trials that assessed the efficacy of interventions for FPHL in women.\nData collection and analysis\nTwo review authors independently assessed trial quality, extracted data and carried out analyses.\nMain results\nWe included 47 trials, with 5290 participants, of which 25 trials were new to this update. Only five trials were at 'low risk of bias', 26 were at 'unclear risk', and 16 were at 'high risk of bias'.\nThe included trials evaluated a wide range of interventions, and 17 studies evaluated minoxidil. Pooled data from six studies indicated that a greater proportion of participants (157/593) treated with minoxidil (2% and one study with 1%) reported a moderate to marked increase in their hair regrowth when compared with placebo (77/555) (risk ratio (RR) = 1.93, 95% confidence interval (CI) 1.51 to 2.47; moderate quality evidence). These results were confirmed by the investigator-rated assessments in seven studies with 1181 participants (RR 2.35, 95% CI 1.68 to 3.28; moderate quality evidence). Only one study reported on quality of life (QoL) (260 participants), albeit inadequately (low quality evidence). There was an important increase of 13.18 in total hair count per cm² in the minoxidil group compared to the placebo group (95% CI 10.92 to 15.44; low quality evidence) in eight studies (1242 participants). There were 40/407 adverse events in the twice daily minoxidil 2% group versus 28/320 in the placebo group (RR 1.24, 95% CI 0.82 to 1.87; low quality evidence). There was also no statistically significant difference in adverse events between any of the individual concentrations against placebo.\nFour studies (1006 participants) evaluated minoxidil 2% versus 5%. In one study, 25/57 participants in the minoxidil 2% group experienced moderate to greatly increased hair regrowth versus 22/56 in the 5% group (RR 1.12, 95% CI 0.72 to 1.73). In another study, 209 participants experienced no difference based on a visual analogue scale (P = 0.062; low quality evidence). The assessments of the investigators based on three studies (586 participants) were in agreement with these findings (moderate quality evidence). One study assessed QoL (209 participants) and reported limited data (low quality evidence). 
Four trials (1006 participants) did not show a difference in number of adverse events between the two concentrations (RR 1.02, 95% CI 0.91 to 1.20; low quality evidence). Both concentrations did not show a difference in increase in total hair count at end of study in three trials with 631 participants (mean difference (MD) −2.12, 95% CI −5.47 to 1.23; low quality evidence).\nThree studies investigated finasteride 1 mg compared to placebo. In the finasteride group 30/67 participants experienced improvement compared to 33/70 in the placebo group (RR 0.95, 95% CI 0.66 to 1.37; low quality evidence). This was consistent with the investigators' assessments (RR 0.77, 95% CI 0.31 to 1.90; low quality evidence). QoL was not assessed. Only one study addressed adverse events (137 participants) (RR 1.03, 95% CI 0.45 to 2.34; low quality evidence). In two studies (219 participants) there was no clinically meaningful difference in change of hair count, whilst one study (12 participants) favoured finasteride (low quality evidence).\nTwo studies (141 participants) evaluated low-level laser comb therapy compared to a sham device. According to the participants, the low-level laser comb was not more effective than the sham device (RR 1.54, 95% CI 0.96 to 2.49; and RR 1.18, 95% CI 0.74 to 1.89; moderate quality evidence). However, there was a difference in favour of low-level laser comb for change from baseline in hair count (MD 17.40, 95% CI 9.74 to 25.06; and MD 17.60, 95% CI 11.97 to 23.23; low quality evidence). These studies did not assess QoL and did not report adverse events per treatment arm and only in a generic way (low quality evidence). Low-level laser therapy against sham comparisons in two separate studies also showed an increase in total hair count but with limited further data.\nSingle studies addressed the other comparisons and provided limited evidence of either the efficacy or safety of these interventions, or were unlikely to be examined in future trials.\nAuthors' conclusions\nAlthough there was a predominance of included studies at unclear to high risk of bias, there was evidence to support the efficacy and safety of topical minoxidil in the treatment of FPHL (mainly moderate to low quality evidence). Furthermore, there was no difference in effect between the minoxidil 2% and 5% with the quality of evidence rated moderate to low for most outcomes. Finasteride was no more effective than placebo (low quality evidence). There were inconsistent results in the studies that evaluated laser devices (moderate to low quality evidence), but there was an improvement in total hair count measured from baseline.\nFurther randomised controlled trials of other widely-used treatments, such as spironolactone, finasteride (different dosages), dutasteride, cyproterone acetate, and laser-based therapy are needed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We examined the available evidence up to 7 July 2015. Forty-seven studies, which included 5290 women, met the inclusion criteria of this Cochrane review. The mean age of participants in the studies varied from 27 to 57 years. We assessed over half of the included studies as at unclear risk of bias, 16 as high risk, and only five studies as low risk of bias. Funding was provided in 26 of the 47 studies, mainly by pharmaceutical companies."
}
] |
query-laysum | 8083 |
[
{
"role": "user",
"content": "Abstract: Background\nAlthough combination formulas containing antihistamines, decongestants, and/or analgesics are sold over-the-counter in large quantities for the common cold, the evidence for their effectiveness is limited. This is an update of a review first published in 2012.\nObjectives\nTo assess the effectiveness of antihistamine-decongestant-analgesic combinations compared with placebo or other active controls (excluding antibiotics) in reducing the duration of symptoms and alleviating symptoms (general feeling of illness, nasal congestion, rhinorrhoea, sneezing, and cough) in children and adults with the common cold.\nSearch methods\nWe searched CENTRAL, MEDLINE via EBSCOhost, Embase, CINAHL via EBSCOhost, LILACS, and Web of Science to 10 June 2021. We searched the WHO ICTRP and ClinicalTrials.gov on 10 June 2021.\nSelection criteria\nRandomised controlled trials investigating the effectiveness of antihistamine-decongestant-analgesic combinations compared with placebo, other active treatment (excluding antibiotics), or no treatment in children and adults with the common cold.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We assessed the certainty of the evidence using the GRADE approach. We categorised the included trials according to the active ingredients.\nMain results\nWe identified 30 studies (6304 participants) including 31 treatment comparisons. The control intervention was placebo in 26 trials and an active substance (paracetamol, chlorphenindione + phenylpropanolamine + belladonna, diphenhydramine) in six trials (two trials had placebo as well as active treatment arms). Reporting of methods was generally poor, and there were large differences in study design, participants, interventions, and outcomes. Most of the included trials involved adult participants. Children were included in nine trials. Three trials included very young children (from six months to five years), and five trials included children aged 2 to 16. One trial included adults and children aged 12 years or older. The trials took place in different settings: university clinics, paediatric departments, family medicine departments, and general practice surgeries.\nAntihistamine-decongestant: 14 trials (1298 participants). Eight trials reported on global effectiveness, of which six studies were pooled (281 participants on active treatment and 284 participants on placebo). The odds ratio (OR) of treatment failure was 0.31 (95% confidence interval (CI) 0.20 to 0.48; moderate certainty evidence); number needed to treat for an additional beneficial outcome (NNTB) 3.9 (95% CI 3.03 to 5.2). On the final evaluation day (follow-up: 3 to 10 days), 55% of participants in the placebo group had a favourable response compared to 70% on active treatment. Of the two trials not pooled, one showed some global effect, whilst the other showed no effect.\nAdverse effects: the antihistamine-decongestant group experienced more adverse effects than the control group: 128/419 (31%) versus 100/423 (13%) participants suffered one or more adverse effects (OR 1.58, 95%CI 0.78 to 3.21; moderate certainty of evidence).\nAntihistamine-analgesic: four trials (1608 participants). Two trials reported on global effectiveness; data from one trial were presented (290 participants on active treatment and 292 participants on ascorbic acid). The OR of treatment failure was 0.33 (95% CI 0.23 to 0.46; moderate certainty evidence); NNTB 6.67 (95% CI 4.76 to 12.5). 
Forty-three per cent of participants in the control group and 70% in the active treatment group were cured after six days of treatment. The second trial also showed an effect in favour of the active treatment.\nAdverse effects: there were not significantly more adverse effects in the active treatment group compared to placebo (drowsiness, hypersomnia, sleepiness 10/152 versus 4/120; OR 1.64 (95 % CI 0.48 to 5.59; low certainty evidence).\nAnalgesic-decongestant: seven trials (2575 participants). One trial reported on global effectiveness: 73% of participants in the analgesic-decongestant group reported a benefit compared with 52% in the control group (paracetamol) (OR of treatment failure 0.28, 95% CI 0.15 to 0.52; moderate certainty evidence; NNTB 4.7).\nAdverse effects: the decongestant-analgesic group experienced significantly more adverse effects than the control group (199/1122 versus 75/675; OR 1.62 95% CI 1.18 to 2.23; high certainty evidence; number needed to treat for an additional harmful outcome (NNTH 17).\nAntihistamine-analgesic-decongestant: six trials (1014 participants). Five trials reported on global effectiveness, of which two studies in adults could be pooled: global effect reported with active treatment (52%) and placebo (34%) was equivalent to a difference of less than one point on a four- or five-point scale; the OR of treatment failure was 0.47 (95% CI 0.33 to 0.67; low certainty evidence); NNTB 5.6 (95% CI 3.8 to 10.2). One trial in children aged 2 to 12 years, and two trials in adults found no beneficial effect.\nAdverse effects: in one trial 5/224 (2%) suffered adverse effects with the active treatment versus 9/208 (4%) with placebo. Two other trials reported no differences between treatment groups.\nAuthors' conclusions\nWe found a lack of data on the effectiveness of antihistamine-analgesic-decongestant combinations for the common cold. Based on these scarce data, the effect on individual symptoms is probably too small to be clinically relevant. The current evidence suggests that antihistamine-analgesic-decongestant combinations have some general benefit in adults and older children. These benefits must be weighed against the risk of adverse effects. There is no evidence of effectiveness in young children. In 2005, the US Food and Drug Administration issued a warning about adverse effects associated with the use of over-the-counter nasal preparations containing phenylpropanolamine.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Are combination formulas containing antihistamines (AH), decongestants (DC), and/or analgesics (AN), sold over-the-counter, effective in treating the symptoms of the common cold?"
}
] |