dataset stringclasses 1 value | id stringlengths 1-5 | messages listlengths 2-2 |
---|---|---|
query-laysum | 24161 | [
{
"role": "user",
"content": "Abstract: Background\nPostherpetic neuralgia (PHN) is a painful and refractory complication of herpes zoster. Treatments are either partially or totally ineffective for many people with PHN. Antiviral agents, used at the time of the rash, have been proposed as an intervention to prevent the development of PHN. This is the first update since the first publication of the review in 2009.\nObjectives\nTo assess the effectiveness of antiviral agents in preventing PHN.\nSearch methods\nOn 26 April 2013, we updated the searches in the Cochrane Neuromuscular Disease Group Specialized Register, CENTRAL, MEDLINE, EMBASE, LILACS, and the Chinese Biomedical Retrieval System. We checked the references of published studies to identify additional trials, and contacted authors to obtain additional data. We searched other databases in The Cochrane Library for information for the Discussion and two clinical trials registries for ongoing trials.\nSelection criteria\nWe considered all randomised controlled trials (RCTs) of antiviral treatment given within 72 hours after the onset of herpes zoster for preventing PHN. There were no language restrictions.\nData collection and analysis\nTwo authors independently selected trials, evaluated the risk of bias in included trials, and extracted and analysed data.\nMain results\nSix RCTs with a total of 1211 participants were eligible; five trials evaluated oral aciclovir, and one, with 419 participants, evaluated oral famciclovir. We were able to conduct meta-analyses as there were sufficient similarities in the included studies, such as the reporting of the presence of PHN, duration of rash before treatment initiation and treatment regimen. For our primary outcome, based on three trials (609 participants) we found no significant difference between the aciclovir and control groups in the incidence of PHN four months after the onset of the acute herpetic rash (risk ratio (RR) 0.75, 95% confidence interval (CI) 0.51 to 1.11), nor was there a significant difference at six months (RR 1.05, 95% CI 0.87 to 1.27, two trials, 476 participants). In four of the trials (692 participants), there was some evidence for a reduction in the incidence of pain four weeks after the onset of rash. In the trial of famciclovir versus placebo, neither 500 mg nor 750 mg doses of famciclovir reduced the incidence of herpetic neuralgia significantly. The most commonly reported adverse events were nausea, vomiting, diarrhoea and headache for aciclovir, and headache and nausea for famciclovir. For neither treatment was the incidence of adverse events significantly different from placebo. None of the studies were at high risk of bias, although the risk of bias was unclear in at least one domain for all but one study. We found no new RCTs when we updated the searches in April 2013.\nAuthors' conclusions\nThere is high quality evidence that oral aciclovir does not reduce the incidence of PHN significantly. In addition, there is insufficient evidence to determine the effect of other antiviral treatments; therefore, further well-designed RCTs are needed to investigate famciclovir or other new antiviral agents in preventing PHN. Future trials should pay more attention to the severity of pain and quality of life of participants, and should be conducted among different subgroups of people, such as people who are immunocompromised.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Aciclovir, which is an antiviral medicine, was used in five trials (900 participants) and was not better than a placebo (dummy pill) in preventing PHN. In the other trial (419 participants), famciclovir, which is another antiviral drug, was no better than placebo in preventing pain following healing of the shingles rash. The number of side effects with aciclovir and famciclovir was not very different from the number with placebo. The trials did not have any major problems of design or conduct that put the results in doubt, although most of the reports did not provide enough information to fully assess every aspect. We conclude that according to high quality evidence, oral aciclovir was ineffective in reducing the incidence of PHN and there is not enough evidence on other antiviral treatments. There need to be further well-designed trials of famciclovir or other new antiviral agents with a greater number of participants. Future trials should pay more attention to the severity of pain and quality of life of participants, and should include different groups of people, such as those who have lowered immunity.\nThe evidence is current to April 2013, when the searches were last updated. Because new evidence on this topic is slow to emerge, we have scheduled the next update of this review for 2017."
}
] |
query-laysum | 24162 | [
{
"role": "user",
"content": "Abstract: Background\nInsufficient consumption of fruits and vegetables in childhood increases the risk of future non-communicable diseases, including cardiovascular disease. Testing the effects of interventions to increase consumption of fruit and vegetables, including those focused on specific child-feeding strategies or broader multicomponent interventions targeting the home or childcare environment is required to assess the potential to reduce this disease burden.\nObjectives\nTo assess the effectiveness, cost effectiveness and associated adverse events of interventions designed to increase the consumption of fruit, vegetables or both amongst children aged five years and under.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and two clinical trials registries to identify eligible trials on 25 January 2020. We searched Proquest Dissertations and Theses in November 2019. We reviewed reference lists of included trials and handsearched three international nutrition journals. We contacted authors of included trials to identify further potentially relevant trials.\nSelection criteria\nWe included randomised controlled trials, including cluster-randomised controlled trials and cross-over trials, of any intervention primarily targeting consumption of fruit, vegetables or both among children aged five years and under, and incorporating a dietary or biochemical assessment of fruit or vegetable consumption. Two review authors independently screened titles and abstracts of identified papers; a third review author resolved disagreements.\nData collection and analysis\nTwo review authors independently extracted data and assessed the risks of bias of included trials; a third review author resolved disagreements. Due to unexplained heterogeneity, we used random-effects models in meta-analyses for the primary review outcomes where we identified sufficient trials. We calculated standardised mean differences (SMDs) to account for the heterogeneity of fruit and vegetable consumption measures. We conducted assessments of risks of bias and evaluated the quality of evidence (GRADE approach) using Cochrane procedures.\nMain results\nWe included 80 trials with 218 trial arms and 12,965 participants. Fifty trials examined the impact of child-feeding practices (e.g. repeated food exposure) in increasing child vegetable intake. Fifteen trials examined the impact of parent nutrition education only in increasing child fruit and vegetable intake. Fourteen trials examined the impact of multicomponent interventions (e.g. parent nutrition education and preschool policy changes) in increasing child fruit and vegetable intake. Two trials examined the effect of a nutrition education intervention delivered to children in increasing child fruit and vegetable intake. One trial examined the impact of a child-focused mindfulness intervention in increasing vegetable intake.\nWe judged 23 of the 80 included trials as free from high risks of bias across all domains. Performance, detection and attrition bias were the most common domains judged at high risk of bias for the remaining trials.\nThere is low-quality evidence that child-feeding practices versus no intervention may have a small positive effect on child vegetable consumption, equivalent to an increase of 5.30 grams as-desired consumption of vegetables (SMD 0.50, 95% CI 0.29 to 0.71; 19 trials, 2140 participants; mean post-intervention follow-up = 8.3 weeks). 
Multicomponent interventions versus no intervention has a small effect on child consumption of fruit and vegetables (SMD 0.32, 95% CI 0.09 to 0.55; 9 trials, 2961 participants; moderate-quality evidence; mean post-intervention follow-up = 5.4 weeks), equivalent to an increase of 0.34 cups of fruit and vegetables a day. It is uncertain whether there are any short-term differences in child consumption of fruit and vegetables in meta-analyses of trials examining parent nutrition education versus no intervention (SMD 0.13, 95% CI −0.02 to 0.28; 11 trials, 3050 participants; very low-quality evidence; mean post-intervention follow-up = 13.2 weeks). We were unable to pool child nutrition education interventions in meta-analysis; both trials reported a positive intervention effect on child consumption of fruit and vegetables (low-quality evidence).\nVery few trials reported long-term effectiveness (6 trials), cost effectiveness (1 trial) or unintended adverse consequences of interventions (2 trials), limiting our ability to assess these outcomes. Trials reported receiving governmental or charitable funds, except for four trials reporting industry funding.\nAuthors' conclusions\nDespite identifying 80 eligible trials of various intervention approaches, the evidence for how to increase children's fruit and vegetable consumption remains limited in terms of quality of evidence and magnitude of effect. Of the types of interventions identified, there was moderate-quality evidence that multicomponent interventions probably lead to, and low-quality evidence that child-feeding practice may lead to, only small increases in fruit and vegetable consumption in children aged five years and under. It is uncertain whether parent nutrition education or child nutrition education interventions alone are effective in increasing fruit and vegetable consumption in children aged five years and under. Our confidence in effect estimates for all intervention approaches, with the exception of multicomponent interventions, is limited on the basis of the very low to low-quality evidence. Long-term follow-up of at least 12 months is required and future research should adopt more rigorous methods to advance the field.\nThis is a living systematic review. Living systematic reviews offer a new approach to review updating, in which the review is continually updated, incorporating relevant new evidence as it becomes available. Please refer to the Cochrane Database of Systematic Reviews for the current status of this review.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 80 trials with 12,965 people taking part. Fifty trials examined child-feeding practice interventions (e.g. repeated exposure to vegetables), 15 examined parent nutrition education interventions, 14 examined multicomponent interventions (e.g. combining preschool policy changes with parent education), two examined child nutrition education interventions and one examined a child-focused mindfulness intervention. Child-feeding practice interventions may lead to, and multicomponent interventions probably lead to, small increases in children's intake of fruit and vegetables in the short term (less than 12 months). It is uncertain whether parent or child nutrition education interventions alone are effective in increasing children's eating of fruit and vegetables. There was not enough information to assess long-term effectiveness, cost effectiveness or unintended harms. Trials reporting funding support received governmental or charitable funds, except for four trials that received industry funding."
}
] |
query-laysum | 24163 | [
{
"role": "user",
"content": "Abstract: Background\nExercise training is commonly recommended for adults with fibromyalgia. We defined flexibility exercise training programs as those involving movements of a joint or a series of joints, through complete range of motion, thus targeting major muscle-tendon units. This review is one of a series of reviews updating the first review published in 2002.\nObjectives\nTo evaluate the benefits and harms of flexibility exercise training in adults with fibromyalgia.\nSearch methods\nWe searched the Cochrane Library, MEDLINE, Embase, CINAHL (Cumulative Index to Nursing and Allied Health Literature), PEDro (Physiotherapy Evidence Database), Thesis and Dissertation Abstracts, AMED (Allied and Complementary Medicine Database), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), and ClinicalTrials.gov up to December 2017, unrestricted by language, and we reviewed the reference lists of retrieved trials to identify potentially relevant trials.\nSelection criteria\nWe included randomized trials (RCTs) including adults diagnosed with fibromyalgia based on published criteria. Major outcomes were health-related quality of life (HRQoL), pain intensity, stiffness, fatigue, physical function, trial withdrawals, and adverse events.\nData collection and analysis\nTwo review authors independently selected articles for inclusion, extracted data, performed 'Risk of bias' assessments, and assessed the certainty of the body of evidence for major outcomes using the GRADE approach. All discrepancies were rechecked, and consensus was achieved by discussion.\nMain results\nWe included 12 RCTs (743 people). Among these RCTs, flexibility exercise training was compared to an untreated control group, land-based aerobic training, resistance training, or other interventions (i.e. Tai Chi, Pilates, aquatic biodanza, friction massage, medications). Studies were at risk of selection, performance, and detection bias (due to lack of adequate randomization and allocation concealment, lack of participant or personnel blinding, and lack of blinding for self-reported outcomes). With the exception of withdrawals and adverse events, major outcomes were self-reported and were expressed on a 0-to-100 scale (lower values are best, negative mean differences (MDs) indicate improvement). We prioritized the findings of flexibility exercise training compared to land-based aerobic training and present them fully here.\nVery low-certainty evidence showed that compared with land-based aerobic training, flexibility exercise training (five trials with 266 participants) provides no clinically important benefits with regard to HRQoL, pain intensity, fatigue, stiffness, and physical function. Low-certainty evidence showed no difference between these groups for withdrawals at completion of the intervention (8 to 20 weeks).\nMean HRQoL assessed on the Fibromyalgia Impact Questionnaire (FIQ) Total scale (0 to 100, higher scores indicating worse HRQoL) was 46 mm and 42 mm in the flexibility and aerobic groups, respectively (2 studies, 193 participants); absolute change was 4% worse (6% better to 14% worse), and relative change was 7.5% worse (10.5% better to 25.5% worse) in the flexibility group. Mean pain was 57 mm and 52 mm in the flexibility and aerobic groups, respectively (5 studies, 266 participants); absolute change was 5% worse (1% better to 11% worse), and relative change was 6.7% worse (2% better to 15.4% worse). 
Mean fatigue was 67 mm and 71 mm in the aerobic and flexibility groups, respectively (2 studies, 75 participants); absolute change was 4% better (13% better to 5% worse), and relative change was 6% better (19.4% better to 7.4% worse). Mean physical function was 23 points and 17 points in the flexibility and aerobic groups, respectively (1 study, 60 participants); absolute change was 6% worse (4% better to 16% worse), and relative change was 14% worse (9.1% better to 37.1% worse). We found very low-certainty evidence of an effect for stiffness. Mean stiffness was 49 mm to 79 mm in the flexibility and aerobic groups, respectively (1 study, 15 participants); absolute change was 30% better (8% better to 51% better), and relative change was 39% better (10% better to 68% better). We found no evidence of an effect in all-cause withdrawal between the flexibility and aerobic groups (5 studies, 301 participants). Absolute change was 1% fewer withdrawals in the flexibility group (8% fewer to 21% more), and relative change in the flexibility group compared to the aerobic training intervention group was 3% fewer (39% fewer to 55% more). It is uncertain whether flexibility leads to long-term effects (36 weeks after a 12-week intervention), as the evidence was of low certainty and was derived from a single trial.\nVery low-certainty evidence indicates uncertainty in the risk of adverse events for flexibility exercise training. One adverse effect was described among the 132 participants allocated to flexibility training. One participant had tendinitis of the Achilles tendon (McCain 1988), but it is unclear if the tendinitis was a pre-existing condition.\nAuthors' conclusions\nWhen compared with aerobic training, it is uncertain whether flexibility improves outcomes such as HRQoL, pain intensity, fatigue, stiffness, and physical function, as the certainty of the evidence is very low. Flexibility exercise training may lead to little or no difference for all-cause withdrawals. It is also uncertain whether flexibility exercise training has long-term effects due to the very low certainty of the evidence. We downgraded the evidence owing to the small number of trials and participants across trials, as well as due to issues related to unclear and high risk of bias (selection, performance, and detection biases). While flexibility exercise training appears to be well tolerated (similar withdrawal rates across groups), evidence on adverse events was scarce, therefore its safety is uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Harms\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no clear information on harms. One study reported that one participant had swelling (tendinitis) of an ankle tendon (Achilles), but it is unclear if this was related to participation in the flexibility exercise."
}
] |
query-laysum | 24164 | [
{
"role": "user",
"content": "Abstract: Background\nPlatinum-based therapy, including cisplatin, carboplatin, oxaliplatin or a combination of these, is used to treat a variety of paediatric malignancies. Unfortunately, one of the most important adverse effects is the occurrence of hearing loss or ototoxicity. There is a wide variation in the reported prevalence of platinum-induced ototoxicity and the associated risk factors. More insight into the prevalence of and risk factors for platinum-induced hearing loss is essential in order to develop less ototoxic treatment protocols for the future treatment of children with cancer and to develop adequate follow-up protocols for childhood cancer survivors treated with platinum-based therapy.\nObjectives\nTo evaluate the existing evidence on the association between childhood cancer treatment including platinum analogues and the occurrence of hearing loss.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2015, Issue 8), MEDLINE (PubMed) (1945 to 23 September 2015) and EMBASE (Ovid) (1980 to 23 September 2015). In addition, we searched reference lists of relevant articles and the conference proceedings of the International Society for Paediatric Oncology (2008 to 2014), the American Society of Pediatric Hematology/Oncology (2008 to 2015) and the International Conference on Long-Term Complications of Treatment of Children and Adolescents for Cancer (2010 to 2015). Experts in the field provided information on additional studies.\nSelection criteria\nAll study designs, except case reports, case series (i.e. a description of non-consecutive participants) and studies including fewer than 100 participants treated with platinum-based therapy who had an ototoxicity assessment, examining the association between childhood cancer treatment including platinum analogues and the occurrence of hearing loss.\nData collection and analysis\nTwo review authors independently performed the study selection. One review author performed data extraction and risk of bias assessment, which was checked by another review author.\nMain results\nWe identified 13 eligible cohort studies including 2837 participants with a hearing test after treatment with a platinum analogue for different types of childhood cancers. All studies had methodological limitations, with regard to both internal (risk of bias) and external validity. Participants were treated with cisplatin, carboplatin or both, in varying doses. The reported prevalence of hearing loss varied considerably between 0% and 90.1%; none of the studies provided data on tinnitus. Three studies reported a prevalence of 0%, but none of these studies provided a definition for hearing loss and there might be substantial or even complete overlap in included participants between these three studies. When only studies that did provide a definition for hearing loss were included, the prevalence of hearing loss still varied widely between 1.7% and 90.1%. All studies were very heterogeneous with regard to, for example, definitions of hearing loss, used diagnostic tests, participant characteristics, (prior) anti-tumour treatment, other ototoxic drugs and length of follow-up. Therefore, pooling of results was not possible.\nOnly two studies included a control group of people who had not received platinum treatment. In one study, the prevalence of hearing loss was 67.1% (95% confidence interval (CI) 59.3% to 74.1%) in platinum-treated participants, while in the control participants it was 7.4% (95% CI 6.2% to 8.8%). 
However, hearing loss was detected by screening in survivors treated with platinum analogues and by clinical presentation in control participants. It is uncertain what the effect of this difference in follow-up/diagnostic testing was. In the other study, the prevalence of hearing loss was 20.1% (95% CI 17.4% to 23.2%) in platinum-treated participants and 0.4% (95% CI 0.12% to 1.6%) in control participants. As neither study was a randomized controlled trial or controlled clinical trial, the calculation of a risk ratio was not feasible as it is very likely that both groups differed more than only the platinum treatment.\nOnly two studies evaluated possible risk factors using multivariable analysis. One study identified a significantly higher risk of hearing loss in people treated with cisplatin 400 mg/m2 plus carboplatin 1700 mg/m2 as compared to treatment with cisplatin 400 mg/m2 or less, irrespective of the definition of hearing loss. They also identified a significantly higher risk of hearing loss in people treated with non-anthracycline aminoglycosides antibiotics (using a surrogate marker) as compared to people not treated with them, for three out of four definitions of hearing loss. The other study reported that age at treatment (odds ratio less than 1 for each single-unit increase) and single maximum cisplatin dose (odds ratio greater than 1 for each single-unit increase) were significant predictors for hearing loss, while gender was not.\nAuthors' conclusions\nThis systematic review shows that children treated with platinum analogues are at risk for developing hearing loss, but the exact prevalence and risk factors remain unclear. There were no data available for tinnitus. Based on the currently available evidence we can only advise that children treated with platinum analogues are screened for ototoxicity in order to make it possible to diagnose hearing loss early and to take appropriate measures. However, we are unable to give recommendations for specific follow-up protocols including frequency of testing. Counselling regarding the prevention of noise pollution can be considered, such as the use of noise-limiting equipment, avoiding careers with excess noise and ototoxic medication. Before definitive conclusions on the prevalence and associated risk factors of platinum-induced ototoxicity can be made, more high-quality research is needed. Accurate and transparent reporting of findings will make it possible for readers to appraise the results of these studies critically.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to September 2015.\nWe found 13 studies including 2837 participants with a hearing test after platinum-based therapy for different types of childhood cancers. Participants were treated with cisplatin, carboplatin or both, in varying doses. All studies were very different with regard to definitions of hearing loss, used diagnostic tests, participant characteristics, (prior) anti-cancer treatment, other ototoxic drugs and length of follow-up."
}
] |
query-laysum | 24165 | [
{
"role": "user",
"content": "Abstract: Background\nAntidepressants are a first-line treatment for adults with moderate to severe major depression. However, many people prescribed antidepressants for depression don't respond fully to such medication, and little evidence is available to inform the most appropriate 'next step' treatment for such patients, who may be referred to as having treatment-resistant depression (TRD). National Institute for Health and Care Excellence (NICE) guidance suggests that the 'next step' for those who do not respond to antidepressants may include a change in the dose or type of antidepressant medication, the addition of another medication, or the start of psychotherapy. Different types of psychotherapies may be used for TRD; evidence on these treatments is available but has not been collated to date.\nAlong with the sister review of pharmacological therapies for TRD, this review summarises available evidence for the effectiveness of psychotherapies for adults (18 to 74 years) with TRD with the goal of establishing the best 'next step' for this group.\nObjectives\nTo assess the effectiveness of psychotherapies for adults with TRD.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Controlled Trials Register (until May 2016), along with CENTRAL, MEDLINE, Embase, and PsycINFO via OVID (until 16 May 2017). We also searched the World Health Organization (WHO) trials portal (ICTRP) and ClinicalTrials.gov to identify unpublished and ongoing studies. There were no date or language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) with participants aged 18 to 74 years diagnosed with unipolar depression that had not responded to minimum four weeks of antidepressant treatment at a recommended dose. We excluded studies of drug intolerance. Acceptable diagnoses of unipolar depression were based onthe Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) or earlier versions, International Classification of Diseases (ICD)-10, Feighner criteria, or Research Diagnostic Criteria. We included the following comparisons.\n1. Any psychological therapy versus antidepressant treatment alone, or another psychological therapy.\n2. Any psychological therapy given in addition to antidepressant medication versus antidepressant treatment alone, or a psychological therapy alone.\nPrimary outcomes required were change in depressive symptoms and number of dropouts from study or treatment (as a measure of acceptability).\nData collection and analysis\nWe extracted data, assessed risk of bias in duplicate, and resolved disagreements through discussion or consultation with a third person. We conducted random-effects meta-analyses when appropriate. We summarised continuous outcomes using mean differences (MDs) or standardised mean differences (SMDs), and dichotomous outcomes using risk ratios (RRs).\nMain results\nWe included six trials (n = 698; most participants were women approximately 40 years of age). All studies evaluated psychotherapy plus usual care (with antidepressants) versus usual care (with antidepressants). Three studies addressed the addition of cognitive-behavioural therapy (CBT) to usual care (n = 522), and one each evaluated intensive short-term dynamic psychotherapy (ISTDP) (n = 60), interpersonal therapy (IPT) (n = 34), or group dialectical behavioural therapy (DBT) (n = 19) as the intervention. 
Most studies were small (except one trial of CBT was large), and all studies were at high risk of detection bias for the main outcome of self-reported depressive symptoms.\nA random-effects meta-analysis of five trials (n = 575) showed that psychotherapy given in addition to usual care (vs usual care alone) produced improvement in self-reported depressive symptoms (MD -4.07 points, 95% confidence interval (CI) -7.07 to -1.07 on the Beck Depression Inventory (BDI) scale) over the short term (up to six months). Effects were similar when data from all six studies were combined for self-reported depressive symptoms (SMD -0.40, 95% CI -0.65 to -0.14; n = 635). The quality of this evidence was moderate. Similar moderate-quality evidence of benefit was seen on the Patient Health Questionnaire-9 Scale (PHQ-9) from two studies (MD -4.66, 95% CI 8.72 to -0.59; n = 482) and on the Hamilton Depression Rating Scale (HAMD) from four studies (MD -3.28, 95% CI -5.71 to -0.85; n = 193).\nHigh-quality evidence shows no differential dropout (a measure of acceptability) between intervention and comparator groups over the short term (RR 0.85, 95% CI 0.58 to 1.24; six studies; n = 698).\nModerate-quality evidence for remission from six studies (RR 1.92, 95% CI 1.46 to 2.52; n = 635) and low-quality evidence for response from four studies (RR 1.80, 95% CI 1.2 to 2.7; n = 556) indicate that psychotherapy was beneficial as an adjunct to usual care over the short term.\nWith the addition of CBT, low-quality evidence suggests lower depression scores on the BDI scale over the medium term (12 months) (RR -3.40, 95% CI -7.21 to 0.40; two studies; n = 475) and over the long term (46 months) (RR -1.90, 95% CI -3.22 to -0.58; one study; n = 248). Moderate-quality evidence for adjunctive CBT suggests no difference in acceptability (dropout) over the medium term (RR 0.98, 95% CI 0.66 to 1.47; two studies; n = 549) and lower dropout over long term (RR 0.80, 95% CI 0.66 to 0.97; one study; n = 248).\nTwo studies reported serious adverse events (one suicide, two hospitalisations, and two exacerbations of depression) in 4.2% of the total sample, which occurred only in the usual care group (no events in the intervention group).\nAn economic analysis (conducted as part of an included study) from the UK healthcare perspective (National Health Service (NHS)) revealed that adjunctive CBT was cost-effective over nearly four years.\nAuthors' conclusions\nModerate-quality evidence shows that psychotherapy added to usual care (with antidepressants) is beneficial for depressive symptoms and for response and remission rates over the short term for patients with TRD. Medium- and long-term effects seem similarly beneficial, although most evidence was derived from a single large trial. Psychotherapy added to usual care seems as acceptable as usual care alone.\nFurther evidence is needed on the effectiveness of different types of psychotherapies for patients with TRD. No evidence currently shows whether switching to a psychotherapy is more beneficial for this patient group than continuing an antidepressant medication regimen. Addressing this evidence gap is an important goal for researchers.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Depression is a common problem often treated with antidepressant medication. However, many people do not get better with antidepressants. These patients may be said to have TRD. For these people, several different treatments can be tried - such as increasing the dose of medicine being taken, adding another medicine, or switching to a new one. Another option is to add or switch to a psychotherapy. Evidence indicates that psychotherapies can help in depression. What we don't know is whether psychotherapies work in people with TRD. This review aimed to answer this question."
}
] |
query-laysum | 24166 | [
{
"role": "user",
"content": "Abstract: Background\nJaundice is a very common condition in newborns, affecting up to 60% of term newborns and 80% of preterm newborns in the first week of life. Jaundice is caused by increased bilirubin in the blood from the breakdown of red blood cells. The gold standard for measuring bilirubin levels is obtaining a blood sample and processing it in a laboratory. However, noninvasive transcutaneous bilirubin (TcB) measurement devices are widely available and used in many settings to estimate total serum bilirubin (TSB) levels.\nObjectives\nTo determine the diagnostic accuracy of transcutaneous bilirubin measurement for detecting hyperbilirubinaemia in newborns.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and trial registries up to 18 August 2022. We also checked the reference lists of all included studies and relevant systematic reviews for other potentially eligible studies.\nSelection criteria\nWe included cross-sectional and prospective cohort studies that evaluated the accuracy of any TcB device compared to TSB measurement in term or preterm newborn infants (0 to 28 days postnatal age). All included studies provided sufficient data and information to create a 2 × 2 table for the calculation of measures of diagnostic accuracy, including sensitivities and specificities. We excluded studies that only reported correlation coefficients.\nData collection and analysis\nTwo review authors independently applied the eligibility criteria to all citations from the search and extracted data from the included studies using a standard data extraction form. We summarised the available results narratively and, where possible, we combined study data in a meta-analysis.\nMain results\nWe included 23 studies, involving 5058 participants. All studies had low risk of bias as measured by the QUADAS 2 tool. The studies were conducted in different countries and settings, included newborns of different gestational and postnatal ages, compared various TcB devices (including the JM 101, JM 102, JM 103, BiliChek, Bilitest and JH20-1C) and used different cutoff values for a positive result. In most studies, the TcB measurement was taken from the forehead, sternum, or both. The sensitivity of various TcB cutoff values to detect significant hyperbilirubinaemia ranged from 74% to 100%, and specificity ranged from 18% to 89%.\nAuthors' conclusions\nThe high sensitivity of TcB to detect hyperbilirubinaemia suggests that TcB devices are reliable screening tests for ruling out hyperbilirubinaemia in newborn infants. Positive test results would require confirmation through serum bilirubin measurement.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The studies included in this review suggest that measuring bilirubin levels through the skin without a needle can identify high bilirubin levels in newborns."
}
] |
query-laysum | 24167 | [
{
"role": "user",
"content": "Abstract: Background\nA previous version of this Cochrane Review identified that insecticide-treated nets (ITNs) are effective at reducing child mortality, parasite prevalence, and uncomplicated and severe malaria episodes. Insecticide-treated nets have since become a core intervention for malaria control and have contributed greatly to the dramatic decline in disease incidence and malaria-related deaths seen since the turn of the millennium. However, this time period has also seen a rise in resistance to pyrethroids (the insecticide used in ITNs), raising questions over whether the evidence from trials conducted before resistance became widespread can be applied to estimate the impact of ITNs on malaria transmission today.\nObjectives\nThe primary objective of this review was to assess the impact of ITNs on mortality and malaria morbidity, incorporating any evidence published since the previous update into new and existing analyses, and assessing the certainty of the resulting evidence using GRADE.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL) published in the Cochrane Library, MEDLINE, Embase, LILACS, the World Health Organization (WHO) International Clinical Trials Registry Platform, ClinicalTrials.gov, and the ISRCTN registry for new trials published since 2004 and up to 18 April 2018.\nSelection criteria\nWe included individual randomized controlled trials (RCTs) and cluster RCTs comparing bed nets or curtains treated with a synthetic pyrethroid insecticide at a minimum target impregnation dose recommended by the WHO with no nets or untreated nets.\nData collection and analysis\nOne review author assessed the identified trials for eligibility and risk of bias, and extracted data. We compared intervention and control data using risk ratios (RRs), rate ratios, and mean differences, and presented all results with their associated 95% confidence intervals (CIs). We assessed the certainty of evidence using the GRADE approach. We drew on evidence from a meta-analysis of entomological outcomes stratified by insecticide resistance from 2014 to inform the GRADE assessments.\nMain results\nOur updated search identified three new trials. A total of 23 trials met the inclusion criteria, enrolling more than 275,793 adults and children. The included studies were conducted between 1987 and 2001.\nITN versus no nets\nInsecticide-treated nets reduce child mortality from all causes by 17% compared to no nets (rate ratio 0.83, 95% CI 0.77 to 0.89; 5 trials, 200,833 participants, high-certainty evidence). This corresponds to a saving of 5.6 lives (95% CI 3.6 to 7.6) each year for every 1000 children protected with ITNs. Insecticide-treated nets also reduce the incidence of uncomplicated episodes of Plasmodium falciparum malaria by almost a half (rate ratio 0.55, 95% CI 0.48 to 0.64; 5 trials, 35,551 participants, high-certainty evidence) and probably reduce the incidence of uncomplicated episodes of Plasmodium vivax malaria (risk ratio (RR) 0.61, 95% CI 0.48 to 0.77; 2 trials, 10,967 participants, moderate-certainty evidence).\nInsecticide-treated nets were also shown to reduce the prevalence of P falciparum malaria by 17% compared to no nets (RR 0.83, 95% CI 0.71 to 0.98; 6 trials, 18,809 participants, high-certainty evidence) but may have little or no effect on the prevalence of P vivax malaria (RR 1.00, 95% CI 0.75 to 1.34; 2 trials, 10,967 participants, low-certainty evidence). 
A 44% reduction in the incidence of severe malaria episodes was seen in the ITN group (rate ratio 0.56, 95% CI 0.38 to 0.82; 2 trials, 31,173 participants, high-certainty evidence), as well as an increase in mean haemoglobin (expressed as mean packed cell volume) compared to the no-net group (mean difference 1.29, 95% CI 0.42 to 2.16; 5 trials, 11,489 participants, high-certainty evidence).\nITN versus untreated nets\nInsecticide-treated nets probably reduce child mortality from all causes by a third compared to untreated nets (rate ratio 0.67, 95% CI 0.36 to 1.23; 2 trials, 25,389 participants, moderate-certainty evidence). This corresponds to a saving of 3.5 lives (95% CI -2.4 to 6.8) each year for every 1000 children protected with ITNs. Insecticide-treated nets also reduce the incidence of uncomplicated P falciparum malaria episodes (rate ratio 0.58, 95% CI 0.44 to 0.78; 5 trials, 2036 participants, high-certainty evidence) and may also reduce the incidence of uncomplicated P vivax malaria episodes (rate ratio 0.73, 95% CI 0.51 to 1.05; 3 trials, 1535 participants, low-certainty evidence).\nUse of an ITN probably reduces P falciparum prevalence by one-tenth in comparison to use of untreated nets (RR 0.91, 95% CI 0.78 to 1.05; 3 trials, 2,259 participants, moderate-certainty evidence). However, based on the current evidence it is unclear whether or not ITNs impact on P vivax prevalence (1 trial, 350 participants, very low certainty evidence) or mean packed cell volume (2 trials, 1,909 participants, low certainty evidence).\nAuthors' conclusions\nAlthough there is some evidence that insecticide resistance frequency has some effects on mosquito mortality, it is unclear how quantitatively important this is. It appeared insufficient to downgrade the strong evidence of benefit on mortality and malaria illness from the trials conducted earlier.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review update summarized trials published since the previous review that evaluated the impact of ITNs on malaria-related deaths and illness, compared to both no nets and untreated nets. After searching for relevant trials up to 18 April 2018, we identified three new randomized controlled trials (studies in which participants are assigned to a treatment group using a random method). In total, we included 23 trials, enrolling more than 275,000 adults and children, to evaluate the effectiveness of ITNs for reducing the burden of malaria. The included studies provided evidence of the impact of ITNs on infection from two types of malaria parasites, Plasmodium falciparum and Plasmodium vivax."
}
] |
query-laysum | 24168 | [
{
"role": "user",
"content": "Abstract: Background\nNasal masks and nasal prongs are used as interfaces for providing continuous positive airway pressure (CPAP) for preterm infants with or at risk of respiratory distress, either as primary support after birth or as ongoing support after endotracheal extubation from mechanical ventilation. It is unclear which type of interface is associated with lower rates of CPAP treatment failure, nasal trauma, or mortality and other morbidity.\nObjectives\nTo assess the benefits and harms of nasal masks versus nasal prongs for reducing CPAP treatment failure, nasal trauma, or mortality and other morbidity in newborn preterm infants with or at risk of respiratory distress.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was October 2021.\nSelection criteria\nWe included randomised controlled trials comparing masks versus prongs as interfaces for delivery of nasal CPAP in newborn preterm infants (less than 37 weeks' gestation) with or at risk of respiratory distress.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. treatment failure, 2. all-cause mortality, and 3. neurodevelopmental impairment. Our secondary outcomes were 4. pneumothorax, 5. moderate–severe nasal trauma, 6. bronchopulmonary dysplasia, 7. duration of CPAP use, 8. duration of oxygen supplementation, 9. duration of hospitalisation, 10. patent ductus arteriosus receiving medical or surgical treatment, 11. necrotising enterocolitis, 12. severe intraventricular haemorrhage, and 13. severe retinopathy of prematurity. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included 12 trials with 1604 infants. All trials were small (median number of participants 118). The trials occurred after 2001 in care facilities internationally, predominantly in India (eight trials). Most participants were preterm infants of 26 to 34 weeks' gestation who received nasal CPAP as the primary form of respiratory support after birth. The studied interfaces included commonly used commercially available masks and prongs. Lack of measures to blind caregivers or investigators was a potential source of performance and detection bias in all the trials.\nMeta-analyses suggested that use of masks compared with prongs may reduce CPAP treatment failure (risk ratio (RR) 0.72, 95% confidence interval (CI) 0.58 to 0.90; 8 trials, 919 infants; low certainty). The type of interface may not affect mortality prior to hospital discharge (RR 0.83, 95% CI 0.56 to 1.22; 7 trials, 814 infants; low certainty). There are no data on neurodevelopmental impairment. Meta-analyses suggest that the choice of interface may result in little or no difference in the risk of pneumothorax (RR 0.93, 95% CI 0.45 to 1.93; 5 trials, 625 infants; low certainty). Use of masks rather than prongs may reduce the risk of moderate–severe nasal injury (RR 0.55, 95% CI 0.44 to 0.71; 10 trials, 1058 infants; low certainty). The evidence is very uncertain about the effect on bronchopulmonary dysplasia (RR 0.69, 95% CI 0.46 to 1.03; 7 trials, 843 infants; very low certainty).\nAuthors' conclusions\nThe available trial data provide low-certainty evidence that use of masks compared with prongs as the nasal CPAP interface may reduce treatment failure and nasal injury, and may have little or no effect on mortality or the risk of pneumothorax in newborn preterm infants with or at risk of respiratory distress. The effect on bronchopulmonary dysplasia is very uncertain. 
Large, high-quality trials would be needed to provide evidence of sufficient validity and applicability to inform policy and practice.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence for the effects of masks versus prongs for CPAP in preterm babies is low or very low because of concerns that the methods used in the included trials may have introduced biases and there were limited amounts of data from the trials. Consequently, our confidence in the results is limited, and the true effects may be substantially different from what we found."
}
] |
query-laysum | 24169 | [
{
"role": "user",
"content": "Abstract: Background\nNausea and vomiting remain a problem for children undergoing treatment for malignancies despite new antiemetic therapies. Optimising antiemetic regimens could improve quality of life by reducing nausea, vomiting, and associated clinical problems. This is an update of the original systematic review.\nObjectives\nTo assess the effectiveness and adverse events of pharmacological interventions in controlling anticipatory, acute, and delayed nausea and vomiting in children and young people (aged less than 18 years) about to receive or receiving chemotherapy.\nSearch methods\nSearches included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, LILACS, PsycINFO, conference proceedings of the American Society of Clinical Oncology, International Society of Paediatric Oncology, Multinational Association of Supportive Care in Cancer, and ISI Science and Technology Proceedings Index from incept to December 16, 2014, and trial registries from their earliest records to December 2014. We examined references of systematic reviews and contacted trialists for information on further studies. We also screened the reference lists of included studies.\nSelection criteria\nTwo review authors independently screened abstracts in order to identify randomised controlled trials (RCTs) that compared a pharmacological antiemetic, cannabinoid, or benzodiazepine with placebo or any alternative active intervention in children and young people (less than 18 years) with a diagnosis of cancer who were to receive chemotherapy.\nData collection and analysis\nTwo review authors independently extracted outcome and quality data from each RCT. When appropriate, we undertook meta-analysis.\nMain results\nWe included 34 studies that examined a range of different antiemetics, used different doses and comparators, and reported a variety of outcomes. The quality and quantity of included studies limited the exploration of heterogeneity to narrative approaches only.\nThe majority of quantitative data related to the complete control of acute vomiting (27 studies). Adverse events were reported in 29 studies and nausea outcomes in 16 studies.\nTwo studies assessed the addition of dexamethasone to 5-HT3 antagonists for complete control of vomiting (pooled risk ratio (RR) 2.03; 95% confidence interval (CI) 1.35 to 3.04). Three studies compared granisetron 20 mcg/kg with 40 mcg/kg for complete control of vomiting (pooled RR 0.93; 95% CI 0.80 to 1.07). Three studies compared granisetron with ondansetron for complete control of acute nausea (pooled RR 1.05; 95% CI 0.94 to 1.17; 2 studies), acute vomiting (pooled RR 2.26; 95% CI 2.04 to 2.51; 3 studies), delayed nausea (pooled RR 1.13; 95% CI 0.93 to 1.38; 2 studies), and delayed vomiting (pooled RR 1.13; 95% CI 0.98 to 1.29; 2 studies). No other pooled analyses were possible.\nNarrative synthesis suggests that 5-HT3 antagonists are more effective than older antiemetic agents, even when these agents are combined with a steroid. Cannabinoids are probably effective but produce frequent side effects.\nAuthors' conclusions\nOur overall knowledge of the most effective antiemetics to prevent chemotherapy-induced nausea and vomiting in childhood is incomplete. Future research should be undertaken in consultation with children, young people, and families that have experienced chemotherapy and should make use of validated, age-appropriate measures. 
This review suggests that 5-HT3 antagonists are effective in patients who are to receive emetogenic chemotherapy, with granisetron or palonosetron possibly better than ondansetron. Adding dexamethasone improves control of vomiting, although the risk-benefit profile of adjunctive steroid remains uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The use of chemotherapy to treat cancer in children and young people can produce nausea (a sensation that one might vomit) and vomiting. These extremely unpleasant sensations continue to be a problem despite better antiemetic (antisickness) drugs."
}
] |
query-laysum | 24170 | [
{
"role": "user",
"content": "Abstract: Background\nThe ideal quantity of dietary protein for formula-fed low birth weight infants is still a matter of debate. Protein intake must be sufficient to achieve normal growth without leading to negative effects such as acidosis, uremia, and elevated levels of circulating amino acids.\nObjectives\nTo determine whether higher (≥ 3.0 g/kg/d) versus lower (< 3.0 g/kg/d) protein intake during the initial hospital stay of formula-fed preterm infants or low birth weight infants (< 2.5 kilograms) results in improved growth and neurodevelopmental outcomes without evidence of short- or long-term morbidity.\nSpecific objectives were to examine the following comparisons of interventions and to conduct subgroup analyses if possible.\n1. Low protein intake if the amount was less than 3.0 g/kg/d.\n2. High protein intake if the amount was equal to or greater than 3.0 g/kg/d but less than 4.0 g/kg/d.\n3. Very high protein intake if the amount was equal to or greater than 4.0 g/kg/d.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 8), in the Cochrane Library (August 2, 2019); OVID MEDLINE Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily, and Ovid MEDLINE(R) (to August 2, 2019); MEDLINE via PubMed (to August 2, 2019) for the previous year; and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (to August 2, 2019). We also searched clinical trials databases and the reference lists of retrieved articles for randomized controlled trials (RCTs) and quasi-randomized trials.\nSelection criteria\nWe included RCTs contrasting levels of formula protein intake as low (< 3.0 g/kg/d), high (≥ 3.0 g/kg/d but < 4.0 g/kg/d), or very high (≥ 4.0 g/kg/d) in formula-fed hospitalized neonates weighing less than 2.5 kilograms. We excluded studies if infants received partial parenteral nutrition during the study period, or if infants were fed formula as a supplement to human milk.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane and the GRADE approach to assess the certainty of evidence.\nMain results\nWe identified six eligible trials that enrolled 218 infants through searches updated to August 2, 2019. Five studies compared low (< 3 g/kg/d) versus high (3.0 to 4.0 g/kg/d) protein intake using formulas that kept other nutrients constant. The trials were small (n = 139), and almost all had methodological limitations; the most frequent uncertainty was about attrition. Low-certainty evidence suggests improved weight gain (mean difference [MD] 2.36 g/kg/d, 95% confidence interval [CI] 1.31 to 3.40) and higher nitrogen accretion in infants receiving formula with higher protein content (3.0 to 4.0 g/kg/d) versus lower protein content (< 3 g/kg/d), while other nutrients were kept constant. No significant differences were seen in rates of necrotizing enterocolitis, sepsis, or diarrhea. We are uncertain whether high versus low protein intake affects head growth (MD 0.37 cm/week, 95% CI 0.16 to 0.58; n = 18) and length gain (MD 0.16 cm/week, 95% CI -0.02 to 0.34; n = 48), but sample sizes were small for these comparisons.\nOne study compared high (3.0 to 4.0 g/kg/d) versus very high (≥ 4 g/kg/d) protein intake (average intakes were 3.6 and 4.1 g/kg/d) during and after an initial hospital stay (n = 77). 
Moderate-certainty evidence shows no significant differences in weight gain or length gain to discharge, term, and 12 weeks corrected age from very high protein intake (4.1 versus 3.6 g/kg/d). Three of the 24 infants receiving very high protein intake developed uremia.\nAuthors' conclusions\nHigher protein intake (≥ 3.0 g/kg/d but < 4.0 g/kg/d) from formula accelerates weight gain. However, limited information is available regarding the impact of higher formula protein intake on long-term outcomes such as neurodevelopment. Research is needed to investigate the safety and effectiveness of protein intake ≥ 4.0 g/kg/d.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Infants grow quickly and need more protein for each kilogram of body weight than older children and adults. Infants born with low birth weight, for example, those who are born prematurely, need more protein because of their fast growth rates."
}
] |
query-laysum | 24171 | [
{
"role": "user",
"content": "Abstract: Background\nPelvic organ prolapse (POP) affects as many as 50% of parous women, with 14% to 19% of women undergoing a surgical correction. Although surgery for the treatment of POP is common, limited supportive data can be found in the literature regarding the preoperative and postoperative interventions related to these procedures. The main goal of perioperative interventions is to reduce the rate of adverse events while improving women's outcomes following surgical intervention for prolapse. A broad spectrum of perioperative interventions are available, and although the benefits of interventions such as prophylactic antibiotics before abdominal surgery are well established, others are unique to women undergoing POP surgeries and as such need to be investigated separately.\nObjectives\nThe aim of this review is to compare the safety and effectiveness of a range of perioperative interventions versus other interventions or no intervention (control group) at the time of surgery for pelvic organ prolapse.\nSearch methods\nWe searched the Cochrane Incontinence Group Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In Process, ClinicalTrials.gov, WHO ICTRP, handsearching of journals and conference proceedings (searched 30 November 2017), and reference lists of relevant articles. We also contacted researchers in the field.\nSelection criteria\nWe included randomised controlled trials (RCTs) of women undergoing surgical treatment for symptomatic pelvic organ prolapse that compared a perioperative intervention related to pelvic organ prolapse surgery versus no treatment or another intervention.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. Our primary outcomes were objective failure at any site and subjective postoperative prolapse symptoms. We also measured adverse effects, focusing on intraoperative blood loss and blood transfusion, intraoperative ureteral injury, and postoperative urinary tract infection.\nMain results\nWe included 15 RCTs that compared eight different interventions versus no treatment for 1992 women in five countries. Most interventions were assessed by only one RCT with evidence quality ranging from very low to moderate. The main limitation was imprecision, associated with small sample sizes and low event rates.\nPelvic floor muscle training (PFMT) compared with no treatment (three RCTs) - peri-operative intervention\nThe simplest of the PFMT programmes required women to attend six perioperative consultations in the three months surrounding prolapse surgery. Trial results provided no clear evidence of a difference between groups in objective failure at any site at 12 to 24 months (odds ratio (OR) 0.93, 95% confidence interval (CI) 0.56 to 1.54; two RCTs, 327 women; moderate-quality evidence). With respect to awareness of prolapse, findings were inconsistent. One RCT found no evidence of a difference between groups at 24 months (OR 1.07, 95% CI 0.61 to 1.87; one RCT, 305 women; low-quality evidence), and a second small RCT reported symptom reduction from the Pelvic Organ Prolapse Symptom Questionnaire completed by the intervention group at 12 months (mean difference (MD) -3.90, 95% CI -6.11 to -1.69; one RCT, 27 women; low-quality evidence). 
Researchers found no clear differences between groups at 24-month follow-up in rates of repeat surgery (or pessary) for prolapse (OR 1.92, 95% CI 0.74 to 5.02; one RCT, 316 women; low-quality evidence).\nOther interventions\nSingle RCTs evaluated the following interventions: preoperative guided imagery (N = 44); injection of vasoconstrictor agent at commencement of vaginal prolapse surgery (N = 76); ureteral stent placement during uterosacral ligament suspension (N = 91); vaginal pack (N = 116); prophylactic antibiotics for women requiring postoperative urinary catheterisation (N = 159); and postoperative vaginal dilators (N = 60).\nTwo RCTs evaluated bowel preparation (N = 298), and four RCTs assessed the method and timing of postoperative catheterisation (N = 514) - all in different comparisons.\nNone of these studies reported our primary review outcomes. One study reported intraoperative blood loss and suggested that vaginal injection of vasoconstrictors at commencement of surgery may reduce blood loss by a mean of about 30 mL. Another study reported intraoperative ureteral injury and found no clear evidence that ureteral stent placement reduces ureteral injury. Three RCTs reported postoperative urinary tract infection and found no conclusive evidence that rates of urinary tract infection were influenced by use of a vaginal pack, prophylactic antibiotics, or vaginal dilators. Other studies did not report these outcomes.\nAuthors' conclusions\nThere was a paucity of data about perioperative interventions in pelvic organ prolapse surgery. A structured programme of pelvic floor muscle training before and after prolapse surgery did not consistently demonstrate any benefit for the intervention; however, this finding is based on the results of two small studies. With regard to other interventions (preoperative bowel preparation and injection of vasoconstrictor agent, ureteral stent placement during uterosacral ligament suspension, postoperative vaginal pack insertion, use of vaginal dilators, prophylactic antibiotics for postoperative catheter care), we found no evidence regarding rates of recurrent prolapse and no clear evidence that these interventions were associated with clinically meaningful reductions in adverse effects, such as intraoperative or postoperative blood transfusion, intraoperative ureteral injury, or postoperative urinary tract infection.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cochrane review authors evaluated randomised controlled trials (RCTs) that compared prolapse surgery with and without any perioperative (before, during, or after) interventions. The evidence is current to 30 November 2017. Reviewers included 15 trials that evaluated eight different interventions related to prolapse surgery. Although primary outcomes of the review were objective failure (recurrence of prolapse on examination) and awareness of prolapse, reviewers also measured adverse effects, focusing on intraoperative blood loss, intraoperative ureteral injury, postoperative urinary tract infection, and repeat surgery."
}
] |
query-laysum
|
24172
|
[
{
"role": "user",
"content": "Abstract: Background\nBone is the most common site of metastatic disease associated with breast cancer (BC). Bisphosphonates inhibit osteoclast-mediated bone resorption, and novel targeted therapies such as denosumab inhibit other key bone metabolism pathways. We have studied these agents in both early breast cancer and advanced breast cancer settings. This is an update of the review originally published in 2002 and subsequently updated in 2005 and 2012.\nObjectives\nTo assess the effects of bisphosphonates and other bone agents in addition to anti-cancer treatment: (i) in women with early breast cancer (EBC); (ii) in women with advanced breast cancer without bone metastases (ABC); and (iii) in women with metastatic breast cancer and bone metastases (BCBM).\nSearch methods\nIn this review update, we searched Cochrane Breast Cancer's Specialised Register, CENTRAL, MEDLINE, Embase, the World Health Organization's International Clinical Trials Registry Platform (WHO ICTRP) and ClinicalTrials.gov on 19 September 2016.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing: (a) one treatment with a bisphosphonate/bone-acting agent with the same treatment without a bisphosphonate/bone-acting agent; (b) treatment with one bisphosphonate versus treatment with a different bisphosphonate; (c) treatment with a bisphosphonate versus another bone-acting agent of a different mechanism of action (e.g. denosumab); and (d) immediate treatment with a bisphosphonate/bone-acting agent versus delayed treatment of the same bisphosphonate/bone-acting agent.\nData collection and analysis\nTwo review authors independently extracted data, and assessed risk of bias and quality of the evidence. The primary outcome measure was bone metastases for EBC and ABC, and a skeletal-related event (SRE) for BCBM. We derived risk ratios (RRs) for dichotomous outcomes and the meta-analyses used random-effects models. Secondary outcomes included overall survival and disease-free survival for EBC; we derived hazard ratios (HRs) for these time-to-event outcomes where possible. We collected toxicity and quality-of-life information. GRADE was used to assess the quality of evidence for the most important outcomes in each treatment setting.\nMain results\nWe included 44 RCTs involving 37,302 women.\nIn women with EBC, bisphosphonates were associated with a reduced risk of bone metastases compared to placebo/no bisphosphonate (RR 0.86, 95% confidence interval (CI) 0.75 to 0.99; P = 0.03, 11 studies; 15,005 women; moderate-quality evidence with no significant heterogeneity). Bisphosphonates provided an overall survival benefit with time-to-event data (HR 0.91, 95% CI 0.83 to 0.99; P = 0.04; 9 studies; 13,949 women; high-quality evidence with evidence of heterogeneity). Subgroup analysis by menopausal status showed a survival benefit from bisphosphonates in postmenopausal women (HR 0.77, 95% CI 0.66 to 0.90; P = 0.001; 4 studies; 6048 women; high-quality evidence with no evidence of heterogeneity) but no survival benefit for premenopausal women (HR 1.03, 95% CI 0.86 to 1.22; P = 0.78; 2 studies; 3501 women; high-quality evidence with no heterogeneity). 
There was evidence of no effect of bisphosphonates on disease-free survival (HR 0.94, 95% CI 0.87 to 1.02; P = 0.13; 7 studies; 12,578 women; high-quality evidence with significant heterogeneity present); however, subgroup analyses showed a disease-free survival benefit from bisphosphonates in postmenopausal women only (HR 0.82, 95% CI 0.74 to 0.91; P < 0.001; 7 studies; 8314 women; high-quality evidence with no heterogeneity). Bisphosphonates did not significantly reduce the incidence of fractures when compared to placebo/no bisphosphonates (RR 0.77, 95% CI 0.54 to 1.08, P = 0.13, 6 studies, 7602 women; moderate-quality evidence due to wide confidence intervals). We await mature overall survival and disease-free survival results for denosumab trials.\nIn women with ABC without clinically evident bone metastases, there was no evidence of an effect of bisphosphonates on bone metastases (RR 0.96, 95% CI 0.65 to 1.43; P = 0.86; 3 studies; 330 women; moderate-quality evidence with no heterogeneity) or overall survival (RR 0.89, 95% CI 0.73 to 1.09; P = 0.28; 3 studies; 330 women; high-quality evidence with no heterogeneity) compared to placebo/no bisphosphonates; however, the confidence intervals were wide. One study reported a trend towards an extended period of time without an SRE with bisphosphonate compared to placebo (low-quality evidence). One study reported quality of life and there was no apparent difference in scores between bisphosphonate and placebo (moderate-quality evidence).\nIn women with BCBM, bisphosphonates reduced the SRE risk by 14% (RR 0.86, 95% CI 0.78 to 0.95; P = 0.003; 9 studies; 2810 women; high-quality evidence with evidence of heterogeneity) compared with placebo/no bisphosphonates. This benefit persisted when administering either intravenous or oral bisphosphonates versus placebo. Bisphosphonates delayed the median time to an SRE with a median ratio of 1.43 (95% CI 1.29 to 1.58; P < 0.00001; 9 studies; 2891 women; high-quality evidence with no heterogeneity) and reduced bone pain (in 6 out of 11 studies; moderate-quality evidence) compared to placebo/no bisphosphonate. Treatment with bisphosphonates did not appear to affect overall survival (RR 1.01, 95% CI 0.91 to 1.11; P = 0.85; 7 studies; 1935 women; moderate-quality evidence with significant heterogeneity). Quality-of-life scores were slightly better with bisphosphonates than placebo at comparable time points (in three out of five studies; moderate-quality evidence); however, scores decreased during the course of the studies. Denosumab reduced the risk of developing an SRE compared with bisphosphonates by 22% (RR 0.78, 95% CI 0.72 to 0.85; P < 0.001; 3 studies, 2345 women). One study reported data on overall survival and observed no difference in survival between denosumab and bisphosphonate.\nReported toxicities across all settings were generally mild. Osteonecrosis of the jaw was rare, occurring in less than 0.5% of women in the adjuvant setting (high-quality evidence).\nAuthors' conclusions\nFor women with EBC, bisphosphonates reduce the risk of bone metastases and provide an overall survival benefit compared to placebo or no bisphosphonates. There is preliminary evidence suggesting that bisphosphonates provide an overall survival and disease-free survival benefit in postmenopausal women only when compared to placebo or no bisphosphonate. This was not a planned subgroup for these early trials, and we await the completion of new large clinical trials assessing benefit for postmenopausal women. 
For women with BCBM, bisphosphonates reduce the risk of developing SREs, delay the median time to an SRE, and appear to reduce bone pain compared to placebo or no bisphosphonate.\n\nGiven the provided abstract, please respond to the following query: \"Study questions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The goal of bisphosphonates and denosumab differs based on the women's breast cancer status.\nWe asked three main questions:"
}
] |
query-laysum
|
24173
|
[
{
"role": "user",
"content": "Abstract: Background\nThe frequency of skin ulceration makes an important contributor to the morbidity burden in people with sickle cell disease. Many treatment options are available to the healthcare professional, although it is uncertain which treatments have been assessed for effectiveness in people with sickle cell disease. This is an update of a previously published Cochrane Review.\nObjectives\nTo assess the clinical effectiveness and harms of interventions for treating leg ulcers in people with sickle cell disease.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register.\nWe searched LILACS (1982 to January 2020), ISI Web of Knowledge (1985 to January 2020), and the Clinical Trials Search Portal of the World Health Organization (January 2020). We checked the reference lists of all the trials identified. We also contacted those groups or individuals who may have completed relevant randomised trials in this area.\nDate of the last search of the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register: 13 January 2020; date of the last search of the Cochrane Wounds Group Trials Register: 17 February 2017.\nSelection criteria\nRandomised controlled trials of interventions for treating leg ulcers in people with sickle cell disease compared to placebo or an alternative treatment.\nData collection and analysis\nTwo authors independently selected studies for inclusion. All three authors independently assessed the risk of bias of the included studies and extracted data. We used GRADE to assess the quality of the evidence.\nMain results\nSix studies met the inclusion criteria (198 participants with 250 ulcers). Each trial investigated a different intervention and within this review we have grouped these as systemic pharmaceutical interventions (L-cartinine, arginine butyrate, isoxsuprine) and topical pharmaceutical interventions (Solcoseryl® cream, arginine-glycine-aspartic acid (RGD) peptide dressing and topical antibiotics). No trials on non-pharmaceutical interventions were included in the review. All trials had an overall unclear or high risk of bias, and drug companies sponsored four of them. We were unable to pool findings due to the heterogeneity in outcome definitions, and inconsistency between the units of randomisation and analysis.\nThree interventions reported on the change in ulcer size (arginine butyrate, RGD peptide, L-cartinine). Of these, only arginine butyrate showed a reduction of ulcer size compared with a control group, mean reduction -5.10 cm² (95% CI -9.65 to -0.55), but we are uncertain whether this reduces ulcer size compared to standard care alone as the certainty of the evidence has been assessed as very low. Three trials reported on complete leg ulcer closure (isoxsuprine, arginine butyrate, RGD peptide matrix; very low quality of evidence). None reported a clinical benefit. No trial reported on: the time to complete ulcer healing; ulcer-free survival following treatment for sickle cell leg ulcers; quality of life measures; incidence of amputation or harms.\nAuthors' conclusions\nGiven the very low quality of the evidence identified in this updated Cochrane Review we are uncertain whether any of the assessed pharmaceutical interventions reduce ulcer size or result in leg ulcer closure in treated participants compared to controls. However, this intervention was assessed as having a high risk of bias due to inadequacies in the single trial report. 
Other included studies were also assessed as having an unclear or high risk of bias. The harm profile of all the interventions remains inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is very limited evidence available on the use of interventions to treat people with sickle cell disease and chronic leg ulceration. All randomised clinical trials that we included in this review were associated with an unclear or high risk of bias, providing very low-quality evidence. This systematic review has shown the need for well-designed, high-quality randomised trials to assess the benefits and harms of interventions to improve the healing of leg ulcers in people with sickle cell disease."
}
] |
query-laysum
|
24175
|
[
{
"role": "user",
"content": "Abstract: Background\nPrimary angle-closure glaucoma (PACG) is characterized by a rise in intraocular pressure (IOP) secondary to aqueous outflow obstruction, with relative pupillary block being the most common underlying mechanism. There is increasing evidence that lens extraction may relieve pupillary block and thereby improve IOP control. As such, comparing the effectiveness of lens extraction against other commonly used treatment modalities can help inform the decision-making process.\nObjectives\nTo assess the effectiveness of lens extraction compared with other interventions in the treatment of chronic PACG in people without previous acute angle-closure attacks.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, one other database, and two trials registers (December 2019). We also screened the reference lists of included studies and the Science Citation Index database. We had no date or language restrictions.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing lens extraction with other treatment modalities for chronic PACG.\nData collection and analysis\nWe followed standard Cochrane methodology.\nMain results\nWe identified eight RCTs with 914 eyes. We obtained data for participants meeting our inclusion criteria for these studies (PACG only, no previous acute angle-closure attacks), resulting in 513 eyes included in this review. The participants were recruited from a diverse range of countries. We were unable to conduct meta-analyses due to different follow-up periods and insufficient data.\nOne study compared phacoemulsification with laser peripheral iridotomy (LPI) as standard care. Participants in the phacoemulsification group were less likely to experience progression of visual field loss (odds ratio [OR] 0.35, 95% confidence interval [CI] 0.13 to 0.91; 216 eyes; moderate certainty evidence), and required fewer IOP-lowering medications (mean difference [MD] -0.70, 95% CI -0.89 to -0.51; 263 eyes; moderate certainty evidence) compared with standard care at 12 months. Moderate certainty evidence also suggested that phacoemulsification improved gonioscopic findings at 12 months or later (MD -84.93, 95% CI -131.25 to -38.61; 106 eyes). There was little to no difference in health-related quality of life measures (MD 0.04, 95% CI -0.16 to 0.24; 254 eyes; moderate certainty evidence), and visual acuity (VA) (MD 2.03 ETDRS letter, 95% CI -0.77 to 4.84; 242 eyes) at 12 months, and no observable difference in mean IOP (MD -0.03mmHg, 95% CI -2.34 to 2.32; 257 eyes; moderate certainty evidence) compared to standard care. Irreversible loss of vision was observed in one participant in the phacoemulsification group, and three participants in standard care at 36 months (moderate-certainty evidence).\nOne study (91 eyes) compared phacoemulsification with phaco-viscogonioplasty (phaco-VGP). Low-certainty evidence suggested that fewer IOP-lowering medications were needed at 12 months with phacoemulsification (MD -0.30, 95% CI -0.55 to -0.05). Low-certainty evidence also suggested that phacoemulsification may have improved gonioscopic findings at 12 months or later compared to phaco-VGP (angle grading MD -0.60, 95% CI -0.91 to -0.29; TISA500 MD -0.03, 95% CI -0.06 to -0.01; TISA750 MD -0.03, 95% CI -0.06 to -0.01; 91 eyes). 
Phacoemulsification may result in little to no difference in best corrected VA at 12 months (MD -0.01 log MAR units, 95% CI -0.10 to 0.08; low certainty evidence), and the evidence is very uncertain about its effect on IOP at 12 months (MD 0.50 mmHg, 95% CI -2.64 to 3.64; very low certainty evidence). Postoperative fibrin reaction was observed in two participants in the phacoemulsification group and four in the phaco-VGP group. Three participants in the phaco-VGP group experienced hyphema. No data were available for progression of visual field loss and quality of life measurements at 12 months.\nTwo studies compared phacoemulsification with phaco-goniosynechialysis (phaco-GSL). Low-certainty evidence suggested that there may be little to no difference in mean IOP at 12 months (MD -0.12 mmHg, 95% CI -4.72 to 4.48; 1 study, 32 eyes) between the interventions. Phacoemulsification did not reduce the number of IOP-lowering medications compared to phaco-GSL at 12 months (MD -0.38, 95% CI -1.23 to 0.47; 1 study, 32 eyes; moderate certainty evidence). Three eyes in the phaco-GSL group developed hyphemas. No data were available at 12 months for progression of visual field loss, gonioscopic findings, visual acuity, and quality of life measures.\nThree studies compared phacoemulsification with combined phaco-trabeculectomy, but the data were only available for one study (63 eyes). In this study, low-certainty evidence suggested that there was little to no difference between groups in mean change in IOP from baseline (MD -0.60 mmHg, 95% CI -1.99 to 0.79), number of IOP-lowering medications at 12 months (MD 0.00, 95% CI -0.42 to 0.42), and VA measured by the Snellen chart (MD -0.03, 95% CI -0.18 to 0.12). Participants in the phacoemulsification group had fewer complications (risk ratio [RR] 0.59, 95% CI 0.34 to 1.04), and the phaco-trabeculectomy group required more IOP-lowering procedures (RR 5.81, 95% CI 1.41 to 23.88), but the evidence was very uncertain. No data were available for other outcomes.\nAuthors' conclusions\nModerate certainty evidence showed that lens extraction has an advantage over LPI in treating chronic PACG with clear crystalline lenses over three years of follow-up; ultimately, the decision for intervention should be part of a shared decision-making process between the clinician and the patient. For people with chronic PACG and visually significant cataracts, low certainty evidence suggested that combining phacoemulsification with either viscogonioplasty or goniosynechialysis does not confer any additional benefit over phacoemulsification alone. There was insufficient evidence to draw any meaningful conclusions regarding phacoemulsification versus trabeculectomy. Low certainty evidence suggested that combining phacoemulsification with trabeculectomy does not confer any additional benefit over phacoemulsification alone, and may cause more complications instead. These conclusions only apply to short- to medium-term outcomes; studies with longer follow-up periods can help assess whether these effects persist in the long term.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to 13 December 2019."
}
] |
query-laysum
|
24176
|
[
{
"role": "user",
"content": "Abstract: Background\nUndernutrition contributes to five million deaths of children under five each year. Furthermore, throughout the life cycle, undernutrition contributes to increased risk of infection, poor cognitive functioning, chronic disease, and mortality. It is thus important for decision-makers to have evidence about the effectiveness of nutrition interventions for young children.\nObjectives\nPrimary objective\n1. To assess the effectiveness of supplementary feeding interventions, alone or with co-intervention, for improving the physical and psychosocial health of disadvantaged children aged three months to five years.\nSecondary objectives\n1. To assess the potential of such programmes to reduce socio-economic inequalities in undernutrition.\n2. To evaluate implementation and to understand how this may impact on outcomes.\n3. To determine whether there are any adverse effects of supplementary feeding.\nSearch methods\nWe searched CENTRAL, Ovid MEDLINE, PsycINFO, and seven other databases for all available years up to January 2014. We also searched ClinicalTrials.gov and several sources of grey literature. In addition, we searched the reference lists of relevant articles and reviews, and asked experts in the area about ongoing and unpublished trials.\nSelection criteria\nRandomised controlled trials (RCTs), cluster-RCTs, controlled clinical trials (CCTs), controlled before-and-after studies (CBAs), and interrupted time series (ITS) that provided supplementary food (with or without co-intervention) to children aged three months to five years, from all countries. Adjunctive treatments, such as nutrition education, were allowed. Controls had to be untreated.\nData collection and analysis\nTwo or more review authors independently reviewed searches, selected studies for inclusion or exclusion, extracted data, and assessed risk of bias. We conducted meta-analyses for continuous data using the mean difference (MD) or the standardised mean difference (SMD) with a 95% confidence interval (CI), correcting for clustering if necessary. We analysed studies from low- and middle-income countries and from high-income countries separately, and RCTs separately from CBAs. We conducted a process evaluation to understand which factors impact on effectiveness.\nMain results\nWe included 32 studies (21 RCTs and 11 CBAs); 26 of these (16 RCTs and 10 CBAs) were in meta-analyses. More than 50% of the RCTs were judged to have low risk of bias for random selection and incomplete outcome assessment. We judged most RCTS to be unclear for allocation concealment, blinding of outcome assessment, and selective outcome reporting. Because children and parents knew that they were given food, we judged blinding of participants and personnel to be at high risk for all studies.\nGrowth. Supplementary feeding had positive effects on growth in low- and middle-income countries. Meta-analysis of the RCTs showed that supplemented children gained an average of 0.12 kg more than controls over six months (95% confidence interval (CI) 0.05 to 0.18, 9 trials, 1057 participants, moderate quality evidence). In the CBAs, the effect was similar; 0.24 kg over a year (95% CI 0.09 to 0.39, 1784 participants, very low quality evidence). In high-income countries, one RCT found no difference in weight, but in a CBA with 116 Aboriginal children in Australia, the effect on weight was 0.95 kg (95% CI 0.58 to 1.33). 
For height, meta-analysis of nine RCTs revealed that supplemented children grew an average of 0.27 cm more over six months than those who were not supplemented (95% CI 0.07 to 0.48, 1463 participants, moderate quality evidence). Meta-analysis of seven CBAs showed no evidence of an effect (mean difference (MD) 0.52 cm, 95% CI -0.07 to 1.10, 7 trials, 1782 participants, very low quality evidence). Meta-analyses of the RCTs demonstrated benefits for weight-for-age z-scores (WAZ) (MD 0.15, 95% CI 0.05 to 0.24, 8 trials, 1565 participants, moderate quality evidence), and height-for-age z-scores (HAZ) (MD 0.15, 95% CI 0.06 to 0.24, 9 trials, 4638 participants, moderate quality evidence), but not for weight-for-height z-scores (WHZ) (MD 0.10, 95% CI -0.02 to 0.22, 7 trials, 4176 participants, moderate quality evidence). Meta-analyses of the CBAs showed no effects on WAZ, HAZ, or WHZ (very low quality evidence). We found moderate positive effects for haemoglobin (SMD 0.49, 95% CI 0.07 to 0.91, 5 trials, 300 participants) in a meta-analysis of the RCTs.\nPsychosocial outcomes. Eight RCTs in low- and middle-income countries assessed psychosocial outcomes. Our meta-analysis of two studies showed moderate positive effects of feeding on psychomotor development (SMD 0.41, 95% CI 0.10 to 0.72, 178 participants). The evidence of effects on cognitive development was sparse and mixed.\nWe found evidence of substantial leakage. When feeding was given at home, children benefited from only 36% of the energy in the supplement. However, when the supplementary food was given in day cares or feeding centres, there was less leakage; children took in 85% of the energy provided in the supplement. Supplementary food was generally more effective for younger children (less than two years of age) and for those who were poorer/less well-nourished. Results for sex were equivocal. Our results also suggested that feeding programmes which were given in day-care/feeding centres and those which provided a moderate-to-high proportion of the recommended daily intake (% RDI) for energy were more effective.\nAuthors' conclusions\nFeeding programmes for young children in low- and middle-income countries can work, but good implementation is key.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that, in low- and middle-income countries, providing additional food to children aged three months to five years led to small gains in weight (0.24 kg a year in both RCTs and CBAs) and height (0.54 cm a year in RCTs only; no evidence of an effect in other study designs),and moderate increases in haemoglobin. We also found positive impacts on psychomotor development (skills that involve mental and muscular activity). We found mixed evidence on effects of supplementary feeding on mental development.\nIn high-income countries, two studies found no benefits for growth. The one effective study involved Aboriginal children.\nWe found that food was often redistributed ('leakage') within the family; when feeding was home-delivered, children benefited from only 36% of the energy given in the supplement. However, when the supplementary food was given in day care centres or feeding centres, there was much less leakage; children took in 85% of the energy provided in the supplement. When we looked at different groups supplementary food was more effective for younger children (under two years old) and for those who were poorer or less well-nourished. Results for sex were mixed. Feeding programmes that were well-supervised and those that provided a greater proportion of required daily food for energy were generally more effective."
}
] |
query-laysum
|
24177
|
[
{
"role": "user",
"content": "Abstract: Background\nNeuropathic pain is a consequence of damage to the central nervous system (CNS), for example, cerebrovascular accident, multiple sclerosis or spinal cord injury, or peripheral nervous system (PNS), for example, painful diabetic neuropathy (PDN), postherpetic neuralgia (PHN), or surgery. Evidence suggests that people suffering from neuropathic pain are likely to seek alternative modes of pain relief such as herbal medicinal products due to adverse events brought about by current pharmacological agents used to treat neuropathic pain. This review includes studies in which participants were treated with herbal medicinal products (topically or ingested) who had experienced neuropathic pain for at least three months.\nObjectives\nTo assess the analgesic efficacy and effectiveness of herbal medicinal products or preparations for neuropathic pain, and the adverse events associated with their use.\nSearch methods\nWe searched CENTRAL and the Cochrane Database of Systematic Reviews, MEDLINE, Embase, CINAHL and AMED to March 2018. We identified additional studies from the reference lists of the retrieved papers. We also searched trials registries for ongoing trials and we contacted experts in the field for relevant data in terms of published, unpublished or ongoing studies.\nSelection criteria\nWe included randomised controlled trials (including cross-over designs) of double-blind design, assessing efficacy of herbal treatments for neuropathic pain compared to placebo, no intervention or any other active comparator. Participants were 18 years and above and had been suffering from one or more neuropathic pain conditions, for three months or more.\nWe applied no restrictions to language or gender. We excluded studies monitoring effects of isolated, single chemicals derived from the plant or synthetic chemicals based on constituents of the plant, if they were not administered at a concentration naturally present within the plant.\nWe excluded studies monitoring the effects of traditional Asian medicine and Cannabinoids as well as studies looking at headache or migraine as these treatments and conditions are addressed in distinct reviews.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently considered trials for inclusion, assessed risk of bias, and extracted data. We calculated the risk ratio (RR) and number needed to treat for an additional beneficial outcome (NNTB). The primary outcomes were participant-reported pain relief of 30%, or 50%, or greater, and participant-reported global impression of clinical change (PGIC). We also collected information on adverse events. We assessed evidence using GRADE and created a 'Summary of findings' table.\nMain results\nWe included two studies (128 participants). Both diabetic neuropathy and non-diabetic neuropathic pain conditions were investigated across these two studies.\nTwo herbal medicinal products, namely nutmeg (applied topically as a 125 mL spray for four weeks, containing mace oil 2%, nutmeg oil 14%, methyl salicylate 6%, menthol 6%, coconut oil and alcohol) and St John's wort (taken in capsule form containing 900 μg total hypericin each, taken three times daily, giving a total concentration of 2700 mg for five weeks). 
Both studies allowed the use of concurrent analgesia.\nBoth reported at least one pain-related outcome but we could not carry out meta-analysis of effectiveness due to heterogeneity between the primary outcomes and could not draw any conclusions of effect. Other outcomes included PGIC, adverse events and withdrawals. There were no data for participant-reported pain relief of 50% or greater or PGIC (moderate and substantial) outcomes.\nWhen looking at participant-reported pain relief of 30% or greater over baseline, we observed no evidence of a difference (P = 0.64) in response to nutmeg versus placebo (RR 1.12, 95% confidence interval (CI) 0.69 to 1.85; 48.6% vs 43.2%). We downgraded the evidence for this outcome to very low quality.\nWe observed no change between placebo and nutmeg treatment when looking at secondary pain outcomes. Visual analogue scale (VAS) scores for pain reduction (0 to 100, where 0 = no pain reduction), were 44 for both nutmeg and placebo with standard deviations of 21.5 and 26.5 respectively. There was no evidence of a difference (P = 0.09 to 0.33) in total pain score in response to St John’s wort compared to placebo, as there was only a reduction of 1 point when looking at median differences in change from baseline on a 0 to 10-point numeric rating scale.\nThere was a total of five withdrawals out of 91 participants (5%) in the treatment groups compared to six of 91 (6.5%) in the placebo groups, whilst adverse events were the same for both the treatment and placebo groups.\nWe judged neither study as having a low risk of bias. We attributed risk of bias to small study size and incomplete outcome data leading to attrition bias. We downgraded the evidence to very low quality for all primary and secondary outcomes reported in this review. We downgraded the quality of the evidence twice due to very serious limitations in study quality (due to small study size and attrition bias) and downgraded a further level due to indirectness as the included studies only measured outcomes at short-term time points. The results from this review should be treated with scepticism as we have very little confidence in the effect estimate.\nAuthors' conclusions\nThere was insufficient evidence to determine whether nutmeg or St John's wort has any meaningful efficacy in neuropathic pain conditions.\nThe quality of the current evidence raises serious uncertainties about the estimates of effect observed, therefore, we have very little confidence in the effect estimate; the true effect is likely to be substantially different from the estimate of effect.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were no reports from participants of any reduction in pain intensity of 30% or above and there was no observable reduction in the total pain score in response to either nutmeg or St John’s wort. There were also no reductions in dropout rates or number of side effects between the treatment and placebo."
}
] |
query-laysum
|
24178
|
[
{
"role": "user",
"content": "Abstract: Background\nCataract is the leading cause of blindness in the world and, as such, cataract surgery is one of the most commonly performed operations globally. Surgical techniques have changed dramatically over the past half century with associated improvements in outcomes and safety. Femtosecond lasers can be used to perform the key steps in cataract surgery, such as corneal incisions, lens capsulotomy and fragmentation. The potential advantage of femtosecond laser-assisted cataract surgery (FLACS) is greater precision and reproducibility of these steps compared to manual techniques. The disadvantages are the costs associated with FLACS technology.\nObjectives\nTo compare the effectiveness and safety of FLACS with standard ultrasound phacoemulsification cataract surgery (PCS) by gathering evidence from randomised controlled trials (RCTs).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; which contains the Cochrane Eyes and Vision Trials Register; 2022, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov; the WHO ICTRP and the US Food and Drug Administration (FDA) website. We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 10 May 2022.\nSelection criteria\nWe included RCTs where FLACS was compared to PCS.\nData collection and analysis\nThree review authors independently screened the search results, assessed risk of bias and extracted data using the standard methodological procedures expected by Cochrane. The primary outcome for this review was intraoperative complications in the operated eye, namely anterior capsule, and posterior capsule tears. The secondary outcomes included corrected distance visual acuity (CDVA), quality of vision (as measured by any validated patient-reported outcome measure (PROM)), postoperative cystoid macular oedema complications, endothelial cell loss and cost-effectiveness. We assessed the certainty of the evidence using GRADE.\nMain results\nWe included 42 RCTs conducted in Europe, North America, South America and Asia, which enrolled a total of 7298 eyes of 5831 adult participants. Overall, the studies were at unclear or high risk of bias. In 16 studies the authors reported financial links with the manufacturer of the laser platform evaluated in their studies. Thirteen of the studies were within-person (paired-eye) studies with one eye allocated to one procedure and the other eye allocated to the other procedure. These studies were reported ignoring the paired nature of the data.\nThere was low-certainty evidence of little or no difference in the odds of developing anterior capsular tears when comparing FLACS and PCS (Peto odds ratio (OR) 0.83, 95% confidence interval (CI) 0.40 to 1.72; 5835 eyes, 27 studies) There was one fewer anterior capsule tear per 1000 operations in the FLACS group compared with the PCS group (95% CI 4 fewer to 3 more).\nThere was low-certainty evidence of lower odds of developing posterior capsular tears with FLACS compared to PCS (Peto OR 0.50, 95% CI 0.25 to 1.00; 5767 eyes, 26 studies). There were four fewer posterior capsule tears per 1000 operations in the FLACS group compared with the PCS group (95% CI 6 fewer to same).\nThere was moderate-certainty evidence of a very small advantage for the FLACS arm with regard to CDVA at six months or more follow-up, (mean difference (MD) -0.01 logMAR, 95% CI -0.02 to 0.00; 1323 eyes, 7 studies). 
This difference is equivalent to 1 logMAR letter between groups and is not thought to be clinically important.\nFrom the three studies (1205 participants) reporting a variety of PROMs (Cat-PROMS, EQ-5D, EQ-SD-3L, Catquest9-SF and patient survey) up to three months following surgery, there was moderate-certainty evidence of little or no difference in the various parameters between the two treatment arms.\nThere was low-certainty evidence of little or no difference in the odds of developing cystoid macular oedema when comparing FLACS and PCS (Peto OR 0.84, 95% CI 0.56 to 1.28; 4441 eyes, 18 studies). There were three fewer cystoid macular oedema cases per 1000 operations in the FLACS group compared with the PCS group (95% CI 10 fewer to 6 more).\nIn one study the incremental cost-effectiveness ratio (ICER) (cost difference divided by quality-adjusted life year (QALY) difference) was GBP £167,620 when comparing FLACS to PCS. In another study, the ICER was EUR €10,703 saved per additional patient who had treatment success with PCS compared to FLACS. Duration ranged from three minutes in favour of FLACS to eight minutes in favour of PCS (I2 = 100%, 11 studies) (low-certainty evidence).\nThere was low-certainty evidence of little or no important difference in endothelial cell loss when comparing FLACS with PCS (MD 12 cells per mm2 in favour of FLACS, 95% CI -40 to 64; 1512 eyes, 10 studies).\nAuthors' conclusions\nThis review of 42 studies provides evidence that there is probably little or no difference between FLACS and PCS in terms of intraoperative and postoperative complications, postoperative visual acuity and quality of life. Evidence from two studies suggests that FLACS may be the less cost-effective option. Many of the included studies only investigated very specific outcome measures such as effective phacoemulsification time, endothelial cell count change or aqueous flare, rather than those directly related to patient outcomes. Standardised reporting of complications and visual and refractive outcomes for cataract surgery would facilitate future synthesis, and guidance on this has been recently published.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out what the benefits and harms of FLACS are compared with standard ultrasound PCS."
}
] |
query-laysum
|
24179
|
[
{
"role": "user",
"content": "Abstract: Background\nAlpha-glucosidase inhibitors (AGI) reduce blood glucose levels and may thus prevent or delay type 2 diabetes mellitus (T2DM) and its associated complications in people at risk of developing of T2DM.\nObjectives\nTo assess the effects of AGI in people with impaired glucose tolerance (IGT), impaired fasting blood glucose (IFG), moderately elevated glycosylated haemoglobin A1c (HbA1c) or any combination of these.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform, and the reference lists of systematic reviews, articles and health technology assessment reports. The date of the last search of all databases was December 2017.\nSelection criteria\nWe included randomised controlled trials (RCTs), with a duration of one year or more, comparing AGI with any pharmacological glucose-lowering intervention, behaviour-changing intervention, placebo or no intervention in people with IFG, IGT, moderately elevated HbA1c or combinations of these.\nData collection and analysis\nTwo review authors read all abstracts and full-text articles or records, assessed quality and extracted outcome data independently. One review author extracted data, which were checked by a second review author. We resolved discrepancies by consensus or involvement of a third review author. For meta-analyses we used a random-effects model with assessment of risk ratios (RRs) for dichotomous outcomes and mean differences (MDs) for continuous outcomes, using 95% confidence intervals (CIs) for effect estimates. We assessed the overall quality of the evidence by using the GRADE instrument.\nMain results\nFor this update of the Cochrane Review (first published 2006, Issue 4) we included 10 RCTs (11,814 participants), eight investigating acarbose and two investigating voglibose, that included people with IGT or people \"at increased risk for diabetes\". The trial duration ranged from one to six years. Most trials compared AGI with placebo (N = 4) or no intervention (N = 4).\nAcarbose reduced the incidence of T2DM compared to placebo: 670 out of 4014 people (16.7%) in the acarbose groups developed T2DM, compared to 812 out of 3994 people (20.3%) in the placebo groups (RR 0.82, 95% CI 0.75 to 0.89; P < 0.0001; 3 trials; 8008 participants; moderate-certainty evidence). One trial including participants with coronary heart disease and IGT contributed 64% of cases for this outcome. Acarbose reduced the risk of T2DM compared to no intervention: 7 out 75 people (9.3%) in the acarbose groups developed T2DM, compared to 18 out of 65 people (27.7%) in the no-intervention groups (RR 0.31, 95% CI 0.14 to 0.69; P = 0.004; 2 trials; 140 participants; very low-certainty evidence).\nAcarbose compared to placebo did not reduce or increase the risk of all-cause mortality (RR 0.98, 95% CI 0.82 to 1.18; P = 0.86; 3 trials; 8069 participants; very low-certainty evidence), cardiovascular mortality (RR 0.88; 95% CI 0.71 to 1.10; P = 0.26; 3 trials; 8069 participants; very low-certainty evidence), serious adverse events (RR 1.12, 95% CI 0.97 to 1.29; P = 0.13; 2 trials; 6625 participants; low-certainty evidence), non-fatal stroke (RR 0.50, 95% CI 0.09 to 2.74; P = 0.43; 1 trial; 1368 participants; very low-certainty evidence) or congestive heart failure (RR of 0.87; 95% CI 0.63 to 1.12; P = 0.40; 2 trials; 7890 participants; low-certainty evidence). 
Acarbose compared to placebo reduced non-fatal myocardial infarction: one out of 742 participants (0.1%) in the acarbose groups had a non-fatal myocardial infarction compared to 15 out of 744 participants (2%) in the placebo groups (RR 0.10, 95% CI 0.02 to 0.53; P = 0.007; 2 trials; 1486 participants; very low-certainty evidence). Acarbose treatment showed an increased risk of non-serious adverse events (mainly gastro-intestinal events), compared to placebo: 751 of 775 people (96.9%) in the acarbose groups experienced an event, compared to 723 of 775 people (93.3%) in the placebo groups (RR 1.04; 95% CI 1.01 to 1.06; P = 0.0008; 2 trials; 1550 participants). Acarbose compared to no intervention showed no advantage or disadvantage for any of these outcome measures (very low-certainty evidence).\nOne trial each compared voglibose with placebo (1780 participants) or diet and exercise (870 participants). Voglibose compared to placebo reduced the incidence of T2DM: 50 out of 897 participants (5.6%) developed T2DM, compared to 106 out of 881 participants (12%) in the placebo group (RR 0.46, 95% CI 0.34 to 0.64; P < 0.0001; 1 trial; 1778 participants; low-certainty evidence). For all other reported outcome measures there were no clear differences between voglibose and comparator groups. One trial with 90 participants compared acarbose with diet and exercise and another trial with 98 participants reported data on acarbose versus metformin. There were no clear differences for any outcome measure between these two acarbose interventions and the associated comparator groups.\nNone of the trials reported amputation of lower extremity, blindness or severe vision loss, end-stage renal disease, health-related quality of life, time to progression to T2DM, or socioeconomic effects.\nAuthors' conclusions\nAGI may prevent or delay the development of T2DM in people with IGT. There is no firm evidence that AGI have a beneficial effect on cardiovascular mortality or cardiovascular events.\n\nGiven the provided abstract, please respond to the following query: \"Trial characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 10 randomised controlled trials representing 11,814 participants, eight investigating acarbose and two investigating voglibose. The trial duration ranged from one to six years.\nThis evidence is up to date as of December 2017."
}
] |
query-laysum
|
24180
|
[
{
"role": "user",
"content": "Abstract: Background\nDelirium is a very common condition associated with significant morbidity, mortality, and costs. Current critical care guidelines recommend first and foremost the use of nonpharmacological strategies in both the prevention and treatment of delirium. Pharmacological interventions may augment these approaches and they are currently used widely in clinical practice to manage the symptoms of delirium. Benzodiazepines are currently used in clinical practice to treat behavioural disturbances associated with delirium but current guidelines do not recommend their use for this indication. The use of these medicines is controversial because there is uncertainty about whether they are effective for patients or have the potential to harm them.\nObjectives\nTo assess the effectiveness and safety of benzodiazepines in the treatment of delirium (excluding delirium related to withdrawal from alcohol or benzodiazepines) in any healthcare settings other than intensive care units (ICU).\nSearch methods\nWe searched ALOIS: the Cochrane Dementia and Cognitive Improvement Group's Specialized Register up to 10 April 2019. ALOIS contains records of clinical trials identified from monthly searches of a number of major healthcare databases (including MEDLINE, Embase, PsycINFO, CINAHL, LILACS), numerous trial registries (including national, international and pharmaceutical registries), and grey literature sources.\nSelection criteria\nWe included randomised controlled trials (RCTs) conducted in healthcare settings that ranged from nursing homes and long-term care facilities to any hospital setting except for ICUs, involving adult patients with delirium excluding those with delirium related to alcohol or benzodiazepine withdrawal. Included RCTs had to assess the effect of benzodiazepines, at any dose and given by any route, compared with placebo or another drug intended to treat delirium.\nData collection and analysis\nTwo review authors independently assessed study eligibility, extracted data, and assessed the risk of bias of included studies. We decided whether or not to pool data on the basis of clinical heterogeneity between studies. We used GRADE (Grades of Recommendation, Assessment, Development and Evaluation) methods to assess the quality of evidence.\nMain results\nWe identified only two trials that satisfied the selection criteria. We did not pool the data because of the substantial clinical differences between the trials.\nIn one trial, participants (n = 58) were patients in an acute palliative care unit with advanced cancer who had a mean age of 64 years. All of the participants had delirium, were treated with haloperidol, and were randomised to receive either lorazepam or placebo in combination with it. Due to very serious imprecision, all evidence was of low certainty. We were unable to determine whether there were clinically important differences in the severity of delirium (mean difference (MD) 2.10, 95% CI -0.96 to 5.16; n = 50), length of hospital admission (MD 0.00, 95% CI -3.45 to 3.45; n = 58), mortality from all causes (risk ratio (RR) 0.33, 95% CI 0.04 to 3.02; participants = 58) or any of a number of adverse events. Important effects could not be confirmed or excluded. The study authors did not report the length of the delirium episode.\nIn the other trial, participants (n = 30) were patients in general medical wards with acquired immune deficiency syndrome (AIDS) who had a mean age of 39.2 years. 
Investigators compared three drug treatments: all participants had delirium, and were randomised to receive lorazepam, chlorpromazine, or haloperidol. Very low-certainty evidence was identified, and we could not determine whether lorazepam differed from either of the other treatments in the effect on severity of delirium, any adverse event, or mortality from all causes. The study authors did not report the length of the delirium episode or the length of hospital admission.\nAuthors' conclusions\nThere is not enough evidence to determine whether benzodiazepines are effective when used to treat patients with delirium who are cared for in non-ICU settings. The available evidence does not support their routine use for this indication. Because of the scarcity of data from randomised controlled trials, further research is required to determine whether or not there is a role for benzodiazepines in the treatment of delirium in non-ICU settings.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Delirium is a serious complication of many illnesses, which occurs most commonly in young children and older adults. It usually presents as a sudden change in a patient's behaviour or mental state. Another name for it is ‘acute confusional state.’ Patients with delirium may not know where they are, what time it is, or what is happening to them. They may have frightening experiences, such as vivid hallucinations. They may become either restless or lethargic and inactive. Delirium can be very distressing for patients and for those who are caring for them. Studies show that about a third of patients on general medical wards develop delirium. It is a frequent complication after surgery (e.g. it happens in up to 60% of people who have surgery for a hip fracture). The effects of delirium can last for a long time. For older people, it can lead to longer hospital stays and it has been associated with increased risks of death, disability, loss of independence, and later dementia. It adds significantly to healthcare costs.\nBenzodiazepines are medicines that are often used as sedatives. Sometimes, healthcare workers prescribe them to treat delirium when other strategies have not helped. Currently, it is not clear if benzodiazepines are an effective treatment for patients with delirium or whether they can harm them."
}
] |
query-laysum
|
24181
|
[
{
"role": "user",
"content": "Abstract: Background\nUpper gastrointestinal bleeding is one of the most frequent causes of morbidity and mortality in the course of liver cirrhosis. Several treatments are used for upper gastrointestinal bleeding in people with liver diseases. One of them is vitamin K administration, but it is not known whether it benefits or harms people with acute or chronic liver disease and upper gastrointestinal bleeding. This is an update of this Cochrane review.\nObjectives\nTo assess the beneficial and harmful effects of vitamin K for people with acute or chronic liver disease and upper gastrointestinal bleeding.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Controlled Trials Register (February 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 2 of 12, 2015), MEDLINE (Ovid SP) (1946 to February 2015), EMBASE (Ovid SP) (1974 to February 2015), Science Citation Index EXPANDED (1900 to February 2015), and LILACS (1982 to 25 February 2015). We sought additional randomised trials from two registries of clinical trials: the World Health Organization Clinical Trials Search Portal and the metaRegister of Controlled Trials. We looked through the reference lists of the retrieved publications and review articles.\nSelection criteria\nRandomised clinical trials irrespective of blinding, language, or publication status for assessment of benefits and harms. We considered observational studies for assessment of harms only.\nData collection and analysis\n\\We aimed to summarise data from randomised clinical trials using Standard Cochrane methodology and assess them according to the GRADE approach.\nMain results\nWe found no randomised trials on vitamin K for upper gastrointestinal bleeding in people with liver diseases assessing benefits and harms of the intervention. We identified no quasi-randomised studies, historically controlled studies, or observational studies assessing harms.\nAuthors' conclusions\nThis updated review found no randomised clinical trials of vitamin K for upper gastrointestinal bleeding in people with liver diseases. The benefits and harms of vitamin K need to be tested in randomised clinical trials. Until randomised clinical trials are conducted to assess the trade-off between benefits and harms, we cannot recommend or refute the use of vitamin K for upper gastrointestinal bleeding in people with liver diseases.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for studies assessing the benefits and harms of vitamin K in people of any age with liver disease. This updated review found no randomised clinical trials (clinical studies where people are randomly put into one of two or more treatment groups) on the benefits or harms of vitamin K for upper gastrointestinal bleeding in people with acute or chronic liver diseases. The evidence is current to February 2015."
}
] |
query-laysum
|
24182
|
[
{
"role": "user",
"content": "Abstract: Background\nDiabetic peripheral neuropathy (DPN) is a common and severe complication that affects 50% of people with diabetes. Painful DPN is reported to occur in 16% to 24% of people with diabetes. A complete and comprehensive management strategy for the prevention and treatment of DPN, whether painful or not, has not yet been defined.\nResearch into treatment for DPN has been characterised by a series of failed clinical trials, with few noteworthy advances. Strategies that support peripheral nerve regeneration and restore neurological function in people with painful or painless DPN are needed. The amino acid acetyl-L-carnitine (ALC) plays a role in the transfer of long-chain fatty acids into mitochondria for β-oxidation. ALC supplementation also induces neuroprotective and neurotrophic effects in the peripheral nervous system. Therefore, ALC supplementation targets several mechanisms relevant to potential nerve repair and regeneration, and could have clinical therapeutic potential. There is a need for a systematic review of the evidence from clinical trials.\nObjectives\nTo assess the effects of ALC for the treatment of DPN.\nSearch methods\nOn 2 July 2018, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, LILACS, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform. We checked references, searched citations, and contacted study authors to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs of ALC compared with placebo, other therapy, or no intervention in the treatment of DPN. Participants could be of any sex and age, and have type 1 or type 2 diabetes mellitus, of any severity, with painful or painless DPN. We accepted any definition of minimum criteria for DPN, in accordance with the Toronto Consensus. We imposed no language restriction.\nPain was the primary outcome, measured as the proportion of participants with at least 30% (moderate) or 50% (substantial) decrease in pain over baseline, or as the score on a visual analogue scale (VAS) or Likert scale for pain.\nData collection and analysis\nWe followed standard Cochrane methods.\nMain results\nWe included four studies with 907 participants, which were reported in three publications. Three trials studied ALC versus placebo (675 participants); in one trial the dose of ALC was 2000 mg/day, and in the other two trials, it was 1500 mg/day or 3000 mg/day. The fourth trial studied ALC 1500 mg/day versus methylcobalamin 1.5 mg/day (232 participants). The risk of bias was high in both trials of different ALC doses and low in the other two trials.\nNo included trial measured the proportion of participants with at least moderate (30%) or substantial (50%) pain relief. ALC reduced pain more than placebo, measured on a 0- to 100-mm VAS (MD -9.16, 95% CI -16.76 to -1.57; three studies; 540 participants; P = 0.02; I² = 56%; random-effects; very low-certainty evidence; a higher score indicating more pain). At doses of 1500 mg/day or less, the VAS score after ALC treatment was little different from placebo (MD -0.05, 95% CI -10.00 to 9.89; two studies; 159 participants; P = 0.99; I² = 0%), but at doses greater than 1500 mg/day, ALC reduced pain more than placebo (MD -14.93, 95% CI -19.16 to -10.70; three studies; 381 participants; P < 0.00001; I² = 0%). 
This subgroup analysis should be viewed with caution as the evidence was even less certain than the overall analysis, which was already of very low certainty.\nTwo placebo-controlled studies reported that vibration perception improved after 12 months. We graded this evidence as very low certainty, due to inconsistency and a high risk of bias, as the trial authors did not provide any numerical data. The placebo-controlled studies did not measure functional impairment and disability scores. No study used validated symptom scales. One study performed sensory testing, but the evidence was very uncertain.\nThe fourth included study compared ALC with methylcobalamin, but did not report effects on pain. There was a reduction from baseline to 24 weeks in functional impairment and disability, based on the change in mean Neuropathy Disability Score (NDS; scale from zero to 10), but there was no important difference between the ALC group (mean score 1.66 ± 1.90) and the methylcobalamin group (mean score 1.35 ± 1.65) groups (P = 0.23; low-certainty evidence).\nOne placebo-controlled study reported that six of 147 participants in the ALC > 1500 mg/day group (4.1%) and two of 147 participants in the placebo group (1.4%) discontinued treatment because of adverse events (headache, facial paraesthesia, and gastrointestinal disorders) (P = 0.17). The other two placebo-controlled studies reported no dropouts due to adverse events, and more pain, paraesthesia, and hyperaesthesias in the placebo group than the 3000 mg/day ALC group, but provided no numerical data. The overall certainty of adverse event evidence for the comparison of ALC versus placebo was low.\nThe study comparing ALC with methylcobalamin reported that 34/117 participants (29.1%) experienced adverse events in the ALC group versus 33/115 (28.7%) in the methylcobalamin group (P = 0.95). Nine participants discontinued treatment due to adverse events (ALC: 4 participants, methylcobalamin: 5 participants), which were most commonly gastrointestinal symptoms. The certainty of the adverse event evidence for ALC versus methylcobalamin was low.\nTwo studies were funded by the manufacturer of ALC and the other two studies had at least one co-author who was a consultant for an ALC manufacturer.\nAuthors' conclusions\nWe are very uncertain whether ALC causes a reduction in pain after 6 to 12 months' treatment in people with DPN, when compared with placebo, as the evidence is sparse and of low certainty. Data on functional and sensory impairment and symptoms are lacking, or of very low certainty. The evidence on adverse events is too uncertain to make any judgements on safety.\n\nGiven the provided abstract, please respond to the following query: \"The aim of this review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this review was to assess the effects of acetyl-L-carnitine (ALC) on diabetic peripheral neuropathy (DPN) in people with diabetes. We were particularly interested in whether ALC could relieve pain, and also aimed to identify any harmful effects."
}
] | query-laysum | 24183 |
[
{
"role": "user",
"content": "Abstract: Background\nMalignant neoplasms of the lymphoid or myeloid cell lines including lymphoma, leukaemia and myeloma are referred to as haematological malignancies. Complementary and alternative treatment options such as meditation practice or yoga are becoming popular by treating all aspects of the disease including physical and psychological symptoms. However, there is still unclear evidence about meditation's effectiveness, and how its practice affects the lives of haematologically-diseased patients.\nObjectives\nThis review aims to assess the benefits and harms of meditation practice as an additional treatment to standard care for adults with haematological malignancies.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 8, 2015), MEDLINE (1950 to August 2015), databases of ongoing trials, the metaRegister of Controlled Trials (mRCT) (http://www.controlled-trials.com/mrct/), conference proceedings of annual meetings of: the American Society of Hematology; American Society of Clinical Oncology; European Hematology Association; European Congress for Integrative Medicine; and Global Advances in Health and Medicine (2010 to 2015).\nSelection criteria\nWe included randomised controlled trials (RCTs) using meditation practice for adult patients with haematological malignancies.\nData collection and analysis\nTwo review authors independently extracted data from eligible studies and assessed the risk of bias according to predefined criteria. We evaluated quality of life and depression. The other outcomes of overall survival, anxiety, fatigue, quality of sleep and adverse events could not be evaluated, because they were not assessed in the included trial.\nMain results\nWe included only one small trial published as an abstract article. The included study investigated the effects of meditation practice on patients newly hospitalised with acute leukaemia. Ninety-one participants enrolled in the study, but only 42 participants remained in the trial throughout the six-month follow-up period and were eligible for analysis. There was no information provided about the average age and sex of the study population. We found a high risk for attrition bias and unclear risk for reporting bias, performance and detection bias because of missing data due to abstract publication only, thus we judged the overall risk of bias as high. According to the GRADE criteria, we judged the overall quality of the body of evidence for all predefined outcomes as 'very low', due to the extent of missing data on the study population, and the small sample size.\nAs the abstract publication did not provide numbers and results except P values, we are not able to give more details.\nMeditation practice might be beneficial for the quality of life of haematologically-diseased patients, with higher scores for participants in the mediation arms compared to the participants in the usual care control group (low quality of evidence). Levels of depression decreased for those practising meditation in both the spiritually-framed meditation group and the secularly-focused meditation group in comparison to the usual care control group, whose levels of depression remained constant (low quality of evidence). 
The influence of meditation practice on overall survival, fatigue, anxiety, quality of sleep and adverse events remained unclear, as these outcomes were not evaluated in the included trial.\nAuthors' conclusions\nTo estimate the effects of meditation practice for patients suffering from haematological malignancies, more high quality randomised controlled trials are needed. At present there is not enough information available on the effects of meditation in haematologically-diseased patients to draw any conclusion.\n\nGiven the provided abstract, please respond to the following query: \"Objectives\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated the effects of meditation practice on adult patients with haematological malignancies. Important outcomes were quality of life, overall survival, depression, fatigue, anxiety, quality of sleep and adverse events. We reviewed the effects of meditation practice in addition to standard care for patients newly diagnosed with acute leukaemia compared to standard care only."
}
] | query-laysum | 24184 |
[
{
"role": "user",
"content": "Abstract: Background\nHeavy menstrual bleeding (HMB) is an important physical and social problem for women. Oral treatment for HMB includes antifibrinolytic drugs, which are designed to reduce bleeding by inhibiting clot-dissolving enzymes in the endometrium.\nHistorically, there has been some concern that using the antifibrinolytic tranexamic acid (TXA) for HMB may increase the risk of venous thromboembolic disease. This is an umbrella term for deep venous thrombosis (blood clots in the blood vessels in the legs) and pulmonary emboli (blood clots in the blood vessels in the lungs).\nObjectives\nTo determine the effectiveness and safety of antifibrinolytic medications as a treatment for heavy menstrual bleeding.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility (CGF) Group trials register, CENTRAL, MEDLINE, Embase, PsycINFO and two trials registers in November 2017, together with reference checking and contact with study authors and experts in the field.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing antifibrinolytic agents versus placebo, no treatment or other medical treatment in women of reproductive age with HMB. Twelve studies utilised TXA and one utilised a prodrug of TXA (Kabi).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. The primary review outcomes were menstrual blood loss (MBL), improvement in HMB, and thromboembolic events.\nMain results\nWe included 13 RCTs (1312 participants analysed). The evidence was very low to moderate quality: the main limitations were risk of bias (associated with lack of blinding, and poor reporting of study methods), imprecision and inconsistency.\nAntifibrinolytics (TXA or Kabi) versus no treatment or placebo\nWhen compared with a placebo, antifibrinolytics were associated with reduced mean blood loss (MD −53.20 mL per cycle, 95% CI −62.70 to −43.70; I² = 8%; 4 RCTs, participants = 565; moderate-quality evidence) and higher rates of improvement (RR 3.34, 95% CI 1.84 to 6.09; 3 RCTS, participants = 271; moderate-quality evidence). This suggests that if 11% of women improve without treatment, 43% to 63% of women taking antifibrinolytics will do so. There was no clear evidence of a difference between the groups in adverse events (RR 1.05, 95% CI 0.93 to 1.18; 1 RCT, participants = 297; low-quality evidence). Only one thromboembolic event occurred in the two studies that reported this outcome.\nTXA versus progestogens\nThere was no clear evidence of a difference between the groups in mean blood loss measured using the Pictorial Blood Assessment Chart (PBAC) (MD −12.22 points per cycle, 95% CI −30.8 to 6.36; I² = 0%; 3 RCTs, participants = 312; very low quality evidence), but TXA was associated with a higher likelihood of improvement (RR 1.54, 95% CI 1.31 to 1.80; I² = 32%; 5 RCTs, participants = 422; low-quality evidence). This suggests that if 46% of women improve with progestogens, 61% to 83% of women will do so with TXA.\nAdverse events were less common in the TXA group (RR 0.66, 95% CI 0.46 to 0.94; I² = 28%; 4 RCTs, participants = 349; low-quality evidence). No thromboembolic events were reported in any group.\nTXA versus non-steroidal anti-inflammatory drugs (NSAIDs)\nTXA was associated with reduced mean blood loss (MD −73.00 mL per cycle, 95% CI −123.35 to −22.65; 1 RCT, participants = 49; low-quality evidence) and higher likelihood of improvement (RR 1.43, 95% CI 1.18 to 1.74; 12 = 0%; 2 RCTs, participants = 161; low-quality evidence). 
This suggests that if 61% of women improve with NSAIDs, 71% to 100% of women will do so with TXA. Adverse events were uncommon and no comparative data were available. No thromboembolic events were reported.\nTXA versus ethamsylate\nTXA was associated with reduced mean blood loss (MD 100 mL per cycle, 95% CI −141.82 to −58.18; 1 RCT, participants = 53; low-quality evidence), but there was insufficient evidence to determine whether the groups differed in rates of improvement (RR 1.56, 95% CI 0.95 to 2.55; 1 RCT, participants = 53; very low quality evidence) or withdrawal due to adverse events (RR 0.78, 95% CI 0.19 to 3.15; 1 RCT, participants = 53; very low quality evidence).\nTXA versus herbal medicines (Safoof Habis and Punica granatum)\nTXA was associated with a reduced mean PBAC score after three months' treatment (MD −23.90 pts per cycle, 95% CI −31.92 to −15.88; I² = 0%; 2 RCTs, participants = 121; low-quality evidence). No data were available for rates of improvement. TXA was associated with a reduced mean PBAC score three months after the end of the treatment phase (MD −10.40 points per cycle, 95% CI −19.20 to −1.60; I² not applicable; 1 RCT, participants = 84; very low quality evidence). There was insufficient evidence to determine whether the groups differed in rates of adverse events (RR 2.25, 95% CI 0.74 to 6.80; 1 RCT, participants = 94; very low quality evidence). No thromboembolic events were reported.\nTXA versus levonorgestrel intrauterine system (LIUS)\nTXA was associated with a higher median PBAC score than LIUS (median difference 125.5 points; 1 RCT, participants = 42; very low quality evidence) and a lower likelihood of improvement (RR 0.43, 95% CI 0.24 to 0.77; 1 RCT, participants = 42; very low quality evidence). This suggests that if 85% of women improve with LIUS, 20% to 65% of women will do so with TXA. There was insufficient evidence to determine whether the groups differed in rates of adverse events (RR 0.83, 95% CI 0.25 to 2.80; 1 RCT, participants = 42; very low quality evidence). No thromboembolic events were reported.\nAuthors' conclusions\nAntifibrinolytic treatment (such as TXA) appears effective for treating HMB compared to placebo, NSAIDs, oral luteal progestogens, ethamsylate, or herbal remedies, but may be less effective than LIUS. There were too few data for most comparisons to determine whether antifibrinolytics were associated with increased risk of adverse events, and most studies did not specifically include thromboembolism as an outcome.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Antifibrinolytic agents are designed to reduce bleeding by inhibiting endometrial clot-dissolving enzymes (in the uterine lining); Cochrane researchers reviewed the evidence about the effect of these medications (such as tranexamic acid, TXA) versus placebo and other medical therapies in women with heavy menstrual bleeding (HMB: defined as more than 80 millilitres (> 80 mL) of blood loss per menstrual cycle)."
}
] | query-laysum | 24185 |
[
{
"role": "user",
"content": "Abstract: Background\nDentistry is a profession with a high prevalence of work-related musculoskeletal disorders (WMSD) among practitioners, with symptoms often starting as early in the career as the student phase. Ergonomic interventions in physical, cognitive, and organisational domains have been suggested to prevent their occurrence, but evidence of their effects remains unclear.\nObjectives\nTo assess the effect of ergonomic interventions for the prevention of work-related musculoskeletal disorders among dental care practitioners.\nSearch methods\nWe searched CENTRAL, MEDLINE PubMed, Embase, PsycINFO ProQuest, NIOSHTIC, NIOSHTIC-2, HSELINE, CISDOC (OSH-UPDATE), ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (ICTRP) Search Portal to August 2018, without language or date restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs), quasi-RCTs, and cluster RCTs, in which participants were adults, aged 18 and older, who were engaged in the practice of dentistry. At least 75% of them had to be free from musculoskeletal pain at baseline. We only included studies that measured at least one of our primary outcomes; i.e. physician diagnosed WMSD, self-reported pain, or work functioning.\nData collection and analysis\nThree authors independently screened and selected 20 potentially eligible references from 946 relevant references identified from the search results. Based on the full-text screening, we included two studies, excluded 16 studies, and two are awaiting classification. Four review authors independently extracted data, and two authors assessed the risk of bias. We calculated the mean difference (MD) with 95% confidence intervals (CI) for continuous outcomes and risk ratios (RR) with 95% confidence intervals for dichotomous outcomes. We assessed the quality of the evidence for each outcome using the GRADE approach.\nMain results\nWe included two RCTs (212 participants), one of which was a cluster-randomised trial. Adjusting for the design effect from clustering, reduced the total sample size to 210. Both studies were carried out in dental clinics and assessed ergonomic interventions in the physical domain, one by evaluating a multi-faceted ergonomic intervention, which consisted of imparting knowledge and training about ergonomics, work station modification, training and surveying ergonomics at the work station, and a regular exercise program; the other by studying the effectiveness of two different types of instrument used for scaling in preventing WMSDs. We were unable to combine the results from the two studies because of the diversity of interventions and outcomes.\nPhysical ergonomic interventions. Based on one study, there is very low-quality evidence that a multi-faceted intervention has no clear effect on dentists' risk of WMSD in the thighs (RR 0.57, 95% CI 0.23 to 1.42; 102 participants), or feet (RR 0.64, 95% CI 0.29 to 1.41; 102 participants) when compared to no intervention over a six-month period. Based on one study, there is low-quality evidence of no clear difference in elbow pain (MD −0.14, 95% CI −0.39 to 0.11; 110 participants), or shoulder pain (MD −0.32, 95% CI −0.75 to 0.11; 110 participants) in participants who used light weight curettes with wider handles or heavier curettes with narrow handles for scaling over a 16-week period.\nCognitive ergonomic interventions. 
We found no studies evaluating the effectiveness of cognitive ergonomic interventions.\nOrganisational ergonomic interventions. We found no studies evaluating the effectiveness of organisational ergonomic interventions.\nAuthors' conclusions\nThere is very low-quality evidence from one study showing that a multi-faceted intervention has no clear effect on dentists' risk of WMSD in the thighs or feet when compared to no intervention over a six-month period. This was a poorly conducted study with several shortcomings and errors in statistical analysis of data. There is low-quality evidence from one study showing no clear difference in elbow pain or shoulder pain in participants using light weight, wider handled curettes or heavier and narrow handled curettes for scaling over a 16-week period.\nWe did not find any studies evaluating the effectiveness of cognitive ergonomic interventions or organisational ergonomic interventions.\nOur ability to draw definitive conclusions is restricted by the paucity of suitable studies available to us, and the high risk of bias of the studies that are available. This review highlights the need for well-designed, conducted, and reported RCTs, with long-term follow-up that assess prevention strategies for WMSDs among dental care practitioners.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found two studies, involving 212 participants, that were conducted in dental practices or clinics in Iran and the United States. Both studies assessed physical ergonomic interventions. One study assessed a comprehensive ergonomics intervention, consisting of training, work station modification, and a regular exercise program, and the other study assessed two different types of instruments used for carrying out a dental procedure. The first study found that the comprehensive ergonomics intervention did not reduce musculoskeletal pain in the thighs or feet. The second study found that people using the two different kinds of tools for scraping off dental plaque had similar levels of elbow and shoulder pain. These studies had shortcomings, like poor methodology and short follow-up times, hence we could not draw any definitive conclusions based on their findings.\nWe found no studies that assessed the effectiveness of cognitive or organisational ergonomics interventions. We need studies that are designed, conducted, and reported better to evaluate the effects of physical, cognitive, and organisational ergonomics interventions."
}
] | query-laysum | 24186 |
[
{
"role": "user",
"content": "Abstract: Background\nPrevention of relapse is a major issue in the management of quiescent Crohn's disease (CD). Current therapies (e.g. methotrexate, biologics, 6-mercaptopurine and azathioprine) may be effective for maintaining remission in CD, but these drugs may cause significant adverse events. Interventions that are effective and safe for maintenance of remission in CD are desirable.\nObjectives\nThe primary objectives were to evaluate the efficacy and safety of enteral nutrition for the maintenance of remission in CD and to assess the impact of formula composition on effectiveness.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, the Cochrane IBD Group Specialized Register and clinicaltrials.gov from inception to 27 July 2018. We also searched references of retrieved studies and reviews.\nSelection criteria\nRandomised controlled trials (RCTs) including participants of any age with quiescent CD were considered for inclusion. Studies that compared enteral nutrition with no intervention, placebo or any other intervention were selected for review.\nData collection and analysis\nTwo authors independently screened studies for inclusion, extracted data and assessed methodological quality using the Cochrane risk of bias tool. The primary outcome was clinical or endoscopic relapse as defined by the primary studies. Secondary outcomes included anthropometric measures (i.e. height and weight), quality of life (QoL), adverse events, serious adverse events and withdrawal due to adverse events. We calculated the risk ratio and 95% confidence interval (CI) for dichotomous outcomes. For continuous outcomes, we calculated the mean difference and 95% CI. A random-effects model was used for the statistical analysis. We used the GRADE criteria to assess the overall certainty of the evidence supporting the primary outcome and selected secondary outcomes.\nMain results\nFour RCTs (262 adult participants) met the inclusion criteria. One study (N = 33) compared an elemental diet to a non-elemental (polymeric) diet. One study (N = 51) compared a half elemental diet to a regular free diet. Another study (N = 95) compared an elemental diet to 6-mercaptopurine (6-MP) or a no treatment control group. One study (N= 83) compared a polymeric diet to mesalamine. Two studies were rated as high risk of bias due to lack of blinding or incomplete outcome data. The other two studies were judged to have an unclear risk of bias. The studies were not pooled due to differences in control interventions and the way outcomes were assessed.\nThe effect of an elemental diet compared to a polymeric diet on remission rates or withdrawal due to adverse events is uncertain. Fifty-eight per cent (11/19) of participants in the elemental diet group relapsed at 12 months compared to 57% (8/14) of participants in the polymeric diet group (RR 1.01, 95% CI 0.56 to 1.84; very low certainty evidence). Thirty-two per cent (6/19) of participants in the elemental diet group were intolerant to the enteral nutritional formula because of taste or smell and were withdrawn from the study in the first 2 weeks compared to zero participants (0/14) in the polymeric diet group (RR 9.75, 95% CI 0.59 to 159.93; low certainty evidence). Anthropometric measures, QoL, adverse events and serious adverse events were not reported as outcomes.\nThe effect of an elemental diet (half of total daily calorie requirements) compared to a normal free diet on relapse rates is uncertain. 
Thirty-five per cent (9/26) of participants in the elemental diet group relapsed at 12 months compared to 64% (16/25) of participants in the free diet group (RR 0.54, 95% CI 0.30 to 0.99; very low certainty evidence). No adverse events were reported. This study reported no differences in weight change between the two diet groups. Height and QoL were not reported as outcomes.\nThe effect of an elemental diet compared to 6-MP on relapse rates or adverse events is uncertain. Thirty-eight per cent (12/32) of participants in the elemental diet group relapsed at 12 months compared to 23% (7/30) of participants in the 6-MP group (RR 1.61; 95% CI 0.73 to 3.53; very low certainty evidence). Three per cent (1/32) of participants in the elemental diet group had an adverse event compared to 13% (4/30) of participants in the 6-MP group (RR 0.23, 95% CI 0.03 to 1.98; low certainty evidence). Adverse events in the elemental diet group included surgery due to worsening CD. Adverse events in the 6-MP group included liver injury (n = 2), hair loss (n = 1) and surgery due to an abscess (n = 1). No serious adverse events or withdrawals due to adverse events were reported. Weight, height and QoL were not reported as outcomes\nThe effect of a polymeric diet compared to mesalamine on relapse rates and weight is uncertain. Forty-two per cent (18/43) of participants in the polymeric diet group relapsed at 6 months compared to 55% (22/40) of participants in the mesalamine group (RR 0.76; 95% CI 0.49 to 1.19; low certainty evidence). The mean difference in weight gain over the study period was 1.9 kg higher in the polymeric diet group compared to mesalamine (95% CI -4.62 to 8.42; low certainty evidence). Two participants in the polymeric diet group experienced nausea and four had diarrhoea. It is unclear if any participants in the mesalamine group had an adverse event. Height, QoL, serious adverse events and withdrawal due to adverse events were not reported as outcomes.\nAuthors' conclusions\nThe results for the outcomes assessed in this review are uncertain and no firm conclusions regarding the efficacy and safety of enteral nutrition in quiescent CD can be drawn. More research is needed to determine the efficacy and safety of using enteral nutrition as maintenance therapy in CD. Currently, there are four ongoing studies (estimated enrolment of 280 participants). This review will be updated when the results of these studies are available.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Four studies including 262 adult participants with Crohn's disease in remission were included. One study (33 participants) compared an elemental diet to a non-elemental (polymeric) diet. One study (51 participants) compared an elemental diet to a normal diet (no supplements). One study (95 participants) compared an elemental diet to 6-mercaptopurine or a no treatment control group. One study (83 participants) compared a non-elemental polymeric diet to mesalamine. The researchers searched the medical literature extensively up to 27 July 2018.\nThe study comparing an elemental diet to a polymeric diet found no difference in remission rates at 12 months. Six elemental diet participants were not able to tolerate the enteral nutritional formula because of taste or smell and were withdrawn from the study. Participants who received half of their total daily calorie requirements as elemental diet and the remaining half by normal diet had a lower chance of relapse at 12 months compared to participants who received a free diet. No side effects were reported in this study. The study comparing an elemental diet to 6-mercaptopurine did not show any difference in relapse rates at 12 months. There was no difference in side effect rates. The only side effect reported in the elemental diet group was surgery due to worsening Crohn's disease. Side effects in the 6-mercaptopurine group included liver injury in two participants, hair loss in one participant and surgery to treat an abscess in one participant. The study comparing a polymeric diet to mesalamine found no difference in relapse rates at six months. Two participants the polymeric diet group experienced nausea and four had diarrhoea. It is unclear if any participants in the mesalamine group had side effects. No serious side effects were reported in any of the studies.\nThe results for the outcomes assessed in this review are uncertain and no firm conclusions regarding the effectiveness and safety of enteral nutrition for maintenance of remission in Crohn's disease can be drawn. More research is needed to determine the effectiveness and safety of using enteral nutrition as maintenance therapy in Crohn's disease. Currently, there are four ongoing studies (estimated enrolment of 280 participants). This review will be updated when the results of these studies are available."
}
] | query-laysum | 24187 |
[
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is an inherited autosomal recessive blood condition and is one of the most prevalent genetic blood diseases worldwide. Acute chest syndrome is a frequent complication of sickle cell disease, as well as a major cause of morbidity and the greatest single cause of mortality in children with sickle cell disease. Standard treatment may include intravenous hydration, oxygen as treatment for hypoxia, antibiotics to treat the infectious cause and blood transfusions may be given. This is an update of a Cochrane Review first published in 2010 and updated in 2016.\nObjectives\nTo assess the effectiveness of blood transfusions, simple and exchange, for treating acute chest syndrome by comparing improvement in symptoms and clinical outcomes against standard care.\nSearch methods\nWe searched The Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register, which comprises references identified from comprehensive electronic database searches and handsearching of relevant journals and abstract books of conference proceedings.\nDate of the most recent search: 30 May 2019.\nSelection criteria\nRandomised controlled trials and quasi-randomised controlled trials comparing either simple or exchange transfusion versus standard care (no transfusion) in people with sickle cell disease suffering from acute chest syndrome.\nData collection and analysis\nBoth authors independently selected trials and assessed the risk of bias, no data could be extracted.\nMain results\nOne trial was eligible for inclusion in the review. While in the multicentre trial 237 people were enrolled (169 SCC, 42 SC, 15 Sβ⁰-thalassaemia, 11Sβ+-thalassaemia); the majority were recruited to an observational arm and only ten participants met the inclusion criteria for randomisation. Of these, four were randomised to the transfusion arm and received a single transfusion of 7 to 13 mL/kg packed red blood cells, and six were randomised to standard care. None of the four participants who received packed red blood cells developed acute chest syndrome, while 33% (two participants) developed acute chest syndrome in standard care arm. No data for any pre-defined outcomes were available.\nAuthors' conclusions\nWe found only one very small randomised controlled trial; this is not enough to make any reliable conclusion to support the use of blood transfusion. Whilst there appears to be some indication that chronic blood transfusion may play a roll in reducing the incidence of acute chest syndrome in people with sickle cell disease and albeit offering transfusions may be a widely accepted clinical practice, there is currently no reliable evidence to support or refute the perceived benefits of these as treatment options; very limited information about any of the potential harms associated with these interventions or indeed guidance that can be used to aid clinical decision making. Clinicians should therefore base any treatment decisions on a combination of; their clinical experience, individual circumstances and the unique characteristics and preferences of adequately informed people with sickle cell disease who are suffering with acute chest syndrome. This review highlights the need of further high quality research to provide reliable evidence for the effectiveness of these interventions for the relief of the symptoms of acute chest syndrome in people with sickle cell disease.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One study was included in the review, there were two parts to the study, one larger observational study and one randomised trial which was to assess transfusion versus standard care to prevent acute chest syndrome in people with sickle cell disease, while twenty-six centres were contracted. Only 10 participants were enrolled into the randomised trial. The effects of blood transfusions could not be determined from the trial; there were no data that could be presented or therefore analysed within this review for the very small number of participants enrolled.\nTherefore, this unique study did not show how effective blood transfusions might be for treating acute chest syndrome in people with sickle cell disease. Future research is needed to provide evidence for people to make informed decisions on whether blood transfusions are effective for treating acute chest syndrome in people with sickle cell disease."
}
] | query-laysum | 24188 |
[
{
"role": "user",
"content": "Abstract: Background\nAntibiotics are sometimes used to prevent recurrent sore throat, despite concern about resistance. However, there is conflicting primary evidence regarding their effectiveness.\nObjectives\nTo assess the effects of antibiotics in patients with recurrent sore throat.\nSearch methods\nThe Cochrane Ear, Nose and Throat Disorders Group (CENTDG) Trials Search Co-ordinator searched the CENTDG Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 5); PubMed; EMBASE; CINAHL; Web of Science; Clinicaltrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 25 June 2015.\nSelection criteria\nRandomised controlled trials (RCTs) of antibiotics in adults and children suffering from pre-existing recurrent sore throat, defined as three or more sore throats in a year, examining the incidence of sore throat recurrence, with follow-up of at least 12 months post-antibiotic therapy.\nData collection and analysis\nTwo authors independently assessed trial quality and extracted data. Multiple attempts to contact the authors of one study yielded no response.\nMain results\nWe identified no trials that met the inclusion criteria for the review. We discarded the majority of the references retrieved from our search following screening of the title and abstract. We formally excluded four studies following review of the full-text report.\nAuthors' conclusions\nThere is insufficient evidence to determine the effectiveness of antibiotics for preventing recurrent sore throat. This finding must be balanced against the known adverse effects and cost of antibiotic therapy, when considering antibiotics for this purpose. There is a need for high quality RCTs that compare the effects of antibiotics versus placebo in adults and children with pre-existing recurrent sore throat on the following outcomes: incidence of sore throat recurrence, adverse effects, days off work and absence from school, and the incidence of complications. Future studies should be conducted and reported according to the CONSORT statement.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "No trials could be included in this review. We therefore conclude that there is insufficient evidence to determine the effectiveness of antibiotics for preventing recurrent sore throat and this finding must be balanced against the known adverse effects and cost of antibiotic therapy when considering antibiotics for this purpose. We have identified a need for high quality randomised controlled trials that compare the effects of antibiotics versus placebo in adults and children with pre-existing recurrent sore throat on the following outcomes: incidence of sore throat recurrence, adverse effects, days off work and absence from school, and the incidence of complications."
}
] | query-laysum | 24189 |
[
{
"role": "user",
"content": "Abstract: Background\nMedical treatment and detoxification from opiate disorders includes oral administration of opioid agonists. Dihydrocodeine (DHC) substitution treatment is typically low threshold and therefore has the capacity to reach wider groups of opiate users. Decisions to prescribe DHC to patients with less severe opiate disorders centre on its perceived safety, reduced toxicity, shorter half-life and more rapid onset of action, and potential retention of patients. This review set out to investigate the effects of DHC in comparison to other pharmaceutical opioids and placebos in the detoxification and substitution of individuals with opiate use disorders.\nObjectives\nTo investigate the effectiveness of DHC in reducing illicit opiate use and other health-related outcomes among adults compared to other drugs or placebos used for detoxification or substitution therapy.\nSearch methods\nIn February 2019 we searched Cochrane Drugs and Alcohol's Specialised Register, CENTRAL, PubMed, Embase and Web of Science. We also searched for ongoing and unpublished studies via ClinicalTrials.gov, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and Trialsjournal.com. All searches included non-English language literature. We handsearched references of topic-related systematic reviews and the included studies.\nSelection criteria\nWe included randomised controlled trials that evaluated the effect of DHC for detoxification and maintenance substitution therapy for adolescent (aged 15 years and older) and adult illicit opiate users.\nThe primary outcomes were abstinence from illicit opiate use following detoxification or maintenance therapy measured by self-report or urinalysis. The secondary outcomes were treatment retention and other health and behaviour outcomes.\nData collection and analysis\nWe followed the standard methodological procedures that are outlined by Cochrane. This includes the GRADE approach to appraise the quality of evidence.\nMain results\nWe included three trials (in five articles) with 385 opiate-using participants that measured outcomes at different follow-up periods in this review. Two studies with 150 individuals compared DHC with buprenorphine for detoxification, and one study with 235 participants compared DHC to methadone for maintenance substitution therapy. We downgraded the quality of evidence mainly due to risk of bias and imprecision.\nFor the two studies that compared DHC to buprenorphine, we found low-quality evidence of no significant difference between DHC and buprenorphine for detoxification at six-month follow-up (risk ratio (RR) 0.59, 95% confidence interval (CI) 0.25 to 1.39; P = 0.23) in the meta-analysis for the primary outcome of abstinence from illicit opiates. Similarly, low-quality evidence indicated no difference for treatment retention (RR 1.29, 95% CI 0.99 to 1.68; P = 0.06).\nIn the single trial that compared DHC to methadone for maintenance substitution therapy, the evidence was also of low quality, and there may be no difference in effects between DHC and methadone for reported abstinence from illicit opiates (mean difference (MD) −0.01, 95% CI −0.31 to 0.29). 
For treatment retention at six months' follow-up in this single trial, the RR calculated with an intention-to-treat analysis also indicated that there may be no difference between DHC and methadone (RR 1.04, 95% CI 0.94 to 1.16).\nThe studies that compared DHC to buprenorphine reported no serious adverse events, while the DHC versus methadone study reported one death due to methadone overdose.\nAuthors' conclusions\nWe found low-quality evidence that DHC may be no more effective than other commonly used pharmacological interventions in reducing illicit opiate use. It is therefore premature to make any conclusive statements about the effectiveness of DHC, and it is suggested that further high-quality studies are conducted, especially in low- to middle-income countries.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the evidence was of low quality. There were two major issues across the studies. There was no blinding of the participants or those who assessed the outcomes, so that they were aware of which group they were in. There was also a high level of participants who dropped out of two of the studies."
}
] | query-laysum | 24190 |
[
{
"role": "user",
"content": "Abstract: Background\nVisual problems in older people are common and frequently under-reported. The effects of poor vision in older people are wide reaching and include falls, confusion and reduced quality of life. Much of the visual impairment in older ages can be treated (e.g. cataract surgery, correction of refractive error). Vision screening may therefore reduce the number of older people living with sight loss.\nObjectives\nThe objective of this review was to assess the effects on vision of community vision screening of older people for visual impairment.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 10); Ovid MEDLINE; Ovid Embase; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 23 November 2017.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared vision screening alone or as part of a multi-component screening package as compared to no vision screening or standard care, on the vision of people aged 65 years or over in a community setting. We included trials that used self-reported visual problems or visual acuity testing as the screening tool.\nData collection and analysis\nWe used standard methods expected by Cochrane. We graded the certainty of the evidence using GRADE.\nMain results\nVisual outcome data were available for 10,608 people in 10 trials. Four trials took place in the UK, two in Australia, two in the United States and two in the Netherlands. Length of follow-up ranged from one to five years. Three of these studies were cluster-randomised trials whereby general practitioners or family physicians were randomly allocated to undertake vision screening or no vision screening. All studies were funded by government agencies. Overall we judged the studies to be at low risk of bias and only downgraded the certainty of the evidence (GRADE) for imprecision.\nSeven trials compared vision screening as part of a multi-component screening versus no screening. Six of these studies used self-reported vision as both screening tool and outcome measure, but did not directly measure vision. One study used a combination of self-reported vision and visual acuity measurement: participants reporting vision problems at screening were treated by the attending doctor, referred to an eye care specialist or given information about resources that were available to assist with poor vision. There was a similar risk of \"not seeing well\" at follow-up in people screened compared with people not screened in meta-analysis of six studies (risk ratio (RR) 1.05, 95% confidence interval (CI) 0.97 to 1.14, 4522 participants high-certainty evidence). One trial reported \"improvement in vision\" and this occurred slightly less frequently in the screened group (RR 0.85, 95% CI 0.52 to 1.40, 230 participants, moderate-certainty evidence).\nTwo trials compared vision screening (visual acuity testing) alone with no vision screening. In one study, distance visual acuity was similar in the two groups at follow-up (mean difference (MD) 0.02 logMAR, 95% CI −0.02 to 0.05, 532 participants, high-certainty evidence). There was also little difference in near acuity (MD 0.02 logMAR, 95% CI −0.03 to 0.07, 532 participants, high-certainty evidence). 
There was no evidence of any important difference in quality of life (MD −0.06 National Eye Institute 25-item visual function questionnaire (VFQ-25) score adjusted for baseline VFQ-25 score, 95% CI −2.3 to 1.1, 532 participants, high-certainty evidence). The other study could not be included in the data analysis as the number of participants in each of the arms at follow-up could not be determined. However the authors stated that there was no significant difference in mean visual acuity in participants who had visual acuity assessed at baseline (39 letters) as compared to those who did not have their visual acuity assessed (35 letters, P = 0.25, 121 participants).\nOne trial compared a detailed health assessment including measurement of visual acuity (intervention) with a brief health assessment including one question about vision (standard care). People given the detailed health assessment had a similar risk of visual impairment (visual acuity worse than 6/18 in either eye) at follow-up compared with people given the brief assessment (RR 1.07, 95% CI 0.84 to 1.36, 1807 participants, moderate-certainty evidence). The mean composite score of the VFQ-25 was 86.0 in the group that underwent visual acuity screening compared with 85.6 in the standard care group, a difference of 0.40 (95% CI −1.70 to 2.50, 1807 participants, high-certainty evidence).\nAuthors' conclusions\nThe evidence from RCTs undertaken to date does not support vision screening for older people living independently in a community setting, whether in isolation or as part of a multi-component screening package. This is true for screening programmes involving questions about visual problems, or direct measurements of visual acuity.\nThe most likely reason for this negative review is that the populations within the trials often did not take up the offered intervention as a result of the vision screening and large proportions of those who did not have vision screening appeared to seek their own intervention. Also, trials that use questions about vision have a lower sensitivity and specificity than formal visual acuity testing. Given the importance of visual impairment among older people, further research into strategies to improve vision of older people is needed. The effectiveness of an optimised primary care-based screening intervention that overcomes possible factors contributing to the observed lack of benefit in trials to date warrants assessment; trials should consider including more dependent participants, rather than those living independently in the community.\n\nGiven the provided abstract, please respond to the following query: \"Key message\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no evidence that community screening for visual impairment in older people reduces the level of visual impairment in people living independently in the community. Further research on the barriers to accessing care at older ages is needed, as well as research investigating the effect of vision screening on more dependent populations of older people."
}
] | query-laysum | 24191 |
[
{
"role": "user",
"content": "Abstract: Background\nCOVID-19 infection poses a serious risk to patients and – due to its contagious nature – to those healthcare workers (HCWs) treating them. The risks of transmission of infection are greater when a patient is undergoing an aerosol-generating procedure (AGP). Not all those with COVID-19 infection are symptomatic, or suspected of harbouring the infection. If a patient who is not known to have or suspected of having COVID-19 infection is to undergo an AGP, it would nonetheless be sensible to minimise the risk to those HCWs treating them.\nIf the mouth and nose of an individual undergoing an AGP are irrigated with antimicrobial solutions, this may be a simple and safe method of reducing the risk of any covert infection being passed to HCWs through droplet transmission or direct contact. Alternatively, the use of antimicrobial solutions by the HCW may decrease the chance of them acquiring COVID-19 infection. However, the use of such antimicrobial solutions may be associated with harms related to the toxicity of the solutions themselves or alterations in the natural microbial flora of the mouth or nose.\nObjectives\nTo assess the benefits and harms of antimicrobial mouthwashes and nasal sprays administered to HCWs and/or patients when undertaking AGPs on patients without suspected or confirmed COVID-19 infection.\nSearch methods\nInformation Specialists from Cochrane ENT and Cochrane Oral Health searched the Central Register of Controlled Trials (CENTRAL 2020, Issue 6); Ovid MEDLINE; Ovid Embase and additional sources for published and unpublished trials. The date of the search was 1 June 2020.\nSelection criteria\nThis is a question that urgently requires evidence, however at the present time we did not anticipate finding many completed RCTs. We therefore planned to include the following types of studies: randomised controlled trials (RCTs); quasi-RCTs; non-randomised controlled trials; prospective cohort studies; retrospective cohort studies; cross-sectional studies; controlled before-and-after studies. We set no minimum duration for the studies.\nWe sought studies comparing any antimicrobial mouthwash and/or nasal spray (alone or in combination) at any concentration, delivered to the patient or HCW before and/or after an AGP.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Our primary outcomes were: 1) incidence of symptomatic or test-positive COVID-19 infection in HCWs or patients; 2) significant adverse event: anosmia (or disturbance in sense of smell). Our secondary outcomes were: 3) COVID-19 viral content of aerosol (when present); 4) change in COVID-19 viral load at site(s) of irrigation; 5) other adverse events: changes in microbiome in oral cavity, nasal cavity, oro- or nasopharynx; 6) other adverse events: allergy, irritation/burning of nasal, oral or oropharyngeal mucosa (e.g. erosions, ulcers, bleeding), long-term staining of mucous membranes or teeth, accidental ingestion. We planned to use GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe found no completed studies to include in this review.\nAuthors' conclusions\nWe identified no studies for inclusion in this review, nor any ongoing studies. The absence of completed studies is not surprising given the relatively recent emergence of COVID-19 infection. 
However, we are disappointed that this important clinical question is not being addressed by ongoing studies.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no completed, or ongoing, studies to include in this review."
}
] | query-laysum | 24192 |
[
{
"role": "user",
"content": "Abstract: Background\nAdalimumab is an IgG1 monoclonal antibody that targets and blocks tumor necrosis factor-alpha, a pro-inflammatory cytokine involved in the pathogenesis of Crohn's disease (CD). A significant proportion of people with CD fail conventional therapy or therapy with biologics or develop significant adverse events. Adalimumab may be an effective alternative for these individuals.\nObjectives\nThe objectives of this review were to assess the efficacy and safety of adalimumab for the induction of remission in CD.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, the Cochrane IBD Group Specialized Register, ClinicalTrials.Gov and the World Health Organization trial registry (ICTRP) from inception to 16 April 2019. References and conference abstracts were searched to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs) comparing any dose of adalimumab to placebo or an active comparator in participants with active CD were included.\nData collection and analysis\nTwo authors independently screened studies, extracted data and assessed bias using the Cochrane 'Risk of bias' tool. The primary outcome was the failure to achieve clinical remission, as defined by the original studies. Clinical remission was defined as a Crohn's Disease Activity Index (CDAI) score of less than 150 points. Secondary outcomes included failure to achieve clinical response (defined as a decrease in CDAI of > 100 points or > 70 points from baseline), failure to achieve endoscopic remission and response, failure to achieve histological remission and response, failure to achieve steroid withdrawal, adverse events (AEs) and serious adverse events (SAEs), withdrawal from study due to AEs and quality of life measured by a validated instrument. We calculated the risk ratio (RR) and 95% confidence intervals (95% CI) for dichotomous outcomes. Data were pooled for analysis if the participants, interventions, outcomes and time frame were similar. Data were analyzed on an intention-to-treat basis. The overall certainty of the evidence was assessed using GRADE.\nMain results\nThree placebo-controlled RCTs (714 adult participants) were included. The participants had moderate to severely active CD (CDAI 220 to 450). Two studies were rated as at low risk of bias and one study was rated as at unclear risk of bias. Seventy-six per cent (342/451) of adalimumab participants failed to achieve clinical remission at four weeks compared to 91% (240/263) of placebo participants (RR 0.85, 95% CI 0.79 to 0.90; high-certainty evidence). Forty-four per cent (197/451) of adalimumab participants compared to 66% (173/263) of placebo participants failed to achieve a 70-point clinical response at four weeks (RR 0.68, 95% CI 0.59 to 0.79; high-certainty evidence). At four weeks, 57% (257/451) of adalimumab participants failed to achieve a 100-point clinical response compared to 76% (199/263) of placebo participants (RR 0.77, 95% CI 0.69 to 0.86; high-certainty evidence). Sixty-two per cent (165/268) of adalimumab participants experienced an AE compared to 72% (188/263) of participants in the placebo group (RR 0.90, 95% CI 0.74 to 1.09; moderate-certainty evidence). Two percent (6/268) of adalimumab participants experienced a SAE compared to 5% (13/263) of participants in the placebo group (RR 0.44, 95% CI 0.17 to 1.15; low-certainty evidence). 
Lastly, 1% (3/268) of adalimumab participants withdrew due to AEs compared to 3% (8/268) of participants in the placebo group (RR 0.38, 95% CI 0.11 to 1.30; low-certainty evidence). Commonly reported adverse events included injection site reactions, abdominal pain, fatigue, worsening CD and nausea. Quality of life data did not allow for meta-analysis. Three studies reported better quality of life at four weeks with adalimumab (measured with either Inflammatory Bowel Disease Questionnaire or Short-Form 36; moderate-certainty evidence). Endoscopic remission and response, histologic remission and response, and steroid withdrawal were not reported in the included studies.\nAuthors' conclusions\nHigh-certainty evidence suggests that adalimumab is superior to placebo for induction of clinical remission and clinical response in people with moderate to severely active CD. Although the rates of AEs, SAEs and withdrawals due to AEs were lower in adalimumab participants compared to placebo, we are uncertain about the effect of adalimumab on AEs due to the low number of events. Therefore, no firm conclusions can be drawn regarding the safety of adalimumab in CD. Futher studies are required to look at the long-term effectiveness and safety of using adalimumab in participants with CD.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "High-certainty evidence suggests that treating participants with adalimumab is better than treating them with placebo for inducing remission and improving symptoms in people with moderate to severely active Crohn's disease. Side effects were lower in adalimumab participants compared to placebo. However, we are uncertain about the effect of adalimumab on side effects due to the low number of events, therefore, no firm conclusions can be drawn regarding the harms (side effects) of adalimumab in Crohn's disease. Futher studies are required to look at the long-term benefits and harms of using adalimumab in people with Crohn's disease."
}
] |
query-laysum
|
24193
|
[
{
"role": "user",
"content": "Abstract: Background\nProphylactic removal of asymptomatic disease-free impacted wisdom teeth is the surgical removal of wisdom teeth in the absence of symptoms and with no evidence of local disease. Impacted wisdom teeth may be associated with pathological changes, such as pericoronitis, root resorption, gum and alveolar bone disease (periodontitis), caries and the development of cysts and tumours. When surgical removal is performed in older people, the risk of postoperative complications, pain and discomfort is increased. Other reasons to justify prophylactic removal of asymptomatic disease-free impacted third molars have included preventing late lower incisor crowding, preventing damage to adjacent structures such as the second molar or the inferior alveolar nerve, in preparation for orthognathic surgery, in preparation for radiotherapy or during procedures to treat people with trauma to the affected area. Removal of asymptomatic disease-free wisdom teeth is a common procedure, and researchers must determine whether evidence supports this practice. This review is an update of an review originally published in 2005 and previously updated in 2012 and 2016.\nObjectives\nTo evaluate the effects of removal compared with retention (conservative management) of asymptomatic disease-free impacted wisdom teeth in adolescents and adults.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health’s Trials Register (to 10 May 2019), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2019, Issue 4), MEDLINE Ovid (1946 to 10 May 2019), and Embase Ovid (1980 to 10 May 2019). The US National Institutes of Health Trials Registry (ClinicalTrials.gov)and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. .\nSelection criteria\nWe included randomised controlled trials (RCTs), with no restriction on length of follow-up, comparing removal (or absence) with retention (or presence) of asymptomatic disease-free impacted wisdom teeth in adolescents or adults. We also considered quasi-RCTs and prospective cohort studies for inclusion if investigators measured outcomes with follow-up of five years or longer.\nData collection and analysis\nEight review authors screened search results and assessed the eligibility of studies for inclusion according to the review inclusion criteria. Eight review authors independently and in duplicate conducted the risk of bias assessments. 
When information was unclear, we contacted the study authors for additional information.\nMain results\nThis review update includes the same two studies that were identified in our previous version of the review: one RCT with a parallel-group design, which was conducted in a dental hospital setting in the United Kingdom, and one prospective cohort study, which was conducted in the private sector in the USA.\nPrimary outcome\nNo eligible studies in this review reported the effects of removal compared with retention of asymptomatic disease-free impacted wisdom teeth on health-related quality of life\nSecondary outcomes\nWe found only low- to very low-certainty evidence of the effects of removal compared with retention of asymptomatic disease-free impacted wisdom teeth for a limited number of secondary outcome measures.\nOne prospective cohort study, reporting data from a subgroup of 416 healthy male participants, aged 24 to 84 years, compared the effects of the absence (previous removal or agenesis) against the presence of asymptomatic disease-free impacted wisdom teeth on periodontitis and caries associated with the distal aspect of the adjacent second molar during a follow-up period of three to over 25 years. Very low-certainty evidence suggests that the presence of asymptomatic disease-free impacted wisdom teeth may be associated with increased risk of periodontitis affecting the adjacent second molar in the long term. In the same study, which is at serious risk of bias, there is insufficient evidence to demonstrate a difference in caries risk associated with the presence or absence of impacted wisdom teeth.\nOne RCT with 164 randomised and 77 analysed adolescent participants compared the effect of extraction with retention of asymptomatic disease-free impacted wisdom teeth on dimensional changes in the dental arch after five years. Participants (55% female) had previously undergone orthodontic treatment and had 'crowded' wisdom teeth. No evidence from this study, which was at high risk of bias, was found to suggest that removal of asymptomatic disease-free impacted wisdom teeth has a clinically significant effect on dimensional changes in the dental arch.\nThe included studies did not measure any of our other secondary outcomes: costs, other adverse events associated with retention of asymptomatic disease-free impacted wisdom teeth (pericoronitis, root resorption, cyst formation, tumour formation, inflammation/infection) and adverse effects associated with their removal (alveolar osteitis/postoperative infection, nerve injury, damage to adjacent teeth during surgery, bleeding, osteonecrosis related to medication/radiotherapy, inflammation/infection).\nAuthors' conclusions\nInsufficient evidence is available to determine whether asymptomatic disease-free impacted wisdom teeth should be removed or retained. Although retention of asymptomatic disease-free impacted wisdom teeth may be associated with increased risk of periodontitis affecting adjacent second molars in the long term, the evidence is very low certainty. Well-designed RCTs investigating long-term and rare effects of retention and removal of asymptomatic disease-free impacted wisdom teeth, in a representative group of individuals, are unlikely to be feasible. In their continuing absence, high quality, long-term prospective cohort studies may provide valuable evidence in the future. 
Given the current lack of available evidence, patient values should be considered and clinical expertise used to guide shared decision-making with people who have asymptomatic disease-free impacted wisdom teeth. If the decision is made to retain these teeth, clinical assessment at regular intervals to prevent undesirable outcomes is advisable.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We assessed the evidence provided by the two studies to be low to very low certainty, so we cannot rely on these findings. High-quality research is urgently needed to support clinical practice in this area."
}
] |
query-laysum
|
24194
|
[
{
"role": "user",
"content": "Abstract: Background\n'Keratinocyte cancer' is now the preferred term for the most commonly identified skin cancers basal cell carcinoma (BCC) and cutaneous squamous cell carcinoma (cSCC), which were previously commonly categorised as non-melanoma skin cancers (NMSC). Keratinocyte cancer (KC) represents about 95% of malignant skin tumours. Lifestyle changes have led to increased exposure to the sun, which has, in turn, led to a significant increase of new cases of KC, with a worldwide annual incidence of between 3% and 8%. The successful use of preventive measures could mean a significant reduction in the resources used by health systems, compared with the high cost of the treatment of these conditions. At present, there is no information about the quality of the evidence for the use of these sun protection strategies with an assessment of their benefits and risks.\nObjectives\nTo assess the effects of sun protection strategies (i.e. sunscreen and barrier methods) for preventing keratinocyte cancer (that is, basal cell carcinoma (BCC) and cutaneous squamous cell carcinoma (cSCC) of the skin) in the general population.\nSearch methods\nWe searched the following databases up to May 2016: the Cochrane Skin Group Specialised Register, CENTRAL, MEDLINE, Embase, and LILACS. We also searched five trial registries and the bibliographies of included studies for further references to relevant trials.\nSelection criteria\nWe included randomised controlled clinical trials (RCTs) of preventive strategies for keratinocyte cancer, such as physical barriers and sunscreens, in the general population (children and adults), which may provide information about benefits and adverse events related to the use of solar protection measures. We did not include trials focused on educational strategies to prevent KC or preventive strategies in high-risk groups. Our prespecified primary outcomes were BCC or cSCC confirmed clinically or by histopathology at any follow-up and adverse events.\nData collection and analysis\nTwo review authors independently selected studies for eligibility using Early Review Organizing Software (EROS). Similarly, two review authors independently used predesigned data collection forms to extract information from the original study reports about the participants, methods of randomisation, blinding, comparisons of interest, number of participants originally randomised by arm, follow-up losses, and outcomes, and they assessed the risk of bias. We resolved any disagreement by consulting a third author and contacted trial investigators of identified trials to obtain additional information. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included one RCT (factorial design) that randomised 1621 participants.\nThis study compared the daily application of sunscreen compared with discretionary use of sunscreen, with or without beta-carotene administration, in the general population. The study was undertaken in Australia; 55.2% of participants had fair skin, and they were monitored for 4.5 years for new cases of BCC or cSCC assessed by histopathology. We found this study to be at low risk of bias for domains such as allocation, blinding, and incomplete outcome data. However, we found multiple unclear risks related to other biases, including an unclear assessment of possible interactions between the effects of the different interventions evaluated (that is, sunscreen and beta-carotene). 
We found no difference in terms of the number of participants developing BCC (n = 1621; risk ratio (RR) 1.03, 95% confidence interval (CI) 0.74 to 1.43) or cSCC (n = 1621; RR 0.88, 95% CI 0.50 to 1.54) when comparing daily application of sunscreen with discretionary use, even when analyses were restricted to groups without beta-carotene supplementation. This evidence was of low quality, which means that future studies may well alter our confidence in this evidence.\nWe reported adverse events narratively; they included skin irritation and contact allergy.\nWe identified no studies that evaluated other sun protection measures, such as the use of sun-protective clothing, sunglasses, or hats, or seeking the shade when outdoors.\nAuthors' conclusions\nIn this review, we assessed the effect of solar protection in preventing the occurrence of new cases of keratinocyte cancer. We only found one study that was suitable for inclusion. This was a study of sunscreens, so we were unable to assess any other forms of sun protection. The study addressed our prespecified primary outcomes, but not most of our secondary outcomes. We were unable to demonstrate from the available evidence whether sunscreen was effective for the prevention of basal cell carcinoma (BCC) or cutaneous squamous cell carcinoma (cSCC).\nOur certainty in the evidence was low because there was a lack of histopathological confirmation of BCC or cSCC in a significant percentage of cases. Amongst other sources of bias, it was not clear whether the study authors had assessed any interaction effects between the sunscreen and beta-carotene interventions. We think that further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our one included study was a study of sunscreens, so we were unable to assess any other forms of sun protection.\nWe identified no studies that evaluated other sun protection measures, such as the use of sun-protective clothing, sunglasses, or hats, or seeking the shade when outdoors.\nWe did not find evidence for the effectiveness of daily sunscreen for preventing BCC or cSCC compared with the occasional use of sunscreen. The certainty of the evidence was low, which means that future studies may alter this result.\nSide effects from the sunscreen used with or without the addition of beta-carotene included a low percentage of cases of contact allergy and skin irritation."
}
] |
query-laysum
|
24195
|
[
{
"role": "user",
"content": "Abstract: Background\nExtensive evidence exists showing analgesic effects of sweet solutions for newborns and infants. It is less certain if the same analgesic effects exist for children one year to 16 years of age. This is an updated version of the original Cochrane review published in Issue 10, 2011 (Harrison 2011) titled Sweet tasting solutions for reduction of needle-related procedural pain in children aged one to 16 years.\nObjectives\nTo determine the efficacy of sweet tasting solutions or substances for reducing needle-related procedural pain in children beyond one year of age.\nSearch methods\nSearches were run to the end of June 2014. We searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects (DARE), Cochrane Methodology Register, Health Technology Assessment, the NHS Economic Evaluation Database, MEDLINE, EMBASE, PsycINFO, and ACP Journal Club (all via OvidSP), and CINAH (via EBSCOhost). We applied no language restrictions.\nSelection criteria\nPublished or unpublished randomised controlled trials (RCT) in which children aged one year to 16 years, received a sweet tasting solution or substance for needle-related procedural pain. Control conditions included water, non-sweet tasting substances, pacifier, distraction, positioning/containment, breastfeeding, or no treatment.\nData collection and analysis\nOutcome measures included crying duration, composite pain scores, physiological or behavioral pain indicators, self-report of pain or parental or healthcare professional-report of the child's pain. We reported mean differences (MD), weighted mean difference (WMD), or standardized mean difference (SMD) with 95% confidence intervals (CI) using fixed-effect or random-effects models as appropriate for continuous outcome measures. We reported risk ratio (RR), risk difference (RD), and the number needed to treat to benefit (NNTB) for dichotomous outcomes. We used the I2 statistic to assess between-study heterogeneity.\nMain results\nWe included one unpublished and seven published studies (total of 808 participants); four more studies and 478 more participants than the 2011 review. Six trials included young children aged one to four years receiving sucrose or candy lollypops for immunisation pain compared with water or no treatment. Usual care included topical anaesthetics, upright parental holding, and distraction. All studies were well designed blinded RCTs, however, five of the six studies had a high risk of bias based on small sample sizes.\nTwo studies included school-aged children receiving sweet or unsweetened chewing gum before, or before and during, immunisation and blood collection. Both studies, conducted by the same author, had a high risk of bias based on small sample sizes.\nResults for the toddlers/pre-school children were conflicting. Duration of cry, using a random-effects model, was not significantly reduced by sweet taste (six trials, 520 children, WMD -15 seconds, 95% CI -54 to 24, I2 = 94%).\nComposite pain score at time of first needle was reported in four studies (n = 121 children). 
The scores were not significantly different between the sucrose and control group (SMD -0.26, 95% CI -1.27 to 0.75, I2 = 86%).\nA Children's Hospital of Eastern Ontario Pain Scale score > 4 was significantly less common in the sucrose group compared to the control group in one study (n = 472, RR 0.55, 95% CI 0.45 to 0.67; RD -0.29, 95% CI -0.37 to -0.20; NNTB 3, 95% CI 3 to 5; tests for heterogeneity not applicable.\nFor school-aged children, chewing sweet gum before needle-related painful procedures (two studies, n = 111 children) or during the procedures (two studies, n = 103 children) did not significantly reduce pain scores. A comparison of the Faces Pain Scale scores in children chewing sweet gum before the procedures compared with scores of children chewing unsweetened gum revealed a WMD of -0.15 (95% CI -0.61 to 0.30). Similar results were found when comparing the chewing of sweet gum with unsweetened gum during the procedure (WMD 0.23, 95% CI -0.28 to 0.74). The Colored Analogue Scale for children chewing sweet gum compared to unsweetened gum before the procedure was not significantly different (WMD 0.24 (-0.69 to 1.18)) nor was it different when children chewed the gum during the procedure (WMD 0.86 (95% CI -0.12 to 1.83)). There was no heterogeneity for any of these analyses in school-aged children (I2 = 0%).\nAuthors' conclusions\nBased on the eight studies included in this systematic review update, two of which were subgroups of small numbers of eligible toddlers from larger studies, and three of which were pilot RCTs with small numbers of participants, there is insufficient evidence of the analgesic effects of sweet tasting solutions or substances during acutely painful procedures in young children between one and four years of age. Further rigorously conducted, adequately powered RCTs are warranted in this population. Based on the two studies by the same author, there was no evidence of analgesic effects of sweet taste in school-aged children. As there are other effective evidence-based strategies available to use in this age group, further trials are not warranted.\nDespite the addition of four studies in this review, conclusions have not changed since the last version of the review.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the literature for published and unpublished studies up to October 2014."
}
] |
query-laysum
|
24196
|
[
{
"role": "user",
"content": "Abstract: Background\nThe genital infection caused by Chlamydia trachomatis (CT) is a common sexually transmitted infection (STI) globally. The infection is mainly asymptomatic in women, thus it can produce infertility and chronic pelvic pain. In men infection is mainly symptomatic, but can evolve to prostatitis. Clinical practice guidelines for CT urogenital infections do not give any specific recommendation about which antibiotic use as first option\nObjectives\nTo assess the efficacy and safety of antibiotic treatment for CT genital infection in men and non-pregnant women.\nSearch methods\nThe Cochrane Sexually Transmitted Infections' (STI) Information Specialist developed the electronic searches in electronic databases (CENTRAL, MEDLINE, Embase and LILACS), and trials registers. We searched studies published from inception to June 2018.\nSelection criteria\nWe included parallel, randomised controlled trials (RCTs) of men, and sexually-active, non-pregnant women with CT infection (urethritis or uterine cervicitis or asymptomatic), diagnosed by cell culture for CT, nucleic acid amplification tests (NAAT) or antigen-based detection methods, who had been treated with any of the antibiotic regimens recommended by any of the updated to 2013 CT Guidelines.\nData collection and analysis\nFour review authors screened evidence according to selection criteria and independently extracted data and assessed risk of bias. Two authors developed the 'Summary of findings' tables. We used a fixed-effect meta-analysis model for combining data where it was reasonable to assume that studies were estimating the same underlying treatment effect. We estimated the pooled risk ratio in order to establish the effects of the comparisons. Our primary outcomes were microbiological failure and adverse events, and our secondary outcomes were clinical failure, antimicrobial resistance and reinfection.\nMain results\nWe selected 14 studies ( 2715 participants: 2147 (79.08%) men and 568 (20.92%) women). The studies were conducted mainly at STD clinics. Sample sizes ranged from 71 to 606 participants; follow-up was 29.7 days on average.\nFor the comparison: azithromycin single dose versus doxycycline once or twice daily for 7 days, in men treated for CT, the risk of microbiological failure was higher in the azithromycin group (RR 2.45, 95% CI 1.36 to 4.41; participants = 821; studies = 9; moderate-quality evidence), but regarding clinical failure, the results showed that the effect is uncertain (RR 0.94, 95% CI 0.43 to 2,05; I² = 55%; participants = 525; studies = 3; low-quality evidence). Regarding adverse events (AE) in men there could be little or no difference between the antibiotics (RR 0.83, 95% CI 0.67 to 1.02; participants = 1424; studies = 6; low-quality evidence). About women treated for CT, the effect on microbiological failure was uncertain (RR = 1.71, 95% CI 0.48 to 6.16; participants = 338; studies = 5; very low-quality evidence). 
There were no studies assessing clinical failure or adverse events in women; however, we found that azithromycin probably has fewer adverse events in both genders (RR 0.83, 95% CI 0.71 to 0.98; I² = 0%; participants = 2261; studies = 9; moderate-quality evidence).\nFor the second comparison: doxycycline compared to ofloxacin, for men treated for CT the effect on microbiological failure was uncertain (RR 8.53, 95% CI 0.43 to 167.38, I² not applicable; participants = 80; studies = 2; very low-quality evidence), as it also was on clinical failure (RR 0.85, 95% CI 0.28 to 2.62; participants = 36; studies = 1; very low-quality evidence). The effect in women on clinical failure was uncertain (RR 0.94, 95% CI 0.39 to 2.25; I² = 39%; participants = 127; studies = 2; very low-quality evidence). Regarding adverse events, the effect in both men and women was uncertain (RR 1.02, 95% CI 0.66 to 1.55; participants = 339; studies = 3; very low-quality evidence). The effect on microbiological failure in women and in men and women together was not estimable. The most frequently reported AEs were not serious and of gastrointestinal origin. No studies assessed antimicrobial resistance or reinfection in either comparison.\nAuthors' conclusions\nIn men, regimens with azithromycin are probably less effective than doxycycline for microbiological failure; however, there might be little or no difference for clinical failure. For women, we are uncertain whether azithromycin compared to doxycycline increases the risk of microbiological failure. Azithromycin probably slightly reduces adverse events compared to doxycycline in men and women together but may make little difference in men alone. We are uncertain whether doxycycline compared to ofloxacin reduces microbiological failure in men or women alone, or in men and women together, or whether it reduces clinical failure or adverse events in men or women.\nBased on the fact that women suffer mainly asymptomatic infections, and in order to test the effectiveness and safety of the current recommendations (azithromycin, doxycycline and ofloxacin) for CT infection, especially in low and middle income countries, future RCTs should be designed and conducted to include a large enough sample size of women, and with low risk of bias. It is also important that future RCTs include adherence, CT resistance to antibiotic regimens, and risk of reinfection as outcomes to be measured. In addition, it is important to conduct a network meta-analysis in order to evaluate all those studies that included in one arm only the current antibiotic treatments for CT infection that are recommended by the updated clinical practice guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We selected 14 studies with a total of 2715 men and non-pregnant women with CT infection, who had been treated with any antibiotic recommended by clinical guidelines (2147 (79.08%) men and 568 (20.92%) women). Women showed no symptoms or had uterine cervicitis, and men had non-gonococcal urethritis (an inflammation of the urethra not caused by gonorrhoeal infection). All of the participants had a positive test for CT. The studies lasted from 7 to 84 days after the end of treatment, with an average of 28 days. Most of the studies took place in sexually transmitted disease clinics in the USA. Studies compared the antibiotics doxycycline with azithromycin, and doxycycline with ofloxacin."
}
] |
query-laysum
|
24197
|
[
{
"role": "user",
"content": "Abstract: Background\nTardive dyskinesia (TD) is a disabling movement disorder associated with the prolonged use of antipsychotic medication. Several strategies have been examined in the treatment of TD. Currently, however, there is no clear evidence of the effectiveness of these drugs in TD and they have been associated with many side effects. One particular strategy would be to use pharmaceutical agents which are known to influence the catecholaminergic system at various junctures.\nObjectives\n1. To determine the effects of any of the following drugs for antipsychotic-induced TD in people with schizophrenia or other chronic mental illnesses.\ni. Drugs which influence the noradrenergic system.\nii. Dopamine receptor agonists.\niii. Dopamine receptor antagonists.\niv. Dopamine-depletor drugs.\nv. Drugs that increase the production or release of dopamine.\n2. To examine whether any improvement occurred with short periods of intervention (less than 6 weeks) and, if this did occur, whether this effect was maintained at longer periods of follow-up.\n3. To examine if there was a differential effect for the various compounds.\n4. To examine whether the use of non-antipsychotic catecholaminergic drugs are most effective in those with more recent onset TD (less than five years).\nSearch methods\nWe retrieved 712 references from searching the Cochrane Schizophrenia Group Trials Register (July 2015 and April 2017). We also inspected references of all identified studies for further trials and contacted authors of trials for additional information.\nSelection criteria\nWe selected studies if they were randomised controlled trials focusing on people with schizophrenia or other chronic mental illnesses and antipsychotic-induced tardive dyskinesia. We compared the use of catecholaminergic interventions versus placebo, no intervention, or any other intervention for the treatment of antipsychotic-induced tardive dyskinesia.\nData collection and analysis\nWe independently extracted data from these trials and we estimated risk ratios (RRs) with 95% confidence intervals (CIs). We assumed that people who left the studies early had no improvement.\nMain results\nThere are 10 included trials (N = 261) published between 1973 and 2010; eight are new from the 2015 and 2017 update searches. Forty-eight studies are excluded. Participants were mostly chronically mentally ill inpatients in their 50s, and studies were primarily of short (2 to 6 weeks) duration. The overall risk of bias in these studies was unclear, mainly due to poor reporting of allocation concealment and generation of the sequence. Studies were also not clearly blinded and we are unsure if data are incomplete or selectively reported, or if other biases were operating.\nOne small, three-arm trial found that both alpha-methyldopa (N = 20; RR 0.33, 95% CI 0.14 to 0.80; low-quality evidence) and reserpine (N = 20; RR 0.52 95% CI 0.29 to 0.96; low-quality evidence) may lead to a clinically important improvement in tardive dyskinesia symptoms compared with placebo after 2 weeks' treatment, but found no evidence of a difference between alpha-methyldopa and reserpine (N = 20; RR 0.60, 95% CI 0.19 to 1.86; very low quality evidence). Another small trial compared tetrabenazine and haloperidol after 18 weeks' treatment and found no evidence of a difference on clinically important improvement in tardive dyskinesia symptoms (N = 13; RR 0.93, 95% CI 0.45 to 1.95; very low quality evidence). 
No study reported on adverse events.\nFor remaining outcomes there was no evidence of a difference between any of the interventions: alpha-methyldopa versus placebo for deterioration of tardive dyskinesia symptoms (1 RCT; N = 20; RR 0.33, 95% CI 0.02 to 7.32; very low quality evidence), celiprolol versus placebo for leaving the study early (1 RCT; N = 35; RR 5.28, 95% CI 0.27 to 102.58; very low quality evidence) and quality of life (1 RCT; N = 35; RR 0.87, 95% CI 0.68 to 1.12; very low quality evidence), alpha-methyldopa versus reserpine for deterioration of tardive dyskinesia symptoms (1 RCT; N = 20; not estimable, no reported events; very low quality evidence), reserpine or carbidopa/levodopa versus placebo for deterioration of tardive dyskinesia symptoms (2 RCTs; N = 37; RR 1.18, 95% CI 0.35 to 3.99; very low quality evidence), oxypertine versus placebo for deterioration of mental state (1 RCT; N = 42; RR 2.20, 95% CI 0.22 to 22.45; very low quality evidence), dopaminergic drugs (amantadine, bromocriptine, tiapride, oxypertine, carbidopa/levodopa) versus placebo for leaving the study early (6 RCTs; N = 163; RR 1.29, 95% CI 0.65 to 2.54; very low quality evidence), and tetrabenazine versus haloperidol for deterioration of tardive dyskinesia symptoms (1 RCT; N = 13; RR 1.17, 95% CI 0.09 to 14.92) and leaving the study early (1 RCT; N = 13; RR 0.23, 95% CI 0.01 to 4.00).\nAuthors' conclusions\nAlthough there has been a large amount of research in this area, many studies were excluded due to inherent problems in the nature of their cross-over designs. Usually data are not reported before the cross-over and the nature of TD and its likely response to treatments make it imprudent to use this data. The review provides little usable information for service users or providers and more well-designed and well-reported studies are indicated.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ".\nOne small study found that after 2 weeks' treatment both alpha-methyldopa and reserpine may lead to clinically important improvement in tardive dyskinesia symptoms compared with placebo, but the quality of evidence was low. We are uncertain about the effect of reserpine versus alpha-methyldopa; quality of evidence was very low. Another small trial compared tetrabenazine and haloperidol after 18 weeks' treatment, but again we are uncertain about the effect as the quality of evidence was very low. The included studies did not report on any harmful effects of the drugs."
}
] |
query-laysum
|
24198
|
[
{
"role": "user",
"content": "Abstract: Background\nPanic disorder (PD) is a common type of anxiety disorder, characterized by unexpected and repeated panic attacks or fear of future panic attacks, or both. Individuals with PD are often resistant to pharmacological or psychological treatments and this can lead to the disorder becoming a chronic and disabling illness. Repetitive transcranial magnetic stimulation (rTMS) can deliver sustained and spatially selective current to suppress or induce cortical excitability, and its therapeutic effect on pathological neuronal activity in people with PD has already been examined in case studies and clinical trials. However, a systematic review is necessary to assess the efficacy and safety of rTMS for PD.\nObjectives\nTo assess the effects of repetitive transcranial magnetic stimulation (rTMS) for panic disorder (PD) in adults aged 18 to 65 years, either as a monotherapy or as an augmentation strategy.\nSearch methods\nAn electronic search of the Cochrane Depression, Anxiety and Neurosis Review Group Controlled Trials Register (CCDANCTR) was conducted to 19 February 2014. The CCDANCTR includes reports of relevant randomised controlled trials (RCTs) from MEDLINE (1950 to date), EMBASE (1974 to date), PsycINFO (1967 to date) and the Cochrane Central Register of Controlled Trials (CENTRAL) (all years). Additional searches were conducted in Psyndex and the main Chinese medical databases.\nSelection criteria\nRCTs or quasi-randomised trials evaluating rTMS for PD in people aged between 18 and 65 years, either as a monotherapy or as an augmentation strategy.\nData collection and analysis\nTwo review authors independently selected studies and extracted data and verified the data by cross-checking. Disagreements were resolved by discussion. For binary data, we calculated fixed-effect model risk ratio (RR) and its 95% confidence interval (CI). For continuous data, we calculated fixed-effect model standardized mean difference (SMD) and its 95% CI.\nMain results\nTwo RCTs (n = 40) were included in this review. The included trials compared rTMS with sham rTMS; no trials comparing rTMS with active treatments (electroconvulsive therapy (ECT), pharmacotherapy, psychotherapy) met our inclusion criteria. Both included studies used 1 Hz rTMS over the right dorso-lateral prefrontal cortex (DLPFC) for two or four weeks as an augmentation treatment for PD. However, in both studies the data for the primary outcome, panic symptoms as measured by the Panic Disorder Severity Scale (PDSS), were skewed and could not be pooled for a quantitative analysis. For this primary outcome one trial with 25 participants reported a superior effect of rTMS in reducing panic symptoms compared with sham rTMS (t = 3.04, df = 16.57, P = 0.007), but this trial had a 16% dropout rate and so was deemed as having a high risk of attrition bias. The other trial found that all 15 participants exhibited a reduction in panic symptoms but there was no significant difference between rTMS and sham rTMS (Mann Whitney U test, P > 0.05). Regarding the acceptability of rTMS, no significant difference was found between rTMS and sham rTMS in dropout rates or in reports of side effects. The quality of evidence contributing to this review was assessed as very low. Assessments of the risk of bias for the two studies were hampered by the lack of information provided in the reports, especially on methods of sequence generation and whether allocation concealment had been applied. 
Of the remaining sources of bias, we considered one of the studies to have been at risk of attrition bias.\nAuthors' conclusions\nOnly two RCTs of rTMS were available and their sample sizes were small. The available data were insufficient for us to draw any conclusions about the efficacy of rTMS for PD. Further trials with large sample sizes and adequate methodology are needed to confirm the effectiveness of rTMS for PD.\n\nGiven the provided abstract, please respond to the following query: \"Who will be interested in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People affected by panic disorder.\nGeneral practitioners (GPs).\nProfessionals working in adult mental health services.\nFamilies and friends of people who suffer from major depression."
}
] |
query-laysum
|
24199
|
[
{
"role": "user",
"content": "Abstract: Background\nMénière's disease is a condition that causes recurrent episodes of vertigo, associated with hearing loss and tinnitus. It is often treated with medication, but different interventions are sometimes used. Positive pressure therapy is a treatment that creates small pressure pulses, generated by a pump that is attached to tubing placed in the ear canal. It is typically used for a few minutes, several times per day. The underlying cause of Ménière's disease is unknown, as is the way in which this treatment may work. The efficacy of this intervention at preventing vertigo attacks, and their associated symptoms, is currently unclear.\nObjectives\nTo evaluate the benefits and harms of positive pressure therapy versus placebo or no treatment in people with Ménière's disease.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; CENTRAL; Ovid MEDLINE; Ovid Embase; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 14 September 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in adults with a diagnosis of Ménière's disease comparing positive pressure therapy with either placebo or no treatment. We excluded studies with follow-up of less than three months.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: 1) improvement in vertigo (assessed as a dichotomous outcome - improved or not improved), 2) change in vertigo (assessed as a continuous outcome, with a score on a numerical scale) and 3) serious adverse events. Our secondary outcomes were: 4) disease-specific health-related quality of life, 5) change in hearing, 6) change in tinnitus and 7) other adverse effects. We considered outcomes reported at three time points: 3 to < 6 months, 6 to ≤ 12 months and > 12 months. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe included three studies with a total of 238 participants, all of which compared positive pressure using the Meniett device to sham treatment. The duration of follow-up was a maximum of four months.\nImprovement in vertigo\nA single study assessed whether participants had an improvement in the frequency of their vertigo whilst using positive pressure therapy, therefore we are unable to draw meaningful conclusions from the results.\nChange in vertigo\nOnly one study reported on the change in vertigo symptoms using a global score (at 3 to < 6 months), so we are again unable to draw meaningful conclusions from the numerical results. All three studies reported on the change in the frequency of vertigo. The summary effect showed that people receiving positive pressure therapy had, on average, 0.84 fewer days per month affected by vertigo (95% confidence interval from 2.12 days fewer to 0.45 days more; 3 studies; 202 participants). However, the evidence on the change in vertigo frequency was of very low certainty, therefore there is great uncertainty in this estimate.\nSerious adverse events\nNone of the included studies provided information on the number of people who experienced serious adverse events. It is unclear whether this is because no adverse events occurred, or whether they were not assessed and reported.\nAuthors' conclusions\nThe evidence for positive pressure therapy for Ménière's disease is very uncertain. 
There are few RCTs that compare this intervention to placebo or no treatment, and the evidence that is currently available from these studies is of low or very low certainty. This means that we have very low confidence that the effects reported are accurate estimates of the true effect of these interventions. Consensus on the appropriate outcomes to measure in studies of Ménière's disease is needed (i.e. a core outcome set) in order to guide future studies in this area and enable meta-analyses of the results. This must include appropriate consideration of the potential harms of treatment, as well as the benefits.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This evidence is up-to-date to September 2022."
}
] |
query-laysum
|
24200
|
[
{
"role": "user",
"content": "Abstract: Background\nFibromyalgia is characterized by chronic widespread pain that leads to reduced physical function. Exercise training is commonly recommended as a treatment for management of symptoms. We examined the literature on resistance training for individuals with fibromyalgia. Resistance training is exercise performed against a progressive resistance with the intention of improving muscle strength, muscle endurance, muscle power, or a combination of these.\nObjectives\nTo evaluate the benefits and harms of resistance exercise training in adults with fibromyalgia. We compared resistance training versus control and versus other types of exercise training.\nSearch methods\nWe searched nine electronic databases (The Cochrane Library, MEDLINE, EMBASE, CINAHL, PEDro, Dissertation Abstracts, Current Controlled Trials, World Health Organization (WHO) International Clinical Trials Registry Platform, AMED) and other sources for published full-text articles. The date of the last search was 5 March 2013. Two review authors independently screened 1856 citations, 766 abstracts and 156 full-text articles. We included five studies that met our inclusion criteria.\nSelection criteria\nSelection criteria included: a) randomized clinical trial, b) diagnosis of fibromyalgia based on published criteria, c) adult sample, d) full-text publication, and e) inclusion of between-group data comparing resistance training versus a control or other physical activity intervention.\nData collection and analysis\nPairs of review authors independently assessed risk of bias and extracted intervention and outcome data. We resolved disagreements between the two review authors and questions regarding interpretation of study methods by discussion within the pairs or when necessary the issue was taken to the full team of 11 members. We extracted 21 outcomes of which seven were designated as major outcomes: multidimensional function, self reported physical function, pain, tenderness, muscle strength, attrition rates, and adverse effects. We evaluated benefits and harms of the interventions using standardized mean differences (SMD) or mean differences (MD) or risk ratios or Peto odds ratios and 95% confidence intervals (CI). Where two or more studies provided data for an outcome, we carried out a meta-analysis.\nMain results\nThe literature search yielded 1865 citations with five studies meeting the selection criteria. One of the studies that had three arms contributed data for two comparisons. In the included studies, there were 219 women participants with fibromyalgia, 95 of whom were assigned to resistance training programs. Three randomized trials compared 16 to 21 weeks of moderate- to high-intensity resistance training versus a control group. 
Two studies compared eight weeks of progressive resistance training (intensity as tolerated) using free weights or body weight resistance exercise versus aerobic training (ie, progressive treadmill walking, indoor and outdoor walking), and one study compared 12 weeks of low-intensity resistance training using hand weights (1 to 3 lbs (0.45 to 1.36 kg)) and elastic tubing versus flexibility exercise (static stretches to major muscle groups).\nStatistically significant differences (MD; 95% CI) favoring the resistance training interventions over control group(s) were found in multidimensional function (Fibromyalgia Impact Questionnaire (FIQ) total decreased 16.75 units on a 100-point scale; 95% CI -23.31 to -10.19), self reported physical function (-6.29 units on a 100-point scale; 95% CI -10.45 to -2.13), pain (-3.3 cm on a 10-cm scale; 95% CI -6.35 to -0.26), tenderness (-1.84 out of 18 tender points; 95% CI -2.6 to -1.08), and muscle strength (27.32 kg force on bilateral concentric leg extension; 95% CI 18.28 to 36.36).\nDifferences between the resistance training group(s) and the aerobic training groups were not statistically significant for multidimensional function (5.48 on a 100-point scale; 95% CI -0.92 to 11.88), self reported physical function (-1.48 units on a 100-point scale; 95% CI -6.69 to 3.74) or tenderness (SMD -0.13; 95% CI -0.55 to 0.30). There was a statistically significant reduction in pain (0.99 cm on a 10-cm scale; 95% CI 0.31 to 1.67) favoring the aerobic groups.\nStatistically significant differences were found between a resistance training group and a flexibility group favoring the resistance training group for multidimensional function (-6.49 FIQ units on a 100-point scale; 95% CI -12.57 to -0.41) and pain (-0.88 cm on a 10-cm scale; 95% CI -1.57 to -0.19), but not for tenderness (-0.46 out of 18 tender points; 95% CI -1.56 to 0.64) or strength (4.77 foot pounds torque on concentric knee extension; 95% CI -2.40 to 11.94). This evidence was classified low quality due to the low number of studies and risk of bias assessment. There were no statistically significant differences in attrition rates between the interventions. In general, adverse effects were poorly recorded, but no serious adverse effects were reported. Assessment of risk of bias was hampered by poor written descriptions (eg, allocation concealment, blinding of outcome assessors). The lack of a priori protocols and lack of care provider blinding were also identified as methodologic concerns.\nAuthors' conclusions\nThe evidence (rated as low quality) suggested that moderate- and moderate- to high-intensity resistance training improves multidimensional function, pain, tenderness, and muscle strength in women with fibromyalgia. The evidence (rated as low quality) also suggested that eight weeks of aerobic exercise was superior to moderate-intensity resistance training for improving pain in women with fibromyalgia. There was low-quality evidence that 12 weeks of low-intensity resistance training was superior to flexibility exercise training in women with fibromyalgia for improvements in pain and multidimensional function. There was low-quality evidence that women with fibromyalgia can safely perform moderate- to high-resistance training.\n\nGiven the provided abstract, please respond to the following query: \"Tenderness estimated by number of points (out of 18) perceived as painful with 4 kilograms of pressure\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Women who did resistance training had 4 fewer tender points while women who did not do resistance training had 2 fewer tender points.\nTherefore, women who did resistance training had 2 fewer tender points at the end of the studies than women who did not do resistance training."
}
] |
query-laysum
|
24201
|
[
{
"role": "user",
"content": "Abstract: Background\nPolycystic ovarian syndrome (PCOS) is characterised by both metabolic and reproductive disorders, and affects 5% to 15% of women of reproductive age. Different western medicines have been proposed for PCOS-related subfertility, such as oral contraceptives, insulin sensitisers and laparoscopic ovarian drilling (LOD). Chinese herbal medicines (CHM) have also been used for subfertility caused by PCOS for decades, and are expected to become an alternative treatment for subfertile women with PCOS.\nObjectives\nTo assess the efficacy and safety of Chinese herbal medicine (CHM) for subfertile women with polycystic ovarian syndrome (PCOS).\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register, CENTRAL, MEDLINE, Embase and six other databases, from inception to 2 June 2020. In addition, we searched three trials registries, the reference lists of included trials and contacted experts in the field to locate trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing CHM versus placebo, no treatment or conventional (western) therapies for the treatment of subfertile women with PCOS.\nData collection and analysis\nTwo review authors independently screened trials for inclusion, assessed the risk of bias in included studies and extracted data. We contacted primary study authors for additional information. We conducted meta-analyses. We used the odds ratios (ORs) to report dichotomous data, with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methods.\nMain results\nWe included eight RCTs with 609 participants. The comparisons in the included trials were as follows: CHM versus clomiphene, CHM plus clomiphene versus clomiphene (with or without ethinyloestradiol cyproterone acetate (EE/CPA)), CHM plus follicle aspiration plus ovulation induction versus follicle aspiration plus ovulation induction alone, and CHM plus laparoscopic ovarian drilling (LOD) versus LOD alone. The overall certainty of the evidence for most comparisons was very low.\nNone of the included studies reported the primary outcome, live birth rate. Most studies reported the secondary outcomes, and only one study reported data on adverse events.\nIn trials that compared CHM to clomiphene (with or without LOD in both study arms), we are uncertain of the effect of CHM on pregnancy rates (odds ratio (OR) 1.41, 95% confidence interval (CI) 0.63 to 3.19; I2 = 28%; 3 studies, 140 participants; very low certainty evidence). Results suggest that if the chance of pregnancy following clomiphene is assumed to be 21.5%, the chance following CHM would vary between 14.7% and 46.7%. No study reported data on adverse events.\nWhen CHM plus clomiphene was compared to clomiphene (with or without EE/CPA), there was low certainty evidence of a higher pregnancy rate in the CHM plus clomiphene group (OR 3.06, 95% CI 2.05 to 4.55; I2 = 10%; 6 studies, 470 participants; low certainty evidence). 
Results suggest that if the chance of pregnancy following clomiphene is assumed to be 31.5%, the chance following CHM plus clomiphene would vary between 48.5% and 67.7%.\nNo data were reported on adverse events.\nIn trials that compared CHM plus follicle aspiration and ovulation induction to follicle aspiration and ovulation induction alone, we are uncertain of the effect of CHM on pregnancy rates (OR 1.62, 95% CI 0.46 to 5.68; 1 study, 44 women; very low certainty evidence). Results suggest that if the chance of pregnancy following follicle aspiration and ovulation induction is assumed to be 29.2%, the chance following CHM with follicle aspiration and ovulation induction would vary between 15.9% and 70%. Reported adverse events included severe luteinised unruptured follicle syndrome (LUFS) (Peto OR 0.60, 95% CI 0.06 to 6.14; 1 study, 44 women; very low certainty evidence), ovarian hyperstimulation syndrome (OHSS) (Peto OR 0.16, 95% CI 0.00 to 8.19; 1 study, 44 women; very low certainty evidence) or multiple pregnancy (Peto OR 0.60, 95% CI 0.06 to 6.14; 1 study, 44 women; very low certainty evidence). These results suggest that if the chances of LUFS, OHSS, and multiple pregnancy following follicle aspiration and ovulation induction are assumed to be 8.3%, 4.2%, and 8.3% respectively, the chances following CHM with follicle aspiration and ovulation induction would be 0.5% to 35.8%, 0% to 26.3% and 0.5% to 35.8% respectively.\nIn trials that compared CHM plus LOD to LOD alone, we are uncertain if CHM improves pregnancy rates (OR 3.50, 95% CI 0.72 to 17.09; 1 study, 30 women; very low certainty evidence). Results suggest that if the chance of pregnancy following LOD is assumed to be 40%, the chance following CHM with LOD would vary between 32.4% and 91.9%. No data were reported on adverse events.\nWe are uncertain of the results in the comparison groups for all outcomes. The certainty of the evidence for all other comparisons and outcomes was very low. The main limitations in the evidence were failure to report live birth or adverse events, failure to describe study methods in adequate detail and imprecision due to very low event rates and wide CIs.\nAuthors' conclusions\nThere is insufficient evidence to support the use of CHM for subfertile women with PCOS. No data are available on live birth. We are uncertain of the effect of CHM on pregnancy rates for there is no consistent evidence to indicate that CHM influences fertility outcomes. However, we find that the addition of CHM to clomiphene may improve pregnancy rates, but there is very limited, low certainty evidence for this outcome. Furthermore, there is insufficient evidence on adverse effects to indicate whether CHM is safe. In the future, well-designed, carefully conducted RCTs are needed, with a particular focus on the live birth rate and other safety indexes.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was insufficient evidence to support the use of CHM for subfertile women with PCOS. No data were available on live birth. There was no consistent evidence to indicate that CHM improves fertility outcomes.\nWhen CHM was compared to clomiphene (with or without laparoscopic ovarian drilling (LOD) in both study arms), the pregnancy rates were no different between the treatment and control groups. When CHM plus follicle aspiration and ovulation induction was compared to follicle aspiration and ovulation induction alone, pregnancy rates were no different between the groups. When CHM plus LOD was compared to LOD alone, pregnancy rates were no different between the groups. The certainty of the evidence was very low and therefore we could not draw any conclusions about the results.\nThere was, however, limited low certainty evidence to suggest that the addition of CHM to clomiphene may improve pregnancy rates.\nDue to the very low certainty evidence for all comparison groups for all outcomes, we were unable to draw conclusions. There was insufficient evidence on adverse effects to indicate whether CHM is safe."
}
] |
query-laysum
|
24202
|
[
{
"role": "user",
"content": "Abstract: Background\nIt has been reported that people with COVID-19 and pre-existing autoantibodies against type I interferons are likely to develop an inflammatory cytokine storm responsible for severe respiratory symptoms. Since interleukin 6 (IL-6) is one of the cytokines released during this inflammatory process, IL-6 blocking agents have been used for treating people with severe COVID-19.\nObjectives\nTo update the evidence on the effectiveness and safety of IL-6 blocking agents compared to standard care alone or to a placebo for people with COVID-19.\nSearch methods\nWe searched the World Health Organization (WHO) International Clinical Trials Registry Platform, the Living OVerview of Evidence (L·OVE) platform, and the Cochrane COVID-19 Study Register to identify studies on 7 June 2022.\nSelection criteria\nWe included randomized controlled trials (RCTs) evaluating IL-6 blocking agents compared to standard care alone or to placebo for people with COVID-19, regardless of disease severity.\nData collection and analysis\nPairs of researchers independently conducted study selection, extracted data and assessed risk of bias. We assessed the certainty of evidence using the GRADE approach for all critical and important outcomes. In this update we amended our protocol to update the methods used for grading evidence by establishing minimal important differences for the critical outcomes.\nMain results\nThis update includes 22 additional trials, for a total of 32 trials including 12,160 randomized participants all hospitalized for COVID-19 disease. We identified a further 17 registered RCTs evaluating IL-6 blocking agents without results available as of 7 June 2022.\nThe mean age range varied from 56 to 75 years; 66.2% (8051/12,160) of enrolled participants were men. One-third (11/32) of included trials were placebo-controlled. Twenty-two were published in peer-reviewed journals, three were reported as preprints, two trials had results posted only on registries, and results from five trials were retrieved from another meta-analysis. Eight were funded by pharmaceutical companies.\nTwenty-six included studies were multicenter trials; four were multinational and 22 took place in single countries. Recruitment of participants occurred between February 2020 and June 2021, with a mean enrollment duration of 21 weeks (range 1 to 54 weeks). Nineteen trials (60%) had a follow-up of 60 days or more. Disease severity ranged from mild to critical disease. The proportion of participants who were intubated at study inclusion also varied from 5% to 95%. Only six trials reported vaccination status; there were no vaccinated participants included in these trials, and 17 trials were conducted before vaccination was rolled out.\nWe assessed a total of six treatments, each compared to placebo or standard care. Twenty trials assessed tocilizumab, nine assessed sarilumab, and two assessed clazakizumab. Only one trial was included for each of the other IL-6 blocking agents (siltuximab, olokizumab, and levilimab). Two trials assessed more than one treatment.\nEfficacy and safety of tocilizumab and sarilumab compared to standard care or placebo for treating COVID-19\nAt day (D) 28, tocilizumab and sarilumab probably result in little or no increase in clinical improvement (tocilizumab: risk ratio (RR) 1.05, 95% confidence interval (CI) 1.00 to 1.11; 15 RCTs, 6116 participants; moderate-certainty evidence; sarilumab: RR 0.99, 95% CI 0.94 to 1.05; 7 RCTs, 2425 participants; moderate-certainty evidence). 
For clinical improvement at ≥ D60, the certainty of evidence is very low for both tocilizumab (RR 1.10, 95% CI 0.81 to 1.48; 1 RCT, 97 participants; very low-certainty evidence) and sarilumab (RR 1.22, 95% CI 0.91 to 1.63; 2 RCTs, 239 participants; very low-certainty evidence).\nThe effect of tocilizumab on the proportion of participants with a WHO Clinical Progression Score (WHO-CPS) of level 7 or above remains uncertain at D28 (RR 0.90, 95% CI 0.72 to 1.12; 13 RCTs, 2117 participants; low-certainty evidence) and that for sarilumab is very uncertain (RR 1.10, 95% CI 0.90 to 1.33; 5 RCTs, 886 participants; very low-certainty evidence).\nTocilizumab reduces all-cause mortality at D28 compared to standard care/placebo (RR 0.88, 95% CI 0.81 to 0.94; 18 RCTs, 7428 participants; high-certainty evidence). The evidence about the effect of sarilumab on this outcome is very uncertain (RR 1.06, 95% CI 0.86 to 1.30; 9 RCTs, 3305 participants; very low-certainty evidence).\nThe evidence is uncertain for all-cause mortality at ≥ D60 for tocilizumab (RR 0.91, 95% CI 0.80 to 1.04; 9 RCTs, 2775 participants; low-certainty evidence) and very uncertain for sarilumab (RR 0.95, 95% CI 0.84 to 1.07; 6 RCTs, 3379 participants; very low-certainty evidence).\nTocilizumab probably results in little to no difference in the risk of adverse events (RR 1.03, 95% CI 0.95 to 1.12; 9 RCTs, 1811 participants; moderate-certainty evidence). The evidence about adverse events for sarilumab is uncertain (RR 1.12, 95% CI 0.97 to 1.28; 4 RCTs, 860 participants; low-certainty evidence).\nThe evidence about serious adverse events is very uncertain for tocilizumab (RR 0.93, 95% CI 0.81 to 1.07; 16 RCTs, 2974 participants; very low-certainty evidence) and uncertain for sarilumab (RR 1.09, 95% CI 0.97 to 1.21; 6 RCTs, 2936 participants; low-certainty evidence).\nEfficacy and safety of clazakizumab, olokizumab, siltuximab and levilimab compared to standard care or placebo for treating COVID-19\nThe evidence about the effects of clazakizumab, olokizumab, siltuximab, and levilimab comes from only one or two studies for each blocking agent, and is uncertain or very uncertain.\nAuthors' conclusions\nIn hospitalized people with COVID-19, results show a beneficial effect of tocilizumab on all-cause mortality in the short term and probably little or no difference in the risk of adverse events compared to standard care alone or placebo. Nevertheless, both tocilizumab and sarilumab probably result in little or no increase in clinical improvement at D28.\nEvidence for an effect of sarilumab and the other IL-6 blocking agents on critical outcomes is uncertain or very uncertain. Most of the trials included in our review were done before the waves of different variants of concern and before vaccination was rolled out on a large scale.\nAn additional 17 RCTs of IL-6 blocking agents are currently registered with no results yet reported. The number of pending studies and the number of participants planned is low. Consequently, we will not publish further updates of this review.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that tested whether medicines that block interleukin-6 can treat COVID-19 effectively. We looked for randomized controlled studies, in which the treatment people received was decided by chance. We compared and summarized the results of the studies. We used a standardized method to rate our confidence in the evidence. The confidence is based on study features such as study design and the number of people included."
}
] |
query-laysum
|
24203
|
[
{
"role": "user",
"content": "Abstract: Background\nDiffuse brainstem glioma is a devastating disease with very poor prognosis. The most commonly used radiological treatment is conventional fractionated radiation. So far, there is no meta-analysis or systematic review available that assesses the benefits or harms of radiation in people with diffuse brainstem glioma.\nObjectives\nTo assess the effects of conventional fractionated radiotherapy (with or without chemotherapy) versus other therapies (including different radiotherapy techniques) for newly diagnosed diffuse brainstem gliomas in children and young adults aged 0 to 21 years.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE/PubMed, and EMBASE to 19 August 2015. We scanned conference proceedings from the International Society for Paediatric Oncology (SIOP), International Symposium on Paediatric Neuro-Oncology (ISPNO), Society of Neuro-Oncology (SNO), and European Association of Neuro-Oncology (EANO) from 1 January 2010 to 19 August 2015. We searched trial registers including the International Standard Randomised Controlled Trial Number (ISRCTN) Register, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and the register of the National Institutes of Health to 19 August 2015. We imposed no language restrictions.\nSelection criteria\nAll randomised controlled trials (RCTs), quasi-randomised trials (QRCTs), or controlled clinical trials (CCTs) that compared conventional fractionated radiotherapy (with or without chemotherapy) versus other therapies (including different radiotherapy techniques) for newly diagnosed diffuse brainstem glioma in children and young adults aged 0 to 21 years.\nData collection and analysis\nTwo review authors independently screened studies for inclusion, extracted data, assessed the risk of bias in each eligible trial, and conducted GRADE assessment of included studies. We resolved disagreements through discussion. We performed analyses according to the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nWe identified two RCTs that fulfilled our inclusion criteria. The two trials tested different comparisons.\nOne multi-institutional RCT included 130 participants and compared hyperfractionated radiotherapy (six-week course with twice a day treatment of 117 cGy per fraction to a total dose of 7020 cGy) with conventional radiotherapy (six-week course with once a day treatment of 180 cGy per fraction to a total dose of 5400 cGy). The median time overall survival (OS) was 8.5 months in the conventional group and 8.0 months in the hyperfractionated group. We detected no clear evidence of effect on OS or event-free survival (EFS) in participants receiving hyperfractionated radiotherapy compared with conventional radiotherapy (OS: hazard ratio (HR) 1.07, 95% confidence interval (CI) 0.75 to 1.53; EFS: HR 1.26, 95% CI 0.83 to 1.90). Radiological response (risk ratio (RR) 0.94, 95% CI 0.54 to 1.63) and various types of toxicities were similar in the two groups. There was no information on other outcomes. According to the GRADE approach, we judged the quality of evidence to be low (i.e. further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate) for OS and EFS, and very low (i.e. 
we are very uncertain about the estimate) for radiological response and toxicities.\nThe second RCT included 71 participants and compared hypofractionated radiotherapy (39 Gy in 13 fractions over 2.6 weeks, 3 Gy per fraction) with conventional radiotherapy (54 Gy in 30 fractions over six weeks, 1.8 Gy per fraction). This trial reported a median OS of 7.8 months for the hypofractionated group and 9.5 months for the conventional group. It reported a progression-free survival (PFS) of 6.3 months for the hypofractionated group and 7.3 months for the conventional group. We found no clear evidence of effect on OS (HR 1.03, 95% CI 0.53 to 2.01) or PFS (HR 1.19, 95% CI 0.63 to 2.22) in participants receiving hypofractionated radiotherapy when compared with participants receiving conventional radiotherapy. The mainly observed adverse effect was local erythema and dry desquamation especially behind the auricles. There were some other toxicities, but there was no statistically significant difference between treatment groups. There was no information on other outcomes. We judged the quality of evidence to be moderate (i.e. further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate) for OS, and low for PFS and toxicities. It should be mentioned that the sample size in this RCT was small, which could lead to insufficient statistical power for a clinically relevant outcome.\nAuthors' conclusions\nWe could make no definitive conclusions from this review based on the currently available evidence. Further research is needed to establish the role of radiotherapy in the management of newly diagnosed diffuse brainstem glioma in children and young adults. Future RCTs should be conducted with adequate power and all relevant outcomes should be taken into consideration. Moreover, international multicentre collaboration is encouraged. Considering the potential advantage of hypofractionated radiotherapy to decrease the treatment burden and increase the quality of remaining life, we suggest that more attention should be paid to hypofractionated radiotherapy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the effects of conventional radiotherapy (with or without chemotherapy) versus other therapies (including different radiotherapy techniques) for newly diagnosed diffuse brainstem gliomas in children and young adults aged 0 to 21 years."
}
] |
query-laysum
|
24204
|
[
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers are common in clinical practice and pose a significant health problem worldwide. Apart from causing suffering to patients, they also result in longer hospital stays and increase the cost of health care. A variety of methods are used for treating pressure ulcers, including pressure relief, patient repositioning, biophysical strategies, nutritional supplementation, debridement, topical negative pressure, and local treatments including dressings, ointments and creams such as bacitracin, silver sulphadiazine, neomycin, and phenytoin. Phenytoin is a drug more commonly used in the treatment of epilepsy, but may play an important role in accelerating ulcer healing.\nObjectives\nTo assess the effects of topical phenytoin on the rate of healing of pressure ulcers of any grade, in any care setting.\nSearch methods\nIn September 2016, we searched the following electronic databases to identify relevant randomized clinical trials: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL; the Cochrane Library); Ovid MEDLINE; Ovid Embase; and EBSCO CINAHL Plus. We handsearched conference proceedings from the European Pressure Ulcer Advisory Panel, European Wound Management Association and the Tissue Viability Society for all available years. We searched the references of the retrieved trials to identify further relevant trials. We also searched clinical trials registries to identify ongoing and unpublished studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included all randomized controlled trials (RCTs) addressing the effects (both benefits and harms) of topical phenytoin on the healing of pressure ulcers of any grade compared with placebo or alternative treatments or no therapy, irrespective of blinding, language, and publication status.\nData collection and analysis\nTwo review authors independently selected studies, extracted information on participants, interventions, methods and results and assessed risk of bias using Cochrane methodological procedures. For dichotomous variables, we calculated the risk ratio (RR) with 95% confidence interval (CI). For continuous variables, we calculated the mean difference with 95% CI. We rated the quality of the evidence by using Grading of Recommendations, Assessment, Development and Evaluation approach (GRADE).\nMain results\nThree small RCTs met our inclusion criteria and included a total of 148 participants. These compared three treatments with topical phenytoin: hydrocolloid dressings, triple antibiotic ointment and simple dressings. In the three RCTs, 79% of participants had grade II ulcers, and 21% of participants had grade I ulcers; no participants had grade III or IV ulcers. Two RCTs had a high risk of bias overall and the other RCT was at unclear risk of bias due to poor reporting. Two RCTs had three intervention arms and the other had two intervention arms.\nTwo studies compared topical phenytoin with hydrocolloid dressing (84 participants analysed). The available data suggest that hydrocolloid dressings may improve ulcer healing compared to topical phenytoin (39.3% ulcers healed for phenytoin versus 71.4% ulcers healed for hydrocolloid dressings (RR 0.55, 95% CI 0.33 to 0.92; 56 participants, 1 study; low quality evidence). We downgraded the evidence twice: once due to serious limitations (high risk of bias) and once due to the small sample size and small number of events. 
Two studies compared topical phenytoin with simple dressings (81 participants analysed). From the available data, we are uncertain whether topical phenytoin improves ulcer healing compared to simple dressings (39.3% ulcers healed for phenytoin versus 29.6% ulcers healed for the simple dressing (RR 1.33, 95% CI 0.63 to 2.78; 55 participants, 1 study; very low quality evidence). This evidence was downgraded once due to serious limitations (high risk of bias) and twice due to the low number of outcome events and resulting wide CI which included the possibility of both increased healing and reduced healing. We therefore considered it to be insufficient to determine the effect of topical phenytoin on ulcer healing. One study compared topical phenytoin with triple antibiotic ointment, however, none of the outcomes of interest to this review were reported. No adverse drug reactions or interactions were detected in any of the three RCTs. Minimal pain was reported in all groups in one trial that compared topical phenytoin with hydrocolloid dressings and triple antibiotic ointment.\nAuthors' conclusions\nThis review has considered the available evidence and the result shows that it is uncertain whether topical phenytoin improves ulcer healing for patients with grade I and II pressure ulcers. No adverse events were reported from three small trials and minimal pain was reported in one trial. Therefore, further rigorous, adequately powered RCTs examining the effects of topical phenytoin for treating pressure ulcers, and to report on adverse events, quality of life and costs are necessary.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of phenytoin (a medication used orally to treat epilepsy) when used as a cream or in a dressing applied directly to pressure ulcers. We wanted to find out if phenytoin affected ulcer healing, and if it had any harmful side effects."
}
] |
query-laysum
|
24205
|
[
{
"role": "user",
"content": "Abstract: Background\nLiver transplantation is considered the definitive treatment for people with liver failure. As part of post-liver transplantation management, immunosuppression (suppressing the host immunity) is given to prevent graft rejections. Immunosuppressive drugs can be classified into those that are used for a short period during the beginning phase of immunosuppression (induction immunosuppression) and those that are used over the entire lifetime of the individual (maintenance immunosuppression), because it is widely believed that graft rejections are more common during the first few months after liver transplantation. Some drugs such as glucocorticosteroids may be used for both induction and maintenance immunosuppression because of their multiple modalities of action. There is considerable uncertainty as to whether induction immunosuppression is necessary and if so, the relative efficacy of different immunosuppressive agents.\nObjectives\nTo assess the comparative benefits and harms of different induction immunosuppressive regimens in adults undergoing liver transplantation through a network meta-analysis and to generate rankings of the different induction immunosuppressive regimens according to their safety and efficacy.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, World Health Organization International Clinical Trials Registry Platform, and trials registers until July 2019 to identify randomised clinical trials in adults undergoing liver transplantation.\nSelection criteria\nWe included only randomised clinical trials (irrespective of language, blinding, or status) in adults undergoing liver transplantation. We excluded randomised clinical trials in which participants had multivisceral transplantation and those who already had graft rejections.\nData collection and analysis\nWe performed a network meta-analysis with OpenBUGS using Bayesian methods and calculated the odds ratio (OR), rate ratio, and hazard ratio (HR) with 95% credible intervals (CrIs) based on an available case analysis, according to National Institute of Health and Care Excellence Decision Support Unit guidance.\nMain results\nWe included a total of 25 trials (3271 participants; 8 treatments) in the review. Twenty-three trials (3017 participants) were included in one or more outcomes in the review. The trials that provided the information included people undergoing primary liver transplantation for various indications and excluded those with HIV and those with renal impairment. The follow-up in the trials ranged from three to 76 months, with a median follow-up of 12 months among trials. All except one trial were at high risk of bias, and the overall certainty of evidence was very low. Overall, approximately 7.4% of people who received the standard regimen of glucocorticosteroid induction died and 12.2% developed graft failure.\nAll-cause mortality and graft failure was lower with basiliximab compared with glucocorticosteroid induction: all-cause mortality (HR 0.53, 95% CrI 0.31 to 0.93; network estimate, based on 2 direct comparison trials (131 participants; low-certainty evidence)); and graft failure (HR 0.44, 95% CrI 0.28 to 0.70; direct estimate, based on 1 trial (47 participants; low-certainty evidence)). 
There was no evidence of differences in all-cause mortality and graft failure between other induction immunosuppressants and glucocorticosteroids in either the direct comparison or the network meta-analysis (very low-certainty evidence).\nThere was also no evidence of differences in serious adverse events (proportion), serious adverse events (number), renal failure, any adverse events (proportion), any adverse events (number), liver retransplantation, graft rejections (any), or graft rejections (requiring treatment) between other induction immunosuppressants and glucocorticosteroids in either the direct comparison or the network meta-analysis (very low-certainty evidence). However, because of the wide CrIs, clinically important differences in these outcomes cannot be ruled out. None of the studies reported health-related quality of life.\nFunding: the source of funding for 14 trials was drug companies who would benefit from the results of the study; two trials were funded by neutral organisations who have no vested interests in the results of the study; and the source of funding for the remaining nine trials was unclear.\nAuthors' conclusions\nBased on low-certainty evidence, basiliximab induction may decrease mortality and graft failure compared to glucocorticosteroids induction in people undergoing liver transplantation. However, there is considerable uncertainty about this finding because this information is based on small trials at high risk of bias. The evidence is uncertain about the effects of different induction immunosuppressants on other clinical outcomes, including graft rejections.\nFuture randomised clinical trials should be adequately powered, employ blinding, avoid post-randomisation dropouts (or perform intention-to-treat analysis), and use clinically important outcomes such as mortality, graft failure, and health-related quality of life.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review studied adults of any sex, age, and ethnic origin who underwent liver transplantation for various reasons. Participants were given different induction immunosuppressive agents or no induction immunosuppressive agents. The review authors excluded studies in people who underwent other organ transplants (such as kidney transplant) in addition to the liver, and studies in which people had already developed graft rejection. The average age of participants, when reported, ranged from 48 years to 62 years. The induction immunosuppressive agents administered included glucocorticosteroids, anti-thymocyte globulin, basiliximab, or daclizumab, either alone or in combination with glucocorticosteroids. The review authors wanted to gather and analyse data on death, graft failure, quality of life, serious and non-serious adverse events, kidney failure, time to liver retransplantation, and graft rejections."
}
] |
query-laysum
|
24207
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the original review published in 2005. Acute laryngitis is a common illness worldwide. Diagnosis is often made by case history alone and treatment often targets symptoms.\nObjectives\nTo assess the effectiveness and safety of different antibiotic therapies in adults with acute laryngitis. A secondary objective was to report the rates of adverse events associated with these treatments.\nSearch methods\nWe searched CENTRAL (2014, Issue 11), MEDLINE (January 1966 to November week 3, 2014), EMBASE (1974 to December 2014), LILACS (1982 to December 2014) and BIOSIS (1980 to December 2014).\nSelection criteria\nRandomised controlled trials (RCTs) comparing any antibiotic therapy with placebo for acute laryngitis. The main outcome was objective voice scores.\nData collection and analysis\nTwo review authors independently extracted and synthesised data.\nMain results\nWe included three RCTs (351 participants) that had moderate to high risk of bias. The quality of the evidence was very low for all outcomes. We downgraded the studies because of limitations in study design or execution (risk of bias), imprecision and inconsistency of results. We included a new trial presented only as a conference abstract in this update.\nIn one study of acute laryngitis in adults, 100 participants were randomised to receive penicillin V (800 mg twice daily for five days) or an identical placebo. A recording of each patient reading a standardised text was made at the first visit, during re-examination after one and two weeks, and at follow-up after two to six months. No significant differences were found between the groups. The trial also measured symptoms reported by participants and found no significant differences.\nOne study investigated erythromycin for acute laryngitis in 106 adults. The mean objective voice scores measured at the first visit, at re-examination after one and two weeks, and at follow-up after two to six months did not significantly differ between the groups. At one week there were significant beneficial differences in the severity of reported vocal symptoms (slight, moderate and severe) as judged by participants (P value = 0.042). However, the rates of participants having improved voice disturbance (subjective symptoms) at one and two weeks were not significantly different among groups. Comparing erythromycin and placebo groups on the rate of persistence of cough at two weeks, the risk ratio (RR) was 0.38 (95% confidence interval (CI) 0.15 to 0.97, P value = 0.04) and the number needed to treat for an additional beneficial outcome (NNTB) was 5.87 (95% CI 3.09 to 65.55). We calculated a RR of 0.64 (95% CI 0.46 to 0.90, P value = 0.034) and a NNTB of 3.76 (95% CI 2.27 to 13.52; P value = 0.01) for the subjective voice scores at one week.\nA third trial from Russia included 145 patients with acute laryngitis symptoms. Participants were randomised to three treatment groups: Group 1: seven-day course of fusafungine (six times a day by inhalation); Group 2: seven-day course of fusafungine (six times a day by inhalation) plus clarithromycin (250 mg twice daily for seven days); Group 3: no treatment. Clinical cure rates were measured at days 5 ± 1, 8 ± 1 and 28 ± 2. 
The authors reported significant differences in the rates of clinical cure at day 5 ± 1 favouring fusafungine (one trial; 93 participants; RR 1.50, 95% CI 1.02 to 2.20; P value = 0.04) and fusafungine plus clarithromycin (one trial 97 participants; RR 1.47, 95% CI 1.00 to 2.16; P value = 0.05) when compared to no treatment. However, no significant differences were found at days 8 ± 1 and 28 ± 2. Also, no significant differences were found when comparing fusafungine to fusafungine plus clarithromycin at days 5 ± 1, 8 ± 1 and 28 ± 2.\nAuthors' conclusions\nAntibiotics do not appear to be effective in treating acute laryngitis when assessing objective outcomes. They appear to be beneficial for some subjective outcomes. Erythromycin could reduce voice disturbance at one week and cough at two weeks when measured subjectively. Fusafungine could increase the cure rate at day five. The included RCTs had important methodological problems and these modest benefits from antibiotics may not outweigh their cost, adverse effects or negative consequences for antibiotic resistance patterns.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We ranked the quality of the evidence as low to very low, mainly because many studies had methodological limitations, outcome results were based on limited numbers of trials, and the trials included participants whose results could not be pooled."
}
] |
query-laysum
|
24208
|
[
{
"role": "user",
"content": "Abstract: Background\nPain management is a high priority for patients with rheumatoid arthritis (RA). Muscle relaxants include drugs that reduce muscle spasm (for example benzodiazepines such as diazepam (Valium), alprazolam (Xanax), lorazepam (Ativan) and non-benzodiazepines such as metaxalone (Skelaxin) or a combination of paracetamol and orphenadrine (Muscol)) and drugs that prevent increased muscle tone (baclofen and dantrolene). Despite a paucity of evidence supporting their use, antispasmodic and antispasticity muscle relaxants have gained widespread clinical acceptance as adjuvants in the management of patients with chronic musculoskeletal pain.\nObjectives\nThe aim of this review was to determine the efficacy and safety of muscle relaxants in pain management in patients with RA. The muscle relaxants that were included in this review are the antispasmodic benzodiazepines (alprazolam, bromazepam, chlordiazepoxide, cinolazepam, clonazepam, cloxazolam, clorazepate, diazepam, estazolam, flunitrazepam, flurazepam, flutoprazepam, halazepam, ketazolam, loprazolam, lorazepam, lormetazepam, medazepam, midazolam, nimetazepam, nitrazepam, nordazepam, oxazepam, pinazepam, prazepam, quazepam, temazepam, tetrazepam, triazolam), antispasmodic non-benzodiazepines (cyclobenzaprine, carisoprodol, chlorzoxazone, meprobamate, methocarbamol, metaxalone, orphenadrine, tizanidine and zopiclone), and antispasticity drugs (baclofen and dantrolene sodium).\nSearch methods\nWe performed a search of the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, 4th quarter 2010), MEDLINE (1950 to week 1 November 2010), EMBASE (Week 44 2010), and PsycINFO (1806 to week 2 November 2010). We also searched the 2008 to 2009 American College of Rheumatology (ACR) and European League Against Rheumatism (EULAR) abstracts and performed a handsearch of reference lists of relevant articles.\nSelection criteria\nWe included randomised controlled trials which compared a muscle relaxant to another therapy (active, including non-pharmacological therapies, or placebo) in adult patients with RA and that reported at least one clinically relevant outcome.\nData collection and analysis\nTwo blinded review authors independently extracted data and assessed the risk of bias in the trials. Meta-analyses were used to examine the efficacy of muscle relaxants on pain, depression, sleep and function, as well as their safety.\nMain results\nSix trials (126 participants) were included in this review. All trials were rated at high risk of bias. Five cross-over trials evaluated a benzodiazepine, four assessed diazepam (n = 71) and one assessed triazolam (n = 15). The sixth trial assessed zopiclone (a non-benzodiazepine) (n = 40) and was a parallel group study. No trial duration was longer than two weeks while three single dose trials assessed outcomes at 24 hours only. Overall the included trials failed to find evidence of a beneficial effect of muscle relaxants over placebo, alone (at 24 hrs, 1 or 2 weeks) or in addition to non-steroidal anti-inflammatory drugs (NSAIDs) (at 24 hrs), on pain intensity, function, or quality of life. Data from two trials of longer than 24 hours duration (n = 74) (diazepam and zopiclone) found that participants who received a muscle relaxant had significantly more adverse events compared with those who received placebo (number needed to harm (NNTH) 3, 95% CI 2 to 7). 
These were predominantly central nervous system side effects, including dizziness and drowsiness (NNTH 3, 95% CI 2 to 11).\nAuthors' conclusions\nBased upon the currently available evidence in patients with RA, benzodiazepines (diazepam and triazolam) do not appear to be beneficial in improving pain over 24 hours or one week. The non-benzodiazepine agent zopiclone also did not significantly reduce pain over two weeks. However, even short term muscle relaxant use (24 hours to 2 weeks) is associated with significant adverse events, predominantly drowsiness and dizziness.\n\nGiven the provided abstract, please respond to the following query: \"What is rheumatoid arthritis and what are muscle relaxants?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When you have rheumatoid arthritis your immune system, which normally fights infection, attacks the lining of your joints. This makes your joints swollen, stiff, and painful. The small joints of your hands and feet are usually affected first. There is no cure for rheumatoid arthritis at present, so the treatments aim to relieve pain and stiffness and improve your ability to move.\nMuscle relaxants can be used to treat anxiety and promote sleep, and some people believe they may also reduce pain by acting on the nerves that cause pain, but this remains controversial. Muscle relaxants include drugs that reduce muscle spasm (for example benzodiazepines such as diazepam (Valium), Xanax, Ativan and non-benzodiazepines such as Skelaxin, Muscol) and drugs that prevent increased muscle tone (baclofen and dantrolene)."
}
] |
query-laysum
|
24209
|
[
{
"role": "user",
"content": "Abstract: Background\nCaesarean sections (CS) are the most frequent major surgery in the world. A transient impairment of bowel motility is expected after CS. Although this usually resolves spontaneously within a few days, it can cause considerable discomfort, require symptomatic medication and delay hospital discharge, thus increasing costs. Chewing gum in the immediate postoperative period is a simple intervention that may be effective in enhancing recovery of bowel function in other types of abdominal surgeries.\nObjectives\nTo assess the effects of chewing gum to reduce the duration of postoperative ileus and to enhance postoperative recovery after a CS.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (20 June 2016), LILACs (20 June 2016), ClinicalTrials.gov (20 June 2016), WHO International Clinical Trials Registry Platform (ICTRP) (20 June 2016) and the reference lists of retrieved studies.\nSelection criteria\nAll randomised controlled trials comparing chewing gum versus usual care, for women in the first 24 hours after a CS. We included studies published in abstract form only.\nQuasi-randomised, cross-over or cluster-randomised trials were not eligible for inclusion in this review.\nData collection and analysis\nTwo review authors independently selected the studies for inclusion, extracted data and assessed the risk of bias following standard Cochrane methods. We present dichotomous outcome results as risk ratio (RR) with 95% confidence intervals (CI) and continuous outcome results as mean differences (MD) and 95% CI. We pooled the results of similar studies using a random-effects model in case of important heterogeneity. We used the GRADE approach to assess the overall quality of evidence.\nMain results\nWe included 17 randomised trials (3149 participants) conducted in nine different countries. Seven studies (1325 women) recruited exclusively women undergoing elective CS and five studies (833 women) only included women having a primary CS. Ten studies (1731 women) used conventional feeding protocols (nil by mouth until the return of intestinal function). The gum-chewing regimen varied among studies, in relation to its initiation (immediately after CS, up to 12 hours later), duration of each session (from 15 to 60 minutes) and number of sessions per day (three to more than six). All the studies were classified as having a high risk of bias due to the nature of the intervention, women could not be blinded and most of the outcomes were self-reported.\nPrimary outcomes of this review: for the women that chewed gum, the time to passage of first flatus was seven hours shorter than those women in the 'usual care' control group (MD -7.09 hours, 95% CI -9.27 to -4.91 hours; 2399 women; 13 studies; random-effects Tau² = 14.63, I² = 95%, very low-quality evidence). This effect was consistent in all subgroup analyses (primary and repeat CS, time spent chewing gum per day, early and conventional feeding protocols, elective and non-elective CS and time after CS when gum-chewing was initiated). The rate of ileus was on average over 60% lower in the chewing-gum group compared to the control (RR 0.39, 95% CI 0.19 to 0.80; 1139 participants; four studies; I² = 39%, low-quality evidence). Tolerance to gum-chewing appeared to be high. 
Three women in one study complained about the chewing gum (but no further information was provided) and none of the studies reported adverse effects (eight studies, 925 women, low-quality evidence).\nSecondary outcomes of this review: the time to passage of faeces occurred on average nine hours earlier in the intervention group (MD -9.22 hours, 95% CI -11.49 to -6.95 hours; 2016 participants; 11 studies; random-effects Tau² = 12.53, I² = 93%, very low-quality evidence). The average duration of hospital stay was shorter in the intervention compared to the control group (MD -0.36 days, 95% CI -0.53 to -0.18 days; 1489 participants; seven studies; random-effects Tau² = 0.04, I² = 92%). The first intestinal sounds were heard earlier in the intervention than in the control group (MD -4.56 hours, 95% CI -6.18 to -2.93 hours; 1729 participants; nine studies; random-effects Tau² = 5.41, I² = 96%). None of the studies assessed women's satisfaction in relation to having to chew gum. The need for analgesia or antiemetic agents did not differ between the intervention and control groups (average RR 0.50, 95% CI 0.12 to 2.13; 726 participants; three studies; random-effects Tau² = 0.79, I² = 69%).\nAuthors' conclusions\nThis review found 17 randomised controlled trials (involving 3149 women). We downgraded the quality of the evidence for time to first passage of flatus and of faeces and for adverse effects/intolerance to gum chewing because of the high risk of bias of the studies (due to lack of blinding and self-report). For time to first flatus and faeces, we downgraded the quality of the evidence further because of the high heterogeneity in these meta-analyses and the potential for publication bias based on the visual inspection of the funnel plots. The quality of the evidence for adverse effects/tolerance to gum chewing and for ileus was downgraded because of the small number of events. The quality of the evidence for ileus was further downgraded due to the unclear risk of bias for the assessors evaluating this outcome.\nThe available evidence suggests that gum chewing in the immediate postoperative period after a CS is a well tolerated intervention that enhances early recovery of bowel function. However the overall quality of the evidence is very low to low.\nFurther research is necessary to establish the optimal regimen of gum-chewing (initiation, number and duration of sessions per day) to enhance bowel function recovery and to assess potential adverse effects of and women's satisfaction with this intervention. New studies also need to assess the compliance of the participants to the recommended gum-chewing instructions. Future large, well designed and conducted studies, with better methodological and reporting quality, will help to inform future updates of this review and enhance the body of evidence for this intervention.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Many women deliver by caesarean section (CS) nowadays. The proportion of women who deliver by CS ranges from 15% to over 50% in some countries. After a CS it is common for the bowel to stop working for several hours or days. Although this usually resolves by itself in a few days, it may be very uncomfortable. The retained gases and stools can cause the mother's belly to become swollen and painful with cramps, and she may feel nauseated and vomit, so she is not able to eat. She may need additional medications to ease these symptoms, and her hospital discharge may be delayed. The use of medications that relieve pain during labour and painkillers following the surgery can also delay bowel function.\nAlthough early feeding after a CS can stimulate the gut, it could also lead to vomiting. That is why many obstetricians still withhold food until bowel sounds are detected and there is passage of gas, or flatus. Chewing gum can help the bowels to function again earlier, as shown after other types of surgery. We wanted to see if it also worked after a CS. Chewing gum in the first 24 hours after the surgery is a simple and cheap intervention."
}
] |
query-laysum
|
24210
|
[
{
"role": "user",
"content": "Abstract: Background\nAssisted reproductive technology (ART) has allowed couples with a family history of a monogenic genetic disease, or a disease-carrying gene, to reduce the chance of them having a child with the genetic disorder. This is achieved by genetically testing the embryos using an advanced process called preimplantation genetic testing for monogenic or single gene disorders (PGT-M), such as Huntington's disease or cystic fibrosis. This current terminology (PGT-M) has replaced the formerly-known preimplantation genetic diagnosis (PGD). During PGT-M, one or more embryo cells are biopsied and analysed for genetic or chromosomal anomalies before transferring the embryos to the endometrial cavity. Biopsy for PGT-M can be performed at day 3 of cleavage-stage embryo development when the embryo is at the six- to the eight-cell stage, with either one or two blastomeres being removed for analysis. Biopsy for PGT-M can also be performed on day 5 of the blastocyst stage of embryo development when the embryo has 80 to 100 cells, with five to six cells being removed for analysis. Day 5 biopsy has taken over from day 3 biopsy as the most widely-used biopsy technique; however, there is a lack of summarised evidence from randomised controlled trials (RCTs) that assesses the effectiveness and safety of day 5 biopsy compared to day 3 biopsy. Since biopsy is an invasive process, whether it is carried out at day 3 or day 5 of embryo development may have different impacts on further development, implantation, pregnancy, live birth and perinatal outcomes.\nObjectives\nTo assess the benefits and harms of day 5 embryo biopsy, in comparison to day 3 biopsy, in PGT-M in women undergoing in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI) cycles.\nSearch methods\nWe searched the following electronic bibliographic databases in December 2021 to identify relevant RCTs: the Cochrane Gynaecology and Fertility Group (CGFG) Specialised Trials Register; CENTRAL, MEDLINE, Embase and PsycINFO. We also handsearched grey literature, such as trial registers, relevant journals, reference lists, Google Scholar, and published conference abstracts.\nSelection criteria\nEligible RCTs compared day 5 versus day 3 embryo biopsy for PGT-M.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. The primary review outcomes were live births and miscarriages. We calculated outcomes per woman/couple randomised and reported odds ratios (ORs) with 95% confidence intervals (CIs).\nMain results\nWe included one RCT involving 20 women. The evidence was of very low certainty; the main limitations of the study were serious risk of bias due to lack of blinding of study personnel, and imprecision.\nWe are uncertain whether day 5 embryo biopsy compared to day 3 biopsy has an effect on live births (OR 1.50, 95% CI 0.26 to 8.82; 1 RCT, 20 women; very low-certainty evidence). The evidence suggests that if the chance of live birth following day 3 biopsy was assumed to be 40%, then the chance with day 5 biopsy is between 15% and 85%. 
It is also uncertain whether day 5 embryo biopsy compared to day 3 biopsy has an effect on miscarriages (OR 1.00, 95% CI 0.05 to 18.57; 1 RCT, 20 women; very low-certainty evidence).\nWe are uncertain whether day 5 embryo biopsy compared to day 3 biopsy has an effect on other secondary outcome measures, including viable intrauterine pregnancies (OR 2.25, 95% CI 0.38 to 13.47; 1 RCT, 20 women; very low-certainty evidence), ectopic pregnancies (OR 0.16, 95% CI 0.01 to 3.85; 1 RCT, 20 women; very low-certainty evidence), stillbirths (OR not estimable as no events in either group; 1 RCT, 20 women; very low-certainty evidence) or termination of pregnancies (OR 3.32, 95% CI 0.12 to 91.60; 1 RCT, 20 women; very low-certainty evidence).\nNo studies reported on gestational age at birth, birthweight, neonatal mortality and major congenital anomaly.\nAuthors' conclusions\nWe are uncertain if there is a difference in live births and miscarriages, viable intrauterine pregnancies, ectopic pregnancies, stillbirths or termination of pregnancies between day 5 and day 3 embryo biopsy for PGT-M. There was insufficient evidence to draw any conclusions regarding other adverse outcomes. The results should be interpreted with caution, as the evidence was of very low certainty due to limited studies, high risk of bias in the included study, and an overall low level of precision.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Couples who have a personal or family history of a single-gene genetic disease, such as cystic fibrosis, can now undergo assisted reproductive technology (ART) to reduce the chance of having a child with that same genetic disorder. This is known as preimplantation genetic testing for single gene disorders (PGT-M). During PGT-M, cells from a fertilised embryo are removed (biopsied) and analysed for the genetic disorder before those embryos that do not have the disorder are transferred back into the woman's uterus. The embryo cells can be biopsied at day 3 or day 5 of embryo development, with day 5 biopsy currently being the most widely used technique. Whether the embryo is biopsied at day 3 or day 5 could have different impacts on further development and implantation of the embryo, as well as pregnancy and perinatal outcomes."
}
] |
query-laysum
|
24211
|
[
{
"role": "user",
"content": "Abstract: Background\nThe development of supraglottic airway devices has revolutionized airway management during general anaesthesia. Two devices are widely used in clinical practice to facilitate positive pressure ventilation: the ProSeal laryngeal mask airway (pLMA) and the Classic laryngeal mask airway (cLMA). It is not clear whether these devices have important clinical differences in terms of efficacy or complications.\nObjectives\nTo compare the effectiveness of the ProSeal laryngeal mask airway (pLMA) and the Classic LMA (cLMA) for positive pressure ventilation in adults undergoing elective surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 3) in the Cochrane Library; MEDLINE (Ovid SP, 1997 to April 2017); Embase (Ovid SP, 1997 to April 2017); the Institute for Scientific Information (ISI) Web of Science (1946 to April 2017); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCO host, 1982 to April 2017).\nWe searched trial registries for ongoing studies to April 2017.\nWe did not impose language restrictions. We restricted our search to the time from 1997 to April 2017 because pLMA was introduced into clinical practice in the year 2000.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared the effectiveness of pLMA and cLMA for positive pressure ventilation in adults undergoing elective surgery. We planned to include only data related to the first phase of cross-over RCTs.\nData collection and analysis\nWe used standard methodological procedures expected by the Cochrane Collaboration.\nMain results\nWe included eight RCTs that involved a total of 829 participants (416 and 413 participants in the pLMA and cLMA groups, respectively). We identified six cross-over studies that are awaiting classification; one is completed but has not been published, and data related to the first treatment period for the other five studies were not yet available. Seven included studies provided data related to the primary outcome, and eight studies provided data related to more than one secondary outcome.\nOur analysis was hampered by the fact that a large proportion of the included studies reported no events in either study arm. No studies reported significant differences between devices in relation to the primary review outcome: failure to adequately mechanically ventilate. We evaluated this outcome by assessing two variables: inadequate oxygenation (risk ratio (RR) 0.75, 95% confidence interval (CI) 0.17 to 3.31; four studies, N = 617) and inadequate ventilation (not estimable; one study, N = 80).\nMore time was required to establish an effective airway using pLMA (mean difference (MD) 10.12 seconds, 95% CI 5.04 to 15.21; P < 0.0001; I² = 73%; two studies, N = 434). Peak airway pressure during positive pressure ventilation was lower in cLMA participants (MD 0.84, 95% CI 0.02 to 1.67; P = 0.04; I² = 0%; four studies, N = 259). Mean oropharyngeal leak (OPL) pressure was higher in pLMA participants (MD 6.93, 95% CI 4.23 to 9.62; P < 0.00001; I² = 87%; six studies, N = 709).\nThe quality of evidence for all outcomes, as assessed by GRADE score, is low mainly owing to issues related to blinding and imprecision.\nData show no important differences between devices with regard to failure to insert the device, use of an alternate device, mucosal injury, sore throat, bronchospasm, gastric insufflation, regurgitation, coughing, and excessive leak. 
Data were insufficient to allow estimation of differences for obstruction related to the device. None of the studies reported postoperative nausea and vomiting as an outcome.\nAuthors' conclusions\nWe are uncertain about the effects of either of the airway devices in terms of failure of oxygenation or ventilation because there were very few events. Results were uncertain in terms of differences for several complications. Low-quality evidence suggests that the ProSeal laryngeal mask airway makes a better seal and therefore may be more suitable than the Classic laryngeal mask airway for positive pressure ventilation. The Classic laryngeal mask airway may be quicker to insert, but this is unlikely to be clinically meaningful.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if one type of artificial breathing device used in adults during anaesthesia is more effective and safer than another."
}
] |
query-laysum
|
24212
|
[
{
"role": "user",
"content": "Abstract: Background\nThe optimal glycaemic control target in pregnant women with pre-existing diabetes is unclear, although there is a clear link between high glucose concentrations and adverse birth outcomes.\nObjectives\nTo assess the effects of different intensities of glycaemic control in pregnant women with pre-existing type 1 or type 2 diabetes.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 January 2016) and planned to search reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials comparing different glycaemic control targets in pregnant women with pre-existing diabetes.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, conducted data extraction, assessed risk of bias and checked for accuracy. We assessed the quality of the evidence using the GRADE approach.\nMain results\nWe included three trials, all in women with type 1 diabetes (223 women and babies). All three trials were at high risk of bias due to lack of blinding, unclear methods of randomisation and selective reporting of outcomes. Two trials compared very tight (3.33 to 5.0 mmol/L fasting blood glucose (FBG)) with tight-moderate (4.45 to 6.38 mmol/L) glycaemic control targets, with one trial of 22 babies reporting no perinatal deaths or serious perinatal morbidity (evidence graded low for both outcomes). In the same trial, there were two congenital anomalies in the very tight, and none in the tight-moderate group, with no significant differences in caesarean section between groups (risk ratio (RR) 0.92, 95% confidence interval (CI) 0.49 to 1.73; evidence graded very low). In these two trials, glycaemic control was not significantly different between the very tight and tight-moderate groups by the third trimester, although one trial of 22 women found significantly less maternal hypoglycaemia in the tight-moderate group.\nIn a trial of 60 women and babies comparing tight (≤ 5.6 mmol/L FBG); moderate (5.6 to 6.7 mmol/L); and loose (6.7 to 8.9 mmol/L) glycaemic control targets, there were two neonatal deaths in the loose and none in the tight or moderate groups (evidence graded very low). There were significantly fewer women with pre-eclampsia (evidence graded low), fewer caesarean sections (evidence graded low) and fewer babies with birthweights greater than 90th centile (evidence graded low) in the combined tight-moderate compared with the loose group.\nThe quality of the evidence was graded low or very low for important outcomes, because of design limitations to the studies, the small numbers of women included, and wide confidence intervals crossing the line of no effect. Many of the important outcomes were not reported in these studies.\nAuthors' conclusions\nIn a very limited body of evidence, few differences in outcomes were seen between very tight and tight-moderate glycaemic control targets in pregnant women with pre-existing type 1 diabetes, including actual glycaemic control achieved. There is evidence of harm (increased pre-eclampsia, caesareans and birthweights greater than 90th centile) for 'loose' control (FBG above 7 mmol/L). Future trials comparing interventions, rather than glycaemic control targets, may be more feasible. Trials in pregnant women with pre-existing type 2 diabetes are required.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found three small trials (in total 223 pregnant women with type 1 diabetes) looking at different blood glucose targets: very tight, tight, moderate, and loose. The quality of the studies and therefore the strength of the evidence was very low or low, so future research may change the results.\nThere were very few differences between very tight and tight-moderate blood glucose targets in two trials, although there were more cases of low blood glucose (hypoglycaemia) and longer hospital stays for women who had very tight blood glucose control.\nA single trial compared tight, moderate, and loose blood glucose targets. In the loose target group, more women had pre-eclampsia, and there were more caesareans and large babies. There were few differences between the tight and moderate groups, although more women in the tight control group had low blood glucose in the first half of pregnancy."
}
] |
query-laysum
|
24213
|
[
{
"role": "user",
"content": "Abstract: Background\nCognitive impairments affect functional ability in people with dementia. Cognitive rehabilitation (CR) is a personalised, solution-focused approach that aims to enable people with mild-to-moderate dementia to manage everyday activities and maintain as much independence as possible.\nObjectives\nTo evaluate the effects of CR on everyday functioning and other outcomes for people with mild-to-moderate dementia, and on outcomes for care partners.\nTo identify and explore factors that may be associated with the efficacy of CR.\nSearch methods\nWe searched the Cochrane Dementia and Cognitive Improvement Group Specialised Register, which contains records from MEDLINE, EMBASE, CINAHL, PsycINFO, LILACS, and other clinical trial databases, and grey literature sources. The most recent search was completed on 19 October 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing CR with control conditions and reporting relevant outcomes for the person with dementia and/or the care partner.\nData collection and analysis\nWe extracted relevant data from published manuscripts and contacted trial authors if necessary. Within each of the comparisons, we pooled data for each outcome of interest and conducted inverse-variance, random-effects meta-analyses. We evaluated the certainty of the evidence using GRADEpro GDT.\nMain results\nWe identified six eligible RCTs published in English between 2010 and 2022, which together included 1702 participants. The mean age of participants ranged from 76 to 80 and the proportion of male participants was between 29.4% and 79.3%. Most participants, in the studies where the type of dementia was reported, had a diagnosis of Alzheimer’s disease (AD; n = 1002, 58.9% of the whole sample, 81.2% of the participants for whom the specific diagnosis was reported).\nRisk of bias in the individual studies was relatively low. The exception was a high risk of bias in relation to blinding of participants and practitioners, which is not usually feasible with psychosocial interventions.\nOur primary outcome of everyday functioning was operationalised in the included studies as goal attainment in relation to activities targeted in the intervention. For our main comparison of CR with usual care, we pooled data for goal attainment evaluated from three perspectives (self-rating of performance, informant rating of performance, and self-rating of satisfaction with performance) at end of treatment and at medium-term follow-up (3 to 12 months). We could also pool data at these time points for 20 and 19 secondary outcomes respectively. The review findings were strongly driven by one large, high-quality RCT.\nWe found high-certainty evidence of large positive effects of CR on all three primary outcome perspectives at the end of treatment: participant self-ratings of goal attainment (standardised mean difference (SMD) 1.46, 95% confidence interval (CI) 1.26 to 1.66; I2 = 0%; 3 RCTs, 501 participants), informant ratings of goal attainment (SMD 1.61, 95% CI 1.01 to 2.21; I2 = 41%; 3 RCTs, 476 participants), and self-ratings of satisfaction with goal attainment (SMD 1.31, 95% CI 1.09 to 1.54; I2 = 5%; 3 RCTs, 501 participants), relative to an inactive control condition. 
At medium-term follow-up, we found high-certainty evidence showing a large positive effect of CR on all three primary outcome perspectives: participant self-ratings of goal attainment (SMD 1.46, 95% CI 1.25 to 1.68; I2 = 0%; 2 RCTs, 432 participants), informant ratings of goal attainment (SMD 1.25, 95% CI 0.78 to 1.72; I2 = 29%; 3 RCTs, 446 participants), and self-ratings of satisfaction with goal attainment (SMD 1.19, 95% CI 0.73 to 1.66; I2 = 28%; 2 RCTs, 432 participants), relative to an inactive control condition.\nFor participants at the end of treatment we found high-certainty evidence showing a small positive effect of CR on self-efficacy (2 RCTs, 456 participants) and immediate recall (2 RCTs, 459 participants).\nFor participants at medium-term follow-up we found moderate-certainty evidence showing a small positive effect of CR on auditory selective attention (2 RCTs, 386 participants), and a small negative effect on general functional ability (3 RCTs, 673 participants), and we found low-certainty evidence showing a small positive effect on sustained attention (2 RCTs, 413 participants), and a small negative effect on memory (2 RCTs, 51 participants) and anxiety (3 RCTs, 455 participants).\nWe found moderate- and low-certainty evidence indicating that at the end of treatment CR had negligible effects on participant anxiety, quality of life, sustained attention, memory, delayed recall, and general functional ability, and at medium-term follow-up on participant self-efficacy, depression, quality of life, immediate recall, and verbal fluency.\nFor care partners at the end of treatment we found low-certainty evidence showing a small positive effect on environmental aspects of quality of life (3 RCTs, 465 care partners), and small negative effects of CR on level of depression (2 RCTs, 32 care partners) and on psychological wellbeing (2 RCTs, 388 care partners).\nFor care partners at medium-term follow-up we found high-certainty evidence showing a small positive effect of CR on social aspects of quality of life (3 RCTs, 436 care partners) and moderate-certainty evidence showing a small positive effect on psychological aspects of quality of life (3 RCTs, 437 care partners).\nWe found moderate- and low-certainty evidence at the end of treatment that CR had negligible effects on care partners’ physical health, psychological and social aspects of quality of life, and stress, and at medium-term follow-up for the physical health aspect of care partners’ quality of life and psychological wellbeing.\nAuthors' conclusions\nCR is helpful in enabling people with mild or moderate dementia to improve their ability to manage the everyday activities targeted in the intervention. Confidence in these findings could be strengthened if more high-quality studies contributed to the observed effects. The available evidence suggests that CR can form a valuable part of a clinical toolkit to assist people with dementia in overcoming some of the everyday barriers imposed by cognitive and functional difficulties. Future research, including process evaluation studies, could help identify avenues to maximise CR effects and achieve wider impacts on functional ability and wellbeing.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cognitive rehabilitation helps people living with dementia to manage everyday activities that are important to them.\nFuture studies could explore how to use cognitive rehabilitation to also improve overall functioning and wellbeing."
}
] |
query-laysum
|
24214
|
[
{
"role": "user",
"content": "Abstract: Background\nBaby-led breastfeeding is recommended as best practice in determining the frequency and duration of a breastfeed. An alternative approach is described as scheduled, where breastfeeding is timed and restricted in frequency and duration. It is necessary to review the evidence that supports current recommendations, so that women are provided with high-quality evidence to inform their feeding decisions.\nObjectives\nTo evaluate the effects of baby-led compared with scheduled (or mixed) breastfeeding for successful breastfeeding, for healthy newborns.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (23 February 2016), CINAHL (1981 to 23 February 2016), EThOS, Index to Theses and ProQuest database and World Health Organization's 1998 evidence to support the 'Ten Steps' to successful breastfeeding (10 May 2016).\nSelection criteria\nWe planned to include randomised and quasi-randomised trials with randomisation at both the individual and cluster level. Studies presented in abstract form would have been eligible for inclusion if sufficient data were available. Studies using a cross-over design would not have been eligible for inclusion.\nData collection and analysis\nTwo review authors independently assessed for inclusion all potential studies we identified as a result of the search strategy. We would have resolved any disagreement through discussion or, if required, consulted a third review author, but this was not necessary.\nMain results\nNo studies were identified that were eligible for inclusion in this review.\nAuthors' conclusions\nThis review demonstrates that there is no evidence from randomised controlled trials evaluating the effect of baby-led compared with scheduled (or mixed) breastfeeding for successful breastfeeding, for healthy newborns. It is recommended that no changes are made to current practice guidelines without undertaking robust research, to include many patterns of breastfeeding and not limited to baby-led and scheduled breastfeeding. Future exploratory research is needed on baby-led breastfeeding that takes the mother's perspective into consideration.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Mothers require information on the frequency and duration of breastfeeds but they receive conflicting advice. Current guidelines encourage baby-led breastfeeding. It is important to systematically review the evidence, to inform women's decisions on the relative effectiveness of each method.\nThis review is also important as baby-led breastfeeding is not always followed, as many women and caregivers seem more comfortable with scheduled rather than baby-led feeding patterns."
}
] |
query-laysum
|
24215
|
[
{
"role": "user",
"content": "Abstract: Background\nWithout a phase of retention after successful orthodontic treatment, teeth tend to 'relapse', that is, to return to their initial position. Retention is achieved by fitting fixed or removable retainers to provide stability to the teeth while avoiding damage to teeth and gums. Removable retainers can be worn full- or part-time. Retainers vary in shape, material, and the way they are made. Adjunctive procedures are sometimes used to try to improve retention, for example, reshaping teeth where they contact ('interproximal reduction'), or cutting fibres around teeth ('percision').\nThis review is an update of one originally published in 2004 and last updated in 2016.\nObjectives\nTo evaluate the effects of different retainers and retention strategies used to stabilise tooth position after orthodontic braces.\nSearch methods\nAn information specialist searched Cochrane Oral Health Trials Register, CENTRAL, MEDLINE, Embase and OpenGrey up to 27 April 2022 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nRandomised controlled trials (RCTs) involving children and adults who had retainers fitted or adjunctive procedures undertaken to prevent relapse following orthodontic treatment with braces. We excluded studies with aligners.\nData collection and analysis\nTwo review authors independently screened eligible studies, assessed risk of bias and extracted data. Outcomes were stability or relapse of tooth position, retainer failure (i.e. broken, detached, worn out, ill-fitting or lost), adverse effects on teeth and gums (i.e. plaque, gingival and bleeding indices), and participant satisfaction. We calculated mean differences (MD) for continuous data, risk ratios (RR) or risk differences (RD) for dichotomous data, and hazard ratios (HR) for survival data, all with 95% confidence intervals (CI). We conducted meta-analyses when similar studies reported outcomes at the same time point; otherwise results were reported as mean ranges. We prioritised reporting of Little's Irregularity Index (crookedness of anterior teeth) to measure relapse, judging the minimum important difference to be 1 mm.\nMain results\nWe included 47 studies, with 4377 participants. The studies evaluated: removable versus fixed retainers (8 studies); different types of fixed retainers (22 studies) or bonding materials (3 studies); and different types of removable retainers (16 studies). Four studies evaluated more than one comparison. We judged 28 studies to have high risk of bias, 11 to have low risk, and eight studies as unclear.\nWe focused on 12-month follow-up.\nThe evidence is low or very low certainty. Most comparisons and outcomes were evaluated in only one study at high risk of bias, and most studies measured outcomes after less than a year.\nRemovable versus fixed retainers\nRemovable (part-time) versus fixed\nOne study reported that participants wearing clear plastic retainers part-time in the lower arch had more relapse than participants with multistrand fixed retainers, but the amount was not clinically significant (Little's Irregularity Index (LII) MD 0.92 mm, 95% CI 0.23 to 1.61; 56 participants). 
Removable retainers were more likely to cause discomfort (RR 12.22; 95% CI 1.69 to 88.52; 57 participants), but were associated with less retainer failure (RR 0.44, 95% CI 0.20 to 0.98; 57 participants) and better periodontal health (Gingival Index (GI) MD −0.34, 95% CI −0.66 to −0.02; 59 participants).\nRemovable (full-time) versus fixed\nOne study reported that removable clear plastic retainers worn full-time in the lower arch did not provide any clinically significant benefit for tooth stability over fixed retainers (LII MD 0.60 mm, 95% CI 0.17 to 1.03; 84 participants). Participants with clear plastic retainers had better periodontal health (gingival bleeding RR 0.53, 95% CI 0.31 to 0.88; 84 participants), but higher risk of retainer failure (RR 3.42, 95% CI 1.38 to 8.47; 77 participants). The study found no difference between retainers for caries.\nDifferent types of fixed retainers\nComputer-aided design/computer-aided manufacturing (CAD/CAM) nitinol versus conventional/analogue multistrand\nOne study reported that CAD/CAM nitinol fixed retainers were better for tooth stability, but the difference was not clinically significant (LII MD −0.46 mm, 95% CI −0.72 to −0.21; 66 participants). There was no evidence of a difference between retainers for periodontal health (GI MD 0.00, 95% CI -0.16 to 0.16; 2 studies, 107 participants), or retainer survival (RR 1.29, 95% CI 0.67 to 2.49; 1 study, 41 participants).\nFibre-reinforced composite versus conventional multistrand/spiral wire\nOne study reported that fibre-reinforced composite fixed retainers provided better stability than multistrand retainers, but this was not of a clinically significant amount (LII MD −0.70 mm, 95% CI −1.17 to −0.23; 52 participants). The fibre-reinforced retainers had better patient satisfaction with aesthetics (MD 1.49 cm on a visual analogue scale, 95% CI 0.76 to 2.22; 1 study, 32 participants), and similar retainer survival rates (RR 1.01, 95% CI 0.84 to 1.21; 7 studies; 1337 participants) at 12 months. However, failures occurred earlier (MD −1.48 months, 95% CI −1.88 to −1.08; 2 studies, 103 participants; 24-month follow-up) and more gingival inflammation at six months, though bleeding on probing (BoP) was similar (GI MD 0.59, 95% CI 0.13 to 1.05; BoP MD 0.33, 95% CI −0.13 to 0.79; 1 study, 40 participants).\nDifferent types of removable retainers\nClear plastic versus Hawley\nWhen worn in the lower arch for six months full-time and six months part-time, clear plastic provided similar stability to Hawley retainers (LII MD 0.01 mm, 95% CI −0.65 to 0.67; 1 study, 30 participants). Hawley retainers had lower risk of failure (RR 0.60, 95% CI 0.43 to 0.83; 1 study, 111 participants), but were less comfortable at six months (VAS MD -1.86 cm, 95% CI -2.19 to -1.53; 1 study, 86 participants).\nPart-time versus full-time wear of Hawley\nThere was no evidence of a difference in stability between part-time and full-time use of Hawley retainers (MD 0.20 mm, 95% CI −0.28 to 0.68; 1 study, 52 participants).\nAuthors' conclusions\nThe evidence is low to very low certainty, so we cannot draw firm conclusions about any one approach to retention over another. More high-quality studies are needed that measure tooth stability over at least two years, and measure how long retainers last, patient satisfaction and negative side effects from wearing retainers, such as tooth decay and gum disease.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- We cannot draw firm conclusions about any one approach to retention over another because we have little confidence in the evidence.\n- More well-planned studies are needed. They should measure tooth stability over at least two years, how long the retainers last, patient satisfaction and any unwanted effects from the retainers, such as tooth decay and gum disease."
}
] |
query-laysum
|
24216
|
[
{
"role": "user",
"content": "Abstract: Background\nMaternal pushing during the second stage of labour is an important and indispensable contributor to the involuntary expulsive force developed by uterine contraction. There is no consensus on an ideal strategy to facilitate these expulsive efforts and there are contradictory results about the influence on the mother and fetus.\nObjectives\nTo evaluate the benefits and possible disadvantages of different kinds of techniques regarding maternal pushing/breathing during the expulsive stage of labour on maternal and fetal outcomes.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register (19 September 2016) and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs assessing the effects of pushing/bearing down techniques (type and/or timing) performed during the second stage of labour on maternal and neonatal outcomes. Cluster-RCTs were eligible for inclusion, but none were identified. Studies using a cross-over design and those published in abstract form only were not eligible for inclusion in this review.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. Data were checked for accuracy.\nMain results\nIn this updated review, we included 21 studies in total, eight (884 women) comparing spontaneous pushing versus directed pushing, with or without epidural analgesia and 13 (2879 women) comparing delayed pushing versus immediate pushing with epidural analgesia. Our GRADE assessments of evidence ranged from moderate to very low quality; the main reasons for downgrading were study design limitations and imprecision of effect estimates. Overall, the included studies varied in their risk of bias; most were judged to be at unclear risk of bias.\nComparison 1: types of pushing: spontaneous pushing versus directed pushing\nThere was no clear difference in the duration of the second stage of labour (mean difference (MD) 10.26 minutes; 95% confidence interval (CI) -1.12 to 21.64 minutes, six studies, 667 women, random-effects, I² = 81%) (very low-quality evidence). There was no clear difference in 3rd or 4th degree perineal laceration (risk ratio (RR) 0.87; 95% CI 0.45 to 1.66, one study, 320 women) (low-quality evidence), episiotomy (average RR 1.05; 95% CI 0.60 to 1.85, two studies, 420 women, random-effects, I² = 81%), duration of pushing (MD -9.76 minutes, 95% CI -19.54 to 0.02; two studies; 169 women; I² = 88%) (very low-quality evidence), or rate of spontaneous vaginal delivery (RR 1.01, 95% CI 0.97 to 1.05; five studies; 688 women; I² = 2%) (moderate-quality evidence). For primary neonatal outcomes such as five-minute Apgar score less than seven, there was no clear difference between groups (RR 0.35; 95% CI 0.01 to 8.43, one study, 320 infants) (very low-quality evidence), and the number of admissions to neonatal intensive care (RR 1.08; 95% CI 0.30 to 3.79, two studies, 393 infants) (very low-quality evidence) also showed no clear difference between spontaneous and directed pushing. 
No data were available on hypoxic ischaemic encephalopathy.\nComparison 2: timing of pushing: delayed pushing versus immediate pushing (all women with epidural)\nFor the primary maternal outcomes, delayed pushing was associated with an increase of 56 minutes in the duration of the second stage of labour (MD 56.40, 95% CI 42.05 to 70.76; 11 studies; 3049 women; I² = 91%) (very low-quality evidence), but no clear difference in 3rd or 4th degree perineal laceration (RR 0.94; 95% CI 0.78 to 1.14, seven studies, 2775 women) (moderate-quality evidence) or episiotomy (RR 0.95; 95% CI 0.87 to 1.04, five studies, 2320 women). Delayed pushing was also associated with a 19-minute decrease in the duration of pushing (MD -19.05, 95% CI -32.27 to -5.83; 11 studies; 2932 women; I² = 95%) (very low-quality evidence) and an increase in spontaneous vaginal delivery (RR 1.07; 95% CI 1.02 to 1.11, 12 studies, 3114 women) (moderate-quality evidence).\nFor the primary neonatal outcomes, there was no clear difference between groups in admission to neonatal intensive care (RR 0.98; 95% CI 0.67 to 1.41, three studies, n = 2197) (low-quality evidence) and five-minute Apgar score less than seven (RR 0.15; 95% CI 0.01 to 3.00; three studies; 413 infants) (very low-quality evidence). There were no data on hypoxic ischaemic encephalopathy. Delayed pushing was associated with a greater incidence of low umbilical cord blood pH (RR 2.24; 95% CI 1.37 to 3.68, 4 studies, 2145 infants) and increased the cost of intrapartum care by CDN$ 68.22 (MD 68.22, 95% CI 55.37 to 81.07, one study, 1862 women).\nAuthors' conclusions\nThis updated review is based on 21 included studies of moderate to very low quality of evidence (with evidence mainly downgraded due to study design limitations and imprecision of effect estimates).\nTiming of pushing with epidural is consistent in that delayed pushing leads to a shortening of the actual time spent pushing and an increase in spontaneous vaginal delivery, at the expense of an overall longer duration of the second stage of labour and an increased risk of a low umbilical cord pH (based only on one study). Nevertheless, there was no clear difference in serious perineal laceration and episiotomy, and in other neonatal outcomes (admission to neonatal intensive care, five-minute Apgar score less than seven and delivery room resuscitation) between delayed and immediate pushing.\nTherefore, for the type of pushing, with or without epidural, there is no conclusive evidence to support or refute any specific style as part of routine clinical practice, and in the absence of strong evidence supporting a specific method or timing of pushing, the woman's preference and comfort and clinical context should guide decisions.\nFurther well-designed RCTs, addressing clinically important maternal and neonatal outcomes, are required to add evidence-based information to the current knowledge. Such trials will provide more complete data to be incorporated into a future update of this review.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "During the second stage of labour a common technique is to encourage women to take a deep breath at the beginning of a contraction then hold it and bear down throughout the contraction (this is known as directed pushing). In spontaneous pushing, women are free to follow their own instincts and generally push three to five times per contraction. Delayed pushing involves instructing women to avoid pushing until there is an irresistible urge to push or when the presenting part of the baby has descended to the perineum."
}
] |
query-laysum
|
24217
|
[
{
"role": "user",
"content": "Abstract: Background\nUndernutrition in the critical first 1000 days of life is the most common form of childhood malnutrition, and a significant problem in low- and middle-income countries (LMICs). The effects of undernutrition in children aged under five years are wide-ranging and include increased susceptibility to and severity of infections; impaired physical and cognitive development, which diminishes school and work performance later in life; and death. Growth monitoring and promotion (GMP) is a complex intervention that comprises regular measurement and charting of growth combined with promotion activities. Policymakers, particularly in international aid agencies, have differing and changeable interpretations and perceptions of the purpose of GMP. The effectiveness of GMP as an approach to preventing malnutrition remains a subject of debate, particularly regarding the added value of growth monitoring compared with promotion alone.\nObjectives\nTo evaluate the effectiveness of child growth monitoring and promotion for identifying and addressing faltering growth, improving infant and child feeding practices, and promoting contact with and use of health services in children under five years of age in low- and middle-income countries.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 3 November 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs), cohort studies, and controlled before-after studies that compared GMP with standard care or nutrition education alone in non-hospitalised children aged under five years.\nData collection and analysis\nWe used standard Cochrane methods to conduct a narrative synthesis. Our primary outcomes were anthropometric indicators, infant and child feeding practices, and health service usage. Secondary outcomes were frequency and severity of childhood illnesses, and mortality. We used GRADE to assess the certainty of evidence for each primary outcome.\nMain results\nWe included six studies reported in eight publications. We grouped the findings according to intervention.\nCommunity-based growth monitoring and promotion (without supplementary feeding) versus standard care\nWe are unsure if GMP compared to standard care improves infant and child feeding practices, as measured at 24 months by the proportion of infants who have fluids other than breast milk introduced early (49.7% versus 70.5%; 1 study; 4296 observations; very low-certainty evidence). We are unsure if GMP improves health service usage, as measured at 24 months by the proportion of children who receive vitamin A (72.5% versus 62.9%; 1 study; 4296 observations; very low-certainty evidence) and the proportion of children who receive deworming (29.2% versus 14.6%; 1 study; 4296 observations; very low-certainty evidence). 
No studies reported selected anthropometric indicators (weight-for-age z-score or height-for-age z-score) at 12 or 24 months, infant and child feeding practices at 12 months, or health service usage at 12 months.\nCommunity-based growth monitoring and promotion (with supplementary feeding) versus standard care\nTwo studies (with 569 participants) reported the mean weight-for-age z-score at 12 months, providing very low-certainty evidence: in one study, there was little or no difference between GMP and standard care (mean difference (MD) −0.07, 95% confidence interval (CI) −0.19 to 0.06); in the other study, mean weight-for-age z-score worsened in both groups, but we were unable to calculate a relative effect. GMP versus standard care may make little to no difference to the mean height-for-age z-score at 12 months (MD −0.15, 95% CI −0.34 to 0.04; 1 study, 337 participants; low-certainty evidence). Two studies (with 564 participants) reported a range of outcome measures related to infant and child feeding practices at 12 months, showing little or no difference between the groups (very low-certainty evidence). No studies reported health service usage at 12 or 24 months, feeding practices at 24 months, or selected anthropometric indicators at 24 months.\nAuthors' conclusions\nThere is limited uncertain evidence on the effectiveness of GMP for identifying and addressing faltering growth, improving infant and child feeding practices, and promoting contact with and use of health services in children aged under five years in LMICs. Future studies should explore the reasons for the apparent limited impact of GMP on key child health indicators. Reporting of GMP interventions and important outcomes must be transparent and consistent.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included six studies. They were conducted in resource-poor rural settings with high levels of malnutrition. We grouped the findings according to the type of intervention, which varied across studies."
}
] |
query-laysum
|
24218
|
[
{
"role": "user",
"content": "Abstract: Background\nShared decision making (SDM) is a process by which a healthcare choice is made by the patient, significant others, or both with one or more healthcare professionals. However, it has not yet been widely adopted in practice. This is the second update of this Cochrane review.\nObjectives\nTo determine the effectiveness of interventions for increasing the use of SDM by healthcare professionals. We considered interventions targeting patients, interventions targeting healthcare professionals, and interventions targeting both.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and five other databases on 15 June 2017. We also searched two clinical trials registries and proceedings of relevant conferences. We checked reference lists and contacted study authors to identify additional studies.\nSelection criteria\nRandomized and non-randomized trials, controlled before-after studies and interrupted time series studies evaluating interventions for increasing the use of SDM in which the primary outcomes were evaluated using observer-based or patient-reported measures.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nWe used GRADE to assess the certainty of the evidence.\nMain results\nWe included 87 studies (45,641 patients and 3113 healthcare professionals) conducted mainly in the USA, Germany, Canada and the Netherlands. Risk of bias was high or unclear for protection against contamination, low for differences in the baseline characteristics of patients, and unclear for other domains.\nForty-four studies evaluated interventions targeting patients. They included decision aids, patient activation, question prompt lists and training for patients among others and were administered alone (single intervention) or in combination (multifaceted intervention). The certainty of the evidence was very low. It is uncertain if interventions targeting patients when compared with usual care increase SDM whether measured by observation (standardized mean difference (SMD) 0.54, 95% confidence interval (CI) -0.13 to 1.22; 4 studies; N = 424) or reported by patients (SMD 0.32, 95% CI 0.16 to 0.48; 9 studies; N = 1386; risk difference (RD) -0.09, 95% CI -0.19 to 0.01; 6 studies; N = 754), reduce decision regret (SMD -0.10, 95% CI -0.39 to 0.19; 1 study; N = 212), improve physical (SMD 0.00, 95% CI -0.36 to 0.36; 1 study; N = 116) or mental health-related quality of life (QOL) (SMD 0.10, 95% CI -0.26 to 0.46; 1 study; N = 116), affect consultation length (SMD 0.10, 95% CI -0.39 to 0.58; 2 studies; N = 224) or cost (SMD 0.82, 95% CI 0.42 to 1.22; 1 study; N = 105).\nIt is uncertain if interventions targeting patients when compared with interventions of the same type increase SDM whether measured by observation (SMD 0.88, 95% CI 0.39 to 1.37; 3 studies; N = 271) or reported by patients (SMD 0.03, 95% CI -0.18 to 0.24; 11 studies; N = 1906); (RD 0.03, 95% CI -0.02 to 0.08; 10 studies; N = 2272); affect consultation length (SMD -0.65, 95% CI -1.29 to -0.00; 1 study; N = 39) or costs. No data were reported for decision regret, physical or mental health-related QOL.\nFifteen studies evaluated interventions targeting healthcare professionals. They included educational meetings, educational material, educational outreach visits and reminders among others. The certainty of evidence is very low. 
It is uncertain if these interventions when compared with usual care increase SDM whether measured by observation (SMD 0.70, 95% CI 0.21 to 1.19; 6 studies; N = 479) or reported by patients (SMD 0.03, 95% CI -0.15 to 0.20; 5 studies; N = 5772); (RD 0.01, 95% CI -0.03 to 0.06; 2 studies; N = 6303); reduce decision regret (SMD 0.29, 95% CI 0.07 to 0.51; 1 study; N = 326), affect consultation length (SMD 0.51, 95% CI 0.21 to 0.81; 1 study, N = 175), cost (no data available) or physical health-related QOL (SMD 0.16, 95% CI -0.05 to 0.36; 1 study; N = 359). Mental health-related QOL may slightly improve (SMD 0.28, 95% CI 0.07 to 0.49; 1 study, N = 359; low-certainty evidence).\nIt is uncertain if interventions targeting healthcare professionals compared to interventions of the same type increase SDM whether measured by observation (SMD -0.30, 95% CI -1.19 to 0.59; 1 study; N = 20) or reported by patients (SMD 0.24, 95% CI -0.10 to 0.58; 2 studies; N = 1459) as the certainty of the evidence is very low. There was insufficient information to determine the effect on decision regret, physical or mental health-related QOL, consultation length or costs.\nTwenty-eight studies targeted both patients and healthcare professionals. The interventions used a combination of patient-mediated and healthcare professional directed interventions. Based on low-certainty evidence, it is uncertain whether these interventions, when compared with usual care, increase SDM whether measured by observation (SMD 1.10, 95% CI 0.42 to 1.79; 6 studies; N = 1270) or reported by patients (SMD 0.13, 95% CI -0.02 to 0.28; 7 studies; N = 1479); (RD -0.01, 95% CI -0.20 to 0.19; 2 studies; N = 266); improve physical (SMD 0.08, 95% CI -0.37 to 0.54; 1 study; N = 75) or mental health-related QOL (SMD 0.01, 95% CI -0.44 to 0.46; 1 study; N = 75), affect consultation length (SMD 3.72, 95% CI 3.44 to 4.01; 1 study; N = 36) or costs (no data available) and may make little or no difference to decision regret (SMD 0.13, 95% CI -0.08 to 0.33; 1 study; low-certainty evidence).\nIt is uncertain whether interventions targeting both patients and healthcare professionals compared to interventions of the same type increase SDM whether measured by observation (SMD -0.29, 95% CI -1.17 to 0.60; 1 study; N = 20); (RD -0.04, 95% CI -0.13 to 0.04; 1 study; N = 134) or reported by patients (SMD 0.00, 95% CI -0.32 to 0.32; 1 study; N = 150) as the certainty of the evidence was very low. There was insufficient information to determine the effects on decision regret, physical or mental health-related quality of life, or consultation length or costs.\nAuthors' conclusions\nIt is uncertain whether any interventions for increasing the use of SDM by healthcare professionals are effective because the certainty of the evidence is low or very low.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Healthcare professionals often do not involve their patients in decision making about their care. With shared decision making, healthcare professionals inform patients about their choices and invite them to choose the option that reflects what is important to them, including the option not to proceed with treatment. Shared decision making is said to be desirable because patient involvement is accepted as a right and patients in general want more information about their health condition and prefer to take an active role in decisions about their health. The aim of this review was to find out if activities to increase shared decision making by healthcare professionals are effective or not. Examples of these activities are training programs, giving out leaflets, or email reminders. Cochrane researchers collected and analyzed all relevant studies to answer this question, and found 87 studies."
}
] |
query-laysum
|
24219
|
[
{
"role": "user",
"content": "Abstract: Background\nTardive dyskinesia (TD) remains a troublesome adverse effect of conventional antipsychotic (neuroleptic) medication. It has been proposed that TD could have a component of central cholinergic deficiency. Cholinergic drugs have been used to treat TD.\nObjectives\nTo determine the effects of cholinergic drugs (arecoline, choline, deanol, lecithin, meclofenoxate, physostigmine, RS 86, tacrine, metoxytacrine, galantamine, ipidacrine, donepezil, rivastigmine, eptastigmine, metrifonate, xanomeline, cevimeline) for treating antipsychotic-induced TD in people with schizophrenia or other chronic mental illness.\nSearch methods\nAn electronic search of the Cochrane Schizophrenia Group's Study-Based Register of Trials (16 July 2015 and April 2017) was undertaken. This register is assembled by extensive searches for randomised controlled trials in many electronic databases, registers of trials, conference proceedings and dissertations. References of all identified studies were searched for further trial citations.\nSelection criteria\nWe included reports identified by the search if they were of controlled trials involving people with antipsychotic-induced TD and chronic mental illness, who had been randomly allocated to either a cholinergic agent or to a placebo or no intervention. Two review authors independently assessed the methodological quality of the trials.\nData collection and analysis\nTwo review authors extracted data and, where possible, estimated risk ratios (RR) or mean differences (MD), with 95% confidence intervals (CI). We analysed data on an intention-to-treat basis, with the assumption that people who left early had no improvement. We assessed risk of bias and created a 'Summary of findings' table using GRADE.\nMain results\nWe included 14 studies investigating the use of cholinergic drugs compared with placebo published between 1976 and 2014. All studies involved small numbers of participants (five to 60 people). Three studies that investigated the new cholinergic Alzheimer drugs for the treatment of TD are new to this update. Overall, the risk of bias in the included studies was unclear, mainly due to poor reporting; allocation concealment was not described, generation of the sequence was not explicit, studies were not clearly blinded, we are unsure if data are incomplete, and data were often poorly or selectively reported.\nWe are uncertain about the effect of new or old cholinergic drugs on no clinically important improvement in TD symptoms when compared with placebo; the quality of evidence was very low (RR 0.89, 95% CI 0.65 to 1.23; 27 people, 4 RCTs). Eight trials found that cholinergic drugs may make little or no difference to deterioration of TD symptoms (low-quality evidence, RR 1.11, 95% CI 0.55 to 2.24; 147 people). Again, due to very low-quality evidence, we are uncertain about the effects on mental state (RR 0.50, 95% CI 0.10 to 2.61; 77 people, 5 RCTs), adverse events (RR 0.56, 95% CI 0.15 to 2.14; 106 people, 4 RCTs), and leaving the study early (RR 1.09,95% CI 0.56 to 2.10; 288 people 12 RCTs). No study reported on social confidence, social inclusion, social networks, or personalised quality of life.\nAuthors' conclusions\nTD remains a major public health problem. The clinical effects of both older cholinergic drugs and new cholinergic agents, now used for treating Alzheimer's disease, are unclear, as too few, too small studies leave many questions unanswered. 
Cholinergic drugs should remain of interest to researchers and currently have little place in routine clinical work. However, with the advent of new cholinergic agents now used for treating Alzheimer's disease, scope exists for more informative trials. If these new cholinergic agents are to be investigated for treating people with TD, their effects should be demonstrated in large well-designed, conducted and reported randomised trials.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ".\nThe available evidence is weak, limited, and small scale. It is not possible to recommend these drugs as a treatment for tardive dyskinesia based on our findings. To fully investigate whether the use of cholinergic drugs have any positive effects for people with tardive dyskinesia, there would have to be well-designed, larger, longer-term studies, particularly on new cholinergic drugs currently being used for treating Alzheimer’s disease.\nBen Gray, Senior Peer Researcher, McPin Foundation.\nhttp://mcpin.org/"
}
] |
query-laysum
|
24220
|
[
{
"role": "user",
"content": "Abstract: Background\nThe first-line treatment in donor sperm treatment consists of inseminations that can be done by intrauterine insemination (IUI) or by intracervical insemination (ICI).\nObjectives\nTo compare the effectiveness and safety of intrauterine insemination (IUI) and intracervical insemination (ICI) in women who start donor sperm treatment.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Trials Register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL in October 2016, checked references of relevant studies, and contacted study authors and experts in the field to identify additional studies. We searched PubMed, Google Scholar, the Grey literature, and five trials registers on 15 December 2017.\nSelection criteria\nWe included randomised controlled trials (RCTs) reporting on IUI versus ICI in natural cycles or with ovarian stimulation, and RCTs comparing different cointerventions in IUI and ICI. We included cross-over studies if pre-cross-over data were available.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. We collected data on primary outcomes of live birth and multiple pregnancy rates, and on secondary outcomes of clinical pregnancy, miscarriage, and cancellation rates.\nMain results\nWe included six RCTs (708 women analysed) on ICI and IUI in donor sperm treatment. Two studies compared IUI and ICI in natural cycles, two studies compared IUI and ICI in gonadotrophin-stimulated cycles, and two studies compared timing of IUI and ICI. There was very low-quality evidence; the main limitations were risk of bias due to poor reporting of study methods, and serious imprecision.\nIUI versus ICI in natural cycles\nThere was insufficient evidence to determine whether there was any clear difference in live birth rate between IUI and ICI in natural cycles (odds ratio (OR) 3.24, 95% confidence interval (CI) 0.12 to 87.13; 1 RCT, 26 women; very low-quality evidence). There was only one live birth in this study (in the IUI group). IUI resulted in higher clinical pregnancy rates (OR 6.18, 95% CI 1.91 to 20.03; 2 RCTs, 76 women; I² = 48%; very low-quality evidence).\nNo multiple pregnancies or miscarriages occurred in this study.\nIUI versus ICI in gonadotrophin-stimulated cycles\nThere was insufficient evidence to determine whether there was any clear difference in live birth rate between IUI and ICI in gonadotrophin-stimulated cycles (OR 2.55, 95% CI 0.72 to 8.96; 1 RCT, 43 women; very low-quality evidence). This suggested that if the chance of a live birth following ICI in gonadotrophin-stimulated cycles was assumed to be 30%, the chance following IUI in gonadotrophin-stimulated cycles would be between 24% and 80%. IUI may result in higher clinical pregnancy rates than ICI (OR 2.83, 95% CI 1.38 to 5.78; 2 RCTs, 131 women; I² = 0%; very low-quality evidence). IUI may be associated with higher multiple pregnancy rates than ICI (OR 2.77, 95% CI 1.00 to 7.69; 2 RCTs, 131 women; I² = 0%; very low-quality evidence). 
This suggested that if the risk of multiple pregnancy following ICI in gonadotrophin-stimulated cycles was assumed to be 10%, the risk following IUI would be between 10% and 46%.\nWe found insufficient evidence to determine whether there was any clear difference between the groups in miscarriage rates in gonadotrophin-stimulated cycles (OR 1.97, 95% CI 0.43 to 9.04; 2 RCTs, overall 67 pregnancies; I² = 50%; very low-quality evidence).\nTiming of IUI and ICI\nWe found no studies that reported on live birth rates.\nWe found a higher clinical pregnancy rate when IUI was timed one day after a rise in blood levels of luteinising hormone (LH) compared to IUI two days after a rise in blood levels of LH (OR 2.00, 95% CI 1.14 to 3.53; 1 RCT, 351 women; low-quality evidence). We found insufficient evidence to determine whether there was any clear difference in clinical pregnancy rates between ICI timed after a rise in urinary levels of LH versus a rise in basal temperature plus cervical mucus scores (OR 1.31, 95% CI 0.42 to 4.11; 1 RCT, 56 women; very low-quality evidence).\nNeither of these studies reported multiple pregnancy or miscarriage rates as outcomes.\nAuthors' conclusions\nThere was insufficient evidence to determine whether there was a clear difference in live birth rates between IUI and ICI in natural or gonadotrophin-stimulated cycles in women who started with donor sperm treatment. There was insufficient evidence available for the effect of timing of IUI or ICI on live birth rates. Very low-quality data suggested that in gonadotrophin-stimulated cycles, IUI may be associated with a higher clinical pregnancy rate than ICI, but also with a higher risk of multiple pregnancy rate. We concluded that the current evidence was too limited to choose between IUI or ICI, in natural cycles or with ovarian stimulation, in donor sperm treatment.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The first-line treatment in donor sperm treatment consists of inseminations that can be done by placing sperm inside a woman's uterus to facilitate fertilisation (IUI) or by inserting sperm into the vagina using a small, needleless syringe or a cervical cap (ICI). Both IUI and ICI can be performed in natural cycles or following ovarian stimulation.\nOvarian stimulation can be performed with gonadotrophins, which are injected, or clomiphene citrate, which is available as a tablet. One of the risks of ovarian stimulation is multiple pregnancies. Therefore, the first few cycles of IUI and ICI are usually performed without ovarian stimulation.\nIt is important that IUI and ICI are performed at a specific time in the woman's menstrual cycle (as close to ovulation as possible). Various techniques of determining the best timing for IUI and ICI in natural cycles are available, such as keeping basal temperature charts, checking cervical mucus scores, testing blood or urine levels of luteinising hormone (LH), or monitoring with ultrasounds.\nWe compared IUI and ICI with each other, and also compared different types of each technique."
}
] |
query-laysum
|
24221
|
[
{
"role": "user",
"content": "Abstract: Background\nBurn injuries are the fourth most common traumatic injury, causing an estimated 180,000 deaths annually worldwide. Superficial burns can be managed with dressings alone, but deeper burns or those that fail to heal promptly are usually treated surgically. Acute burns surgery aims to debride burnt skin until healthy tissue is reached, at which point skin grafts or temporising dressings are applied. Conventional debridement is performed with an angled blade, tangentially shaving burned tissue until healthy tissue is encountered. Hydrosurgery, an alternative to conventional blade debridement, simultaneously debrides, irrigates, and removes tissue with the aim of minimising damage to uninjured tissue. Despite the increasing use of hydrosurgery, its efficacy and the risk of adverse events following surgery for burns is unclear.\nObjectives\nTo assess the effects of hydrosurgical debridement and skin grafting versus conventional surgical debridement and skin grafting for the treatment of acute partial-thickness burns.\nSearch methods\nIn December 2019 we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included randomised controlled trials (RCTs) that enrolled people of any age with acute partial-thickness burn injury and assessed the use of hydrosurgery.\nData collection and analysis\nTwo review authors independently performed study selection, data extraction, 'Risk of bias' assessment, and GRADE assessment of the certainty of the evidence.\nMain results\nOne RCT met the inclusion criteria of this review. The study sample size was 61 paediatric participants with acute partial-thickness burns of 3% to 4% total burn surface area. Participants were randomised to hydrosurgery or conventional debridement. There may be little or no difference in mean time to complete healing (mean difference (MD) 0.00 days, 95% confidence interval (CI) −6.25 to 6.25) or postoperative infection risk (risk ratio 1.33, 95% CI 0.57 to 3.11). These results are based on very low-certainty evidence, which was downgraded twice for risk of bias, once for indirectness, and once for imprecision.\nThere may be little or no difference in operative time between hydrosurgery and conventional debridement (MD 0.2 minutes, 95% CI −12.2 to 12.6); again, the certainty of the evidence is very low, downgraded once for risk of bias, once for indirectness, and once for imprecision. There may be little or no difference in scar outcomes at six months. Health-related quality of life, resource use, and other adverse outcomes were not reported.\nAuthors' conclusions\nThis review contains one randomised trial of hydrosurgery versus conventional debridement in a paediatric population with low percentage of total body surface area burn injuries. Based on the available trial data, there may be little or no difference between hydrosurgery and conventional debridement in terms of time to complete healing, postoperative infection, operative time, and scar outcomes at six months. 
These results are based on very low-certainty evidence. Further research evaluating these outcomes as well as health-related quality of life, resource use, and other adverse event outcomes is required.\n\nGiven the provided abstract, please respond to the following query: \"Our methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases for randomised controlled trials that compared burn treatment using hydrosurgery with conventional debridement. Randomised controlled trials are medical studies where the treatment people receive is chosen at random. This type of study provides the most reliable evidence about whether different approaches to health care make a difference. Participants in the studies could be any age. The studies could have taken place anywhere and be reported in any language."
}
] |
query-laysum
|
24223
|
[
{
"role": "user",
"content": "Abstract: Background\nHigh altitude illness (HAI) is a term used to describe a group of mainly cerebral and pulmonary syndromes that can occur during travel to elevations above 2500 metres (˜ 8200 feet). Acute mountain sickness (AMS), high altitude cerebral oedema (HACE) and high altitude pulmonary oedema (HAPE) are reported as potential medical problems associated with high altitude ascent. In this second review, in a series of three about preventive strategies for HAI, we assessed the effectiveness of five of the less commonly used classes of pharmacological interventions.\nObjectives\nTo assess the clinical effectiveness and adverse events of five of the less commonly used pharmacological interventions for preventing acute HAI in participants who are at risk of developing high altitude illness in any setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) in May 2017. We adapted the MEDLINE strategy for searching the other databases. We used a combination of thesaurus-based and free-text search terms. We scanned the reference lists and citations of included trials and any relevant systematic reviews that we identified for further references to additional trials.\nSelection criteria\nWe included randomized controlled trials conducted in any setting where one of five classes of drugs was employed to prevent acute HAI: selective 5-hydroxytryptamine(1) receptor agonists; N-methyl-D-aspartate (NMDA) antagonist; endothelin-1 antagonist; anticonvulsant drugs; and spironolactone. We included trials involving participants who are at risk of developing high altitude illness (AMS or HACE, or HAPE, or both). We included participants with and without a history of high altitude illness. We applied no age or gender restrictions. We included trials where the relevant medication was administered before the beginning of ascent. We excluded trials using these drugs during ascent or after ascent.\nData collection and analysis\nWe used the standard methodological procedures employed by Cochrane.\nMain results\nWe included eight studies (334 participants, 9 references) in this review. Twelve studies are ongoing and will be considered in future versions of this review as appropriate. We have been unable to obtain full-text versions of a further 12 studies and have designated them as 'awaiting classification'. Four studies were at a low risk of bias for randomization; two at a low risk of bias for allocation concealment. Four studies were at a low risk of bias for blinding of participants and personnel. We considered three studies at a low risk of bias for blinding of outcome assessors. We considered most studies at a high risk of selective reporting bias.\nWe report results for the following four main comparisons.\nSumatriptan versus placebo (1 parallel study; 102 participants)\nData on sumatriptan showed a reduction of the risk of AMS when compared with a placebo (risk ratio (RR) = 0.43, CI 95% 0.21 to 0.84; 1 study, 102 participants; low quality of evidence). The one included study did not report events of HAPE, HACE or adverse events related to administrations of sumatriptan.\nMagnesium citrate versus placebo (1 parallel study; 70 participants)\nThe estimated RR for AMS, comparing magnesium citrate tablets versus placebo, was 1.09 (95% CI 0.55 to 2.13; 1 study; 70 participants; low quality of evidence). 
In addition, the estimated RR for loose stools was 3.25 (95% CI 1.17 to 8.99; 1 study; 70 participants; low quality of evidence). The one included study did not report events of HAPE or HACE.\nSpironolactone versus placebo (2 parallel studies; 205 participants)\nPooled estimation of RR for AMS was not performed due to considerable heterogeneity between the included studies (I² = 72%). RR from individual studies was 0.40 (95% CI 0.12 to 1.31) and 1.44 (95% CI 0.79 to 2.01; very low quality of evidence). No events of HAPE or HACE were reported. Adverse events were not evaluated.\nAcetazolamide versus spironolactone (1 parallel study; 232 participants)\nData on acetazolamide compared with spironolactone showed a reduction of the risk of AMS with the administration of acetazolamide (RR = 0.36, 95% CI 0.18 to 0.70; 232 participants; low quality of evidence). No events of HAPE or HACE were reported. Adverse events were not evaluated.\nAuthors' conclusions\nThis Cochrane Review is the second in a series of three providing relevant information to clinicians and other interested parties on how to prevent high altitude illness. The assessment of five of the less commonly used classes of drugs suggests that there is a scarcity of evidence related to these interventions. Clinical benefits and harms related to potential interventions such as sumatriptan are still unclear. Overall, the evidence is limited due to the low number of studies identified (for most of the comparison only one study was identified); limitations in the quality of the evidence (moderate to low); and the number of studies pending classification (24 studies awaiting classification or ongoing). We lack the large and methodologically sound studies required to establish or refute the efficacy and safety of most of the pharmacological agents evaluated in this review.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to May 2017. We included eight studies and 334 participants related to five different classes of medicines sometimes recommended for HAI prevention. These medicines included those that mimic the action of serotonin at selected sites (selective 5-hydroxytryptamine(1) receptor agonists), medicines that regulate the action of calcium (N-methyl-D-aspartate (NMDA) antagonist), medicines that promote dilation of the blood vessels (endothelin-1 antagonist), medicines that prevent a neuron (nerve cell) from 'firing' (initiating an action) and convulsions from developing (anticonvulsant medicines), as well as medicines that regulate the body´s sodium and water levels (spironolactone). All studies were undertaken in high altitude mountain areas. The participants ranged between 16 and 65 years of age. Only one study included people at a high risk of this condition due to their history of HAI. Four trials provided the intervention between one to three days prior to the ascent (50%), and three trials less than 24 hours prior (37.5%). The participants in all these studies reached a final altitude of between 3500 and 5895 metres above sea level. Only one of the eight included studies did not provide clear information about the source of funding (12.5%). Twenty-four additional studies were classified as ongoing (12), or awaiting classification (12; unable to obtain full texts)."
}
] |
query-laysum
|
24224
|
[
{
"role": "user",
"content": "Abstract: Background\nPain is a common symptom with cancer, and 30% to 50% of all people with cancer will experience moderate to severe pain that can have a major negative impact on their quality of life. Non-opioid drugs are commonly used to treat cancer pain, and are recommended for this purpose in the World Health Organization (WHO) cancer pain treatment ladder, either alone or in combination with opioids.\nA previous Cochrane review that examined the evidence for nonsteroidal anti-inflammatory drugs (NSAIDs) or paracetamol, alone or combined with opioids, for cancer pain was withdrawn in 2015 because it was out of date; the date of the last search was 2005. This review, and another on paracetamol, updates the evidence.\nObjectives\nTo assess the efficacy of oral NSAIDs for cancer pain in adults, and the adverse events reported during their use in clinical trials.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, and Embase from inception to April 2017, together with reference lists of retrieved papers and reviews, and two online study registries.\nSelection criteria\nWe included randomised, double-blind, single-blind, or open-label studies of five days' duration or longer, comparing any oral NSAID alone with placebo or another NSAID, or a combination of NSAID plus opioid with the same dose of the opioid alone, for cancer pain of any pain intensity. The minimum study size was 25 participants per treatment arm at the initial randomisation.\nData collection and analysis\nTwo review authors independently searched for studies, extracted efficacy and adverse event data, and examined issues of study quality and potential bias. We did not carry out any pooled analyses. We assessed the quality of the evidence using GRADE and created a 'Summary of findings' table.\nMain results\nEleven studies satisfied inclusion criteria, lasting one week or longer; 949 participants with mostly moderate or severe pain were randomised initially, but fewer completed treatment or had results of treatment. Eight studies were double-blind, two single-blind, and one open-label. None had a placebo only control; eight compared different NSAIDs, three an NSAID with opioid or opioid combination, and one both. None compared an NSAID plus opioid with the same dose of opioid alone. Most studies were at high risk of bias for blinding, incomplete outcome data, or small size; none was unequivocally at low risk of bias.\nIt was not possible to compare NSAIDs as a group with another treatment, or one NSAID with another NSAID. Results for all NSAIDs are reported as a randomised cohort. We judged results for all outcomes as very low-quality evidence.\nNone of the studies reported our primary outcomes of participants with pain reduction of at least 50%, and at least 30%, from baseline; participants with Patient Global Impression of Change (PGIC) of much improved or very much improved (or equivalent wording). With NSAID, initially moderate or severe pain was reduced to no worse than mild pain after one or two weeks in four studies (415 participants in total), with a range of estimates between 26% and 51% in individual studies.\nAdverse event and withdrawal reporting was inconsistent. Two serious adverse events were reported with NSAIDs, and 22 deaths, but these were not clearly related to any pain treatment. Common adverse events were thirst/dry mouth (15%), loss of appetite (14%), somnolence (11%), and dyspepsia (11%). 
Withdrawals were common, mostly because of lack of efficacy (24%) or adverse events (5%).\nAuthors' conclusions\nThere is no high-quality evidence to support or refute the use of NSAIDs alone or in combination with opioids for the three steps of the three-step WHO cancer pain ladder. There is very low-quality evidence that some people with moderate or severe cancer pain can obtain substantial levels of benefit within one or two weeks.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One person in two or three who gets cancer will suffer from pain that becomes moderate or severe in intensity. The pain tends to get worse as the cancer progresses. In 1986, the World Health Organization (WHO) recommended taking morphine-like drugs (opioids) for moderate to severe pain from cancer, and non-opioid drugs like NSAIDs, alone for mild to moderate pain, or alongside opioids in people with moderate to severe pain. There are many different types of NSAIDs. Common NSAIDs are ibuprofen and diclofenac."
}
] |
query-laysum
|
24225
|
[
{
"role": "user",
"content": "Abstract: Background\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\nObjectives\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\nSelection criteria\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\nMain results\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks’ gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians’ interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect.\nFor low-back pain\nResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). 
Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two).\nFor pelvic pain\nResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two).\nFor low-back and pelvic pain\nResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).\nThe results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.\nWhen reported, adverse effects were minor and transient.\nAuthors' conclusions\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.\nClinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.\nFurther evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence and conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is low-quality evidence suggesting that exercise improves pain and disability for women with low-back pain, and moderate-quality evidence that exercise results in less sick leave and fewer women reporting pain in those with both low-back and pelvic pain together. The quality of evidence is due to problems with the design of studies, small numbers of women and varied results. As a result, we believe that future studies are very likely to change our conclusions. There is simply not enough good quality evidence to make confident decisions about treatments for these complaints."
}
] |
query-laysum
|
24226
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic inflammatory demyelinating polyradiculoneuropathy (CIDP) causes progressive or relapsing weakness and numbness of the limbs, developing over at least two months. Uncontrolled studies suggest that intravenous immunoglobulin (IVIg) helps. This review was first published in 2002 and has since been updated, most recently in 2013.\nObjectives\nTo review systematically the evidence from randomised controlled trials (RCTs) concerning the efficacy and safety of IVIg in CIDP.\nSearch methods\nOn 4 December 2012, we searched the Cochrane Neuromuscular Disease Group Specialized Register, CENTRAL (2012, issue 11 in the Cochrane Library), MEDLINE and EMBASE to December 2012 and ISI from January 1985 to May 2008. We searched for ongoing trials through two metaRegistries (World Health Organization International Clinical Trials Registry Platform Search Portal and Current Controlled Trials).\nSelection criteria\nWe selected RCTs testing any dose of IVIg versus placebo, plasma exchange or corticosteroids in definite or probable CIDP.\nData collection and analysis\nTwo authors reviewed literature searches to identify potentially relevant RCTs, scored their quality and extracted data independently. We contacted authors for additional information.\nMain results\nWe considered eight RCTs, including 332 participants, to be eligible for inclusion in the review. These trials were homogeneous and the overall risk of bias low. Five studies, in a total of 235 participants compared IVIg against placebo. One trial with 20 participants compared IVIg with plasma exchange, one trial compared IVIg with prednisolone in 32 participants, and one trial, newly included at this update, compared IVIg with intravenous methylprednisolone in 46 participants.\nA significantly higher proportion of participants improved in disability within one month after IVIg treatment as compared with placebo (risk ratio (RR) 2.40, 95% confidence interval (CI) 1.72 to 3.36; number needed to treat for an additional beneficial outcome 3.03 (95% CI 2.33 to 4.55), high quality evidence). Whether all these improvements are equally clinically relevant cannot be deduced from this analysis because each trial used different disability scales and definitions of significant improvement. In three trials, including 84 participants, the disability score could be transformed to the modified Rankin score, on which improvement of one point after IVIg treatment compared to placebo was barely significant (RR 2.40, 95% CI 0.98 to 5.83) (moderate quality evidence). Only one placebo-controlled study included in this review had a long-term follow-up. The results of this study suggest that IVIg improves disability more than placebo over 24 and 48 weeks.\nThe mean disability score revealed no significant difference between IVIg and plasma exchange at six weeks (moderate quality evidence). There was no significant difference in improvement in disability on prednisolone compared with IVIg after two or six weeks, or on methylprednisolone compared to IVIg after two weeks or six months (moderate quality evidence).\nThere were no statistically significant differences in frequencies of side effects between the three types of treatment for which data were available (IVg versus placebo or steroids). 
(moderate or high quality evidence) Mild and transient adverse events were found in 49% of participants treated with IVIg, while serious adverse events were found in six per cent.\nAuthors' conclusions\nThe evidence from RCTs shows that IVIg improves disability for at least two to six weeks compared with placebo, with an NNTB of three. During this period it has similar efficacy to plasma exchange, oral prednisolone and intravenous methylprednisolone. In one large trial, the benefit of IVIg persisted for 24 and possibly 48 weeks. Further research is needed to compare the long-term benefits as well as side effects of IVIg with other treatments.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found five randomised trials which together prove that IVIg improves disability more than placebo (dummy treatment). The results showed that three people would need to be treated for one person to improve. In the three trials that compared IVIg with other treatments, results with IVIg were similar to plasma exchange, oral prednisolone or intravenous methylprednisolone. The evidence was of moderate or high quality. In this review, there were mild and short-term side effects in around half of those who received IVIg. Six per cent of those treated with IVIg had serious side effects, which is a similar rate as with plasma exchange or corticosteroids.\nEach trial defined improvement in its own way and the trials used different measurement scales, so it is difficult to relate them to changes in the clinical condition of people with CIDP. Only one of the studies that compared IVIg with placebo had a long-term follow-up. It suggested that IVIg improves disability more than placebo over 24 weeks and possibly 48 weeks. Further research is needed to compare the long-term benefits as well as side effects of IVIg with other treatments.\nThe most recent search for studies was in December 2012 and we updated the review with the results of one additional trial."
}
] |
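The CIDP record above reports a risk ratio (RR) of 2.40 for improvement on IVIg versus placebo and a number needed to treat (NNTB) of about 3. As a rough illustration of how those two statistics relate — using made-up group counts chosen only to give a similar ratio, not the trial data, and ignoring the meta-analytic weighting the review actually used — here is a sketch:

```python
def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio: risk of the outcome in the treated group divided by risk in the control group."""
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

def nntb(events_tx, n_tx, events_ctrl, n_ctrl):
    """Number needed to treat for one additional beneficial outcome = 1 / absolute risk difference."""
    absolute_risk_reduction = events_tx / n_tx - events_ctrl / n_ctrl
    return 1 / absolute_risk_reduction

# Hypothetical counts: 60 of 100 treated vs 25 of 100 control participants improve.
print(risk_ratio(60, 100, 25, 100))  # 2.4
print(nntb(60, 100, 25, 100))        # ~2.9, i.e. roughly 3 people treated per extra responder
```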
query-laysum
|
24227
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with dementia who are being cared for in long-term care settings are often not engaged in meaningful activities. We wanted to know whether offering them activities which are tailored to their individual interests and preferences could improve their quality of life and reduce agitation. This review updates our earlier review published in 2018.\nObjectives\n∙ To assess the effects of personally tailored activities on psychosocial outcomes for people with dementia living in long-term care facilities.\n∙ To describe the components of the interventions.\n∙ To describe conditions which enhance the effectiveness of personally tailored activities in this setting.\nSearch methods\nWe searched the Cochrane Dementia and Cognitive Improvement Group’s Specialized Register, on 15 June 2022. We also performed additional searches in MEDLINE, Embase, PsycINFO, CINAHL, Web of Science, ClinicalTrials.gov, and the World Health Organization (WHO) ICTRP, to ensure that the search for the review was as up-to-date and as comprehensive as possible.\nSelection criteria\nWe included randomised controlled trials (RCTs) and controlled clinical trials offering personally tailored activities. All interventions included an assessment of the participants' present or past preferences for, or interest in, particular activities as a basis for an individual activity plan. Control groups received either usual care or an active control intervention.\nData collection and analysis\nTwo authors independently selected studies for inclusion, extracted data and assessed the risk of bias of included studies. Our primary efficacy outcomes were agitation and participant quality of life. Where possible, we pooled data across studies using a random effects model.\nMain results\nWe identified three new studies, and therefore included 11 studies with 1071 participants in this review update. The mean age of participants was 78 to 88 years and most had moderate or severe dementia. Ten studies were RCTs (three studies randomised clusters to the study groups, six studies randomised individual participants, and one study randomised matched pairs of participants) and one study was a non-randomised clinical trial. Five studies included a control group receiving usual care, five studies an active control group (activities which were not personally tailored) and one study included both types of control group. The duration of follow-up ranged from 10 days to nine months.\nIn nine studies personally tailored activities were delivered directly to the participants. In one study nursing staff, and in another study family members, were trained to deliver the activities. The selection of activities was based on different theoretical models, but the activities delivered did not vary substantially.\nWe judged the risk of selection bias to be high in five studies, the risk of performance bias to be high in five studies and the risk of detection bias to be high in four studies.\nWe found low-certainty evidence that personally tailored activities may slightly reduce agitation (standardised mean difference −0.26, 95% CI −0.53 to 0.01; I² = 50%; 7 studies, 485 participants). We also found low-certainty evidence from one study that was not included in the meta-analysis, indicating that personally tailored activities may make little or no difference to general restlessness, aggression, uncooperative behaviour, very negative and negative verbal behaviour (180 participants). 
Two studies investigated quality of life by proxy-rating. We found low-certainty evidence that personally tailored activities may result in little to no difference in quality of life in comparison with usual care or an active control group (MD -0.83, 95% CI -3.97 to 2.30; I² = 51%; 2 studies, 177 participants). Self-rated quality of life was only available for a small number of participants from one study, and there was little or no difference between personally tailored activities and usual care on this outcome (MD 0.26, 95% CI −3.04 to 3.56; 42 participants; low-certainty evidence). Two studies assessed adverse effects, but no adverse effects were observed.\nWe are very uncertain about the effects of personally tailored activities on mood and positive affect. For negative affect we found moderate-certainty evidence that there is probably little to no effect of personally tailored activities compared to usual care or activities which are not personalised (standardised mean difference -0.02, 95% CI −0.19 to 0.14; 6 studies, 632 participants). We were not able to undertake meta-analyses for engagement and sleep-related outcomes, and we are very uncertain whether personally tailored activities have any effect on these outcomes.\nTwo studies that investigated the duration of the effects of personally tailored activities indicated that the intervention effects they found persisted only during the period of delivery of the activities.\nAuthors' conclusions\nOffering personally tailored activities to people with dementia in long-term care may slightly reduce agitation. Personally tailored activities may result in little to no difference in quality of life rated by proxies, but we acknowledge concerns about the validity of proxy ratings of quality of life in severe dementia. Personally tailored activities probably have little or no effect on negative affect, and we are uncertain whether they have any effect on positive affect or mood. There was no evidence that interventions were more likely to be effective if based on one theoretical model rather than another. We included three new studies in this updated review, but two studies were pilot trials and included only a small number of participants. Certainty of evidence was predominately very low or low due to several methodological limitations of and inconsistencies between the included studies. Evidence is still limited, and we remain unable to describe optimal activity programmes. Further research should focus on methods for selecting appropriate and meaningful activities for people in different stages of dementia.\n\nGiven the provided abstract, please respond to the following query: \"What are the benefits of activities that are tailored to the interests and preferences of people with dementia living in care homes\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
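The dementia-activities record above summarises continuous outcomes such as agitation as standardised mean differences (SMDs). As a hedged illustration of what an SMD is — computed here as Cohen's d with a pooled standard deviation, on invented numbers rather than the trial data, and without the small-sample (Hedges' g) correction review authors often apply — a sketch:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardised mean difference (Cohen's d): group difference expressed in SD units."""
    return (mean_tx - mean_ctrl) / pooled_sd(sd_tx, n_tx, sd_ctrl, n_ctrl)

# Invented example: agitation scores where lower is better.
print(round(smd(38.0, 11.0, 50, 41.0, 12.0, 50), 2))  # about -0.26, a small effect
```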
query-laysum
|
24228
|
[
{
"role": "user",
"content": "Abstract: Background\nMalignant melanoma is a form of skin cancer associated with significant mortality once it has spread beyond the skin. Melanoma in situ (MIS) is the earliest histologically recognisable stage of malignant melanoma and represents a precursor of invasive melanoma. Lentigo maligna (LM) represents a subtype of pre-invasive intraepidermal melanoma associated specifically with chronic exposure to ultraviolet (UV) radiation. Over the past two decades, the incidence of MIS has increased significantly, even more than the invasive counterpart. There are several treatment options for MIS, but no consensus exists on the best therapeutic management of this condition.\nObjectives\nTo assess the effects of all available interventions, surgical and non-surgical, for the treatment of melanoma in situ, including LM.\nSearch methods\nWe searched the following databases up to November 2014: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library (2014, Issue 10), MEDLINE (from 1946), Embase (from 1974), LILACS (from 1982), African Index Medicus (from inception), IndeMED of India (from inception), and Index Medicus for the South-East Asia Region (IMSEAR) (from inception). We scanned the references of included and excluded studies for further references to relevant trials and searched five trials registries. We checked the abstracts of major dermatology and oncology conference proceedings, and we shared our lists of included and excluded studies with industry contacts and other experts in the field of melanoma to try to identify further relevant trials.\nSelection criteria\nWe included randomised controlled trials (RCT) on the management of MIS, including LM, that compared any intervention to placebo or active treatment. We included individuals, irrespective of age and sex, diagnosed with MIS, including LM, based on histological examination.\nData collection and analysis\nTwo authors independently evaluated possible studies for inclusion; extracted data from the included study using a standard data extraction form modified for our review; assessed risk of bias; and analysed data on efficacy, safety, and tolerability. They resolved any disagreements by discussion with a third author. We collected adverse effects information from included studies.\nMain results\nOur search identified only 1 study eligible for inclusion (and 1 ongoing study in active recruitment stage), which was a single centre, open label, parallel group, 2-arm RCT with 90 participants, who had 91 histologically proven LM lesions.\nForty-four participants, with 44 LM lesions, were treated with imiquimod 5% cream 5 days per week plus tazarotene 0.1% gel 2 days/week for 3 months, and 46 participants, with 47 LM lesions, were treated with imiquimod 5% cream 5 days per week for 3 months. Two months after cessation of topical treatment, the initial tumour footprint was excised using 2 mm margins via a staged excision. This study was open label, and analysis was not intention-to-treat, leading to a high risk of incomplete outcome data.\nOur primary outcome 'Histological or clinical complete response' was measured at 5 months in 29/44 participants (66%) treated with imiquimod plus tazarotene (combination therapy) and 27/46 participants (59%) treated with imiquimod (monotherapy). 
The difference was not statistically significant (risk ratio (RR) 1.12, 95% confidence interval (CI) 0.81 to 1.55, P value = 0.48).\nWith regard to our secondary outcomes on recurrence and inflammation, after a mean follow up of 42 months, no local recurrences were observed among complete responders. Difference in overall inflammation score between the 2 groups was significant (mean difference (MD) 0.6, 95% CI 0.2 to 1, P value = 0.004), with the mean overall inflammation score being significantly higher in the combination group.\nThe study authors did not clearly report on side-effects. Because of adverse effects, there was a dropout rate of 6/44 participants (13.7%) in the combination group compared with 1/46 (2.2%) in the imiquimod monotherapy group (due to excessive inflammation) before the cessation of topical treatment (first 3 months), but this was not statistically significant (RR 6.27, 95% CI 0.79 to 50.02, P value = 0.08).\nAuthors' conclusions\nThere is a lack of high-quality evidence for the treatment of MIS and LM.\nFor the treatment of MIS, we found no RCTs of surgical interventions aiming to optimise margin control (square method, perimeter technique, 'slow Mohs', staged radial sections, staged \"mapped\" excisions, or Mohs micrographic surgery), which are the most widely used interventions recommended as first-line therapy. The use of non-surgical interventions in selected cases (patients with contraindications to surgical interventions) may be effective and may be considered preferable for experienced providers and under close and adequate follow up.\nFor the treatment of LM, we found no RCTs of surgical interventions, which remain the most widely used and recommended available treatment. The use of non-surgical interventions, such as imiquimod, as monotherapy may be effective and may be considered in selected cases where surgical procedures are contraindicated and used preferentially by experienced providers under close and adequate follow up. The use of topical therapies, such as 5-fluorouracil and imiquimod, as neoadjuvant therapies warrants further investigation. There is insufficient evidence to support or refute the addition of tazarotene to imiquimod as adjuvant therapy; the current evidence suggests that it can increase topical inflammatory response and withdrawal of participants because of treatment-related side-effects.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of evidence is poor. With regard to the treatment of MIS, surgical interventions that aim to excise the tumour so that none of the tumour cells are in the margin are the most recommended interventions in non-selected cases. The evidence does not support the use of non-surgical interventions in selected cases (i.e., in elderly people with contraindications to surgical interventions). However, clinical centres may consider it where there is experience of this treatment and where close and adequate follow up can be undertaken.\nWith regard to the treatment of LM, surgical interventions remain the most recommended available treatment. The evidence does not support the use of non-surgical interventions, such as imiquimod, as a single therapy in non-selected cases. It may be considered only in selected cases and in clinical centres with experience. The evidence so far does not support the use of imiquimod as a neoadjuvant (i.e., before surgery) therapy but warrants further investigation, in order to evaluate if use after surgery can minimise recurrence and if use before surgery of large lesions or difficult sites can help to achieve smaller surgical excisions. The evidence does not support addition of tazarotene to imiquimod as neoadjuvant therapy."
}
] |
query-laysum
|
24229
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) is a common disorder in which the two main clinical features are pelvic pain and lower urinary tract symptoms. There are currently many approaches for its management, using both pharmacological and non-pharmacological interventions. The National Institute of Health - Chronic Prostatitis Symptom Index (NIH-CPSI) score is a validated measure commonly used to measure CP/CPPS symptoms. We considered a 25% decrease of NIH-CPSI baseline score or a six-point reduction as MCID.\nObjectives\nTo assess the effects of pharmacological therapies for chronic prostatitis/chronic pelvic pain syndrome.\nSearch methods\nWe performed a comprehensive search using CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, trial registries, grey literature and conference proceedings, with no restrictions on the language of publication or publication status. The date of the latest search of all databases was July 2019.\nSelection criteria\nWe included randomised controlled trials. Inclusion criteria were men with a diagnosis of CP/CPPS. We included all available pharmacological interventions compared to placebo or in head-to-head comparisons.\nData collection and analysis\nTwo review authors independently assessed study eligibility, extracted data, and assessed the risks of bias of included studies. We assessed the quality of the evidence (QoE) using the GRADE approach.\nMain results\nWe included 99 unique studies in 9119 men with CP/CPPS, with assessments of 16 types of pharmacological interventions. Unless stated otherwise, our comparisons were based on short-term follow-up (less than 12 months). Most studies did not specify their funding sources; 21 studies reported funding from pharmaceutical companies.\n1. Alpha blockers: (24 studies, 2061 participants). We are uncertain about the effects of these drugs on prostatitis symptoms when compared to placebo at short-term follow-up (mean difference (MD) in total NIH-CPSI score −5.01, 95% confidence interval (CI) −7.41 to −2.61; 18 studies, 1524 participants, very low QoE) and at long-term follow-up (MD −5.60, 95% CI −10.89 to −0.32; 4 studies, 235 participants, very low QoE). Alpha blockers may be associated with an increased incidence of adverse events, such as dizziness and postural hypotension (risk ratio (RR) 1.60, 95% CI 1.09 to 2.34; 19 studies, 1588 participants; low QoE). Alpha blockers probably result in little to no difference in sexual dysfunction, quality of life and anxiety and depression (moderate to low QoE).\n2. 5-alpha reductase inhibitors (5-ARI): (2 studies, 177 participants). Finasteride probably reduces prostatitis symptoms compared to placebo (NIH-CPSI score MD −4.60, 95% CI −5.43 to −3.77; 1 study, 64 participants; moderate QoE) and may not be associated with an increased incidence of adverse events (low QoE). There was no information on sexual dysfunction, quality of life or anxiety and depression.\n3. Antibiotics: (6 studies, 693 participants). Antibiotics (quinolones) may reduce prostatitis symptoms compared to placebo (NIH-CPSI score MD −2.43, 95% CI −4.72 to −0.15; 5 studies, 372 participants; low QoE) and are probably not associated with an increased incidence in adverse events (moderate QoE). Antibiotics probably result in little to no difference in sexual dysfunction and quality of life (moderate QoE). There was no information on anxiety or depression.\n4. Anti-inflammatories: (7 studies, 585 participants). 
Anti-inflammatories may reduce prostatitis symptoms compared to placebo (NIH-CPSI scores MD −2.50, 95% CI −3.74 to −1.26; 7 studies, 585 participants; low QoE) and may not be associated with an increased incidence in adverse events (low QoE). There was no information on sexual dysfunction, quality of life or anxiety and depression.\n5. Phytotherapy: (7 studies, 551 participants). Phytotherapy may reduce prostatitis symptoms compared to placebo (NIH-CPSI scores MD −5.02, 95% CI −6.81 to −3.23; 5 studies, 320 participants; low QoE) and may not be associated with an increased incidence in adverse events (low QoE). Phytotherapy may not improve sexual dysfunction (low QoE). There was no information on quality of life or anxiety and depression.\n6. Botulinum toxin A (BTA): Intraprostatic BTA injection (1 study, 60 participants) may cause a large reduction in prostatitis symptom (NIH-CPSI scores MD −25.80, 95% CI −30.15 to −21.45), whereas pelvic floor muscle BTA injection (1 study, 29 participants) may not reduce prostatitis symptoms (low QoE). Both comparisons used a placebo injection. These interventions may not be associated with an increased incidence in adverse events (low QoE). There was no information on sexual dysfunction, quality of life or anxiety and depression.\n7. Allopurinol: (2 studies, 110 participants). Allopurinol may result in little to no difference in prostatitis symptoms and adverse events when compared to placebo (low QoE). There was no information on sexual dysfunction, quality of life or anxiety and depression.\n8. Traditional Chinese medicine (TCM): (7 studies, 835 participants); TCM may reduce prostatitis symptoms (NIH-CPSI score, MD -3.13, 95% CI -4.99 to -1.28; low QoE) and may not be associated with an increased incidence in adverse events (low QoE). TCM probably does not improve sexual dysfunction (moderate QoE) and may not improve symptoms of anxiety and depression (low QoE). There was no information on quality of life.\nThe most frequent reasons for downgrading the QoE were study limitations, inconsistency and imprecision. We found few trials with active comparators.\nAuthors' conclusions\nWe found low- to very low-quality evidence that alpha blockers, antibiotics, 5-ARI, anti-inflammatories, phytotherapy, intraprostatic BTA injection, and traditional Chinese medicine may cause a reduction in prostatitis symptoms without an increased incidence of adverse events in the short term, except for alpha blockers which may be associated with an increase in mild adverse events. We found few trials with active comparators and little evidence of the effects of these drugs on sexual dysfunction, quality of life or anxiety and depression. Future clinical trials should include a full report of their methods, including adequate masking, consistent assessment of all patient-important outcomes, including potential treatment-related adverse events, and appropriate sample sizes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was low to very low in most cases, meaning that there is much uncertainty surrounding the results. The most frequent problems detected in the included studies were an inadequate design, a small sample size and a short follow-up time (usually 12 weeks)."
}
] |
query-laysum
|
24230
|
[
{
"role": "user",
"content": "Abstract: Background\nHigh blood pressure is an important risk factor for cardiovascular disease, contributing to about 50% of cardiovascular events worldwide and 37% of cardiovascular-related deaths in Western populations. Epidemiological studies suggest that cocoa-rich products reduce the risk of cardiovascular disease. Flavanols found in cocoa have been shown to increase the formation of endothelial nitric oxide which promotes vasodilation and therefore blood pressure reduction. Here we update previous meta-analyses on the effect of cocoa on blood pressure.\nObjectives\nTo assess the effects on blood pressure of chocolate or cocoa products versus low-flavanol products or placebo in adults with or without hypertension when consumed for two weeks or longer.\nSearch methods\nThis is an updated version of the review initially published in 2012. In this updated version, we searched the following electronic databases from inception to November 2016: Cochrane Hypertension Group Specialised Register, CENTRAL, MEDLINE and Embase. We also searched international trial registries, and the reference lists of review articles and included trials.\nSelection criteria\nRandomised controlled trials (RCTs) investigating the effects of chocolate or cocoa products on systolic and diastolic blood pressure in adults for a minimum of two weeks duration.\nData collection and analysis\nTwo review authors independently extracted data and assessed the risks of bias in each trial. We conducted random-effects meta-analyses on the included studies using Review Manager 5. We explored heterogeneity with subgroup analyses by baseline blood pressure, flavanol content of control group, blinding, age and duration. Sensitivity analyses explored the influence of unusual study design.\nMain results\nThirty-five trials (including 40 treatment comparisons) met the inclusion criteria. Of these, we added 17 trials (20 treatment comparisons) to the 18 trials (20 treatment comparisons) in the previous version of this updated review.\nTrials provided participants with 30 to 1218 mg of flavanols (mean = 670 mg) in 1.4 to 105 grams of cocoa products per day in the active intervention group. The control group received either a flavanol-free product (n = 26 treatment comparisons) or a low-flavanol-containing cocoa powder (range 6.4 to 88 mg flavanols (mean = 55 mg, 13 treatment comparisons; 259 mg, 1 trial).\nMeta-analyses of the 40 treatment comparisons involving 1804 mainly healthy participants revealed a small but statistically significant blood pressure-reducing effect of flavanol-rich cocoa products compared with control in trials of two to 18 weeks duration (mean nine weeks):\nMean difference systolic blood pressure (SBP) (95% confidence interval (CI): -1.76 (-3.09 to -0.43) mmHg, P = 0.009, n = 40 treatment comparisons, 1804 participants;\nMean difference diastolic blood pressure (DBP) (95% CI): -1.76 (-2.57 to -0.94) mmHg, P < 0.001, n = 39 treatment comparisons, 1772 participants.\nBaseline blood pressure may play a role in the effect of cocoa on blood pressure. 
While systolic blood pressure was reduced significantly by 4 mmHg in hypertensive people (n = 9 treatment comparisons, 401 participants), and tended to be lowered in prehypertensive people (n= 8 treatment comparisons, 340 participants), there was no significant difference in normotensive people (n = 23 treatment comparisons, 1063 participants); however, the test for subgroup differences was of borderline significance (P = 0.08; I2 = 60%), requiring further research to confirm the findings.\nSubgroup meta-analysis by blinding suggested a trend towards greater blood pressure reduction in unblinded trials compared to double-blinded trials, albeit statistically not significant. Further research is needed to confirm whether participant expectation may influence blood pressure results. Subgroup analysis by type of control (flavanol-free versus low-flavanol control) did not reveal a significant difference.\nWhether the age of participants plays a role in the effect of cocoa on blood pressure, with younger participants responding with greater blood pressure reduction, needs to be further investigated.\nSensitivity analysis excluding trials with authors employed by trials sponsoring industry (33 trials, 1482 participants) revealed a small reduction in effect size, indicating some reporting bias.\nDue to the remaining heterogeneity, which we could not explain in terms of blinding, flavanol content of the control groups, age of participants, or study duration, we downgraded the quality of the evidence from high to moderate.\nResults of subgroup analyses should be interpreted with caution and need to be confirmed or refuted in trials using direct randomised comparisons.\nGenerally, cocoa products were highly tolerable, with adverse effects including gastrointestinal complaints and nausea being reported by 1% of participants in the active cocoa intervention group and 0.4% of participants in the control groups (moderate-quality evidence).\nAuthors' conclusions\nThis review provides moderate-quality evidence that flavanol-rich chocolate and cocoa products cause a small (2 mmHg) blood pressure-lowering effect in mainly healthy adults in the short term.\nThese findings are limited by the heterogeneity between trials, which could not be explained by prespecified subgroup analyses, including blinding, flavanol content of the control groups, age of participants, or study duration. However, baseline blood pressure may play a role in the effect of cocoa on blood pressure; subgroup analysis of trials with (pre)hypertensive participants revealed a greater blood pressure-reducing effect of cocoa compared to normotensive participants with borderline significance.\nLong-term trials investigating the effect of cocoa on clinical outcomes are also needed to assess whether cocoa has an effect on cardiovascular events and to assess potential adverse effects associated with chronic ingestion of cocoa products.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Meta-analysis of 40 treatment comparisons revealed a small but statistically significant lowering of blood pressure (systolic and diastolic) of 1.8 mmHg. This small reduction in blood pressure might complement other treatment options and might contribute to reducing the risk of cardiovascular disease.\nWe investigated whether participants' blood pressure at the start of the study, their age, an awareness of group allocation (active or control), the flavanol content used in the control group, or how long the study lasted may explain variations between trials. While blood pressure status (high blood pressure or normal blood pressure) is a likely factor in the effect size of cocoa on blood pressure, the impact of other factors needs to be confirmed or rejected in further trials.\nSide effects including digestive complaints and dislike of the trial product were reported by only 1% of people in the active cocoa intervention group and 0.4% of people in the control groups.\nLonger-term trials are needed to establish whether regularly eating flavanol-rich cocoa products has a beneficial effect on blood pressure and cardiovascular health over time, and whether there are any side effects of long-term use of cocoa products on a daily basis."
}
] |
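The cocoa record above pools mean blood-pressure differences across trials. As a sketch of the simplest version of that idea — a fixed-effect inverse-variance pooled mean difference on invented per-trial numbers; the review itself used a random-effects model, which additionally accounts for between-trial variance, so this is illustrative only:

```python
def pooled_mean_difference(mds, standard_errors):
    """Fixed-effect inverse-variance pooling of per-trial mean differences."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Invented trial results: mean difference in systolic blood pressure (mmHg) and its standard error.
mds = [-2.5, -1.0, -3.0]
ses = [1.2, 0.9, 1.5]
estimate, ci = pooled_mean_difference(mds, ses)
print(round(estimate, 2), [round(x, 2) for x in ci])  # pooled MD with a 95% confidence interval
```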
query-laysum
|
24231
|
[
{
"role": "user",
"content": "Abstract: Background\nNeuroblastoma is a rare malignant disease that primarily affects children. The tumours mainly develop in the adrenal medullary tissue, and an abdominal mass is the most common presentation. High-risk disease is characterised by metastasis and other primary tumour characteristics resulting in increased risk for an adverse outcome. The GD2 carbohydrate antigen is expressed on the cell surface of neuroblastoma tumour cells and is thus a promising target for anti-GD2 antibody-containing immunotherapy.\nObjectives\nTo assess the efficacy of anti-GD2 antibody-containing postconsolidation immunotherapy after high-dose chemotherapy (HDCT) and autologous haematopoietic stem cell transplantation (HSCT) compared to standard therapy after HDCT and autologous HSCT in people with high-risk neuroblastoma. Our primary outcomes were overall survival and treatment-related mortality. Our secondary outcomes were progression-free survival, event-free survival, early toxicity, late non-haematological toxicity, and health-related quality of life.\nSearch methods\nWe searched the electronic databases CENTRAL (2018, Issue 9), MEDLINE (PubMed), and Embase (Ovid) on 20 September 2018. We searched trial registries and conference proceedings on 28 October 2018. Further searches included reference lists of recent reviews and relevant articles as well as contacting experts in the field. There were no limits on publication year or language.\nSelection criteria\nRandomised controlled trials evaluating anti-GD2 antibody-containing immunotherapy after HDCT and autologous HSCT in people with high-risk neuroblastoma.\nData collection and analysis\nTwo review authors independently performed study selection, abstracted data on study and participant characteristics, and assessed risk of bias and GRADE. Any differences were resolved by discussion, with third-party arbitration unnecessary. We performed analyses according to the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions. We used the five GRADE considerations, that is study limitations, consistency of effect, imprecision, indirectness, and publication bias, to judge the quality of the evidence.\nMain results\nWe identified one randomised controlled trial that included 226 people with high-risk neuroblastoma who were pre-treated with autologous HSCT. The study randomised 113 participants to receive immunotherapy including isotretinoin, granulocyte-macrophage colony-stimulating factor (GM-CSF), interleukin-2, and ch14.18, a type of anti-GD2 antibody also known as dinutuximab. The study randomised another 113 participants to receive standard therapy including isotretinoin.\nThe results on overall survival favoured the dinutuximab-containing immunotherapy group (hazard ratio (HR) 0.50, 95% confidence interval (CI) 0.31 to 0.80; P = 0.004). The results on event-free survival also favoured the dinutuximab-containing immunotherapy group (HR 0.61, 95% CI 0.41 to 0.92; P = 0.020). Randomised data on adverse events were not reported separately. The study did not report progression-free survival, late non-haematological toxicity, and health-related quality of life as separate endpoints. We graded the quality of the evidence as moderate.\nAuthors' conclusions\nThe evidence base favours dinutuximab-containing immunotherapy compared to standard therapy concerning overall survival and event-free survival in people with high-risk neuroblastoma pre-treated with autologous HSCT. 
Randomised data on adverse events are lacking; therefore, more research is needed before definitive conclusions can be made regarding this outcome.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies of people with high-risk neuroblastoma who received high-dose chemotherapy (which kills stem cells) followed by autologous (from the same person) haematopoietic stem cell (blood cell precursors) transplantation (to rescue and replace the killed stem cells). We reviewed the evidence concerning the effect of anti-GD2 antibody-containing immunotherapy compared to standard therapy in such people on overall survival, treatment-related death, progression-free survival, event-free survival, early toxicity, late non-haematological toxicity, and health-related quality of life."
}
] |
query-laysum
|
24232
|
[
{
"role": "user",
"content": "Abstract: Background\nExercise training is commonly recommended for individuals with fibromyalgia. This review is one of a series of reviews about exercise training for people with fibromyalgia that will replace the \"Exercise for treating fibromyalgia syndrome\" review first published in 2002.\nObjectives\n• To evaluate the benefits and harms of aerobic exercise training for adults with fibromyalgia\n• To assess the following specific comparisons\n० Aerobic versus control conditions (eg, treatment as usual, wait list control, physical activity as usual)\n० Aerobic versus aerobic interventions (eg, running vs brisk walking)\n० Aerobic versus non-exercise interventions (eg, medications, education)\n\nWe did not assess specific comparisons involving aerobic exercise versus other exercise interventions (eg, resistance exercise, aquatic exercise, flexibility exercise, mixed exercise). Other systematic reviews have examined or will examine these comparisons (Bidonde 2014; Busch 2013).\nSearch methods\nWe searched the Cochrane Library, MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Physiotherapy Evidence Database (PEDro), Thesis and Dissertation Abstracts, the Allied and Complementary Medicine Database (AMED), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), and the ClinicalTrials.gov registry up to June 2016, unrestricted by language, and we reviewed the reference lists of retrieved trials to identify potentially relevant trials.\nSelection criteria\nWe included randomized controlled trials (RCTs) in adults with a diagnosis of fibromyalgia that compared aerobic training interventions (dynamic physical activity that increases breathing and heart rate to submaximal levels for a prolonged period) versus no exercise or another intervention. Major outcomes were health-related quality of life (HRQL), pain intensity, stiffness, fatigue, physical function, withdrawals, and adverse events.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, extracted data, performed a risk of bias assessment, and assessed the quality of the body of evidence for major outcomes using the GRADE approach. We used a 15% threshold for calculation of clinically relevant differences between groups.\nMain results\nWe included 13 RCTs (839 people). Studies were at risk of selection, performance, and detection bias (owing to lack of blinding for self-reported outcomes) and had low risk of attrition and reporting bias. We prioritized the findings when aerobic exercise was compared with no exercise control and present them fully here.\nEight trials (with 456 participants) provided low-quality evidence for pain intensity, fatigue, stiffness, and physical function; and moderate-quality evidence for withdrawals and HRQL at completion of the intervention (6 to 24 weeks). With the exception of withdrawals and adverse events, major outcome measures were self-reported and were expressed on a 0 to 100 scale (lower values are best, negative mean differences (MDs)/standardized mean differences (SMDs) indicate improvement). 
Effects for aerobic exercise versus control were as follows: HRQL: mean 56.08; five studies; N = 372; MD -7.89, 95% CI -13.23 to -2.55; absolute improvement of 8% (3% to 13%) and relative improvement of 15% (5% to 24%); pain intensity: mean 65.31; six studies; N = 351; MD -11.06, 95% CI -18.34 to -3.77; absolute improvement of 11% (95% CI 4% to 18%) and relative improvement of 18% (7% to 30%); stiffness: mean 69; one study; N = 143; MD -7.96, 95% CI -14.95 to -0.97; absolute difference in improvement of 8% (1% to 15%) and relative change in improvement of 11.4% (21.4% to 1.4%); physical function: mean 38.32; three studies; N = 246; MD -10.16, 95% CI -15.39 to -4.94; absolute change in improvement of 10% (15% to 5%) and relative change in improvement of 21.9% (33% to 11%); and fatigue: mean 68; three studies; N = 286; MD -6.48, 95% CI -14.33 to 1.38; absolute change in improvement of 6% (12% improvement to 0.3% worse) and relative change in improvement of 8% (16% improvement to 0.4% worse). Pooled analysis resulted in a risk ratio (RR) of moderate quality for withdrawals (17 per 100 and 20 per 100 in control and intervention groups, respectively; eight studies; N = 456; RR 1.25, 95%CI 0.89 to 1.77; absolute change of 5% more withdrawals with exercise (3% fewer to 12% more).\nThree trials provided low-quality evidence on long-term effects (24 to 208 weeks post intervention) and reported that benefits for pain and function persisted but did not for HRQL or fatigue. Withdrawals were similar, and investigators did not assess stiffness and adverse events.\nWe are uncertain about the effects of one aerobic intervention versus another, as the evidence was of low to very low quality and was derived from single trials only, precluding meta-analyses. Similarly, we are uncertain of the effects of aerobic exercise over active controls (ie, education, three studies; stress management training, one study; medication, one study) owing to evidence of low to very low quality provided by single trials. Most studies did not measure adverse events; thus we are uncertain about the risk of adverse events associated with aerobic exercise.\nAuthors' conclusions\nWhen compared with control, moderate-quality evidence indicates that aerobic exercise probably improves HRQL and all-cause withdrawal, and low-quality evidence suggests that aerobic exercise may slightly decrease pain intensity, may slightly improve physical function, and may lead to little difference in fatigue and stiffness. Three of the reported outcomes reached clinical significance (HRQL, physical function, and pain). Long-term effects of aerobic exercise may include little or no difference in pain, physical function, and all-cause withdrawal, and we are uncertain about long-term effects on remaining outcomes. We downgraded the evidence owing to the small number of included trials and participants across trials, and because of issues related to unclear and high risks of bias (performance, selection, and detection biases). Aerobic exercise appears to be well tolerated (similar withdrawal rates across groups), although evidence on adverse events is scarce, so we are uncertain about its safety.\n\nGiven the provided abstract, please respond to the following query: \"Key results at the end of treatment\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The findings of aerobic exercise compared with no exercise control were prioritised and are presented fully here. Moderate-quality evidence revealed that aerobic exercise improved HRQL, and low-quality evidence showed improvement in physical function and decreased pain, fatigue, and stiffness compared with control. Similar numbers of people dropped out of the aerobic interventions group and the comparison group. Minor adverse events were reported, but reporting was inconsistent in these studies.\nFour studies explored long-term effects at 24 to 208 weeks after the intervention ended. They reported benefits for pain and physical function among exercisers and noted no other effects."
}
] |
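The fibromyalgia record above converts mean differences on 0-to-100 scales into "absolute" and "relative" improvements. A hedged sketch of one common way to make that conversion — absolute change as the mean difference expressed as a percentage of the scale range, and relative change as the mean difference divided by a baseline value (here assumed to be the control-group mean, so the figures are approximate and may not match the review's exact denominators):

```python
def absolute_and_relative_change(mean_difference, baseline_mean, scale_range=100):
    """Express a mean difference as a % of the scale range and as a % of a baseline mean."""
    absolute_pct = 100 * abs(mean_difference) / scale_range
    relative_pct = 100 * abs(mean_difference) / baseline_mean
    return absolute_pct, relative_pct

# Figures quoted in the record: HRQL mean 56.08, MD -7.89 on a 0-100 scale.
print(absolute_and_relative_change(-7.89, 56.08))  # roughly (7.9, 14.1): ~8% absolute, ~14-15% relative
```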
query-laysum
|
24233
|
[
{
"role": "user",
"content": "Abstract: Background\nWe know that the brain damage resulting from traumatic and other insults is not due solely to the direct consequences of the primary injury. A significant and potentially preventable contribution to the overall morbidity arises from secondary hypoxic-ischaemic damage. Brain swelling accompanied by raised intracranial pressure (ICP) prevents adequate cerebral perfusion with well-oxygenated blood.\nDetection of raised ICP could be useful in alerting clinicians to the need to improve cerebral perfusion, with consequent reductions in brain injury.\nObjectives\nTo determine whether routine ICP monitoring in severe coma of any cause reduces the risk of all-cause mortality or severe disability at final follow-up.\nSearch methods\nWe searched the Cochrane Injuries Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (OvidSP), EMBASE (OvidSP), CINAHL Plus, ISI Web of Science (SCI-EXPANDED & CPCI-S), clinical trials registries and reference lists. We ran the most recent search on 22 May 2015.\nSelection criteria\nAll randomised controlled studies of real-time ICP monitoring by invasive or semi-invasive means in acute coma (traumatic or non-traumatic aetiology) versus clinical care without ICP monitoring (that is, guided only by clinical or radiological inference of the presence of raised ICP).\nData collection and analysis\nTwo authors (ET and RF) worked independently to identify the one study that met inclusion criteria. JR and RF independently extracted data and assessed risk of bias. We contacted study authors for additional information, including details of methods and outcome data.\nMain results\nOne randomized controlled trial (RCT) meeting the selection criteria has been identified to date.\nThe included study had 324 participants. We judged risk of bias to be low for all categories except blinding of participants and personnel, which is not feasible for this intervention. There were few missing data, and we analysed all on an intention-to-treat basis.\nParticipants could be 13 years of age or older (mean age of sample 29; range 22 to 44), and all had severe traumatic brain injury, mostly due to traffic incidents. All were receiving care within intensive care units (ICUs) at one of six hospitals in either Bolivia or Ecuador. Investigators followed up 92% of participants for six months or until death. The trial excluded patients with a Glasgow Coma Score (GCS) equal to three and fixed dilated pupils on admission on the basis that they had sustained brain injury of an unsalvageable severity.\nThe study compared people managed using either an intracranial monitor or non-invasive monitoring (imaging and clinical examination) to identify potentially harmful raised intracranial pressure. Both study groups used imaging and clinical examination measures.\nMortality at six months was 56/144 (39%) in the ICP-monitored group and 67/153 (44%) in the non-invasive group.\nUnfavourable outcome (defined as death or moderate to severe disability at six months) as assessed by the extended Glasgow Outcome Scale (GOS-E) was 80/144 (56%) in the ICP-monitored group and 93/153 (61%) in the non-invasive group.\nSix percent of participants in the ICP monitoring group had complications related to the monitoring, none of which met criteria for being a serious adverse event. 
There were no complications relating to the non-invasive group.\nOther complications and adverse events were comparable between treatment groups, 70/157 (45%) in the ICP-monitored group and 76/167 (46%) in the non-invasive group.\nLate mortality in both the monitored and non-invasive groups was high, with 35% of deaths occurring > 14 days after injury. The authors comment that this high late mortality may reflect inadequacies in post-ICU services for disabled survivors requiring specialist rehabilitation care.\nAuthors' conclusions\nThe data from the single RCT studying the role of routine ICP monitoring in acute traumatic coma fails to provide evidence to support the intervention.\nResearch in this area is complicated by the fact that RCTs necessarily assess the combined impact of measurement of ICP with the clinical management decisions made in light of this data. Future studies will need to assess the added value of ICP data alongside other information from the multimodal monitoring typically performed in intensive care unit settings. Additionally, even within traumatically acquired brain injury (TBI), there is great heterogeneity in mechanisms, distribution, location and magnitude of injury, and studies within more homogeneous subgroups are likely to be more informative.\n\nGiven the provided abstract, please respond to the following query: \"Future research\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "More research is needed into how routine monitoring of intracranial pressure can inform clinical care."
}
] |
query-laysum
|
24234
|
[
{
"role": "user",
"content": "Abstract: Background\nGastrostomy has been established as the standard procedure for administering long-term enteral nutrition in individuals with swallowing disturbances. Percutaneous gastrostomy is a less-invasive approach than open surgical gastrostomy, and can be accomplished via endoscopy (percutaneous endoscopic gastrostomy or PEG) or sonographic or fluoroscopic guidance (percutaneous radiological gastrostomy or PRG). Both techniques have different limitations, advantages, and contraindications. In order to determine the optimal technique for long-term nutritional supplementation many studies have been conducted to compare the outcomes of these two techniques; however, it remains unclear as to which method is superior to the other with respect to both efficacy and safety.\nObjectives\nTo compare the safety and efficacy of PEG and PRG in the treatment of individuals with swallowing disturbances.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, January 2016); MEDLINE (1946 to 22 January 2016); EMBASE (1980 to 22 January 2016); the reference lists of identified articles; databases of ongoing trials, including the Chinese Cochrane Centre Controlled Trials Register; and PubMed. We applied no language restrictions.\nSelection criteria\nRandomised controlled trials (RCTs) comparing PEG with PRG in individuals with swallowing disturbances, regardless of the underlying disease.\nData collection and analysis\nTwo authors independently evaluated the search results and assessed the quality of the studies. Data analyses could not be performed as no RCTs were identified for inclusion in this review.\nMain results\nWe identified no RCTs comparing PEG and PRG for percutaneous gastrostomy in individuals with swallowing disturbances. The large body of evidence in this field comes from retrospective and non-randomised controlled studies and case series. Based on this evidence, both PEG and PRG can be safely performed in selected individuals, although both are associated with major and minor complications. A definitive RCT has yet to be conducted to identify the preferred percutaneous gastrostomy technique.\nAuthors' conclusions\nBoth PEG and PRG are effective for long-term enteral nutritional support in selected individuals, though current evidence is insufficient to recommend one technique over the other. Choice of technique should be based on indications and contraindications, operator experience and the facilities available. Large-scale RCTs are required to compare the two techniques and to determine the optimal approach for percutaneous gastrostomy.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched a number of online resources to identify studies that compared PEG and PRG in individuals with swallowing disturbances in a randomised controlled manner. This type of study is the best research method for identifying any differences between two techniques used for the same procedure."
}
] |
query-laysum
|
24235
|
[
{
"role": "user",
"content": "Abstract: Background\nSeborrhoeic dermatitis is a chronic inflammatory skin condition that is distributed worldwide. It commonly affects the scalp, face and flexures of the body. Treatment options include antifungal drugs, steroids, calcineurin inhibitors, keratolytic agents and phototherapy.\nObjectives\nTo assess the effects of antifungal agents for seborrhoeic dermatitis of the face and scalp in adolescents and adults.\nA secondary objective is to assess whether the same interventions are effective in the management of seborrhoeic dermatitis in patients with HIV/AIDS.\nSearch methods\nWe searched the following databases up to December 2014: the Cochrane Skin Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (2014, Issue 11), MEDLINE (from 1946), EMBASE (from 1974) and Latin American Caribbean Health Sciences Literature (LILACS) (from 1982). We also searched trials registries and checked the bibliographies of published studies for further trials.\nSelection criteria\nRandomised controlled trials of topical antifungals used for treatment of seborrhoeic dermatitis in adolescents and adults, with primary outcome measures of complete clearance of symptoms and improved quality of life.\nData collection and analysis\nReview author pairs independently assessed eligibility for inclusion, extracted study data and assessed risk of bias of included studies. We performed fixed-effect meta-analysis for studies with low statistical heterogeneity and used a random-effects model when heterogeneity was high.\nMain results\nWe included 51 studies with 9052 participants. Of these, 45 trials assessed treatment outcomes at five weeks or less after commencement of treatment, and six trials assessed outcomes over a longer time frame. We believe that 24 trials had some form of conflict of interest, such as funding by pharmaceutical companies.\nAmong the included studies were 12 ketoconazole trials (N = 3253), 11 ciclopirox trials (N = 3029), two lithium trials (N = 141), two bifonazole trials (N = 136) and one clotrimazole trial (N = 126) that compared the effectiveness of these treatments versus placebo or vehicle. Nine ketoconazole trials (N = 632) and one miconazole trial (N = 47) compared these treatments versus steroids. Fourteen studies (N = 1541) compared one antifungal versus another or compared different doses or schedules of administration of the same agent versus one another.\nKetoconazole\nTopical ketoconazole 2% treatment showed a 31% lower risk of failed clearance of rashes compared with placebo (risk ratio (RR) 0.69, 95% confidence interval (CI) 0.59 to 0.81, eight studies, low-quality evidence) at four weeks of follow-up, but the effect on side effects was uncertain because evidence was of very low quality (RR 0.97, 95% CI 0.58 to 1.64, six studies); heterogeneity between studies was substantial (I² = 74%). The median proportion of those who did not have clearance in the placebo groups was 69%.\nKetoconazole treatment resulted in a remission rate similar to that of steroids (RR 1.17, 95% CI 0.95 to 1.44, six studies, low-quality evidence), but occurrence of side effects was 44% lower in the ketoconazole group than in the steroid group (RR 0.56, 95% CI 0.32 to 0.96, eight studies, moderate-quality evidence).\nKetoconozale yielded a similar remission failure rate as ciclopirox (RR 1.09, 95% CI 0.95 to 1.26, three studies, low-quality evidence). 
Most comparisons between ketoconazole and other antifungals were based on single studies that showed comparability of treatment effects.\nCiclopirox\nCiclopirox 1% led to a lower failed remission rate than placebo at four weeks of follow-up (RR 0.79, 95% CI 0.67 to 0.94, eight studies, moderate-quality evidence) with similar rates of side effects (RR 0.9, 95% CI 0.72 to 1.11, four studies, moderate-quality evidence).\nOther antifungals\nClotrimazole and miconazole efficacies were comparable with those of steroids on short-term assessment in single studies.\nTreatment effects on individual symptoms were less clear and were inconsistent, possibly because of difficulties encountered in measuring these symptoms.\nEvidence was insufficient to conclude that dose or mode of delivery influenced treatment outcome. Only one study reported on treatment compliance. No study assessed quality of life. One study assessed the maximum rash-free period but provided insufficient data for analysis. One small study in patients with HIV compared the effect of lithium versus placebo on seborrhoeic dermatitis of the face, but treatment outcomes were similar.\nAuthors' conclusions\nKetoconazole and ciclopirox are more effective than placebo, but limited evidence suggests that either of these agents is more effective than any other agent within the same class. Very few studies have assessed symptom clearance for longer periods than four weeks. Ketoconazole produced findings similar to those of steroids, but side effects were fewer. Treatment effect on overall quality of life remains unknown. Better outcome measures, studies of better quality and better reporting are all needed to improve the evidence base for antifungals for seborrhoeic dermatitis.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Particpants taking ketoconazole were 31% less likely than those given placebo to have symptoms that persisted at four weeks of follow-up. This was seen in eight studies with 2520 participants, but wide variation was noted between studies. Ketoconazole was as effective as steroids but had 44% fewer side effects. Without causing more side effects, ciclopirox was 21% more effective than placebo in achieving clinical clearance of rashes. Treatment effect on redness, itching or scaling symptoms of the skin was less clear. Evidence was insufficient to conclude that that one antifungal was superior to other antifungals, but this observation was based on few studies. Ketoconazole and ciclopirox are the most heavily investigated antifungals and are more effective than placebo. Other antifungals might have similar effects, but data are insufficient to underpin this.\nCommon side effects were increased skin redness or itching, burning sensation and hair loss.\nNo studies measured quality of life. Only one study reported on percentage of compliance in different treatment groups. Other studies used surrogates such as acceptability to represent compliance. We therefore could not assess the effect of compliance on treatment outcomes. One study on patients with HIV reported no clear effects of treatments."
}
] |
query-laysum
|
24236
|
[
{
"role": "user",
"content": "Abstract: Background\nAcute bronchiolitis is the leading cause of medical emergencies during winter months in infants younger than 24 months old. Chest physiotherapy is sometimes used to assist infants in the clearance of secretions in order to decrease ventilatory effort. This is an update of a Cochrane Review first published in 2005 and updated in 2006, 2012, and 2016.\nObjectives\nTo determine the efficacy of chest physiotherapy in infants younger than 24 months old with acute bronchiolitis. A secondary objective was to determine the efficacy of different techniques of chest physiotherapy (vibration and percussion, passive exhalation, or instrumental).\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, LILACS, Web of Science, PEDro (October 2011 to 20 April 2022), and two trials registers (5 April 2022).\nSelection criteria\nRandomised controlled trials (RCTs) in which chest physiotherapy was compared to control (conventional medical care with no physiotherapy intervention) or other respiratory physiotherapy techniques in infants younger than 24 months old with bronchiolitis.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nOur update of the searches dated 20 April 2022 identified five new RCTs with 430 participants. We included a total of 17 RCTs (1679 participants) comparing chest physiotherapy with no intervention or comparing different types of physiotherapy.\nFive trials (246 participants) assessed percussion and vibration techniques plus postural drainage (conventional chest physiotherapy), and 12 trials (1433 participants) assessed different passive flow-oriented expiratory techniques, of which three trials (628 participants) assessed forced expiratory techniques, and nine trials (805 participants) assessed slow expiratory techniques. In the slow expiratory subgroup, two trials (78 participants) compared the technique with instrumental physiotherapy techniques, and two recent trials (116 participants) combined slow expiratory techniques with rhinopharyngeal retrograde technique (RRT). One trial used RRT alone as the main component of the physiotherapy intervention. Clinical severity was mild in one trial, severe in four trials, moderate in six trials, and mild to moderate in five trials. One study did not report clinical severity. Two trials were performed on non-hospitalised participants.\nOverall risk of bias was high in six trials, unclear in five, and low in six trials.\nThe analyses showed no effects of conventional techniques on change in bronchiolitis severity status, respiratory parameters, hours with oxygen supplementation, or length of hospital stay (5 trials, 246 participants).\nRegarding instrumental techniques (2 trials, 80 participants), one trial observed similar results in bronchiolitis severity status when comparing slow expiration to instrumental techniques (mean difference 0.10, 95% confidence interval (C) −0.17 to 0.37).\nForced passive expiratory techniques failed to show an effect on bronchiolitis severity in time to recovery (2 trials, 509 participants; high-certainty evidence) and time to clinical stability (1 trial, 99 participants; high-certainty evidence) in infants with severe bronchiolitis. 
Important adverse effects were reported with the use of forced expiratory techniques.\nRegarding slow expiratory techniques, a mild to moderate improvement was observed in bronchiolitis severity score (standardised mean difference −0.43, 95% CI −0.73 to −0.13; I2 = 55%; 7 trials, 434 participants; low-certainty evidence). Also, in one trial an improvement in time to recovery was observed with the use of slow expiratory techniques. No benefit was observed in length of hospital stay, except for one trial which showed a one-day reduction. No effects were shown or reported for other clinical outcomes such as duration on oxygen supplementation, use of bronchodilators, or parents' impression of physiotherapy benefit.\nAuthors' conclusions\nWe found low-certainty evidence that passive slow expiratory technique may result in a mild to moderate improvement in bronchiolitis severity when compared to control. This evidence comes mostly from infants with moderately acute bronchiolitis treated in hospital. The evidence was limited with regard to infants with severe bronchiolitis and those with moderately severe bronchiolitis treated in ambulatory settings.\nWe found high-certainty evidence that conventional techniques and forced expiratory techniques result in no difference in bronchiolitis severity or any other outcome. We found high-certainty evidence that forced expiratory techniques in infants with severe bronchiolitis do not improve their health status and can lead to severe adverse effects.\nCurrently, the evidence regarding new physiotherapy techniques such as RRT or instrumental physiotherapy is scarce, and further trials are needed to determine their effects and potential for use in infants with moderate bronchiolitis, as well as the potential additional effect of RRT when combined with slow passive expiratory techniques. Finally, the effectiveness of combining chest physiotherapy with hypertonic saline should also be investigated.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for all randomised controlled trials (a type of study where participants are randomly assigned to one of two or more treatment groups) comparing chest physiotherapy interventions against a control or other types of physiotherapy, and looked at their effectiveness by type of technique and bronchiolitis severity."
}
] |
query-laysum
|
24237
|
[
{
"role": "user",
"content": "Abstract: Background\nMany women undergoing an assisted reproductive technology (ART) cycle will not achieve a live birth. Failure at the embryo transfer stage may be due to lack of good-quality embryo/s, lack of uterine receptivity, or the transfer technique itself. Numerous methods, including the use of ultrasound guidance for proper catheter placement in the endometrial cavity, have been suggested as more effective techniques of embryo transfer. This review evaluates the efficacy of ultrasound-guided embryo transfer (UGET) compared with 'clinical touch' (CTET), which is the traditional method of embryo transfer and relies on the clinician's tactile senses to judge when the transfer catheter is in the correct position.\nObjectives\nTo determine whether ultrasound guidance compared with clinical touch improves pregnancy outcomes in women undergoing embryo transfer during ART cycles.\nSearch methods\nFor the 2016 update of this review, we ran updated searches in the Cochrane Gynaecology and Fertility Group trials register (May 2015), the Cochrane Central Register of Controlled Trials (the Cochrane Library, May 2015), MEDLINE (2009 to May 2015), and EMBASE (2009 to May 2015). We also handsearched relevant conference proceedings: American Society for Reproductive Medicine (ASRM), European Society for Human Reproduction and Embryology (ESHRE), and International Federation of Gynecology and Obstetrics (FIGO). There were no language restrictions.\nSelection criteria\nWe included only randomised controlled trials.\nData collection and analysis\nTwo review authors independently assessed eligibility and quality of trials and extracted data from those selected. We calculated odds ratio (OR) and 95% confidence interval (CI) for dichotomous outcomes. No outcomes were reported using continuous data. We assessed the overall quality of the evidence for the main findings using the GRADE working group methods.\nMain results\nThis systematic review now has 21 included studies (four of which we added in the 2016 update), two studies awaiting assessment, and 47 excluded studies. In total, data for meta-analyses were available in 21 trials (n = 6218 women), of which only four reported live births.\nUGET was associated with an increased chance of a live birth/ongoing pregnancy compared with CTET (OR 1.47, 95% CI 1.30 to 1.65; 13 trials; n = 5859 women; I2 = 74%; low-quality evidence). Sensitivity analysis by including only trials with low risk of selection bias or by using a random-effects model did not alter the effect. We estimate that for women with a chance of a live birth/ongoing pregnancy of 23% using CTET, this would increase to between 28% and 33% using UGET. We considered the quality of the evidence using GRADE methodology to be low.\nUGET was associated with an increase in the chance of a clinical pregnancy (OR 1.31, 95% CI 1.17 to 1.45; 20 trials; n = 6711 women; I2 = 42%; moderate-quality evidence). We identified no differences between groups for the incidence of adverse events including multiple pregnancy, ectopic pregnancy, or miscarriage. These events were relatively rare, and sample sizes limited the ability to detect such differences.\nAuthors' conclusions\nThe evidence suggests ultrasound guidance improves the chance of live birth/ongoing and clinical pregnancies compared with clinical touch, without increasing the chance of multiple pregnancy, ectopic pregnancy, or miscarriage. 
Methodological limitations included the following: only four studies reported details of both computerised randomisation techniques and adequate allocation concealment; only four studies reported on the outcome of live birth; and none of the nine studies that reported on ongoing pregnancy also reported live birth, suggesting possible reporting bias. Adequate reporting of randomisation and allocation concealment will improve the quality of future studies. The primary outcome measure of future studies should be live births per woman randomised.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to learn whether ultrasound-guided embryo transfer (UGET) improves pregnancy outcomes compared with clinical touch. The traditional method of embryo transfer, clinical touch relies on the clinician's tactile senses to judge when the transfer catheter is in the correct position."
}
] |
query-laysum
|
24238
|
[
{
"role": "user",
"content": "Abstract: Background\nThe detection and diagnosis of caries at the initial (non-cavitated) and moderate (enamel) levels of severity is fundamental to achieving and maintaining good oral health and prevention of oral diseases. An increasing array of methods of early caries detection have been proposed that could potentially support traditional methods of detection and diagnosis. Earlier identification of disease could afford patients the opportunity of less invasive treatment with less destruction of tooth tissue, reduce the need for treatment with aerosol-generating procedures, and potentially result in a reduced cost of care to the patient and to healthcare services.\nObjectives\nTo determine the diagnostic accuracy of different visual classification systems for the detection and diagnosis of non-cavitated coronal dental caries for different purposes (detection and diagnosis) and in different populations (children or adults).\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 30 April 2020); Embase Ovid (1980 to 30 April 2020); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 30 April 2020); and the World Health Organization International Clinical Trials Registry Platform (to 30 April 2020). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared a visual classification system (index test) with a reference standard (histology, excavation, radiographs). This included cross-sectional studies that evaluated the diagnostic accuracy of single index tests and studies that directly compared two or more index tests. Studies reporting at both the patient or tooth surface level were included. In vitro and in vivo studies were considered. Studies that explicitly recruited participants with caries into dentine or frank cavitation were excluded. We also excluded studies that artificially created carious lesions and those that used an index test during the excavation of dental caries to ascertain the optimum depth of excavation.\nData collection and analysis\nWe extracted data independently and in duplicate using a standardised data extraction and quality assessment form based on QUADAS-2 specific to the review context. Estimates of diagnostic accuracy were determined using the bivariate hierarchical method to produce summary points of sensitivity and specificity with 95% confidence intervals (CIs) and regions, and 95% prediction regions. The comparative accuracy of different classification systems was conducted based on indirect comparisons. Potential sources of heterogeneity were pre-specified and explored visually and more formally through meta-regression.\nMain results\nWe included 71 datasets from 67 studies (48 completed in vitro) reporting a total of 19,590 tooth sites/surfaces. The most frequently reported classification systems were the International Caries Detection and Assessment System (ICDAS) (36 studies) and Ekstrand-Ricketts-Kidd (ERK) (15 studies). In reporting the results, no distinction was made between detection and diagnosis. Only two studies were at low risk of bias across all four domains, and 15 studies were at low concern for applicability across all three domains. The patient selection domain had the highest proportion of high risk of bias studies (49 studies). 
Four studies were assessed at high risk of bias for the index test domain, nine for the reference standard domain, and seven for the flow and timing domain. Due to the high number of studies on extracted teeth concerns regarding applicability were high for the patient selection and index test domains (49 and 46 studies respectively).\nStudies were synthesised using a hierarchical bivariate method for meta-analysis. There was substantial variability in the results of the individual studies: sensitivities ranged from 0.16 to 1.00 and specificities from 0 to 1.00. For all visual classification systems the estimated summary sensitivity and specificity point was 0.86 (95% CI 0.80 to 0.90) and 0.77 (95% CI 0.72 to 0.82) respectively, diagnostic odds ratio (DOR) 20.38 (95% CI 14.33 to 28.98). In a cohort of 1000 tooth surfaces with 28% prevalence of enamel caries, this would result in 40 being classified as disease free when enamel caries was truly present (false negatives), and 163 being classified as diseased in the absence of enamel caries (false positives). The addition of test type to the model did not result in any meaningful difference to the sensitivity or specificity estimates (Chi2(4) = 3.78, P = 0.44), nor did the addition of primary or permanent dentition (Chi2(2) = 0.90, P = 0.64). The variability of results could not be explained by tooth surface (occlusal or approximal), prevalence of dentinal caries in the sample, nor reference standard. Only one study intentionally included restored teeth in its sample and no studies reported the inclusion of sealants.\nWe rated the certainty of the evidence as low, and downgraded two levels in total for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the in vitro studies, and inconsistency of results.\nAuthors' conclusions\nWhilst the confidence intervals for the summary points of the different visual classification systems indicated reasonable performance, they do not reflect the confidence that one can have in the accuracy of assessment using these systems due to the considerable unexplained heterogeneity evident across the studies. The prediction regions in which the sensitivity and specificity of a future study should lie are very broad, an important consideration when interpreting the results of this review. Should treatment be provided as a consequence of a false-positive result then this would be non-invasive, typically the application of fluoride varnish where it was not required, with low potential for an adverse event but healthcare resource and finance costs.\nDespite the robust methodology applied in this comprehensive review, the results should be interpreted with some caution due to shortcomings in the design and execution of many of the included studies. Studies to determine the diagnostic accuracy of methods to detect and diagnose caries in situ are particularly challenging. Wherever possible future studies should be carried out in a clinical setting, to provide a realistic assessment of performance within the oral cavity with the challenges of plaque, tooth staining, and restorations, and consider methods to minimise bias arising from the use of imperfect reference standards in clinical studies.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out how accurate visual classification systems are for detecting early tooth decay as part of the dental 'check-up' for children and adults who visit their dentist. Researchers in Cochrane included 67 studies to answer this question."
}
] |
query-laysum
|
24239
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is the fourth update of a Cochrane Review first published in 2002 and last updated in 2016.\nIt is common clinical practice to follow patients with colorectal cancer for several years following their curative surgery or adjuvant therapy, or both. Despite this widespread practice, there is considerable controversy about how often patients should be seen, what tests should be performed, and whether these varying strategies have any significant impact on patient outcomes.\nObjectives\nTo assess the effect of follow-up programmes (follow-up versus no follow-up, follow-up strategies of varying intensity, and follow-up in different healthcare settings) on overall survival for patients with colorectal cancer treated with curative intent. Secondary objectives are to assess relapse-free survival, salvage surgery, interval recurrences, quality of life, and the harms and costs of surveillance and investigations.\nSearch methods\nFor this update, on 5 April 2109 we searched CENTRAL, MEDLINE, Embase, CINAHL, and Science Citation Index. We also searched reference lists of articles, and handsearched the Proceedings of the American Society for Radiation Oncology. In addition, we searched the following trials registries: ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. We contacted study authors. We applied no language or publication restrictions to the search strategies.\nSelection criteria\nWe included only randomised controlled trials comparing different follow-up strategies for participants with non-metastatic colorectal cancer treated with curative intent.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently determined study eligibility, performed data extraction, and assessed risk of bias and methodological quality. We used GRADE to assess evidence quality.\nMain results\nWe identified 19 studies, which enrolled 13,216 participants (we included four new studies in this second update). Sixteen out of the 19 studies were eligible for quantitative synthesis. Although the studies varied in setting (general practitioner (GP)-led, nurse-led, or surgeon-led) and 'intensity' of follow-up, there was very little inconsistency in the results.\nOverall survival: we found intensive follow-up made little or no difference (hazard ratio (HR) 0.91, 95% confidence interval (CI) 0.80 to 1.04: I² = 18%; high-quality evidence). There were 1453 deaths among 12,528 participants in 15 studies. In absolute terms, the average effect of intensive follow-up on overall survival was 24 fewer deaths per 1000 patients, but the true effect could lie between 60 fewer to 9 more per 1000 patients.\nColorectal cancer-specific survival: we found intensive follow-up probably made little or no difference (HR 0.93, 95% CI 0.81 to 1.07: I² = 0%; moderate-quality evidence). There were 925 colorectal cancer deaths among 11,771 participants enrolled in 11 studies. In absolute terms, the average effect of intensive follow-up on colorectal cancer-specific survival was 15 fewer colorectal cancer-specific survival deaths per 1000 patients, but the true effect could lie between 47 fewer to 12 more per 1000 patients.\nRelapse-free survival: we found intensive follow-up made little or no difference (HR 1.05, 95% CI 0.92 to 1.21; I² = 41%; high-quality evidence). There were 2254 relapses among 8047 participants enrolled in 16 studies. 
The average effect of intensive follow-up on relapse-free survival was 17 more relapses per 1000 patients, but the true effect could lie between 30 fewer and 66 more per 1000 patients.\nSalvage surgery with curative intent: this was more frequent with intensive follow-up (risk ratio (RR) 1.98, 95% CI 1.53 to 2.56; I² = 31%; high-quality evidence). There were 457 episodes of salvage surgery in 5157 participants enrolled in 13 studies. In absolute terms, the effect of intensive follow-up on salvage surgery was 60 more episodes of salvage surgery per 1000 patients, but the true effect could lie between 33 to 96 more episodes per 1000 patients.\nInterval (symptomatic) recurrences: these were less frequent with intensive follow-up (RR 0.59, 95% CI 0.41 to 0.86; I² = 66%; moderate-quality evidence). There were 376 interval recurrences reported in 3933 participants enrolled in seven studies. Intensive follow-up was associated with fewer interval recurrences (52 fewer per 1000 patients); the true effect is between 18 and 75 fewer per 1000 patients.\nIntensive follow-up probably makes little or no difference to quality of life, anxiety, or depression (reported in 7 studies; moderate-quality evidence). The data were not available in a form that allowed analysis.\nIntensive follow-up may increase the complications (perforation or haemorrhage) from colonoscopies (OR 7.30, 95% CI 0.75 to 70.69; 1 study, 326 participants; very low-quality evidence). Two studies reported seven colonoscopic complications in 2292 colonoscopies, three perforations and four gastrointestinal haemorrhages requiring transfusion. We could not combine the data, as they were not reported by study arm in one study.\nThe limited data on costs suggests that the cost of more intensive follow-up may be increased in comparison with less intense follow-up (low-quality evidence). The data were not available in a form that allowed analysis.\nAuthors' conclusions\nThe results of our review suggest that there is no overall survival benefit for intensifying the follow-up of patients after curative surgery for colorectal cancer. Although more participants were treated with salvage surgery with curative intent in the intensive follow-up groups, this was not associated with improved survival. Harms related to intensive follow-up and salvage therapy were not well reported.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Colorectal cancer affects about 1 in 20 people in high-income countries. Most patients (about two thirds) have curable disease. Follow-up after surgical resection +/- chemotherapy treatment usually means visits to the doctor as well as having some tests. Many people believe that follow-up saves lives, but we are not sure how often the patient should see the doctor and what tests they should have, and when."
}
] |
query-laysum
|
24240
|
[
{
"role": "user",
"content": "Abstract: Background\nPatients with rheumatoid arthritis may have an increased risk of falls due to impairments in lower-extremity joints, which may result in either mobility, or postural stability problems. There is evidence in the literature suggesting that balance, agility and coordination training techniques can induce changes in lower-extremity muscle activity patterns that result in improvement in dynamic joint stability.\nThe mechanoreceptors present in and around the joints are responsible for maintaining postural control and joint position sense. These receptors are integrated to compose the somatosensorial system. In combination with visual and auditory inputs, which improve our spatial perception even further, the systems are able to maintain a stable body posture.\nHowever, there is a lack of information on the efficacy of balance training alone in patients with rheumatoid arthritis.\nObjectives\nTo assess the effectiveness and safety of balance training (proprioceptive training) to improve functional capacity in patients with rheumatoid arthritis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2008, Issue 4), MEDLINE via PubMed (January 1966 to December 2008), EMBASE (January 1980 to December 2008), LILACS (January 1982 to December 2008), CINAHL (January 1982 to December 2008), PEDro and Scirus (inception to 2008). We also handsearched conference abstracts.\nSelection criteria\nAll eligible randomised controlled trials (RCT) or controlled clinical trials (CCT) comparing balance training (proprioceptive training) with any other intervention or with no intervention.\nData collection and analysis\nTwo review authors independently assessed titles or abstracts, or both, for inclusion criteria.\nMain results\nThe electronic search identified 864 studies. From this search, 17 studies described general exercises in rheumatoid arthritis patients as the main topic. After analysing them, we observed that the main interventions were exercises to improve muscle strength, endurance, and dynamic exercises (swimming, walking, etc). As we did not find any studies investigating the effects of balance training alone or in combination with other therapies in patients with rheumatoid arthritis, it was not possible to include any data regarding the chosen topic in this systematic review.\nAuthors' conclusions\nThere is no research available examining the efficacy of balance training alone in patients with rheumatoid arthritis. The effectiveness and safety of balance training to improve functional capacity of these patients remains unclear. We suggest that future research should give more importance to balance training by either increasing the number and duration of sessions or investigating its efficacy alone.\n\nGiven the provided abstract, please respond to the following query: \"What is rheumatoid arthritis and what is balance training?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When you have rheumatoid arthritis, your immune system, which normally fights infection, attacks the lining of your joints. This makes your joints swollen, stiff and painful. The small joints of your hands and feet are usually affected first. There is no cure for rheumatoid arthritis at present, so the treatments aim to relieve pain and stiffness and improve your ability to move. Patients with rheumatoid arthritis may have an increased risk of falls due to impairments in the legs and feet which can lead to instability.\nBalance training is a set of exercises in which participants use their muscles against an external force as a planned movement or in response to an unexpected movement or changes in order to maintain their balance. Balance exercises and positions can include feet together, semi and tandem position, one leg stance, unstable platforms or foams, eyes opened and closed. In addition, balance training can include some coordination and agility exercises such as: shuttle walk, tandem walk, crossover walk, side-to-side walk and figure of eight."
}
] |
query-laysum
|
24241
|
[
{
"role": "user",
"content": "Abstract: Background\nCommunity pharmacies are an easily accessible and cost-effective platform for delivering health care worldwide, and the range of services provided has undergone rapid expansion in recent years. Thus, in addition to dispensing medication, pharmacy workers within community pharmacies now give advice on a range of health-promoting behaviours that aim to improve health and to optimise the management of long-term conditions. However, it remains uncertain whether these health-promotion interventions can change the professional practice of pharmacy workers, improve health behaviours and outcomes for pharmacy users and have the potential to address health inequalities.\nObjectives\nTo assess the effectiveness and safety of health-promotion interventions to change community pharmacy workers' professional practice and improve outcomes for users of community pharmacies.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, six other databases and two trials registers to 6 February 2018. We also conducted reference checking, citation searches and contacted study authors to identify any additional studies.\nSelection criteria\nWe included randomised trials of health-promotion interventions in community pharmacies targeted at, or delivered by, pharmacy workers that aimed to improve the health-related behaviour of people attending the pharmacy compared to no treatment, or usual treatment received in the community pharmacy. We excluded interventions where there was no interaction between pharmacy workers and pharmacy users, and those that focused on medication use only.\nData collection and analysis\nWe used standard procedures recommended by Cochrane and the Effective Practice and Organisation of Care review group for both data collection and analysis. We compared intervention to no intervention or to usual treatment using standardised mean differences (SMD) and 95% confidence intervals (95% CI) (higher scores represent better outcomes for pharmacy user health-related behaviour and quality of life, and lower scores represent better outcomes for clinical outcomes, costs and adverse events). Interpretation of effect sizes (SMD) was in line with Cochrane recommendations.\nMain results\nWe included 57 randomised trials with 16,220 participants, described in 83 reports. Forty-nine studies were conducted in high-income countries, and eight in middle-income countries. We found no studies that had been conducted in low-income countries. Most interventions were educational, or incorporated skills training. Interventions were directed at pharmacy workers (n = 8), pharmacy users (n = 13), or both (n = 36). The clinical areas most frequently studied were diabetes, hypertension, asthma, and modification of cardiovascular risk. Duration of follow-up of interventions was often unclear. Only five studies gave details about the theoretical basis for the intervention, and studies did not provide sufficient data to comment on health inequalities.\nThe most common sources of bias were lack of protection against contamination - mainly in individually randomised studies - and inadequate blinding of participants. The certainty of the evidence for all outcomes was moderate. We downgraded the certainty because of the heterogeneity across studies and evidence of potential publication bias.\nProfessional practice outcomes\nWe conducted a narrative analysis for pharmacy worker behaviour due to high heterogeneity in the results. 
Health-promotion interventions probably improve pharmacy workers' behaviour (2944 participants; 9 studies; moderate-certainty evidence) when compared to no intervention. These studies typically assessed behaviour using a simulated patient (mystery shopper) methodology.\nPharmacy user outcomes\nHealth-promotion interventions probably lead to a slight improvement in health-related behaviours of pharmacy users when compared to usual treatment (SMD 0.43, 95% CI 0.14 to 0.72; I2 = 89%; 10 trials; 2138 participants; moderate-certainty evidence). These interventions probably also lead to a slight improvement in intermediate clinical outcomes, such as levels of cholesterol or glycated haemoglobin, for pharmacy users (SMD -0.43, 95% CI -0.65 to -0.21; I2 = 90%; 20 trials; 3971 participants; moderate-certainty evidence).\nWe identified no studies that evaluated the impact of health-promotion interventions on event-based clinical outcomes, such as stroke or myocardial infarction, or the psychological well-being of pharmacy users.\nHealth-promotion interventions probably lead to a slight improvement in quality of life for pharmacy users (SMD 0.29, 95% CI 0.08 to 0.50; I2= 82%; 10 trials, 2687 participants; moderate-certainty evidence).\nAdverse events\nNo studies reported adverse events for either pharmacy workers or pharmacy users.\nCosts\nWe found that health-promotion interventions are likely to be cost-effective, based on moderate-certainty evidence from five of seven studies that reported an economic evaluation.\nAuthors' conclusions\nHealth-promotion interventions in the community pharmacy context probably improve pharmacy workers' behaviour and probably have a slight beneficial effect on health-related behaviour, intermediate clinical outcomes, and quality of life for pharmacy users.\nSuch interventions are likely to be cost-effective and the effects are seen across a range of clinical conditions and health-related behaviours. Nevertheless the magnitude of the effects varies between conditions, and more effective interventions might be developed if greater consideration were given to the theoretical basis of the intervention and mechanisms for effecting behaviour change.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Community pharmacies are an easy place for many people to access healthcare advice. In the past this advice was limited to how best to take medicines, but, increasingly, community pharmacy workers are carrying out other activities, such as giving advice on healthy eating and management of long-term conditions. While some community pharmacy workers may offer the sale of products without a strong evidence-base, the professional guidance issued to pharmacists has attempted to reduce these transactions, and has placed more emphasis on developing evidence-based public health services. Many people find health-related lifestyle and self-management behaviours difficult. Pharmacies may be convenient for people to use, but it is important to understand whether health-promoting activities delivered in pharmacies are worthwhile and effective, so that those responsible for commissioning health care can decide whether it is worth spending resources to support them."
}
] |
query-laysum
|
24242
|
[
{
"role": "user",
"content": "Abstract: Background\nLiver transplantation is an established treatment option for end-stage liver failure. To date, no consensus has been reached on the use of immunosuppressive T-cell specific antibody induction compared with corticosteroid induction of immunosuppression after liver transplantation.\nObjectives\nTo assess the benefits and harms of T-cell specific antibody induction versus corticosteroid induction for prevention of acute rejection in liver transplant recipients.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Science Citation Index Expanded, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) on 30 September 2013 together with reference checking, citation searching, contact with trial authors and pharmaceutical companies to identify additional trials.\nSelection criteria\nWe included all randomised clinical trials assessing immunosuppression with T-cell specific antibody induction versus corticosteroid induction in liver transplant recipients. Our inclusion criteria stated that participants within each included trial should have received the same maintenance immunosuppressive therapy.\nData collection and analysis\nWe used RevMan for statistical analysis of dichotomous data with risk ratio (RR) and of continuous data with mean difference (MD), both with 95% confidence intervals (CIs). We assessed risk of systematic errors (bias) using bias risk domains with definitions. We used trial sequential analysis to control for random errors (play of chance).\nMain results\nWe included 10 randomised trials with a total of 1589 liver transplant recipients, which studied the use of T-cell specific antibody induction versus corticosteroid induction. All trials were with high risk of bias. We compared any kind of T-cell specific antibody induction versus corticosteroid induction in 10 trials with 1589 participants, including interleukin-2 receptor antagonist induction versus corticosteroid induction in nine trials with 1470 participants, and polyclonal T-cell specific antibody induction versus corticosteroid induction in one trial with 119 participants.\nOur analyses showed no significant differences regarding mortality (RR 1.01, 95% CI 0.72 to 1.43), graft loss (RR 1.12, 95% CI 0.82 to 1.53) and acute rejection (RR 0.84, 95% CI 0.70 to 1.00), infection (RR 0.96, 95% CI 0.85 to 1.09), hepatitis C virus recurrence (RR 0.89, 95% CI 0.79 to 1.00), malignancy (RR 0.59, 95% CI 0.13 to 2.73), and post-transplantation lymphoproliferative disorder (RR 1.00, 95% CI 0.07 to 15.38) when any kind of T-cell specific antibody induction was compared with corticosteroid induction (all low-quality evidence). Cytomegalovirus infection was less frequent in patients receiving any kind of T-cell specific antibody induction compared with corticosteroid induction (RR 0.50, 95% CI 0.33 to 0.75; low-quality evidence). This was also observed when interleukin-2 receptor antagonist induction was compared with corticosteroid induction (RR 0.55, 95% CI 0.37 to 0.83; low-quality evidence), and when polyclonal T-cell specific antibody induction was compared with corticosteroid induction (RR 0.21, 95% CI 0.06 to 0.70; low-quality evidence). However, when trial sequential analysis regarding cytomegalovirus infection was applied, the required information size was not reached. 
Furthermore, diabetes mellitus occurred less frequently when T-cell specific antibody induction was compared with corticosteroid induction (RR 0.45, 95% CI 0.34 to 0.60; low-quality evidence), when interleukin-2 receptor antagonist induction was compared with corticosteroid induction (RR 0.45, 95% CI 0.35 to 0.61; low-quality evidence), and when polyclonal T-cell specific antibody induction was compared with corticosteroid induction (RR 0.12, 95% CI 0.02 to 0.95; low-quality evidence). When trial sequential analysis was applied, the trial sequential monitoring boundary for benefit was crossed. We found no subgroup differences for type of interleukin-2 receptor antagonist (basiliximab versus daclizumab). Four trials reported on adverse events. However, no differences between trial groups were noted. Limited data were available for meta-analysis on drug-specific adverse events such as haematological adverse events for antithymocyte globulin. No data were available on quality of life.\nAuthors' conclusions\nBecause of the low quality of the evidence, the effects of T-cell antibody induction remain uncertain. T-cell specific antibody induction seems to reduce diabetes mellitus and may reduce cytomegalovirus infection when compared with corticosteroid induction. No other clear benefits or harms were associated with the use of T-cell specific antibody induction compared with corticosteroid induction. For some analyses, the number of trials investigating the use of T-cell specific antibody induction after liver transplantation is small, and the numbers of participants and outcomes in these randomised trials are limited. Furthermore, the included trials are heterogeneous in nature and have applied different types of T-cell specific antibody induction therapy. All trials were at high risk of bias. Hence, additional randomised clinical trials are needed to assess the benefits and harms of T-cell specific antibody induction compared with corticosteroid induction for liver transplant recipients. Such trials ought to be conducted with low risks of systematic error and of random error.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our analyses showed no significant differences regarding mortality, graft loss, acute rejection, infection, hepatitis C virus recurrence, malignancy, and post-transplantation lymphoproliferative disorder. Cytomegalovirus infection was less frequent in patients receiving any kind of T-cell specific antibody induction compared with corticosteroid induction (low-quality evidence). This finding was also observed when interleukin-2 receptor antagonist induction was compared with corticosteroid induction (low-quality evidence) and when polyclonal T-cell specific antibody induction was compared with corticosteroid induction (low-quality evidence). However, when trial sequential analysis regarding cytomegalovirus infection was applied, random errors could not be excluded. Furthermore, diabetes mellitus occurred less frequently when T-cell specific antibody induction was compared with corticosteroid induction (low-quality evidence), when interleukin-2 receptor antagonist induction was compared with corticosteroid induction (low-quality evidence), and when polyclonal T-cell specific antibody induction was compared with corticosteroid induction (low-quality evidence). When trial sequential analysis was applied, we could exclude random errors regarding a reduction in diabetes mellitus by T-cell specific antibodies compared with corticosteroids."
}
] |
query-laysum
|
24243
|
[
{
"role": "user",
"content": "Abstract: Background\nLarge territory middle cerebral artery (MCA) ischaemic strokes account for around 10% of all ischaemic strokes and have a particularly devastating prognosis when associated with malignant oedema. Progressive cerebral oedema starts developing in the first 24 to 48 hours of stroke ictus with an associated rise in intracranial pressure. The rise in intracranial pressure may eventually overwhelm compensatory mechanisms leading to a cascading secondary damage to surrounding unaffected parenchyma. This downward spiral can rapidly progress to death or severe neurological disability. Early decompressive craniectomy to relieve intracranial pressure and associated tissue shift can help ameliorate this secondary damage and improve outcomes. Evidence has been accumulating of the benefit of early surgical decompression in stroke patients. Earlier studies have excluded people above the age of 60 due to associated poor outcomes; however, newer trials have included this patient subgroup. This review follows a Cochrane Review published in 2012.\nObjectives\nTo assess the effectiveness of surgical decompression in people with malignant oedema after ischaemic stroke with regard to reduction in mortality and improved functional outcome. We also aimed to examine the adverse effects of surgical decompression in this patient cohort.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL; 2022, Issue 7 of 12), MEDLINE Ovid, Embase Ovid, Web of Science Core Collection, Scopus databases, ClinicalTrials.gov, and the WHO ICTRP to July 2022. We also reviewed the reference lists of relevant articles.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing decompressive craniectomy with medical management to best medical management alone for people with malignant cerebral oedema after MCA ischaemic stroke.\nData collection and analysis\nTwo review authors independently screened the search results, assessed study eligibility, performed risk of bias assessment, and extracted the data. The primary outcomes were death and death or severe disability (modified Rankin Scale (mRS) > 4) at 6 to 12 months follow-up. Other outcomes included death or moderate disability (mRS > 3), severe disability (mRS = 5), and adverse events. We assessed the certainty of the evidence using the GRADE approach, categorising it as high, moderate, low, or very low.\nMain results\nWe included nine RCTs with a total of 513 participants included in the final analysis. Three studies included patients younger than 60 years of age; two trials accepted patients up to 80 years of age; and one trial only included patients 60 years or older. The majority of included trials (six) mandated a time from stroke ictus to treatment of < 48 hours, whilst in two of them this was < 96 hours.\nSurgical decompression was associated with a reduction in death (odds ratio (OR) 0.18, 95% confidence interval (CI) 0.12 to 0.27, 9 trials, 513 participants, P < 0.001; high-certainty evidence); death or severe disability (mRS > 4, OR 0.22, 95% CI 0.15 to 0.32, 9 trials, 513 participants, P < 0.001; high-certainty evidence); and death or moderate disability (mRS > 3, OR 0.34, 95% CI 0.22 to 0.52, 9 trials, 513 participants, P < 0.001; moderate-certainty evidence). 
Subgroup analysis did not reveal any significant effect on treatment outcomes when analysing age (< 60 years versus ≥ 60 years); time from stroke ictus to intervention (< 48 hours versus ≥ 48 hours); or dysphasia. There was a significant subgroup effect of time at follow-up (6 versus 12 months, P = 0.02) on death as well as death or severe disability (mRS > 4); however, the validity of this finding was affected by fewer participant numbers in the six-month follow-up subgroup. There was no consistent reporting of per-participant adverse event rates in any of the included studies, which prevented further analysis.\nAuthors' conclusions\nSurgical decompression improves outcomes in the management of malignant oedema after acute ischaemic stroke, including a considerable reduction in death or severe disability (mRS > 4) and a reduction in death or moderate disability (mRS > 3). Whilst there is evidence that this positive treatment effect is present in patients > 60 years old, it is important to take into account that these patients have a poorer prospect of functional survival independent of this treatment effect. In interpreting these results it must also be considered that the data demonstrating benefit are drawn from a unique patient subset with profound neurological deficit, reduced level of consciousness, and no pre-morbid disability or severe comorbidity.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most strokes are caused by blockage of a blood vessel to the brain (ischaemic stroke), which is a major cause of death and disability worldwide. This blockage prevents the oxygen-carrying blood from supplying the brain, and part of the brain being supplied by this vessel begins to die (infarct). Over the following 24 to 48 hours, the damaged brain begins to swell. Sometimes the swelling can be very dramatic, causing a rise in the pressure inside the skull which can lead to the surrounding brain being affected and rapidly progressing to death.\nSurgical decompression can help relieve the pressure by creating a large enough hole in the skull and tissue layers around the affected brain (decompressive craniectomy). Recent studies suggest that early use of this treatment after a large stroke can prevent death or disability in survivors. Early evidence only studied the use of this technique in younger patients; however, more recent studies have begun to address its use in older patients. We wanted to find out whether the use of surgical decompression is better or worse than standard medical management alone in people who have had a large stroke."
}
] |
query-laysum
|
24244
|
[
{
"role": "user",
"content": "Abstract: Background\nCaring for people with dementia is highly challenging, and family carers are recognised as being at increased risk of physical and mental ill-health. Most current interventions have limited success in reducing stress among carers of people with dementia. Mindfulness-based stress reduction (MBSR) draws on a range of practices and may be a promising approach to helping carers of people with dementia.\nObjectives\nTo assess the effectiveness of MBSR in reducing the stress of family carers of people with dementia.\nSearch methods\nWe searched ALOIS - the Cochrane Dementia and Cognitive Improvement Group's Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (all years to Issue 9 of 12, 2017), MEDLINE (Ovid SP 1950 to September 2017), Embase (Ovid SP 1974 to Sepetmber 2017), Web of Science (ISI Web of Science 1945 to September 2017), PsycINFO (Ovid SP 1806 to September 2017), CINAHL (all dates to September 2017), LILACS (all dates to September 2017), World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), ClinicalTrials.gov, and Dissertation Abstracts International (DAI) up to 6 September 2017, with no language restrictions.\nSelection criteria\nRandomised controlled trials (RCTs) of MBSR for family carers of people with dementia.\nData collection and analysis\nTwo review authors independently screened references for inclusion criteria, extracted data, assessed the risk of bias of trials with the Cochrane 'Risk of bias' tool, and evaluated the quality of the evidence using the GRADE instrument. We contacted study authors for additional information, then conducted meta-analyses, or reported results narratively in the case of insufficient data. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included five RCTs involving 201 carers assessing the effectiveness of MBSR. Controls used in included studies varied in structure and content. Mindfulness-based stress reduction programmes were compared with either active controls (those matched for time and attention with MBSR, i.e. education, social support, or progressive muscle relaxation), or inactive controls (those not matched for time and attention with MBSR, i.e. self help education or respite care). One trial used both active and inactive comparisons with MBSR. All studies were at high risk of bias in terms of blinding of outcome assessment. Most studies provided no information about selective reporting, incomplete outcome data, or allocation concealment.\n1. Compared with active controls, MBSR may reduce depressive symptoms of carers at the end of the intervention (3 trials, 135 participants; standardised mean difference (SMD) -0.63, 95% confidence interval (CI) -0.98 to -0.28; P<0.001; low-quality evidence). We could not be certain of any effect on clinically significant depressive symptoms (very low-quality evidence).\nMindfulness-based stress reduction compared with active control may decrease carer anxiety at the end of the intervention (1 trial, 78 participants; mean difference (MD) -7.50, 95% CI -13.11 to -1.89; P<0.001; low-quality evidence) and may slightly increase carer burden (3 trials, 135 participants; SMD 0.24, 95% CI -0.11 to 0.58; P=0.18; low-quality evidence), although both results were imprecise, and we could not exclude little or no effect. 
Due to the very low quality of the evidence, we could not be sure of any effect on carers' coping style, nor could we determine whether carers were more or less likely to drop out of treatment.\n2. Compared with inactive controls, MBSR showed no clear evidence of any effect on depressive symptoms (2 trials, 50 participants; MD -1.97, 95% CI -6.89 to 2.95; P=0.43; low-quality evidence). We could not be certain of any effect on clinically significant depressive symptoms (very low-quality evidence).\nIn this comparison, MBSR may also reduce carer anxiety at the end of the intervention (1 trial, 33 participants; MD -7.27, 95% CI -14.92 to 0.38; P=0.06; low-quality evidence), although we were unable to exclude little or no effect. Due to the very low quality of the evidence, we could not be certain of any effects of MBSR on carer burden, the use of positive coping strategies, or dropout rates.\nWe found no studies that looked at quality of life of carers or care-recipients, or institutionalisation.\nOnly one included study reported on adverse events, noting a single adverse event related to yoga practices at home\nAuthors' conclusions\nAfter accounting for non-specific effects of the intervention (i.e. comparing it with an active control), low-quality evidence suggests that MBSR may reduce carers' depressive symptoms and anxiety, at least in the short term.\nThere are significant limitations to the evidence base on MBSR in this population. Our GRADE assessment of the evidence was low to very low quality. We downgraded the quality of the evidence primarily because of high risk of detection or performance bias, and imprecision.\nIn conclusion, MBSR has the potential to meet some important needs of the carer, but more high-quality studies in this field are needed to confirm its efficacy.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We were able to analyse study data from five randomised controlled trials involving a total of 201 carers. Findings from three studies (135 carers) showed that carers receiving MBSR may have a lower level of depressive symptoms at the end of treatment than those receiving an active control treatment. However, we found no clear evidence of any effect on depression when MBSR was compared with an inactive control treatment. Mindfulness-based stress reduction may also lead to a reduction in carers' anxiety symptoms at the end of treatment. Mindfulness-based stress reduction may slightly increase carers' feelings of burden. However, the results on anxiety and burden were very uncertain. We were unable to draw conclusions about carers' coping strategies and the risk of dropping out of treatment due to the very low quality of the evidence.\nNone of the studies measured quality of life of carers or people with dementia, or the rate of admission of people with dementia to care homes or hospitals.\nOnly one included study reported on adverse events, noting one minor adverse event (neck strain in one participant practising yoga at home)"
}
] |
query-laysum
|
24245
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with dementia can have feeding and swallowing difficulties (dysphagia). Modification of the consistency of food or fluids, or both, is a common management strategy. However, diet modification can affect quality of life and may lead to dehydration and malnutrition. Evidence on the benefits and risks of modifying food and fluids is mandatory to improve the care of people with dementia and dysphagia.\nObjectives\nTo determine the effectiveness and adverse effects associated with modifying the consistency of food and fluids in improving oral intake and eliminating aspiration in adults with dysphagia and dementia.\nSearch methods\nWe searched ALOIS (the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group), the Cochrane Library, MEDLINE via Ovid SP, Embase via Ovid SP, PsycINFO via Ovid SP, CINAHL via EBSCOhost, LILACS via BIREME, ClinicalTrials.gov and the World Health Organization (WHO) Portal on 9 May 2018. We also checked the reference lists of relevant articles to identify any additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), quasi-RCTs and cluster-RCTs published in any language that measured any of the outcomes of interest. We included trials with adults with a clinical diagnosis of dementia with symptoms and signs of dysphagia confirmed on instrumental assessment. We included participants with all types, stages and severities of dementia. Control groups received either no intervention or interventions not involving diet modification or modification to sensory properties of food.\nData collection and analysis\nTwo review authors independently assessed for inclusion all potential studies identified. Data were extracted independently along with assessment of methodological quality using standard Cochrane methods. We contacted study authors for additional unpublished information.\nMain results\nNo trials on modification of food met the inclusion criteria. We included two studies that examined modification to fluids. Both were part of the same large multicentre trial and included people with dementia and people with or without dementia and Parkinson's disease. Participation in the second trial was determined by results from the first trial. With unpublished data supplied by study authors, we examined data from participants with dementia only. The first study, a cross-over trial, investigated the immediate effects on aspiration of two viscosities of liquids (nectar thick and honey thick) compared to regular liquids in 351 participants with dementia using videofluoroscopy. Regular liquids with a chin down head posture, as well as regular liquids without any intervention were also compared. The sequence of interventions during videofluoroscopy may have influenced response to intervention. The second study, a parallel designed RCT, compared the effect of nectar and honey thick liquids with a chin down head posture over a three-month period in a subgroup of 260 participants with dementia. Outcomes were pneumonia and adverse intervention effects. Honey thick liquids, which are more consistent with descriptors for 'spoon thick' or 'extremely thick' liquids, showed a more positive impact on immediate elimination of aspiration during videofluoroscopy, but this consistency showed more adverse effects in the second follow-up study. 
During the second three-month follow-up trial, there were a greater number of incidents of pneumonia in participants receiving honey thick liquids than those receiving nectar thick liquids or taking regular liquids with a chin down posture. There were no deaths classified as 'definitely related' to the type of fluids prescribed. Neither trial addressed quality of life. Risk of bias for both studies is high. The overall quality of evidence for outcomes in this review is low.\nAuthors' conclusions\nWe are uncertain about the immediate and long-term effects of modifying the consistency of fluid for swallowing difficulties in dementia as too few studies have been completed. There may be differences in outcomes depending on the grade of thickness of fluids and the sequence of interventions trialled in videofluoroscopy for people with dementia. Clinicians should be aware that while thickening fluids may have an immediate positive effect on swallowing, the long-term impact of thickened fluids on the health of the person with dementia should be considered. Further high-quality clinical trials are required.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Individuals with dementia often present with swallowing difficulties (dysphagia). The consequences can include choking, dehydration, malnutrition, weight loss, pneumonia and death. Modification of food and liquid is a popular management strategy. It is believed that increasing the viscosity of liquids or altering the consistency of food allows individuals a better opportunity to swallow, with a reduced risk of choking or liquids entering the airway. However, there is growing evidence suggesting that this strategy can lead to dehydration, malnutrition, negative psychological/social consequences, and can affect quality of life for the person with dementia."
}
] |
query-laysum
|
24246
|
[
{
"role": "user",
"content": "Abstract: Background\nCongenital adrenal hyperplasia (CAH) is an autosomal recessive condition which leads to glucocorticoid deficiency and is the most common cause of adrenal insufficiency in children. In over 90% of cases, 21-hydroxylase enzyme deficiency is found which is caused by mutations in the 21-hydroxylase gene. Managing individuals with CAH due to 21-hydroxylase deficiency involves replacing glucocorticoids with oral glucocorticoids (including prednisolone and hydrocortisone), suppressing adrenocorticotrophic hormones and replacing mineralocorticoids to prevent salt wasting. During childhood, the main aims of treatment are to prevent adrenal crises and to achieve normal stature, optimal adult height and to undergo normal puberty. In adults, treatment aims to prevent adrenal crises, ensure normal fertility and to avoid the long-term consequences of glucocorticoid use. Current glucocorticoid treatment regimens can not optimally replicate the normal physiological cortisol level and over-treatment or under-treatment is often reported.\nObjectives\nTo compare and determine the efficacy and safety of different glucocorticoid replacement regimens in the treatment of CAH due to 21-hydroxylase deficiency in children and adults.\nSearch methods\nWe searched the Cochrane Inborn Errors of Metabolism Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. We also searched the reference lists of relevant articles and reviews, and trial registries (ClinicalTrials.gov and WHO ICTRP).\nDate of last search of trials register: 24 June 2019.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs comparing different glucocorticoid replacement regimens for treating CAH due to 21-hydroxylase deficiency in children and adults.\nData collection and analysis\nThe authors independently extracted and analysed the data from different interventions. They undertook the comparisons separately and used GRADE to assess the quality of the evidence.\nMain results\nSearches identified 1729 records with 43 records subject to further examination. After screening, we included five RCTs (six references) with a total of 101 participants and identified a further six ongoing RCTs. The number of participants in each trial varied from six to 44, with participants' ages ranging from 3.6 months to 21 years. Four trials were of cross-over design and one was of parallel design. Duration of treatment ranged from two weeks to six months per treatment arm with an overall follow-up between six and 12 months for all trials. Overall, we judged the quality of the trials to be at moderate to high risk of bias; with lack of methodological detail leading to unclear or high risk of bias judgements across many of the domains.\nAll trials employed an oral glucocorticoid replacement therapy, but with different daily schedules and dose levels. Three trials compared different dose schedules of hydrocortisone (HC), one three-arm trial compared HC to prednisolone (PD) and dexamethasone (DXA) and one trial compared HC with fludrocortisone to PD with fludrocortisone. 
Due to the heterogeneity of the trials and the limited amount of evidence, we were unable to perform any meta-analyses.\nNo trials reported on quality of life, prevention of adrenal crisis, presence of osteopenia, presence of testicular or ovarian adrenal rest tumours, subfertility or final adult height.\nFive trials (101 participants) reported androgen normalisation but using different measurements (very low-quality evidence for all measurements). Five trials reported 17 hydroxyprogesterone (17 OHP) levels, four trials reported androstenedione, three trials reported testosterone and one trial reported dehydroepiandrosterone sulphate (DHEAS). After four weeks, results from one trial (15 participants) showed a high morning dose of HC or a high evening dose made little or no difference in 17 OHP, testosterone, androstenedione and DHEAS. One trial (27 participants) found that HC and DXA treatment suppressed 17 OHP and androstenedione more than PD treatment after six weeks and a further trial (eight participants) reported no difference in 17 OHP between the five different dosing schedules of HC at between four and six weeks. One trial (44 participants) comparing HC and PD found no differences in the values of 17 OHP, androstenedione and testosterone at one year. One trial (26 participants) of HC versus HC plus fludrocortisone found that at six months 17 OHP and androstenedione levels were more suppressed on HC alone, but there were no differences noted in testosterone levels.\nWhile no trials reported on absolute final adult height, we reported some surrogate markers. Three trials reported on growth and bone maturation and two trials reported on height velocity. One trial found height velocity was reduced at six months in 26 participants given once daily HC 25 mg/m²/day compared to once daily HC 15 mg/m²/day (both groups also received fludrocortisone 0.1 mg/day), but as the quality of the evidence was very low we are unsure whether the variation in HC dose caused the difference. There were no differences noted in growth hormone or IGF1 levels. The results from another trial (44 participants) indicate no difference in growth velocity between HC and PD at one year (very low-quality evidence), but this trial did report that once daily PD treatment may lead to better control of bone maturation compared to HC in prepubertal children and that the absolute change in bone age/chronological age ratio was higher in the HC group compared to the PD group.\nAuthors' conclusions\nThere are currently limited trials comparing the efficacy and safety of different glucocorticoid replacement regimens for treating 21-hydroxylase deficiency CAH in children and adults and we were unable to draw any firm conclusions based on the evidence that was presented in the included trials.\nNo trials included long-term outcomes such as quality of life, prevention of adrenal crisis, presence of osteopenia, presence of testicular or ovarian adrenal rest tumours, subfertility and final adult height. There were no trials examining a modified-release formulation of HC or use of 24-hour circadian continuous subcutaneous infusion of hydrocortisone. As a consequence, uncertainty remains about the most effective form of glucocorticoid replacement therapy in CAH for children and adults.\nFuture trials should include both children and adults with CAH. A longer duration of follow-up is required to monitor biochemical and clinical outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All trials used an oral therapy, but with different daily schedules and dose levels of steroids. Three trials compared different dose schedules of hydrocortisone, one trial compared hydrocortisone to prednisolone and dexamethasone and one trial compared hydrocortisone with fludrocortisone to prednisolone with fludrocortisone. We found no trials using a modified-release formulation of hydrocortisone or a continuous 24-hour delivery under the skin of hydrocortisone.\nFive trials reported androgen normalisation but using different measurements; none of these results showed any consistent and real difference between therapies. In one trial (26 participants) participants taking a higher dose of hydrocortisone reported a higher growth rate, but we are not sure whether this was directly due to the treatment. In a second trial (44 participants) comparing hydrocortisone to prednisolone we are unsure whether the therapies affect growth rate (due to the very low quality of the evidence).\nNo trials included long-term outcomes for quality of life, preventing an adrenal crisis, presence of bone fragility, presence of testicular or ovarian adrenal rest tumours, difficulty in conceiving and final adult height."
}
] |
query-laysum
|
24247
|
[
{
"role": "user",
"content": "Abstract: Background\nThe majority of people with epilepsy have a good prognosis and their seizures are controlled by a single antiepileptic drug. However, up to 20% of patients from population-based studies, and up to 30% from clinical series (not population-based), develop drug-resistant epilepsy, especially those with focal-onset seizures. In this review, we summarise the current evidence regarding topiramate, an antiepileptic drug first marketed in 1996, when used as an add-on treatment for drug-resistant focal epilepsy.\nThis is an update of a Cochrane Review first published in 1999, and last updated in 2014.\nObjectives\nTo evaluate the efficacy and tolerability of topiramate when used as an add-on treatment for people with drug-resistant focal epilepsy.\nSearch methods\nFor the latest update of this review we searched the following databases on 2 July 2018: Cochrane Register of Studies (CRS Web), which includes the Cochrane Epilepsy Group Specialized Register and the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (Ovid, 1946- ); ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP). We imposed no language restrictions. We also contacted the manufacturers of topiramate and researchers in the field to identify any ongoing or unpublished studies.\nSelection criteria\nRandomised, placebo-controlled add-on trials of topiramate, recruiting people with drug-resistant focal epilepsy.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted the relevant data. We assessed the following outcomes: (1) 50% or greater reduction in seizure frequency; (2) seizure freedom; (3) treatment withdrawal (any reason); (4) adverse effects. Primary analyses were intention-to-treat (ITT), and summary risk ratios (RRs) with 95% confidence intervals (95% CIs) are presented. We evaluated dose-response in regression models. We carried out a 'Risk of bias' assessment for each included study using the Cochrane 'Risk of bias' tool and assessed the overall certainty of evidence using the GRADE approach.\nMain results\nWe included 12 trials, representing 1650 participants. Baseline phases ranged from four to 12 weeks and double-blind phases ranged from 11 to 19 weeks. The RR for a 50% or greater reduction in seizure frequency with add-on topiramate compared to placebo was 2.71 (95% CI 2.05 to 3.59; 12 studies; high-certainty evidence). Dose regression analysis showed increasing effect with increasing topiramate dose demonstrated by an odds ratio (OR) of 1.45 (95% CI 1.28 to 1.64; P < 0.001) per 200 mg/d increase in topiramate dosage. The proportion of participants achieving seizure freedom was also significantly increased with add-on topiramate compared to placebo (RR 3.67, 95% CI 1.79 to 7.54; 8 studies; moderate-certainty evidence). Treatment withdrawal was significantly higher for add-on topiramate compared to placebo (RR 2.37, 95% CI 1.66 to 3.37; 12 studies; high-certainty evidence). 
The RRs for the following adverse effects indicate that they are significantly more prevalent with topiramate, compared to placebo: ataxia 2.29 (99% CI 1.10 to 4.77; 4 studies); concentration difficulties 7.81 (99% CI 2.08 to 29.29; 6 studies; moderate-certainty evidence); dizziness 1.52 (99% CI 1.07 to 2.16; 8 studies); fatigue 2.08 (99% CI 1.37 to 3.15; 10 studies); paraesthesia 3.65 (99% CI 1.58 to 8.39; 7 studies; moderate-certainty evidence); somnolence 2.44 (99% CI 1.61 to 3.68; 9 studies); 'thinking abnormally' 5.70 (99% CI 2.26 to 14.38; 4 studies; high-certainty evidence); and weight loss 3.99 (99% CI 1.82 to 8.72; 9 studies; low-certainty evidence). Evidence of publication bias for the primary outcome was found (Egger test, P = 0.001). We rated all studies included in the review as having either low or unclear risk of bias. Overall, we assessed the evidence as moderate to high certainty due to the evidence of publication bias, statistical heterogeneity and imprecision, which was partially compensated for by large effect sizes.\nAuthors' conclusions\nTopiramate has efficacy as an add-on treatment for drug-resistant focal epilepsy as it is almost three times more effective compared to a placebo in reducing seizures. The trials reviewed were of relatively short duration and provided no evidence for the long-term efficacy of topiramate. Short-term use of add-on topiramate was shown to be associated with several adverse events. The results of this review should only be applied to adult populations as only one study included children. Future research should consider further examining the effect of dose.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Epilepsy is a disorder where recurrent seizures are caused by abnormal electrical discharges from the brain. Most seizures can be controlled by a single antiepileptic drug. Unfortunately, some people require more than one antiepileptic medication to control their seizures, especially if these originate from one area of the brain (focal epilepsy), instead of affecting the entire brain (generalised epilepsy). These people are said to have drug-resistant epilepsy. Topiramate can be used in addition to other antiepileptic drugs, called an add-on treatment, to try to control drug-resistant epilepsy."
}
] |
query-laysum
|
24248
|
[
{
"role": "user",
"content": "Abstract: Background\nContractures are a common complication of neurological and non-neurological conditions, and are characterised by a reduction in joint mobility. Stretch is widely used for the treatment and prevention of contractures. However, it is not clear whether stretch is effective. This review is an update of the original 2010 version of this review.\nObjectives\nThe aim of this review was to determine the effects of stretch on contractures in people with, or at risk of developing, contractures.The outcomes of interest were joint mobility, quality of life, pain, activity limitations, participation restrictions, spasticity and adverse events.\nSearch methods\nIn November 2015 we searched CENTRAL, DARE, HTA; MEDLINE; Embase; CINAHL; SCI-EXPANDED; PEDro and trials registries.\nSelection criteria\nWe included randomised controlled trials and controlled clinical trials of stretch applied for the purpose of treating or preventing contractures.\nData collection and analysis\nTwo review authors independently selected trials, extracted data, and assessed risk of bias. The outcomes of interest were joint mobility, quality of life, pain, activity limitations, participation restrictions and adverse events. We evaluated outcomes in the short term (up to one week after the last stretch) and in the long term (more than one week). We expressed effects as mean differences (MD) or standardised mean differences (SMD) with 95% confidence intervals (CI). We conducted meta-analyses with a random-effects model. We assessed the quality of the body of evidence for the main outcomes using GRADE.\nMain results\nForty-nine studies with 2135 participants met the inclusion criteria. No study performed stretch for more than seven months. Just over half the studies (51%) were at low risk of selection bias; all studies were at risk of detection bias for self reported outcomes such as pain and at risk of performance bias due to difficulty of blinding the intervention. However, most studies were at low risk of detection bias for objective outcomes including range of motion, and the majority of studies were free from attrition and selective reporting biases. The effect of these biases were unlikely to be important, given that there was little benefit with treatment. There was high-quality evidence that stretch did not have clinically important short-term effects on joint mobility in people with neurological conditions (MD 2°; 95% CI 0° to 3°; 26 studies with 699 participants) or non-neurological conditions (SMD 0.2, 95% CI 0 to 0.3, 19 studies with 925 participants).\nIn people with neurological conditions, it was uncertain whether stretch had clinically important short-term effects on pain (SMD 0.2; 95% CI -0.1 to 0.5; 5 studies with 174 participants) or activity limitations (SMD 0.2; 95% CI -0.1 to 0.5; 8 studies with 247 participants). No trials examined the short-term effects of stretch on quality of life or participation restrictions in people with neurological conditions. 
Five studies involving 145 participants reported eight adverse events including skin breakdown, bruising, blisters and pain but it was not possible to statistically analyse these data.\nIn people with non-neurological conditions, there was high-quality evidence that stretch did not have clinically important short-term effects on pain (SMD -0.2, 95% CI -0.4 to 0.1; 7 studies with 422 participants) and moderate-quality evidence that stretch did not have clinically important short-term effects on quality of life (SMD 0.3, 95% CI -0.1 to 0.7; 2 studies with 97 participants). The short-term effects of stretch on activity limitations (SMD 0.1; 95% CI -0.2 to 0.3; 5 studies with 356 participants) and participation restrictions were uncertain (SMD -0.2; 95% CI -0.6 to 0.1; 2 studies with 192 participants). Nine studies involving 635 participants reported 41 adverse events including numbness, pain, Raynaud's phenomenon, venous thrombosis, need for manipulation under anaesthesia, wound infections, haematoma, flexion deficits and swelling but it was not possible to statistically analyse these data.\nAuthors' conclusions\nThere was high-quality evidence that stretch did not have clinically important effects on joint mobility in people with or without neurological conditions if performed for less than seven months. Sensitivity analyses indicate results were robust in studies at risk of selection and detection biases in comparison to studies at low risk of bias. Sub-group analyses also suggest the effect of stretch is consistent in people with different types of neurological or non-neurological conditions. The effects of stretch performed for periods longer than seven months have not been investigated. There was moderate- and high-quality evidence that stretch did not have clinically important short-term effects on quality of life or pain in people with non-neurological conditions, respectively. The short-term effects of stretch on quality of life and pain in people with neurological conditions, and the short-term effects of stretch on activity limitations and participation restrictions for people with and without neurological conditions are uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "there was high-quality evidence that stretch does not have clinically important short-term effects on joint mobility in people with neurological or non-neurological conditions. There was high quality evidence that stretch does not have clinically important short-term effects on pain, and moderate-quality evidence that stretch does not have clinically important short-term effects on quality of life in people with non-neurological conditions."
}
] |
query-laysum
|
24249
|
[
{
"role": "user",
"content": "Abstract: Background\n18F-florbetaben uptake by brain tissue, measured by positron emission tomography (PET), is accepted by regulatory agencies like the Food and Drug Administration (FDA) and the European Medicine Agencies (EMA) for assessing amyloid load in people with dementia. Its added value is mainly demonstrated by excluding Alzheimer's pathology in an established dementia diagnosis. However, the National Institute on Aging and Alzheimer's Association (NIA-AA) revised the diagnostic criteria for Alzheimer's disease and confidence in the diagnosis of mild cognitive impairment (MCI) due to Alzheimer's disease may be increased when using some amyloid biomarkers tests like 18F-florbetaben. These tests, added to the MCI core clinical criteria, might increase the diagnostic test accuracy (DTA) of a testing strategy. However, the DTA of 18F-florbetaben to predict the progression from MCI to Alzheimer’s disease dementia (ADD) or other dementias has not yet been systematically evaluated.\nObjectives\nTo determine the DTA of the 18F-florbetaben PET scan for detecting people with MCI at time of performing the test who will clinically progress to ADD, other forms of dementia (non-ADD), or any form of dementia at follow-up.\nSearch methods\nThe most recent search for this review was performed in May 2017. We searched MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP), BIOSIS Citation Index (Thomson Reuters Web of Science), Web of Science Core Collection, including the Science Citation Index (Thomson Reuters Web of Science) and the Conference Proceedings Citation Index (Thomson Reuters Web of Science), LILACS (BIREME), CINAHL (EBSCOhost), ClinicalTrials.gov (https://clinicaltrials.gov), and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (http://www.who.int/ictrp/search/en/). We also searched ALOIS, the Cochrane Dementia & Cognitive Improvement Group’s specialised register of dementia studies (http://www.medicine.ox.ac.uk/alois/). We checked the reference lists of any relevant studies and systematic reviews, and performed citation tracking using the Science Citation Index to identify any additional relevant studies. No language or date restrictions were applied to electronic searches.\nSelection criteria\nWe included studies that had prospectively defined cohorts with any accepted definition of MCI at time of performing the test and the use of 18F-florbetaben scan to evaluate the DTA of the progression from MCI to ADD or other forms of dementia. In addition, we only selected studies that applied a reference standard for Alzheimer’s dementia diagnosis, for example, the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer’s Disease and Related Disorders Association (NINCDS-ADRDA) or Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) criteria.\nData collection and analysis\nWe screened all titles and abstracts identified in electronic-database searches. Two review authors independently selected studies for inclusion and extracted data to create two-by-two tables, showing the binary test results cross-classified with the binary reference standard. We used these data to calculate sensitivities, specificities, and their 95% confidence intervals. 
Two independent assessors performed quality assessment using the QUADAS-2 tool plus some additional items to assess the methodological quality of the included studies.\nMain results\nProgression from MCI to ADD, any other form of dementia, and any form of dementia was evaluated in one study (Ong 2015). It reported data on 45 participants at four years of follow-up; 21 participants met NINCDS-ADRDA criteria for Alzheimer’s disease dementia at four years of follow-up, the proportion converting to ADD was 47% of the 45 participants, and 11% of the 45 participants met criteria for other types of dementias (three cases of FrontoTemporal Dementia (FTD), one of Dementia with Lewy body (DLB), and one of Progressive Supranuclear Palsy (PSP)). We considered the study to be at high risk of bias in the domains of the reference standard, flow, and timing (QUADAS-2).\nMCI to ADD; 18F-florbetaben PET scan analysed visually: the sensitivity was 100% (95% confidence interval (CI) 84% to 100%) and the specificity was 83% (95% CI 63% to 98%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 100% (95% CI 84% to 100%) and the specificity was 88% (95% CI 68% to 97%) for the diagnosis of ADD at follow-up (n = 45, 1 study).\nMCI to any other form of dementia (non-ADD);18F-florbetaben PET scan analysed visually: the sensitivity was 0% (95% CI 0% to 52%) and the specificity was 38% (95% CI 23% to 54%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 0% (95% CI 0% to 52%) and the specificity was 40% (95% CI 25% to 57%) for the diagnosis of any other form of dementia at follow-up (n = 45, 1 study).\nMCI to any form of dementia;18F-florbetaben PET scan analysed visually: the sensitivity was 81% (95% CI 61% to 93%) and the specificity was 79% (95% CI 54% to 94%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 81% (95% CI 61% to 93%) and the specificity was 84% (95% CI 60% to 97%) for the diagnosis of any form of dementia at follow-up (n = 45, 1 study).\nAuthors' conclusions\nAlthough we were able to calculate one estimation of DTA in, especially, the prediction of progression from MCI to ADD at four years follow-up, the small number of participants implies imprecision of sensitivity and specificity estimates. We cannot make any recommendation regarding the routine use of 18F-florbetaben in clinical practice based on one single study with 45 participants. 18F-florbetaben has high financial costs, therefore, clearly demonstrating its DTA and standardising the process of the 18F-florbetaben modality are important prior to its wider use.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In people with mild cognitive impairment (MCI), does using a18F PET scan with florbetaben predict progression to Alzheimer's disease dementia (ADD) and other dementias?"
}
] |
query-laysum
|
24250
|
[
{
"role": "user",
"content": "Abstract: Background\nA pneumoperitoneum of 12 to 16 mm Hg is used for laparoscopic cholecystectomy. Lower pressures are claimed to be safe and effective in decreasing cardiopulmonary complications and pain.\nObjectives\nTo assess the benefits and harms of low pressure pneumoperitoneum compared with standard pressure pneumoperitoneum in people undergoing laparoscopic cholecystectomy.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE, and Science Citation Index Expanded until February 2013 to identify randomised trials,\nusing search strategies.\nSelection criteria\nWe considered only randomised clinical trials, irrespective of language, blinding, or publication status for inclusion in the review.\nData collection and analysis\nTwo review authors independently identified trials and independently extracted data. We calculated the risk ratio (RR), mean difference (MD), or standardised mean difference (SMD) with 95% confidence intervals (CI) using both fixed-effect and random-effects models with RevMan 5 based on available case analysis.\nMain results\nA total of 1092 participants randomly assigned to the low pressure group (509 participants) and the standard pressure group (583 participants) in 21 trials provided information for this review on one or more outcomes. Three additional trials comparing low pressure pneumoperitoneum with standard pressure pneumoperitoneum (including 179 participants) provided no information for this review. Most of the trials included low anaesthetic risk participants undergoing elective laparoscopic cholecystectomy. One trial including 140 participants was at low risk of bias. The remaining 20 trials were at high risk of bias. The overall quality of evidence was low or very low. No mortality was reported in either the low pressure group (0/199; 0%) or the standard pressure group (0/235; 0%) in eight trials that reported mortality. One participant experienced the outcome of serious adverse events (low pressure group 1/179, 0.6%; standard pressure group 0/215, 0%; seven trials; 394 participants; RR 3.00; 95% CI 0.14 to 65.90; very low quality evidence). Quality of life, return to normal activity, and return to work were not reported in any of the trials. The difference between groups in the conversion to open cholecystectomy was imprecise (low pressure group 2/269, adjusted proportion 0.8%; standard pressure group 2/287, 0.7%; 10 trials; 556 participants; RR 1.18; 95% CI 0.29 to 4.72; very low quality evidence) and was compatible with an increase, a decrease, or no difference in the proportion of conversion to open cholecystectomy due to low pressure pneumoperitoneum. No difference in the length of hospital stay was reported between the groups (five trials; 415 participants; MD -0.30 days; 95% CI -0.63 to 0.02; low quality evidence). Operating time was about two minutes longer in the low pressure group than in the standard pressure group (19 trials; 990 participants; MD 1.51 minutes; 95% CI 0.07 to 2.94; very low quality evidence).\nAuthors' conclusions\nLaparoscopic cholecystectomy can be completed successfully using low pressure in approximately 90% of people undergoing laparoscopic cholecystectomy. However, no evidence is currently available to support the use of low pressure pneumoperitoneum in low anaesthetic risk patients undergoing elective laparoscopic cholecystectomy. The safety of low pressure pneumoperitoneum has to be established. 
Further well-designed trials are necessary, particularly in people with cardiopulmonary disorders who undergo laparoscopic cholecystectomy.\n\nGiven the provided abstract, please respond to the following query: \"Future research\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Further well-designed trials are necessary, particularly in high surgical risk patients undergoing laparoscopic cholecystectomy."
}
] |
query-laysum
|
24251
|
[
{
"role": "user",
"content": "Abstract: Background\nAnxiety in relation to surgery is a well-known problem. Melatonin offers an alternative treatment to benzodiazepines for ameliorating this condition in the preoperative and postoperative periods.\nObjectives\nTo assess the effects of melatonin on preoperative and postoperative anxiety compared to placebo or benzodiazepines.\nSearch methods\nWe searched the following databases on 10 July 2020: CENTRAL, MEDLINE, Embase, CINAHL, and Web of Science. For ongoing trials and protocols, we searched clinicaltrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform.\nSelection criteria\nWe included randomized, placebo-controlled or standard treatment-controlled (or both) studies that evaluated the effects of preoperatively administered melatonin on preoperative or postoperative anxiety. We included adult patients of both sexes (15 to 90 years of age) undergoing any kind of surgical procedure for which it was necessary to use general, regional, or topical anaesthesia.\nData collection and analysis\nOne review author conducted data extraction in duplicate. Data extracted included information about study design, country of origin, number of participants and demographic details, type of surgery, type of anaesthesia, intervention and dosing regimens, preoperative anxiety outcome measures, and postoperative anxiety outcome measures.\nMain results\nWe included 27 randomized controlled trials (RCTs), involving 2319 participants, that assessed melatonin for treating preoperative anxiety, postoperative anxiety, or both.\nTwenty-four studies compared melatonin with placebo. Eleven studies compared melatonin to a benzodiazepine (seven studies with midazolam, three studies with alprazolam, and one study with oxazepam). Other comparators in a small number of studies were gabapentin, clonidine, and pregabalin.\nNo studies were judged to be at low risk of bias for all domains. Most studies were judged to be at unclear risk of bias overall. Eight studies were judged to be at high risk of bias in one or more domain, and thus, to be at high risk of bias overall.\nMelatonin versus placebo\nMelatonin probably results in a reduction in preoperative anxiety measured by a visual analogue scale (VAS, 0 to 100 mm) compared to placebo (mean difference (MD) -11.69, 95% confidence interval (CI) -13.80 to -9.59; 18 studies, 1264 participants; moderate-certainty evidence), based on a meta-analysis of 18 studies.\nMelatonin may reduce immediate postoperative anxiety measured on a 0 to 100 mm VAS compared to placebo (MD -5.04, 95% CI -9.52 to -0.55; 7 studies, 524 participants; low-certainty evidence), and may reduce delayed postoperative anxiety measured six hours after surgery using the State-Trait Anxiety Inventory (STAI) (MD -5.31, 95% CI -8.78 to -1.84; 2 studies; 73 participants; low-certainty evidence).\nMelatonin versus benzodiazepines (midazolam and alprazolam)\nMelatonin probably results in little or no difference in preoperative anxiety measured on a 0 to 100 mm VAS (MD 0.78, 95% CI -2.02 to 3.58; 7 studies, 409 participants; moderate-certainty evidence) and there may be little or no difference in immediate postoperative anxiety (MD -2.12, 95% CI -4.61 to 0.36; 3 studies, 176 participants; low-certainty evidence).\nAdverse events\nFourteen studies did not report on adverse events. 
Six studies specifically reported that no side effects were observed, and the remaining seven studies reported cases of nausea, sleepiness, dizziness, and headache; however, no serious adverse events were reported. Eleven studies measured psychomotor and cognitive function, or both, and in general, these studies found that benzodiazepines impaired psychomotor and cognitive function more than placebo and melatonin. Fourteen studies evaluated sedation and generally found that benzodiazepine caused the highest degree of sedation, but melatonin also showed sedative properties compared to placebo. Several studies did not report on adverse events; therefore, it is not possible to conclude with certainty, from the data on adverse effects collected in this review, that melatonin is better tolerated than benzodiazepines.\nAuthors' conclusions\nWhen compared with placebo, melatonin given as premedication (as tablets or sublingually) probably reduces preoperative anxiety in adults (measured 50 to 120 minutes after administration), which is potentially clinically relevant. The effect of melatonin on postoperative anxiety compared to placebo (measured in the recovery room and six hours after surgery) was also evident but was much smaller, and the clinical relevance of this finding is uncertain. There was little or no difference in anxiety when melatonin was compared with benzodiazepines. Thus, melatonin may have a similar effect to benzodiazepines in reducing preoperative and postoperative anxiety in adults.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 27 randomized studies involving 2319 adult participants that looked at the effects of melatonin given before surgery on the level of anxiety both before and after surgery. Most studies were conducted in developing countries. We included any kind of surgical procedure in which general, regional, or topical anaesthesia was used.\nMelatonin doses varied from 3 to 10 mg or from 0.05 to 0.4 mg/kg. Benzodiazepine (midazolam, oxazepam, or alprazolam) doses ranged from 0.25 to 15 mg or from 0.05 to 0.2 mg/kg.\nNone of the studies reported receipt of funding from drug manufacturers or agencies with commercial interests."
}
] |
query-laysum
|
24252
|
[
{
"role": "user",
"content": "Abstract: Background\nThe Neonatal Task Force of the International Liaison Committee on Resuscitation (ILCOR) makes practice recommendations for the care of newborn infants in the delivery room (DR). ILCOR recommends that all infants who are gasping, apnoeic, or bradycardic (heart rate < 100 per minute) should be given positive pressure ventilation (PPV) with a manual ventilation device (T-piece, self-inflating bag, or flow-inflating bag) via an interface. The most commonly used interface is a face mask that encircles the infant’s nose and mouth. However, gas leak and airway obstruction are common during face mask PPV. Nasal interfaces (single and binasal prongs (long or short), or nasal masks) and laryngeal mask airways (LMAs) may also be used to deliver PPV to newborns in the DR, and may be more effective than face masks.\nObjectives\nTo determine whether newborn infants receiving PPV in the delivery room with a nasal interface compared to a face mask, laryngeal mask airway (LMA), or another type of nasal interface have reduced mortality and morbidity. To assess whether safety and efficacy of the nasal interface differs according to gestational age or ventilation device.\nSearch methods\nSearches were conducted in September 2022 in CENTRAL, MEDLINE, Embase, Epistemonikos, and two trial registries. We searched conference abstracts and checked the reference lists of included trials and related systematic reviews identified through the search.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCT's that compared the use of nasal interfaces to other interfaces (face masks, LMAs, or one nasal interface to another) to deliver PPV to newborn infants in the DR.\nData collection and analysis\nEach review author independently evaluated the search results against the selection criteria, screened retrieved records, extracted data, and appraised the risk of bias. If they were study authors, they did not participate in the selection, risk of bias assessment, or data extraction related to the study. In such instances, the study was independently assessed by other review authors. We contacted trial investigators to obtain additional information. We completed data analysis according to the standards of Cochrane Neonatal, using risk ratio (RR) and 95% confidence Intervals (CI) to measure the effect of the different interfaces. We used fixed-effect models and the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included five trials, in which 1406 infants participated. They were conducted in 13 neonatal centres across Europe and Australia. Each of these trials compared a nasal interface to a face mask for the delivery of respiratory support to newborn infants in the DR. Potential sources of bias were a lack of blinding to treatment allocation of the caregivers and investigators in all trials.\nThe evidence suggests that resuscitation with a nasal interface in the DR, compared with a face mask, may have little to no effect on reducing death before discharge (typical risk ratio (RR) 0.72, 95% CI 0.47 to 1.13; 3 studies, 1124 infants; low-certainty evidence).\nResuscitation with a nasal interface may reduce the rate of intubation in the DR, but the evidence is very uncertain (RR 0.68, 95% CI 0.54 to 0.85; 5 studies, 1406 infants; very low-certainty evidence). 
The evidence is very uncertain for the rate of intubation within 24 hours of birth (RR 0.97, 95% CI 0.85 to 1.09; 3 studies, 749 infants; very low-certainty evidence), endotracheal intubation outside the DR during hospitalisation (RR 1.15, 95% CI 0.93 to 1.42; 1 study, 144 infants; very low-certainty evidence) and cranial ultrasound abnormalities (intraventricular haemorrhage (IVH) grade ≥ 3, or periventricular leukomalacia; RR 0.94, 95% CI 0.55 to 1.61; 3 studies, 749 infants; very low-certainty evidence). Resuscitation with a nasal interface in the DR, compared with a face mask, may have little to no effect on the incidence of air leaks (RR 1.09, 95% CI 0.85 to 1.09; 2 studies, 507 infants; low-certainty evidence), or the need for supplemental oxygen at 36 weeks' corrected gestational age (RR 1.06, 95% CI 0.8 to 1.40; 2 studies, 507 infants; low-certainty evidence).\nWe identified one ongoing study, which compares a nasal mask to a face mask to deliver PPV to infants in the DR. We did not identify any completed trials that compared nasal interfaces to LMAs or one nasal interface to another.\nAuthors' conclusions\nNasal interfaces were found to offer comparable efficacy to face masks (low- to very low-certainty evidence), supporting resuscitation guidelines that state that nasal interfaces are a comparable alternative to face masks for providing respiratory support in the DR.\nResuscitation with a nasal interface may reduce the rate of intubation in the DR when compared with a face mask. However, the evidence is very uncertain. This uncertainty is attributed to the use of a new ventilation system in the nasal interface group in two of the five trials. As such, it is not possible to differentiate separate, specific effects related to the ventilation device or to the interface in these studies.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Applying a nasal interface or face mask is difficult to conceal and caregivers in all studies were aware of which group babies were allocated to. This may lead to unequal care for different study groups. It is not clear if the effect on intubation rates in the delivery room is due to the nasal interface or a new breathing system that was used in two of the five studies included in our analysis. As a result, we have little confidence in the evidence and the results of this outcome should be interpreted with caution."
}
] |
query-laysum
|
24253
|
[
{
"role": "user",
"content": "Abstract: Background\nKeloid scarring is one of the most common types of pathological scarring. Keloid scars that fail to heal can affect a person's physical and psychological function by causing pain, pruritus, contractures, and cosmetic disfigurement. Silicone gel sheeting (SGS) is made from medical-grade silicone reinforced with a silicone membrane backing and is one of the most commonly used treatments for keloid scars. However, there is no up-to-date systematic review assessing the effectiveness of SGS for keloid scars. A clear and rigorous review of current evidence is required to guide clinicians, healthcare managers and people with keloid scarring.\nObjectives\nTo assess the effectiveness of silicone gel sheeting for the treatment of keloid scars compared with standard care or other therapies.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was December 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs) that recruited people with any keloid scars and assessed the effectiveness of SGS.\nData collection and analysis\nTwo review authors independently performed study selection, risk of bias assessment, data extraction and GRADE assessment of the certainty of evidence. We resolved initial disagreements by discussion, or by consulting a third review author when necessary.\nMain results\nTwo studies met the inclusion criteria. Study sample sizes were 16 and 20 participants. The trials were clinically heterogeneous with differences in causes for scarring (e.g. surgery, infected wounds, and trauma), site (e.g. chest and back), and ages of scars. The duration of follow-up was three and four and a half months. The included studies reported three comparisons; SGS compared with no treatment, SGS compared with non-silicone gel sheeting (a dressing similar to SGS but which does not contain silicone), and SGS compared with intralesional injections of triamcinolone acetonide. One trial had a split-body design and one trial had an unclear design (resulting in a mix of paired and clustered data).\nThe included studies reported limited outcome data for the primary review outcome of scar severity measured by health professionals and no data were reported for severity of scar measured by patients or adverse events. For secondary outcomes some data on pain were reported, but health-related quality of life and cost-effectiveness were not reported. Both trials had suboptimal outcome reporting, thus many domains in the risk of bias were assessed as unclear. All evidence was rated as being very low-certainty, mainly due to risk of bias, indirectness, and imprecision.\nSGS compared with no treatment\nTwo studies with 33 participants (76 scars) reported the severity of scar assessed by health professionals, and we are uncertain about the effect of SGS on scar severity compared with no treatment (very low-certainty evidence, downgraded once for risk of bias, once for inconsistency, once for indirectness, and once for imprecision). We are uncertain about the effect of SGS on pain compared with no treatment (21 participants with 40 scars; very low-certainty evidence, downgraded once for risk of bias, once for inconsistency, once for indirectness, and once for imprecision). 
No data were reported for other outcomes including scar severity assessed by patients, adverse events, adherence to treatment, health-related quality of life and cost-effectiveness.\nSGS compared with non-SGS\nOne study with 16 participants (25 scars) was included in this comparison. We are uncertain about the effect of SGS on scar severity assessed by health professionals compared with non-SGS (very low-certainty evidence, downgraded once for risk of bias, once for indirectness, and once for imprecision). We are also uncertain about the effect of SGS on pain compared with non-SGS (very low-certainty evidence, downgraded once for risk of bias, once for indirectness, and once for imprecision). No data were reported for other outcomes including scar severity assessed by patients, adverse events, adherence to treatment, health-related quality of life and cost-effectiveness.\nSGS compared with intralesional injections of triamcinolone acetonide\nOne study with 17 participants (51 scars) reported scar severity assessed by health professionals, and we are uncertain about the effect of SGS on scar severity compared with intralesional injections of triamcinolone acetonide (very low-certainty evidence, downgraded once for risk of bias, once for indirectness, and once for imprecision). This study also reported pain assessed by health professionals among 5 participants (15 scars) and we are uncertain about the effect of SGS on pain compared with intralesional injections of triamcinolone acetonide (very low-certainty evidence, downgraded once for risk of bias, once for indirectness, and twice for imprecision). No data were reported for other outcomes including scar severity assessed by patients, adverse events, adherence to treatment, health-related quality of life and cost-effectiveness.\nAuthors' conclusions\nThere is currently a lack of RCT evidence about the clinical effectiveness of SGS in the treatment of keloid scars. From the two studies identified, there is insufficient evidence to demonstrate whether the use of SGS compared with no treatment, non-SGS, or intralesional injections of triamcinolone acetonide makes any difference in the treatment of keloid scars. Evidence from the included studies is of very low certainty, mainly driven by the risk of bias, indirectness, and imprecision due to small sample size. Further well-designed studies that have good reporting methodologies and address important clinical, quality of life and economic outcomes are required to reduce uncertainty around decision-making in the use of SGS to treat keloid scars.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are uncertain whether silicone gel sheeting improves a scar's appearance more than:\n- no treatment;\n- treatment with a dressing similar to silicone gel sheeting that does not contain silicone;\n- injections of triamcinolone acetonide (a medication) directly into a lesion or below the skin.\nWe are uncertain about the effect of silicone gel sheeting on pain compared with no treatment.\nWe do not know if silicone gel sheeting has an effect on pain compared with non-silicone gel sheeting or intralesional injections of triamcinolone acetonide."
}
] |
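The keloid-scar record above repeatedly tallies GRADE certainty as "downgraded once for risk of bias, once for inconsistency, once for indirectness, and once for imprecision". The short sketch below is only a toy illustration of that tallying logic, assuming evidence from randomised trials starts at "high" and loses one level per downgrade; the function name and level list are my own and this is not the official GRADE/GRADEpro tooling.

```python
# Toy illustration of GRADE downgrading as described in the record above:
# randomised-trial evidence starts at "high" and drops one level per concern,
# bottoming out at "very low". Simplified sketch, not the GRADEpro tool.
LEVELS = ["high", "moderate", "low", "very low"]

def grade_certainty(downgrades):
    """Return the certainty level left after a list of single-level downgrades."""
    steps = min(len(downgrades), len(LEVELS) - 1)
    return LEVELS[steps]

# SGS vs no treatment, scar severity: four concerns, one level each.
print(grade_certainty(["risk of bias", "inconsistency", "indirectness", "imprecision"]))
# -> "very low"
```

A domain downgraded twice (such as imprecision in the triamcinolone comparison) would simply appear twice in the list.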
query-laysum
|
24254
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic obstructive pulmonary disease (COPD) is a chronic respiratory condition characterised by persistent respiratory symptoms and airflow limitation. Acute exacerbations punctuate the natural history of COPD and are associated with increased morbidity and mortality and disease progression. Chronic airflow limitation is caused by a combination of small airways (bronchitis) and parenchymal destruction (emphysema), which can impact day-to-day activities and overall quality of life. In carefully selected patients with COPD, long-term, prophylactic use of antibiotics may reduce bacterial load, inflammation of the airways, and the frequency of exacerbations.\nObjectives\nTo assess effects of different prophylactic antibiotics on exacerbations, quality of life, and serious adverse events in people with COPD in three separate network meta-analyses (NMAs), and to provide rankings of identified antibiotics.\nSearch methods\nTo identify eligible randomised controlled trials (RCTs), we searched the Cochrane Airways Group Specialised Register of trials and clinical trials registries. We conducted the most recent search on 22 January 2020.\nSelection criteria\nWe included RCTs with a parallel design of at least 12 weeks' duration evaluating long-term administration of antibiotics prophylactically compared with other antibiotics, or placebo, for patients with COPD.\nData collection and analysis\nThis Cochrane Review collected and updated pair-wise data from two previous Cochrane Reviews. Searches were updated and additional studies included. We conducted three separate network meta-analyses (NMAs) within a Bayesian framework to assess three outcomes: exacerbations, quality of life, and serious adverse events. For quality of life, we collected data from St George's Respiratory Questionnaire (SGRQ). Using previously validated methods, we selected the simplest model that could adequately fit the data for every analysis. We used threshold analysis to indicate which results were robust to potential biases, taking into account each study’s contributions to the overall results and network structure. Probability ranking was performed for each antibiotic class for exacerbations, quality of life, and serious adverse events.\nMain results\nCharacteristics of studies and participants\nEight trials were conducted at multiple sites that included hospital clinics or academic health centres. Seven were single-centre trials conducted in hospital clinics. Two trials did not report settings. Trials durations ranged from 12 to 52 weeks. Most participants had moderate to severe disease. Mean age ranged from 64 years to 73 years, and more males were recruited (51% to 100%). Forced expiratory volume in one second (FEV₁) ranged from 0.935 to 1.36 L. Most participants had previous exacerbations. Data from 12 studies were included in the NMAs (3405 participants; 16 treatment arms including placebo). Prophylactic antibiotics evaluated were macrolides (azithromycin and erythromycin), tetracyclines (doxycyclines), quinolones (moxifloxacin) and macrolides plus tetracyclines (roxithromycin plus doxycycline).\nRisk of bias and threshold analysis\nMost studies were at low risk across domains, except detection bias, for which only seven studies were judged at low risk. In the threshold analysis for exacerbations, all comparisons in which one antibiotic was compared with another were robust to sampling variation, especially macrolide comparisons. 
Comparisons of classes with placebo were sensitive to potential bias, especially macrolide versus placebo, therefore, any bias in the comparison was likely to favour the active class, so any adjustment would bring the estimated relative effect closer to the null value, thus quinolone may become the best class to prevent exacerbations.\nExacerbations\nNine studies were included (2732 participants) in this NMA (exacerbations analysed as time to first exacerbation or people with one or more exacerbations). Macrolides and quinolones reduced exacerbations. Macrolides had a greater effect in reducing exacerbations compared with placebo (macrolides: hazard ratio (HR) 0.67, 95% credible interval (CrI) 0.60 to 0.75; quinolones: HR 0.89, 95% CrI 0.75 to 1.04), resulting in 127 fewer people per 1000 experiencing exacerbations on macrolides. The difference in exacerbations between tetracyclines and placebo was uncertain (HR 1.29, 95% CrI 0.66 to 2.41). Macrolides ranked first (95% CrI first to second), with quinolones ranked second (95% CrI second to third). Tetracyclines ranked fourth, which was lower than placebo (ranked third). Contributing studies were considered as low risk of bias in a threshold analysis.\nQuality of life (SGRQ)\nSeven studies were included (2237 participants) in this NMA. SGRQ scores improved with macrolide treatment compared with placebo (fixed effect-fixed class effect: mean difference (MD) -2.30, 95% CrI -3.61 to -0.99), but the mean difference did not reach the minimally clinical important difference (MCID) of 4 points. Tetracyclines and quinolones did not improve quality of life any more than placebo, and we did not detect a difference between antibiotic classes.\nSerious adverse events\nNine studies were included (3180 participants) in the NMA. Macrolides reduced the odds of a serious adverse event compared with placebo (fixed effect-fixed class effect: odds ratio (OR) 0.76, 95% CrI 0.62 to 0.93). There was probably little to no difference in the effect of quinolone compared with placebo or tetracycline plus macrolide compared with placebo. There was probably little to no difference in serious adverse events between quinolones or tetracycline plus macrolide. With macrolide treatment 49 fewer people per 1000 experienced a serious adverse event compared with those given placebo. Macrolides ranked first, followed by quinolones. Tetracycline did not rank better than placebo.\nDrug resistance\nTen studies reported drug resistance. Results were not combined due to variation in outcome measures. All studies concluded that prophylactic antibiotic administration was associated with the development of antimicrobial resistance.\nAuthors' conclusions\nThis NMA evaluated the safety and efficacy of different antibiotics used prophylactically for COPD patients. Compared to placebo, prolonged administration of macrolides (ranked first) appeared beneficial in prolonging the time to next exacerbation, improving quality of life, and reducing serious adverse events. No clear benefits were associated with use of quinolones or tetracyclines. In addition, antibiotic resistance was a concern and could not be thoroughly assessed in this review. Given the trade-off between effectiveness, safety, and risk of antibiotic resistance, prophylactic administration of antibiotics may be best reserved for selected patients, such as those experiencing frequent exacerbations. 
However, none of the eligible studies excluded patients with previously isolated non-tuberculous mycobacteria, which would contraindicate prophylactic administration of antibiotics, due to the risk of developing resistant non-tuberculous mycobacteria.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Which preventative antibiotic is effective and safe for reducing exacerbations, improving quality of life, and reducing serious side effects in people with COPD?"
}
] |
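The COPD record above converts a hazard ratio of 0.67 for macrolides into "127 fewer people per 1000 experiencing exacerbations". A minimal sketch of that conversion is given below, using the survival-style transformation commonly assumed for GRADE summary-of-findings tables; the control-group risk of 490 per 1000 is an illustrative assumption chosen so the numbers line up, since the abstract does not state the observed control-arm risk.

```python
def fewer_per_1000(hazard_ratio, control_risk):
    """Absolute difference per 1000 implied by a hazard ratio, assuming
    treated_risk = 1 - (1 - control_risk) ** HR (a common GRADE-style transformation)."""
    treated_risk = 1 - (1 - control_risk) ** hazard_ratio
    return 1000 * (control_risk - treated_risk)

# Macrolides vs placebo (HR 0.67), assuming a control risk of 490 per 1000:
print(f"{fewer_per_1000(0.67, 0.49):.0f} fewer per 1000")  # ~127, matching the review's figure
```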
query-laysum
|
24255
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatic encephalopathy is a common complication of cirrhosis which results in poor brain functioning. The spectrum of changes associated with hepatic encephalopathy ranges from the clinically 'indiscernible' or minimal hepatic encephalopathy to the clinically 'obvious' or overt hepatic encephalopathy. Flumazenil is a synthetic benzodiazepine antagonist with high affinity for the central benzodiazepine recognition site. Flumazenil may benefit people with hepatic encephalopathy through an indirect negative allosteric modulatory effect on gamma-aminobutyric acid receptor function. The previous version of this review, which included 13 randomised clinical trials, found no effect of flumazenil on all-cause mortality, based on an analysis of 10 randomised clinical trials, but found a beneficial effect on hepatic encephalopathy, based on an analysis of eight randomised clinical trials.\nObjectives\nTo evaluate the beneficial and harmful effects of flumazenil versus placebo or no intervention for people with cirrhosis and hepatic encephalopathy.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, and LILACS; meeting and conference proceedings; and bibliographies in May 2017.\nSelection criteria\nWe included randomised clinical trials regardless of publication status, blinding, or language in the analyses of benefits and harms, and observational studies in the assessment of harms.\nData collection and analysis\nTwo review authors extracted data independently. We undertook meta-analyses and presented results using risk ratios (RR) with 95% confidence intervals (CI) and I2 values as a marker of heterogeneity. We assessed bias control using the Cochrane Hepato-Biliary Group domains; determined the quality of the evidence using GRADE; evaluated the risk of small-study effects in regression analyses; and conducted trial sequential, subgroup, and sensitivity analyses.\nMain results\nWe identified 14 eligible randomised clinical trials with 867 participants, the majority of whom had an acute episode of overt hepatic encephalopathy. In addition, we identified one ongoing randomised clinical trial. We were unable to gather outcome data from two randomised clinical trials with 25 participants. Thus, our analyses include 842 participants from 12 randomised clinical trials comparing flumazenil versus placebo. We classified one randomised clinical trial at low risk of bias in the overall assessment and the remaining randomised clinical trials at high risk of bias. The duration of follow-up ranged from a few minutes to two weeks, but it was less than one day in the majority of the trials.\nIn total, 32/433 (7.4%) participants allocated to flumazenil versus 38/409 (9.3%) participants allocated to placebo died (RR 0.75, 95% CI 0.48 to 1.16; 11 randomised clinical trials; low quality evidence). The Trial Sequential Analysis and the one randomised clinical trial assessed as low risk of bias (RR 0.76, 95% CI 0.37 to 1.53) found no beneficial or harmful effects of flumazenil on all-cause mortality. The methods used to evaluate hepatic encephalopathy included several different clinical scales, electrophysiological variables, and psychometric tests. 
Flumazenil was associated with a beneficial effect on hepatic encephalopathy when including all randomised clinical trials (RR 0.75, 95% CI 0.71 to 0.80; 824 participants; nine randomised clinical trials; low quality evidence), or just the trial at low risk of bias (RR 0.78, 95% CI 0.72 to 0.84; 527 participants). The Trial Sequential Analysis supported a beneficial effect of flumazenil on hepatic encephalopathy. The randomised clinical trials included little information about causes of death and little information on non-fatal serious adverse events.\nAuthors' conclusions\nWe found low quality evidence suggesting a short-term beneficial effect of flumazenil on hepatic encephalopathy in people with cirrhosis, but no evidence of an effect on all-cause mortality. Additional evidence from large, high quality randomised clinical trials is needed to evaluate the potential benefits and harms of flumazenil in people with cirrhosis and hepatic encephalopathy.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases and conducted manual searches in May 2017."
}
] |
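The flumazenil record gives the raw death counts (32/433 versus 38/409) alongside a pooled RR of 0.75 (95% CI 0.48 to 1.16). The sketch below shows how a crude risk ratio and log-scale confidence interval are computed from a single 2x2 table; because the published estimate pools 11 trials in a meta-analysis, the crude figure here (about 0.80) does not reproduce it exactly.

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b, z=1.96):
    """Crude risk ratio with a 95% CI computed on the log scale (single 2x2 table)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Deaths: 32/433 with flumazenil vs 38/409 with placebo, all trials lumped together.
rr, lo, hi = risk_ratio(32, 433, 38, 409)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR 0.80 (95% CI 0.51 to 1.25)
```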
query-laysum
|
24256
|
[
{
"role": "user",
"content": "Abstract: Background\nSleep disturbances, including reduced nocturnal sleep time, sleep fragmentation, nocturnal wandering, and daytime sleepiness are common clinical problems in dementia, and are associated with significant carer distress, increased healthcare costs, and institutionalisation. Although non-drug interventions are recommended as the first-line approach to managing these problems, drug treatment is often sought and used. However, there is significant uncertainty about the efficacy and adverse effects of the various hypnotic drugs in this clinically vulnerable population.\nObjectives\nTo assess the effects, including common adverse effects, of any drug treatment versus placebo for sleep disorders in people with dementia.\nSearch methods\nWe searched ALOIS (www.medicine.ox.ac.uk/alois), the Cochrane Dementia and Cognitive Improvement Group's Specialized Register, on 19 February 2020, using the terms: sleep, insomnia, circadian, hypersomnia, parasomnia, somnolence, rest-activity, and sundowning.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared a drug with placebo, and that had the primary aim of improving sleep in people with dementia who had an identified sleep disturbance at baseline.\nData collection and analysis\nTwo review authors independently extracted data on study design, risk of bias, and results. We used the mean difference (MD) or risk ratio (RR) with 95% confidence intervals (CI) as the measures of treatment effect, and where possible, synthesised results using a fixed-effect model. Key outcomes to be included in our summary tables were chosen with the help of a panel of carers. We used GRADE methods to rate the certainty of the evidence.\nMain results\nWe found nine eligible RCTs investigating: melatonin (5 studies, n = 222, five studies, but only two yielded data on our primary sleep outcomes suitable for meta-analysis), the sedative antidepressant trazodone (1 study, n = 30), the melatonin-receptor agonist ramelteon (1 study, n = 74, no peer-reviewed publication), and the orexin antagonists suvorexant and lemborexant (2 studies, n = 323).\nParticipants in the trazodone study and most participants in the melatonin studies had moderate-to-severe dementia due to Alzheimer's disease (AD); those in the ramelteon study and the orexin antagonist studies had mild-to-moderate AD. Participants had a variety of common sleep problems at baseline. Primary sleep outcomes were measured using actigraphy or polysomnography. In one study, melatonin treatment was combined with light therapy. Only four studies systematically assessed adverse effects. Overall, we considered the studies to be at low or unclear risk of bias.\nWe found low-certainty evidence that melatonin doses up to 10 mg may have little or no effect on any major sleep outcome over eight to 10 weeks in people with AD and sleep disturbances. We could synthesise data for two of our primary sleep outcomes: total nocturnal sleep time (TNST) (MD 10.68 minutes, 95% CI –16.22 to 37.59; 2 studies, n = 184), and the ratio of day-time to night-time sleep (MD –0.13, 95% CI –0.29 to 0.03; 2 studies; n = 184). From single studies, we found no evidence of an effect of melatonin on sleep efficiency, time awake after sleep onset, number of night-time awakenings, or mean duration of sleep bouts. 
There were no serious adverse effects of melatonin reported.\nWe found low-certainty evidence that trazodone 50 mg for two weeks may improve TNST (MD 42.46 minutes, 95% CI 0.9 to 84.0; 1 study, n = 30), and sleep efficiency (MD 8.53%, 95% CI 1.9 to 15.1; 1 study, n = 30) in people with moderate-to-severe AD. The effect on time awake after sleep onset was uncertain due to very serious imprecision (MD –20.41 minutes, 95% CI –60.4 to 19.6; 1 study, n = 30). There may be little or no effect on number of night-time awakenings (MD –3.71, 95% CI –8.2 to 0.8; 1 study, n = 30) or time asleep in the day (MD 5.12 minutes, 95% CI –28.2 to 38.4). There were no serious adverse effects of trazodone reported.\nThe small (n = 74), phase 2 trial investigating ramelteon 8 mg was reported only in summary form on the sponsor's website. We considered the certainty of the evidence to be low. There was no evidence of any important effect of ramelteon on any nocturnal sleep outcomes. There were no serious adverse effects.\nWe found moderate-certainty evidence that an orexin antagonist taken for four weeks by people with mild-to-moderate AD probably increases TNST (MD 28.2 minutes, 95% CI 11.1 to 45.3; 1 study, n = 274) and decreases time awake after sleep onset (MD –15.7 minutes, 95% CI –28.1 to –3.3: 1 study, n = 274) but has little or no effect on number of awakenings (MD 0.0, 95% CI –0.5 to 0.5; 1 study, n = 274). It may be associated with a small increase in sleep efficiency (MD 4.26%, 95% CI 1.26 to 7.26; 2 studies, n = 312), has no clear effect on sleep latency (MD –12.1 minutes, 95% CI –25.9 to 1.7; 1 study, n = 274), and may have little or no effect on the mean duration of sleep bouts (MD –2.42 minutes, 95% CI –5.53 to 0.7; 1 study, n = 38). Adverse events were probably no more common among participants taking orexin antagonists than those taking placebo (RR 1.29, 95% CI 0.83 to 1.99; 2 studies, n = 323).\nAuthors' conclusions\nWe discovered a distinct lack of evidence to guide decisions about drug treatment of sleep problems in dementia. In particular, we found no RCTs of many widely prescribed drugs, including the benzodiazepine and non-benzodiazepine hypnotics, although there is considerable uncertainty about the balance of benefits and risks for these common treatments. We found no evidence for beneficial effects of melatonin (up to 10 mg) or a melatonin receptor agonist. There was evidence of some beneficial effects on sleep outcomes from trazodone and orexin antagonists and no evidence of harmful effects in these small trials, although larger trials in a broader range of participants are needed to allow more definitive conclusions to be reached. Systematic assessment of adverse effects in future trials is essential.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with dementia frequently experience sleep disturbances. These can include reduced sleep at night, frequent wakening, wandering at night, and sleeping excessively during the day.\nThese behaviours cause a lot of stress to carers, and may be associated with earlier admission to institutional care for people with dementia. They can also be difficult for care-home staff to manage.\nNon-drug approaches to treatment should be tried first, However, these may not help and medicines are often used. Since the source of the sleep problems may be changes in the brain caused by dementia, it is not clear whether normal sleeping tablets are effective for people with dementia, and there are worries that the medicines could cause significant side effects (harms)."
}
] |
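The sleep-in-dementia record describes synthesising mean differences "using a fixed-effect model". The sketch below illustrates inverse-variance fixed-effect pooling of two study results; the study-level numbers are hypothetical placeholders, because the abstract reports only the pooled estimates (for example TNST, MD 10.68 minutes, 95% CI -16.22 to 37.59), not the per-study data.

```python
import math

def pool_fixed_effect(mean_differences, cis, z=1.96):
    """Inverse-variance fixed-effect pooling of mean differences.
    Each study's standard error is back-calculated from its 95% CI width."""
    ses = [(upper - lower) / (2 * z) for lower, upper in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mean_differences)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se

# Hypothetical study-level results (minutes of total nocturnal sleep time):
print(pool_fixed_effect([8.0, 14.0], [(-25.0, 41.0), (-21.0, 49.0)]))
# -> roughly (10.8, -13.2, 34.8): pooled MD with its 95% CI
```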
query-laysum
|
24257
|
[
{
"role": "user",
"content": "Abstract: Background\nAmbulatory or outpatient anaesthesia is performed in patients who are discharged on the same day as their surgery. Perioperative complications such as postoperative nausea and vomiting (PONV), postoperative behavioural disturbances and cardiorespiratory complications should be minimized in ambulatory anaesthesia. The choice of anaesthetic agents and techniques can influence the occurrence of these complications and thus delay in discharge.\nObjectives\nThe objective of this review was to evaluate the risk of complications (the risk of postoperative nausea and vomiting (PONV), admission or readmission to hospital, postoperative behavioural disturbances and perioperative respiratory and cardiovascular complications) and recovery times (time to discharge from recovery ward and time to discharge from hospital) comparing the use of intravenous to inhalational anaesthesia for paediatric outpatient surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library (2013, Issue 8); MEDLINE (1948 to 1 October 2013); EMBASE (1974 to 1 October 2013); Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) (1982 to 1 October 2013). We also handsearched relevant journals and searched the reference lists of the articles identified.\nSelection criteria\nWe included randomized controlled trials comparing paediatric outpatient surgery using intravenous versus inhalational anaesthesia.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted the data. When necessary, we requested additional information and clarification of published data from the authors of individual trials.\nMain results\nWe included 16 trials that involved 900 children in this review. Half of all the studies did not describe the generation of randomized sequence and most studies did not describe adequate allocation sequence concealment. The included studies showed variability in the types and combinations of drugs and the duration of anaesthesia, limiting the meta-analysis and interpretation of the results.\nFor the induction and maintenance of anaesthesia there was a significant difference favouring intravenous anaesthesia with propofol; the incidence of PONV was 32.6% for sevoflurane and 16.1% for propofol (odds ratio (OR) 2.96; 95% confidence interval (CI) 1.35 to 6.49, four studies, 176 children, low quality evidence). The risk of postoperative behavioural disturbances also favoured intravenous anaesthesiaas the incidence was 24.7% for sevoflurane and 11.5% for propofol (OR 2.67; 95% CI 1.14 to 6.23, four studies, 176 children, very low quality evidence). There were no differences between groups in the risk of intraoperative and postoperative respiratory and cardiovascular complications (OR 0.75; 95% CI 0.27 to 2.13, three studies,130 children, very low quality evidence) and there was no difference in the time to recovery from anaesthesia and discharge from hospital. These results should be interpreted with caution due to heterogeneity between studies in the type and duration of operations, types of reported complications and the high risk of bias in almost all studies. Two studies (105 participants) compared halothane to propofol and showed heterogeneity in duration of anaesthesia and in the type of ambulatory procedure. 
For the risk of PONV the results of the studies were conflicting, and for the risks of intraoperative and postoperative complications there were no significant differences between the groups.\nFor the maintenance of anaesthesia there was a significant difference favouring anaesthesia with propofol, with or without nitrous oxide (N2O), when compared to thiopentone and halothane + N2O (OR 3.23; 95% CI 1.49 to 7.02, four studies, 176 children, low quality evidence; and OR 7.44; 95% CI 2.60 to 21.26, two studies, 87 children, low quality evidence), respectively. For the time to discharge from the recovery room, there were no significant differences between groups. The studies were performed with different ambulatory surgeries and a high risk of bias.\nFour studies (250 participants) compared the induction of anaesthesia by the inhalational or intravenous route, with inhalational anaesthesia for maintenance, and found no significant differences between groups in all outcomes (the risk of PONV, behavioural disturbances, respiratory and cardiovascular complications and time to discharge from recovery room). Meta-analysis was not done in this comparison because of significant clinical heterogeneity.\nReadmission to hospital was not reported in any of the included studies. No other adverse effects were reported.\nAuthors' conclusions\nThere is insufficient evidence to determine whether intravenous anaesthesia with propofol for induction and maintenance of anaesthesia in paediatric outpatients undergoing surgery reduces the risk of postoperative nausea and vomiting and the risk of behavioural disturbances compared with inhaled anaesthesia. This evidence is of poor quality. More high-quality studies are needed to compare the different types of anaesthesia in different subsets of children undergoing ambulatory surgery.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effects of general anaesthesia (GA) for children having day surgery. The GA was given to the children either by intravenous injection (injection of a drug such as propofol or thiopental) or by the child inhaling or breathing a gas such as sevoflurane or halothane through a mask. We found 16 studies that compared these two types of GA."
}
] |
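The anaesthesia record reports PONV in 32.6% of children with sevoflurane and 16.1% with propofol, with a pooled odds ratio of 2.96. The sketch below shows how an odds ratio relates to two incidence proportions; the crude value (about 2.5) differs from the published 2.96 because the latter was pooled across four trials rather than derived from the combined percentages.

```python
def odds_ratio_from_risks(risk_a, risk_b):
    """Crude odds ratio implied by two incidence proportions (odds = p / (1 - p))."""
    return (risk_a / (1 - risk_a)) / (risk_b / (1 - risk_b))

# PONV: 32.6% with sevoflurane vs 16.1% with propofol.
print(round(odds_ratio_from_risks(0.326, 0.161), 2))  # 2.52 crude; the pooled OR was 2.96
```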
query-laysum
|
24258
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic kidney disease (CKD) is defined as reduced function of the kidneys present for 3 months or longer with adverse implications for health and survival. For several decades low protein diets have been proposed for participants with CKD with the aim of slowing the progression to end-stage kidney disease (ESKD) and delaying the onset of renal replacement therapy. However the relative benefits and harms of dietary protein restriction for preventing progression of CKD have not been resolved. This is an update of a systematic review first published in 2000 and updated in 2006, 2009, and 2018.\nObjectives\nTo determine the efficacy of low protein diets in preventing the natural progression of CKD towards ESKD and in delaying the need for commencing dialysis treatment in non-diabetic adults.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 7 September 2020 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included randomised controlled trials (RCTs) or quasi RCTs in which adults with non-diabetic CKD (stages 3 to 5) not on dialysis were randomised to receive a very low protein intake (0.3 to 0.4 g/kg/day) compared with a low protein intake (0.5 to 0.6 g/kg/day) or a low protein intake compared with a normal protein intake (≥ 0.8 g/kg/day) for 12 months or more.\nData collection and analysis\nTwo authors independently selected studies and extracted data. For dichotomous outcomes (death, all causes), requirement for dialysis, adverse effects) the risk ratios (RR) with 95% confidence intervals (CI) were calculated and summary statistics estimated using the random effects model. Where continuous scales of measurement were used (glomerular filtration rate (GFR), weight), these data were analysed as the mean difference (MD) or standardised mean difference (SMD) if different scales had been used. The certainty of the evidence was assessed using GRADE.\nMain results\nWe identified 17 studies with 2996 analysed participants (range 19 to 840). Four larger multicentre studies were subdivided according to interventions so that the review included 21 separate data sets. Mean duration of participant follow-up ranged from 12 to 50 months.\nRandom sequence generation and allocation concealment were considered at low risk of bias in eleven and nine studies respectively. All studies were considered at high risk for performance bias as they were open-label studies. We assessed detection bias for outcome assessment for GFR and ESKD separately. As GFR measurement was a laboratory outcome all studies were assessed at low risk of detection bias. For ESKD, nine studies were at low risk of detection bias as the need to commence dialysis was determined by personnel independent of the study investigators. Five studies were assessed at high risk of attrition bias with eleven studies at low risk. Ten studies were at high risk for reporting bias as they did not include data which could be included in a meta-analysis. Eight studies reported funding from government bodies while the remainder did not report on funding.\nTen studies compared a low protein diet with a normal protein diet in participants with CKD categories 3a and b (9 studies) or 4 (one study). 
There was probably little or no difference in the numbers of participants who died (5 studies, 1680 participants: RR 0.77, 95% CI 0.51 to 1.18; 13 fewer deaths per 1000; moderate certainty evidence). A low protein diet may make little or no difference in the number of participants who reached ESKD compared with a normal protein diet (6 studies, 1814 participants: RR 1.05, 95% CI 0.73 to 1.53; 7 more per 1000 reached ESKD; low certainty evidence). It remains uncertain whether a low protein diet compared with a normal protein intake impacts on the outcome of final or change in GFR (8 studies, 1680 participants: SMD -0.18, 95% CI -0.75 to 0.38; very low certainty evidence).\nEight studies compared a very low protein diet with a low protein diet and two studies compared a very low protein diet with a normal protein diet. A very low protein intake compared with a low protein intake probably made little or no difference to death (6 studies, 681 participants: RR 1.26, 95% CI 0.62 to 2.54; 10 more deaths per 1000; moderate certainty evidence). However it probably reduces the number who reach ESKD (10 studies, 1010 participants: RR 0.65, 95% CI 0.49 to 0.85; 165 per 1000 fewer reached ESKD; moderate certainty evidence). It remains uncertain whether a very low protein diet compared with a low or normal protein intake influences the final or change in GFR (6 studies, 456 participants: SMD 0.12, 95% CI -0.27 to 0.52; very low certainty evidence).\nFinal body weight was reported in only three studies. It is uncertain whether the intervention alters final body weight (3 studies, 89 participants: MD -0.40 kg, 95% CI -6.33 to 5.52; very low certainty evidence). Twelve studies reported no evidence of protein energy wasting (malnutrition) in their study participants while three studies reported small numbers of participants in each group with protein energy wasting. Most studies reported that adherence to diet was satisfactory. Quality of life was not formally assessed in any studies.\nAuthors' conclusions\nThis review found that very low protein diets probably reduce the number of people with CKD 4 or 5, who progress to ESKD. In contrast low protein diets may make little difference to the number of people who progress to ESKD. Low or very low protein diets probably do not influence death. However there are limited data on adverse effects such as weight differences and protein energy wasting. There are no data on whether quality of life is impacted by difficulties in adhering to protein restriction. Studies evaluating the adverse effects and the impact on quality of life of dietary protein restriction are required before these dietary approaches can be recommended for widespread use.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Various forms of kidney disease can lead to kidney failure with affected people ultimately requiring dialysis treatment. A diet low in protein may be recommended to try to slow the progress of kidney disease to kidney failure. We still do not know whether low protein diets can slow the progress of kidney disease and delay the need to start dialysis."
}
] |
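The low-protein-diet record states that a very low protein intake "probably reduces the number who reach ESKD" (RR 0.65), quantified as 165 per 1000 fewer. That absolute figure follows from the risk ratio and the control-group risk, as sketched below; the control risk of roughly 471 per 1000 is back-calculated here for illustration and is not stated in the abstract.

```python
def fewer_per_1000(risk_ratio, control_risk_per_1000):
    """Absolute effect implied by a risk ratio: the treated group is expected to see
    RR * control_risk events, so the difference is control_risk * (1 - RR)."""
    return control_risk_per_1000 * (1 - risk_ratio)

# Very low vs low protein intake, progression to ESKD (RR 0.65), assuming a
# control-group risk of about 471 per 1000 (back-calculated, illustrative only):
print(f"{fewer_per_1000(0.65, 471):.0f} fewer per 1000")  # 165 fewer people reach ESKD
```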
query-laysum
|
24259
|
[
{
"role": "user",
"content": "Abstract: Background\nPain is a common feature of childhood and adolescence around the world, and for many young people, that pain is chronic. The World Health Organization guidelines for pharmacological treatments for children's persisting pain acknowledge that pain in children is a major public health concern of high significance in most parts of the world. While in the past pain was largely dismissed and was frequently left untreated, views on children's pain have changed over time, and relief of pain is now seen as important.\nWe designed a suite of seven reviews on chronic non-cancer pain and cancer pain (looking at antidepressants, antiepileptic drugs, non-steroidal anti-inflammatory drugs, opioids, and paracetamol) in order to review the evidence for children's pain utilising pharmacological interventions.\nAs the leading cause of morbidity in the world today, chronic disease (and its associated pain) is a major health concern. Chronic pain (that is pain lasting three months or longer) can arise in the paediatric population in a variety of pathophysiological classifications (nociceptive, neuropathic, or idiopathic) from genetic conditions, nerve damage pain, chronic musculoskeletal pain, and chronic abdominal pain, as well as for other unknown reasons.\nNon-steroidal anti-inflammatory drugs (NSAIDs) are used to treat pain, reduce fever, and for their anti-inflammation properties. They are commonly used within paediatric pain management. Non-steroidal anti-inflammatory drugs are currently licensed for use in Western countries, however they are not approved for infants under three months old. The main adverse effects include renal impairment and gastrointestinal issues. Common side effects in children include diarrhoea, headache, nausea, constipation, rash, dizziness, and abdominal pain.\nObjectives\nTo assess the analgesic efficacy and adverse events of NSAIDs used to treat chronic non-cancer pain in children and adolescents aged between birth and 17 years, in any setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online, MEDLINE via Ovid, and Embase via Ovid from inception to 6 September 2016. We also searched the reference lists of retrieved studies and reviews, as well as online clinical trial registries.\nSelection criteria\nRandomised controlled trials, with or without blinding, of any dose and any route, treating chronic non-cancer pain in children and adolescents, comparing any NSAID with placebo or an active comparator.\nData collection and analysis\nTwo review authors independently assessed studies for eligibility. We planned to use dichotomous data to calculate risk ratio and number needed to treat for one additional event, using standard methods. We assessed GRADE and created three 'Summary of findings' tables.\nMain results\nWe included seven studies with a total of 1074 participants (aged 2 to 18 years) with chronic juvenile polyarthritis or chronic juvenile rheumatoid arthritis. All seven studies compared an NSAID with an active comparator. None of the studies were placebo controlled. No two studies investigated the same type of NSAID compared with another. We were unable to perform a meta-analysis.\nRisk of bias varied. For randomisation and allocation concealment, one study was low risk and six studies were unclear risk. For blinding of participants and personnel, three studies were low risk and four studies were unclear to high risk. 
For blinding of outcome assessors, all studies were unclear risk. For attrition, four studies were low risk and three studies were unclear risk. For selective reporting, four studies were low risk, two studies were unclear risk, and one study was high risk. For size, three studies were unclear risk and four studies were high risk. For other potential sources of bias, seven studies were low risk.\nPrimary outcomes\nThree studies reported participant-reported pain relief of 30% or greater, showing no statistically significant difference in pain scores between meloxicam and naproxen, celecoxib and naproxen, or rofecoxib and naproxen (P > 0.05) (low-quality evidence).\nOne study reported participant-reported pain relief of 50% or greater, showing no statistically significant difference in pain scores between low-dose meloxicam (0.125 mg/kg) and high-dose meloxicam (0.25 mg/kg) when compared to naproxen 10 mg/kg (P > 0.05) (low-quality evidence).\nOne study reported Patient Global Impression of Change, showing 'very much improved' in 85% of ibuprofen and 90% of aspirin participants (low-quality evidence).\nSecondary outcomes\nParticipants reporting an adverse event (one or more per person) by drug were: aspirin 85/202; fenoprofen 28/49; ibuprofen 40/45; indomethacin 9/30; ketoprofen 9/30; meloxicam 18/47; naproxen 44/202; and rofecoxib 47/209 (seven studies) (very low-quality evidence).\nParticipants withdrawn due to an adverse event by drug were: aspirin 16/120; celecoxib 10/159; fenoprofen 0/49; ibuprofen 0/45; indomethacin 0/30; ketoprofen 0/30; meloxicam 10/147; naproxen 17/285; and rofecoxib 3/209 (seven studies) (very low-quality evidence).\nParticipants experiencing a serious adverse event by drug were: aspirin 13/120; celecoxib 5/159; fenoprofen 0/79; ketoprofen 0/30; ibuprofen 4/45; indomethacin 0/30; meloxicam 11/147; naproxen 10/285; and rofecoxib 0/209 (seven studies) (very low-quality evidence).\nThere were too few or no data for our remaining secondary outcomes: Carer Global Impression of Change; requirement for rescue analgesia; sleep duration and quality; acceptability of treatment; physical functioning as defined by validated scales; and quality of life as defined by validated scales.\nQuality of evidence\nWe downgraded the low-quality outcomes twice due to serious study limitations (risk of bias) and imprecision. We downgraded the very-low quality outcomes three times due to too few data, or the fact that the number of events was too small to be meaningful, or both.\nAuthors' conclusions\nWe identified only a small number of studies, with insufficient data for analysis.\nAs we could undertake no meta-analysis, we are unable to comment about efficacy or harm from the use of NSAIDs to treat chronic non-cancer pain in children and adolescents. Similarly, we cannot comment on our remaining secondary outcomes: Carer Global Impression of Change; requirement for rescue analgesia; sleep duration and quality; acceptability of treatment; physical functioning; and quality of life.\nWe know from adult randomised controlled trials that some NSAIDs, such as ibuprofen, naproxen, and aspirin, can be effective in certain chronic pain conditions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Children can experience chronic or recurrent pain related to genetic conditions, nerve damage, muscle or bone pain, stomach pain, or from unknown reasons. Chronic pain is pain that lasts three months or longer and is commonly accompanied by changes in lifestyle and functional abilities, as well as by signs and symptoms of depression and anxiety.\nNon-steroidal anti-inflammatory drugs are used to treat pain or reduce fever, and are commonly used in children. They include over-the-counter medications such as ibuprofen, aspirin, and naproxen, as well as prescription-only drugs. NSAIDs are currently licensed for use in Western countries, but are not approved for infants under three months old. The key side effects of NSAIDs are kidney failure and stomach problems. Other common side effects in children include diarrhoea, headache, nausea, constipation, rash, dizziness, flatulence, stomach pain, and indigestion."
}
] |
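The NSAID record's methods mention calculating "risk ratio and number needed to treat for one additional event", although the included trials could not be pooled. The sketch below shows the standard NNT calculation; the improvement rates used are hypothetical, since the review produced no usable pooled data.

```python
import math

def nnt(risk_treatment, risk_control):
    """Number needed to treat: the reciprocal of the absolute risk difference,
    conventionally rounded up to a whole number of patients."""
    risk_difference = abs(risk_treatment - risk_control)
    if risk_difference == 0:
        raise ValueError("no risk difference, so the NNT is undefined")
    return math.ceil(1 / risk_difference)

# Hypothetical example: 60% of children improve on drug A vs 45% on drug B.
print(nnt(0.60, 0.45))  # 7: treat about 7 children with A for one extra improvement
```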
query-laysum
|
24260
|
[
{
"role": "user",
"content": "Abstract: Background\nHypoxic-ischaemic encephalopathy (HIE) is a serious birth complication affecting term and late preterm newborns. Although therapeutic hypothermia (cooling) has been shown to be an effective therapy for neonatal HIE, many cooled infants have poor long-term neurodevelopmental outcomes. In animal models of neonatal encephalopathy, inhaled xenon combined with cooling has been shown to offer better neuroprotection than cooling alone.\nObjectives\nTo determine the effects of xenon as an adjuvant to therapeutic hypothermia on mortality and neurodevelopmental morbidity, and to ascertain clinically important side effects of xenon plus therapeutic hypothermia in newborn infants with HIE. To assess early predictors of adverse outcomes and potential side effects of xenon.\nSearch methods\nWe used the standard strategy of the Cochrane Neonatal Review Group to search the Cochrane Library (2017, Issue 8), MEDLINE (from 1966), Embase (from 1966), and PubMed (from 1966) for randomised controlled and quasi-randomised trials. We also searched conference proceedings and the reference lists of cited articles. We conducted our most recent search in August 2017.\nSelection criteria\nWe included all trials allocating term or late preterm encephalopathic newborns to cooling plus xenon or cooling alone, irrespective of timing (starting age and duration) and concentrations used for xenon administration.\nData collection and analysis\nTwo review authors independently assessed results of searches against predetermined criteria for inclusion, assessed risk of bias, and extracted data. We performed meta-analyses using risk ratios (RRs), risk differences (RDs), and number needed to treat for an additional beneficial outcome (NNTB) with 95% confidence intervals (CIs) for dichotomous outcomes and mean differences (MDs) for continuous data.\nMain results\nA single randomised controlled trial enrolling 92 participants was eligible for this review. Researchers have not reported long-term neurodevelopmental outcomes, including the primary outcome of this review - death or long-term major neurodevelopmental disability in infancy (18 months to three years of age). Cooling plus xenon was not associated with reduced mortality at latest follow-up, based upon low quality evidence. Investigators noted no substantial differences between groups for other secondary outcomes of this review, such as biomarkers of brain damage assessed with magnetic resonance imaging and occurrence of seizures during primary hospitalisation. Available data do not show an increased adverse event rate in the cooling plus xenon group compared with the cooling alone group.\nAuthors' conclusions\nCurrent evidence from one small randomised controlled pilot trial is inadequate to show whether cooling plus xenon is safe or effective in near-term and term newborns with HIE. Further trials reporting long-term neurodevelopmental outcomes are needed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Hypoxic-ischaemic encephalopathy, or HIE, is a brain injury caused by oxygen deprivation to the brain during birth (birth asphyxia). Hypoxic-ischaemic encephalopathy is a leading cause of death or severe impairment among infants. Therapeutic hypothermia (cooling) is a treatment option available to reduce the chances of severe brain damage when an infant's body temperature is reduced shortly after birth. Although cooling has been shown to be an effective therapy for neonatal HIE, half of treated newborn babies still die or face neurodevelopmental sequelae later in life. Evidence indicates that inhaled xenon, an odourless gas, in combination with body cooling, can help to improve survival and development at 18 to 36 months."
}
] |
query-laysum
|
24261
|
[
{
"role": "user",
"content": "Abstract: Background\nInclusion body myositis (IBM) is a late-onset inflammatory muscle disease (myopathy) associated with progressive proximal and distal limb muscle atrophy and weakness. Treatment options have attempted to target inflammatory and atrophic features of this condition (for example with immunosuppressive and immunomodulating drugs, anabolic steroids, and antioxidant treatments), although as yet there is no known effective treatment for reversing or minimising the progression of inclusion body myositis. In this review we have considered the benefits, adverse effects, and costs of treatment in targeting cardinal effects of the condition, namely muscle atrophy, weakness, and functional impairment.\nObjectives\nTo assess the effects of treatment for IBM.\nSearch methods\nOn 7 October 2014 we searched the Cochrane Neuromuscular Disease Group Specialized Register, the Cochrane Central Register for Controlled Trials (CENTRAL), MEDLINE, and EMBASE. Additionally in November 2014 we searched clinical trials registries for ongoing or completed but unpublished trials.\nSelection criteria\nWe considered randomised or quasi-randomised trials, including cross-over trials, of treatment for IBM in adults compared to placebo or any other treatment for inclusion in the review. We specifically excluded people with familial IBM and hereditary inclusion body myopathy, but we included people who had connective tissue and autoimmune diseases associated with IBM, which may or may not be identified in trials. We did not include studies of exercise therapy or dysphagia management, which are topics of other Cochrane systematic reviews.\nData collection and analysis\nWe used standard Cochrane methodological procedures.\nMain results\nThe review included 10 trials (249 participants) using different treatment regimens. Seven of the 10 trials assessed single agents, and 3 assessed combined agents. Many of the studies did not present adequate data for the reporting of the primary outcome of the review, which was the percentage change in muscle strength score at six months. Pooled data from two trials of interferon beta-1a (n = 58) identified no important difference in normalised manual muscle strength sum scores from baseline to six months (mean difference (MD) -0.06, 95% CI -0.15 to 0.03) between IFN beta-1a and placebo (moderate-quality evidence). A single trial of methotrexate (MTX) (n = 44) provided moderate-quality evidence that MTX did not arrest or slow disease progression, based on reported percentage change in manual muscle strength sum scores at 12 months. None of the fully published trials were adequately powered to detect a treatment effect.\nWe assessed six of the nine fully published trials as providing very low-quality evidence in relation to the primary outcome measure. Three trials (n = 78) compared intravenous immunoglobulin (combined in one trial with prednisone) to a placebo, but we were unable to perform meta-analysis because of variations in study analysis and presentation of trial data, with no access to the primary data for re-analysis. Other comparisons were also reported in single trials. An open trial of anti-T lymphocyte immunoglobulin (ATG) combined with MTX versus MTX provided very low-quality evidence in favour of the combined therapy, based on percentage change in quantitative muscle strength sum scores at 12 months (MD 12.50%, 95% CI 2.43 to 22.57). 
Data from trials of oxandrolone versus placebo, azathioprine (AZA) combined with MTX versus MTX, and arimoclomol versus placebo did not allow us to report either normalised or percentage change in muscle strength sum scores. A complete analysis of the effects of arimoclomol is pending data publication. Studies of simvastatin and bimagrumab (BYM338) are ongoing.\nAll analysed trials reported adverse events. Only 1 of the 10 trials interpreted these for statistical significance. None of the trials included prespecified criteria for significant adverse events.\nAuthors' conclusions\nTrials of interferon beta-1a and MTX provided moderate-quality evidence of having no effect on the progression of IBM. Overall trial design limitations including risk of bias, low numbers of participants, and short duration make it difficult to say whether or not any of the drug treatments included in this review were effective. An open trial of ATG combined with MTX versus MTX provided very low-quality evidence in favour of the combined therapy based on the percentage change data given. We were unable to draw conclusions from trials of IVIg, oxandrolone, and AZA plus MTX versus MTX. We need more randomised controlled trials that are larger, of longer duration, and that use fully validated, standardised, and responsive outcome measures.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "IBM is a disease that mainly affects older people. The main symptoms are increasing muscle wasting and weakness of the arms and legs. Some people experience swallowing difficulties. As yet no therapy has been shown to alter the course of the disease. Treatments that have been tested include agents that suppress or alter the immune response, drugs that promote muscle growth, and antioxidants."
}
] |
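The inclusion body myositis record defines its primary outcome as "the percentage change in muscle strength score at six months". The sketch below spells out that calculation with made-up baseline and follow-up scores, since the trials' raw scores are not reported in the abstract.

```python
def percent_change(baseline_score, followup_score):
    """Percentage change in a (summed) muscle strength score relative to baseline."""
    return 100 * (followup_score - baseline_score) / baseline_score

# Hypothetical: a manual muscle strength sum score falling from 48 to 42 over six months.
print(round(percent_change(48, 42), 1))  # -12.5, i.e. a 12.5% decline
```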
query-laysum
|
24262
|
[
{
"role": "user",
"content": "Abstract: Background\nPostpolio syndrome (PPS) may affect survivors of paralytic poliomyelitis and is characterised by a complex of neuromuscular symptoms leading to a decline in physical functioning. The effectiveness of pharmacological treatment and rehabilitation management in PPS is not yet established. This is an update of a review first published in 2011.\nObjectives\nTo systematically review the evidence from randomised and quasi-randomised controlled trials for the effect of any pharmacological or non-pharmacological treatment for PPS compared to placebo, usual care or no treatment.\nSearch methods\nWe searched the following databases on 21 July 2014: Cochrane Neuromuscular Disease Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO and CINAHL Plus. We also checked reference lists of all relevant articles, searched the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment (HTA) Database and trial registers and contacted investigators known to be involved in research in this area.\nSelection criteria\nRandomised and quasi-randomised trials of any form of pharmacological or non-pharmacological treatment for people with PPS. The primary outcome was self perceived activity limitations and secondary outcomes were muscle strength, muscle endurance, fatigue, pain and adverse events.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included 10 pharmacological (modafinil, intravenous immunoglobulin (IVIg), pyridostigmine, lamotrigine, amantadine, prednisone) and three non-pharmacological (muscle strengthening, rehabilitation in a warm climate (that is temperature ± 25°C, dry and sunny) and a cold climate (that is temperature ± 0°C, rainy or snowy), static magnetic fields) studies with a total of 675 participants with PPS in this review. None of the included studies were completely free from any risk of bias, the most prevalent risk of bias being lack of blinding.\nThere was moderate- and low-quality evidence that IVIg has no beneficial effect on activity limitations in the short term and long term, respectively, and inconsistency in the evidence for effectiveness on muscle strength. IVIg caused minor adverse events in a substantial proportion of the participants. Results of one trial provided very low-quality evidence that lamotrigine might be effective in reducing pain and fatigue, resulting in fewer activity limitations without generating adverse events. Data from two single trials suggested that muscle strengthening of thumb muscles (very low-quality evidence) and static magnetic fields (moderate-quality evidence) are safe and beneficial for improving muscle strength and pain, respectively, with unknown effects on activity limitations. Finally, there was evidence varying from very low quality to high quality that modafinil, pyridostigmine, amantadine, prednisone and rehabilitation in a warm or cold climate are not beneficial in PPS.\nAuthors' conclusions\nDue to insufficient good-quality data and lack of randomised studies, it was impossible to draw definite conclusions about the effectiveness of interventions for PPS. 
Results indicated that IVIg, lamotrigine, muscle strengthening exercises and static magnetic fields may be beneficial but need further investigation to clarify whether any real and meaningful effect exists.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "PPS is a condition that can affect polio survivors years after recovery from an initial paralytic attack by the polio virus. PPS is characterised by progressive or new muscle weakness or decreased muscle endurance in muscles that were previously affected by the polio infection and in muscles that were seemingly unaffected. Other symptoms may include generalised fatigue and pain. These symptoms often lead to a decline in physical functioning, for example trouble walking. The objective of this review was to assess the benefits and harms of different drugs and rehabilitation treatments compared to placebo (a pill or procedure without any physiological effect), usual care or no treatment."
}
] |
query-laysum
|
24263
|
[
{
"role": "user",
"content": "Abstract: Background\nUrinary incontinence (UI) is the involuntary loss of urine and can be caused by several different conditions. The common types of UI are stress (SUI), urgency (UUI) and mixed (MUI). A wide range of interventions can be delivered to reduce the symptoms of UI in women. Conservative interventions are generally recommended as the first line of treatment.\nObjectives\nTo summarise Cochrane Reviews that assessed the effects of conservative interventions for treating UI in women.\nMethods\nWe searched the Cochrane Library to January 2021 (CDSR; 2021, Issue 1) and included any Cochrane Review that included studies with women aged 18 years or older with a clinical diagnosis of SUI, UUI or MUI, and investigating a conservative intervention aimed at improving or curing UI. We included reviews that compared a conservative intervention with 'control' (which included placebo, no treatment or usual care), another conservative intervention or another active, but non-conservative, intervention. A stakeholder group informed the selection and synthesis of evidence.\nTwo overview authors independently applied the inclusion criteria, extracted data and judged review quality, resolving disagreements through discussion. Primary outcomes of interest were patient-reported cure or improvement and condition-specific quality of life. We judged the risk of bias in included reviews using the ROBIS tool. We judged the certainty of evidence within the reviews based on the GRADE approach. Evidence relating to SUI, UUI or all types of UI combined (AUI) were synthesised separately. The AUI group included evidence relating to participants with MUI, as well as from studies that combined women with different diagnoses (i.e. SUI, UUI and MUI) and studies in which the type of UI was unclear.\nMain results\nWe included 29 relevant Cochrane Reviews. Seven focused on physical therapies; five on education, behavioural and lifestyle advice; one on mechanical devices; one on acupuncture and one on yoga. Fourteen focused on non-conservative interventions but had a comparison with a conservative intervention. No reviews synthesised evidence relating to psychological therapies. There were 112 unique trials (including 8975 women) that had primary outcome data included in at least one analysis.\nStress urinary incontinence (14 reviews)\nConservative intervention versus control: there was moderate or high certainty evidence that pelvic floor muscle training (PFMT), PFMT plus biofeedback and cones were more beneficial than control for curing or improving UI. PFMT and intravaginal devices improved quality of life compared to control.\nOne conservative intervention versus another conservative intervention: for cure and improvement of UI, there was moderate or high certainty evidence that: continence pessary plus PFMT was more beneficial than continence pessary alone; PFMT plus educational intervention was more beneficial than cones; more-intensive PFMT was more beneficial than less-intensive PFMT; and PFMT plus an adherence strategy was more beneficial than PFMT alone. There was no moderate or high certainty evidence for quality of life.\nUrgency urinary incontinence (five reviews)\nConservative intervention versus control: there was moderate to high-certainty evidence demonstrating that PFMT plus feedback, PFMT plus biofeedback, electrical stimulation and bladder training were more beneficial than control for curing or improving UI. 
Women using electrical stimulation plus PFMT had higher quality of life than women in the control group.\nOne conservative intervention versus another conservative intervention: for cure or improvement, there was moderate certainty evidence that electrical stimulation was more effective than laseropuncture. There was high or moderate certainty evidence that PFMT resulted in higher quality of life than electrical stimulation, and that electrical stimulation plus PFMT resulted in better cure or improvement and higher quality of life than PFMT alone.\nAll types of urinary incontinence (13 reviews)\nConservative intervention versus control: there was moderate to high certainty evidence of better cure or improvement with PFMT, electrical stimulation, weight loss and cones compared to control. There was moderate certainty evidence of improved quality of life with PFMT compared to control.\nOne conservative intervention versus another conservative intervention: there was moderate or high certainty evidence of better cure or improvement for PFMT with bladder training than bladder training alone. Likewise, PFMT with more individual health professional supervision was more effective than less contact/supervision, and more-intensive PFMT was more beneficial than less-intensive PFMT. There was moderate certainty evidence that PFMT plus bladder training resulted in higher quality of life than bladder training alone.\nAuthors' conclusions\nThere is high certainty that PFMT is more beneficial than control for all types of UI for outcomes of cure or improvement and quality of life. We are moderately certain that effectiveness is improved if PFMT is more intense, more frequent and delivered with individual supervision, whether or not it is combined with behavioural interventions and with or without an adherence strategy. We are highly certain that, for cure or improvement, cones are more beneficial than control (but not more beneficial than PFMT) for women with SUI, electrical stimulation is beneficial for women with UUI, and weight loss results in more cure and improvement than control for women with AUI.\nMost evidence within the included Cochrane Reviews is of low certainty. It is important that future new and updated Cochrane Reviews develop questions that are more clinically useful, avoid multiple overlapping reviews and consult women with UI to further identify outcomes of importance.\n\nGiven the provided abstract, please respond to the following query: \"What did we aim to do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There are a growing number of Cochrane Reviews relating to conservative management for different types of urinary incontinence, and our aim was to bring together these research findings into one accessible overview document, with input from clinicians and women affected by incontinence."
}
] |