dataset (stringclasses, 1 value) | id (stringlengths, 1 to 5) | messages (listlengths, 2 to 2) |
---|---|---|
query-laysum | 5558 | [
{
"role": "user",
"content": "Abstract: Background\nIn many low- and middle-income countries women are encouraged to give birth in clinics and hospitals so that they can receive care from skilled birth attendants. A skilled birth attendant (SBA) is a health worker such as a midwife, doctor, or nurse who is trained to manage normal pregnancy and childbirth. (S)he is also trained to identify, manage, and refer any health problems that arise for mother and baby. The skills, attitudes and behaviour of SBAs, and the extent to which they work in an enabling working environment, impact on the quality of care provided. If any of these factors are missing, mothers and babies are likely to receive suboptimal care.\nObjectives\nTo explore the views, experiences, and behaviours of skilled birth attendants and those who support them; to identify factors that influence the delivery of intrapartum and postnatal care in low- and middle-income countries; and to explore the extent to which these factors were reflected in intervention studies.\nSearch methods\nOur search strategies specified key and free text terms related to the perinatal period, and the health provider, and included methodological filters for qualitative evidence syntheses and for low- and middle-income countries. We searched MEDLINE, OvidSP (searched 21 November 2016), Embase, OvidSP (searched 28 November 2016), PsycINFO, OvidSP (searched 30 November 2016), POPLINE, K4Health (searched 30 November 2016), CINAHL, EBSCOhost (searched 30 November 2016), ProQuest Dissertations and Theses (searched 15 August 2013), Web of Science (searched 1 December 2016), World Health Organization Reproductive Health Library (searched 16 August 2013), and World Health Organization Global Health Library for WHO databases (searched 1 December 2016).\nSelection criteria\nWe included qualitative studies that focused on the views, experiences, and behaviours of SBAs and those who work with them as part of the team. We included studies from all levels of health care in low- and middle-income countries.\nData collection and analysis\nOne review author extracted data and assessed study quality, and another review author checked the data. We synthesised data using the best fit framework synthesis approach and assessed confidence in the evidence using the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach. We used a matrix approach to explore whether the factors identified by health workers in our synthesis as important for providing maternity care were reflected in the interventions evaluated in the studies in a related intervention review.\nMain results\nWe included 31 studies that explored the views and experiences of different types of SBAs, including doctors, midwives, nurses, auxiliary nurses and their managers. The included studies took place in Africa, Asia, and Latin America.\nOur synthesis pointed to a number of factors affecting SBAs’ provision of quality care. The following factors were based on evidence assessed as of moderate to high confidence. Skilled birth attendants reported that they were not always given sufficient training during their education or after they had begun clinical work. Also, inadequate staffing of facilities could increase the workloads of skilled birth attendants, make it difficult to provide supervision and result in mothers being offered poorer care. In addition, SBAs did not always believe that their salaries and benefits reflected their tasks and responsibilities and the personal risks they undertook. 
Together with poor living and working conditions, these issues were seen to increase stress and to negatively affect family life. Some SBAs also felt that managers lacked capacity and skills, and felt unsupported when their workplace concerns were not addressed.\nPossible causes of staff shortages in facilities included problems with hiring and assigning health workers to facilities where they were needed; lack of funding; poor management and bureaucratic systems; and low salaries. Skilled birth attendants and their managers suggested factors that could help recruit, keep, and motivate health workers, and improve the quality of care; these included good-quality housing, allowances for extra work, paid vacations, continuing education, appropriate assessments of their work, and rewards.\nSkilled birth attendants’ ability to provide quality care was also limited by a lack of equipment, supplies, and drugs; blood and the infrastructure to manage blood transfusions; electricity and water supplies; and adequate space and amenities on maternity wards. These factors were seen to reduce SBAs’ morale, increase their workload and infection risk, and make them less efficient in their work. A lack of transport sometimes made it difficult for SBAs to refer women on to higher levels of care. In addition, women’s negative perceptions of the health system could make them reluctant to accept referral.\nWe identified some other factors that also may have affected the quality of care, which were based on findings assessed as of low or very low confidence. Poor teamwork and lack of trust and collaboration between health workers appeared to negatively influence care. In contrast, good collaboration and teamwork appeared to increase skilled birth attendants’ motivation, their decision-making abilities, and the quality of care. Skilled birth attendants’ workloads and staff shortages influenced their interactions with mothers. In addition, poor communication undermined trust between skilled birth attendants and mothers.\nAuthors' conclusions\nMany factors influence the care that SBAs are able to provide to mothers during childbirth. These include access to training and supervision; staff numbers and workloads; salaries and living conditions; and access to well-equipped, well-organised healthcare facilities with water, electricity, and transport. Other factors that may play a role include the existence of teamwork and of trust, collaboration, and communication between health workers and with mothers. Skilled birth attendants reported many problems tied to all of these factors.\n\nGiven the provided abstract, please respond to the following query: \"Review aim\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane synthesis of qualitative evidence was to identify factors that influence the provision of care by skilled birth attendants. To answer this question, we searched for and analysed qualitative studies of skilled birth attendants’ views, experiences, and behaviour.\nThis synthesis complements another Cochrane Review assessing the effect of strategies to promote women’s use of healthcare facilities when giving birth."
}
]
|
query-laysum | 5559 | [
{
"role": "user",
"content": "Abstract: Background\nAge-related cataract is the opacification of the lens, which occurs as a result of denaturation of lens proteins. Age-related cataract remains the leading cause of blindness globally, except in the most developed countries. A key question is what is the best way of removing the lens, especially in lower income settings.\nObjectives\nTo compare two different techniques of lens removal in cataract surgery: manual small incision surgery (MSICS) and extracapsular cataract extraction (ECCE).\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 8), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to September 2014), EMBASE (January 1980 to September 2014), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to September 2014), Web of Science Conference Proceedings Citation Index- Science (CPCI-S), (January 1990 to September 2014), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 23 September 2014.\nSelection criteria\nWe included randomised controlled trials (RCTs) only. Participants in the trials were people with age-related cataract. We included trials where MSICS with a posterior chamber intraocular lens (IOL) implant was compared to ECCE with a posterior chamber IOL implant.\nData collection and analysis\nData were collected independently by two authors. We aimed to collect data on presenting visual acuity 6/12 or better and best-corrected visual acuity of less than 6/60 at three months and one year after surgery. Other outcomes included intraoperative complications, long-term complications (one year or more after surgery), quality of life, and cost-effectiveness. There were not enough data available from the included trials to perform a meta-analysis.\nMain results\nThree trials randomly allocating people with age-related cataract to MSICS or ECCE were included in this review (n = 953 participants). Two trials were conducted in India and one in Nepal. Trial methods, such as random allocation and allocation concealment, were not clearly described; in only one trial was an effort made to mask outcome assessors. The three studies reported follow-up six to eight weeks after surgery. In two studies, more participants in the MSICS groups achieved unaided visual acuity of 6/12 or 6/18 or better compared to the ECCE group, but overall not more than 50% of people achieved good functional vision in the two studies. 10/806 (1.2%) of people enrolled in two trials had a poor outcome after surgery (best-corrected vision less than 6/60) with no evidence of difference in risk between the two techniques (risk ratio (RR) 1.58, 95% confidence interval (CI) 0.45 to 5.55). Surgically induced astigmatism was more common with the ECCE procedure than MSICS in the two trials that reported this outcome. In one study there were more intra- and postoperative complications in the MSICS group. 
One study reported that the costs of the two procedures were similar.\nAuthors' conclusions\nThere are no studies from countries other than India and Nepal, and there are insufficient data on cost-effectiveness of each procedure. Better evidence is needed before any change may be implemented. Future studies need longer-term follow-up, should be conducted so as to minimize the biases revealed in this review, and should use a larger sample size to allow examination of adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The data were limited. People whose lens was removed with MSICS were more likely to achieve good functional vision, however, overall not more than 50% of people achieved good functional vision in the two studies. 1.2% of people enrolled in two trials had a poor outcome after surgery with best-corrected vision less than 6/60. There was no evidence of any difference between the two groups with respect to this outcome. Surgically induced astigmatism was more common with the ECCE procedure than MSICS in the two trials that reported this outcome. In one study there were more intra- and postoperative complications in the MSICS group. One study reported that the costs of the two procedures were similar."
}
]
|
query-laysum | 5560 | [
{
"role": "user",
"content": "Abstract: Background\nOrbital lymphangiomas are a subset of localized vascular and lymphatic malformations, which most commonly occur in the head and neck region. Orbital lymphangiomas typically present in the first decade of life with signs of ptosis, proptosis, restriction of ocular motility, compressive optic neuropathy, and disfigurement. Therefore, early and effective treatment is crucial to preserving vision. Due to proximity to vital structures, such as the globe, optic nerve, and extraocular muscles, treatment for these lesions is complicated and includes a large array of approaches including observation, sclerotherapy, systemic therapy, and surgical excision. Of these options, there is no clear gold standard of treatment.\nObjectives\nTo assess the evidence supporting medical and surgical interventions for the reduction/treatment of orbital lymphangiomas in children and young adults.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2018, Issue 5); Ovid MEDLINE; Embase.com; PubMed; Latin American and Caribbean Health Sciences Literature Database (LILACS); ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 22 May 2018.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) comparing at least two of the following interventions with each other for the treatment of orbital lymphangiomas: observation; sildenafil therapy; sirolimus therapy; sclerotherapy; surgery (partial or complete resection). We planned to include trials that enrolled children and adults up to 32 years of age, based on a prior clinical trial protocol. There were no restrictions regarding location or demographic factors.\nData collection and analysis\nTwo review authors independently screened the titles, abstracts, and full articles to assess their suitability for inclusion in this review. No risk of bias or data extraction was performed because we did not find any trials for inclusion. If there had been RCTs, two authors would have assessed the risk of bias and abstracted data independently with discrepancies being settled by consensus or consultation with a third review author.\nMain results\nThere were no RCTs that compared any two of the mentioned interventions (medical or surgical) for treating orbital lymphangiomas in children and young adults.\nAuthors' conclusions\nCurrently, there are no published RCTs of orbital lymphangioma treatments. Without these types of studies, conclusions cannot be drawn regarding the effectiveness of the medical and surgical treatment options for patients with orbital lymphangiomas. The presence of only case reports and case series on orbital lymphangiomas makes it clear that RCTs are needed to address the differences between these options and help guide treatment plans. Such trials would ideally compare outcomes between individuals randomized to one of the following treatment options: observation, sclerotherapy, systemic sirolimus therapy, systemic sildenafil therapy, and surgical excision.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Lymphangiomas are localized malformations of the vascular and lymphatic system that most commonly occur in the head and neck regions of children. Lymphangiomas of the orbit (eye socket) typically present in children under the age of 16 years old with ptosis (droopy eyelid), swelling around the eye area, or with bleeding within the lesion from a minor injury. People with this condition can also present with other symptoms such as cosmetic deformity, proptosis (protrusion of the eye), restriction of eye movement (an eye that looks like it is wandering and can lead to vision loss in a child), compression of the optic nerve (a type of vision loss), and amblyopia (another type of vision loss) in children. Due to these vision-threatening complications, early and effective treatment is crucial in preventing cosmetic disfigurement, pain, and visual impairment.\nOrbital lymphangiomas are notorious for being very difficult to treat due to how close they are to the eye and other important structures of the eye socket, all of which are needed for good vision. Treatment type also depends on lymphangioma size, cyst type, and location. One option is called observation, and this means carefully watching patients without doing any treatment. This is because some tumors are smaller or in hard-to-reach locations and are not threatening the vision. This may be a good option since each treatment option has side effect risks. For example, surgery can damage nearby structures, while medications by mouth can cause fever, diarrhea, headaches, and high blood pressure, amongst other problems. The option of surgery has typically been delayed until absolutely necessary as there is a high rate of the lesion growing back, there is a high risk to surrounding tissue (like the eye, the optic nerve, the eye muscles), and it is difficult to remove the entire lesion. In addition to observation and surgery, another treatment option is to inject agents called sclerosants into the lesions with the goal of reducing their size. Finally, in lesions that are difficult to access surgically or with injections, medication taken by mouth (called 'systemic medication') has also been used to reduce the size and resulting symptoms. The goal of these therapies is to reduce cosmetic disfigurement and pain caused by these lesions, in addition to avoiding vision-threatening complications."
}
]
|
query-laysum | 5561 | [
{
"role": "user",
"content": "Abstract: Background\nOvarian response to stimulation during in-vitro fertilisation (IVF) and intra-cytoplasmic sperm injection (ICSI) plays an important role in determining live birth rates. Adjuvant treatments during ovarian stimulation that have different modes of action have been used to improve ovarian response to stimulation and outcome of IVF. Glucocorticoids (GCs) are a class of steroid hormones that have been used either alone or in combination with other stimulatory regimens in order to improve folliculogenesis and pregnancy rates. However, considerable uncertainty remains over whether administration of glucocorticoid during ovarian stimulation until oocyte recovery is superior to no glucocorticoid in improving live birth rates in women undergoing IVF/ICSI.\nObjectives\nTo determine the safety and effectiveness of systemic glucocorticoids during ovarian stimulation for IVF and ICSI cycles.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register, the Cochrane Central Register of Studies Online (CRSO), MEDLINE, Embase, CINAHL and PsycINFO from inception to 10 October 2016. We handsearched reference lists of articles, trial registers and relevant conference proceedings and contacted researchers in the field.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing adjuvant treatment with systemic glucocorticoids during ovarian stimulation for IVF or ICSI cycles versus no adjuvant treatment.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias and extracted the data. Our primary outcome was live birth. Secondary outcomes included clinical pregnancy, multiple pregnancy, miscarriage, ovarian hyperstimulation syndrome (OHSS) and side-effects. We calculated odds ratios (ORs) with 95% confidence intervals (CIs) and pooled the data using a fixed-effect model. The quality of the evidence was assessed using GRADE methods.\nMain results\nFour RCTs were included in the review (416 women). The trials compared glucocorticoid supplementation during IVF stimulation versus placebo. Two of the studies had data in a form that we could not enter into analysis, so results include data from only two trials (310) women. For the outcome of live birth, data were available for only 212 women, as the larger study had data available from only one study centre.\nOne of the studies gave inadequate description of randomisation methods, but the other was at low risk of bias in all domains. The evidence was rated as low or very low quality for all outcomes, mainly due to imprecision, with low sample sizes and few events.\nThere was insufficient evidence to determine whether there was any difference between the groups in live birth rate (OR 1.08, 95% CI 0.45 to 2.58; 2 RCTs, n = 212, I2 = 0%, low-quality evidence). Our findings suggest that if the chance of live birth with placebo is assumed to be 15%, the chance following supplementation would be between 7% and 31%. There was no conclusive evidence of a difference in the clinical pregnancy rate (OR 1.69, 95% CI 0.98 to 2.90; 2 RCTs, n = 310, I2 = 0%, low-quality evidence).The evidence suggests that if the chance of clinical pregnancy with placebo is assumed to be 24%, the chance following treatment with glucocorticoid supplementation would be between 23% and 47%. 
There was also insufficient evidence to determine whether there was any difference between the groups in multiple-pregnancy rate (OR 3.32 , 95% CI 0.12 to 91.60; 1 RCT , n = 20, very low-quality evidence) or miscarriage rate (OR 1.00, 95% CI 0.05 to 18.57; 1 RCT, n = 20, very low-quality evidence). Neither of the studies reported OHSS or side-effects.\nAuthors' conclusions\nThe safety and effectiveness of glucocorticoid administration in women undergoing controlled ovarian hyperstimulation for IVF/ICSI cycles (until the day of oocyte retrieval) is unclear due to the small number of studies and low event rates. Whilst glucocorticoids possibly increase the clinical pregnancy rate, there may be little or no impact on live birth rate. More research is needed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Ovarian response to stimulation during IVF and ICSI cycles plays an important role in determining live birth rates. Most ovarian stimulation regimens administer a combination of hormones to stimulate the ovaries, suppress the pituitary gland and prevent surges of natural hormones that may be detrimental to egg maturation. Various add-on (adjuvant) treatments during ovarian stimulation have been used in an attempt to improve this process, and it has been suggested that the addition of a glucocorticoid component may improve results."
}
]
|
query-laysum | 5562 | [
{
"role": "user",
"content": "Abstract: Background\nPre-eclampsia and eclampsia are common causes of serious morbidity and death. Calcium supplementation may reduce the risk of pre-eclampsia, and may help to prevent preterm birth. This is an update of a review last published in 2014.\nObjectives\nTo assess the effects of calcium supplementation during pregnancy on hypertensive disorders of pregnancy and related maternal and child outcomes.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (18 September 2017), and reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), including cluster-randomised trials, comparing high-dose calcium supplementation (at least 1 g daily of calcium) during pregnancy with placebo. For low-dose calcium we included quasi-randomised trials, trials without placebo, trials with cointerventions and dose comparison trials.\nData collection and analysis\nTwo researchers independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. Two researchers assessed the evidence using the GRADE approach.\nMain results\nWe included 27 studies (18,064 women). We assessed the included studies as being at low risk of bias, although bias was frequently difficult to assess due to poor reporting and inadequate information on methods.\nHigh-dose calcium supplementation ( ≥ 1 g/day) versus placebo\nFourteen studies examined this comparison, however one study contributed no data. The 13 studies contributed data from 15,730 women to our meta-analyses. The average risk of high blood pressure (BP) was reduced with calcium supplementation compared with placebo (12 trials, 15,470 women: risk ratio (RR) 0.65, 95% confidence interval (CI) 0.53 to 0.81; I² = 74%). There was also a reduction in the risk of pre-eclampsia associated with calcium supplementation (13 trials, 15,730 women: average RR 0.45, 95% CI 0.31 to 0.65; I² = 70%; low-quality evidence). This effect was clear for women with low calcium diets (eight trials, 10,678 women: average RR 0.36, 95% CI 0.20 to 0.65; I² = 76%) but not those with adequate calcium diets. The effect appeared to be greater for women at higher risk of pre-eclampsia, though this may be due to small-study effects (five trials, 587 women: average RR 0.22, 95% CI 0.12 to 0.42). These data should be interpreted with caution because of the possibility of small-study effects or publication bias. In the largest trial, the reduction in pre-eclampsia was modest (8%) and the CI included the possibility of no effect.\nThe composite outcome maternal death or serious morbidity was reduced with calcium supplementation (four trials, 9732 women; RR 0.80, 95% CI 0.66 to 0.98). Maternal deaths were no different (one trial of 8312 women: one death in the calcium group versus six in the placebo group). There was an anomalous increase in the risk of HELLP syndrome in the calcium group (two trials, 12,901 women: RR 2.67, 95% CI 1.05 to 6.82, high-quality evidence), however, the absolute number of events was low (16 versus six).\nThe average risk of preterm birth was reduced in the calcium supplementation group (11 trials, 15,275 women: RR 0.76, 95% CI 0.60 to 0.97; I² = 60%; low-quality evidence); this reduction was greatest amongst women at higher risk of developing pre-eclampsia (four trials, 568 women: average RR 0.45, 95% CI 0.24 to 0.83; I² = 60%). 
Again, these data should be interpreted with caution because of the possibility of small-study effects or publication bias. There was no clear effect on admission to neonatal intensive care. There was also no clear effect on the risk of stillbirth or infant death before discharge from hospital (11 trials, 15,665 babies: RR 0.90, 95% CI 0.74 to 1.09).\nOne study showed a reduction in childhood systolic BP greater than 95th percentile among children exposed to calcium supplementation in utero (514 children: RR 0.59, 95% CI 0.39 to 0.91). In a subset of these children, dental caries at 12 years old was also reduced (195 children, RR 0.73, 95% CI 0.62 to 0.87).\nLow-dose calcium supplementation (< 1 g/day) versus placebo or no treatment\nTwelve trials (2334 women) evaluated low-dose (usually 500 mg daily) supplementation with calcium alone (four trials) or in association with vitamin D (five trials), linoleic acid (two trials), or antioxidants (one trial). Most studies recruited women at high risk for pre-eclampsia, and were at high risk of bias, thus the results should be interpreted with caution. Supplementation with low doses of calcium reduced the risk of pre-eclampsia (nine trials, 2234 women: RR 0.38, 95% CI 0.28 to 0.52). There was also a reduction in high BP (five trials, 665 women: RR 0.53, 95% CI 0.38 to 0.74), admission to neonatal intensive care unit (one trial, 422 women, RR 0.44, 95% CI 0.20 to 0.99), but not preterm birth (six trials, 1290 women, average RR 0.83, 95% CI 0.34 to 2.03), or stillbirth or death before discharge (five trials, 1025 babies, RR 0.48, 95% CI 0.14 to 1.67).\nHigh-dose (=/> 1 g) versus low-dose (< 1 g) calcium supplementation\nWe included one trial with 262 women, the results of which should be interpreted with caution due to unclear risk of bias. Risk of pre-eclampsia appeared to be reduced in the high-dose group (RR 0.42, 95% CI 0.18 to 0.96). No other differences were found (preterm birth: RR 0.31, 95% CI 0.09 to 1.08; eclampsia: RR 0.32, 95% CI 0.07 to 1.53; stillbirth: RR 0.48, 95% CI 0.13 to 1.83).\nAuthors' conclusions\nHigh-dose calcium supplementation (≥ 1 g/day) may reduce the risk of pre-eclampsia and preterm birth, particularly for women with low calcium diets (low-quality evidence). The treatment effect may be overestimated due to small-study effects or publication bias. It reduces the occurrence of the composite outcome 'maternal death or serious morbidity', but not stillbirth or neonatal high care admission. There was an increased risk of HELLP syndrome with calcium supplementation, which was small in absolute numbers.\nThe limited evidence on low-dose calcium supplementation suggests a reduction in pre-eclampsia, hypertension and admission to neonatal high care, but needs to be confirmed by larger, high-quality trials.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence on 18 September 2017, and found 27 trials. We found evidence from 13 studies (involving 15,730 women) that calcium supplementation in high doses (at least 1 gram (g) daily) during pregnancy may be a safe way of reducing the risk of pre-eclampsia, especially in women from communities with low dietary calcium and those at increased risk of pre-eclampsia. Women receiving calcium supplements may also be less likely to die or have serious problems related to pre-eclampsia (low-quality evidence) and high blood pressure. Babies may be less likely to be born preterm (low-quality evidence). The syndrome of haemolysis, elevated liver enzymes and low platelets was increased with calcium, but the absolute numbers were small (high-quality evidence). High-dose calcium did not have a clear effect on babies admitted to neonatal intensive care, or the number of stillbirths or deaths before discharge from hospital.\nFurther research is needed into the ideal dosage of supplementation. Limited evidence from 12 trials (2334 women) suggested that a relatively low dose of calcium may be effective in reducing pre-eclampsia, high blood pressure, and babies admitted to intensive care (however, the quality of the evidence on calcium alone was reduced because eight of the included trials gave other medicines alongside calcium, such as vitamin D, linoleic acid or antioxidants). Low-dose calcium did not have a clear effect on preterm birth, stillbirth or death before discharge from hospital.\nOne small study compared high-dose calcium with low-dose calcium. Pre-eclampsia appeared to be reduced in the high-dose group, but no other differences were found in preterm birth, or stillbirth."
}
]
|
query-laysum | 5563 | [
{
"role": "user",
"content": "Abstract: Background\nBetween 6% and 15% of neonates develop hyperbilirubinaemia requiring treatment. Successful management of neonatal hyperbilirubinaemia relies on prevention and early treatment, with phototherapy being the mainstay of treatment. Oral zinc has been reported to decrease the serum total bilirubin (STB), presumably by decreasing the enterohepatic circulation.\nObjectives\nTo determine the effect of oral zinc supplementation compared to placebo or no treatment on the incidence of hyperbilirubinaemia in neonates during the first week of life and to assess the safety of oral zinc in enrolled neonates.\nSearch methods\nWe searched CENTRAL (The Cochrane Library 2014, Issue 1), MEDLINE (1966 to November 30, 2014), and EMBASE (1990 to November 30, 2014).\nSelection criteria\nRandomised controlled trials were eligible for inclusion if they enrolled neonates (term and preterm) to whom oral zinc, in a dose of 10 to 20 mg/day, was initiated within the first 96 hours of life, for any duration until day seven, compared with no treatment or placebo.\nData collection and analysis\nWe used the standard methods of The Cochrane Collaboration and its Neonatal Review Group for data collection and analysis.\nMain results\nOnly one study met the criteria of inclusion in the review. This study compared oral zinc with placebo. Oral zinc was administered in a dose of 5 mL twice daily from day 2 to day 7 postpartum. The drug was administered into the mouth of the infant by the plastic measure provided with the bottle or with a spoon. Incidence of hyperbilirubinaemia, defined as serum total bilirubin (STB) ≥ 15 mg/dL, was similar between groups (N = 286; risk ratio (RR) 0.94, 95% confidence interval (CI) 0.58 to 1.52). Mean STB levels, mg/dL, at 72 ± 12 hours were comparable in both the groups (N = 286; mean difference (MD) -0.20; 95% CI -1.03 to 0.63). Although the duration of phototherapy in the zinc group was significantly shorter compared to the placebo group (N = 286; MD -12.80, 95% CI -16.93 to -8.67), the incidence of need for phototherapy was comparable across both the groups (N = 286; RR 1.20; 95% CI 0.66 to 2.18). Incidences of side effects like vomiting (N = 286; RR 0.65, 95% CI 0.19 to 2.25), diarrhoea (N = 286; RR 2.92, 95% CI 0.31 to 27.71), and rash (N = 286; RR 2.92, 95% CI 0.12 to 71.03) were found to be rare and statistically comparable between groups.\nAuthors' conclusions\nThe limited evidence available has not shown that oral zinc supplementation given to infants up to one week old reduces the incidence of hyperbilirubinaemia or need for phototherapy.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In newborn infants less than one week old, does oral zinc salt supplementation compared to placebo or no treatment decrease the incidence of hyperbilirubinaemia (jaundice)?"
}
]
|
query-laysum | 5564 | [
{
"role": "user",
"content": "Abstract: Background\nInvasive fungal infections are important causes of morbidity and mortality among critically ill patients. Early institution of antifungal therapy is pivotal for mortality reduction. Starting a targeted antifungal therapy after culture positivity and fungi identification requires a long time. Therefore, alternative strategies (globally defined as 'untargeted antifungal treatments') for antifungal therapy institution in patients without proven microbiological evidence of fungal infections have been discussed by international guidelines. This review was originally published in 2006 and updated in 2016. This updated review provides additional evidence for the clinician dealing with suspicion of fungal infection in critically ill, non-neutropenic patients, taking into account recent findings in this field.\nObjectives\nTo assess the effects of untargeted treatment with any antifungal drug (either systemic or nonabsorbable) compared to placebo or no antifungal or any other antifungal drug (either systemic or nonabsorbable) in non-neutropenic, critically ill adults and children. We assessed effectiveness in terms of total (all-cause) mortality and incidence of proven invasive fungal infections as primary outcomes.\nSearch methods\nWe searched the following databases to February 2015: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (OVID), and EMBASE (OVID). We also searched reference lists of identified studies and major reviews, abstracts of conference proceedings, scientific meetings and clinical trials registries. We contacted experts in the field, study authors and pharmaceutical companies as part of the search strategy.\nSelection criteria\nWe included randomized controlled trials (RCTs) (irrespective of language or publication status) comparing the use of untargeted treatment with any antifungal drug (either systemic or nonabsorbable) to placebo, no antifungal, or another antifungal agent in non-neutropenic critically ill participants.\nData collection and analysis\nThree authors independently applied selection criteria, extracted data and assessed the risk of bias. We resolved any discrepancies by discussion. We synthesized data using the random-effects model and expressed the results as risk ratios (RR) with 95% confidence intervals. We assessed overall evidence quality using the GRADE approach.\nMain results\nWe included 22 studies (total of 2761 participants). Of those 22 studies, 12 were included in the original published review and 10 were newly identified. Eleven trials compared the use of fluconazole to placebo or no antifungal treatment. Three trials compared ketoconazole versus placebo. One trial compared anidulafungin with placebo. One trial compared caspofungin to placebo. Two trials compared micafungin to placebo. One trial compared amphotericin B to placebo. Two trials compared nystatin to placebo and one trial compared the effect of clotrimazole, ketoconazole, nystatin and no treatment. We found two new ongoing studies and four new studies awaiting classification. The RCTs included participants of both genders with wide age range, severity of critical illness and clinical characteristics. Funding sources from pharmaceutical companies were reported in 11 trials and one trial reported funding from a government agency. Most of the studies had an overall unclear risk of bias for key domains of this review (random sequence generation, allocation concealment, incomplete outcome data). 
Two studies had a high risk of bias for key domains. Regarding the other domains (blinding of participants and personnel, outcome assessment, selective reporting, other bias), most of the studies had a low or unclear risk but four studies had a high risk of bias.\nThere was moderate grade evidence that untargeted antifungal treatment did not significantly reduce or increase total (all-cause) mortality (RR 0.93, 95% CI 0.79 to 1.09, P value = 0.36; participants = 2374; studies = 19). With regard to the outcome of proven invasive fungal infection, there was low grade evidence that untargeted antifungal treatment significantly reduced the risk (RR 0.57, 95% CI 0.39 to 0.83, P value = 0.0001; participants = 2024; studies = 17). The risk of fungal colonization was significantly reduced (RR 0.71, 95% CI 0.52 to 0.97, P value = 0.03; participants = 1030; studies = 12) but the quality of evidence was low. There was no difference in the risk of developing superficial fungal infection (RR 0.69, 95% CI 0.37 to 1.29, P value = 0.24; participants = 662; studies = 5; low grade of evidence) or in adverse events requiring cessation of treatment between the untargeted treatment group and the other group (RR 0.89, 95% CI 0.62 to 1.27, P value = 0.51; participants = 1691; studies = 11; low quality of evidence). The quality of evidence for the outcome of total (all-cause) mortality was moderate due to limitations in study design. The quality of evidence for the outcome of invasive fungal infection, superficial fungal infection, fungal colonization and adverse events requiring cessation of therapy was low due to limitations in study design, non-optimal total population size, risk of publication bias, and heterogeneity across studies.\nAuthors' conclusions\nThere is moderate quality evidence that the use of untargeted antifungal treatment is not associated with a significant reduction in total (all-cause) mortality among critically ill, non-neutropenic adults and children compared to no antifungal treatment or placebo. The untargeted antifungal treatment may be associated with a reduction of invasive fungal infections but the quality of evidence is low, and both the heterogeneity and risk of publication bias is high.\nFurther high-quality RCTs are needed to improve the strength of the evidence, especially for more recent and less studied drugs (e.g. echinocandins). Future trials should adopt standardized definitions for microbiological outcomes (e.g. invasive fungal infection, colonization) to reduce heterogeneity. Emergence of resistance to antifungal drugs should be considered as outcome in studies investigating the effects of untargeted antifungal treatment to balance risks and benefit.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of giving antifungal medications before a definitive diagnosis of fungal infections on mortality from all causes and development of severe infections due to fungi (invasive fungal infections) in adults and children who are critically ill but non-neutropenic, i.e. with a normal number of neutrophils in their blood."
}
]
|
query-laysum | 5565 | [
{
"role": "user",
"content": "Abstract: Background\nOrganised inpatient (stroke unit) care is provided by multi-disciplinary teams that manage stroke patients. This can been provided in a ward dedicated to stroke patients (stroke ward), with a peripatetic stroke team (mobile stroke team), or within a generic disability service (mixed rehabilitation ward). Team members aim to provide co-ordinated multi-disciplinary care using standard approaches to manage common post-stroke problems.\nObjectives\n• To assess the effects of organised inpatient (stroke unit) care compared with an alternative service.\n• To use a network meta-analysis (NMA) approach to assess different types of organised inpatient (stroke unit) care for people admitted to hospital after a stroke (the standard comparator was care in a general ward).\nOriginally, we conducted this systematic review to clarify:\n• The characteristic features of organised inpatient (stroke unit) care?\n• Whether organised inpatient (stroke unit) care provide better patient outcomes than alternative forms of care?\n• If benefits are apparent across a range of patient groups and across different approaches to delivering organised stroke unit care?\nWithin the current version, we wished to establish whether previous conclusions were altered by the inclusion of new outcome data from recent trials and further analysis via NMA.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (2 April 2019); the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 4), in the Cochrane Library (searched 2 April 2019); MEDLINE Ovid (1946 to 1 April 2019); Embase Ovid (1974 to 1 April 2019); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 2 April 2019). In an effort to identify further published, unpublished, and ongoing trials, we searched seven trial registries (2 April 2019). We also performed citation tracking of included studies, checked reference lists of relevant articles, and contacted trialists.\nSelection criteria\nRandomised controlled clinical trials comparing organised inpatient stroke unit care with an alternative service (typically contemporary conventional care), including comparing different types of organised inpatient (stroke unit) care for people with stroke who are admitted to hospital.\nData collection and analysis\nTwo review authors assessed eligibility and trial quality. We checked descriptive details and trial data with co-ordinators of the original trials, assessed risk of bias, and applied GRADE. The primary outcome was poor outcome (death or dependency (Rankin score 3 to 5) or requiring institutional care) at the end of scheduled follow-up. Secondary outcomes included death, institutional care, dependency, subjective health status, satisfaction, and length of stay. We used direct (pairwise) comparisons to compare organised inpatient (stroke unit) care with an alternative service. 
We used an NMA to confirm the relative effects of different approaches.\nMain results\nWe included 29 trials (5902 participants) that compared organised inpatient (stroke unit) care with an alternative service: 20 trials (4127 participants) compared organised (stroke unit) care with a general ward, six trials (982 participants) compared different forms of organised (stroke unit) care, and three trials (793 participants) incorporated more than one comparison.\nCompared with the alternative service, organised inpatient (stroke unit) care was associated with improved outcomes at the end of scheduled follow-up (median one year): poor outcome (odds ratio (OR) 0.77, 95% confidence interval (CI) 0.69 to 0.87; moderate-quality evidence), death (OR 0.76, 95% CI 0.66 to 0.88; moderate-quality evidence), death or institutional care (OR 0.76, 95% CI 0.67 to 0.85; moderate-quality evidence), and death or dependency (OR 0.75, 95% CI 0.66 to 0.85; moderate-quality evidence). Evidence was of very low quality for subjective health status and was not available for patient satisfaction. Analysis of length of stay was complicated by variations in definition and measurement plus substantial statistical heterogeneity (I² = 85%). There was no indication that organised stroke unit care resulted in a longer hospital stay. Sensitivity analyses indicated that observed benefits remained when the analysis was restricted to securely randomised trials that used unequivocally blinded outcome assessment with a fixed period of follow-up. Outcomes appeared to be independent of patient age, sex, initial stroke severity, stroke type, and duration of follow-up. When calculated as the absolute risk difference for every 100 participants receiving stroke unit care, this equates to two extra survivors, six more living at home, and six more living independently.\nThe analysis of different types of organised (stroke unit) care used both direct pairwise comparisons and NMA.\nDirect comparison of stroke ward versus general ward: 15 trials (3523 participants) compared care in a stroke ward with care in general wards. Stroke ward care showed a reduction in the odds of a poor outcome at the end of follow-up (OR 0.78, 95% CI 0.68 to 0.91; moderate-quality evidence).\nDirect comparison of mobile stroke team versus general ward: two trials (438 participants) compared care from a mobile stroke team with care in general wards. Stroke team care may result in little difference in the odds of a poor outcome at the end of follow-up (OR 0.80, 95% CI 0.52 to 1.22; low-quality evidence).\nDirect comparison of mixed rehabilitation ward versus general ward: six trials (630 participants) compared care in a mixed rehabilitation ward with care in general wards. Mixed rehabilitation ward care showed a reduction in the odds of a poor outcome at the end of follow-up (OR 0.65, 95% CI 0.47 to 0.90; moderate-quality evidence).\nIn a NMA using care in a general ward as the comparator, the odds of a poor outcome were as follows: stroke ward - OR 0.74, 95% CI 0.62 to 0.89, moderate-quality evidence; mobile stroke team - OR 0.88, 95% CI 0.58 to 1.34, low-quality evidence; mixed rehabilitation ward - OR 0.70, 95% CI 0.52 to 0.95, low-quality evidence.\nAuthors' conclusions\nWe found moderate-quality evidence that stroke patients who receive organised inpatient (stroke unit) care are more likely to be alive, independent, and living at home one year after the stroke. 
The apparent benefits were independent of patient age, sex, initial stroke severity, or stroke type, and were most obvious in units based in a discrete stroke ward. We observed no systematic increase in the length of inpatient stay, but these findings had considerable uncertainty.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with stroke who receive organised inpatient (stroke unit) care are more likely to be alive, living at home, and independent in looking after themselves one year after their stroke. Apparent benefits were seen across a broad range of people with stroke. Various types of stroke units have been developed. The best results appear to come from stroke units based in a dedicated stroke ward."
}
]
|
query-laysum | 5566 | [
{
"role": "user",
"content": "Abstract: Background\nStroke is the main cause of disability in high-income countries and ranks second as a cause of death worldwide. Infections occur frequently after stroke and may adversely affect outcome. Preventive antibiotic therapy in the acute phase of stroke may reduce the incidence of infections and improve outcome. In the previous version of this Cochrane Review, published in 2012, we found that antibiotics did reduce the risk of infection but did not reduce the number of dependent or deceased patients. However, included studies were small and heterogeneous. In 2015, two large clinical trials were published, warranting an update of this Review.\nObjectives\nTo assess the effectiveness and safety of preventive antibiotic therapy in people with ischaemic or haemorrhagic stroke. We wished to determine whether preventive antibiotic therapy in people with acute stroke:\n• reduces the risk of a poor functional outcome (dependency and/or death) at follow-up;\n• reduces the occurrence of infections in the acute phase of stroke;\n• reduces the occurrence of elevated body temperature (temperature ≥ 38° C) in the acute phase of stroke;\n• reduces length of hospital stay; or\n• leads to an increased rate of serious adverse events, such as anaphylactic shock, skin rash, or colonisation with antibiotic-resistant micro-organisms.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (25 June 2017); the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 5; 25 June 2017) in the Cochrane Library; MEDLINE Ovid (1950 to 11 May 2017), and Embase Ovid (1980 to 11 May 2017). In an effort to identify further published, unpublished, and ongoing trials, we searched trials and research registers, scanned reference lists, and contacted trial authors, colleagues, and researchers in the field.\nSelection criteria\nRandomised controlled trials (RCTs) of preventive antibiotic therapy versus control (placebo or open control) in people with acute ischaemic or haemorrhagic stroke.\nData collection and analysis\nTwo review authors independently selected articles and extracted data; we discussed and resolved discrepancies at a consensus meeting with a third review author. We contacted study authors to obtain missing data when required. An independent review author assessed risk of bias using the Cochrane 'Risk of bias' tool. We calculated risk ratios (RRs) for dichotomous outcomes, assessed heterogeneity amongst included studies, and performed subgroup analyses on study quality.\nMain results\nWe included eight studies involving 4488 participants. Regarding quality of evidence, trials showed differences in study population, study design, type of antibiotic, and definition of infection; however, primary outcomes among the included studies were consistent. Mortality rate in the preventive antibiotic group was not significantly different from that in the control group (373/2208 (17%) vs 360/2214 (16%); RR 1.03, 95% confidence interval (CI) 0.87 to 1.21; high-quality evidence). The number of participants with a poor functional outcome (death or dependency) in the preventive antibiotic therapy group was also not significantly different from that in the control group (1158/2168 (53%) vs 1182/2164 (55%); RR 0.99, 95% CI 0.89 to 1.10; moderate-quality evidence). 
However, preventive antibiotic therapy did significantly reduce the incidence of 'overall' infections in participants with acute stroke from 26% to 19% (408/2161 (19%) vs 558/2156 (26%); RR 0.71, 95% CI 0.58 to 0.88; high-quality evidence). This finding was highly significant for urinary tract infections (81/2131 (4%) vs 204/2126 (10%); RR 0.40, 95% CI 0.32 to 0.51; high-quality evidence), whereas no preventive effect for pneumonia was found (222/2131 (10%) vs 235/2126 (11%); RR 0.95, 95% CI 0.80 to 1.13; high-quality evidence). No major side effects of preventive antibiotic therapy were reported. Only two studies qualitatively assessed the occurrence of elevated body temperature; therefore, these results could not be pooled. Only one study reported length of hospital stay.\nAuthors' conclusions\nPreventive antibiotics had no effect on functional outcome or mortality, but significantly reduced the risk of 'overall' infections. This reduction was driven mainly by prevention of urinary tract infection; no effect for pneumonia was found.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It has become possible to draw first 'overall' conclusions on the net effect of preventive antibiotic therapy in stroke; however, the decision of whether to use preventive antibiotic therapy in acute stroke should be reached with care. Studies were heterogeneous, and despite the large numbers of participants, results from a total of eight studies are limited. In two of these studies, risk of bias was considered to be high for three out of six criteria. Overall, reviewers considered the quality of evidence for the main outcomes of this review - looking at 'any' preventive antibiotic therapy, in 'any' dose, at any length of treatment - as high to moderate."
}
]
|
query-laysum | 5567 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a previous Cochrane Review published in 2012, Issue 9.\nSurgery for endometrial cancer (hysterectomy with removal of both fallopian tubes and ovaries) is performed through laparotomy. It has been suggested that the laparoscopic approach is associated with a reduction in operative morbidity. Over the last two decades there has been a steady increase of the use of laparoscopy for endometrial cancer. This review investigated the evidence of benefits and harms of laparoscopic surgery compared with laparotomy for presumed early stage endometrial cancer.\nObjectives\nTo compare overall survival (OS) and disease free survival (DFS) for laparoscopic surgery versus laparotomy in women with presumed early stage endometrial cancer.\nSearch methods\nFor this update, we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 5) in the Cochrane Library, MEDLINE via Ovid (April 2012 to June 2018) and Embase via Ovid (April 2012 to June 2018). We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of included studies. The trial registers included NHMRC Clinical Trials Register, UKCCCR Register of Cancer Trials, Meta-Register and Physician Data Query Protocol.\nSelection criteria\nRandomised controlled trials (RCTs) comparing laparoscopy and laparotomy for early stage endometrial cancer.\nData collection and analysis\nWe independently abstracted data and assessed risk of bias. We used hazard ratios (HRs) for OS and recurrence free survival (RFS), risk ratios (RR) for severe adverse events and mean differences (MD) for continuous outcomes in women who received laparoscopy or laparotomy with 9% confidence intervals (CI). These were pooled in random-effects meta-analyses.\nMain results\nWe identified one new study in this update of the review. The review contains nine RCTs comparing laparoscopy with laparotomy for the surgical management of early stage endometrial cancer.\nAll nine studies met the inclusion criteria and assessed 4389 women at the end of the studies. Six studies assessing 3993 participants with early stage endometrial cancer found no significant difference in the risk of death between women who underwent laparoscopy and women who underwent laparotomy (HR 1.04, 95% 0.86 to 1.25; moderate-certainty evidence) and five studies assessing 3710 participants found no significant difference in the risk of recurrence between the laparoscopy and laparotomy groups (HR 1.14, 95% CI 0.90 to 1.43; moderate-certainty evidence). There was no significant difference in the rate of perioperative death; women requiring a blood transfusion; and bladder, ureteric, bowel and vascular injury. However, one meta-analysis of three studies found that women in the laparoscopy group lost significantly less blood than women in the laparotomy group (MD –106.82 mL, 95% CI –141.59 to –72.06; low-certainty evidence). A further meta-analysis of two studies, which assessed 3344 women and included one very large trial of over 2500 participants, found that there was no clinical difference in the risk of severe postoperative complications in women in the laparoscopy and laparotomy groups (RR 0.78, 95% CI 0.44 to 1.38). Most studies were at moderate risk of bias. 
All nine studies reported hospital stay and results showed that on average, laparoscopy was associated with a significantly shorter hospital stay.\nAuthors' conclusions\nThis review found low to moderate-certainty evidence to support the role of laparoscopy for the management of early endometrial cancer. For presumed early stage primary endometrioid adenocarcinoma of the endometrium, laparoscopy is associated with similar OS and DFS. Furthermore, laparoscopy is associated with reduced operative morbidity and hospital stay. There is no significant difference in severe postoperative morbidity between the two modalities.\nThe certainty of evidence for OS and RFS was moderate and was downgraded for unclear risk of bias profiles and imprecision in effect estimates. However, most studies used adequate methods of sequence generation and concealment of allocation so studies were not prone to selection bias. Adverse event outcomes were downgraded for the same reasons and additionally for low event rates and low power thus these outcomes provided low-certainty evidence.\n\nGiven the provided abstract, please respond to the following query: \"What were the conclusions?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review update confirms the findings of the previous review that laparoscopy (keyhole) is an effective and viable alternative to laparotomy (open surgery) in the treatment of early stage endometrial cancer. With regards to long term survival outcomes, treatment by laparoscopy is comparable to laparotomy."
}
]
|
query-laysum | 5568 | [
{
"role": "user",
"content": "Abstract: Background\nSystemic lupus erythematosus (SLE) is a rare, chronic autoimmune inflammatory disease with a prevalence varying from 4.3 to 150 people in 100,000, or approximately five million people worldwide. Systemic manifestations frequently include internal organ involvement, a characteristic malar rash on the face, pain in joints and muscles, and profound fatigue. Exercise is purported to be beneficial for people with SLE. For this review, we focused on studies that examined all types of structured exercise as an adjunctive therapy in the management of SLE.\nObjectives\nTo evaluate the benefits and harms of structured exercise as adjunctive therapy for adults with SLE compared with usual pharmacological care, usual pharmacological care plus placebo and usual pharmacological care plus non-pharmacological care.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 30 March 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) of exercise as an adjunct to usual pharmacological treatment in SLE compared with placebo, usual pharmacological care alone and another non-pharmacological treatment. Major outcomes were fatigue, functional capacity, disease activity, quality of life, pain, serious adverse events, and withdrawals due to any reason, including any adverse events.\nData collection and analysis\nWe used standard Cochrane methods. Our major outcomes were 1. fatigue, 2. functional capacity, 3. disease activity, 4. quality of life, 5. pain, 6. serious adverse events, and 7. withdrawals due to any reason. Our minor outcomes were 8. responder rate, 9. aerobic fitness, 10. depression, and 11. anxiety. We used GRADE to assess certainty of evidence. The primary comparison was exercise compared with placebo.\nMain results\nWe included 13 studies (540 participants) in this review. Studies compared exercise as an adjunct to usual pharmacological care (antimalarials, immunosuppressants, and oral glucocorticoids) with usual pharmacological care plus placebo (one study); usual pharmacological care (six studies); and another non-pharmacological treatment such as relaxation therapy (seven studies). Most studies had selection bias, and all studies had performance and detection bias. We downgraded the evidence for all comparisons because of a high risk of bias and imprecision.\nExercise plus usual pharmacological care versus placebo plus usual pharmacological care\nEvidence from a single small study (17 participants) that compared whole body vibration exercise to whole body placebo vibration exercise (vibrations switched off) indicated that exercise may have little to no effect on fatigue, functional capacity, and pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). The study did not report disease activity, quality of life, and serious adverse events.\nThe study measured fatigue using the self-reported Functional Assessment of Chronic Illness Therapy – Fatigue (FACIT-Fatigue), scale 0 to 52; lower score means less fatigue. People who did not exercise rated their fatigue at 38 points and those who did exercise rated their fatigue at 33 points (mean difference (MD) 5 points lower, 95% confidence interval (CI) 13.29 lower to 3.29 higher).\nThe study measured functional capacity using the self-reported 36-item Short Form health questionnaire (SF-36) Physical Function domain, scale 0 to 100; higher score means better function. 
People who did not exercise rated their functional capacity at 70 points and those who did exercise rated their functional capacity at 67.5 points (MD 2.5 points lower, 95% CI 23.78 lower to 18.78 higher).\nThe study measured pain using the SF-36 Pain domain, scale 0 to 100; lower scores mean less pain. People who did not exercise rated their pain at 43 points and those who did exercise rated their pain at 34 points (MD 9 points lower, 95% CI 28.88 lower to 10.88 higher).\nMore participants from the exercise group (3/11, 27%) withdrew from the study than the placebo group (1/10, 10%) (risk ratio (RR) 2.73, 95% CI 0.34 to 22.16).\nExercise plus usual pharmacological care versus usual pharmacological care alone\nThe addition of exercise to usual pharmacological care may have little to no effect on fatigue, functional capacity, and disease activity (low-certainty evidence). We are uncertain whether the addition of exercise improves pain (very low-certainty evidence), or results in fewer or more withdrawals (very low-certainty evidence). Serious adverse events and quality of life were not reported.\nExercise plus usual care versus another non-pharmacological intervention such as receiving information about the disease or relaxation therapy\nCompared with education or relaxation therapy, exercise may reduce fatigue slightly (low-certainty evidence), may improve functional capacity (low-certainty evidence), probably results in little to no difference in disease activity (moderate-certainty evidence), and may result in little to no difference in pain (low-certainty evidence). We are uncertain whether exercise results in fewer or more withdrawals (very low-certainty evidence). Quality of life and serious adverse events were not reported.\nAuthors' conclusions\nDue to low- to very low-certainty evidence, we are not confident on the benefits of exercise on fatigue, functional capacity, disease activity, and pain, compared with placebo, usual care, or advice and relaxation therapy. Harms data were not well reported.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if exercise in addition to usual care improved fatigue, functional capacity (ability to perform normal everyday tasks), quality of life, pain, and disease activity, and caused no harm."
}
]
|
query-laysum | 5569 | [
{
"role": "user",
"content": "Abstract: Background\nFracture of the distal radius is a common clinical problem, particularly in older people with osteoporosis. There is considerable variation in the management, including rehabilitation, of these fractures. This is an update of a Cochrane review first published in 2002 and last updated in 2006.\nObjectives\nTo examine the effects of rehabilitation interventions in adults with conservatively or surgically treated distal radial fractures.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL 2014; Issue 12), MEDLINE, EMBASE, CINAHL, AMED, PEDro, OTseeker and other databases, trial registers, conference proceedings and reference lists of articles. We did not apply any language restrictions. The date of the last search was 12 January 2015.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs evaluating rehabilitation as part of the management of fractures of the distal radius sustained by adults. Rehabilitation interventions such as active and passive mobilisation exercises, and training for activities of daily living, could be used on their own or in combination, and be applied in various ways by various clinicians.\nData collection and analysis\nThe review authors independently screened and selected trials, and reviewed eligible trials. We contacted study authors for additional information. We did not pool data.\nMain results\nWe included 26 trials, involving 1269 mainly female and older patients. With few exceptions, these studies did not include people with serious fracture or treatment-related complications, or older people with comorbidities and poor overall function that would have precluded trial participation or required more intensive treatment. Only four of the 23 comparisons covered by these 26 trials were evaluated by more than one trial. Participants of 15 trials were initially treated conservatively, involving plaster cast immobilisation. Initial treatment was surgery (external fixation or internal fixation) for all participants in five trials. Initial treatment was either surgery or plaster cast alone in six trials. Rehabilitation started during immobilisation in seven trials and after post-immobilisation in the other 19 trials. As well as being small, the majority of the included trials had methodological shortcomings and were at high risk of bias, usually related to lack of blinding, that could affect the validity of their findings. Based on GRADE criteria for assessment quality, we rated the evidence for each of the 23 comparisons as either low or very low quality; both ratings indicate considerable uncertainty in the findings.\nFor interventions started during immobilisation, there was very low quality evidence of improved hand function for hand therapy compared with instructions only at four days after plaster cast removal, with some beneficial effects continuing one month later (one trial, 17 participants). 
There was very low quality evidence of improved hand function in the short-term, but not in the longer-term (three months), for early occupational therapy (one trial, 40 participants), and of a lack of differences in outcome between supervised and unsupervised exercises (one trial, 96 participants).\nFour trials separately provided very low quality evidence of clinically marginal benefits of specific interventions applied in addition to standard care (therapist-applied programme of digit mobilisation during external fixation (22 participants); pulsed electromagnetic field (PEMF) during cast immobilisation (60 participants); cyclic pneumatic soft tissue compression using an inflatable cuff placed under the plaster cast (19 participants); and cross-education involving strength training of the non-fractured hand during cast immobilisation with or without surgical repair (39 participants)).\nFor interventions started post-immobilisation, there was very low quality evidence from one study (47 participants) of improved function for a single session of physiotherapy, primarily advice and instructions for a home exercise programme, compared with 'no intervention' after cast removal. There was low quality evidence from four heterogeneous trials (30, 33, 66 and 75 participants) of a lack of clinically important differences in outcome in patients receiving routine physiotherapy or occupational therapy in addition to instructions for home exercises versus instructions for home exercises from a therapist. There was very low quality evidence of better short-term hand function in participants given physiotherapy than in those given either instructions for home exercises by a surgeon (16 participants, one trial) or a progressive home exercise programme (20 participants, one trial). Both trials (46 and 76 participants) comparing physiotherapy or occupational therapy versus a progressive home exercise programme after volar plate fixation provided low quality evidence in favour of a structured programme of home exercises preceded by instructions or coaching. One trial (63 participants) provided very low quality evidence of a short-term, but not persisting, benefit of accelerated compared with usual rehabilitation after volar plate fixation.\nFor trials testing single interventions applied post-immobilisation, there was very low quality evidence of no clinically significant differences in outcome in patients receiving passive mobilisation (69 participants, two trials), ice (83 participants, one trial), PEMF (83 participants, one trial), PEMF plus ice (39 participants, one trial), whirlpool immersion (24 participants, one trial), and dynamic extension splint for patients with wrist contracture (40 participants, one trial), compared with no intervention. This finding applied also to the trial (44 participants) comparing PEMF versus ice, and the trial (29 participants) comparing manual oedema mobilisation versus traditional oedema treatment. There was very low quality evidence from single trials of a short-term benefit of continuous passive motion post-external fixation (seven participants), intermittent pneumatic compression (31 participants) and ultrasound (38 participants).\nAuthors' conclusions\nThe available evidence from RCTs is insufficient to establish the relative effectiveness of the various interventions used in the rehabilitation of adults with fractures of the distal radius. Further randomised trials are warranted. 
However, in order to optimise research effort and engender the large multicentre randomised trials that are required to inform practice, these should be preceded by research that aims to identify priority questions.\n\nGiven the provided abstract, please respond to the following query: \"Search results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the scientific literature up to January 2015 and found 26 randomised controlled studies, involving 1269 mainly female and older patients. Only four of the 23 treatment comparisons covered by these 26 studies were tested by more than one study. Participants of 15 studies were initially treated with plaster cast immobilisation. Some or all participants in the other 11 studies were treated with surgery. In seven studies, the rehabilitation intervention being tested started during wrist immobilisation. In the other 19 studies, rehabilitation started when the cast had been removed.\nAll studies were small and were designed in a way that may affect the reliability of their findings. Most studies did not report on patient-reported outcome measures of function and did not follow up patients for long enough. We judged the quality of the reported evidence as either low or very low and thus we are not confident that the results described below are true."
}
]
|
query-laysum | 5570 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease (SCD) is a group of inherited disorders of haemoglobin (Hb) structure in a person who has inherited two mutant globin genes (one from each parent), at least one of which is always the sickle mutation. It is estimated that between 5% and 7% of the world's population are carriers of the mutant Hb gene, and SCD is the most commonly inherited blood disorder.\nSCD is characterized by distorted sickle-shaped red blood cells. Manifestations of the disease are attributed to either haemolysis (premature red cell destruction) or vaso-occlusion (obstruction of blood flow, the most common manifestation). Shortened lifespans are attributable to serious comorbidities associated with the disease, including renal failure, acute cholecystitis, pulmonary hypertension, aplastic crisis, pulmonary embolus, stroke, acute chest syndrome, and sepsis.\nVaso-occlusion can lead to an acute, painful crisis (sickle cell crisis, vaso-occlusive crisis (VOC) or vaso-occlusive episode). Pain is most often reported in the joints, extremities, back or chest, but it can occur anywhere and can last for several days or weeks. The bone and muscle pain experienced during a sickle cell crisis is both acute and recurrent.\nKey pharmacological treatments for VOC include opioid analgesics, non-opioid analgesics, and combinations of drugs. Non-pharmacological approaches, such as relaxation, hypnosis, heat, ice and acupuncture, have been used in conjunction to rehydrating the patient and reduce the sickling process.\nObjectives\nTo assess the analgesic efficacy and adverse events of pharmacological interventions to treat acute painful sickle cell vaso-occlusive crises in adults, in any setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online, MEDLINE via Ovid, Embase via Ovid and LILACS, from inception to September 2019. We also searched the reference lists of retrieved studies and reviews, and searched online clinical trial registries.\nSelection criteria\nRandomized, controlled, double-blind trials of pharmacological interventions, of any dose and by any route, compared to placebo or any active comparator, for the treatment (not prevention) of painful sickle cell VOC in adults.\nData collection and analysis\nThree review authors independently assessed studies for eligibility. We planned to use dichotomous data to calculate risk ratio (RR) and number needed to treat for one additional event, using standard methods. Our primary outcomes were participant-reported pain relief of 50%, or 30%, or greater; Patient Global Impression of Change (PGIC) very much improved, or much or very much improved. Our secondary outcomes included adverse events, serious adverse events, and withdrawals due to adverse events. We assessed GRADE and created three 'Summary of findings' tables.\nMain results\nWe included nine studies with data for 638 VOC events and 594 participants aged 17 to 42 years with SCD presenting to a hospital emergency department in a painful VOC. Three studies investigated a non-steroidal anti-inflammatory drug (NSAID) compared to placebo. One study compared an opioid with a placebo, two studies compared an opioid with an active comparator, two studies compared an anticoagulant with a placebo, and one study compared a combination of three drugs with a combination of four drugs.\nRisk of bias across the nine studies varied. 
Studies were primarily at an unclear risk of selection, performance, and detection bias. Studies were primarily at a high risk of bias for size with fewer than 50 participants per treatment arm; two studies had 50 to 199 participants per treatment arm (unclear risk).\nNon-steroidal anti-inflammatory drugs (NSAID) compared with placebo\nNo data were reported regarding participant-reported pain relief of 50% or 30% or greater.\nThe efficacy was uncertain regarding PGIC very much improved, and PGIC much or very much improved (no difference; 1 study, 21 participants; very low-quality evidence).\nVery low-quality, uncertain results suggested similar rates of adverse events across both the NSAIDs group (16/45 adverse events, 1/56 serious adverse events, and 1/56 withdrawal due to adverse events) and the placebo group (19/45 adverse events, 2/56 serious adverse events, and 1/56 withdrawal due to adverse events).\nOpioids compared with placebo\nNo data were reported regarding participant-reported pain relief of 50% or 30%, PGIC, or adverse events (any adverse event, serious adverse events, and withdrawals due to adverse events).\nOpioids compared with active comparator\nNo data were reported regarding participant-reported pain relief of 50% or 30% or greater.\nThe results were uncertain regarding PGIC very much improved (33% of the opioids group versus 19% of the placebo group). No data were reported regarding PGIC much or very much improved.\nVery low-quality, uncertain results suggested similar rates of adverse events across both the opioids group (9/66 adverse events, and 0/66 serious adverse events) and the placebo group (7/64 adverse events, 0/66 serious adverse events). No data were reported regarding withdrawal due to adverse events.\nQuality of the evidence\nWe downgraded the quality of the evidence by three levels to very low-quality because there are too few data to have confidence in results (e.g. too few participants per treatment arm). Where no data were reported for an outcome, we had no evidence to support or refute (quality of the evidence is unknown).\nAuthors' conclusions\nThis review identified only nine studies, with insufficient data for all pharmacological interventions for analysis.\nThe available evidence is very uncertain regarding the efficacy or harm from pharmacological interventions used to treat pain related to sickle cell VOC in adults. This area could benefit most from more high quality, certain evidence, as well as the establishment of suitable registries which record interventions and outcomes for this group of people.\n\nGiven the provided abstract, please respond to the following query: \"Bottom line\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are uncertain which medicines provide the best pain relief for adults experiencing a painful sickle cell crisis."
}
]
|
query-laysum | 5571 | [
{
"role": "user",
"content": "Abstract: Background\nIt has been suggested that the severity of autism spectrum disorder (ASD) symptoms is positively correlated with the level of circulating or stored toxic metals, and that excretion of these heavy metals, brought about by the use of pharmaceutical chelating agents, results in improved symptoms.\nObjectives\nTo assess the potential benefits and adverse effects of pharmaceutical chelating agents (referred to as chelation therapy throughout this review) for autism spectrum disorder (ASD) symptoms.\nSearch methods\nWe searched the following databases on 6 November 2014: CENTRAL, Ovid MEDLINE, Ovid MEDLINE In-Process, Embase, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and 15 other databases, including three trials registers. In addition we checked references lists and contacted experts.\nSelection criteria\nAll randomised controlled trials of pharmaceutical chelating agents compared with placebo in individuals with ASD.\nData collection and analysis\nTwo review authors independently selected studies, assessed them for risk of bias and extracted relevant data. We did not conduct a meta-analysis, as only one study was included.\nMain results\nWe excluded nine studies because they were non-randomised trials or were withdrawn before enrolment. We included one study, which was conducted in two phases. During the first phase of the study, 77 children with ASD were randomly assigned to receive seven days of glutathione lotion or placebo lotion, followed by three days of oral dimercaptosuccinic acid (DMSA). Forty-nine children who were found to be high excreters of heavy metals during phase one continued on to phase two to receive three days of oral DMSA or placebo followed by 11 days off, with the cycle repeated up to six times. The second phase thus assessed the effectiveness of multiple doses of oral DMSA compared with placebo in children who were high excreters of heavy metals and who received a three-day course of oral DMSA. Overall, no evidence suggests that multiple rounds of oral DMSA had an effect on ASD symptoms.\nAuthors' conclusions\nThis review included data from only one study, which had methodological limitations. As such, no clinical trial evidence was found to suggest that pharmaceutical chelation is an effective intervention for ASD. Given prior reports of serious adverse events, such as hypocalcaemia, renal impairment and reported death, the risks of using chelation for ASD currently outweigh proven benefits. Before further trials are conducted, evidence that supports a causal link between heavy metals and autism and methods that ensure the safety of participants are needed.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The purpose of this review was to assess the evidence for the effects of pharmaceutical chelating agents for symptoms of ASD."
}
]
|
query-laysum | 5572 | [
{
"role": "user",
"content": "Abstract: Background\nCentral venous catheters (CVC) are associated with potentially dangerous complications such as thromboses, pericardial effusions, extravasation, and infections in neonates. Indwelling catheters are amongst the main risk factors for nosocomial infections. The use of skin antiseptics during the preparation for central catheter insertion may prevent catheter-related bloodstream infections (CRBSI) and central line-associated bloodstream infections (CLABSI). However, it is still not clear which antiseptic solution is the best to prevent infection with minimal side effects.\nObjectives\nTo systematically evaluate the safety and efficacy of different antiseptic solutions in preventing CRBSI and other related outcomes in neonates with CVC.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and trial registries up to 22 April 2022. We checked reference lists of included trials and systematic reviews that related to the intervention or population examined in this Cochrane Review.\nSelection criteria\nRandomised controlled trials (RCTs) or cluster-RCTs were eligible for inclusion in this review if they were performed in the neonatal intensive care unit (NICU), and were comparing any antiseptic solution (single or in combination) against any other type of antiseptic solution or no antiseptic solution or placebo in preparation for central catheter insertion. We excluded cross-over trials and quasi-RCTs.\nData collection and analysis\nWe used the standard methods from Cochrane Neonatal. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included three trials that had two different comparisons: 2% chlorhexidine in 70% isopropyl alcohol (CHG-IPA) versus 10% povidone-iodine (PI) (two trials); and CHG-IPA versus 2% chlorhexidine in aqueous solution (CHG-A) (one trial). A total of 466 neonates from level III NICUs were evaluated. All included trials were at high risk of bias. The certainty of the evidence for the primary and some important secondary outcomes ranged from very low to moderate. There were no included trials that compared antiseptic skin solutions with no antiseptic solution or placebo.\nCHG-IPA versus 10% PI\nCompared to PI, CHG-IPA may result in little to no difference in CRBSI (risk ratio (RR) 1.32, 95% confidence interval (CI) 0.53 to 3.25; risk difference (RD) 0.01, 95% CI −0.03 to 0.06; 352 infants, 2 trials, low-certainty evidence) and all-cause mortality (RR 0.88, 95% CI 0.46 to 1.68; RD −0.01, 95% CI −0.08 to 0.06; 304 infants, 1 trial, low-certainty evidence). The evidence is very uncertain about the effect of CHG-IPA on CLABSI (RR 1.00, 95% CI 0.07 to 15.08; RD 0.00, 95% CI −0.11 to 0.11; 48 infants, 1 trial; very low-certainty evidence) and chemical burns (RR 1.04, 95% CI 0.24 to 4.48; RD 0.00, 95% CI −0.03 to 0.03; 352 infants, 2 trials, very low-certainty evidence), compared to PI. Based on a single trial, infants receiving CHG-IPA appeared less likely to develop thyroid dysfunction compared to PI (RR 0.05, 95% CI 0.00 to 0.85; RD −0.06, 95% CI −0.10 to −0.02; number needed to treat for an additional harmful outcome (NNTH) 17, 95% CI 10 to 50; 304 infants). 
Neither of the two included trials assessed the outcome of premature central line removal or the proportion of infants or catheters with exit-site infection.\nCHG-IPA versus CHG-A\nThe evidence suggests CHG-IPA may result in little to no difference in the rate of proven CRBSI when applied on the skin of neonates prior to central line insertion (RR 0.80, 95% CI 0.34 to 1.87; RD −0.05, 95% CI −0.22 to 0.13; 106 infants, 1 trial, low-certainty evidence) and CLABSI (RR 1.14, 95% CI 0.34 to 3.84; RD 0.02, 95% CI −0.12 to 0.15; 106 infants, 1 trial, low-certainty evidence), compared to CHG-A. Compared to CHG-A, CHG-IPA probably results in little to no difference in premature catheter removal (RR 0.91, 95% CI 0.26 to 3.19; RD −0.01, 95% CI −0.15 to 0.13; 106 infants, 1 trial, moderate-certainty evidence) and chemical burns (RR 0.98, 95% CI 0.47 to 2.03; RD −0.01, 95% CI −0.20 to 0.18; 114 infants, 1 trial, moderate-certainty evidence). No trial assessed the outcome of all-cause mortality and the proportion of infants or catheters with exit-site infection.\nAuthors' conclusions\nBased on current evidence, compared to PI, CHG-IPA may result in little to no difference in CRBSI and mortality. The evidence is very uncertain about the effect of CHG-IPA on CLABSI and chemical burns. One trial showed a statistically significant increase in thyroid dysfunction with the use of PI compared to CHG-IPA.\nThe evidence suggests CHG-IPA may result in little to no difference in the rate of proven CRBSI and CLABSI when applied on the skin of neonates prior to central line insertion. Compared to CHG-A, CHG-IPA probably results in little to no difference in chemical burns and premature catheter removal.\nFurther trials that compare different antiseptic solutions are required, especially in low- and middle-income countries, before stronger conclusions can be made.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In trials comparing 2% chlorhexidine in 70% alcohol (CHG-IPA) and PI:\n • Compared to PI, the use of CHG-IPA may result in little to no difference in CRBSI (2 trials, 352 infants) and death (1 trial, 304 infants). It is unclear whether CHG-IPA reduces skin burns and CLABSI (2 trials, 352 infants).\n • The use of 10% PI could disturb the production of thyroid hormone (1 trial, 304 infants).\nCompared to water-based chlorhexidine, alcohol-based chlorhexidine may result in little to no difference in the rate of proven CRBSI and CLABSI and probably results in little to no difference in skin irritation (1 trial, 114 infants)."
}
]
|
query-laysum | 5573 | [
{
"role": "user",
"content": "Abstract: Background\nYoga is an ancient spiritual practice that originated in India and is currently accepted in the Western world as a form of relaxation and exercise. It has been of interest for people with schizophrenia as an alternative or adjunctive treatment.\nObjectives\nTo systematically assess the effects of yoga versus non-standard care for people with schizophrenia.\nSearch methods\nThe Information Specialist of the Cochrane Schizophrenia Group searched their specialised Trials Register (latest 30 March 2017), which is based on regular searches of MEDLINE, PubMed, Embase, CINAHL, BIOSIS, AMED, PsycINFO, and registries of clinical trials. We searched the references of all included studies. There are no language, date, document type, or publication status limitations for inclusion of records in the register.\nSelection criteria\nAll randomised controlled trials (RCTs) including people with schizophrenia and comparing yoga with non-standard care. We included trials that met our selection criteria and reported useable data.\nData collection and analysis\nThe review team independently selected studies, assessed quality, and extracted data. For binary outcomes, we calculated risk ratio (RR) and its 95% confidence interval (CI), on an intention-to-treat basis. For continuous data, we estimated the mean difference (MD) between groups and its 95% CI. We employed a fixed-effect models for analyses. We examined data for heterogeneity (I2 technique), assessed risk of bias for included studies, and created a 'Summary of findings’ table for seven main outcomes of interest using GRADE (Grading of Recommendations Assessment, Development and Evaluation).\nMain results\nWe were able to include six studies (586 participants). Non-standard care consisted solely of another type of exercise programme. All outcomes were short term (less than six months). There was a clear difference in the outcome leaving the study early (6 RCTs, n=586, RR 0.64 CI 0.49 to 0.83, medium quality evidence) in favour of the yoga group. There were no clear differences between groups for the remaining outcomes. These included mental state (improvement in Positive and Negative Syndrome Scale, 1 RCT, n=84, RR 0.81 CI 0.62 to 1.07, low quality evidence), social functioning (improvement in Social Occupational Functioning Scale, 1 RCT, n=84, RR 0.90 CI 0.78 to 1.04, low quality evidence), quality of life (mental health) (average change 36-Item Short Form Survey (SF-36) quality-of-life sub-scale, 1 RCT, n=69, MD -5.30 CI -17.78 to 7.18, low quality evidence), physical health, (average change WHOQOL-BREF physical-health sub-scale, 1 RCT, n=69, MD 9.22 CI -0.42 to 18.86, low quality evidence). Only one study reported adverse effects, finding no incidence of adverse events in either treatment group. There were a considerable number of missing outcomes, which included relapse, change in cognition, costs of care, effect on standard care, service intervention, disability, and activities of daily living.\nAuthors' conclusions\nWe found minimal differences between yoga and non-standard care, the latter consisting of another exercise comparator, which could be broadly considered aerobic exercise. Outcomes were largely based on single studies with limited sample sizes and short-term follow-up. 
Overall, many outcomes were not reported and evidence presented in this review is of low to moderate quality - too weak to indicate that yoga is superior or inferior to non-standard care control for management of people with schizophrenia.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Yoga comes from ancient India and involves physical postures and breathing exercises to promote balance between mind and body. Yoga has now been widely adopted as a method of relaxation and exercise, improving strength, flexibility, co-ordination, endurance, and breathing control and concentration. Yoga has also been shown to reduce stress and promote health and feelings of well-being. Yoga has been used as a complementary therapy for many health conditions, including improving blood pressure control as well as mental health conditions such as depression and anxiety disorders.\nSome research suggests that yoga could also be of benefit as an add-on treatment to reduce the complex symptoms of the serious mental illness schizophrenia (such as hearing voices, seeing things, lack of interest in people and activities, tiredness, loss of emotions and withdrawal), and improve the quality of life of people with schizophrenia. The effectiveness of yoga versus other available (non-drug and non-talking therapy) add-on treatments is under-researched."
}
]
|
query-laysum | 5574 | [
{
"role": "user",
"content": "Abstract: Background\nTobacco smoking is the leading preventable cause of death worldwide, which makes it essential to stimulate smoking cessation. The financial cost of smoking cessation treatment can act as a barrier to those seeking support. We hypothesised that provision of financial assistance for people trying to quit smoking, or reimbursement of their care providers, could lead to an increased rate of successful quit attempts. This is an update of the original 2005 review.\nObjectives\nThe primary objective of this review was to assess the impact of reducing the costs for tobacco smokers or healthcare providers for using or providing smoking cessation treatment through healthcare financing interventions on abstinence from smoking. The secondary objectives were to examine the effects of different levels of financial support on the use or prescription of smoking cessation treatment, or both, and on the number of smokers making a quit attempt (quitting smoking for at least 24 hours). We also assessed the cost effectiveness of different financial interventions, and analysed the costs per additional quitter, or per quality-adjusted life year (QALY) gained.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialised Register in September 2016.\nSelection criteria\nWe considered randomised controlled trials (RCTs), controlled trials and interrupted time series studies involving financial benefit interventions to smokers or their healthcare providers, or both.\nData collection and analysis\nTwo reviewers independently extracted data and assessed the quality of the included studies. We calculated risk ratios (RR) for individual studies on an intention-to-treat basis and performed meta-analysis using a random-effects model.\nMain results\nIn the current update, we have added six new relevant studies, resulting in a total of 17 studies included in this review involving financial interventions directed at smokers or healthcare providers, or both.\nFull financial interventions directed at smokers had a favourable effect on abstinence at six months or longer when compared to no intervention (RR 1.77, 95% CI 1.37 to 2.28, I² = 33%, 9333 participants). There was no evidence that full coverage interventions increased smoking abstinence compared to partial coverage interventions (RR 1.02, 95% CI 0.71 to 1.48, I² = 64%, 5914 participants), but partial coverage interventions were more effective in increasing abstinence than no intervention (RR 1.27 95% CI 1.02 to 1.59, I² = 21%, 7108 participants). The economic evaluation showed costs per additional quitter ranging from USD 97 to USD 7646 for the comparison of full coverage with partial or no coverage.\nThere was no clear evidence of an effect on smoking cessation when we pooled two trials of financial incentives directed at healthcare providers (RR 1.16, CI 0.98 to 1.37, I² = 0%, 2311 participants).\nFull financial interventions increased the number of participants making a quit attempt when compared to no interventions (RR 1.11, 95% CI 1.04 to 1.17, I² = 15%, 9065 participants). 
There was insufficient evidence to show whether partial financial interventions increased quit attempts compared to no interventions (RR 1.13, 95% CI 0.98 to 1.31, I² = 88%, 6944 participants).\nFull financial interventions increased the use of smoking cessation treatment compared to no interventions with regard to various pharmacological and behavioural treatments: nicotine replacement therapy (NRT): RR 1.79, 95% CI 1.54 to 2.09, I² = 35%, 9455 participants; bupropion: RR 3.22, 95% CI 1.41 to 7.34, I² = 71%, 6321 participants; behavioural therapy: RR 1.77, 95% CI 1.19 to 2.65, I² = 75%, 9215 participants.\nThere was evidence that partial coverage compared to no coverage reported a small positive effect on the use of bupropion (RR 1.15, 95% CI 1.03 to 1.29, I² = 0%, 6765 participants). Interventions directed at healthcare providers increased the use of behavioural therapy (RR 1.69, 95% CI 1.01 to 2.86, I² = 85%, 25820 participants), but not the use of NRT and/or bupropion (RR 0.94, 95% CI 0.76 to 1.18, I² = 6%, 2311 participants).\nWe assessed the quality of the evidence for the main outcome, abstinence from smoking, as moderate. In most studies participants were not blinded to the different study arms and researchers were not blinded to the allocated interventions. Furthermore, there was not always sufficient information on attrition rates. We detected some imprecision but we judged this to be of minor consequence on the outcomes of this study.\nAuthors' conclusions\nFull financial interventions directed at smokers when compared to no financial interventions increase the proportion of smokers who attempt to quit, use smoking cessation treatments, and succeed in quitting. There was no clear and consistent evidence of an effect on smoking cessation from financial incentives directed at healthcare providers. We are only moderately confident in the effect estimate because there was some risk of bias due to a lack of blinding in participants and researchers, and insufficient information on attrition rates.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Interventions that reduce or cover the costs of smoking cessation medication and behavioural support could help smokers quit. We reviewed the evidence about the effects of financial interventions directed at smokers and healthcare providers on medication use, quit attempts and successful quitting."
}
]
|
query-laysum | 5575 | [
{
"role": "user",
"content": "Abstract: Background\nIn 2010, the World Health Organization recommended that all patients with suspected malaria are tested for malaria before treatment. In rural African settings light microscopy is often unavailable. Diagnosis has relied on detecting fever, and most people were given antimalarial drugs presumptively. Rapid diagnostic tests (RDTs) provide a point-of-care test that may improve management, particularly of people for whom the RDT excludes the diagnosis of malaria.\nObjectives\nTo evaluate whether introducing RDTs into algorithms for diagnosing and treating people with fever improves health outcomes, reduces antimalarial prescribing, and is safe, compared to algorithms using clinical diagnosis.\nSearch methods\nWe searched the Cochrane Infectious Disease Group Specialized Register; CENTRAL (The Cochrane Library); MEDLINE; EMBASE; CINAHL; LILACS; and the metaRegister of Controlled Trials for eligible trials up to 10 January 2014. We contacted researchers in the field and reviewed the reference lists of all included trials to identify any additional trials.\nSelection criteria\nIndividual or cluster randomized trials (RCTs) comparing RDT-supported algorithms and algorithms using clinical diagnosis alone for diagnosing and treating people with fever living in malaria-endemic settings.\nData collection and analysis\nTwo authors independently applied the inclusion criteria and extracted data. We combined data from individually and cluster RCTs using the generic inverse variance method. We presented all outcomes as risk ratios (RR) with 95% confidence intervals (CIs), and assessed the quality of evidence using the GRADE approach.\nMain results\nWe included seven trials, enrolling 17,505 people with fever or reported history of fever in this review; two individually randomized trials and five cluster randomized trials. All trials were conducted in rural African settings.\nIn most trials the health workers diagnosing and treating malaria were nurses or clinical officers with less than one week of training in RDT supported diagnosis. Health worker prescribing adherence to RDT results was highly variable: the number of participants with a negative RDT result who received antimalarials ranged from 0% to 81%.\nOverall, RDT supported diagnosis had little or no effect on the number of participants remaining unwell at four to seven days after treatment (6990 participants, five trials, low quality evidence); but using RDTs reduced prescribing of antimalarials by up to three-quarters (17,287 participants, seven trials, moderate quality evidence). As would be expected, the reduction in antimalarial prescriptions was highest where health workers adherence to the RDT result was high, and where the true prevalence of malaria was lower.\nUsing RDTs to support diagnosis did not have a consistent effect on the prescription of antibiotics, with some trials showing higher antibiotic prescribing and some showing lower prescribing in the RDT group (13,573 participants, five trials, very low quality evidence).\nOne trial reported malaria microscopy on all enrolled patients in an area of moderate endemicity, so we could compare the number of patients in the RDT and clinical diagnosis groups that actually had microscopy confirmed malaria infection but did not receive antimalarials. 
No difference was detected between the two diagnostic strategies (1280 participants, one trial, low quality evidence).\nAuthors' conclusions\nAlgorithms incorporating RDTs can substantially reduce antimalarial prescribing if health workers adhere to the test results. Introducing RDTs has not been shown to improve health outcomes for patients, but adherence to the test result does not seem to result in worse clinical outcomes than presumptive treatment.\nConcentrating on improving the care of RDT negative patients could improve health outcomes in febrile children.\n\nGiven the provided abstract, please respond to the following query: \"What are RDTs and how might they improve patient care\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "RDTs are simple to use diagnostic kits which can detect the parasites that cause malaria from one drop of the patient's blood. They do not require laboratory facilities or extensive training, and can provide a simple positive or negative result within 20 minutes, making them suitable for use in rural areas of Africa where most malaria cases occur.\nImproving malaria diagnosis by introducing RDTs is unlikely to improve the health outcomes of people with true malaria as they would probably have received antimalarials even if the health worker was relying on clinical symptoms alone. However, for patients with fever not due to malaria, RDTs could improve health outcomes by prompting the health worker to look for and treat the true cause of their fever earlier."
}
]
|
query-laysum | 5576 | [
{
"role": "user",
"content": "Abstract: Background\nHealthcare workers can suffer from work-related stress as a result of an imbalance of demands, skills and social support at work. This may lead to stress, burnout and psychosomatic problems, and deterioration of service provision. This is an update of a Cochrane Review that was last updated in 2015, which has been split into this review and a review on organisational-level interventions.\nObjectives\nTo evaluate the effectiveness of stress-reduction interventions targeting individual healthcare workers compared to no intervention, wait list, placebo, no stress-reduction intervention or another type of stress-reduction intervention in reducing stress symptoms.\nSearch methods\nWe used the previous version of the review as one source of studies (search date: November 2013). We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO, CINAHL, Web of Science and a trials register from 2013 up to February 2022.\nSelection criteria\nWe included randomised controlled trials (RCT) evaluating the effectiveness of stress interventions directed at healthcare workers. We included only interventions targeted at individual healthcare workers aimed at reducing stress symptoms.\nData collection and analysis\nReview authors independently selected trials for inclusion, assessed risk of bias and extracted data. We used standard methodological procedures expected by Cochrane. We categorised interventions into ones that:\n1. focus one’s attention on the (modification of the) experience of stress (thoughts, feelings, behaviour);\n2. focus one’s attention away from the experience of stress by various means of psychological disengagement (e.g. relaxing, exercise);\n3. alter work-related risk factors on an individual level; and ones that\n4. combine two or more of the above.\nThe crucial outcome measure was stress symptoms measured with various self-reported questionnaires such as the Maslach Burnout Inventory (MBI), measured at short term (up to and including three months after the intervention ended), medium term (> 3 to 12 months after the intervention ended), and long term follow-up (> 12 months after the intervention ended).\nMain results\nThis is the second update of the original Cochrane Review published in 2006, Issue 4. This review update includes 89 new studies, bringing the total number of studies in the current review to 117 with a total of 11,119 participants randomised.\nThe number of participants per study arm was ≥ 50 in 32 studies. The most important risk of bias was the lack of blinding of participants.\nFocus on the experience of stress versus no intervention/wait list/placebo/no stress-reduction intervention\nFifty-two studies studied an intervention in which one's focus is on the experience of stress. Overall, such interventions may result in a reduction in stress symptoms in the short term (standardised mean difference (SMD) -0.37, 95% confidence interval (CI) -0.52 to -0.23; 41 RCTs; 3645 participants; low-certainty evidence) and medium term (SMD -0.43, 95% CI -0.71 to -0.14; 19 RCTs; 1851 participants; low-certainty evidence). The SMD of the short-term result translates back to 4.6 points fewer on the MBI-emotional exhaustion scale (MBI-EE, a scale from 0 to 54). 
The evidence is very uncertain (one RCT; 68 participants, very low-certainty evidence) about the long-term effect on stress symptoms of focusing one's attention on the experience of stress.\nFocus away from the experience of stress versus no intervention/wait list/placebo/no stress-reduction intervention\nForty-two studies studied an intervention in which one's focus is away from the experience of stress. Overall, such interventions may result in a reduction in stress symptoms in the short term (SMD -0.55, 95% CI -0.70 to -0.40; 35 RCTs; 2366 participants; low-certainty evidence) and medium term (SMD -0.41, 95% CI -0.79 to -0.03; 6 RCTs; 427 participants; low-certainty evidence). The SMD on the short term translates back to 6.8 fewer points on the MBI-EE. No studies reported the long-term effect.\nFocus on work-related, individual-level factors versus no intervention/no stress-reduction intervention\nSeven studies studied an intervention in which the focus is on altering work-related factors. The evidence is very uncertain about the short-term effects (no pooled effect estimate; three RCTs; 87 participants; very low-certainty evidence) and medium-term effects and long-term effects (no pooled effect estimate; two RCTs; 152 participants, and one RCT; 161 participants, very low-certainty evidence) of this type of stress management intervention.\nA combination of individual-level interventions versus no intervention/wait list/no stress-reduction intervention\nSeventeen studies studied a combination of interventions. In the short term, this type of intervention may result in a reduction in stress symptoms (SMD -0.67, 95% CI -0.95 to -0.39; 15 RCTs; 1003 participants; low-certainty evidence). The SMD translates back to 8.2 fewer points on the MBI-EE. In the medium term, a combination of individual-level interventions may result in a reduction in stress symptoms, but the evidence does not exclude no effect (SMD -0.48, 95% CI -0.95 to 0.00; 6 RCTs; 574 participants; low-certainty evidence). The evidence is very uncertain about the long-term effects of a combination of interventions on stress symptoms (one RCT, 88 participants; very low-certainty evidence).\nFocus on stress versus other intervention type\nThree studies compared focusing on stress versus focusing away from stress and one study compared a combination of interventions versus focusing on stress. The evidence is very uncertain about which type of intervention is better or if their effect is similar.\nAuthors' conclusions\nOur review shows that there may be an effect on stress reduction in healthcare workers from individual-level stress interventions, whether they focus one's attention on or away from the experience of stress. This effect may last up to a year after the end of the intervention. A combination of interventions may be beneficial as well, at least in the short term. Long-term effects of individual-level stress management interventions remain unknown. The same applies for interventions on (individual-level) work-related risk factors.\nThe bias assessment of the studies in this review showed the need for methodologically better-designed and executed studies, as nearly all studies suffered from poor reporting of the randomisation procedures, lack of blinding of participants and lack of trial registration. Better-designed trials with larger sample sizes are required to increase the certainty of the evidence. 
Last, there is a need for more studies on interventions which focus on work-related risk factors.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found a total of 117 studies that involved a total of 11,119 healthcare workers. Most studies followed their participants up to three months and some up to 12 months, but only few longer than a year.\nWe found that there may be an effect on stress reduction in healthcare workers from stress management interventions, whether they focus one's attention on or away from the experience of stress. This effect may last up to a year after the end of the intervention. A combination of interventions may be beneficial as well, at least in the short term. The long-term effects of stress management interventions, longer than a year after the intervention has ended, remain unknown. The same applies for interventions on (individual-level) work-related risk factors."
}
]
|
query-laysum | 5577 | [
{
"role": "user",
"content": "Abstract: Background\nAortic aneurysms occur when the aorta, the body's largest artery, grows in size, and can occur in the thoracic or abdominal aorta. The approaches to repair aortic aneurysms include directly exposing the aorta and replacing the diseased segment via open repair, or endovascular repair. Endovascular repair uses fluoroscopic-guidance to access the aorta and deliver a device to exclude the aneurysmal aortic segment without requiring a large surgical incision. Endovascular repair can be performed under a general anesthetic, during which the unconscious patient is paralyzed and reliant on an anesthetic machine to maintain the airway and provide oxygen to the lungs, or a loco-regional anesethetic, for which medications are administered to provide the person with sufficient sedation and pain control without requiring a general anesthetic. While people undergoing general anesthesia are more likely to remain still during surgery and have a well-controlled airway in the event of unanticipated complications, loco-regional anesthesia is associated with fewer postoperative complications in some studies. It remains unclear which anesthetic technique is associated with better outcomes following the endovascular repair of aortic aneurysms.\nObjectives\nTo evaluate the benefits and harms of general anesthesia compared to loco-regional anesthesia for endovascular aortic aneurysm repair.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search was 11 March 2022.\nSelection criteria\nWe searched for all randomized controlled trials that assessed the effects of general anesthesia compared to loco-regional anesthesia for endovascular aortic aneurysm repairs.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were: all-cause mortality, length of hospital stay, length of intensive care unit stay. Our secondary outcomes were: incidence of endoleaks, requirement for re-intervention, incidence of myocardial infarction, quality of life, incidence of respiratory complications, incidence of pulmonary embolism, incidence of deep vein thrombosis, and length of procedure. We planned to use GRADE methodology to assess the certainty of evidence for each outcome.\nMain results\nWe found no studies, published or ongoing, that met our inclusion criteria.\nAuthors' conclusions\nWe did not identify any randomized controlled trials that compared general versus loco-regional anesthesia for endovascular aortic aneurysm repair. There is currently insufficient high-quality evidence to determine the benefits or harms of either anesthetic approach during endovascular aortic aneurysm repair. Well-designed prospective randomized trials with relevant clinical outcomes are needed to adequately address this.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- we did not find any randomized controlled trials to help answer our question\n- there is a need for high-quality evidence to compare the benefits and harms of general anesthesia compared to loco-regional anesthesia in people undergoing endovascular repair of an aortic aneurysm"
}
]
|
query-laysum | 5578 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review previously published in 2016. This review is one in a series of Cochrane Reviews investigating pair-wise monotherapy comparisons.\nEpilepsy is a common neurological condition in which abnormal electrical discharges from the brain cause recurrent unprovoked seizures. It is believed that with effective drug treatment, up to 70% of individuals with active epilepsy have the potential to become seizure-free and go into long-term remission shortly after starting drug therapy with a single antiepileptic drug in monotherapy.\nWorldwide, carbamazepine and phenobarbitone are commonly used broad-spectrum antiepileptic drugs, suitable for most epileptic seizure types. Carbamazepine is a current first-line treatment for focal onset seizures, and is used in the USA and Europe. Phenobarbitone is no longer considered a first-line treatment because of concerns over associated adverse events, particularly documented behavioural adverse events in children treated with the drug. However, phenobarbitone is still commonly used in low- and middle-income countries because of its low cost. No consistent differences in efficacy have been found between carbamazepine and phenobarbitone in individual trials; however, the confidence intervals generated by these trials are wide, and therefore, synthesising the data of the individual trials may show differences in efficacy.\nObjectives\nTo review the time to treatment failure, remission and first seizure with carbamazepine compared with phenobarbitone when used as monotherapy in people with focal onset seizures (simple or complex focal and secondarily generalised), or generalised onset tonic-clonic seizures (with or without other generalised seizure types).\nSearch methods\nFor the latest update, we searched the following databases on 24 May 2018: the Cochrane Register of Studies (CRS Web), which includes Cochrane Epilepsy's Specialized Register and CENTRAL; MEDLINE; the US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov); and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We handsearched relevant journals and contacted pharmaceutical companies, original trial investigators, and experts in the field.\nSelection criteria\nRandomised controlled trials comparing monotherapy with either carbamazepine or phenobarbitone in children or adults with focal onset seizures or generalised onset tonic-clonic seizures.\nData collection and analysis\nThis was an individual participant data (IPD), review. Our primary outcome was time to treatment failure. Our secondary outcomes were time to first seizure post-randomisation, time to six-month remission, time to 12-month remission, and incidence of adverse events. We used Cox proportional hazards regression models to obtain trial-specific estimates of hazard ratios (HRs), with 95% confidence intervals (CIs), using the generic inverse variance method to obtain the overall pooled HR and 95% CI.\nMain results\nWe included 13 trials in this review and IPD were available for 836 individuals out of 1455 eligible individuals from six trials, 57% of the potential data. 
For remission outcomes, an HR of less than 1 indicates an advantage for phenobarbitone, and for first seizure and treatment failure outcomes an HR of less than 1 indicates an advantage for carbamazepine.\nResults for the primary outcome of the review were: time to treatment failure for any reason related to treatment (pooled HR adjusted for seizure type for 676 participants: 0.66, 95% CI 0.50 to 0.86, moderate-quality evidence), time to treatment failure due to adverse events (pooled HR adjusted for seizure type for 619 participants: 0.69, 95% CI 0.49 to 0.97, low-quality evidence), time to treatment failure due to lack of efficacy (pooled HR adjusted for seizure type for 487 participants: 0.54, 95% CI 0.38 to 0.78, moderate-quality evidence), showing a statistically significant advantage for carbamazepine compared to phenobarbitone.\nFor our secondary outcomes, we did not find any statistically significant differences between carbamazepine and phenobarbitone: time to first seizure post-randomisation (pooled HR adjusted for seizure type for 822 participants: 1.13, 95% CI 0.93 to 1.38, moderate-quality evidence), time to 12-month remission (pooled HR adjusted for seizure type for 683 participants: 1.09, 95% CI 0.84 to 1.40, low-quality evidence), and time to six-month remission (pooled HR adjusted for seizure type for 683 participants: 1.01, 95% CI 0.81 to 1.24, low-quality evidence).\nResults of these secondary outcomes suggest that there may be an association between treatment effect in terms of efficacy and seizure type; that is, that participants with focal onset seizures experience seizure recurrence later and hence remission of seizures earlier on phenobarbitone than carbamazepine, and vice versa for individuals with generalised seizures. It is likely that the analyses of these outcomes were confounded by several methodological issues and misclassification of seizure type, which could have introduced heterogeneity and bias into the results of this review.\nLimited information was available regarding adverse events in the trials, and we could not compare the rates of adverse events between carbamazepine and phenobarbitone. Some adverse events reported on both drugs were abdominal pain, nausea and vomiting, drowsiness, motor and cognitive disturbances, dysmorphic side effects (such as rash), and behavioural side effects in three paediatric trials.\nAuthors' conclusions\nModerate-quality evidence from this review suggests that carbamazepine is likely to be a more effective drug than phenobarbitone in terms of treatment retention (treatment failures due to lack of efficacy or adverse events or both). Moderate- to low-quality evidence from this review also suggests an association between treatment efficacy and seizure type in terms of seizure recurrence and seizure remission, with an advantage for phenobarbitone for focal onset seizures and an advantage for carbamazepine for generalised onset seizures.\nHowever, some of the trials contributing to the analyses had methodological inadequacies and inconsistencies that may have impacted upon the results of this review. Therefore, we do not suggest that results of this review alone should form the basis of a treatment choice for a patient with new-onset seizures. 
We recommend that future trials should be designed to the highest quality possible with consideration of masking, choice of population, classification of seizure type, duration of follow-up, choice of outcomes and analysis, and presentation of results.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Results of the review suggest that people are likely to stop taking phenobarbitone treatment earlier than carbamazepine treatment, because of seizure recurrence, side-effects of the drug, or both. Results also suggest that recurrence of seizures after starting treatment with phenobarbitone may happen later than treatment with carbamazepine (and therefore a seizure free period of 6 months or 12 months may occur earlier with phenobarbitone than with carbamazepine) for people with focal onset seizures, and vice-versa for people with generalised onset seizures.\nSome side effects reported by people taking carbamazepine and people taking phenobarbitone were abdominal pain, nausea, vomiting, tiredness, motor problems (such as poor co-ordination), cognitive problems (poor memory), rashes and other skin problems. Behavioural side effects such as aggression were reported on both drugs in three trials in children."
}
]
|
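The epilepsy review above pools trial-specific hazard ratios with the generic inverse variance method. The sketch below shows only that pooling step (not the per-trial Cox models), using invented hazard ratios and standard errors rather than any figures from the review.

```python
import math

# Hypothetical trial-level results: (hazard ratio, standard error of log HR).
# These numbers are invented for illustration and are not from the review.
trials = [
    (0.70, 0.20),
    (0.60, 0.25),
    (0.75, 0.30),
]

# Generic inverse variance pooling: weight each log(HR) by 1 / SE^2,
# average on the log scale, then transform back.
weights = [1 / se ** 2 for _, se in trials]
pooled_log_hr = sum(w * math.log(hr) for (hr, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_hr = math.exp(pooled_log_hr)
ci_lower = math.exp(pooled_log_hr - 1.96 * pooled_se)
ci_upper = math.exp(pooled_log_hr + 1.96 * pooled_se)

print(f"Pooled HR {pooled_hr:.2f} (95% CI {ci_lower:.2f} to {ci_upper:.2f})")
```

For the treatment failure outcomes described in the abstract, a pooled HR below 1 would favour carbamazepine.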
query-laysum | 5579 | [
{
"role": "user",
"content": "Abstract: Background\nStroke survivors are often physically inactive as well as sedentary,and may sit for long periods of time each day. This increases cardiometabolic risk and has impacts on physical and other functions. Interventions to reduce or interrupt periods of sedentary time, as well as to increase physical activity after stroke, could reduce the risk of secondary cardiovascular events and mortality during life after stroke.\nObjectives\nTo determine whether interventions designed to reduce sedentary behaviour after stroke, or interventions with the potential to do so, can reduce the risk of death or secondary vascular events, modify cardiovascular risk, and reduce sedentary behaviour.\nSearch methods\nIn December 2019, we searched the Cochrane Stroke Trials Register, CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, Conference Proceedings Citation Index, and PEDro. We also searched registers of ongoing trials, screened reference lists, and contacted experts in the field.\nSelection criteria\nRandomised trials comparing interventions to reduce sedentary time with usual care, no intervention, or waiting-list control, attention control, sham intervention or adjunct intervention. We also included interventions intended to fragment or interrupt periods of sedentary behaviour.\nData collection and analysis\nTwo review authors independently selected studies and performed 'Risk of bias' assessments. We analyzed data using random-effects meta-analyses and assessed the certainty of the evidence with the GRADE approach.\nMain results\nWe included 10 studies with 753 people with stroke. Five studies used physical activity interventions, four studies used a multicomponent lifestyle intervention, and one study used an intervention to reduce and interrupt sedentary behaviour. In all studies, the risk of bias was high or unclear in two or more domains. Nine studies had high risk of bias in at least one domain.\nThe interventions did not increase or reduce deaths (risk difference (RD) 0.00, 95% confidence interval (CI) -0.02 to 0.03; 10 studies, 753 participants; low-certainty evidence), the incidence of recurrent cardiovascular or cerebrovascular events (RD -0.01, 95% CI -0.04 to 0.01; 10 studies, 753 participants; low-certainty evidence), the incidence of falls (and injuries) (RD 0.00, 95% CI -0.02 to 0.02; 10 studies, 753 participants; low-certainty evidence), or incidence of other adverse events (moderate-certainty evidence).\nInterventions did not increase or reduce the amount of sedentary behaviour time (mean difference (MD) +0.13 hours/day, 95% CI -0.42 to 0.68; 7 studies, 300 participants; very low-certainty evidence). There were too few data to examine effects on patterns of sedentary behaviour.\nThe effect of interventions on cardiometabolic risk factors allowed very limited meta-analysis.\nAuthors' conclusions\nSedentary behaviour research in stroke seems important, yet the evidence is currently incomplete, and we found no evidence for beneficial effects. Current World Health Organization (WHO) guidelines recommend reducing the amount of sedentary time in people with disabilities, in general. The evidence is currently not strong enough to guide practice on how best to reduce sedentariness specifically in people with stroke.\nMore high-quality randomised trials are needed, particularly involving participants with mobility limitations. 
Trials should include longer-term interventions specifically targeted at reducing time spent sedentary, risk factor outcomes, objective measures of sedentary behaviour (and physical activity), and long-term follow-up.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence that examines the effects of treatments to reduce the amount of sedentary behaviour in people after stroke."
}
]
|
query-laysum | 5580 | [
{
"role": "user",
"content": "Abstract: Background\nMost people admitted to hospitals worldwide require a vascular access device (VAD). Hundreds of millions of VADs are inserted annually in the USA with reports of over a billion peripheral intravenous catheters used annually worldwide. Numerous reports suggest that a team approach for the assessment, insertion, and maintenance of VADs improves clinical outcomes, the patient experience, and healthcare processes.\nObjectives\nTo compare the use of the vascular access specialist team (VAST) for VAD insertion and care to a generalist model approach for hospital or community participants requiring a VAD in terms of insertion success, device failure, and cost-effectiveness.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 1); Ovid MEDLINE (1950 to 7 February 2018); Ovid Embase (1980 to 7 February 2018); EBSCO CINAHL (1982 to 7 February 2018); Web of Science Conference Proceedings Citation Index - Science and Social Science and Humanities (1990 to 7 February 2018); and Google Scholar. We searched the following trial registries: Australian and New Zealand Clinical Trials Register (www.anzctr.org.au); ClinicalTrials.gov (www.clinicaltrials.gov); Current Controlled Trials (www.controlled-trials.com/mrct); HKU Clinical Trials Registry (www.hkclinicaltrials.com); Clinical Trials Registry - India (ctri.nic.in/Clinicaltrials/login.php); UK Clinical Trials Gateway (www.controlled-trials.com/ukctr/); and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (www.who.int/trialsearch). We searched all databases on 7 February 2018.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) that evaluated the effectiveness of VAST or specialist inserters for their impact on clinical outcomes.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane and used Covidence software to assist with file management.\nMain results\nWe retrieved 2398 citations: 30 studies were eligible for further examination of their full text, and we found one registered clinical trial in progress. No studies could be included in the analysis or review. We assigned one study as awaiting classification, as it has not been accepted for publication.\nAuthors' conclusions\nThis systematic review failed to locate relevant published RCTs to support or refute the assertion that vascular access specialist teams are superior to the generalist model. A vascular access specialist team has advanced knowledge with regard to insertion techniques, clinical care, and management of vascular access devices, whereas a generalist model comprises nurses, doctors, or other designated healthcare professionals in the healthcare facility who may have less advanced insertion techniques and who care for vascular access devices amongst other competing clinical tasks. However, this conclusion may change once the one study awaiting classification and one ongoing study are published. There is a need for good-quality RCTs to evaluate the efficacy of a vascular access specialist team approach for vascular access device insertion and care for the prevention of failure.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched a wide range of medical databases on 7 February 2018. We identified 2398 potential studies, 30 of which we looked at in detail. We found one suitable study, however although the study is complete the manuscript has not yet been accepted for publication, and so we were unable to analyse the data. We have assigned the study as awaiting classification; once its results are published we will evaluate it again and decide if it is eligible for inclusion in the review. We found one registered trial that is investigating our review question, but it is still ongoing and not yet completed or published."
}
]
|
query-laysum | 5581 | [
{
"role": "user",
"content": "Abstract: Background\nAsthma exacerbations can be frequent and range in severity from mild to life-threatening. The use of magnesium sulfate (MgSO₄) is one of numerous treatment options available during acute exacerbations. While the efficacy of intravenous MgSO₄ has been demonstrated, the role of inhaled MgSO₄ is less clear.\nObjectives\nTo determine the efficacy and safety of inhaled MgSO₄ administered in acute asthma.\nSpecific aims: to quantify the effects of inhaled MgSO₄ I) in addition to combination treatment with inhaled β₂-agonist and ipratropium bromide; ii) in addition to inhaled β₂-agonist; and iii) in comparison to inhaled β₂-agonist.\nSearch methods\nWe identified randomised controlled trials (RCTs) from the Cochrane Airways Group register of trials and online trials registries in September 2017. We supplemented these with searches of the reference lists of published studies and by contact with trialists.\nSelection criteria\nRCTs including adults or children with acute asthma were eligible for inclusion in the review. We included studies if patients were treated with nebulised MgSO₄ alone or in combination with β₂-agonist or ipratropium bromide or both, and were compared with the same co-intervention alone or inactive control.\nData collection and analysis\nTwo review authors independently assessed trial selection, data extraction and risk of bias. We made efforts to collect missing data from authors. We present results, with their 95% confidence intervals (CIs), as mean differences (MDs) or standardised mean differences (SMDs) for pulmonary function, clinical severity scores and vital signs; and risk ratios (RRs) for hospital admission. We used risk differences (RDs) to analyse adverse events because events were rare.\nMain results\nTwenty-five trials (43 references) of varying methodological quality were eligible; they included 2907 randomised patients (2777 patients completed). Nine of the 25 included studies involved adults; four included adult and paediatric patients; eight studies enrolled paediatric patients; and in the remaining four studies the age of participants was not stated. The design, definitions, intervention and outcomes were different in all 25 studies; this heterogeneity made direct comparisons difficult. The quality of the evidence presented ranged from high to very low, with most outcomes graded as low or very low. This was largely due to concerns about the methodological quality of the included studies and imprecision in the pooled effect estimates.\nInhaled magnesium sulfate in addition to inhaled β₂-agonist and ipratropium\nWe included seven studies in this comparison. Although some individual studies reported improvement in lung function indices favouring the intervention group, results were inconsistent overall and the largest study reporting this outcome found no between-group difference at 60 minutes (MD −0.3 % predicted peak expiratory flow rate (PEFR), 95% CI −2.71% to 2.11%). Admissions to hospital at initial presentation may be reduced by the addition of inhaled magnesium sulfate (RR 0.95, 95% CI 0.91 to 1.00; participants = 1308; studies = 4; I² = 52%) but no difference was detected for re-admissions or escalation of care to ITU/HDU. Serious adverse events during admission were rare. There was no difference between groups for all adverse events during admission (RD 0.01, 95% CI −0.03 to 0.05; participants = 1197; studies = 2).\nInhaled magnesium sulfate in addition to inhaled β₂-agonist\nWe included 13 studies in this comparison. 
Although some individual studies reported improvement in lung function indices favouring the intervention group, none of the pooled results showed a conclusive benefit as measured by FEV1 or PEFR. Pooled results for hospital admission showed a point estimate that favoured the combination of MgSO₄ and β₂-agonist, but the confidence interval includes the possibility of admissions increasing in the intervention group (RR 0.78, 95% CI 0.52 to 1.15; participants = 375; studies = 6; I² = 0%). There were no serious adverse events reported by any of the included studies and no between-group difference for all adverse events (RD −0.01, 95% CI −0.05 to 0.03; participants = 694; studies = 5).\nInhaled magnesium sulfate versus inhaled β₂-agonist\nWe included four studies in this comparison. The evidence for the efficacy of β₂-agonists in acute asthma is well-established and therefore this could be considered a historical comparison. Two studies reported a benefit of β₂-agonist over MgSO₄ alone for PEFR and two studies reported no difference; we did not pool these results. Admissions to hospital were only reported by one small study and events were rare, leading to an uncertain result. No serious adverse events were reported in any of the studies in this comparison; one small study reported mild to moderate adverse events but the result is imprecise.\nAuthors' conclusions\nTreatment with nebulised MgSO₄ may result in modest additional benefits for lung function and hospital admission when added to inhaled β₂-agonists and ipratropium bromide, but our confidence in the evidence is low and there remains substantial uncertainty. The recent large, well-designed trials have generally not demonstrated clinically important benefits. Nebulised MgSO₄ does not appear to be associated with an increase in serious adverse events. Individual studies suggest that those with more severe attacks and attacks of shorter duration may experience a greater benefit but further research into subgroups is warranted.\nDespite including 24 trials in this review update we were unable to pool data for all outcomes of interest and this has limited the strength of the conclusions reached. A core outcomes set for studies in acute asthma is needed. This is particularly important in paediatric studies where measuring lung function at the time of an exacerbation may not be possible. Placebo-controlled trials in patients not responding to standard maximal treatment, including inhaled β₂-agonists and ipratropium bromide and systemic steroids, may help establish if nebulised MgSO₄ has a role in acute asthma. However, the accumulating evidence suggests that a substantial benefit may be unlikely.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Asthma attacks are common in adults and children. People having an attack may need to be treated in a hospital emergency department (A&E). Even with the best treatment, some people need to be admitted to hospital or even into the intensive care unit. Some guidelines suggest that giving magnesium sulfate, either by injection or inhaled straight into the lungs, may be beneficial. In this review we focused on inhaled (or 'nebulised') magnesium sulfate. We were particularly interested in finding out the effects of magnesium sulfate on lung function (breathing tests), severity scores and hospital admissions. We also wanted to know if it was safe."
}
]
|
query-laysum | 5582 | [
{
"role": "user",
"content": "Abstract: Background\nGastro-oesophageal reflux disease (GORD) is a common condition with 3% to 33% of people from different parts of the world suffering from GORD. There is considerable uncertainty about whether people with GORD should receive an operation or medical treatment for controlling the condition.\nObjectives\nTo assess the benefits and harms of laparoscopic fundoplication versus medical treatment for people with gastro-oesophageal reflux disease.\nSearch methods\nWe searched the Cochrane Upper Gastrointestinal and Pancreatic Diseases Group (UGPD) Trials Register (June 2015), Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 6, 2015), Ovid MEDLINE (1966 to June 2015), and EMBASE (1980 to June 2015) to identify randomised controlled trials. We also searched the references of included trials to identify further trials.\nSelection criteria\nWe considered only randomised controlled trials (RCT) comparing laparoscopic fundoplication with medical treatment in people with GORD irrespective of language, blinding, or publication status for inclusion in the review.\nData collection and analysis\nTwo review authors independently identified trials and independently extracted data. We calculated the risk ratio (RR) or standardised mean difference (SMD) with 95% confidence intervals (CI) using both fixed-effect and random-effects models with RevMan 5 based on available case analysis.\nMain results\nFour studies met the inclusion criteria for the review, and provided information on one or more outcomes for the review. A total of 1160 participants in the four RCTs were either randomly assigned to laparoscopic fundoplication (589 participants) or medical treatment with proton pump inhibitors (571 participants). All the trials included participants who had had reflux symptoms for at least six months and had received long-term acid suppressive therapy. All the trials included only participants who could undergo surgery if randomised to the surgery arm. All of the trials were at high risk of bias. The overall quality of evidence was low or very low. 
None of the trials reported long-term health-related quality of life (HRQoL) or GORD-specific quality of life (QoL).\nThe difference between laparoscopic fundoplication and medical treatment was imprecise for overall short-term HRQOL (SMD 0.14, 95% CI -0.02 to 0.30; participants = 605; studies = 3), medium-term HRQOL (SMD 0.03, 95% CI -0.19 to 0.24; participants = 323; studies = 2), medium-term GORD-specific QoL (SMD 0.28, 95% CI -0.27 to 0.84; participants = 994; studies = 3), proportion of people with adverse events (surgery: 7/43 (adjusted proportion = 14.0%); medical: 0/40 (0.0%); RR 13.98, 95% CI 0.82 to 237.07; participants = 83; studies = 1), long-term dysphagia (surgery: 27/118 (adjusted proportion = 22.9%); medical: 28/110 (25.5%); RR 0.90, 95% CI 0.57 to 1.42; participants = 228; studies = 1), and long-term reflux symptoms (surgery: 29/118 (adjusted proportion = 24.6%); medical: 41/115 (35.7%); RR 0.69, 95% CI 0.46 to 1.03; participants = 233; studies = 1).\nThe short-term GORD-specific QoL was better in the laparoscopic fundoplication group than in the medical treatment group (SMD 0.58, 95% CI 0.46 to 0.70; participants = 1160; studies = 4).\nThe proportion of people with serious adverse events (surgery: 60/331 (adjusted proportion = 18.1%); medical: 38/306 (12.4%); RR 1.46, 95% CI 1.01 to 2.11; participants = 637; studies = 2), short-term dysphagia (surgery: 44/331 (adjusted proportion = 12.9%); medical: 11/306 (3.6%); RR 3.58, 95% CI 1.91 to 6.71; participants = 637; studies = 2), and medium-term dysphagia (surgery: 29/288 (adjusted proportion = 10.2%); medical: 5/266 (1.9%); RR 5.36, 95% CI 2.1 to 13.64; participants = 554; studies = 1) was higher in the laparoscopic fundoplication group than in the medical treatment group.\nThe proportion of people with heartburn at short term (surgery: 29/288 (adjusted proportion = 10.0%); medical: 59/266 (22.2%); RR 0.45, 95% CI 0.30 to 0.69; participants = 554; studies = 1), medium term (surgery: 12/288 (adjusted proportion = 4.2%); medical: 59/266 (22.2%); RR 0.19, 95% CI 0.10 to 0.34; participants = 554; studies = 1), long term (surgery: 46/111 (adjusted proportion = 41.2%); medical: 78/106 (73.6%); RR 0.56, 95% CI 0.44 to 0.72); participants = 217; studies = 1) and those with reflux symptoms at short-term (surgery: 6/288 (adjusted proportion = 2.0%); medical: 53/266 (19.9%); RR 0.10, 95% CI 0.05 to 0.24; participants = 554; studies = 1) and medium term (surgery: 6/288 (adjusted proportion = 2.1%); medical: 37/266 (13.9%); RR 0.15, 95% CI 0.06 to 0.35; participants = 554; studies = 1) was less in the laparoscopic fundoplication group than in the medical treatment group.\nAuthors' conclusions\nThere is considerable uncertainty in the balance of benefits versus harms of laparoscopic fundoplication compared to long-term medical treatment with proton pump inhibitors. Further RCTs of laparoscopic fundoplication versus medical management in patients with GORD should be conducted with outcome-assessor blinding and should include all participants in the analysis. Such trials should include long-term patient-orientated outcomes such as treatment-related adverse events (including severity), quality of life, and also report on the social and economic impact of the adverse events and symptoms.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Gastro-oesophageal reflux disease or GORD is a condition which develops when stomach contents regurgitate into the oesophagus causing troublesome symptoms such as heartburn (burning sensation at the lower end of the breastbone) or regurgitation (perception of flow of stomach content into the throat or mouth). Long-term complications of GORD include reflux oesophagitis (injury to lining of oesophagus), bleeding from the oesophagus, narrowing of the oesophagus, change in the nature of the lining of the oesophagus which can sometimes give rise to oesophageal cancer. Approximately 3% to 33% of people have GORD around the world. The risk factors for GORD include a family history of reflux disease in immediate relatives, pregnancy, older age, obesity, cigarette smoking, and excessive alcohol drinking. Apart from lifestyle changes (such as cessation of smoking) and diet changes (avoiding food that causes heartburn), the major forms of treatment for GORD are medical and surgical. The medical treatment is usually aimed at decreasing acidity in the stomach. Currently a group of drugs which suppress acid secretion, called proton pump inhibitors, are considered the best in decreasing acid secretion. The main surgical treatment is fundoplication which involves wrapping around the lower part of the food pipe with stomach. This can be performed by traditional open surgery, keyhole surgery or surgery that is performed without making any cut from within the stomach with the help of an endoscope (in this context, a flexible tube introduced through the mouth to give a view of the food pipe and stomach). The benefits and harms of laparoscopic fundoplication compared to medical treatment in people with GORD is not known. We sought to resolve this issue by searching for existing studies on the topic. We included all studies whose results were reported until 1st October 2014."
}
]
|
query-laysum | 5583 | [
{
"role": "user",
"content": "Abstract: Background\nPyrethroid long-lasting insecticidal nets (LLINs) have been important in the large reductions in malaria cases in Africa, but insecticide resistance in Anopheles mosquitoes threatens their impact. Insecticide synergists may help control insecticide-resistant populations. Piperonyl butoxide (PBO) is such a synergist; it has been incorporated into pyrethroid-LLINs to form pyrethroid-PBO nets, which are currently produced by five LLIN manufacturers and, following a recommendation from the World Health Organization (WHO) in 2017, are being included in distribution campaigns. This review examines epidemiological and entomological evidence on the addition of PBO to pyrethroid nets on their efficacy.\nObjectives\nTo compare effects of pyrethroid-PBO nets currently in commercial development or on the market with effects of their non-PBO equivalent in relation to:\n1. malaria parasite infection (prevalence or incidence); and\n2. entomological outcomes.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group (CIDG) Specialized Register, CENTRAL, MEDLINE, Embase, Web of Science, CAB Abstracts, and two clinical trial registers (ClinicalTrials.gov and WHO International Clinical Trials Registry Platform) up to 25 September 2020. We contacted organizations for unpublished data. We checked the reference lists of trials identified by these methods.\nSelection criteria\nWe included experimental hut trials, village trials, and randomized controlled trials (RCTs) with mosquitoes from the Anopheles gambiae complex or the Anopheles funestus group.\nData collection and analysis\nTwo review authors assessed each trial for eligibility, extracted data, and determined the risk of bias for included trials. We resolved disagreements through discussion with a third review author. We analysed data using Review Manager 5 and assessed the certainty of evidence using the GRADE approach.\nMain results\nSixteen trials met the inclusion criteria: 10 experimental hut trials, four village trials, and two cluster-RCTs (cRCTs). Three trials are awaiting classification, and four trials are ongoing.\nTwo cRCTs examined the effects of pyrethroid-PBO nets on parasite prevalence in people living in areas with highly pyrethroid-resistant mosquitoes (< 30% mosquito mortality in discriminating dose assays). At 21 to 25 months post intervention, parasite prevalence was lower in the intervention arm (odds ratio (OR) 0.79, 95% confidence interval (CI) 0.67 to 0.95; 2 trials, 2 comparisons; moderate-certainty evidence).\nIn highly pyrethroid-resistant areas, unwashed pyrethroid-PBO nets led to higher mosquito mortality compared to unwashed standard-LLINs (risk ratio (RR) 1.84, 95% CI 1.60 to 2.11; 14,620 mosquitoes, 5 trials, 9 comparisons; high-certainty evidence) and lower blood feeding success (RR 0.60, 95% CI 0.50 to 0.71; 14,000 mosquitoes, 4 trials, 8 comparisons; high-certainty evidence). 
However, in comparisons of washed pyrethroid-PBO nets to washed LLINs, we do not know if PBO nets had a greater effect on mosquito mortality (RR 1.20, 95% CI 0.88 to 1.63; 10,268 mosquitoes, 4 trials, 5 comparisons; very low-certainty evidence), although the washed pyrethroid-PBO nets did decrease blood-feeding success compared to standard-LLINs (RR 0.81, 95% CI 0.72 to 0.92; 9674 mosquitoes, 3 trials, 4 comparisons; high-certainty evidence).\nIn areas where pyrethroid resistance is moderate (31% to 60% mosquito mortality), mosquito mortality was higher with unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs (RR 1.68, 95% CI 1.33 to 2.11; 1007 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence), but there was little to no difference in effects on blood-feeding success (RR 0.90, 95% CI 0.72 to 1.11; 1006 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence). For washed pyrethroid-PBO nets compared to washed standard-LLINs, we found little to no evidence for higher mosquito mortality or reduced blood feeding (mortality: RR 1.07, 95% CI 0.74 to 1.54; 329 mosquitoes, 1 trial, 1 comparison, low-certainty evidence; blood feeding success: RR 0.91, 95% CI 0.74 to 1.13; 329 mosquitoes, 1 trial, 1 comparison; low-certainty evidence).\nIn areas where pyrethroid resistance is low (61% to 90% mosquito mortality), studies reported little to no difference in the effects of unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs on mosquito mortality (RR 1.25, 95% CI 0.99 to 1.57; 1580 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence), and we do not know if there was any effect on blood-feeding success (RR 0.75, 95% CI 0.27 to 2.11; 1580 mosquitoes, 2 trials, 3 comparisons; very low-certainty evidence). For washed pyrethroid-PBO nets compared to washed standard-LLINs, we do not know if there was any difference in mosquito mortality (RR 1.39, 95% CI 0.95 to 2.04; 1774 mosquitoes, 2 trials, 3 comparisons; very low-certainty evidence) or on blood feeding (RR 1.07, 95% CI 0.49 to 2.33; 1774 mosquitoes, 2 trials, 3 comparisons; low-certainty evidence).\nIn areas where mosquito populations are susceptible to insecticides (> 90% mosquito mortality), there may be little to no difference in the effects of unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs on mosquito mortality (RR 1.20, 95% CI 0.64 to 2.26; 2791 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence). This is similar for washed nets (RR 1.07, 95% CI 0.92 to 1.25; 2644 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence). We do not know if unwashed pyrethroid-PBO nets had any effect on the blood-feeding success of susceptible mosquitoes (RR 0.52, 95% CI 0.12 to 2.22; 2791 mosquitoes, 2 trials, 2 comparisons; very low-certainty evidence). The same applies to washed nets (RR 1.25, 95% CI 0.82 to 1.91; 2644 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence).\nIn village trials comparing pyrethroid-PBO nets to LLINs, there was no difference in sporozoite rate (4 trials, 5 comparisons) nor in mosquito parity (3 trials, 4 comparisons).\nAuthors' conclusions\nIn areas of high insecticide resistance, pyrethroid-PBO nets have greater entomological and epidemiological efficacy compared to standard LLINs, with sustained reduction in parasite prevalence, higher mosquito mortality and reduction in mosquito blood feeding rates 21 to 25 months post intervention. 
Questions remain about the durability of PBO on nets, as the impact of pyrethroid-PBO nets on mosquito mortality was not sustained over 20 washes in experimental hut trials, and epidemiological data on pyrethroid-PBO nets for the full intended three-year life span of the nets is not available. Little evidence is available to support greater entomological efficacy of pyrethroid-PBO nets in areas where mosquitoes show lower levels of resistance to pyrethroids.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Pyrethroid-PBO nets were more effective than standard pyrethroid-only nets in killing mosquitoes and preventing blood feeding in areas where mosquito populations are very resistant to pyrethroid insecticides (high-certainty evidence). Pyrethroid-PBO nets reduced the number of malaria infections in areas of high pyrethroid resistance (moderate-certainty evidence), although further studies are needed to measure clinical outcomes for the full lifetime of the net."
}
]
|
query-laysum | 5584 | [
{
"role": "user",
"content": "Abstract: Background\nChronic pelvic pain (CPP) is a common gynaecological condition accounting for 20% of all gynaecological referrals. There are wide ranges of causes with overlapping symptomatology, therefore the management of the condition is a formidable challenge for clinicians. The aetiology of CPP is heterogeneous and in many cases, no clear diagnosis can be reached. It is in this scenario that the label of chronic pelvic pain syndrome (CPPS) can be applied. We defined women with CPPS as having a minimum duration of pain of at least 6 months, including with a diagnosis of pelvic congestion syndrome, but excluding pain caused by a condition such as endometriosis. Many surgical interventions have been tried in isolation or in conjunction with non-surgical interventions in the management with variable results. Surgical interventions are invasive and carry operative risks. Surgical interventions must be evaluated for their effectiveness prior to their prevalent use in the management of women with CPPS.\nObjectives\nTo review the effectiveness and safety of surgical interventions in the management of women with CPPS.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group (CGF) Specialised Register of Controlled Trials, CENTRAL, MEDLINE, Embase and PsycINFO, on 23 April 2021 for any randomised controlled trials (RCT) for surgical interventions in women with CPPS. We also searched the citation lists of relevant publications, two trial registries, relevant journals, abstracts, conference proceedings and several key grey literature sources.\nSelection criteria\nRCTs with women who had CPPS. The review authors were prepared to consider studies of any surgical intervention used for the management of CPPS. Outcome measures were pain rating scales, adverse events, psychological outcomes, quality of life (QoL) measures and requirement for analgesia.\nData collection and analysis\nTwo review authors independently evaluated studies for inclusion and extracted data using the forms designed according to Cochrane guidelines. For each included trial, we collected information regarding the method of randomisation, allocation concealment, blinding, data reporting and analyses. We reported pooled results as mean difference (MDs) or odds ratios (OR) and 95% confidence interval (CI) by the Mantel-Haenszel method. If similar outcomes were reported on different scales, we calculated the standardised mean difference (SMD). 
We applied GRADE criteria to judge the overall certainty of the evidence.\nMain results\nFour studies met our inclusion criteria involving 216 women with CPP and no identifiable cause.\nAdhesiolysis compared to no surgery or diagnostic laparoscopy\nWe are uncertain of the effect of adhesiolysis on pelvic pain scores postoperatively at three months (MD −7.3, 95% CI −29.9 to 15.3; 1 study, 43 participants; low-certainty evidence), six months (MD −14.3, 95% CI −35.9 to 7.3; 1 study, 43 participants; low-certainty evidence) and 12 months postsurgery (MD 0.00, 95% CI −4.60 to 4.60; 1 study, 43 participants; very low-certainty evidence).\nAdhesiolysis may improve both the emotional wellbeing (MD 24.90, 95% CI 7.92 to 41.88; 1 study, 43 participants; low-certainty evidence) and social support (MD 23.90, 95% CI −1.77 to 49.57; 1 study, 43 participants; low-certainty evidence) components of the Endometriosis Health Profile-30, and both the emotional component (MD 32.30, 95% CI 13.16 to 51.44; 1 study, 43 participants; low-certainty evidence) and the physical component of the 12-item Short Form (MD 22.90, 95% CI 10.97 to 34.83; 1 study, 43 participants; low-certainty evidence) when compared to diagnostic laparoscopy.\nWe are uncertain of the safety of adhesiolysis compared to comparator groups due to low-certainty evidence and lack of structured adverse event reporting.\nNo studies reported on psychological outcomes or requirements for analgesia.\nLaparoscopic uterosacral ligament ablation or resection compared to diagnostic laparoscopy/other treatment\nWe are uncertain of the effect of laparoscopic uterosacral ligament/nerve ablation (LUNA) or resection compared to other treatments postoperatively at three months (OR 1.26, 95% CI 0.40 to 3.93; 1 study, 51 participants; low-certainty evidence) and six months (MD −2.10, 95% CI −4.38 to 0.18; 1 study, 74 participants; very low-certainty evidence). At 12 months post-surgery, we are uncertain of the effect of LUNA on the rate of successful treatment compared to diagnostic laparoscopy. One study of 56 participants found no difference in the effect of LUNA on non-cyclical pain (P = 0.854) or dyspareunia (P = 0.41); however, there was a difference favouring LUNA on dysmenorrhea (P = 0.045) and dyschezia (P = 0.05). We are also uncertain of the effect of LUNA compared to vaginal uterosacral ligament resection on pelvic pain at 12 months (MD 2.00, 95% CI 0.47 to 3.53; 1 study, 74 participants; very low-certainty evidence).\nWe are uncertain of the safety of LUNA or resection compared to comparator groups due to the lack of structured adverse event reporting.\nWomen undergoing LUNA may require more analgesia postoperatively than those undergoing other treatments (P < 0.001; 1 study, 74 participants).\nNo studies reported psychological outcomes or QoL.\nAuthors' conclusions\nWe are uncertain about the benefit of adhesiolysis or LUNA in management of pain in women with CPPS based on the current literature. There may be a QoL benefit to adhesiolysis in improving both emotional wellbeing and social support, as measured by the validated QoL tools. It was not possible to synthesis evidence on adverse events as these were only reported narratively in some studies, in which none were observed. 
With the inadequate objective assessment of adverse events, especially long-term adverse events, associated with adhesiolysis or LUNA for CPPS, there is currently little to support these interventions for CPPS.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic pelvic pain in women is a common and debilitating condition. Definitions vary, but generally it is defined as pelvic pain for a period of six months or greater. There are many causes of chronic pelvic pain, but these can sometimes be difficult to identify. Regardless of identifying any or the specific cause, treatment is aimed at reducing symptoms. Occasionally, a diagnostic surgery with insertion of laparoscope is completed (inserting a telescope into the belly to visualise pelvic structures). When identifiable causes of chronic pelvic pain are present, such as endometriosis (tissue similar to the lining of the womb that starts to grow in other places) or adenomyosis (tissue similar to the lining of the womb is found deep in the muscle of the womb), there may be different treatment strategies necessary than when there are no obvious problems. When no disease is identified at the time of a diagnostic surgery despite chronic pelvic pain, we may consider various surgical procedures to treat the chronic pelvic pain, including removing scar tissue originating from infection or previous operation (called adhesiolysis), or cauterising (heat treatment) or excising (removing) the nerves carrying the pain sensation from pelvis to brain (called uterosacral ligament ablation/resection). Despite it being unclear how effective these surgical treatments are, they are being offered and done."
}
]
|
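The chronic pelvic pain review above combines outcomes reported on different scales as standardised mean differences (SMDs). A minimal sketch of that calculation follows, using invented group means, standard deviations, and sample sizes rather than any data from the review.

```python
import math

# Invented summary data for one pain outcome; not taken from the review.
mean_surgery, sd_surgery, n_surgery = 32.0, 18.0, 22
mean_control, sd_control, n_control = 40.0, 20.0, 21

# Standardised mean difference: difference in means divided by the pooled
# standard deviation, so outcomes measured on different scales are comparable.
pooled_sd = math.sqrt(
    ((n_surgery - 1) * sd_surgery ** 2 + (n_control - 1) * sd_control ** 2)
    / (n_surgery + n_control - 2)
)
smd = (mean_surgery - mean_control) / pooled_sd

# Large-sample approximation to the standard error of the SMD.
se_smd = math.sqrt(
    (n_surgery + n_control) / (n_surgery * n_control)
    + smd ** 2 / (2 * (n_surgery + n_control))
)

print(f"SMD {smd:.2f} (95% CI {smd - 1.96 * se_smd:.2f} to {smd + 1.96 * se_smd:.2f})")
```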
query-laysum | 5585 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with cystic fibrosis (CF) experience chronic airway infections as a result of mucus buildup within the lungs. Repeated infections often cause lung damage and disease. Airway clearance therapies aim to improve mucus clearance, increase sputum production, and improve airway function. The active cycle of breathing technique (ACBT) is an airway clearance method that uses a cycle of techniques to loosen airway secretions including breathing control, thoracic expansion exercises, and the forced expiration technique. This is an update of a previously published review.\nObjectives\nTo compare the clinical effectiveness of ACBT with other airway clearance therapies in CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. We also searched clinical trials registries and the reference lists of relevant articles and reviews.\nDate of last search: 29 March 2021.\nSelection criteria\nWe included randomised or quasi-randomised controlled clinical studies, including cross-over studies, comparing ACBT with other airway clearance therapies in CF.\nData collection and analysis\nTwo review authors independently screened each article, abstracted data and assessed the risk of bias of each study. We used GRADE to assess our confidence in the evidence assessing quality of life, participant preference, adverse events, forced expiratory volume in one second (FEV1) % predicted, forced vital capacity (FVC) % predicted, sputum weight, and number of pulmonary exacerbations.\nMain results\nOur search identified 99 studies, of which 22 (559 participants) met the inclusion criteria. Eight randomised controlled studies (259 participants) were included in the analysis; five were of cross-over design. The 14 remaining studies were cross-over studies with inadequate reports for complete assessment. The study size ranged from seven to 65 participants. The age of the participants ranged from six to 63 years (mean age 18.7 years). In 13 studies follow up lasted a single day. However, there were two long-term randomised controlled studies with follow up of one to three years. Most of the studies did not report on key quality items, and therefore, have an unclear risk of bias in terms of random sequence generation, allocation concealment, and outcome assessor blinding. Due to the nature of the intervention, none of the studies blinded participants or the personnel applying the interventions. However, most of the studies reported on all planned outcomes, had adequate follow up, assessed compliance, and used an intention-to-treat analysis.\nIncluded studies compared ACBT with autogenic drainage, airway oscillating devices (AOD), high-frequency chest compression devices, conventional chest physiotherapy (CCPT), positive expiratory pressure (PEP), and exercise. We found no difference in quality of life between ACBT and PEP mask therapy, AOD, other breathing techniques, or exercise (very low-certainty evidence). There was no difference in individual preference between ACBT and other breathing techniques (very low-certainty evidence). One study comparing ACBT with ACBT plus postural exercise reported no deaths and no adverse events (very low-certainty evidence). 
We found no differences in lung function (forced expiratory volume in one second (FEV1) % predicted and forced vital capacity (FVC) % predicted), oxygen saturation or expectorated sputum between ACBT and any other technique (very low-certainty evidence). There were no differences in the number of pulmonary exacerbations between people using ACBT and people using CCPT (low-certainty evidence) or ACBT with exercise (very low-certainty evidence), the only comparisons to report this outcome.\nAuthors' conclusions\nThere is little evidence to support or reject the use of the ACBT over any other airway clearance therapy and ACBT is comparable with other therapies in outcomes such as participant preference, quality of life, exercise tolerance, lung function, sputum weight, oxygen saturation, and number of pulmonary exacerbations. Longer-term studies are needed to more adequately assess the effects of ACBT on outcomes important for people with cystic fibrosis such as quality of life and preference.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have little or no confidence in the evidence and think that further research is very likely to affect our conclusions of this review for any of the interventions analysed.\nMany of the studies did not provide enough details of their methods to determine if there were any biases that might have affected the results. Many studies did not report how they decided who would get which treatment and how they made sure that the people who were putting people into the different treatment groups and those who were assessing the results did not know which group each individual was in. Most of the included studies had a cross-over design (where people have one treatment and then switch to the second), and many of these did not report the length of time in between different treatments. As it is possible that the first treatment might affect the results of the next treatment, we only included results from the first treatment period. Many of the studies did not report separate results for just the first treatment period, so we did not include their results in our review.\nAll participants knew which treatment group they were in (it is not possible to disguise different physiotherapy techniques). This could have affected the results for some of the self-reported outcomes, such as quality of life, personal preference, or exercise tolerance, but is unlikely to have affected the more objective outcomes, such as lung function.\nMost of the studies followed those taking part for less than one month and did this for most of the participants for the entire study period. In two out of the three longer studies more than 10% of the people taking part dropped out. The study results could be affected if the people who dropped out of the studies were not evenly spread across the different treatment groups.\nOver half of the studies checked that participants were using the airway clearance therapy they were supposed to. Most of the studies reported on all their planned outcomes.\nThe findings of the review were limited as not many studies made the same comparisons; also, there were not many long-term studies and the studies we included did not report enough data."
}
]
|
query-laysum | 5586 | [
{
"role": "user",
"content": "Abstract: Background\nInfertility is a prevalent problem that has significant consequences for individuals, families, and the community. Modifiable lifestyle factors may affect the chance of people with infertility having a baby. However, no guideline is available about what preconception advice should be offered. It is important to determine what preconception advice should be given to people with infertility and to evaluate whether this advice helps them make positive behavioural changes to improve their lifestyle and their chances of conceiving.\nObjectives\nTo assess the safety and effectiveness of preconception lifestyle advice on fertility outcomes and lifestyle behavioural changes for people with infertility.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register of controlled trials, CENTRAL, MEDLINE, Embase, PsycINFO, AMED, CINAHL, trial registers, Google Scholar, and Epistemonikos in January 2021; we checked references and contacted field experts to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), randomised cross-over studies, and cluster-randomised studies that compared at least one form of preconception lifestyle advice with routine care or attention control for people with infertility.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. Primary effectiveness outcomes were live birth and ongoing pregnancy. Primary safety outcomes were adverse events and miscarriage. Secondary outcomes included reported behavioural changes in lifestyle, birth weight, gestational age, clinical pregnancy, time to pregnancy, quality of life, and male factor infertility outcomes. We assessed the overall quality of evidence using GRADE criteria.\nMain results\nWe included in the review seven RCTs involving 2130 participants. Only one RCT included male partners. Three studies compared preconception lifestyle advice on a combination of topics with routine care or attention control. Four studies compared preconception lifestyle advice on one topic (weight, alcohol intake, or smoking) with routine care for women with infertility and specific lifestyle characteristics. The evidence was of low to very low-quality. The main limitations of the included studies were serious risk of bias due to lack of blinding, serious imprecision, and poor reporting of outcome measures.\nPreconception lifestyle advice on a combination of topics versus routine care or attention control \nPreconception lifestyle advice on a combination of topics may result in little to no difference in the number of live births (risk ratio (RR) 0.93, 95% confidence interval (CI) 0.79 to 1.10; 1 RCT, 626 participants), but the quality of evidence was low. No studies reported on adverse events or miscarriage. Due to very low-quality evidence, we are uncertain whether preconception lifestyle advice on a combination of topics affects lifestyle behavioural changes: body mass index (BMI) (mean difference (MD) -1.06 kg/m², 95% CI -2.33 to 0.21; 1 RCT, 180 participants), vegetable intake (MD 12.50 grams/d, 95% CI -8.43 to 33.43; 1 RCT, 264 participants), alcohol abstinence in men (RR 1.08, 95% CI 0.74 to 1.58; 1 RCT, 210 participants), or smoking cessation in men (RR 1.01, 95% CI 0.91 to 1.12; 1 RCT, 212 participants). 
Preconception lifestyle advice on a combination of topics may result in little to no difference in the number of women with adequate folic acid supplement use (RR 0.98, 95% CI 0.95 to 1.01; 2 RCTs, 850 participants; I² = 4%), alcohol abstinence (RR 1.07, 95% CI 0.99 to 1.17; 1 RCT, 607 participants), and smoking cessation (RR 1.01, 95% CI 0.98 to 1.04; 1 RCT, 606 participants), on low quality evidence. No studies reported on other behavioural changes.\nPreconception lifestyle advice on weight versus routine care \nStudies on preconception lifestyle advice on weight were identified only in women with infertility and obesity. Compared to routine care, we are uncertain whether preconception lifestyle advice on weight affects the number of live births (RR 0.94, 95% CI 0.62 to 1.43; 2 RCTs, 707 participants; I² = 68%; very low-quality evidence), adverse events including gestational diabetes (RR 0.78, 95% CI 0.48 to 1.26; 1 RCT, 317 participants; very low-quality evidence), hypertension (RR 1.07, 95% CI 0.66 to 1.75; 1 RCT, 317 participants; very low-quality evidence), or miscarriage (RR 1.50, 95% CI 0.95 to 2.37; 1 RCT, 577 participants; very low-quality evidence). Regarding lifestyle behavioural changes for women with infertility and obesity, preconception lifestyle advice on weight may slightly reduce BMI (MD -1.30 kg/m², 95% CI -1.58 to -1.02; 1 RCT, 574 participants; low-quality evidence). Due to very low-quality evidence, we are uncertain whether preconception lifestyle advice affects the percentage of weight loss, vegetable and fruit intake, alcohol abstinence, or physical activity. No studies reported on other behavioural changes.\nPreconception lifestyle advice on alcohol intake versus routine care \nStudies on preconception lifestyle advice on alcohol intake were identified only in at-risk drinking women with infertility. We are uncertain whether preconception lifestyle advice on alcohol intake affects the number of live births (RR 1.15, 95% CI 0.53 to 2.50; 1 RCT, 37 participants; very low-quality evidence) or miscarriages (RR 1.31, 95% CI 0.21 to 8.34; 1 RCT, 37 participants; very low-quality evidence). One study reported on behavioural changes for alcohol consumption but not as defined in the review methods. No studies reported on adverse events or other behavioural changes.\nPreconception lifestyle advice on smoking versus routine care \nStudies on preconception lifestyle advice on smoking were identified only in smoking women with infertility. No studies reported on live birth, ongoing pregnancy, adverse events, or miscarriage. One study reported on behavioural changes for smoking but not as defined in the review methods.\nAuthors' conclusions\nLow-quality evidence suggests that preconception lifestyle advice on a combination of topics may result in little to no difference in the number of live births. Evidence was insufficient to allow conclusions on the effects of preconception lifestyle advice on adverse events and miscarriage and on safety, as no studies were found that looked at these outcomes, or the studies were of very low quality. This review does not provide clear guidance for clinical practice in this area. 
However, it does highlight the need for high-quality RCTs to investigate preconception lifestyle advice on a combination of topics and to assess relevant effectiveness and safety outcomes in men and women with infertility.\n\nGiven the provided abstract, please respond to the following query: \"Preconception lifestyle advice on a combination of topics versus routine care or attention control\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Preconception lifestyle advice on a combination of topics may not affect live birth. The evidence suggests that if live birth is assumed to be 48% for those receiving routine care or attention control, then live birth when preconception lifestyle advice is received would be between 38% and 53%. We are uncertain whether preconception lifestyle advice on a combination of topics affects lifestyle behaviour changes such as body mass index (BMI) in women, vegetable intake in men and women, or alcohol abstinence and smoking cessation in men. Preconception lifestyle advice on a combination of topics may not affect adequate use of folic acid supplement, alcohol abstinence, or smoking cessation in women. The evidence suggests that if adequate folic acid supplement use in women is assumed to be 93% for those receiving routine care or attention control, then adequate folic acid supplement use when preconception lifestyle advice is received would be between 89% and 94%. Evidence also suggests that if it is assumed that 75% of women abstain from alcohol with routine care or attention control, then between 74% and 88% of women would abstain from alcohol when receiving preconception lifestyle advice. If it is assumed that smoking cessation is seen in 95% of women receiving routine care or attention control, then smoking cessation would be seen in 93% to 99% of women when they receive preconception lifestyle advice. No study reported on other behavioural changes."
}
]
|
query-laysum | 5587 | [
{
"role": "user",
"content": "Abstract: Background\nA key function of health systems is implementing interventions to improve health, but coverage of essential health interventions remains low in low-income countries. Implementing interventions can be challenging, particularly if it entails complex changes in clinical routines; in collaborative patterns among different healthcare providers and disciplines; in the behaviour of providers, patients or other stakeholders; or in the organisation of care. Decision-makers may use a range of strategies to implement health interventions, and these choices should be based on evidence of the strategies' effectiveness.\nObjectives\nTo provide an overview of the available evidence from up-to-date systematic reviews about the effects of implementation strategies for health systems in low-income countries. Secondary objectives include identifying needs and priorities for future evaluations and systematic reviews on alternative implementation strategies and informing refinements of the framework for implementation strategies presented in the overview.\nMethods\nWe searched Health Systems Evidence in November 2010 and PDQ-Evidence up to December 2016 for systematic reviews. We did not apply any date, language or publication status limitations in the searches. We included well-conducted systematic reviews of studies that assessed the effects of implementation strategies on professional practice and patient outcomes and that were published after April 2005. We excluded reviews with limitations important enough to compromise the reliability of the review findings. Two overview authors independently screened reviews, extracted data and assessed the certainty of evidence using GRADE. We prepared SUPPORT Summaries for eligible reviews, including key messages, 'Summary of findings' tables (using GRADE to assess the certainty of the evidence) and assessments of the relevance of findings to low-income countries.\nMain results\nWe identified 7272 systematic reviews and included 39 of them in this overview. An additional four reviews provided supplementary information. Of the 39 reviews, 32 had only minor limitations and 7 had important methodological limitations. Most studies in the reviews were from high-income countries. There were no studies from low-income countries in eight reviews.\nImplementation strategies addressed in the reviews were grouped into four categories – strategies targeting:\n1. healthcare organisations (e.g. strategies to change organisational culture; 1 review);\n2. healthcare workers by type of intervention (e.g. printed educational materials; 14 reviews);\n3. healthcare workers to address a specific problem (e.g. unnecessary antibiotic prescription; 9 reviews);\n4. healthcare recipients (e.g. 
medication adherence; 15 reviews).\nOverall, we found the following interventions to have desirable effects on at least one outcome with moderate- or high-certainty evidence and no moderate- or high-certainty evidence of undesirable effects.\n1.Strategies targeted at healthcare workers: educational meetings, nutrition training of health workers, educational outreach, practice facilitation, local opinion leaders, audit and feedback, and tailored interventions.\n2.Strategies targeted at healthcare workers for specific types of problems: training healthcare workers to be more patient-centred in clinical consultations, use of birth kits, strategies such as clinician education and patient education to reduce antibiotic prescribing in ambulatory care settings, and in-service neonatal emergency care training.\n3. Strategies targeted at healthcare recipients: mass media interventions to increase uptake of HIV testing; intensive self-management and adherence, intensive disease management programmes to improve health literacy; behavioural interventions and mobile phone text messages for adherence to antiretroviral therapy; a one time incentive to start or continue tuberculosis prophylaxis; default reminders for patients being treated for active tuberculosis; use of sectioned polythene bags for adherence to malaria medication; community-based health education, and reminders and recall strategies to increase vaccination uptake; interventions to increase uptake of cervical screening (invitations, education, counselling, access to health promotion nurse and intensive recruitment); health insurance information and application support.\nAuthors' conclusions\nReliable systematic reviews have evaluated a wide range of strategies for implementing evidence-based interventions in low-income countries. Most of the available evidence is focused on strategies targeted at healthcare workers and healthcare recipients and relates to process-based outcomes. Evidence of the effects of strategies targeting healthcare organisations is scarce.\n\nGiven the provided abstract, please respond to the following query: \"Strategies targeted at healthcare organisations\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- Strategies to improve organisational culture."
}
]
|
query-laysum | 5588 | [
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers, also known as pressure injuries and bed sores, are localised areas of injury to the skin or underlying tissues, or both. Dressings made from a variety of materials, including foam, are used to treat pressure ulcers. An evidence-based overview of dressings for pressure ulcers is needed to enable informed decision-making on dressing use. This review is part of a suite of Cochrane Reviews investigating the use of dressings in the treatment of pressure ulcers. Each review will focus on a particular dressing type.\nObjectives\nTo assess the clinical and cost effectiveness of foam wound dressings for healing pressure ulcers in people with an existing pressure ulcer in any care setting.\nSearch methods\nIn February 2017 we searched: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase; EBSCO CINAHL Plus and the NHS Economic Evaluation Database (NHS EED). We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nPublished or unpublished randomised controlled trials (RCTs) and cluster-RCTs, that compared the clinical and cost effectiveness of foam wound dressings for healing pressure ulcers (Category/Stage II or above).\nData collection and analysis\nTwo review authors independently performed study selection, risk of bias and data extraction. A third reviewer resolved discrepancies between the review authors.\nMain results\nWe included nine trials with a total of 483 participants, all of whom were adults (59 years or older) with an existing pressure ulcer Category/Stage II or above. All trials had two arms, which compared foam dressings with other dressings for treating pressure ulcers.\nThe certainty of evidence ranged from low to very low due to various combinations of selection, performance, attrition, detection and reporting bias, and imprecision due to small sample sizes and wide confidence intervals. We had very little confidence in the estimate of effect of included studies. Where a foam dressing was compared with another foam dressing, we established that the true effect was likely to be substantially less than the study's estimated effect.\nWe present data for four comparisons.\nOne trial compared a silicone foam dressing with another (hydropolymer) foam dressing (38 participants), with an eight-week (short-term) follow-up. It was uncertain whether alternate types of foam dressing affected the incidence of healed pressure ulcers (RR 0.89, 95% CI 0.45 to 1.75) or adverse events (RR 0.37, 95% CI 0.04 to 3.25), as the certainty of evidence was very low, downgraded for serious limitations in study design and very serious imprecision.\nFour trials with a median sample size of 20 participants (230 participants), compared foam dressings with hydrocolloid dressings for eight weeks or less (short-term). It was uncertain whether foam dressings affected the probability of healing in comparison to hydrocolloid dressings over a short follow-up period in three trials (RR 0.85, 95% CI 0.54 to 1.34), very low-certainty evidence, downgraded for very serious study limitations and serious imprecision. 
It was uncertain if there was a difference in risk of adverse events between groups (RR 0.88, 95% CI 0.37 to 2.11), very low-certainty evidence, downgraded for serious study limitations and very serious imprecision. Reduction in ulcer size, patient satisfaction/acceptability, pain and cost effectiveness data were also reported but we assessed the evidence as being of very low certainty.\nOne trial (34 participants), compared foam and hydrogel dressings over an eight-week (short-term) follow-up. It was uncertain if the foam dressing affected the probability of healing (RR 1.00, 95% CI 0.78 to 1.28), time to complete healing (MD 5.67 days 95% CI -4.03 to 15.37), adverse events (RR 0.33, 95% CI 0.01 to 7.65) or reduction in ulcer size (MD 0.30 cm2 per day, 95% CI -0.15 to 0.75), as the certainty of the evidence was very low, downgraded for serious study limitations and very serious imprecision.\nThe remaining three trials (181 participants) compared foam with basic wound contact dressings. Follow-up times ranged from short-term (8 weeks or less) to medium-term (8 to 24 weeks). It was uncertain whether foam dressings affected the probability of healing compared with basic wound contact dressings, in the short term (RR 1.33, 95% CI 0.62 to 2.88) or medium term (RR 1.17, 95% CI 0.79 to 1.72), or affected time to complete healing in the medium term (MD -35.80 days, 95% CI -56.77 to -14.83), or adverse events in the medium term (RR 0.58, 95% CI 0.33 to 1.05). This was due to the very low-certainty evidence, downgraded for serious to very serious study limitations and imprecision. Reduction in ulcer size, patient satisfaction/acceptability, pain and cost effectiveness data were also reported but again, we assessed the evidence as being of very low certainty.\nNone of the included trials reported quality of life or pressure ulcer recurrence.\nAuthors' conclusions\nIt is uncertain whether foam dressings are more clinically effective, more acceptable to users, or more cost effective compared to alternative dressings in treating pressure ulcers. It was difficult to make accurate comparisons between foam dressings and other dressings due to the lack of data on reduction of wound size, complete wound healing, treatment costs, or insufficient time-frames. Quality of life and patient (or carer) acceptability/satisfaction associated with foam dressings were not systematically measured in any of the included studies. We assessed the certainty of the evidence in the included trials as low to very low. Clinicians need to carefully consider the lack of robust evidence in relation to the clinical and cost-effectiveness of foam dressings for treating pressure ulcers when making treatment decisions, particularly when considering the wound management properties that may be offered by each dressing type and the care context.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no clear evidence from any of the studies included in this review that foam dressings are more effective at healing pressure ulcers than other types of dressings; or that foam dressings are more cost effective than other dressings. This is due in part to the low quality of the studies, many of which had small numbers of participants and did not provide accurate details of their methods."
}
]
|
query-laysum | 5589 | [
{
"role": "user",
"content": "Abstract: Background\nInfants born very preterm often receive multiple red blood cell (RBC) transfusions during their initial hospitalisation. However, there is an increasing awareness of potential adverse effects of RBC transfusions in this vulnerable patient population. Modification of RBCs prior to transfusion, through washing with 0.9% saline, may reduce these adverse effects and reduce the rate of significant morbidity and mortality for preterm infants and improve outcomes for this high-risk group.\nObjectives\nTo determine whether pre-transfusion washing of RBCs prevents morbidity and mortality in preterm infants.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 7), MEDLINE via PubMed (31 July 2015), EMBASE (31 July 2015), and CINAHL (31 July 2015). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nRandomised, cluster randomised, and quasi-randomised controlled trials including preterm infants (less than 32 weeks gestation) or very low birth weight infants (less than 1500 g), or both, who received one or more washed packed RBC transfusions.\nData collection and analysis\nTwo review authors independently assessed the eligibility of the trials. We identified four studies from the initial search. After further review of the full-text studies, we found one study meeting the selection criteria.\nMain results\nWe included a single study enrolling a total of 21 infants for analysis in this review and reported on all-cause mortality during hospital stay, length of initial neonatal intensive care unit (NICU) stay (days), and duration of mechanical ventilation (days). There was no significant difference in mortality between the washed versus the unwashed RBCs for transfusion groups (risk ratio 1.63, 95% confidence interval (CI) 0.28 to 9.36; risk difference 0.10, 95% CI -0.26 to 0.45). There was no significant difference in the length of initial NICU stay between the washed versus the unwashed RBCs for transfusion groups (mean difference (MD) 25 days, 95% CI -21.15 to 71.15) or the duration of mechanical ventilation between the washed versus the unwashed RBCs for transfusion groups (MD 9.60 days, 95% CI -1.90 to 21.10).\nAuthors' conclusions\nWe identified a single small study. The results from this study show a high level of uncertainty, as the confidence intervals are consistent with both a large improvement or a serious harm caused by the intervention. Consequently, there is insufficient evidence to support or refute the use of washed RBCs to prevent the development of significant neonatal morbidities or mortality. Further clinical trials are required to assess the potential effects of pre-transfusion washing of RBCs for preterm or very low birth weight infants, or both, on short- and long-term outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It was hard from the available evidence to draw any conclusions about whether washing blood would be helpful or not for preterm babies. As of now, there is no strong evidence showing that washing blood makes any difference to the outcomes of preterm babies."
}
]
|
query-laysum | 5590 | [
{
"role": "user",
"content": "Abstract: Background\nAutogenic drainage is an airway clearance technique that was developed by Jean Chevaillier in 1967. The technique is characterised by breathing control using expiratory airflow to mobilise secretions from smaller to larger airways. Secretions are cleared independently by adjusting the depth and speed of respiration in a sequence of controlled breathing techniques during exhalation. The technique requires training, concentration and effort from the individual but it has previously been shown to be an effective treatment option for those who are seeking techniques to support and promote independence. However, at a time where the trajectory and demographics of the disease are changing, it is important to systematically review the evidence demonstrating that autogenic drainage is an effective intervention for people with cystic fibrosis.\nObjectives\nTo compare the clinical effectiveness of autogenic drainage in people with cystic fibrosis with other physiotherapy airway clearance techniques.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. We also searched the reference lists of relevant articles and reviews, as well as two ongoing trials registers (02 February 2021).\nDate of most recent search of the Cochrane Cystic Fibrosis Trials Register: 06 July 2021.\nSelection criteria\nWe identified randomised and quasi-randomised controlled studies comparing autogenic drainage to another airway clearance technique or no therapy in people with cystic fibrosis for at least two treatment sessions.\nData collection and analysis\nData extraction and assessments of risk of bias were independently performed by three authors. The authors assessed the quality of the evidence using the GRADE system. The authors contacted seven teams of investigators for further information pertinent to their published studies.\nMain results\nSearches retrieved 64 references to 37 individual studies, of which eight (n = 212) were eligible for inclusion. One study was of parallel design with the remaining seven being cross-over in design; participant numbers ranged from 4 to 75. The total study duration varied between four days and two years. The age of participants ranged between seven and 63 years with a wide range of disease severity reported. Six studies enrolled participants who were clinically stable, whilst participants in two studies received treatment whilst hospitalised with an infective exacerbation. All studies compared autogenic drainage to one (or more) other recognised airway clearance technique. Exercise is commonly used as an alternative therapy by people with cystic fibrosis; however, there were no studies identified comparing exercise with autogenic drainage.\nThe certainty of the evidence was generally low or very low. The main reasons for downgrading the level of evidence were the frequent use of a cross-over design, outcome reporting bias and the inability to blind participants.\nThe review's primary outcome, forced expiratory volume in one second, was the most common outcome measured and was reported by all eight studies; only three studies reported on quality of life (also a primary outcome of the review). One study reported on adverse events and described a decrease in oxygen saturation levels whilst performing active cycle of breathing techniques, but not with autogenic drainage. 
Seven of the eight included studies measured forced vital capacity and three of the studies used mid peak expiratory flow (per cent predicted) as an outcome. Six studies reported sputum weight. Less commonly used outcomes included oxygen saturation levels, personal preference, hospital admissions, intravenous antibiotics and pseudomonas gene expression. There were no statistically significant differences found between any of the techniques used with respect to the outcomes measured except when autogenic drainage was described as being the preferred technique of the participants in one study over postural drainage and percussion.\nAuthors' conclusions\nAutogenic drainage is a challenging technique that requires commitment from the individual. As such, this intervention merits systematic review to ensure its effectiveness for people with cystic fibrosis, particularly in an era where treatment options are changing rapidly. From the studies assessed, autogenic drainage was not found to be superior to any other form of airway clearance technique. Larger studies are required to better evaluate autogenic drainage in comparison to other airway clearance techniques in view of the relatively small number of participants in this review and the complex study designs. The studies recruited a range of participants and were not powered to assess non-inferiority. The varied length and design of the studies made the analysis of pooled data challenging.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the certainty of the evidence from the studies was judged to be mainly low or very low. The main problems for this being the small numbers of participants in each study, the unclear reporting of results in the studies and the study design used. In one study, which was classed as having a high risk of bias due to incomplete results, those taking part had to change physiotherapy technique halfway through the study and there were many who dropped out and did not comply with the postural drainage and percussion treatment arm. Six of the eight studies used research staff to assess results who did not know which technique each person was using and this improved the quality of the evidence and reduced any bias in this respect."
}
]
|
query-laysum | 5591 | [
{
"role": "user",
"content": "Abstract: Background\nFulvestrant is a selective oestrogen receptor down-regulator (SERD), which by blocking proliferation of breast cancer cells, is an effective endocrine treatment for women with hormone-sensitive advanced breast cancer. The goal of such systemic therapy in this setting is to reduce symptoms, improve quality of life, and increase survival time.\nObjectives\nTo assess the efficacy and safety of fulvestrant for hormone-sensitive locally advanced or metastatic breast cancer in postmenopausal women, as compared to other standard endocrine agents.\nSearch methods\nWe searched the Cochrane Breast Cancer Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), and ClinicalTrials.gov on 7 July 2015. We also searched major conference proceedings (American Society of Clinical Oncology (ASCO) and San Antonio Breast Cancer Symposium) and practice guidelines from major oncology groups (ASCO, European Society for Medical Oncology (ESMO), National Comprehensive Cancer Network, and Cancer Care Ontario). We handsearched reference lists from relevant studies.\nSelection criteria\nWe included for analyses randomised controlled trials that enrolled postmenopausal women with hormone-sensitive advanced breast cancer (TNM classifications: stages IIIA, IIIB, and IIIC) or metastatic breast cancer (TNM classification: stage IV) with an intervention group treated with fulvestrant with or without other standard anticancer therapy.\nData collection and analysis\nTwo review authors independently extracted data from trials identified in the searches, conducted 'Risk of bias' assessments of the included studies, and assessed the overall quality of the evidence using the GRADE approach. Outcome data extracted from these trials for our analyses and review included progression-free survival (PFS) or time to progression (TTP) or time to treatment failure, overall survival, clinical benefit rate, toxicity, and quality of life. We used the fixed-effect model for meta-analysis where possible.\nMain results\nWe included nine studies randomising 4514 women for meta-analysis and review. Overall results for the primary endpoint of PFS indicated that women receiving fulvestrant did at least as well as the control groups (hazard ratio (HR) 0.95, 95% confidence interval (CI) 0.89 to 1.02; P = 0.18, I2= 56%, 4258 women, 9 studies, high-quality evidence). In the one high-quality study that tested fulvestrant at the currently approved and now standard dose of 500 mg against anastrozole, women treated with fulvestrant 500 mg did better than anastrozole, with a HR for TTP of 0.66 (95% CI 0.47 to 0.93; 205 women) and a HR for overall survival of 0.70 (95% CI 0.50 to 0.98; 205 women). 
There was no difference in PFS whether fulvestrant was used in combination with another endocrine therapy or in the first- or second-line setting, when compared to control treatments: for monotherapy HR 0.97 (95% CI 0.90 to 1.04) versus HR 0.87 (95% CI 0.77 to 0.99) for combination therapy when compared to control, and HR 0.93 (95% CI 0.84 to 1.03) in the first-line setting and HR 0.96 (95% CI 0.88 to 1.04) in the second-line setting.\nOverall, there was no difference between fulvestrant and control treatments in clinical benefit rate (risk ratio (RR) 1.03, 95% CI 0.97 to 1.10; P = 0.29, I2 = 24%, 4105 women, 9 studies, high-quality evidence) or overall survival (HR 0.97, 95% CI 0.87 to 1.09, P = 0.62, I2 = 66%, 2480 women, 5 studies, high-quality evidence). There was no significant difference in vasomotor toxicity (RR 1.02, 95% CI 0.89 to 1.18, 3544 women, 8 studies, high-quality evidence), arthralgia (RR 0.96, 95% CI 0.86 to 1.09, 3244 women, 7 studies, high-quality evidence), and gynaecological toxicities (RR 1.22, 95% CI 0.94 to 1.57, 2848 women, 6 studies, high-quality evidence). Four studies reported quality of life, none of which reported a difference between the fulvestrant and control arms, though specific data were not presented.\nAuthors' conclusions\nFor postmenopausal women with advanced hormone-sensitive breast cancer, fulvestrant is at least as effective and safe as the comparator endocrine therapies in the included studies. However, fulvestrant may be potentially more effective than current therapies when given at 500 mg, though this higher dosage was used in only one of the nine studies included in the review. We saw no advantage with combination therapy, and fulvestrant was equally as effective as control therapies in both the first- and second-line setting. Our review demonstrates that fulvestrant is a safe and effective systemic therapy and can be considered as a valid option in the sequence of treatments for postmenopausal women with hormone-sensitive advanced breast cancer.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All studies were of high quality."
}
]
|
query-laysum | 5592 | [
{
"role": "user",
"content": "Abstract: Background\nThe application of continuous positive airway pressure (CPAP) has been shown to have some benefits in the treatment of preterm infants with respiratory distress. CPAP has the potential to reduce lung damage, particularly if applied early before atelectasis has occurred. Early application may better conserve an infant's own surfactant stores and consequently may be more effective than later application.\nObjectives\n• To determine if early compared with delayed initiation of CPAP results in lower mortality and reduced need for intermittent positive-pressure ventilation in preterm infants in respiratory distress\n○ Subgroup analyses were planned a priori on the basis of weight (with subdivisions at 1000 grams and 1500 grams), gestation (with subdivisions at 28 and 32 weeks), and according to whether surfactant was used\n▫ Sensitivity analyses based on trial quality were also planned\n○ For this update, we have excluded trials using continuous negative pressure\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 6), in the Cochrane Library; Ovid MEDLINE(R) and Epub Ahead of Print, In-Process & Other Non-Indexed Citations Daily and Versions(R); and the Cumulative Index to Nursing and Allied Health Literatue (CINAHL), on 30 June 2020. We also searched clinical trials databases and the reference lists of retrieved articles for randomised controlled trials (RCTs) and quasi-RCTs.\nSelection criteria\nWe included trials that used random or quasi-random allocation to either early or delayed CPAP for spontaneously breathing preterm infants in respiratory distress.\nData collection and analysis\nWe used the standard methods of Cochrane and Cochrane Neonatal, including independent assessment of trial quality and extraction of data by two review authors. We used the GRADE approach to assess the certainty of evidence.\nMain results\nWe found four studies that recruited a total of 119 infants. Two were quasi-randomised, and the other two did not provide details on the method of randomisation or allocation used. None of these studies used blinding of the intervention or the outcome assessor. Evidence showed uncertainty about whether early CPAP has an effect on subsequent use of intermittent positive-pressure ventilation (IPPV) (typical risk ratio (RR) 0.77, 95% confidence interval (CI) 0.43 to 1.38; typical risk difference (RD) -0.08, 95% CI -0.23 to 0.08; I² = 0%, 4 studies, 119 infants; very low-certainty evidence) or mortality (typical RR 0.93, 95% CI 0.43 to 2.03; typical RD -0.02, 95% CI -0.15 to 0.12; I² = 33%, 4 studies, 119 infants; very low-certainty evidence). The outcome 'failed treatment' was not reported in any of these studies. There was an uncertain effect on air leak (pneumothorax) (typical RR 1.09, 95% CI 0.39 to 3.04, I² = 0%, 3 studies, 98 infants; very low-certainty evidence). No trials reported intraventricular haemorrhage or necrotising enterocolitis. No cases of retinopathy of prematurity were reported in one study (21 infants). One case of bronchopulmonary dysplasia was reported in each group in one study involving 29 infants. 
Long-term outcomes were not reported.\nAuthors' conclusions\nAll four small trials included in this review were performed in the 1970s or the early 1980s, and we are very uncertain whether early application of CPAP confers clinical benefit in the treatment of respiratory distress, or whether it is associated with any adverse effects.\nFurther trials should be directed towards establishing the appropriate level of CPAP and the timing and method of administration of surfactant when used along with CPAP.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All four included studies had weaknesses in the way they were conducted, and all were very small. In addition, because they were old studies, study results might not apply to the current care of preterm infants. Therefore, we assessed the certainty of evidence as very low."
}
]
|
query-laysum | 5593 | [
{
"role": "user",
"content": "Abstract: Background\nHeavy menstrual bleeding significantly impairs the quality of life of many otherwise healthy women. Perception of heavy menstrual bleeding is subjective and management usually depends upon what symptoms are acceptable to the individual. Surgical options include conservative surgery (uterine resection or ablation) and hysterectomy. Medical treatment options include oral medication and a hormone-releasing intrauterine device (LNG-IUS).\nObjectives\nTo compare the effectiveness, safety and acceptability of surgery versus medical therapy for heavy menstrual bleeding.\nSearch methods\nWe searched the following databases from inception to January 2016: Cochrane Gynaecology and Fertility Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO and clinical trials registers (clinical trials.gov and ICTRP). We also searched the reference lists of retrieved articles.\nSelection criteria\nRandomised controlled trials (RCTs) comparing conservative surgery or hysterectomy versus medical therapy (oral or intrauterine) for heavy menstrual bleeding.\nData collection and analysis\nTwo review authors independently selected the studies, assessed their risk of bias and extracted the data. Our primary outcomes were menstrual bleeding, satisfaction rate and adverse events. Where appropriate we pooled the data to calculate pooled risk ratios (RRs) or mean differences, with 95% confidence intervals (CIs), using a fixed-effect model. We assessed heterogeneity with the I2 statistic and evaluated the quality of the evidence using GRADE methods.\nMain results\nWe included 15 parallel-group RCTs (1289 women). Surgical interventions included hysterectomy and endometrial resection or ablation. Medical interventions included oral medication and the levonorgestrel-releasing intrauterine device (LNG-IUS). The overall quality of the evidence for different comparisons ranged from very low to moderate. The main limitations were lack of blinding, attrition and imprecision. Moreover, it was difficult to interpret long-term study findings as many women randomised to medical interventions subsequently underwent surgery.\nSurgery versus oral medication\nSurgery (endometrial resection) was more effective in controlling bleeding at four months (RR 2.66, 95% CI 1.94 to 3.64, one RCT, 186 women, moderate quality evidence) and also at two years (RR 1.29, 95% CI 1.06 to 1.57, one RCT, 173 women, low quality evidence). There was no evidence of a difference between the groups at five years (RR 1.14, 95% CI 0.97 to 1.34, one RCT, 140 women, very low quality evidence).\nSatisfaction with treatment was higher in the surgical group at two years (RR 1.40, 95% CI 1.13 to 1.74, one RCT, 173 women, moderate quality evidence), but there was no evidence of a difference between the groups at five years (RR 1.13, 95% CI 0.94 to 1.37, one RCT, 114 women, very low quality evidence). There were fewer adverse events in the surgical group at four months (RR 0.26, 95 CI 0.15 to 0.46, one RCT, 186 women). These findings require cautious interpretation, as 59% of women randomised to the oral medication group had had surgery within two years and 77% within five years.\nSurgery versus LNG-IUS\nWhen hysterectomy was compared with LNG-IUS, the hysterectomy group were more likely to have objective control of bleeding at one year (RR 1.11, 95% CI 1.05 to 1.19, one RCT, 223 women, moderate quality evidence). 
There was no evidence of a difference in quality of life between the groups at five or 10 years, but by 10 years 46% of women originally assigned to LNG-IUS had undergone hysterectomy. Adverse effects associated with hysterectomy included surgical complications such as bladder or bowel perforation and vesicovaginal fistula. Adverse effects associated with LNG-IUS were ongoing bleeding and hormonal symptoms.\nWhen conservative surgery was compared with LNG-IUS, at one year the surgical group were more likely to have subjective control of bleeding (RR 1.19, 95% CI 1.07 to 1.32, five RCTs, 281 women, low quality evidence, I2 = 15%). Satisfaction rates were higher in the surgical group at one year (RR 1.16, 95% CI 1.04, to 1.28, six RCTs, 442 women, I2 = 27%), but this finding was sensitive to the choice of statistical model and use of a random-effects model showed no conclusive evidence of a difference between the groups. There was no evidence of a difference between the groups in satisfaction rates at two years (RR 0.93, 95% CI 0.81 to 1.08, two RCTs, 117 women, I2 = 1%).\nAt one year there were fewer adverse events (such as bleeding and spotting) in the surgical group (RR 0.36, 95% CI 0.15 to 0.82, three RCTs, moderate quality evidence). It was unclear what proportion of women assigned to LNG-IUS underwent surgery over long-term follow-up, as there were few data beyond one year.\nAuthors' conclusions\nSurgery, especially hysterectomy, reduces menstrual bleeding more than medical treatment at one year. There is no conclusive evidence of a difference in satisfaction rates between surgery and LNG-IUS, though adverse effects such as bleeding and spotting are more likely to occur with LNG-IUS. Oral medication suits a minority of women in the long term, and the LNG-IUS device provides a better alternative to surgery in most cases. Although hysterectomy is a definitive treatment for heavy menstrual bleeding, it can cause serious complications for a minority of women. Most women may be well advised to try a less radical treatment as first-line therapy. Both LNG-IUS and conservative surgery appear to be safe, acceptable and effective.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cochrane review authors compared the effectiveness, safety and acceptability of surgery versus medical therapy for heavy menstrual bleeding."
}
]
|
query-laysum | 5594 | [
{
"role": "user",
"content": "Abstract: Background\nNeuropathic pain, which is caused by nerve damage, is increasing in prevalence worldwide. This may reflect improved diagnosis, or it may be due to increased incidence of diabetes-associated neuropathy, linked to increasing levels of obesity. Other types of neuropathic pain include post-herpetic neuralgia, trigeminal neuralgia, and neuralgia caused by chemotherapy. Antidepressant drugs are sometimes used to treat neuropathic pain; however, their analgesic efficacy is unclear. A previous Cochrane review that included all antidepressants for neuropathic pain is being replaced by new reviews of individual drugs examining chronic neuropathic pain in the first instance. Venlafaxine is a reasonably well-tolerated antidepressant and is a serotonin reuptake inhibitor and weak noradrenaline reuptake inhibitor. Although not licensed for the treatment of chronic or neuropathic pain in most countries, it is sometimes used for this indication.\nObjectives\nTo assess the analgesic efficacy of, and the adverse effects associated with the clinical use of, venlafaxine for chronic neuropathic pain in adults.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via The Cochrane Library, and MEDLINE and EMBASE via Ovid up to 14 August 2014. We reviewed the bibliographies of any randomised trials identified and review articles, contacted authors of one excluded study and searched www.clinicaltrials.gov to identify additional published or unpublished data. We also searched the meta-Register of controlled trials (mRCT) (www.controlled-trials.com/mrct) and the WHO International Clinical Trials Registry Platform (ICTRP) (apps.who.int/trialsearch/) for ongoing trials but did not find any relevant trials.\nSelection criteria\nWe included randomised, double-blind studies of at least two weeks' duration comparing venlafaxine with either placebo or another active treatment in chronic neuropathic pain in adults. All participants were aged 18 years or over and all included studies had at least 10 participants per treatment arm. We only included studies with full journal publication.\nData collection and analysis\nThree review authors independently extracted data using a standard form and assessed study quality. We intend to analyse data in three tiers of evidence as described by Hearn 2014, but did not find any first-tier evidence (ie evidence meeting current best standards, with minimal risk of bias) or second-tier evidence, that was considered at some risk of bias but with adequate participant numbers (at least 200 in the comparison). Third-tier evidence is that arising from studies with small numbers of participants; studies of short duration, studies that are likely to be of limited clinical utility due to other limitations, including selection bias and attrition bias; or a combination of these.\nMain results\nWe found six randomised, double-blind trials of at least two weeks' duration eligible for inclusion. These trials included 460 participants with neuropathic pain, with most participants having painful diabetic neuropathy. Four studies were of cross-over design and two were parallel trials. Only one trial was both parallel design and placebo-controlled. Mean age of participants ranged from 48 to 59 years. In three studies (Forssell 2004, Jia 2006 and Tasmuth 2002), only mean data were reported. Comparators included placebo, imipramine, and carbamazepine and duration of treatment ranged from two to eight weeks. 
The risk of bias was considerable overall in the review, especially due to the small size of most studies and due to attrition bias. Four of the six studies reported some positive benefit for venlafaxine. In the largest study by Rowbotham, 2004, 56% of participants receiving venlafaxine 150 to 225 mg achieved at least a 50% reduction in pain intensity versus 34% of participants in the placebo group and the number needed to treat for an additional beneficial outcome was 4.5. However, this study was subject to significant selection bias. Known adverse effects of venlafaxine, including somnolence, dizziness, and mild gastrointestinal problems, were reported in all studies but were not particularly problematic and, overall, adverse effects were equally prominent in placebo or other active comparator groups.\nAuthors' conclusions\nWe found little compelling evidence to support the use of venlafaxine in neuropathic pain. While there was some third-tier evidence of benefit, this arose from studies that had methodological limitations and considerable risk of bias. Placebo effects were notably strong in several studies. Given that effective drug treatments for neuropathic pain are in current use, there is no evidence to revise prescribing guidelines to promote the use of venlafaxine in neuropathic pain. Although venlafaxine was generally reasonably well tolerated, there was some evidence that it can precipitate fatigue, somnolence, nausea, and dizziness in a minority of people.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In detailed searches of the medical literature, we found six trials that were suitable for inclusion in our analysis, that together included 460 adults."
}
]
|
query-laysum | 5595 | [
{
"role": "user",
"content": "Abstract: Background\nIntensity of exercise is considered a key determinant of training response, however, no systematic review has investigated the effects of different levels of training intensity on exercise capacity, functional exercise capacity and health-related quality of life (HRQoL) in people with chronic obstructive pulmonary disease (COPD). As type of training (continuous or interval) may also affect training response, the effects of the type of training in COPD also require investigation.\nObjectives\nTo determine the effects of training intensity (higher versus lower) or type (continuous versus interval training) on primary outcomes in exercise capacity and secondary outcomes in symptoms and HRQoL for people with COPD.\nSearch methods\nWe searched for studies in any language from the Cochrane Airways Group Specialised Register, CENTRAL, MEDLINE, EMBASE, CINAHL, AMED, PsycINFO and PubMed. Searches were current as of June 2011.\nSelection criteria\nWe included randomised controlled trials comparing higher training intensity to lower training intensity or comparing continuous training to interval training in people with COPD. We excluded studies that compared exercise training with no exercise training.\nData collection and analysis\nWe pooled results of comparable groups of studies and calculated the treatment effect and 95% confidence intervals (CI) using a random-effects model. We made two separate comparisons of effects between: 1) higher and lower training intensity; 2) continuous and interval training. We contacted authors of missing data.\nMain results\nWe analysed three included studies (231 participants) for comparisons between higher and lower-intensity training and eight included studies (367 participants) for comparisons between continuous and interval training. Primary outcomes were outcomes at peak exercise (peak work rate, peak oxygen consumption, peak minute ventilation and lactate threshold), at isowork or isotime, endurance time on a constant work rate test and functional exercise capacity (six-minute walk distance). When comparing higher versus lower-intensity training, the pooled primary outcomes were endurance time and six-minute walk distance. There were no significant differences in endurance time improvement (mean difference (MD) 1.07 minutes; 95% CI -1.53 to 3.67) and six-minute walk distance improvement (MD 2.8 metres; 95% CI -10.1 to 15.6) following higher or lower-intensity training. However, heterogeneity of the endurance time results between studies was significant. When comparing continuous and interval training, there were no significant differences in any of the primary outcomes, except for oxygen consumption at isotime (MD 0.08; 95% CI 0.01 to 0.16) but the treatment effect was not considered clinically important. According to the GRADE system, studies were of low to moderate quality.\nAuthors' conclusions\nComparisons between the higher and lower training intensity were limited due to the small number of included studies and participants. Consequently, there are insufficient data to draw any conclusions on exercise capacity, symptoms and HRQoL for this comparison. For comparisons between continuous and interval training, both appear to be equally effective in improving exercise capacity, symptoms and HRQoL.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found three studies comparing higher with lower-intensity training. Due to a small number of studies and participants, data are limited in evaluating the effects of different levels of training intensity on exercise capacity, breathlessness and quality of life. We also found eight studies that compared continuous with interval training. There was no significant difference between continuous and interval training in improvements in exercise capacity, breathlessness and quality of life."
}
]
|
query-laysum | 5596 | [
{
"role": "user",
"content": "Abstract: Background\nAnaesthetic drugs during general anaesthesia are titrated according to sympathetic or somatic responses to surgical stimuli. It is now possible to measure depth of anaesthesia using electroencephalography (EEG). Entropy, an EEG-based monitor can be used to assess the depth of anaesthesia using a strip of electrodes applied to the forehead, and this can guide intraoperative anaesthetic drug administration.\nObjectives\nThe primary objective of this review was to assess the effectiveness of entropy monitoring in facilitating faster recovery from general anaesthesia. We also wanted to assess mortality at 24 hours, 30 days, and one year following general anaesthesia with entropy monitoring.\nThe secondary objectives were to assess the effectiveness of the entropy monitor in: preventing postoperative recall of intraoperative events (awareness) following general anaesthesia; reducing the amount of anaesthetic drugs used; reducing cost of the anaesthetic as well as in reducing time to readiness to leave the postanaesthesia care unit (PACU).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 10), MEDLINE via Ovid SP (1990 to September 2014) and EMBASE via Ovid SP (1990 to September 2014). We reran the search in CENTRAL, MEDLINE via Ovid SP and EMBASE via Ovid SP in January 2016. We added one potential new study of interest to the list of ‘Studies awaiting Classification' and we will incorporate this study into the formal review findings during the review update.\nSelection criteria\nWe included randomized controlled trials (RCTs) conducted in adults and children (aged greater than two years of age), where in one arm entropy monitoring was used for titrating anaesthesia, and in the other standard practice (increase in heart rate, mean arterial pressure, lacrimation, movement in response to noxious surgical stimuli) was used for titrating anaesthetic drug administration. We also included trials with an additional third arm, wherein another EEG monitor, the Bispectral index (BIS) monitor was used to assess anaesthetic depth.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently extracted details of trial methodology and outcome data from trials considered eligible for inclusion. All analyses were made on an intention-to-treat basis. We used a random-effect model where there was heterogeneity. For assessments of the overall quality of evidence for each outcome that included pooled data from RCTs, we downgraded evidence from 'high quality' by one level for serious (or by two for very serious) study limitations (risk of bias, indirectness of evidence, serious inconsistency, imprecision of effect or potential publication bias).\nMain results\nWe included 11 RCTs (962 participants). Eight RCTs (762 participants) were carried out on adults (18 to 80 years of age), two (128 participants) involved children (two to 16 years) and one RCT (72 participants) included patients aged 60 to 75 years. Of the 11 included studies, we judged three to be at low risk of bias, and the remaining eight RCTs at unclear or high risk of bias.\nSix RCTs (383 participants) estimated the primary outcome, time to awakening after stopping general anaesthesia, which was reduced in the entropy as compared to the standard practice group (mean difference (MD) -5.42 minutes, 95% confidence interval (CI) -8.77 to -2.08; moderate quality of evidence). 
We noted heterogeneity for this outcome; on performing subgroup analysis this was found to be due to studies that included participants undergoing major, long duration surgeries (off-pump coronary artery bypass grafting, major urological surgery). The MD for time to awakening with four studies on ambulatory procedures was -3.20 minutes (95% CI -3.94 to -2.45). No trial reported the second primary outcome, mortality at 24 hours, 30 days, and one year with the use of entropy monitoring.\nEight trials (797 participants) compared the secondary outcome, postoperative recall of intraoperative events (awareness) in the entropy and standard practice groups. Awareness was reported by only one patient in the standard practice group, making meaningful estimation of benefit of entropy monitoring difficult; moderate quality of evidence.\nAll 11 RCTs compared the amount of anaesthetic agent used between the entropy and standard practice groups. Six RCTs compared the amount of propofol, four compared the amount of sevoflurane and one the amount of isoflurane used between the groups. Analysis of three studies (166 participants) revealed that the MD of propofol consumption between the entropy group and control group was -11.56 mcg/kg/min (95% CI -24.05 to 0.92); low quality of evidence. Analysis of another two studies (156 participants) showed that the MD in sevoflurane consumption in the entropy group compared to the control group was -3.42 mL (95% CI -6.49 to -0.35); moderate quality of evidence.\nNo trial reported on the secondary outcome of the cost of general anaesthesia.\nThree trials (170 participants) estimated MD in time to readiness to leave the PACU of the entropy group as compared to the control group (MD -5.94 minutes, 95% CI -16.08 to 4.20; low quality of evidence). Heterogeneity was noted, which was due to the difference in anaesthetic technique (propofol-based general anaesthesia) in one study. The remaining two studies had used volatile-based general anaesthesia. The MD in time to readiness to leave the PACU was -4.17 minutes (95% CI -6.84 to -1.51) with these two studies.\nAuthors' conclusions\nThe evidence as regards time to awakening, recall of intraoperative awareness and reduction in inhalational anaesthetic agent use was of moderate quality. The quality of evidence of as regards reduction in intravenous anaesthetic agent (propofol) use, as well as time to readiness to leave the PACU was found to be of low quality. As the data are limited, further studies consisting of more participants will be required for ascertaining benefits of entropy monitoring.\nFurther studies are needed to assess the effect of entropy monitoring on focal issues such as short-term and long-term mortality, as well as cost of general anaesthesia.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence for assessing reduction in time to awakening, recall of intraoperative events and amount of inhalation of anaesthetic agents used is of moderate quality. The quality of evidence as regards intravenous anaesthetic agent used and length of stay in the PACU is of low quality."
}
]
|
query-laysum | 5597 | [
{
"role": "user",
"content": "Abstract: Background\nAcute otitis media (AOM) is one of the most common childhood illnesses. While many children experience sporadic AOM episodes, an important group suffer from recurrent AOM (rAOM), defined as three or more episodes in six months, or four or more in one year. In this subset of children AOM poses a true burden through frequent episodes of ear pain, general illness, sleepless nights and time lost from nursery or school. Grommets, also called ventilation or tympanostomy tubes, can be offered for rAOM.\nObjectives\nTo assess the benefits and harms of bilateral grommet insertion with or without concurrent adenoidectomy in children with rAOM.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; CENTRAL; MEDLINE; EMBASE; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 4 December 2017.\nSelection criteria\nRandomised controlled trials (RCTs) comparing bilateral grommet insertion with or without concurrent adenoidectomy and no ear surgery in children up to age 16 years with rAOM. We planned to apply two main scenarios: grommets as a single surgical intervention and grommets as concurrent treatment with adenoidectomy (i.e. children in both the intervention and comparator groups underwent adenoidectomy). The comparators included active monitoring, antibiotic prophylaxis and placebo medication.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Primary outcomes were: proportion of children who have no AOM recurrences at three to six months follow-up (intermediate-term) and persistent tympanic membrane perforation (significant adverse event). Secondary outcomes were: proportion of children who have no AOM recurrences at six to 12 months follow-up (long-term); total number of AOM recurrences, disease-specific and generic health-related quality of life, presence of middle ear effusion and other adverse events at short-term, intermediate-term and long-term follow-up. We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nFive RCTs (805 children) with unclear or high risk of bias were included. All studies were conducted prior to the introduction of pneumococcal vaccination in the countries' national immunisation programmes. 
In none of the trials was adenoidectomy performed concurrently in both groups.\nGrommets versus active monitoring\nGrommets were more effective than active monitoring in terms of:\n- proportion of children who had no AOM recurrence at six months (one study, 95 children, 46% versus 5%; risk ratio (RR) 9.49, 95% confidence interval (CI) 2.38 to 37.80, number needed to treat to benefit (NNTB) 3; low-quality evidence);\n- proportion of children who had no AOM recurrence at 12 months (one study, 200 children, 48% versus 34%; RR 1.41, 95% CI 1.00 to 1.99, NNTB 8; low-quality evidence);\n- number of AOM recurrences at six months (one study, 95 children, mean number of AOM recurrences per child: 0.67 versus 2.17, mean difference (MD) -1.50, 95% CI -1.99 to -1.01; low-quality evidence);\n- number of AOM recurrences at 12 months (one study, 200 children, one-year AOM incidence rate: 1.15 versus 1.70, incidence rate difference -0.55, 95% -0.17 to -0.93; low-quality evidence).\nChildren receiving grommets did not have better disease-specific health-related quality of life (Otitis Media-6 questionnaire) at four (one study, 85 children) or 12 months (one study, 81 children) than those managed by active monitoring (low-quality evidence).\nOne study reported no persistent tympanic membrane perforations among 54 children receiving grommets (low-quality evidence).\nGrommets versus antibiotic prophylaxis\nIt is uncertain whether or not grommets are more effective than antibiotic prophylaxis in terms of:\n- proportion of children who had no AOM recurrence at six months (two studies, 96 children, 60% versus 35%; RR 1.68, 95% CI 1.07 to 2.65, I2 = 0%, fixed-effect model, NNTB 5; very low-quality evidence);\n- number of AOM recurrences at six months (one study, 43 children, mean number of AOM recurrences per child: 0.86 versus 1.38, MD -0.52, 95% CI -1.37 to 0.33; very low-quality evidence).\nGrommets versus placebo medication\nGrommets were more effective than placebo medication in terms of:\n- proportion of children who had no AOM recurrence at six months (one study, 42 children, 55% versus 15%; RR 3.64, 95% CI 1.20 to 11.04, NNTB 3; very low-quality evidence);\n- number of AOM recurrences at six months (one study, 42 children, mean number of AOM recurrences per child: 0.86 versus 2.0, MD -1.14, 95% CI -2.06 to -0.22; very low-quality evidence).\nOne study reported persistent tympanic membrane perforations in 3 of 76 children (4%) receiving grommets (low-quality evidence).\nSubgroup analysis\nThere were insufficient data to determine whether presence of middle ear effusion at randomisation, type of grommet or age modified the effectiveness of grommets.\nAuthors' conclusions\nCurrent evidence on the effectiveness of grommets in children with rAOM is limited to five RCTs with unclear or high risk of bias, which were conducted prior to the introduction of pneumococcal vaccination. Low to very low-quality evidence suggests that children receiving grommets are less likely to have AOM recurrences compared to those managed by active monitoring and placebo medication, but the magnitude of the effect is modest with around one fewer episode at six months and a less noticeable effect by 12 months. The low to very low quality of the evidence means that these numbers need to be interpreted with caution since the true effects may be substantially different. It is uncertain whether or not grommets are more effective than antibiotic prophylaxis. 
The risk of persistent tympanic membrane perforation after grommet insertion was low.\nWidespread use of pneumococcal vaccination has changed the bacteriology and epidemiology of AOM, and how this might impact the results of prior trials is unknown. New and high-quality RCTs of grommet insertion in children with rAOM are therefore needed. These trials should not only focus on the frequency of AOM recurrences, but also collect data on the severity of AOM episodes, antibiotic consumption and adverse effects of both surgery and antibiotics. This is particularly important since grommets may reduce the severity of AOM recurrences and allow for topical rather than oral antibiotic treatment.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review includes evidence up to 4 December 2017. We included five randomised controlled trials with a total of 805 children with recurring acute middle ear infections. All studies were performed before the introduction of vaccination against pneumococcus, a bacterium that commonly causes ear infections. Surgical removal of the adenoids was not performed in both groups in any of the trials."
}
]
|
query-laysum | 5598 | [
{
"role": "user",
"content": "Abstract: Background\nThe consequences of influenza in the elderly (those age 65 years or older) are complications, hospitalisations, and death. The primary goal of influenza vaccination in the elderly is to reduce the risk of death among people who are most vulnerable. This is an update of a review published in 2010. Future updates of this review will be made only when new trials or vaccines become available. Observational data included in previous versions of the review have been retained for historical reasons but have not been updated because of their lack of influence on the review conclusions.\nObjectives\nTo assess the effects (efficacy, effectiveness, and harm) of vaccines against influenza in the elderly.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2016, Issue 11), which includes the Cochrane Acute Respiratory Infections Group's Specialised Register; MEDLINE (1966 to 31 December 2016); Embase (1974 to 31 December 2016); Web of Science (1974 to 31 December 2016); CINAHL (1981 to 31 December 2016); LILACS (1982 to 31 December 2016); WHO International Clinical Trials Registry Platform (ICTRP; 1 July 2017); and ClinicalTrials.gov (1 July 2017).\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs assessing efficacy against influenza (laboratory-confirmed cases) or effectiveness against influenza-like illness (ILI) or safety. We considered any influenza vaccine given independently, in any dose, preparation, or time schedule, compared with placebo or with no intervention. Previous versions of this review included 67 cohort and case-control studies. The searches for these trial designs are no longer updated.\nData collection and analysis\nReview authors independently assessed risk of bias and extracted data. We rated the certainty of evidence with GRADE for the key outcomes of influenza, ILI, complications (hospitalisation, pneumonia), and adverse events. We have presented aggregate control group risks to illustrate the effect in absolute terms. We used them as the basis for calculating the number needed to vaccinate to prevent one case of each event for influenza and ILI outcomes.\nMain results\nWe identified eight RCTs (over 5000 participants), of which four assessed harms. The studies were conducted in community and residential care settings in Europe and the USA between 1965 and 2000. Risk of bias reduced our certainty in the findings for influenza and ILI, but not for other outcomes.\nOlder adults receiving the influenza vaccine may experience less influenza over a single season compared with placebo, from 6% to 2.4% (risk ratio (RR) 0.42, 95% confidence interval (CI) 0.27 to 0.66; low-certainty evidence). We rated the evidence as low certainty due to uncertainty over how influenza was diagnosed. Older adults probably experience less ILI compared with those who do not receive a vaccination over the course of a single influenza season (3.5% versus 6%; RR 0.59, 95% CI 0.47 to 0.73; moderate-certainty evidence). These results indicate that 30 people would need to be vaccinated to prevent one person experiencing influenza, and 42 would need to be vaccinated to prevent one person having an ILI.\nThe study providing data for mortality and pneumonia was underpowered to detect differences in these outcomes. 
There were 3 deaths from 522 participants in the vaccination arm and 1 death from 177 participants in the placebo arm, providing very low-certainty evidence for the effect on mortality (RR 1.02, 95% CI 0.11 to 9.72). No cases of pneumonia occurred in one study that reported this outcome (very low-certainty evidence). No data on hospitalisations were reported. Confidence intervals around the effect of vaccines on fever and nausea were wide, and we do not have enough information about these harms in older people (fever: 1.6% with placebo compared with 2.5% after vaccination (RR 1.57, 95% CI 0.92 to 2.71; moderate-certainty evidence)); nausea (2.4% with placebo compared with 4.2% after vaccination (RR 1.75, 95% CI 0.74 to 4.12; low-certainty evidence)).\nAuthors' conclusions\nOlder adults receiving the influenza vaccine may have a lower risk of influenza (from 6% to 2.4%), and probably have a lower risk of ILI compared with those who do not receive a vaccination over the course of a single influenza season (from 6% to 3.5%). We are uncertain how big a difference these vaccines will make across different seasons. Very few deaths occurred, and no data on hospitalisation were reported. No cases of pneumonia occurred in one study that reported this outcome. We do not have enough information to assess harms relating to fever and nausea in this population.\nThe evidence for a lower risk of influenza and ILI with vaccination is limited by biases in the design or conduct of the studies. Lack of detail regarding the methods used to confirm the diagnosis of influenza limits the applicability of this result. The available evidence relating to complications is of poor quality, insufficient, or old and provides no clear guidance for public health regarding the safety, efficacy, or effectiveness of influenza vaccines for people aged 65 years or older. Society should invest in research on a new generation of influenza vaccines for the elderly.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Inactivated vaccines can reduce the proportion of elderly who have influenza and ILI. Data on deaths were sparse, and we found no data on hospitalisations due to complications. However, variation in the results of studies means we cannot be certain about how big a difference these vaccines will make across different seasons."
}
]
|
query-laysum | 5599 | [
{
"role": "user",
"content": "Abstract: Background\nWrist fractures, involving the distal radius, are the most common fractures in children. Most are buckle fractures, which are stable fractures, unlike greenstick and other usually displaced fractures. There is considerable variation in practice, such as the extent of immobilisation for buckle fractures and use of surgery for seriously displaced fractures.\nObjectives\nTo assess the effects (benefits and harms) of interventions for common distal radius fractures in children, including skeletally immature adolescents.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group's Specialised Register, the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, trial registries and reference lists to May 2018.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing interventions for treating distal radius fractures in children. We sought data on physical function, treatment failure, adverse events, time to return to normal activities (recovery time), wrist pain, and child (and parent) satisfaction.\nData collection and analysis\nAt least two review authors independently performed study screening and selection, 'Risk of bias' assessment and data extraction. We pooled data where appropriate and used GRADE for assessing the quality of evidence for each outcome.\nMain results\nOf the 30 included studies, 21 were RCTs, seven were quasi-RCTs and two did not describe their randomisation method. Overall, 2930 children were recruited. Typically, trials included more male children and reported mean ages between 8 and 10 years. Eight studies recruited buckle fractures, five recruited buckle and other stable fractures, three recruited minimally displaced fractures and 14 recruited displaced fractures, typically requiring closed reduction. All studies were at high risk of bias, mainly reflecting lack of blinding. The studies made 14 comparisons. Below we consider five prespecified comparisons:\nRemovable splint versus below-elbow cast for predominantly buckle fractures (6 studies, 695 children)\nOne study (66 children) reported similar Modified Activities Scale for Kids - Performance scores (0 to 100; no disability) at four weeks (median scores: splint 99.04; cast 99.11); low-quality evidence. Thirteen children needed a change or reapplication of device (splint 5/225; cast 8/219; 4 studies); very low-quality evidence. One study (87 children) reported no refractures at six months. One study (50 children) found no between-group difference in pain during treatment; very low-quality evidence. Evidence was absent (recovery time), insufficient (children with minor complications) or contradictory (child or parent satisfaction). Two studies estimated lower healthcare costs for removable splints.\nSoft or elasticated bandage versus below-elbow cast for buckle or similar fractures (4 studies, 273 children)\nOne study (53 children) reported more children had no or only limited disability at four weeks in the bandage group; very low-quality evidence. Eight children changed device or extended immobilisation for delayed union (bandage 5/90; cast 3/91; 3 studies); very low-quality evidence. Two studies (139 children) reported no serious adverse events at four weeks. Evidence was absent, insufficient or contradictory for recovery time, wrist pain, children with minor complications, and child and parent satisfaction. 
More bandage-group participants found their treatment convenient (39 children).\nRemoval of casts at home by parents versus at the hospital fracture clinic by clinicians (2 studies, 404 children, mainly buckle fractures)\nOne study (233 children) found full restoration of physical function at four weeks; low-quality evidence. There were five treatment changes (home 4/197; hospital 1/200; 2 studies; very low-quality evidence). One study found no serious adverse effects at six months (288 children). Recovery time and number of children with minor complications were not reported. There was no evidence of a difference in pain at four weeks (233 children); low-quality evidence. One study (80 children) found greater parental satisfaction in the home group; low-quality evidence. One UK study found lower healthcare costs for home removal.\nBelow-elbow versus above-elbow casts for displaced or unstable both-bone fractures (4 studies, 399 children)\nShort-term physical function data were unavailable but very low-quality evidence indicated less dependency when using below-elbow casts. One study (66 children with minimally displaced both-bone fractures) found little difference in ABILHAND-Kids scores (0 to 42; no problems) (mean scores: below-elbow 40.7; above-elbow 41.8); very low-quality evidence. Overall treatment failure data are unavailable, but nine of the 11 remanipulations or secondary reductions (366 children, 4 studies) were in the above-elbow group; very low-quality evidence. There was no refracture or compartment syndrome at six months (215 children; 2 studies). Recovery time and overall numbers of children with minor complications were not reported. There was little difference in requiring physiotherapy for stiffness (179 children, 2 studies); very low-quality evidence. One study (85 children) found less pain at one week for below-elbow casts; low-quality evidence. One study found treatment with an above-elbow cast cost three times more in Nepal.\nSurgical fixation with percutaneous wiring and cast immobilisation versus cast immobilisation alone after closed reduction of displaced fractures (5 studies, 323 children)\nWhere reported, above-elbow casts were used. Short-term functional outcome data were unavailable. One study (123 children) reported similar ABILHAND-Kids scores indicating normal physical function at six months (mean scores: surgery 41.9; cast only 41.4); low-quality evidence. There were fewer treatment failures, defined as early or problematic removal of wires or remanipulation for early loss in position, after surgery (surgery 20/124; cast only 41/129; 4 studies; very low-quality evidence). Similarly, there were fewer serious adverse events after surgery (surgery 28/124; cast only 43/129; 4 studies; very low-quality evidence). Recovery time, wrist pain, and satisfaction were not reported. There was lower referral for physiotherapy for stiffness after surgery (1 study); very low-quality evidence. One USA study found similar treatment costs in both groups.\nAuthors' conclusions\nWhere available, the quality of the RCT-based evidence on interventions for treating wrist fractures in children is low or very low. However, there is reassuring evidence of a full return to previous function with no serious adverse events, including refracture, for correctly-diagnosed buckle fractures, whatever the treatment used. The review findings are consistent with the move away from cast immobilisation for these injuries. 
High-quality evidence is needed to address key treatment uncertainties; notably, some priority topics are already being tested in ongoing multicentre trials, such as FORCE.\n\nGiven the provided abstract, please respond to the following query: \"Results of the search\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases up to May 2018 and included 30 studies with 2930 children. Studies included more male children and reported mean ages between eight and 10 years. We summarise the results from five key comparisons."
}
]
|
query-laysum | 5600 | [
{
"role": "user",
"content": "Abstract: Background\nBronchiectasis is a long term respiratory condition with an increasing rate of diagnosis. It is associated with persistent symptoms, repeated infective exacerbations, and reduced quality of life, imposing a burden on individuals and healthcare systems. The main aims of therapeutic management are to reduce exacerbations and improve quality of life. Self-management interventions are potentially important for empowering people with bronchiectasis to manage their condition more effectively and to seek care in a timely manner. Self-management interventions are beneficial in the management of other airways diseases such as asthma and COPD (chronic obstructive pulmonary disease) and have been identified as a research priority for bronchiectasis.\nObjectives\nTo assess the efficacy, cost-effectiveness and adverse effects of self-management interventions for adults and children with non-cystic fibrosis bronchiectasis.\nSearch methods\nWe searched the Cochrane Airways Specialised Register of trials, clinical trials registers, reference lists of included studies and review articles, and relevant manufacturers’ websites up to 13 December 2017.\nSelection criteria\nWe included all randomised controlled trials of any duration that included adults or children with a diagnosis of non-cystic fibrosis bronchiectasis assessing self-management interventions delivered in any form. Self-management interventions included at least two of the following elements: patient education, airway clearance techniques, adherence to medication, exercise (including pulmonary rehabilitation) and action plans.\nData collection and analysis\nTwo review authors independently screened searches, extracted study characteristics and outcome data and assessed risk of bias for each included study. Primary outcomes were, health-related quality of life, exacerbation frequency and serious adverse events. Secondary outcomes were the number of participants admitted to hospital on at least one occasion, lung function, symptoms, self-efficacy and economic costs. We used a random effects model for analyses and standard Cochrane methods throughout.\nMain results\nTwo studies with a total of 84 participants were included: a 12-month RCT of early rehabilitation in adults of mean age 72 years conducted in two centres in England (UK) and a six-month proof-of-concept RCT of an expert patient programme (EPP) in adults of mean age 60 years in a single regional respiratory centre in Northern Ireland (UK). The EPP was delivered in group format once a week for eight weeks using standardised EPP materials plus disease-specific education including airway clearance techniques, dealing with symptoms, exacerbations, health promotion and available support. We did not find any studies that included children. Data aggregation was not possible and findings are reported narratively in the review.\nFor the primary outcomes, both studies reported health-related quality of life, as measured by the St George's Respiratory Questionnaire (SGRQ), but there was no clear evidence of benefit. In one study, the mean SGRQ total scores were not significantly different at 6 weeks', 3 months' and 12 months' follow-up (12 months mean difference (MD) -10.27, 95% confidence interval (CI) -45.15 to 24.61). In the second study there were no significant differences in SGRQ. Total scores were not significantly different between groups (six months, MD 3.20, 95% CI -6.64 to 13.04). We judged the evidence for this outcome as low or very low. 
Neither of the included studies reported data on exacerbations requiring antibiotics. For serious adverse events, one study reported more deaths in the intervention group compared to the control group, (intervention: 4 of 8, control: 2 of 12), though interpretation is limited by the low event rate and the small number of participants in each group.\nFor our secondary outcomes, there was no evidence of benefit in terms of frequency of hospital admissions or FEV1 L, based on very low-quality evidence. One study reported self-efficacy using the Chronic Disease Self-Efficacy scale, which comprises 10 components. All scales showed significant benefit from the intervention but effects were only sustained to study endpoint on the Managing Depression scale. Further details are reported in the main review. Based on overall study quality, we judged this evidence as low quality. Neither study reported data on respiratory symptoms, economic costs or adverse events.\nAuthors' conclusions\nThere is insufficient evidence to determine whether self-management interventions benefit people with bronchiectasis. In the absence of high-quality evidence it is advisable that practitioners adhere to current international guidelines that advocate self-management for people with bronchiectasis.\nFuture studies should aim to clearly define and justify the specific nature of self-management, measure clinically important outcomes and include children as well as adults.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We conducted a search on 13 December 2017 and found just two UK studies that included 84 participants, comparing a self-management approach with normal care for adults with bronchiectasis. One study looked at the impact of an expert patient self-management programme and the other, involving just a small number of participants with bronchiectasis, looked at self-management in combination with exercises to improve lung function. Neither study included children."
}
]
|
query-laysum | 5601 | [
{
"role": "user",
"content": "Abstract: Background\nMuscle cramps can occur anywhere and for many reasons. Quinine has been used to treat cramps of all causes. However, controversy continues about its efficacy and safety. This review was first published in 2010 and searches were updated in 2014.\nObjectives\nTo assess the efficacy and safety of quinine-based agents in treating muscle cramps.\nSearch methods\nOn 27 October 2014 we searched the Cochrane Neuromuscular Disease Group Specialized Register, CENTRAL, MEDLINE and EMBASE. We searched reference lists of articles up to 2014. We also searched for ongoing trials in November 2014.\nSelection criteria\nRandomised controlled trials of people of all ages with muscle cramps in any location and of any cause, treated with quinine or its derivatives.\nData collection and analysis\nThree review authors independently selected trials for inclusion, assessed risk of bias and extracted data. We contacted study authors for additional information. For comparisons including more than one trial, we assessed the quality of the evidence using Grading of Recommendations Assessment, Development and Evaluation (GRADE).\nMain results\nWe identified 23 trials with a total of 1586 participants. Fifty-eight per cent of these participants were from five unpublished studies. Quinine was compared to placebo (20 trials, n = 1140), vitamin E (four trials, n = 543), a quinine-vitamin E combination (three trials, n = 510), a quinine-theophylline combination (one trial, n = 77), and xylocaine injections into the gastrocnemius muscle (one trial, n = 24). The most commonly used quinine dosage was 300 mg/day (range 200 to 500 mg). We found no new trials for inclusion when searches were updated in 2014.\nThe risk of bias in the trials varied considerably. All 23 trials claimed to be randomised, but only a minority described randomisation and allocation concealment adequately.\nCompared to placebo, quinine significantly reduced cramp number over two weeks by 28%, cramp intensity by 10%, and cramp days by 20%. Cramp duration was not significantly affected.\nA significantly greater number of people suffered minor adverse events on quinine than placebo (risk difference (RD) 3%, 95% confidence interval (CI) 0% to 6%), mainly gastrointestinal symptoms. Overdoses of quinine have been reported elsewhere to cause potentially fatal adverse effects, but in the included trials there was no significant difference in major adverse events compared with placebo (RD 0%, 95% CI -1% to 2%). One participant suffered from thrombocytopenia (0.12% risk) on quinine.\nA quinine-vitamin E combination, vitamin E alone, and xylocaine injections into gastrocnemius were not significantly different to quinine across all outcomes, including adverse effects. Based on a single trial comparison, quinine alone was significantly less effective than a quinine-theophylline combination but with no significant differences in adverse events.\nAuthors' conclusions\nThere is low quality evidence that quinine (200 mg to 500 mg daily) significantly reduces cramp number and cramp days and moderate quality evidence that quinine reduces cramp intensity. 
There is moderate quality evidence that with use up to 60 days, the incidence of serious adverse events is not significantly greater than for placebo in the identified trials, but because serious adverse events can be rarely fatal, in some countries prescription of quinine is severely restricted.\nEvidence from single trials suggests that theophylline combined with quinine improves cramps more than quinine alone, and the effects of xylocaine injections into gastrocnemius are not significantly different to quinine across all outcomes. Low or moderate quality evidence shows no significant difference between quinine and vitamin E or quinine and quinine-vitamin E mixture. Further research into these alternatives, as well other pharmacological and non-pharmacological treatments, is thus warranted.\nThere is no evidence to judge optimal dosage or duration of quinine treatment. Further studies using different dosages and measurement of serum quinine levels will allow a therapeutic range to be defined for muscle cramp. Because serious adverse events are not common, large population studies are required to more accurately inform incidence. Longer lengths of follow-up in future trials will help determine the duration of action following cessation of quinine as well as long-term adverse events. The search for new therapies, pharmacological and nonpharmacological, should continue and further trials should compare vitamin E, quinine-vitamin E combination, and quinine-theophylline mixture with quinine.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The risk of bias in the included trials varied considerably. All 23 trials claimed to be randomised, but many failed to clearly describe how participants were assigned to treatments. There is low quality evidence that quinine (200 mg to 500 mg daily) significantly reduces cramp number and cramp days and moderate quality evidence that quinine reduces cramp intensity. There is moderate quality evidence that there are more minor adverse events with quinine compared to placebo but no increase in major adverse events. However, there are reliable reports from other sources that an overdose of quinine can cause serious harm including death.\nLow or moderate quality evidence shows there is no significant difference when comparing quinine to vitamin E or to a quinine-vitamin E mixture. There is evidence from one trial that theophylline combined with quinine improves cramps more than quinine alone. In a single trial there was no significant difference when comparing quinine to xylocaine injections.\nMore research is needed to clarify the best dose and duration of treatment, as well as alternatives to quinine for cramps.\nThe evidence is current to October 2014."
}
]
|
query-laysum | 5602 | [
{
"role": "user",
"content": "Abstract: Background\nIn the treatment of children with mild persistent asthma, low-dose inhaled corticosteroids (ICS) are recommended as the preferred monotherapy (referred to as step 2 of therapy). In children with inadequate asthma control on low doses of ICS (step 2), asthma management guidelines recommend adding an anti-leukotriene agent to existing ICS as one of three therapeutic options to intensify therapy (step 3).\nObjectives\nTo compare the efficacy and safety of the combination of anti-leukotriene agents and ICS to the use of the same, an increased, or a tapering dose of ICS in children and adolescents with persistent asthma who remain symptomatic despite the use of maintenance ICS. In addition, we wished to determine the characteristics of people or treatments, if any, that influenced the magnitude of response attributable to the addition of anti-leukotrienes.\nSearch methods\nWe identified trials from the Cochrane Airways Group Specialised Register of Trials (CAGR), which were derived from systematic searches of bibliographic databases including the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, AMED, and CINAHL; and the handsearching of respiratory journals and meeting abstracts, as well as the www.clinicaltrials.gov website. The search was conducted until January 2013.\nSelection criteria\nWe considered for inclusion randomised controlled trials (RCTs) conducted in children and adolescents, aged one to 18 years, with asthma, who remained symptomatic despite the use of a stable maintenance dose of ICS and in whom anti-leukotrienes were added to the ICS if they were compared to the same, an increased, or a tapering dose of ICS for at least four weeks.\nData collection and analysis\nWe used standard methods expected by The Cochrane Collaboration.\nMain results\nFive paediatric (parallel group or cross-over) trials met the inclusion criteria. We considered two (40%) trials to be at a low risk of bias. Four published trials, representing 559 children (aged ≥ six years) and adolescents with mild to moderate asthma, contributed data to the review. No trial enrolled preschoolers. All trials used montelukast as the anti-leukotriene agent administered for between four and 16 weeks. Three trials evaluated the combination of anti-leukotrienes and ICS compared to the same dose of ICS alone (step 3 versus step 2). No statistically significant group difference was observed in the only trial reporting participants with exacerbations requiring oral corticosteroids over four weeks (N = 268 participants; risk ratio (RR) 0.80, 95% confidence interval (CI) 0.34 to 1.91). There was also no statistically significant difference in percentage change in FEV₁ (forced expiratory volume in 1 second) with mean difference (MD) 1.3 (95% CI -0.09 to 2.69) in this trial, but a significant group difference was observed in the morning (AM) and evening (PM) peak expiratory flow rates (PEFR): N = 218 participants; MD 9.70 L/min (95% CI 1.27 to 18.13) and MD 10.70 (95% CI 2.41 to 18.99), respectively. One trial compared the combination of anti-leukotrienes and ICS to a higher-dose of ICS (step 3 versus step 3). No significant group difference was observed in this trial for participants with exacerbations requiring rescue oral corticosteroids over 16 weeks (N = 182 participants; RR 0.82, 95% CI 0.54 to 1.25), nor was there any significant difference in exacerbations requiring hospitalisation. 
There was no statistically significant group difference in withdrawals overall or because of any cause with either protocol. No trial explored the impact of adding anti-leukotrienes as a means to taper the dose of ICS.\nAuthors' conclusions\nThe addition of anti-leukotrienes to ICS is not associated with a statistically significant reduction in the need for rescue oral corticosteroids or hospital admission compared to the same or an increased dose of ICS in children and adolescents with mild to moderate asthma. Although anti-leukotrienes have been licensed for use in children for over 10 years, the paucity of paediatric trials, the absence of data on preschoolers, and the variability in the reporting of relevant clinical outcomes considerably limit firm conclusions. At present, there is no firm evidence to support the efficacy and safety of anti-leukotrienes as add-on therapy to ICS as a step-3 option in the therapeutic arsenal for children with uncontrolled asthma symptoms on low-dose ICS.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review is based on a small number of identified trials conducted in children with asthma; none were conducted in preschoolers. As a single study of moderate duration reported all measures of efficacy and most measures of safety, our confidence in the quality of evidence is low. Other important measures of asthma control were either not measured or reported in different formats, so they could not be pooled. In other words, there are too few paediatric trials to conclude firmly whether either treatment is superior to the other."
}
]
|
query-laysum | 5603 | [
{
"role": "user",
"content": "Abstract: Background\nTranexamic acid reduces haemorrhage through its antifibrinolytic effects. In a previous version of the present review, we found that tranexamic acid may reduce mortality. This review includes updated searches and new trials.\nObjectives\nTo assess the effects of tranexamic acid versus no intervention, placebo or other antiulcer drugs for upper gastrointestinal bleeding.\nSearch methods\nWe updated the review by performing electronic database searches (Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Science Citation Index) and manual searches in July 2014.\nSelection criteria\nRandomised controlled trials, irrespective of language or publication status.\nData collection and analysis\nWe used the standard methodological procedures of the The Cochrane Collaboration. All-cause mortality, bleeding and adverse events were the primary outcome measures. We performed fixed-effect and random-effects model meta-analyses and presented results as risk ratios (RRs) with 95% confidence intervals (CIs) and used I² as a measure of between-trial heterogeneity. We analysed tranexamic acid versus placebo or no intervention and tranexamic acid versus antiulcer drugs separately. To analyse sources of heterogeneity and robustness of the overall results, we performed subgroup, sensitivity and sequential analyses.\nMain results\nWe included eight randomised controlled trials on tranexamic acid for upper gastrointestinal bleeding. Additionally, we identified one large ongoing pragmatic randomised controlled trial from which data are not yet available. Control groups were randomly assigned to placebo (seven trials) or no intervention (one trial). Two trials also included a control group randomly assigned to antiulcer drugs (lansoprazole or cimetidine). The included studies were published from 1973 to 2011. The number of participants randomly assigned ranged from 47 to 216 (median 204). All trials reported mortality. In total, 42 of 851 participants randomly assigned to tranexamic acid and 71 of 850 in the control group died (RR 0.60, 95% CI 0.42 to 0.87; P value 0.007; I² = 0%). The analysis was not confirmed when all participants in the intervention group with missing outcome data were included as treatment failures, or when the analysis was limited to trials with low risk of attrition bias. Rebleeding was diagnosed for 117 of 826 participants in the tranexamic acid group and for 146 of 825 participants in the control group (RR 0.80, 95% CI 0.64 to 1.00; P value 0.07; I² = 49%). We were able to evaluate the risk of serious adverse events on the basis of only four trials. Our analyses showed 'no evidence of a difference between tranexamic acid and control interventions regarding the risk of thromboembolic events.' Tranexamic acid appeared to reduce the risk of surgery in a fixed-effect meta-analysis (RR 0.73, 95% CI 0.56 to 0.95), but this result was no longer statistically significant in a random-effects meta-analysis (RR 0.61, 95% CI 0.35 to 1.04; P value 0.07). 
No difference was apparent between tranexamic acid and placebo in the assessment of transfusion (RR 1.02, 95% CI 0.94 to 1.11; I² = 0%), and meta-analyses that compared tranexamic acid versus antiulcer drugs did not identify beneficial or detrimental effects of tranexamic acid for any of the outcomes assessed.\nAuthors' conclusions\nThis review found that tranexamic acid appears to have a beneficial effect on mortality, but a high dropout rate in some trials means that we cannot be sure of this until the findings of additional research are published. At the time of this update in 2014, one large study (8000 participants) is in progress, so this review will be much more informative in a few years. Further examination of tranexamic acid would require inclusion of high-quality randomised controlled trials. Timing of randomisation is essential to avoid attrition bias and to limit the number of withdrawals. Future trials may use a pragmatic design and should include all participants with suspected bleeding or with endoscopically verified bleeding, as well as a tranexamic placebo arm and co-administration of pump inhibitors and endoscopic therapy. Assessment of outcome measures in such studies should be clearly defined. Endoscopic examination with appropriate control of severe bleeding should be performed, as should endoscopic verification of clinically significant rebleeding. In addition, clinical measures of rebleeding should be included. Other important outcome measures include mortality (30-day or in-hospital), need for emergency surgery or blood transfusion and adverse events (major or minor).\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tranexamic acid is an antifibrinolytic agent. This drug reduces the breakdown of fibrin; fibrin provides the framework for the formation of a blood clot, which is needed to stop the bleeding. Clinical trials suggest that tranexamic acid could reduce mortality in upper gastrointestinal bleeding."
}
]
|
query-laysum | 5604 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is one in a series of Cochrane Reviews of interventions for shoulder disorders.\nObjectives\nTo synthesise the available evidence regarding the benefits and harms of rotator cuff repair with or without subacromial decompression in the treatment of rotator cuff tears of the shoulder.\nSearch methods\nWe searched the CENTRAL, MEDLINE, Embase, Clinicaltrials.gov and WHO ICRTP registry unrestricted by date or language until 8 January 2019.\nSelection criteria\nRandomised controlled trials (RCTs) including adults with full-thickness rotator cuff tears and assessing the effect of rotator cuff repair compared to placebo, no treatment, or any other treatment were included. As there were no trials comparing surgery with placebo, the primary comparison was rotator cuff repair with or without subacromial decompression versus non-operative treatment (exercises with or without glucocorticoid injection). Other comparisons were rotator cuff repair and acromioplasty versus rotator cuff repair alone, and rotator cuff repair and subacromial decompression versus subacromial decompression alone. Major outcomes were mean pain, shoulder function, quality of life, participant-rated global assessment of treatment success, adverse events and serious adverse events. The primary endpoint for this review was one year.\nData collection and analysis\nWe used standard methodologic procedures expected by Cochrane.\nMain results\nWe included nine trials with 1007 participants. Three trials compared rotator cuff repair with subacromial decompression followed by exercises with exercise alone. These trials included 339 participants with full-thickness rotator cuff tears diagnosed with magnetic resonance imaging (MRI) or ultrasound examination. One of the three trials also provided up to three glucocorticoid injections in the exercise group. All surgery groups received tendon repair with subacromial decompression and the postoperative exercises were similar to the exercises provided for the non-operative groups. Five trials (526 participants) compared repair with acromioplasty versus repair alone; and one trial (142 participants) compared repair with subacromial decompression versus subacromial decompression alone.\nThe mean age of trial participants ranged between 56 and 68 years, and females comprised 29% to 56% of the participants. Symptom duration varied from a mean of 10 months up to 28 months. Two trials excluded tears with traumatic onset of symptoms. One trial defined a minimum duration of symptoms of six months and required a trial of conservative therapy before inclusion. The trials included mainly repairable full-thickness supraspinatus tears, six trials specifically excluded tears involving the subscapularis tendon.\nAll trials were at risk of bias for several criteria, most notably due to lack of participant and personnel blinding, but also for other reasons such as unclearly reported methods of random sequence generation or allocation concealment (six trials), incomplete outcome data (three trials), selective reporting (six trials), and other biases (six trials).\nOur main comparison was rotator cuff repair with or without subacromial decompression versus non-operative treatment. 
We identified three trials for this comparison, that compared rotator cuff repair with subacromial decompression followed by exercises with exercise alone with or without glucocorticoid injections, and results are reported here for the 12 month follow up.\nAt one year, moderate-certainty evidence (downgraded for bias) from 3 trials with 258 participants indicates that surgery probably provides little or no improvement in pain; mean pain (range 0 to 10, higher scores indicate more pain) was 1.6 points with non-operative treatment and 0.87 points better (0.43 better to 1.30 better) with surgery. Mean function (zero to 100, higher score indicating better outcome) was 72 points with non-operative treatment and 6 points better (2.43 better to 9.54 better) with surgery (3 trials; 269 participants), low-certainty evidence (downgraded for bias and imprecision). Participant-rated global success rate was 48/55 after non-operative treatment and 52/55 after surgery corresponding to risk ratio (RR) 1.08, 95% confidence interval (CI) 0.96 to 1.22; low-certainty evidence (downgraded for bias and imprecision). Health-related quality of life was 57.5 points (SF-36 mental component score, 0 to 100, higher score indicating better quality of life) with non-operative treatment and 1.3 points worse (4.5 worse to 1.9 better) with surgery (1 trial; 103 participants), low-certainty evidence (downgraded for bias and imprecision).\nWe were unable to estimate the risk of adverse events and serious adverse events as only one event was reported across the trials (very low-certainty evidence; downgraded once due to bias and twice due to very serious imprecision).\nAuthors' conclusions\nAt the moment, we are uncertain whether rotator cuff repair surgery provides clinically meaningful benefits to people with symptomatic tears; it may provide little or no clinically important benefits with respect to pain, function, overall quality of life or participant-rated global assessment of treatment success when compared with non-operative treatment. Surgery may not improve shoulder pain or function compared with exercises, with or without glucocorticoid injections.\nThe trials included have methodology concerns and none included a placebo control. They included participants with mostly small degenerative tears involving the supraspinatus tendon and the conclusions of this review may not be applicable to traumatic tears, large tears involving the subscapularis tendon or young people. Furthermore, the trials did not assess if surgery could prevent arthritic changes in long-term follow-up. Further well-designed trials in this area that include a placebo-surgery control group and long follow-up are needed to further increase certainty about the effects of surgery for rotator cuff tears.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane Review is current to January 2019. We found nine trials with 1007 participants. The participants' mean age was 56 to 68 years, and females comprised 29% to 56% of the participants. The participants had symptoms for several months or years and were diagnosed with a full-thickness tear using magnetic resonance imaging or ultrasound examination. Studies were conducted in Finland, Norway, Canada, USA, France, the Netherlands, Italy and South Korea. Our primary analysis included three trials with 339 participants who received either surgery (tendon repair and removal of bone from the undersurface of the acromion) or non-operative therapy (exercises with or without glucocorticoid injection). Three studies received funding; however, none of them reported using the funds directly for these trials."
}
]
|
query-laysum | 5605 | [
{
"role": "user",
"content": "Abstract: Background\nThe use of botulinum toxin as an investigative and treatment modality for strabismus is well reported in the medical literature. However, it is unclear how effective it is in comparison with other treatment options for strabismus.\nObjectives\nThe primary objective was to examine the efficacy of botulinum toxin therapy in the treatment of strabismus compared with alternative conservative or surgical treatment options. This review sought to ascertain those types of strabismus that particularly benefit from the use of botulinum toxin as a treatment option (such as small angle strabismus or strabismus with binocular potential, i.e. the potential to use both eyes together as a pair). The secondary objectives were to investigate the dose effect and complication rates associated with botulinum toxin.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, LILACS and three trials registers on 6 July 2022, together with reference checking to identify additional studies. We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) comparing botulinum toxin with strabismus surgery, botulinum toxin alternatives (i.e. bupivacaine) and conservative therapy such as orthoptic exercises, prisms, or lens therapy for people of any age with strabismus. All relevant RCTs identified in this update compared botulinum toxin with strabismus surgery.\nData collection and analysis\nWe used standard methods expected by Cochrane and assessed the certainty of the body of evidence using GRADE.\nMain results\nWe included four RCTs with 242 participants that enrolled adults with esotropia or exotropia, children with acquired esotropia, and children with infantile esotropia. The follow-up period ranged from six to 36 months. Two studies were conducted in Spain, and one each in Canada and South Africa. We judged the included studies to have a mixture of low, unclear and high risk of bias. We did not consider any of the included studies to be at low risk of bias for all domains.\nAll four studies reported the proportion of participants who improved or corrected strabismus, defined as ≤ 10 prism diopters (PD) at six months (two studies) or ≤ 8 PD at one year (two studies). Low-certainty evidence suggested that participants treated with the surgery may be more likely to improve or correct strabismus compared with those who treated with botulinum toxin (risk ratio (RR) 0.72, 95% confidence interval (CI) 0.53 to 0.99; I² = 50%; 4 studies, 242 participants; low-certainty evidence).\nOne study, which enrolled 110 children with infantile esotropia, suggested that surgery may reduce the incidence of additional surgical intervention required, but the evidence was very uncertain (RR 3.05, 95% CI 1.34 to 6.91; 1 study, 101 participants; very low-certainty evidence).\nTwo studies conducted in Spain compared botulinum toxin with surgery in children who required retreatment for acquired or infantile esotropia. These two studies provided low-certainty evidence that botulinum toxin may have little to no effect on achieving sensory fusion (RR 0.88, 95% CI 0.63 to 1.23; I² = 0%; 2 studies, 102 participants) and stereopsis (RR 0.86, 95% CI 0.59 to 1.25; I² = 0%; 2 studies, 102 participants) compared with surgery.\nThree studies reported non-serious adverse events. 
Partial transient ptosis (range 16.7% to 37.0%) and transient vertical deviation (range 5.6% to 18.5%) were observed among participants treated with botulinum toxin in three studies. In one study, 44.7% participants in the surgery group experienced discomfort. No studies reported serious adverse events or postintervention quality of life.\nAuthors' conclusions\nIt remains unclear whether botulinum toxin may be an alternative to strabismus surgery as an independent treatment modality among certain types of strabismus because we found only low and very low-certainty evidence in this review update.\nLow-certainty evidence suggests that strabismus surgery may be preferable to botulinum toxin injection to improve or correct strabismus when types of strabismus and different age groups are combined. We found low-certainty evidence suggesting botulinum toxin may have little to no effect on achievement of binocular single vision compared with surgery in children with acquired or infantile esotropia. We did not find sufficient evidence to draw any meaningful conclusions with respect to need for additional surgery, quality of life, and serious adverse events.\nWe identified three ongoing trials comparing botulinum toxin with conventional surgeries in the varying types of strabismus, whose results will provide relevant evidence for our stated objectives. Future trials should be rigorously designed, and investigators should analyze outcome data appropriately and report adequate information to provide evidence of high certainty. Quality of life and cost-effectiveness should be examined in addition to clinical and safety outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that investigated botulinum toxin injection compared with other treatment such as surgery in people with strabismus. We compared and summarized the results of relevant studies and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
]
|
query-laysum | 5606 | [
{
"role": "user",
"content": "Abstract: Background\nExcessive alcohol use contributes significantly to physical and psychological illness, injury and death, and a wide array of social harm in all age groups. A proven strategy for reducing excessive alcohol consumption levels is to offer a brief conversation-based intervention in primary care settings, but more recent technological innovations have enabled people to interact directly via computer, mobile device or smartphone with digital interventions designed to address problem alcohol consumption.\nObjectives\nTo assess the effectiveness and cost-effectiveness of digital interventions for reducing hazardous and harmful alcohol consumption, alcohol-related problems, or both, in people living in the community, specifically: (i) Are digital interventions more effective and cost-effective than no intervention (or minimal input) controls? (ii) Are digital interventions at least equally effective as face-to-face brief alcohol interventions? (iii) What are the effective component behaviour change techniques (BCTs) of such interventions and their mechanisms of action? (iv) What theories or models have been used in the development and/or evaluation of the intervention? Secondary objectives were (i) to assess whether outcomes differ between trials where the digital intervention targets participants attending health, social care, education or other community-based settings and those where it is offered remotely via the internet or mobile phone platforms; (ii) to specify interventions according to their mode of delivery (e.g. functionality features) and assess the impact of mode of delivery on outcomes.\nSearch methods\nWe searched CENTRAL, MEDLINE, PsycINFO, CINAHL, ERIC, HTA and Web of Knowledge databases; ClinicalTrials.com and WHO ICTRP trials registers and relevant websites to April 2017. We also checked the reference lists of included trials and relevant systematic reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated the effectiveness of digital interventions compared with no intervention or with face-to-face interventions for reducing hazardous or harmful alcohol consumption in people living in the community and reported a measure of alcohol consumption.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included 57 studies which randomised a total of 34,390 participants. The main sources of bias were from attrition and participant blinding (36% and 21% of studies respectively, high risk of bias). 
Forty one studies (42 comparisons, 19,241 participants) provided data for the primary meta-analysis, which demonstrated that participants using a digital intervention drank approximately 23 g alcohol weekly (95% CI 15 to 30) (about 3 UK units) less than participants who received no or minimal interventions at end of follow up (moderate-quality evidence).\nFifteen studies (16 comparisons, 10,862 participants) demonstrated that participants who engaged with digital interventions had less than one drinking day per month fewer than no intervention controls (moderate-quality evidence), 15 studies (3587 participants) showed about one binge drinking session less per month in the intervention group compared to no intervention controls (moderate-quality evidence), and in 15 studies (9791 participants) intervention participants drank one unit per occasion less than no intervention control participants (moderate-quality evidence).\nOnly five small studies (390 participants) compared digital and face-to-face interventions. There was no difference in alcohol consumption at end of follow up (MD 0.52 g/week, 95% CI -24.59 to 25.63; low-quality evidence). Thus, digital alcohol interventions produced broadly similar outcomes in these studies. No studies reported whether any adverse effects resulted from the interventions.\nA median of nine BCTs were used in experimental arms (range = 1 to 22). 'B' is an estimate of effect (MD in quantity of drinking, expressed in g/week) per unit increase in the BCT, and is a way to report whether individual BCTs are linked to the effect of the intervention. The BCTs of goal setting (B -43.94, 95% CI -78.59 to -9.30), problem solving (B -48.03, 95% CI -77.79 to -18.27), information about antecedents (B -74.20, 95% CI -117.72 to -30.68), behaviour substitution (B -123.71, 95% CI -184.63 to -62.80) and credible source (B -39.89, 95% CI -72.66 to -7.11) were significantly associated with reduced alcohol consumption in unadjusted models. In a multivariable model that included BCTs with B > 23 in the unadjusted model, the BCTs of behaviour substitution (B -95.12, 95% CI -162.90 to -27.34), problem solving (B -45.92, 95% CI -90.97 to -0.87), and credible source (B -32.09, 95% CI -60.64 to -3.55) were associated with reduced alcohol consumption.\nThe most frequently mentioned theories or models in the included studies were Motivational Interviewing Theory (7/20), Transtheoretical Model (6/20) and Social Norms Theory (6/20). Over half of the interventions (n = 21, 51%) made no mention of theory. Only two studies used theory to select participants or tailor the intervention. There was no evidence of an association between reporting theory use and intervention effectiveness.\nAuthors' conclusions\nThere is moderate-quality evidence that digital interventions may lower alcohol consumption, with an average reduction of up to three (UK) standard drinks per week compared to control participants. Substantial heterogeneity and risk of performance and publication bias may mean the reduction was lower. Low-quality evidence from fewer studies suggested there may be little or no difference in impact on alcohol consumption between digital and face-to-face interventions.\nThe BCTs of behaviour substitution, problem solving and credible source were associated with the effectiveness of digital interventions to reduce alcohol consumption and warrant further investigation in an experimental context.\nReporting of theory use was very limited and often unclear when present. 
Over half of the interventions made no reference to any theories. Limited reporting of theory use was unrelated to heterogeneity in intervention effectiveness.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 57 studies comparing the drinking of people getting advice about alcohol from computers or mobile devices with those who did not after one to 12 months. Of these, 41 studies (42 comparisons, 19,241 participants) focused on the actual amounts that people reported drinking each week. Most people reported drinking less if they received advice about alcohol from a computer or mobile device compared to people who did not get this advice.\nEvidence shows that the amount of alcohol people cut down may be about 1.5 pints (800 mL) of beer or a third of a bottle of wine (250 mL) each week. Other measures supported the effectiveness of digital alcohol interventions, although the size of the effect tended to be smaller than for overall alcohol consumption. Positive differences in measures of drinking were seen at 1, 6 and 12 months after the advice.\nThere was not enough information to help us decide if advice was better from computers, telephones or the internet to reduce risky drinking. We do not know which pieces of advice were the most important to help people reduce problem drinking. However, advice from trusted people such as doctors seemed helpful, as did recommendations that people think about specific ways they could overcome problems that might prevent them from drinking less and suggestions about things to do instead of drinking. We included five studies which compared the drinking of people who got advice from computers or mobile devices with advice from face-to-face conversations with doctors or nurses; there may be little or no difference between these to reduce heavy drinking.\nNo studies reported whether any harm came from the interventions.\nPersonalised advice using computers or mobile devices may help people reduce heavy drinking better than doing nothing or providing only general health information. Personalised advice through computers or mobile devices may make little or no difference to reduce drinking compared to face-to-face conversation."
}
]
|
query-laysum | 5607 | [
{
"role": "user",
"content": "Abstract: Background\nInadvertent postoperative hypothermia (a drop in core body temperature to below 36°C) occurs as an effect of surgery when anaesthetic drugs and exposure of the skin for long periods of time during surgery result in interference with normal temperature regulation. Once hypothermia has occurred, it is important that patients are rewarmed promptly to minimise potential complications. Several different interventions are available for rewarming patients.\nObjectives\nTo estimate the effectiveness of treating inadvertent perioperative hypothermia through postoperative interventions to decrease heat loss and apply passive and active warming systems in adult patients who have undergone surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2014, Issue 2), MEDLINE (Ovid SP) (1956 to 21 February 2014), EMBASE (Ovid SP) (1982 to 21 February 2014), the Institute for Scientific Information (ISI) Web of Science (1950 to 21 February 2014) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL), EBSCO host (1980 to 21 February 2014), as well as reference lists of articles. We also searched www.controlled-trials.com and www.clincialtrials.gov.\nSelection criteria\nRandomized controlled trials of postoperative warming interventions aiming to reverse hypothermia compared with control or with each other.\nData collection and analysis\nThree review authors identified studies for inclusion in this review. One review author extracted data and completed risk of bias assessments; two review authors checked the details. Meta-analysis was conducted when appropriate by using standard methodological procedures as expected by The Cochrane Collaboration.\nMain results\nWe included 11 trials with 699 participants. Ten trials provided data for analysis. Trials varied in the numbers and types of participants included and in the types of surgery performed. Most trials were at high or unclear risk of bias because of inappropriate or unclear randomization procedures, and because blinding of assessors and participants generally was not possible. This may have influenced results, but it is unclear how the results may have been influenced. Active warming was found to reduce the mean time taken to achieve normothermia by about 30 minutes in comparison with use of warmed cotton blankets (mean difference (MD) -32.13 minutes, 95% confidence interval (CI) -42.55 to -21.71; moderate-quality evidence), but no significant difference in shivering was noted. Active warming was found to reduce mean time taken to achieve normothermia by almost an hour and a half in comparison with use of unwarmed cotton blankets (MD -88.86 minutes, 95% CI -123.49 to -54.23; moderate-quality evidence), and people in the active warming group were less likely to shiver than those in the unwarmed cotton blanket group (Relative Risk=0.61 95% CI= 0.42 to 0.86; low quality evidence). There was no effect on mean temperature difference in degrees celsius at 60 minutes (MD=0.18°C, 95% CI=-0.10 to 0.46; moderate quality evidence), and no data were available in relation to major cardiovascular complications. Forced air warming was found to reduce time taken to achieve normothermia by about one hour in comparison to circulating hot water devices (MD=-54.21 minutes 95% CI= -94.95, -13.47). 
There was no statistically significant difference between thermal insulation and cotton blankets on mean time to achieve normothermia (MD =-0.29 minutes, 95% CI=-25.47 to 24.89; moderate quality evidence) or shivering (Relative Risk=1.36 95% CI= 0.69 to 2.67; moderate quality evidence), and no data were available for mean temperature difference or major cardiovascular complications. Insufficient evidence was available about other comparisons, adverse effects or any other secondary outcomes.\nAuthors' conclusions\nActive warming, particularly forced air warming, appears to offer a clinically important reduction in mean time taken to achieve normothermia (normal body temperature between 36°C and 37.5°C) in patients with postoperative hypothermia. However, high-quality evidence on other important clinical outcomes is lacking; therefore it is unclear whether active warming offers other benefits and harms. High-quality evidence on other warming methods is also lacking; therefore it is unclear whether other rewarming methods are effective in reversing postoperative hypothermia.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Quality of the evidenceMost of the evidence was moderate to low in quality. Methods used to assign patients to treatment groups were generally unclear or inadequate, and it was not possible to keep patients or people assessing patients unaware of the treatment given. This may have biased the results, but we are not sure what influence this could have had on the overall results."
}
]
|
query-laysum | 5608 | [
{
"role": "user",
"content": "Abstract: Background\nTeeth that have suffered trauma can fuse to the surrounding bone in a process called dental ankylosis. Ankylosed permanent front teeth fail to erupt during facial growth and can become displaced, thus resulting in functional and aesthetic problems. Dental ankylosis is also associated with root resorption, which may eventually lead to the loss of affected teeth. Different interventions for the management of ankylosed permanent front teeth have been described, but it is unclear which are the most effective.\nObjectives\nTo evaluate the effectiveness of any intervention that can be used in the treatment of ankylosed permanent front teeth.\nSearch methods\nThe following electronic databases were searched: the Cochrane Oral Health Group Trials Register (to 3 August 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, 2015, Issue 7), MEDLINE via OVID (1946 to 3 August 2015), EMBASE via OVID (1980 to 3 August 2015) and LILACS via BIREME (1982 to 3 August 2015). We searched the US National Institutes of Health Trials Register (http://clinicaltrials.gov) and the WHO Clinical Trials Registry Platform for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing any intervention for treating displaced ankylosed permanent front teeth in individuals of any age. Treatments could be compared with one another, with placebo or with no treatment.\nData collection and analysis\nTwo independent review authors screened studies independently. Full papers were obtained for potentially relevant trials. Although no study was included, the authors had planned to extract data independently and to analyse the data according to the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nNo randomised controlled trials that met the inclusion criteria were identified.\nAuthors' conclusions\nWe were unable to identify any reports of randomised controlled trials regarding the efficacy of different treatment options for ankylosed permanent front teeth. The lack of high level evidence for the management of this health problem emphasises the need for well designed clinical trials on this topic, which conform to the CONSORT statement (www.consort-statement.org/).\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Authors from the Cochrane Oral Health Group carried out this review of existing studies, and the evidence is up-to-date to 3 October 2015. There were no studies found that met the inclusion criteria for this review."
}
]
|
query-laysum | 5609 | [
{
"role": "user",
"content": "Abstract: Background\nTransient tachypnea of the newborn is characterized by tachypnea and signs of respiratory distress. Transient tachypnea typically appears within the first two hours of life in term and late preterm newborns. Although transient tachypnea of the newborn is usually a self limited condition, it is associated with wheezing syndromes in late childhood. The rationale for the use of epinephrine (adrenaline) for transient tachypnea of the newborn is based on studies showing that β-agonists can accelerate the rate of alveolar fluid clearance.\nObjectives\nTo assess whether epinephrine compared to placebo, no treatment or any other drugs (excluding salbutamol) is effective and safe in the treatment of transient tachypnea of the newborn in infants born at 34 weeks' gestational age or more.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, 2016, Issue 3), MEDLINE (1996 to March 2016), EMBASE (1980 to March 2016) and CINAHL (1982 to March 2016). We applied no language restrictions. We searched the abstracts of the major congresses in the field (Perinatal Society of Australia and New Zealand and Pediatric Academic Societies) from 2000 to 2015.\nSelection criteria\nRandomized controlled trials, quasi-randomized controlled trials and cluster trials comparing epinephrine versus placebo or no treatment or any other drugs administered to infants born at 34 weeks' gestational age or more and less than three days of age with transient tachypnea of the newborn.\nData collection and analysis\nFor the included trial, two review authors independently extracted data (e.g. number of participants, birth weight, gestational age, duration of oxygen therapy (hours), need for continuous positive airway pressure and need for mechanical ventilation, duration of mechanical ventilation, etc.) and assessed the risk of bias (e.g. adequacy of randomization, blinding, completeness of follow-up). The primary outcomes considered in this review were duration of oxygen therapy (hours), need for continuous positive airway pressure and need for mechanical ventilation.\nMain results\nOne trial, which included 20 infants, met the inclusion criteria of this review. Study authors administered three doses of nebulized 2.25% racemic epinephrine or placebo. We found no differences between the two group in the duration of supplemental oxygen therapy (mean difference (MD) -6.60, 95% confidence interval (CI) -54.80 to 41.60 hours) and need for mechanical ventilation (risk ratio (RR) 0.67, 95% CI 0.08 to 5.88; risk difference (RD) -0.07, 95% CI -0.46 to 0.32). Among secondary outcomes, we found no differences in terms of initiation of oral feeding. The quality of the evidence was limited due to the imprecision of the estimates.\nAuthors' conclusions\nAt present there is insufficient evidence to determine the efficacy and safety of epinephrine in the management of transient tachypnea of the newborn.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does epinephrine reduce the duration of oxygen therapy and the need for respiratory support in newborns with transient tachypnea?"
}
]
|
query-laysum | 5610 | [
{
"role": "user",
"content": "Abstract: Background\nChild overweight and obesity has increased globally, and can be associated with short- and long-term health consequences.\nObjectives\nTo assess the effects of diet, physical activity, and behavioural interventions for the treatment of overweight or obesity in preschool children up to the age of 6 years.\nSearch methods\nWe performed a systematic literature search in the databases Cochrane Library, MEDLINE, EMBASE, PsycINFO, CINAHL, and LILACS, as well as in the trial registers ClinicalTrials.gov and ICTRP Search Portal. We also checked references of identified trials and systematic reviews. We applied no language restrictions. The date of the last search was March 2015 for all databases.\nSelection criteria\nWe selected randomised controlled trials (RCTs) of diet, physical activity, and behavioural interventions for treating overweight or obesity in preschool children aged 0 to 6 years.\nData collection and analysis\nTwo review authors independently assessed risk of bias, evaluated the overall quality of the evidence using the GRADE instrument, and extracted data following the Cochrane Handbook for Systematic Reviews of Interventions. We contacted trial authors for additional information.\nMain results\nWe included 7 RCTs with a total of 923 participants: 529 randomised to an intervention and 394 to a comparator. The number of participants per trial ranged from 18 to 475. Six trials were parallel RCTs, and one was a cluster RCT. Two trials were three-arm trials, each comparing two interventions with a control group. The interventions and comparators in the trials varied. We categorised the comparisons into two groups: multicomponent interventions and dietary interventions. The overall quality of the evidence was low or very low, and six trials had a high risk of bias on individual 'Risk of bias' criteria. The children in the included trials were followed up for between six months and three years.\nIn trials comparing a multicomponent intervention with usual care, enhanced usual care, or information control, we found a greater reduction in body mass index (BMI) z score in the intervention groups at the end of the intervention (6 to 12 months): mean difference (MD) -0.3 units (95% confidence interval (CI) -0.4 to -0.2); P < 0.00001; 210 participants; 4 trials; low-quality evidence, at 12 to 18 months' follow-up: MD -0.4 units (95% CI -0.6 to -0.2); P = 0.0001; 202 participants; 4 trials; low-quality evidence, and at 2 years' follow-up: MD -0.3 units (95% CI -0.4 to -0.1); 96 participants; 1 trial; low-quality evidence.\nOne trial stated that no adverse events were reported; the other trials did not report on adverse events. Three trials reported health-related quality of life and found improvements in some, but not all, aspects. Other outcomes, such as behaviour change and parent-child relationship, were inconsistently measured.\nOne three-arm trial of very low-quality evidence comparing two types of diet with control found that both the dairy-rich diet (BMI z score change MD -0.1 units (95% CI -0.11 to -0.09); P < 0.0001; 59 participants) and energy-restricted diet (BMI z score change MD -0.1 units (95% CI -0.11 to -0.09); P < 0.0001; 57 participants) resulted in greater reduction in BMI than the comparator at the end of the intervention period, but only the dairy-rich diet maintained this at 36 months' follow-up (BMI z score change in MD -0.7 units (95% CI -0.71 to -0.69); P < 0.0001; 52 participants). 
The energy-restricted diet had a worse BMI outcome than control at this follow-up (BMI z score change MD 0.1 units (95% CI 0.09 to 0.11); P < 0.0001; 47 participants). There was no substantial difference in mean daily energy expenditure between groups. Health-related quality of life, adverse effects, participant views, and parenting were not measured.\nNo trial reported on all-cause mortality, morbidity, or socioeconomic effects.\nAll results should be interpreted cautiously due to their low quality and heterogeneous interventions and comparators.\nAuthors' conclusions\nMulticomponent interventions appear to be an effective treatment option for overweight or obese preschool children up to the age of 6 years. However, the current evidence is limited, and most trials had a high risk of bias. Most trials did not measure adverse events. We have identified four ongoing trials that we will include in future updates of this review.\nThe role of dietary interventions is more equivocal, with one trial suggesting that dairy interventions may be effective in the longer term, but not energy-restricted diets. This trial also had a high risk of bias.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of the evidence was low or very low, mainly because there were just a few studies per outcome measurement or the number of the included children was small. In addition, many children left the studies before they had finished."
}
]
|
query-laysum | 5611 | [
{
"role": "user",
"content": "Abstract: Background\nGM-CSF (granulocyte macrophage colony-stimulating factor) is a growth factor that is used to supplement culture media in an effort to improve clinical outcomes for those undergoing assisted reproduction. It is worth noting that the use of GM-CSF-supplemented culture media often adds a further cost to the price of an in vitro fertilisation (IVF) cycle. The purpose of this review was to assess the available evidence from randomised controlled trials (RCTs) on the effectiveness and safety of GM-CSF-supplemented culture media.\nObjectives\nTo assess the effectiveness and safety of GM-CSF-supplemented human embryo culture media versus culture media not supplemented with GM-CSF, in women or couples undergoing assisted reproduction.\nSearch methods\nWe used standard methodology recommended by Cochrane. We searched the Cochrane Gynaecology and Fertility Group Trials Register, CENTRAL, MEDLINE, Embase, CINAHL, LILACS, DARE, OpenGrey, PubMed, Google Scholar, and two trials registers on 15 October 2019, checked references of relevant papers and communicated with experts in the field.\nSelection criteria\nWe included RCTs comparing GM-CSF (including G-CSF (granulocyte colony-stimulating factor))-supplemented embryo culture media versus any other non-GM-CSF-supplemented embryo culture media (control) in women undergoing assisted reproduction.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. The primary review outcomes were live birth and miscarriage rate. The secondary outcomes were clinical pregnancy, multiple gestation, preterm birth, birth defects, aneuploidy, and stillbirth rates. We assessed the quality of the evidence using GRADE methodology. We undertook one comparison, GM-CSF-supplemented culture media versus culture media not supplemented with GM-CSF, for those undergoing assisted reproduction.\nMain results\nWe included five studies, the data for three of which (1532 participants) were meta-analysed. We are uncertain whether GM-CSF-supplemented culture media makes any difference to the live-birth rate when compared to using conventional culture media not supplemented with GM-CSF (odds ratio (OR) 1.19, 95% confidence interval (CI) 0.93 to 1.52, 2 RCTs, N = 1432, I2 = 69%, low-quality evidence). The evidence suggests that if the rate of live birth associated with conventional culture media not supplemented with GM-CSF was 22%, the rate with the use of GM-CSF-supplemented culture media would be between 21% and 30%.\nWe are uncertain whether GM-CSF-supplemented culture media makes any difference to the miscarriage rate when compared to using conventional culture media not supplemented with GM-CSF (OR 0.75, 95% CI 0.41 to 1.36, 2 RCTs, N = 1432, I2 = 0%, low-quality evidence). 
This evidence suggests that if the miscarriage rate associated with conventional culture media not supplemented with GM-CSF was 4%, the rate with the use of GM-CSF-supplemented culture media would be between 2% and 5%.\nFurthermore, we are uncertain whether GM-CSF-supplemented culture media makes any difference to the following outcomes: clinical pregnancy (OR 1.16, 95% CI 0.93 to 1.45, 3 RCTs, N = 1532 women, I2 = 67%, low-quality evidence); multiple gestation (OR 1.24, 95% CI 0.73 to 2.10, 2 RCTs, N = 1432, I2 = 35%, very low-quality evidence); preterm birth (OR 1.20, 95% CI 0.70 to 2.04, 2 RCTs, N = 1432, I2 = 76%, very low-quality evidence); birth defects (OR 1.33, 95% CI 0.59 to 3.01, I2 = 0%, 2 RCTs, N = 1432, low-quality evidence); and aneuploidy (OR 0.34, 95% CI 0.03 to 3.26, I2 = 0%, 2 RCTs, N = 1432, low-quality evidence). We were unable to undertake analysis of stillbirth, as there were no events in either arm of the two studies that assessed this outcome.\nAuthors' conclusions\nDue to the very low to low quality of the evidence, we cannot be certain whether GM-CSF is any more or less effective than culture media not supplemented with GM-CSF for clinical outcomes that reflect effectiveness and safety. It is important that independent information on the available evidence is made accessible to those considering using GM-CSF-supplemented culture media. The claims from marketing information that GM-CSF has a positive effect on pregnancy rates are not supported by the available evidence presented here; further well-designed, properly powered RCTs are needed to lend certainty to the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does culture media containing the growth factor GM-CSF (granulocyte macrophage colony-stimulating factor) improve the chances of a pregnancy and live-born baby, and reduce the risk of miscarriage, twin or triplet pregnancy, premature birth, birth defects, genetic problems in the baby, and stillbirth?"
}
]
|
query-laysum | 5612 | [
{
"role": "user",
"content": "Abstract: Background\nOutwardly directed aggressive behaviour in people with intellectual disabilities is a significant issue that may lead to poor quality of life, social exclusion and inpatient psychiatric admissions. Cognitive and behavioural approaches have been developed to manage aggressive behaviour but the effectiveness of these interventions on reducing aggressive behaviour and other outcomes are unclear. This is the third update of this review and adds nine new studies, resulting in a total of 15 studies in this review.\nObjectives\nTo evaluate the efficacy of behavioural and cognitive-behavioural interventions on outwardly directed aggressive behaviour compared to usual care, wait-list controls or no treatment in people with intellectual disability. We also evaluated enhanced interventions compared to non-enhanced interventions.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was March 2022. We revised the search terms to include positive behaviour support (PBS).\nSelection criteria\nWe included randomised and quasi-randomised trials of children and adults with intellectual disability of any duration, setting and any eligible comparator.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were change in 1. aggressive behaviour, 2. ability to control anger, and 3. adaptive functioning, and 4. adverse effects. Our secondary outcomes were change in 5. mental state, 6. medication, 7. care needs and 8. quality of life, and 9. frequency of service utilisation and 10. user satisfaction data. We used GRADE to assess certainty of evidence for each outcome.\nWe expressed treatment effects as mean differences (MD) or odds ratios (OR), with 95% confidence intervals (CI). Where possible, we pooled data using a fixed-effect model.\nMain results\nThis updated version comprises nine new studies giving 15 included studies and 921 participants. The update also adds new interventions including parent training (two studies), mindfulness-based positive behaviour support (MBPBS) (two studies), reciprocal imitation training (RIT; one study) and dialectical behavioural therapy (DBT; one study). It also adds two new studies on PBS.\nMost studies were based in the community (14 studies), and one was in an inpatient forensic service. Eleven studies involved adults only. The remaining studies involved children (one study), children and adolescents (one study), adolescents (one study), and adolescents and adults (one study). One study included boys with fragile X syndrome.\nSix studies were conducted in the UK, seven in the USA, one in Canada and one in Germany. 
Only five studies described sources of funding.\nFour studies compared anger management based on cognitive behaviour therapy to a wait-list or no treatment control group (n = 263); two studies compared PBS with treatment as usual (TAU) (n = 308); two studies compared carer training on mindfulness and PBS with PBS only (n = 128); two studies involving parent training on behavioural approaches compared to wait-list control or TAU (n = 99); one study of mindfulness compared to a wait-list control (n = 34); one study of adapted dialectical behavioural therapy compared to wait-list control (n = 21); one study of RIT compared to an active control (n = 20) and one study of modified relaxation compared to an active control group (n = 12).\nThere was moderate-certainty evidence that anger management may improve severity of aggressive behaviour post-treatment (MD −3.50, 95% CI −6.21 to −0.79; P = 0.01; 1 study, 158 participants); very low-certainty evidence that it might improve self-reported ability to control anger (MD −8.38, 95% CI −14.05 to −2.71; P = 0.004, I2 = 2%; 3 studies, 212 participants), adaptive functioning (MD −21.73, 95% CI −36.44 to −7.02; P = 0.004; 1 study, 28 participants) and psychiatric symptoms (MD −0.48, 95% CI −0.79 to −0.17; P = 0.002; 1 study, 28 participants) post-treatment; and very low-certainty evidence that it does not improve quality of life post-treatment (MD −5.60, 95% CI −18.11 to 6.91; P = 0.38; 1 study, 129 participants) or reduce service utilisation and costs at 10 months (MD 102.99 British pounds, 95% CI −117.16 to 323.14; P = 0.36; 1 study, 133 participants).\nThere was moderate-certainty evidence that PBS may reduce aggressive behaviour post-treatment (MD −7.78, 95% CI −15.23 to −0.32; P = 0.04, I2 = 0%; 2 studies, 275 participants) and low-certainty evidence that it probably does not reduce aggressive behaviour at 12 months (MD −5.20, 95% CI −13.27 to 2.87; P = 0.21; 1 study, 225 participants). There was low-certainty evidence that PBS does not improve mental state post-treatment (OR 1.44, 95% CI 0.83 to 2.49; P = 1.21; 1 study, 214 participants) and very low-certainty evidence that it might not reduce service utilisation at 12 months (MD −448.00 British pounds, 95% CI −1660.83 to 764.83; P = 0.47; 1 study, 225 participants).\nThere was very low-certainty evidence that mindfulness may reduce incidents of physical aggression (MD −2.80, 95% CI −4.37 to −1.23; P < 0.001; 1 study; 34 participants) and low-certainty evidence that MBPBS may reduce incidents of aggression post-treatment (MD −10.27, 95% CI −14.86 to −5.67; P < 0.001, I2 = 87%; 2 studies, 128 participants).\nReasons for downgrading the certainty of evidence were risk of bias (particularly selection and performance bias); imprecision (results from single, often small studies, wide CIs, and CIs crossing the null effect); and inconsistency (statistical heterogeneity).\nAuthors' conclusions\nThere is moderate-certainty evidence that cognitive-behavioural approaches such as anger management and PBS may reduce outwardly directed aggressive behaviour in the short term but there is less certainty about the evidence in the medium and long term, particularly in relation to other outcomes such as quality of life. There is some evidence to suggest that combining more than one intervention may have cumulative benefits.\nMost studies were small and there is a need for larger, robust randomised controlled trials, particularly for interventions where the certainty of evidence is very low. 
More trials are needed that focus on children and whether psychological interventions lead to reductions in the use of psychotropic medications.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This is an update of a Cochrane Review. The aim was to find out whether approaches such as behavioural and cognitive-behavioural therapies are helpful in reducing aggressive behaviour in children and adults with intellectual disabilities."
}
]
|
query-laysum | 5613 | [
{
"role": "user",
"content": "Abstract: Background\nPresent-centered therapy (PCT) is a non-trauma, manualized psychotherapy for adults with post-traumatic stress disorder (PTSD). PCT was originally designed as a treatment comparator in trials evaluating the effectiveness of trauma-focused cognitive-behavioral therapy (TF-CBT). Recent trials have indicated that PCT may be an effective treatment option for PTSD and that patients may drop out of PCT at lower rates relative to TF-CBT.\nObjectives\nTo assess the effects of PCT for adults with PTSD. Specifically, we sought to determine whether (1) PCT is more effective in alleviating symptoms relative to control conditions, (2) PCT results in similar alleviation of symptoms compared to TF-CBT, based on an a priori minimally important differences on a semi-structured interview of PTSD symptoms, and (3) PCT is associated with lower treatment dropout as compared to TF-CBT.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Controlled Trials Register, the Cochrane Library, Ovid MEDLINE, Embase, PsycINFO, PubMed, and PTSDpubs (previously called the Published International Literature on Traumatic Stress (PILOTS) database) (all years to 15 February 2019 search). We also searched the World Health Organization (WHO) trials portal (ICTRP) and ClinicalTrials.gov to identify unpublished and ongoing trials. Reference lists of included studies and relevant systematic reviews were checked. Grey literature searches were also conducted to identify dissertations and theses, clinical guidelines, and regulatory agency reports.\nSelection criteria\nWe selected all randomized clinical trials (RCTs) that recruited adults diagnosed with PTSD to evaluate PCT compared to TF-CBT or a control condition. Both individual and group PCT modalities were included. The primary outcomes of interest included reduced PTSD severity as determined by a clinician-administered measure and treatment dropout rates.\nData collection and analysis\nWe complied with the Cochrane recommended standards for data screening and collection. Two review authors independently screened articles for inclusion and extracted relevant data from eligible studies, including the assessment of trial quality. Random-effects meta-analyses, subgroup analyses, and sensitivity analyses were conducted using mean differences (MD) and standardized mean differences (SMD) for continuous data or risk ratios (RR) and risk differences (RD) for dichotomous data. To conclude that PCT resulted in similar reductions in PTSD symptoms relative to TF-CBT, we required a MD of less than 10 points (to include the 95% confidence interval) on the Clinician-Administered PTSD Scale (CAPS). Five members of the review team convened to rate the quality of evidence across the primary outcomes. Any disagreements were resolved through discussion. Review authors who were investigators on any of the included trials were not involved in the qualitative or quantitative syntheses.\nMain results\nWe included 12 studies (n = 1837), of which, three compared PCT to a wait-list/minimal attention (WL/MA) group and 11 compared PCT to TF-CBT. PCT was more effective than WL/MA in reducing PTSD symptom severity (SMD -0.84, 95% CI -1.10 to -0.59; participants = 290; studies = 3; I² = 0%). We assessed the quality of this evidence as moderate. 
The results of the non-inferiority analysis comparing PCT to TF-CBT did not support PCT non-inferiority, with the 95% confidence interval surpassing the clinically meaningful cut-off (MD 6.83, 95% CI 1.90 to 11.76; 6 studies, n = 607; I² = 42%). We assessed this quality of evidence as low. CAPS differences between PCT and TF-CBT attenuated at 6-month (MD 1.59, 95% CI -0.46 to 3.63; participants = 906; studies = 6; I² = 0%) and 12-month (MD 1.22, 95% CI -2.17 to 4.61; participants = 485; studies = 3; I² = 0%) follow-up periods. To confirm the direction of the treatment effect using all eligible trials, we also evaluated PTSD SMD differences. These results were consistent with the primary MD outcomes, with meaningful effect size differences between PCT and TF-CBT at post-treatment (SMD 0.32, 95% CI 0.08 to 0.56; participants = 1129; studies = 9), but smaller effect size differences at six months (SMD 0.17, 95% CI 0.05 to 0.29; participants = 1339; studies = 9) and 12 months (SMD 0.17, 95% CI 0.03 to 0.31; participants = 728; studies = 5). PCT had approximately 14% lower treatment dropout rates compared to TF-CBT (RD -0.14, 95% CI -0.18 to -0.10; participants = 1542; studies = 10). We assessed the quality of this evidence as moderate. There was no evidence of meaningful differences on self-reported PTSD (MD 4.50, 95% CI 3.09 to 5.90; participants = 983; studies = 7) or depression symptoms (MD 1.78, 95% CI -0.23 to 3.78; participants = 705; studies = 5) post-treatment.\nAuthors' conclusions\nModerate-quality evidence indicates that PCT is more effective in reducing PTSD severity compared to control conditions. Low quality of evidence did not support PCT as a non-inferior treatment compared to TF-CBT on clinician-rated post-treatment PTSD severity. The treatment effect differences between PCT and TF-CBT may attenuate over time. PCT participants drop out of treatment at lower rates relative to TF-CBT participants. Of note, all of the included studies were primarily designed to test the effectiveness of TF-CBT which may bias results away from PCT non-inferiority. The current systematic review provides the most rigorous evaluation to date to determine whether PCT is comparably as effective as TF-CBT. Findings are generally consistent with current clinical practice guidelines that suggest that PCT may be offered as a treatment for PTSD when TF-CBT is not available.\n\nGiven the provided abstract, please respond to the following query: \"Study Characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review included 12 studies that comprised a total of 1837 participants. Eleven studies that included 1826 participants contributed to the quantitative syntheses. Participants were all adults, but ranged in demographics and trauma types. All studies recruited participants in the United States and there was a predominance of studies conducted on military veterans."
}
]
|
query-laysum | 5614 | [
{
"role": "user",
"content": "Abstract: Background\nMacular hole (MH) is a full-thickness defect in the central portion of the retina that causes loss of central vision. According to the usual definition, a large MH has a diameter greater than 400 µm at the narrowest point. For closure of MH, there is evidence that pars plana vitrectomy (PPV) with internal limiting membrane (ILM) peeling achieves better anatomical outcomes than standard PPV. PPV with ILM peeling is currently the standard of care for MH management; however, the failure rate of this technique is higher for large MHs than for smaller MHs. Some studies have shown that the inverted ILM flap technique is superior to conventional ILM peeling for the management of large MHs.\nObjectives\nTo evaluate the clinical effectiveness and safety of pars plana vitrectomy with the inverted internal limiting membrane flap technique versus pars plana vitrectomy with conventional internal limiting membrane peeling for treating large macular holes, including idiopathic, traumatic, and myopic macular holes.\nSearch methods\nThe Cochrane Eyes and Vision Information Specialist searched CENTRAL, MEDLINE, Embase, two other databases, and two trials registries on 12 December 2022.\nSelection criteria\nWe included randomized controlled trials (RCTs) that evaluated PPV with ILM peeling versus PPV with inverted ILM flap for treatment of large MHs (with a basal diameter greater than 400 µm at the narrowest point measured by optical coherence tomography) of any type (idiopathic, traumatic, or myopic).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane and assessed the certainty of the body of evidence using GRADE.\nMain results\nWe included four RCTs (285 eyes of 275 participants; range per study 24 to 91 eyes). Most participants were women (63%), and of older age (range of means 59.4 to 66 years). Three RCTs were single-center trials, and the same surgeon performed all surgeries in two RCTs (the third single-center RCT did not report the number of surgeons). One RCT was a multicenter trial (three sites), and four surgeons performed all surgeries. Two RCTs took place in India, one in Poland, and one in Mexico. Maximum follow-up ranged from three months (2 RCTs) to 12 months (1 RCT). No RCTs reported conflicts of interest or disclosed financial support. All four RCTs enrolled people with large idiopathic MHs and compared conventional PPV with ILM peeling versus PPV with inverted ILM flap techniques. Variations in technique across the four RCTs were minimal. There was some heterogeneity in interventions: in two RCTs, all participants underwent combined cataract-PPV surgery, whereas in one RCT, some participants underwent cataract surgery after PPV (the fourth RCT did not mention cataract surgery). The critical outcomes for this review were mean best-corrected visual acuity (BCVA) and MH closure rates. All four RCTs provided data for meta-analyses of both critical outcomes. We assessed the risk of bias for both outcomes using the Cochrane risk of bias tool (RoB 2); there were some concerns for risk of bias associated with lack of masking of outcome assessors and selective reporting of outcomes in all RCTs.\nAll RCTs reported postoperative BCVA values; only one RCT reported the change in BCVA from baseline. 
Based on evidence from the four RCTs, it is unclear if the inverted ILM flap technique compared with ILM peeling reduces (improves) postoperative BCVA measured on a logarithm of the minimum angle of resolution (logMAR) chart at one month (mean difference [MD] −0.08 logMAR, 95% confidence interval [CI] −0.20 to 0.05; P = 0.23, I2 = 65%; 4 studies, 254 eyes; very low-certainty evidence), but it may improve BCVA at three months or more (MD −0.17 logMAR, 95% CI −0.23 to −0.10; P < 0.001, I2 = 0%; 4 studies, 276 eyes; low-certainty evidence). PPV with an inverted ILM flap compared to PPV with ILM peeling probably increases the proportion of eyes achieving MH closure (risk ratio [RR] 1.10, 95% CI 1.02 to 1.18; P = 0.01, I2 = 0%; 4 studies, 276 eyes; moderate-certainty evidence) and type 1 MH closure (RR 1.31, 95% CI 1.03 to 1.66; P = 0.03, I² = 69%; 4 studies, 276 eyes; moderate-certainty evidence). One study reported that none of the 38 participants experienced postoperative retinal detachment.\nAuthors' conclusions\nWe found low-certainty evidence from four small RCTs that PPV with the inverted ILM flap technique is superior to PPV with ILM peeling with respect to BCVA gains at three or more months after surgery. We also found moderate-certainty evidence that the inverted ILM flap technique achieves more overall and type 1 MH closures. There is a need for high-quality multicenter RCTs to ascertain whether the inverted ILM flap technique is superior to ILM peeling with regard to anatomical and functional outcomes. Investigators should use the standard logMAR charts when measuring BCVA to facilitate comparison across trials.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to examine whether PPV with the ILM flap technique was better than PPV with ILM peeling for treating large macular holes."
}
]
|
query-laysum | 5615 | [
{
"role": "user",
"content": "Abstract: Background\nOral nirmatrelvir/ritonavir (Paxlovid®) aims to avoid severe COVID-19 in asymptomatic people or those with mild symptoms, thereby decreasing hospitalization and death. Due to its novelty, there are currently few published study results. It remains to be evaluated for which indications and patient populations the drug is suitable.\nObjectives\nTo assess the efficacy and safety of nirmatrelvir/ritonavir (Paxlovid®) plus standard of care compared to standard of care with or without placebo, or any other intervention for treating COVID-19 and for preventing SARS-CoV-2 infection.\nTo explore equity aspects in subgroup analyses.\nTo keep up to date with the evolving evidence base using a living systematic review (LSR) approach and make new relevant studies available to readers in-between publication of review updates.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register, Scopus, and WHO COVID-19 Global literature on coronavirus disease database, identifying completed and ongoing studies without language restrictions and incorporating studies up to 11 July 2022.\nThis is a LSR. We conduct monthly update searches that are being made publicly available on the open science framework (OSF) platform.\nSelection criteria\nStudies were eligible if they were randomized controlled trials (RCTs) comparing nirmatrelvir/ritonavir plus standard of care with standard of care with or without placebo, or any other intervention for treatment of people with confirmed COVID-19 diagnosis, irrespective of disease severity or treatment setting, and for prevention of SARS-CoV-2 infection.\nWe screened all studies for research integrity. Studies were ineligible if they had been retracted, or if they were not prospectively registered including appropriate ethics approval.\nData collection and analysis\nWe followed standard Cochrane methodology and used the Cochrane risk of bias 2 tool. We rated the certainty of evidence using the GRADE approach for the following outcomes: 1. to treat outpatients with mild COVID-19; 2. to treat inpatients with moderate-to-severe COVID-19: mortality, clinical worsening or improvement, quality of life, (serious) adverse events, and viral clearance; 3. to prevent SARS-CoV-2 infection in post-exposure prophylaxis (PEP); and 4. pre-exposure prophylaxis (PrEP) scenarios: SARS-CoV-2 infection, development of COVID-19 symptoms, mortality, admission to hospital, quality of life, and (serious) adverse events.\nWe explored inequity by subgroup analysis for elderly people, socially-disadvantaged people with comorbidities, populations from LICs and LMICs, and people from different ethnic and racial backgrounds.\nMain results\nAs of 11 July 2022, we included one RCT with 2246 participants in outpatient settings with mild symptomatic COVID-19 comparing nirmatrelvir/ritonavir plus standard of care with standard of care plus placebo. Trial participants were unvaccinated, without previous confirmed SARS-CoV-2 infection, had a symptom onset of no more than five days before randomization, and were at high risk for progression to severe disease. 
Prohibited prior or concomitant therapies included medications highly dependent on CYP3A4 for clearance and CYP3A4 inducers.\nWe identified eight ongoing studies.\nNirmatrelvir/ritonavir for treating COVID-19 in outpatient settings with asymptomatic or mild disease\nFor the specific population of unvaccinated, high-risk patients, nirmatrelvir/ritonavir plus standard of care compared to standard of care plus placebo may reduce all-cause mortality at 28 days (risk ratio (RR) 0.04, 95% confidence interval (CI) 0.00 to 0.68; 1 study, 2224 participants; estimated absolute effect: 11 deaths per 1000 people receiving placebo compared to 0 deaths per 1000 people receiving nirmatrelvir/ritonavir; low-certainty evidence), and admission to hospital or death within 28 days (RR 0.13, 95% CI 0.07 to 0.27; 1 study, 2224 participants; estimated absolute effect: 61 admissions or deaths per 1000 people receiving placebo compared to eight admissions or deaths per 1000 people receiving nirmatrelvir/ritonavir; low-certainty evidence).\nNirmatrelvir/ritonavir plus standard of care may reduce serious adverse events during the study period compared to standard of care plus placebo (RR 0.24, 95% CI 0.15 to 0.41; 1 study, 2224 participants; low-certainty evidence). Nirmatrelvir/ritonavir plus standard of care probably has little or no effect on treatment-emergent adverse events (RR 0.95, 95% CI 0.82 to 1.10; 1 study, 2224 participants; moderate-certainty evidence), and probably increases treatment-related adverse events such as dysgeusia and diarrhoea during the study period compared to standard of care plus placebo (RR 2.06, 95% CI 1.44 to 2.95; 1 study, 2224 participants; moderate-certainty evidence). Nirmatrelvir/ritonavir plus standard of care probably decreases discontinuation of study drug due to adverse events compared to standard of care plus placebo (RR 0.49, 95% CI 0.30 to 0.80; 1 study, 2224 participants; moderate-certainty evidence).\nNo study results were identified for improvement of clinical status, quality of life, and viral clearance.\nSubgroup analyses for equity\nMost study participants were younger than 65 years (87.1% of the modified intention to treat (mITT1) population with 2085 participants), of white ethnicity (71.5%), and were from UMICs or HICs (92.1% of study centres). Data on comorbidities were insufficient.\nThe outcome ‘admission to hospital or death’ was investigated for equity: age (< 65 years versus ≥ 65 years) and ethnicity (Asian versus Black versus White versus others). There was no difference between subgroups of age. The effects favoured treatment with nirmatrelvir/ritonavir for the White ethnic group. Estimated effects in the other ethnic groups included the line of no effect (RR = 1). No subgroups were reported for comorbidity status and World Bank country classification by income level. No subgroups were reported for other outcomes.\nNirmatrelvir/ritonavir for treating COVID-19 in inpatient settings with moderate to severe disease\nNo studies available.\nNirmatrelvir/ritonavir for preventing SARS-CoV-2 infection (PrEP and PEP)\nNo studies available.\nAuthors' conclusions\nThere is low-certainty evidence that nirmatrelvir/ritonavir reduces the risk of all-cause mortality and hospital admission or death based on one trial investigating unvaccinated COVID-19 participants without previous infection that were at high risk and with symptom onset of no more than five days. 
There is low- to moderate-certainty evidence that nirmatrelvir/ritonavir is safe in people without prior or concomitant therapies including medications highly dependent on CYP3A4.\nRegarding equity aspects, except for ethnicity, no differences in effect size and direction were identified.\nNo evidence is available on nirmatrelvir/ritonavir to treat hospitalized people with COVID-19 and to prevent a SARS-CoV-2 infection.\nWe will continually update our search and make search results available on OSF.\n\nGiven the provided abstract, please respond to the following query: \"Equity aspects\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most study participants were younger than 65 years, of white ethnicity and were from upper-middle- or high-income countries. There was no difference in effectiveness between younger and older participants. There was a positive effect in all ethnic groups, which was clearest for people of white ethnicity but numbers of participants in the other ethnic groups were low. No subgroups were reported for different levels of comorbidity and World Bank country classification by income level.\nNo subgroups were reported for other outcomes."
}
]
|
query-laysum | 5616 | [
{
"role": "user",
"content": "Abstract: Background\nCystic fibrosis is a multi-system disease characterised by the production of thick secretions causing recurrent pulmonary infection, often with unusual bacteria. This leads to lung destruction and eventually death through respiratory failure. There are no antibiotics in development that exert a new mode of action and many of the current antibiotics are ineffective in eradicating the bacteria once chronic infection is established. Antibiotic adjuvants - therapies that act by rendering the organism more susceptible to attack by antibiotics or the host immune system, by rendering it less virulent or killing it by other means, would be a significant therapeutic advance. This is an update of a previously published review.\nObjectives\nTo determine if antibiotic adjuvants improve clinical and microbiological outcome of pulmonary infection in people with cystic fibrosis.\nSearch methods\nWe searched the Cystic Fibrosis Trials Register which is compiled from database searches, hand searches of appropriate journals and conference proceedings.\nDate of most recent search: 16 January 2020.\nWe also searched MEDLINE (all years) on 14 February 2019 and ongoing trials registers on 06 April 2020.\nSelection criteria\nRandomised controlled trials and quasi-randomised controlled trials of a therapy exerting an antibiotic adjuvant mechanism of action compared to placebo or no therapy for people with cystic fibrosis.\nData collection and analysis\nTwo of the authors independently assessed and extracted data from identified trials.\nMain results\nWe identified 42 trials of which eight (350 participants) that examined antibiotic adjuvant therapies are included. Two further trials are ongoing and five are awaiting classification. The included trials assessed β-carotene (one trial, 24 participants), garlic (one trial, 34 participants), KB001-A (a monoclonal antibody) (two trials, 196 participants), nitric oxide (two trials, 30 participants) and zinc supplementation (two trials, 66 participants). The zinc trials recruited children only, whereas the remaining trials recruited both adults and children. Three trials were located in Europe, one in Asia and four in the USA.\nThree of the interventions measured our primary outcome of pulmonary exacerbations (β-carotene, mean difference (MD) -8.00 (95% confidence interval (CI) -18.78 to 2.78); KB001-A, risk ratio (RR) 0.25 (95% CI 0.03 to 2.40); zinc supplementation, RR 1.85 (95% CI 0.65 to 5.26). β-carotene and KB001-A may make little or no difference to the number of exacerbations experienced (low-quality evidence); whereas, given the moderate-quality evidence we found that zinc probably makes no difference to this outcome.\nRespiratory function was measured in all of the included trials. β-carotene and nitric oxide may make little or no difference to forced expiratory volume in one second (FEV1) (low-quality evidence), whilst garlic probably makes little or no difference to FEV1 (moderate-quality evidence). 
It is uncertain whether zinc or KB001-A improve FEV1 as the certainty of this evidence is very low.\nFew adverse events were seen across all of the different interventions and the adverse events that were reported were mild or not treatment-related (quality of the evidence ranged from very low to moderate).\nOne of the trials (169 participants) comparing KB001-A and placebo, reported on the time to the next course of antibiotics; results showed there is probably no difference between groups, HR 1.00 (95% CI 0.69 to 1.45) (moderate-quality evidence). Quality of life was only reported in the two KB001-A trials, which demonstrated that there may be little or no difference between KB001-A and placebo (low-quality evidence). Sputum microbiology was measured and reported for the trials of KB001-A and nitric oxide (four trials). There was very low-quality evidence of a numerical reduction in Pseudomonas aeruginosa density with KB001-A, but it was not significant. The two trials looking at the effects of nitric oxide reported significant reductions in Staphylococcus aureus and near-significant reductions in Pseudomonas aeruginosa, but the quality of this evidence is again very low.\nAuthors' conclusions\nWe could not identify an antibiotic adjuvant therapy that we could recommend for treating of lung infection in people with cystic fibrosis. The emergence of increasingly resistant bacteria makes the reliance on antibiotics alone challenging for cystic fibrosis teams. There is a need to explore alternative strategies, such as the use of adjuvant therapies. Further research is required to provide future therapeutic options.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with cystic fibrosis suffer from infections in their lungs as they produce thick secretions which allow bacteria to grow in them. Often the infections are caused by unusual bacteria, including Pseudomonas aeruginosa, and these bacteria become resistant to treatment with antibiotics. Long-term infection reduces a person's quality of life and their lung function. There are no new antibiotics currently being developed which use a new type of action. New agents - antibiotic adjuvants - are needed to work alongside antibiotics to make bacteria more sensitive to either antibiotics or to the body's own immune system, and to interfere with the formation of colonies of bacteria in the lungs."
}
]
|
query-laysum | 5617 | [
{
"role": "user",
"content": "Abstract: Background\nDespite potential analgesic benefits from topical ophthalmic amides and esters, their outpatient use has become of concern because of the potential for abuse and ophthalmic complications.\nObjectives\nTo assess the effectiveness and safety of topical ophthalmic anesthetics compared with placebo or other treatments in persons with corneal abrasions.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; Embase.com; Latin American and Caribbean Health Sciences (LILACS); ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), without restriction on language or year of publication. The search was performed on 10 February 2023.\nSelection criteria\nWe included randomized controlled trials (RCTs) of topical ophthalmic anesthetics alone or in combination with another treatment (e.g. nonsteroidal anti-inflammatory drugs (NSAIDs)) versus a non-anesthetic control group (e.g. placebo, non-treatment, or alternative treatment). We included trials that enrolled participants of all ages who had corneal abrasions within 48 hours of presentation.\nData collection and analysis\nWe used standard Cochrane methodology.\nMain results\nWe included nine parallel-group RCTs with a total of 556 participants (median number of participants per study: 45, interquartile range (IQR) 44 to 74), conducted in eight countries: Australia, Canada, France, South Korea, Turkey, New Zealand, UK, and USA.\nStudy characteristics and risk of bias\nFour RCTs (314 participants) investigated post-traumatic corneal abrasions diagnosed in the emergency department setting. Five trials described 242 participants from ophthalmology surgery centers with post-surgical corneal defects: four from photorefractive keratectomy (PRK) and one from pterygium surgery. Study duration ranged from two days to six months, the most common being one week (four RCTs). Treatment duration ranged from three hours to one week (nine RCTs); the majority were between 24 and 48 hours (five RCTs). The age of participants was reported in eight studies, ranging from 17 to 74 years of age. Only one participant in one trial was under 18 years of age. Of four studies that reported funding sources, none was industry-sponsored. We judged a high risk of bias in one trial with respect to the outcome pain control by 48 hours, and in five of seven trials with respect to the outcome complications at the furthest time point. The domain for which we assessed studies to be at the highest risk of bias was missing or selective reporting of outcome data.\nFindings\nThe treatments investigated included topical anesthetics compared with placebo, topical anesthetic compared with NSAID (post-surgical cases), and topical anesthetics plus NSAID compared with placebo (post-surgical cases).\nPain control by 24 hours\nIn all studies, self-reported pain outcomes were on a 10-point scale, where lower numbers represent less pain. In post-surgical trials, topical anesthetics provided a moderate reduction in self-reported pain at 24 hours compared with placebo of 1.28 points on a 10-point scale (mean difference (MD) −1.28, 95% confidence interval (CI) −1.76 to −0.80; 3 RCTs, 119 participants). In the post-trauma participants, there may be little or no difference in effect (MD −0.04, 95% CI −0.10 to 0.02; 1 RCT, 76 participants). 
Compared with NSAID in post-surgical participants, topical anesthetics resulted in a slight increase in pain at 24 hours (MD 0.82, 95% CI 0.01 to 1.63; 1 RCT, 74 participants).\nOne RCT compared topical anesthetics plus NSAID to placebo. There may be a large reduction in pain at 24 hours with topical anesthetics plus NSAID in post-surgical participants, but the evidence to support this large effect is very uncertain (MD −5.72, 95% CI −7.35 to −4.09; 1 RCT, 30 participants; very low-certainty evidence).\nPain control by 48 hours\nCompared with placebo, topical anesthetics reduced post-trauma pain substantially by 48 hours (MD −5.68, 95% CI −6.38 to −4.98; 1 RCT, 111 participants) but had little to no effect on post-surgical pain (MD 0.41, 95% CI −0.45 to 1.27; 1 RCT, 44 participants), although the evidence is very uncertain.\nPain control by 72 hours\nOne post-surgical RCT showed little or no effect of topical anesthetics compared with placebo by 72 hours (MD 0.49, 95% CI −0.06 to 1.04; 44 participants; very low-certainty evidence).\nProportion of participants with unresolved epithelial defects\nWhen compared with placebo or NSAID, topical anesthetics increased the number of participants without complete resolution of defects in trials of post-trauma participants (risk ratio (RR) 1.37, 95% CI 0.78 to 2.42; 3 RCTs, 221 participants; very low-certainty evidence). The proportion of placebo-treated post-surgical participants with unresolved epithelial defects at 24 to 72 hours was lower when compared with those assigned to topical anesthetics (RR 0.14, 95% CI 0.01 to 2.55; 1 RCT, 30 participants; very low-certainty evidence) or topical anesthetics plus NSAID (RR 0.33, 95% CI 0.04 to 2.85; 1 RCT, 30 participants; very low-certainty evidence).\nProportion of participants with complications at the longest follow-up\nWhen compared with placebo or NSAID, topical anesthetics resulted in a higher proportion of post-trauma participants with complications at up to two weeks (RR 1.13, 95% CI 0.23 to 5.46; 3 RCTs, 242 participants) and post-surgical participants with complications at up to one week (RR 7.00, 95% CI 0.38 to 128.02; 1 RCT, 44 participants). When topical anesthetic plus NSAID was compared with placebo, no complications were reported in either treatment arm up to one week post-surgery (risk difference (RD) 0.00, 95% CI −0.12 to 0.12; 1 RCT, 30 participants). The evidence is very uncertain for safety outcomes.\nQuality of life\nNone of the included trials assessed quality of life outcomes.\nAuthors' conclusions\nDespite topical anesthetics providing excellent pain control in the intraoperative setting, the currently available evidence provides little or no certainty about their efficacy for reducing ocular pain in the initial 24 to 72 hours after a corneal abrasion, whether from unintentional trauma or surgery. We have very low confidence in this evidence as a basis to recommend topical anesthetics as an efficacious treatment modality to relieve pain from corneal abrasions. We also found no evidence of a substantial effect on epithelial healing up to 72 hours or a reduction in ocular complications when we compared anesthetics alone or with NSAIDs versus placebo.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We assessed whether anesthetic eye drops reduce pain in people with corneal abrasions. We also examined whether anesthetic eye drops influence the healing of the corneal wound or cause unwanted effects on the eyes."
}
]
|
query-laysum | 5618 | [
{
"role": "user",
"content": "Abstract: Background\nCocaine dependence is a severe disorder for which no medication has been approved. Like opioids for heroin dependence, replacement therapy with psychostimulants could be an effective therapy for treatment.\nObjectives\nTo assess the effects of psychostimulants for cocaine abuse and dependence. Specific outcomes include sustained cocaine abstinence and retention in treatment. We also studied the influence of type of drug and comorbid disorders on psychostimulant efficacy.\nSearch methods\nThis is an update of the review previously published in 2010. For this updated review, we searched the Cochrane Drugs and Alcohol Group Trials Register, CENTRAL, MEDLINE, Embase and PsycINFO up to 15 February 2016. We handsearched references of obtained articles and consulted experts in the field.\nSelection criteria\nWe included randomised parallel group controlled clinical trials comparing the efficacy of a psychostimulant drug versus placebo.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 26 studies involving 2366 participants. The included studies assessed nine drugs: bupropion, dexamphetamine, lisdexamfetamine, methylphenidate, modafinil, mazindol, methamphetamine, mixed amphetamine salts and selegiline. We did not consider any study to be at low risk of bias for all domains included in the Cochrane 'Risk of bias' tool. Attrition bias was the most frequently suspected potential source of bias of the included studies. We found very low quality evidence that psychostimulants improved sustained cocaine abstinence (risk ratio (RR) 1.36, 95% confidence interval (CI) 1.05 to 1.77, P = 0.02), but they did not reduce cocaine use (standardised mean difference (SMD) 0.16, 95% CI −0.02 to 0.33) among participants who continued to use it. Furthermore, we found moderate quality evidence that psychostimulants did not improve retention in treatment (RR 1.00, 95% CI 0.93 to 1.06). The proportion of adverse event-induced dropouts and cardiovascular adverse event-induced dropouts was similar for psychostimulants and placebo (RD 0.00, 95% CI −0.01 to 0.01; RD 0.00, 95% CI −0.02 to 0.01, respectively). When we included the type of drug as a moderating variable, the proportion of patients achieving sustained cocaine abstinence was higher with bupropion and dexamphetamine than with placebo. Psychostimulants also appeared to increase the proportion of patients achieving sustained cocaine and heroin abstinence amongst methadone-maintained, dual heroin-cocaine addicts. Retention to treatment was low, though, so our results may be compromised by attrition bias. We found no evidence of publication bias.\nAuthors' conclusions\nThis review found mixed results. Psychostimulants improved cocaine abstinence compared to placebo in some analyses but did not improve treatment retention. Since treatment dropout was high, we cannot rule out the possibility that these results were influenced by attrition bias. Existing evidence does not clearly demonstrate the efficacy of any pharmacological treatment for cocaine dependence, but substitution treatment with psychostimulants appears promising and deserves further investigation.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The efficacy of psychostimulants for cocaine dependence is not entirely clear, but these treatments appear promising and deserve further investigation."
}
]
|
query-laysum | 5619 | [
{
"role": "user",
"content": "Abstract: Background\nCerebral small vessel disease is a progressive disease of the brain's deep perforating blood vessels. It is usually diagnosed based on lesions seen on brain imaging. Cerebral small vessel disease is a common cause of stroke but can also cause a progressive cognitive decline. As antithrombotic therapy is an established treatment for stroke prevention, we sought to determine whether antithrombotic therapy might also be effective in preventing cognitive decline in people with small vessel disease.\nObjectives\nTo assess the effects of antithrombotic therapy for prevention of cognitive decline in people with small vessel disease on neuroimaging but without dementia.\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Review Group's Specialised Register, and the Cochrane Stroke Group's Specialised Register; the most recent search was on 21 July 2021. We also searched MEDLINE, Embase, four other databases and two trials registries. We searched the reference lists of the articles retrieved from these searches. As trials with a stroke focus may include relevant subgroup data, we complemented these searches with a focussed search of all antithrombotic titles in the Cochrane Stroke Group database.\nSelection criteria\nWe included randomised controlled trials (RCT) of people with neuroimaging evidence of at least mild cerebral small vessel disease (defined here as white matter hyperintensities, lacunes of presumed vascular origin and subcortical infarcts) but with no evidence of dementia. The trials had to compare antithrombotic therapy of minimum 24 weeks' duration to no antithrombotic therapy (either placebo or treatment as usual), or compare different antithrombotic treatment regimens. Antithrombotic therapy could include antiplatelet agents (as monotherapy or combination therapy), anticoagulants or a combination.\nData collection and analysis\nTwo review authors independently screened all the titles identified by the searches. We assessed full texts for eligibility for inclusion according to our prespecified selection criteria, extracted data to a proforma and assessed risk of bias using the Cochrane tool for RCTs. We evaluated the certainty of evidence using GRADE. Due to heterogeneity across included participants, interventions and outcomes of eligible trials, it was not possible to perform meta-analyses.\nMain results\nWe included three RCTs (3384 participants). One study investigated the effect of antithrombotic therapy in participants not yet on antithrombotic therapy; two studies investigated the effect of additional antithrombotic therapy, one in a population already taking a single antithrombotic agent and one in a mixed population (participants on an antithrombotic drug and antithrombotic-naive participants). Intervention and follow-up durations varied from 24 weeks to four years.\nJia 2016 was a placebo-controlled trial assessing 24 weeks of treatment with DL-3-n-butylphthalide (a compound with multimodal actions, including a putative antiplatelet effect) in 280 Chinese participants with vascular cognitive impairment caused by subcortical ischaemic small vessel disease, but without dementia. 
There was very low-certainty evidence for a small difference in cognitive test scores favouring treatment with DL-3-n-butylphthalide, as measured by the 12-item Alzheimer’s Disease Assessment Scale-Cognitive subscale (adjusted mean difference −1.07, 95% confidence interval (CI) −2.02 to −0.12), but this difference may not be clinically relevant. There was also very low-certainty evidence for greater proportional improvement measured with the Clinician Interview-Based Impression of Change-Plus Caregiver Input (57% with DL-3-n-butylphthalide versus 42% with placebo; P = 0.01), but there was no difference in other measures of cognition (Mini-Mental State Examination and Clinical Dementia Rating) or function. There was no evidence of a difference in adverse events between treatment groups.\nThe SILENCE RCT compared antithrombotic therapy (aspirin) and placebo during four years of treatment in 83 participants with 'silent brain infarcts' who were on no prior antithrombotic therapy. There was very low-certainty evidence for no difference between groups across various measures of cognition and function, rates of stroke or adverse events.\nThe Secondary Prevention of Subcortical Stroke Study (SPS3) compared dual antiplatelet therapy (clopidogrel plus aspirin) to aspirin alone in 3020 participants with recent lacunar stroke. There was low-certainty evidence of no effect on cognitive outcomes as measured by the Cognitive Abilities Screening Instruments (CASI) assessed annually over five years. There was also low-certainty evidence of no difference in the annual incidence of mild cognitive decline between the two treatment groups (9.7% with dual antiplatelet therapy versus 9.9% with aspirin), or the annual stroke recurrence rate (2.5% with dual antiplatelet therapy versus 2.7% with aspirin). Bleeding risk may be higher with dual antiplatelet therapy (hazard ratio (HR) 2.15, 95% CI 1.49 to 3.11; low certainty evidence), but there may be no significant increase in intracerebral bleeding risk (HR 1.52, 95% CI 0.79 to 2.93; low-certainty evidence).\nNone of the included trials assessed the incidence of new dementia.\nAuthors' conclusions\nWe found no convincing evidence to suggest any clinically relevant cognitive benefit of using antithrombotic therapy in addition to standard treatment in people with cerebral small vessel disease but without dementia, but there may be an increased bleeding risk with this approach. There was marked heterogeneity across the trials and the certainty of the evidence was generally poor.\n\nGiven the provided abstract, please respond to the following query: \"What we did\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature up to 21 July 2021 looking for studies that compared blood-thinning medications given over at least 24 weeks to a comparator, which could have been either usual care or a placebo (dummy) tablet. To make the comparison fair, the studies had to assign people randomly to blood-thinning medications or the comparator treatment. We were interested in the effects on participants' performance in memory and thinking tests, their ability to look after themselves, their risk of developing dementia and stroke, and side effects (especially from bleeding). Because the studies were so different from each other in terms of the type of participants, medications and assessments, we were unable to combine the results in analyses. Rather, we described the results of individual studies and assessed how confident we were in their findings."
}
]
|
query-laysum | 5620 | [
{
"role": "user",
"content": "Abstract: Background\nCancer is a common disease and radiotherapy is one well-established treatment for some solid tumours. Hyperbaric oxygenation therapy (HBOT) may improve the ability of radiotherapy to kill hypoxic cancer cells, so the administration of radiotherapy while breathing hyperbaric oxygen may result in a reduction in mortality and recurrence.\nObjectives\nTo assess the benefits and harms of administering radiotherapy for the treatment of malignant tumours while breathing HBO.\nSearch methods\nIn September 2017 we searched the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Library Issue 8, 2017, MEDLINE, Embase, and the Database of Randomised Trials in Hyperbaric Medicine using the same strategies used in 2011 and 2015, and examined the reference lists of included articles.\nSelection criteria\nRandomised and quasi-randomised studies comparing the outcome of malignant tumours following radiation therapy while breathing HBO versus air or an alternative sensitising agent.\nData collection and analysis\nThree review authors independently evaluated the quality of and extracted data from the included trials.\nMain results\nWe included 19 trials in this review (2286 participants: 1103 allocated to HBOT and 1153 to control).\nFor head and neck cancer, there was an overall reduction in the risk of dying at both one year and five years after therapy (risk ratio (RR) 0.83, 95% confidence interval (CI) 0.70 to 0.98, number needed to treat for an additional beneficial outcome (NNTB) = 11 and RR 0.82, 95% CI 0.69 to 0.98, high-quality evidence), and some evidence of improved local tumour control immediately following irradiation (RR with HBOT 0.58, 95% CI 0.39 to 0.85, moderate-quality evidence due to imprecision). There was a lower incidence of local recurrence of tumour when using HBOT at both one and five years (RR at one year 0.66, 95% CI 0.56 to 0.78, high-quality evidence; RR at five years 0.77, 95% CI 0.62 to 0.95, moderate-quality evidence due to inconsistency between trials). There was also some evidence with regard to the chance of metastasis at five years (RR with HBOT 0.45 95% CI 0.09 to 2.30, single trial moderate quality evidence imprecision). No trials reported a quality of life assessment. Any benefits come at the cost of an increased risk of severe local radiation reactions with HBOT (severe radiation reaction RR 2.64, 95% CI 1.65 to 4.23, high-quality evidence). However, the available evidence failed to clearly demonstrate an increased risk of seizures from acute oxygen toxicity (RR 4.3, 95% CI 0.47 to 39.6, moderate-quality evidence).\nFor carcinoma of the uterine cervix, there was no clear benefit in terms of mortality at either one year or five years (RR with HBOT at one year 0.88, 95% CI 0.69 to 1.11, high-quality evidence; RR at five years 0.95, 95% CI 0.80 to 1.14, moderate-quality evidence due to inconsistency between trials). Similarly, there was no clear evidence of a benefit of HBOT in the reported rate of local recurrence (RR with HBOT at one year 0.82, 95% CI 0.63 to 1.06, high-quality evidence; RR at five years 0.85, 95% CI 0.65 to 1.13, moderate-quality evidence due to inconsistency between trials). We also found no clear evidence for any effect of HBOT on the rate of development of metastases at both two years and five years (two years RR with HBOT 1.05, 95% CI 0.84 to 1.31, high quality evidence; five years RR 0.79, 95% CI 0.50 to 1.26, moderate-quality evidence due to inconsistency). 
There were, however, increased adverse effects with HBOT. The risk of a severe radiation injury at the time of treatment with HBOT was 2.05, 95% CI 1.22 to 3.46, high-quality evidence. No trials reported any failure of local tumour control, quality of life assessments, or the risk of seizures during treatment.\nWith regard to the treatment of urinary bladder cancer, there was no clear evidence of a benefit in terms of mortality from HBOT at one year (RR 0.97, 95% CI 0.74 to 1.27, high-quality evidence), nor any benefit in the risk of developing metastases at two years (RR 2.0, 95% CI 0.58 to 6.91, moderate-quality evidence due to imprecision). No trial reported on failure of local control, local recurrence, quality of life, or adverse effects.\nWhen all cancer types were combined, there was evidence for an increased risk of severe radiation tissue injury during the course of radiotherapy with HBOT (RR 2.35, 95% CI 1.66 to 3.33, high-quality evidence) and of oxygen toxic seizures during treatment (RR with HBOT 6.76, 96% CI 1.16 to 39.31, moderate-quality evidence due to imprecision).\nAuthors' conclusions\nWe found evidence that HBOT improves local tumour control, mortality, and local tumour recurrence for cancers of the head and neck. These benefits may only occur with unusual fractionation schemes. Hyperbaric oxygenation therapy is associated with severe tissue radiation injury. Given the methodological and reporting inadequacies of the included studies, our results demand a cautious interpretation. More research is needed for head and neck cancer, but is probably not justified for uterine cervical or bladder cancer. There is little evidence available concerning malignancies at other anatomical sites.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 19 randomised trials that together included 2286 participants. The dose of oxygen per treatment session in the HBO arm was remarkably uniform, with all trials except one administering external beam radiation therapy at 3 atmospheres absolute (ATA). However, the number of treatments given ranged widely, from two sessions only, separated by three weeks, up to 40 sessions over eight weeks.The total dose of radiation was generally reduced in the HBO participants in order to reduce side effects. The follow-up period varied between trials, from six months to 10 years, although most studies followed participants for between two and five years."
}
]
|
query-laysum | 5621 | [
{
"role": "user",
"content": "Abstract: Background\nUterine fibroids are smooth muscle tumours arising from the uterus. These tumours, although benign, are commonly associated with abnormal uterine bleeding, bulk symptoms and reproductive dysfunction. The importance of progesterone in fibroid pathogenesis supports selective progesterone receptor modulators (SPRMs) as effective treatment. Both biochemical and clinical evidence suggests that SPRMs may reduce fibroid growth and ameliorate symptoms. SPRMs can cause unique histological changes to the endometrium that are not related to cancer, are not precancerous and have been found to be benign and reversible. This review summarises randomised trials conducted to evaluate the effectiveness of SPRMs as a class of medication for treatment of individuals with fibroids.\nObjectives\nTo evaluate the effectiveness and safety of SPRMs for treatment of premenopausal women with uterine fibroids.\nSearch methods\nWe searched the Specialised Register of the Cochrane Gynaecology and Fertility Group, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO, the Cumulative Index to Nursing and Allied Health Literature (CINAHL) and clinical trials registries from database inception to May 2016. We handsearched the reference lists of relevant articles and contacted experts in the field to request additional data.\nSelection criteria\nIncluded studies were randomised controlled trials (RCTs) of premenopausal women with fibroids who were treated for at least three months with a SPRM.\nData collection and analysis\nTwo review authors independently reviewed all eligible studies identified by the search. We extracted data and assessed risk of bias independently using standard forms. We analysed data using mean differences (MDs) or standardised mean differences (SMDs) for continuous data and odds ratios (ORs) for dichotomous data. We performed meta-analyses using the random-effects model. Our primary outcome was change in fibroid-related symptoms.\nMain results\nWe included in the review 14 RCTs with a total of 1215 study participants. We could not extract complete data from three studies. We included in the meta-analysis 11 studies involving 1021 study participants: 685 received SPRMs and 336 were given a control intervention (placebo or leuprolide). Investigators evaluated three SPRMs: mifepristone (five studies), ulipristal acetate (four studies) and asoprisnil (two studies). The primary outcome was change in fibroid-related symptoms (symptom severity, health-related quality of life, abnormal uterine bleeding, pelvic pain). Adverse event reporting in the included studies was limited to SPRM-associated endometrial changes. More than half (8/14) of these studies were at low risk of bias in all domains. The most common limitation of the other studies was poor reporting of methods. The main limitation for the overall quality of evidence was potential publication bias.\nSPRM versus placebo\nSPRM treatment resulted in improvements in fibroid symptom severity (MD -20.04 points, 95% confidence interval (CI) -26.63 to -13.46; four RCTs, 171 women, I2 = 0%; moderate-quality evidence) and health-related quality of life (MD 22.52 points, 95% CI 12.87 to 32.17; four RCTs, 200 women, I2 = 63%; moderate-quality evidence) on the Uterine Fibroid Symptom Quality of Life Scale (UFS-QoL, scale 0 to 100). 
Women treated with an SPRM showed reduced menstrual blood loss on patient-reported bleeding scales, although this effect was small (SMD -1.11, 95% CI -1.38 to -0.83; three RCTs, 310 women, I2 = 0%; moderate-quality evidence), along with higher rates of amenorrhoea (29 per 1000 in the placebo group vs 237 to 961 per 1000 in the SPRM group; OR 82.50, 95% CI 37.01 to 183.90; seven RCTs, 590 women, I2 = 0%; moderate-quality evidence), compared with those given placebo. We could draw no conclusions regarding changes in pelvic pain owing to variability in the estimates. With respect to adverse effects, SPRM-associated endometrial changes were more common after SPRM therapy than after placebo (OR 15.12, 95% CI 6.45 to 35.47; five RCTs, 405 women, I2 = 0%; low-quality evidence).\nSPRM versus leuprolide acetate\nIn comparing SPRM versus other treatments, two RCTs evaluated SPRM versus leuprolide acetate. One RCT reported primary outcomes. No evidence suggested a difference between SPRM and leuprolide groups for improvement in quality of life, as measured by UFS-QoL fibroid symptom severity scores (MD -3.70 points, 95% CI -9.85 to 2.45; one RCT, 281 women; moderate-quality evidence) and health-related quality of life scores (MD 1.06 points, 95% CI -5.73 to 7.85; one RCT, 281 women; moderate-quality evidence). It was unclear whether results showed a difference between SPRM and leuprolide groups for reduction in menstrual blood loss based on the pictorial blood loss assessment chart (PBAC), as confidence intervals were wide (MD 6 points, 95% CI -40.95 to 50.95; one RCT, 281 women; low-quality evidence), or for rates of amenorrhoea (804 per 1000 in the placebo group vs 732 to 933 per 1000 in the SPRM group; OR 1.14, 95% CI 0.60 to 2.16; one RCT, 280 women; moderate-quality evidence). No evidence revealed differences between groups in pelvic pain scores based on the McGill Pain Questionnaire (scale 0 to 45) (MD -0.01 points, 95% CI -2.14 to 2.12; 281 women; moderate-quality evidence). With respect to adverse effects, SPRM-associated endometrial changes were more common after SPRM therapy than after leuprolide treatment (OR 10.45, 95% CI 5.38 to 20.33; 301 women; moderate-quality evidence).\nAuthors' conclusions\nShort-term use of SPRMs resulted in improved quality of life, reduced menstrual bleeding and higher rates of amenorrhoea than were seen with placebo. Thus, SPRMs may provide effective treatment for women with symptomatic fibroids. Evidence derived from one RCT showed no difference between leuprolide acetate and SPRM with respect to improved quality of life and bleeding symptoms. Evidence was insufficient to show whether effectiveness was different between SPRMs and leuprolide. Investigators more frequently observed SPRM-associated endometrial changes in women treated with SPRMs than in those treated with placebo or leuprolide acetate. As noted above, SPRM-associated endometrial changes are benign, are not related to cancer and are not precancerous. Reporting bias may impact the conclusion of this meta-analysis. Well-designed RCTs comparing SPRMs versus other treatments are needed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Review authors included 14 randomised controlled trials (RCTs) (1215 women) but could not obtain data from three studies. In addition, several completed registered trials had not yet reported findings. This review evaluated results of 11 RCTs that included 1021 women with fibroids. Investigators treated women with mifepristone (five studies), ulipristal acetate (four studies) or asoprisnil (two studies) and compared SPRMs with either placebo or leuprolide acetate. More than half of these studies were at low risk of bias in all domains. The most common limitation of the other studies was poor reporting of methods."
}
]
|
query-laysum | 5622 | [
{
"role": "user",
"content": "Abstract: Background\nPoor diet and insufficient physical activity are major risk factors for non-communicable diseases. Developing healthy diet and physical activity behaviors early in life is important as these behaviors track between childhood and adulthood. Parents and other adult caregivers have important influences on children's health behaviors, but whether their involvement in children's nutrition and physical activity interventions contributes to intervention effectiveness is not known.\nObjectives\n• To assess effects of caregiver involvement in interventions for improving children's dietary intake and physical activity behaviors, including those intended to prevent overweight and obesity\n• To describe intervention content and behavior change techniques employed, drawing from a behavior change technique taxonomy developed and advanced by Abraham, Michie, and colleagues (Abraham 2008; Michie 2011; Michie 2013; Michie 2015)\n• To identify content and techniques related to reported outcomes when such information was reported in included studies\nSearch methods\nIn January 2019, we searched CENTRAL, MEDLINE, Embase, 11 other databases, and three trials registers. We also searched the references lists of relevant reports and systematic reviews.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs evaluating the effects of interventions to improve children's dietary intake or physical activity behavior, or both, with children aged 2 to 18 years as active participants and at least one component involving caregivers versus the same interventions but without the caregiver component(s). We excluded interventions meant as treatment or targeting children with pre-existing conditions, as well as caregiver-child units residing in orphanages and school hostel environments.\nData collection and analysis\nWe used standard methodological procedures outlined by Cochrane.\nMain results\nWe included 23 trials with approximately 12,192 children in eligible intervention arms. With the exception of two studies, all were conducted in high-income countries, with more than half performed in North America. Most studies were school-based and involved the addition of healthy eating or physical education classes, or both, sometimes in tandem with other changes to the school environment. The specific intervention strategies used were not always reported completely. However, based on available reports, the behavior change techniques used most commonly in the child-only arm were \"shaping knowledge,\" \"comparison of behavior,\" \"feedback and monitoring,\" and \"repetition and substitution.\" In the child + caregiver arm, the strategies used most commonly included additional \"shaping knowledge\" or \"feedback and monitoring\" techniques, as well as \"social support\" and \"natural consequences.\"\nWe considered all trials to be at high risk of bias for at least one design factor. Seven trials did not contribute any data to analyses. The quality of reporting of intervention content varied between studies, and there was limited scope for meta-analysis. Both validated and non-validated instruments were used to measure outcomes of interest. Outcomes measured and reported differed between studies, with 16 studies contributing data to the meta-analyses. About three-quarters of studies reported their funding sources; no studies reported industry funding. 
We assessed the quality of evidence to be low or very low.\nDietary behavior change interventions with a caregiver component versus interventions without a caregiver component\nSeven studies compared dietary behavior change interventions with and without a caregiver component. At the end of the intervention, we did not detect a difference between intervention arms in children's percentage of total energy intake from saturated fat (mean difference [MD] −0.42%, 95% confidence interval [CI] −1.25 to 0.41, 1 study, n = 207; low-quality evidence) or from sodium intake (MD −0.12 g/d, 95% CI −0.36 to 0.12, 1 study, n = 207; low-quality evidence). No trial in this comparison reported data for children's combined fruit and vegetable intake, sugar-sweetened beverage (SSB) intake, or physical activity levels, nor for adverse effects of interventions.\nPhysical activity interventions with a caregiver component versus interventions without a caregiver component\nSix studies compared physical activity interventions with and without a caregiver component. At the end of the intervention, we did not detect a difference between intervention arms in children's total physical activity (MD 0.20 min/h, 95% CI −1.19 to 1.59, 1 study, n = 54; low-quality evidence) or moderate to vigorous physical activity (MVPA) (standard mean difference [SMD] 0.04, 95% CI −0.41 to 0.49, 2 studies, n = 80; moderate-quality evidence). No trial in this comparison reported data for percentage of children's total energy intake from saturated fat, sodium intake, fruit and vegetable intake, or SSB intake, nor for adverse effects of interventions.\nCombined dietary and physical activity interventions with a caregiver component versus interventions without a caregiver component\nTen studies compared dietary and physical activity interventions with and without a caregiver component. At the end of the intervention, we detected a small positive impact of a caregiver component on children's SSB intake (SMD −0.28, 95% CI −0.44 to −0.12, 3 studies, n = 651; moderate-quality evidence). We did not detect a difference between intervention arms in children's percentage of total energy intake from saturated fat (MD 0.06%, 95% CI −0.67 to 0.80, 2 studies, n = 216; very low-quality evidence), sodium intake (MD 35.94 mg/d, 95% CI −322.60 to 394.47, 2 studies, n = 315; very low-quality evidence), fruit and vegetable intake (MD 0.38 servings/d, 95% CI −0.51 to 1.27, 1 study, n = 134; very low-quality evidence), total physical activity (MD 1.81 min/d, 95% CI −15.18 to 18.80, 2 studies, n = 573; low-quality evidence), or MVPA (MD −0.05 min/d, 95% CI −18.57 to 18.47, 1 study, n = 622; very low-quality evidence). One trial indicated that no adverse events were reported by study participants but did not provide data.\nAuthors' conclusions\nCurrent evidence is insufficient to support the inclusion of caregiver involvement in interventions to improve children's dietary intake or physical activity behavior, or both. For most outcomes, the quality of the evidence is adversely impacted by the small number of studies with available data, limited effective sample sizes, risk of bias, and imprecision. 
To establish the value of caregiver involvement, additional studies measuring clinically important outcomes using valid and reliable measures, employing appropriate design and power, and following established reporting guidelines are needed, as is evidence on how such interventions might contribute to health equity.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The findings of this review suggest that adding a parent or caregiver component to dietary behavior change interventions or physical activity interventions may make little or no difference to children's dietary intake or physical activity levels. For interventions that target both diet and physical activity behaviors, involving a parent or caregiver probably slightly reduces children's sugar-sweetened beverage intake by the end of the intervention. We do not know whether any of these types of interventions result in adverse effects because these data are not available."
}
]
|
query-laysum | 5623 | [
{
"role": "user",
"content": "Abstract: Background\nComplete (full-thickness) rectal prolapse is a lifestyle-altering disability that commonly affects older people. The range of surgical methods available to correct the underlying pelvic floor defects in full-thickness rectal prolapse reflects the lack of consensus regarding the best operation.\nObjectives\nTo assess the effects of different surgical repairs for complete (full-thickness) rectal prolapse.\nSearch methods\nWe searched the Cochrane Incontinence Group Specialised Register up to 3 February 2015; it contains trials from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE in process, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) as well as trials identified through handsearches of journals and conference proceedings. We also searched EMBASE and EMBASE Classic (1947 to February 2015) and PubMed (January 1950 to December 2014), and we specifically handsearched theBritish Journal of Surgery (January 1995 to June 2014), Diseases of the Colon and Rectum (January 1995 to June 2014) and Colorectal Diseases (January 2000 to June 2014), as well as the proceedings of the Association of Coloproctology meetings (January 2000 to December 2014). Finally, we handsearched reference lists of all relevant articles to identify additional trials.\nSelection criteria\nAll randomised controlled trials (RCTs) of surgery for managing full-thickness rectal prolapse in adults.\nData collection and analysis\nTwo reviewers independently selected studies from the literature searches, assessed the methodological quality of eligible trials and extracted data. The four primary outcome measures were the number of patients with recurrent rectal prolapse, number of patients with residual mucosal prolapse, number of patients with faecal incontinence and number of patients with constipation.\nMain results\nWe included 15 RCTs involving 1007 participants in this third review update. One trial compared abdominal with perineal approaches to surgery, three trials compared fixation methods, three trials looked at the effects of lateral ligament division, one trial compared techniques of rectosigmoidectomy, two trials compared laparoscopic with open surgery, and two trials compared resection with no resection rectopexy. One new trial compared rectopexy versus rectal mobilisation only (no rectopexy), performed with either open or laparoscopic surgery. One new trial compared different techniques used in perineal surgery, and another included three comparisons: abdominal versus perineal surgery, resection versus no resection rectopexy in abdominal surgery and different techniques used in perineal surgery.\nThe heterogeneity of the trial objectives, interventions and outcomes made analysis difficult. Many review objectives were covered by only one or two studies with small numbers of participants. Given these caveats, there is insufficient data to say which of the abdominal and perineal approaches are most effective. There were no detectable differences between the methods used for fixation during rectopexy. Division, rather than preservation, of the lateral ligaments was associated with less recurrent prolapse but more postoperative constipation. Laparoscopic rectopexy was associated with fewer postoperative complications and shorter hospital stay than open rectopexy. Bowel resection during rectopexy was associated with lower rates of constipation. 
Recurrence of full-thickness prolapse was greater for mobilisation of the rectum only compared with rectopexy. There were no differences in quality of life for patients who underwent the different kinds of prolapse surgery.\nAuthors' conclusions\nThe lack of high quality evidence on different techniques, together with the small sample size of included trials and their methodological weaknesses, severely limit the usefulness of this review for guiding practice. It is impossible to identify or refute clinically important differences between the alternative surgical operations. Longer follow-up with current studies and larger rigorous trials are needed to improve the evidence base and to define the optimum surgical treatment for full-thickness rectal prolapse.\n\nGiven the provided abstract, please respond to the following query: \"The main findings of the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Whether surgery is performed through a cut in the abdomen or a cut through the anus (known as a perineal approach), it makes no difference with regard to reappearance of the prolapse or appearance of postoperative complications. When surgeons perform the operation through a small hole in the abdomen (laparoscopic or keyhole surgery) recovery may be faster than for open abdominal surgery. When constipation is one of the main symptoms, bowel resection (removing part of the bowel) during prolapse repair may help. There was no difference in the results when different types of repair were used during the perineal (anal) approach."
}
]
|
query-laysum | 5624 | [
{
"role": "user",
"content": "Abstract: Background\nInvasive fungal infections (IFIs) are life-threatening opportunistic infections that occur in immunocompromised or critically ill people. Early detection and treatment of IFIs is essential to reduce morbidity and mortality in these populations. (1→3)-β-D-glucan (BDG) is a component of the fungal cell wall that can be detected in the serum of infected individuals. The serum BDG test is a way to quickly detect these infections and initiate treatment before they become life-threatening. Five different versions of the BDG test are commercially available: Fungitell, Glucatell, Wako, Fungitec-G, and Dynamiker Fungus.\nObjectives\nTo compare the diagnostic accuracy of commercially available tests for serum BDG to detect selected invasive fungal infections (IFIs) among immunocompromised or critically ill people.\nSearch methods\nWe searched MEDLINE (via Ovid) and Embase (via Ovid) up to 26 June 2019. We used SCOPUS to perform a forward and backward citation search of relevant articles. We placed no restriction on language or study design.\nSelection criteria\nWe included all references published on or after 1995, which is when the first commercial BDG assays became available. We considered published, peer-reviewed studies on the diagnostic test accuracy of BDG for diagnosis of fungal infections in immunocompromised people or people in intensive care that used the European Organization for Research and Treatment of Cancer (EORTC) criteria or equivalent as a reference standard. We considered all study designs (case-control, prospective consecutive cohort, and retrospective cohort studies). We excluded case studies and studies with fewer than ten participants. We also excluded animal and laboratory studies. We excluded meeting abstracts because they provided insufficient information.\nData collection and analysis\nWe followed the standard procedures outlined in the Cochrane Handbook for Diagnostic Test Accuracy Reviews. Two review authors independently screened studies, extracted data, and performed a quality assessment for each study. For each study, we created a 2 × 2 matrix and calculated sensitivity and specificity, as well as a 95% confidence interval (CI). We evaluated the quality of included studies using the Quality Assessment of Studies of Diagnostic Accuracy-Revised (QUADAS-2). We were unable to perform a meta-analysis due to considerable variation between studies, with the exception of Candida, so we have provided descriptive statistics such as receiver operating characteristics (ROCs) and forest plots by test brand to show variation in study results.\nMain results\nWe included in the review 49 studies with a total of 6244 participants. About half of these studies (24/49; 49%) were conducted with people who had cancer or hematologic malignancies. Most studies (36/49; 73%) focused on the Fungitell BDG test. This was followed by Glucatell (5 studies; 10%), Wako (3 studies; 6%), Fungitec-G (3 studies; 6%), and Dynamiker (2 studies; 4%). About three-quarters of studies (79%) utilized either a prospective or a retrospective consecutive study design; the remainder used a case-control design.\nBased on the manufacturer's recommended cut-off levels for the Fungitell test, sensitivity ranged from 27% to 100%, and specificity from 0% to 100%. For the Glucatell assay, sensitivity ranged from 50% to 92%, and specificity ranged from 41% to 94%. 
Limited studies have used the Dynamiker, Wako, and Fungitec-G assays, but individual sensitivities and specificities ranged from 50% to 88%, and from 60% to 100%, respectively. Results show considerable differences between studies, even by manufacturer, which prevented a formal meta-analysis. Most studies (32/49; 65%) had no reported high risk of bias in any of the QUADAS-2 domains. The QUADAS-2 domains that had higher risk of bias included participant selection and flow and timing.\nAuthors' conclusions\nWe noted considerable heterogeneity between studies, and these differences precluded a formal meta-analysis. Because of wide variation in the results, it is not possible to estimate the diagnostic accuracy of the BDG test in specific settings. Future studies estimating the accuracy of BDG tests should be linked to the way the test is used in clinical practice and should clearly describe the sampling protocol and the relationship of time of testing to time of diagnosis.\n\nGiven the provided abstract, please respond to the following query: \"Who do the results of this review apply to?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most included studies were performed at academic medical centers or public hospitals in the United States, Germany, and Italy. The most common underlying conditions were cancer (47%) and admission to intensive care (33%). A majority of participants were adults. The overall prevalence of invasive fungal infection was 28%."
}
]
|
query-laysum | 5625 | [
{
"role": "user",
"content": "Abstract: Background\nCaries is one of the most prevalent and preventable conditions worldwide. If identified early enough then non-invasive techniques can be applied, and therefore this review focusses on early caries involving the enamel surface of the tooth. The cornerstone of caries detection is a visual and tactile dental examination, however alternative methods of detection are available, and these include fluorescence-based devices. There are three categories of fluorescence-based device each primarily defined by the different wavelengths they exploit; we have labelled these groups as red, blue, and green fluorescence. These devices could support the visual examination for the detection and diagnosis of caries at an early stage of decay.\nObjectives\nOur primary objectives were to estimate the diagnostic test accuracy of fluorescence-based devices for the detection and diagnosis of enamel caries in children or adults. We planned to investigate the following potential sources of heterogeneity: tooth surface (occlusal, proximal, smooth surface or adjacent to a restoration); single point measurement devices versus imaging or surface assessment devices; and the prevalence of more severe disease in each study sample, at the level of caries into dentine.\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 30 May 2019); Embase Ovid (1980 to 30 May 2019); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 30 May 2019); and the World Health Organization International Clinical Trials Registry Platform (to 30 May 2019). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared a fluorescence-based device with a reference standard. This included prospective studies that evaluated the diagnostic accuracy of single index tests and studies that directly compared two or more index tests. Studies that explicitly recruited participants with caries into dentine or frank cavitation were excluded.\nData collection and analysis\nTwo review authors extracted data independently using a piloted study data extraction form based on the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Sensitivity and specificity with 95% confidence intervals (CIs) were reported for each study. This information has been displayed as coupled forest plots and summary receiver operating characteristic (SROC) plots, displaying the sensitivity-specificity points for each study. We estimated diagnostic accuracy using hierarchical summary receiver operating characteristic (HSROC) methods. We reported sensitivities at fixed values of specificity (median 0.78, upper quartile 0.90).\nMain results\nWe included a total of 133 studies, 55 did not report data in the 2 x 2 format and could not be included in the meta-analysis. 79 studies which provided 114 datasets and evaluated 21,283 tooth surfaces were included in the meta-analysis. There was a high risk of bias for the participant selection domain. The index test, reference standard, and flow and timing domains all showed a high proportion of studies to be at low risk of bias. Concerns regarding the applicability of the evidence were high or unclear for all domains, the highest proportion being seen in participant selection. 
Selective participant recruitment, poorly defined diagnostic thresholds, and in vitro studies being non-generalisable to the clinical scenario of a routine dental examination were the main reasons for these findings. The dominance of in vitro studies also means that the information on how the results of these devices are used to support diagnosis, as opposed to pure detection, was extremely limited. There was substantial variability in the results which could not be explained by the different devices or dentition or other sources of heterogeneity that we investigated. The diagnostic odds ratio (DOR) was 14.12 (95% CI 11.17 to 17.84).\nThe estimated sensitivity, at a fixed median specificity of 0.78, was 0.70 (95% CI 0.64 to 0.75). In a hypothetical cohort of 1000 tooth sites or surfaces, with a prevalence of enamel caries of 57%, obtained from the included studies, the estimated sensitivity of 0.70 and specificity of 0.78 would result in 171 missed tooth sites or surfaces with enamel caries (false negatives) and 95 incorrectly classed as having early caries (false positives).\nWe used meta-regression to compare the accuracy of the different devices for red fluorescence (84 datasets, 14,514 tooth sites), blue fluorescence (21 datasets, 3429 tooth sites), and green fluorescence (9 datasets, 3340 tooth sites) devices. Initially, we allowed threshold, shape, and accuracy to vary according to device type by including covariates in the model. Allowing consistency of shape, removal of the covariates for accuracy had only a negligible effect (Chi2 = 3.91, degrees of freedom (df) = 2, P = 0.14).\nDespite the relatively large volume of evidence we rated the certainty of the evidence as low, downgraded two levels in total, for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the high number of in vitro studies, and inconsistency due to the substantial variability of results.\nAuthors' conclusions\nThere is considerable variation in the performance of these fluorescence-based devices that could not be explained by the different wavelengths of the devices assessed, participant, or study characteristics. Blue and green fluorescence-based devices appeared to outperform red fluorescence-based devices but this difference was not supported by the results of a formal statistical comparison. The evidence base was considerable, but we were only able to include 79 studies out of 133 in the meta-analysis as estimates of sensitivity or specificity values or both could not be extracted or derived. In terms of applicability, any future studies should be carried out in a clinical setting, where difficulties of caries assessment within the oral cavity include plaque, staining, and restorations. Other considerations include the potential of fluorescence devices to be used in combination with other technologies and comparative diagnostic accuracy studies.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There are three different types of fluorescence device that use different types of light which we grouped as red, blue, and green fluorescence. Each device reflects more or less light depending on the amount of tooth decay, and this is measured by the device to give a score which indicates whether there is tooth decay and how severe the decay is. We studied decay on the occlusal surfaces (biting surfaces of the back teeth), the proximal surfaces (tooth surfaces that are next to each other), and the smooth surfaces."
}
]
|
query-laysum | 5626 | [
{
"role": "user",
"content": "Abstract: Background\nDiverticular disease is a common condition that increases in prevalence with age. Recent theories on the pathogenesis of diverticular inflammation have implicated chronic inflammation similar to that seen in ulcerative colitis. Mesalamine, or 5-aminosalicylic acid (5-ASA), is a mainstay of therapy for individuals with ulcerative colitis. Accordingly, 5-ASA has been studied for prevention of recurrent diverticulitis.\nObjectives\nTo evaluate the efficacy of mesalamine (5-ASA) for prevention of recurrent diverticulitis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 8), in the Cochrane Library; Ovid MEDLINE (from 1950 to 9 September 2017); Ovid Embase (from 1974 to 9 September 2017); and two clinical trials registries for ongoing trials - Clinicaltrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform database (9 September 2017).\nWe also searched proceedings from major gastrointestinal conferences - Digestive Disease Week (DDW), United European Gastroenterology Week (UEGW), and the American College of Gastroenterology (ACG) Annual Scientific Meeting - from 2010 to September 2017. In addition, we scanned reference lists from eligible publications, and we contacted corresponding authors to ask about additional trials.\nSelection criteria\nWe included randomised controlled clinical trials comparing the efficacy of 5-ASA versus placebo or another active drug for prevention of recurrent diverticulitis.\nData collection and analysis\nWe used standard methodological procedures as defined by Cochrane. Three review authors assessed eligibility for inclusion. Two review authors selected studies, extracted data, and assessed methodological quality independently. We calculated risk ratios (RRs) for prevention of diverticulitis recurrence using an intention-to-treat principle and random-effects models. We assessed heterogeneity using criteria for Chi2 (P < 0.10) and I2 tests (> 50%). To explore sources of heterogeneity, we conducted a priori subgroup analyses. To assess the robustness of our results, we carried out sensitivity analyses using different summary statistics (RR vs odds ratio (OR)) and meta-analytical models (fixed-effect vs random-effects).\nMain results\nWe included in this review seven studies with a total of 1805 participants.\n\nWe judged all seven studies to have unclear or high risk of bias. Investigators found no evidence of an effect when comparing 5-ASA versus control for prevention of recurrent diverticulitis (31.3% vs 29.8%; RR 0.69, 95% confidence interval (CI) 0.43 to 1.09); very low quality of evidence).\nFive of the seven studies provided data on adverse events of 5-ASA therapy. The most commonly reported side effects were gastrointestinal symptoms (epigastric pain, nausea, and diarrhoea). No significant difference was seen between 5-ASA and control (67.8% vs 64.6%; RR 0.98, 95% CI 0.91 to 1.06; P = 0.63; moderate quality of evidence), nor was significant heterogeneity observed (I2 = 0%; P = 0.50).\nAuthors' conclusions\nThe effects of 5-ASA on recurrence of diverticulitis are uncertain owing to the small number of heterogenous trials included in this review. Rates of recurrent diverticulitis were similar among participants using 5-ASA and control participants. 
Effective medical strategies for prevention of recurrent diverticulitis are needed, and further randomised, double-blinded, placebo-controlled trials of rigorous design are warranted to specify the effects of 5-ASA (mesalamine) in the management of diverticulitis.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our analysis determined that approximately one-third of participants receiving 5-ASA had recurrence of diverticulitis (31.3%). Participants receiving non-5-ASA therapy experienced a similar rate of recurrence (29.8%). Adverse event rates were similar among 5-ASA therapy and comparison therapies. The most commonly reported side effects of 5-ASA therapy were gastrointestinal symptoms (epigastric pain, nausea, and diarrhoea)."
}
]
|
query-laysum | 5627 | [
{
"role": "user",
"content": "Abstract: Background\nAbout 5% to 10% of all deep vein thromboses occur in the upper extremities. Serious complications of upper extremity deep vein thrombosis, such as post-thrombotic syndrome and pulmonary embolism, may in theory be avoided using thrombolysis. No systematic review has assessed the effects of thrombolysis for the treatment of individuals with acute upper extremity deep vein thrombosis.\nObjectives\nTo assess the beneficial and harmful effects of thrombolysis for the treatment of individuals with acute upper extremity deep vein thrombosis.\nSearch methods\nThe Cochrane Vascular Information Specialist (CIS) searched the Specialised Register (29 March 2017), the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2), and three trial registries (World Health Organization International Clinical Trials Registry, ClinicalTrials.gov, and ISRCTN registry) for ongoing and unpublished studies. We additionally searched the registries of the European Medical Agency and the US Food and Drug Administration (December 2016).\nSelection criteria\nWe planned to include randomised clinical trials irrespective of publication type, publication date and language that investigated the effects of thrombolytics added to anticoagulation, thrombolysis versus anticoagulation, or thrombolysis versus any other type of medical intervention for the treatment of acute upper extremity deep vein thrombosis.\nData collection and analysis\nTwo review authors independently screened all records to identify those that met inclusion criteria. We planned to use the standard methodological procedures expected by Cochrane. We planned to use trial domains to assess the risks of systematic error (bias) in the trials. We planned to conduct trial sequential analyses to control for the risk of random errors and to assess the robustness of our conclusions. We planned to consider a P value of 0.025 or less as statistically significant. We planned to assess the quality of the evidence using the GRADE approach. Our primary outcomes were severe bleeding, pulmonary embolism, and all-cause mortality.\nMain results\nWe found no trials eligible for inclusion. We also identified no ongoing trials.\nAuthors' conclusions\nThere is currently insufficient evidence from which to draw conclusion on the benefits or harms of thrombolysis for the treatment of individuals with acute upper extremity deep vein thrombosis as an add-on therapy to anticoagulation, alone compared with anticoagulation, or alone compared with any other type of medical intervention. Large randomised clinical trials with a low risk of bias are warranted. They should focus on clinical outcomes and not solely on surrogate measures.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "A blood clot that forms in the deep blood vessels of the arms, blocking the passage of blood, is referred to as an acute upper extremity deep vein thrombosis. An acute upper extremity deep vein thrombosis currently affects 4 to 10 per 100,000 people in the general population. One of the most serious complications of an upper extremity deep vein thrombosis is a pulmonary embolism, which is a blockage of one of the major blood vessels in the lung. This can be a life-threatening condition. Post-thrombotic syndrome, in which the blood clot causes permanent swelling, skin colour changes, sores or ulcers, and decreased function of the affected limb, is another serious complication that can impact a person's quality of life.\nThrombolysis aims to break down the blood clot with the use of drugs infused directly into a blood vessel. A previous Cochrane Review looked at the beneficial and harmful effects of thrombolysis for the treatment of acute deep vein thrombosis of the lower extremities (e.g. the legs). While thrombolysis was found to lower the risk of post-thrombotic syndrome, it had no effect on the risk of death, risk of a blood clot travelling to the lungs or brain, (where it can cause a stroke), or the risk of bleeding inside the skull. In this present review, we attempted to assess the benefits and harms of thrombolysis for the treatment of acute upper extremity deep vein thrombosis."
}
]
|
query-laysum | 5628 | [
{
"role": "user",
"content": "Abstract: Background\nCarbon dioxide (CO2) measurement is a fundamental evaluation in a neonatal intensive care unit (NICU), as both low and high values of CO2 might have detrimental effects on neonatal morbidity and mortality. Though measurement of CO2 in the arterial blood gas is the most accurate way to assess the amount of CO2, it requires blood sampling and it does not provide a continuous monitoring of CO2.\nObjectives\nTo assess whether the use of continuous transcutaneous CO2 (tcCO2) monitoring in newborn infants reduces mortality and improves short and long term respiratory and neurodevelopmental outcomes.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 11), MEDLINE via PubMed (1966 to November 1, 2015), EMBASE (1980 to November 1, 2015), and CINAHL (1982 to November 1, 2015). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials.\nSelection criteria\nRandomized, quasi-randomized and cluster randomized controlled trials comparing different strategies regarding tcCO2 monitoring in newborns. Three comparisons were considered, that is, continuous tcCO2 monitoring versus 1) any intermittent modalities to measure CO2; 2) other continuous CO2 monitoring; and 3) with or without intermittent CO2 monitoring.\nData collection and analysis\nWe used the standard methods of the Cochrane Neonatal Review Group. Two review authors independently assessed studies identified by the search strategy for inclusion.\nMain results\nOur search strategy yielded 106 references. Two review authors independently assessed all references for inclusion. We did not find any completed studies for inclusion, nor ongoing trials.\nAuthors' conclusions\nThere was no evidence to recommend or refute the use of transcutaneous CO2 monitoring in neonates. Well-designed, adequately powered randomized controlled studies are necessary to address efficacy and safety of transcutaneous CO2 monitoring in neonates.\n\nGiven the provided abstract, please respond to the following query: \"Background.\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "CO2measurement is a very important procedure because abnormal values of CO2may have detrimental effects in the sick newborn. There are three main methods to assess CO2: in the arterial blood gas, in the air exhaled from the body, and through the skin, that is, transcutaneously. The latter is minimally invasive and allows continuous monitoring. The evidence is current to November 2015."
}
]
|
query-laysum | 5629 | [
{
"role": "user",
"content": "Abstract: Background\nFollowing cataract surgery and intraocular lens (IOL) implantation, loss of accommodation or postoperative presbyopia occurs and remains a challenge. Standard monofocal IOLs correct only distance vision; patients require spectacles for near vision. Accommodative IOLs have been designed to overcome loss of accommodation after cataract surgery.\nObjectives\nTo define (a) the extent to which accommodative IOLs improve unaided near visual function, in comparison with monofocal IOLs; (b) the extent of compromise to unaided distance visual acuity; c) whether a higher rate of additional complications is associated the use of accommodative IOLs.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2013, Issue 9), Ovid MEDLINE, Ovid MEDLINE in-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily Update, Ovid OLDMEDLINE (January 1946 to October 2013), EMBASE (January 1980 to October 2013), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to October 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrial.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 10 October 2013.\nSelection criteria\nWe include randomised controlled trials (RCTs) which compared implantation of accommodative IOLs to implantation of monofocal IOLs in cataract surgery.\nData collection and analysis\nTwo authors independently screened search results, assessed risk of bias and extracted data. All included trials used the 1CU accommodative IOL (HumanOptics, Erlangen, Germany) for their intervention group. One trial had an additional arm with the AT-45 Crystalens accommodative IOL (Eyeonics Vision). We performed a separate analysis comparing 1CU and AT-45 IOL.\nMain results\nWe included four RCTs, including 229 participants (256 eyes), conducted in Germany, Italy and the UK. The age range of participants was 21 to 87 years. All studies included people who had bilateral cataracts with no pre-existing ocular pathologies. We judged all studies to be at high risk of performance bias. We graded two studies with high risk of detection bias and one study with high risk of selection bias.\nParticipants who received the accommodative IOLs achieved better distance-corrected near visual acuity (DCNVA) at six months (mean difference (MD) -3.10 Jaeger units; 95% confidence intervals (CI) -3.36 to -2.83, 2 studies, 106 people, 136 eyes, moderate quality evidence). Better DCNVA was seen in the accommodative lens group at 12 to 18 months in the three trials that reported this time point but considerable heterogeneity of effect was seen, ranging from 1.3 (95% CI 0.98 to 1.68; 20 people, 40 eyes) to 6 (95% CI 4.15 to 7.85; 51 people, 51 eyes) Jaeger units and 0.12 (95% CI 0.05 to 0.19; 40 people, binocular) logMAR improvement (low quality evidence). The relative effect of the lenses on corrected distant visual acuity (CDVA) was less certain. At six months there was a standardised mean difference of -0.04 standard deviations (95% CI -0.37 to 0.30, 2 studies, 106 people, 136 eyes, low quality evidence). 
At long-term follow-up there was heterogeneity of effect with 18-month data in two studies showing that CDVA was better in the monofocal group (MD 0.12 logMAR; 95% CI 0.07 to 0.16, 2 studies, 70 people,100 eyes) and one study which reported data at 12 months finding similar CDVA in the two groups (-0.02 logMAR units, 95% CI -0.06 to 0.02, 51 people) (low quality evidence).\nThe relative effect of the lenses on reading speed and spectacle independence was uncertain, The average reading speed was 11.6 words per minute more in the accommodative lens group but the 95% confidence intervals ranged from 12.2 words less to 35.4 words more (1 study, 40 people, low quality evidence). People with accommodative lenses were more likely to be spectacle-independent but the estimate was very uncertain (risk ratio (RR) 8.18; 95% CI 0.47 to 142.62, 1 study, 40 people, very low quality evidence).\nMore cases of posterior capsule opacification (PCO) were seen in accommodative lenses but the effect of the lenses on PCO was uncertain (Peto odds ratio (OR) 2.12; 95% CI 0.45 to 10.02, 91 people, 2 studies, low quality evidence). People in the accommodative lens group were more likely to require laser capsulotomy (Peto OR 7.96; 95% CI 2.49 to 25.45, 2 studies, 60 people, 80 eyes, low quality evidence). Glare was reported less frequently with accommodative lenses but the relative effect of the lenses on glare was uncertain (RR any glare 0.78; 95% CI 0.32 to 1.90, 1 study, 40 people, and RR moderate/severe glare 0.45; 95% CI 0.04 to 4.60, low quality evidence).\nAuthors' conclusions\nThere is moderate-quality evidence that study participants who received accommodative IOLs had a small gain in near visual acuity after six months. There is some evidence that distance visual acuity with accommodative lenses may be worse after 12 months but due to low quality of evidence and heterogeneity of effect, the evidence for this is not clear-cut. People receiving accommodative lenses had more PCO which may be associated with poorer distance vision. However, the effect of the lenses on PCO was uncertain.\nFurther research is required to improve the understanding of how accommodative IOLs may affect near visual function, and whether they provide any durable gains. Additional trials, with longer follow-up, comparing different accommodative IOLs, multifocal IOLs and monofocal IOLs, would help map out their relative efficacy, and associated late complications. Research is needed on control over capsular fibrosis postimplantation.\nRisks of bias, heterogeneity of outcome measures and study designs used, and the dominance of one design of accommodative lens in existing trials (the HumanOptics 1CU) mean that these results should be interpreted with caution. They may not be applicable to other accommodative IOL designs.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Accommodation is the ability of the eye to focus on both distant and near objects.\nAccommodation is achieved through the contraction of ciliary muscles, which results in an increase in curvature and a forward shift of the natural lens in the eye. Accommodation declines with increasing age due to a decrease in lens elasticity and a reduction in ciliary muscle contraction, resulting in difficulty in near vision (presbyopia). This is a problem for most people in their 40s or 50s.\nFor best optical performance, the lens must be transparent. Cataract is the clouding of the human lens. It is more common with increasing age, and is a common cause of visual impairment. Fortunately, cataract is treatable by a surgical procedure in which the natural lens is removed through a small incision. Once all lens material is removed, an artificial lens, known as an intraocular lens (IOL) is implanted into the eye to lie in the original position of the removed natural lens.\nAll functions of the natural lens are preserved by an IOL, with the exception of accommodation. Standard IOLs, known as monofocal IOLs, allow only distant objects to be focused and seen clearly. Patients require spectacles for near vision. This problem after cataract surgery remains a challenge for ophthalmologists. To overcome the loss of accommodation after cataract surgery, various strategies have been tried with variable success.\nAccommodative IOLs have been designed to restore accommodation. The aim of this systematic review is to help define the extent to which accommodative IOLs improve near vision in comparison with standard monofocal IOLs."
}
]
|
query-laysum | 5630 | [
{
"role": "user",
"content": "Abstract: Background\nExercise training is commonly recommended for individuals with fibromyalgia. This review is one of a series of reviews about exercise training for fibromyalgia that will replace the review titled \"Exercise for treating fibromyalgia syndrome\", which was first published in 2002.\nObjectives\nTo evaluate the benefits and harms of mixed exercise training protocols that include two or more types of exercise (aerobic, resistance, flexibility) for adults with fibromyalgia against control (treatment as usual, wait list control), non exercise (e.g. biofeedback), or other exercise (e.g. mixed versus flexibility) interventions.\nSpecific comparisons involving mixed exercise versus other exercises (e.g. resistance, aquatic, aerobic, flexibility, and whole body vibration exercises) were not assessed.\nSearch methods\nWe searched the Cochrane Library, MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Thesis and Dissertations Abstracts, the Allied and Complementary Medicine Database (AMED), the Physiotherapy Evidence Databese (PEDro), Current Controlled Trials (to 2013), WHO ICTRP, and ClinicalTrials.gov up to December 2017, unrestricted by language, to identify all potentially relevant trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) in adults with a diagnosis of fibromyalgia that compared mixed exercise interventions with other or no exercise interventions. Major outcomes were health-related quality of life (HRQL), pain, stiffness, fatigue, physical function, withdrawals, and adverse events.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, extracted data, and assessed risk of bias and the quality of evidence for major outcomes using the GRADE approach.\nMain results\nWe included 29 RCTs (2088 participants; 98% female; average age 51 years) that compared mixed exercise interventions (including at least two of the following: aerobic or cardiorespiratory, resistance or muscle strengthening exercise, and flexibility exercise) versus control (e.g. wait list), non-exercise (e.g. biofeedback), and other exercise interventions. Design flaws across studies led to selection, performance, detection, and selective reporting biases. We prioritised the findings of mixed exercise compared to control and present them fully here.\nTwenty-one trials (1253 participants) provided moderate-quality evidence for all major outcomes but stiffness (low quality). With the exception of withdrawals and adverse events, major outcome measures were self-reported and expressed on a 0 to 100 scale (lower values are best, negative mean differences (MDs) indicate improvement; we used a clinically important difference between groups of 15% relative difference). Results for mixed exercise versus control show that mean HRQL was 56 and 49 in the control and exercise groups, respectively (13 studies; 610 participants) with absolute improvement of 7% (3% better to 11% better) and relative improvement of 12% (6% better to 18% better). Mean pain was 58.6 and 53 in the control and exercise groups, respectively (15 studies; 832 participants) with absolute improvement of 5% (1% better to 9% better) and relative improvement of 9% (3% better to 15% better). Mean fatigue was 72 and 59 points in the control and exercise groups, respectively (1 study; 493 participants) with absolute improvement of 13% (8% better to 18% better) and relative improvement of 18% (11% better to 24% better). 
Mean stiffness was 68 and 61 in the control and exercise groups, respectively (5 studies; 261 participants) with absolute improvement of 7% (1% better to 12% better) and relative improvement of 9% (1% better to 17% better). Mean physical function was 49 and 38 in the control and exercise groups, respectively (9 studies; 477 participants) with absolute improvement of 11% (7% better to 15% better) and relative improvement of 22% (14% better to 30% better). Pooled analysis resulted in a moderate-quality risk ratio for all-cause withdrawals with similar rates across groups (11 per 100 and 12 per 100 in the control and intervention groups, respectively) (19 studies; 1065 participants; risk ratio (RR) 1.02, 95% confidence interval (CI) 0.69 to 1.51) with an absolute change of 1% (3% fewer to 5% more) and a relative change of 11% (28% fewer to 47% more). Across all 21 studies, no injuries or other adverse events were reported; however some participants experienced increased fibromyalgia symptoms (pain, soreness, or tiredness) during or after exercise. However due to low event rates, we are uncertain of the precise risks with exercise. Mixed exercise may improve HRQL and physical function and may decrease pain and fatigue; all-cause withdrawal was similar across groups, and mixed exercises may slightly reduce stiffness. For fatigue, physical function, HRQL, and stiffness, we cannot rule in or out a clinically relevant change, as the confidence intervals include both clinically important and unimportant effects.\nWe found very low-quality evidence on long-term effects. In eight trials, HRQL, fatigue, and physical function improvement persisted at 6 to 52 or more weeks post intervention but improvements in stiffness and pain did not persist. Withdrawals and adverse events were not measured.\nIt is uncertain whether mixed versus other non-exercise or other exercise interventions improve HRQL and physical function or decrease symptoms because the quality of evidence was very low. The interventions were heterogeneous, and results were often based on small single studies. Adverse events with these interventions were not measured, and thus uncertainty surrounds the risk of adverse events.\nAuthors' conclusions\nCompared to control, moderate-quality evidence indicates that mixed exercise probably improves HRQL, physical function, and fatigue, but this improvement may be small and clinically unimportant for some participants; physical function shows improvement in all participants. Withdrawal was similar across groups. Low-quality evidence suggests that mixed exercise may slightly improve stiffness. Very low-quality evidence indicates that we are 'uncertain' whether the long-term effects of mixed exercise are maintained for all outcomes; all-cause withdrawals and adverse events were not measured. Compared to other exercise or non-exercise interventions, we are uncertain about the effects of mixed exercise because we found only very low-quality evidence obtained from small, very heterogeneous trials. Although mixed exercise appears to be well tolerated (similar withdrawal rates across groups), evidence on adverse events is scarce, so we are uncertain about its safety. We downgraded the evidence from these trials due to imprecision (small trials), selection bias (e.g. allocation), blinding of participants and care providers or outcome assessors, and selective reporting.\n\nGiven the provided abstract, please respond to the following query: \"Stiffness\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After 16 weeks, people who exercised were 7% less stiff (1% better 1 to 12% better) or improved by 7 points on a 100 point scale.\nPeople who exercised rated their stiffness at 61 points.\nPeople in the control group rated their stiffness at 68 points."
}
]
|
query-laysum | 5631 | [
{
"role": "user",
"content": "Abstract: Background\nResilience can be defined as maintaining or regaining mental health during or after significant adversities such as a potentially traumatising event, challenging life circumstances, a critical life transition or physical illness. Healthcare students, such as medical, nursing, psychology and social work students, are exposed to various study- and work-related stressors, the latter particularly during later phases of health professional education. They are at increased risk of developing symptoms of burnout or mental disorders. This population may benefit from resilience-promoting training programmes.\nObjectives\nTo assess the effects of interventions to foster resilience in healthcare students, that is, students in training for health professions delivering direct medical care (e.g. medical, nursing, midwifery or paramedic students), and those in training for allied health professions, as distinct from medical care (e.g. psychology, physical therapy or social work students).\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, 11 other databases and three trial registries from 1990 to June 2019. We checked reference lists and contacted researchers in the field. We updated this search in four key databases in June 2020, but we have not yet incorporated these results.\nSelection criteria\nRandomised controlled trials (RCTs) comparing any form of psychological intervention to foster resilience, hardiness or post-traumatic growth versus no intervention, waiting list, usual care, and active or attention control, in adults (18 years and older), who are healthcare students. Primary outcomes were resilience, anxiety, depression, stress or stress perception, and well-being or quality of life. Secondary outcomes were resilience factors.\nData collection and analysis\nTwo review authors independently selected studies, extracted data, assessed risks of bias, and rated the certainty of the evidence using the GRADE approach (at post-test only).\nMain results\nWe included 30 RCTs, of which 24 were set in high-income countries and six in (upper- to lower-) middle-income countries. Twenty-two studies focused solely on healthcare students (1315 participants; number randomised not specified for two studies), including both students in health professions delivering direct medical care and those in allied health professions, such as psychology and physical therapy. Half of the studies were conducted in a university or school setting, including nursing/midwifery students or medical students. Eight studies investigated mixed samples (1365 participants), with healthcare students and participants outside of a health professional study field.\nParticipants mainly included women (63.3% to 67.3% in mixed samples) from young adulthood (mean age range, if reported: 19.5 to 26.83 years; 19.35 to 38.14 years in mixed samples). Seventeen of the studies investigated group interventions of high training intensity (11 studies; > 12 hours/sessions), that were delivered face-to-face (17 studies). Of the included studies, eight compared a resilience training based on mindfulness versus unspecific comparators (e.g. wait-list).\nThe studies were funded by different sources (e.g. universities, foundations), or a combination of various sources (four studies). 
Seven studies did not specify a potential funder, and three studies received no funding support.\nRisk of bias was high or unclear, with main flaws in performance, detection, attrition and reporting bias domains.\nAt post-intervention, very-low certainty evidence indicated that, compared to controls, healthcare students receiving resilience training may report higher levels of resilience (standardised mean difference (SMD) 0.43, 95% confidence interval (CI) 0.07 to 0.78; 9 studies, 561 participants), lower levels of anxiety (SMD −0.45, 95% CI −0.84 to −0.06; 7 studies, 362 participants), and lower levels of stress or stress perception (SMD −0.28, 95% CI −0.48 to −0.09; 7 studies, 420 participants). Effect sizes varied between small and moderate. There was little or no evidence of any effect of resilience training on depression (SMD −0.20, 95% CI −0.52 to 0.11; 6 studies, 332 participants; very-low certainty evidence) or well-being or quality of life (SMD 0.15, 95% CI −0.14 to 0.43; 4 studies, 251 participants; very-low certainty evidence).\nAdverse effects were measured in four studies, but data were only reported for three of them. None of the three studies reported any adverse events occurring during the study (very-low certainty of evidence).\nAuthors' conclusions\nFor healthcare students, there is very-low certainty evidence for the effect of resilience training on resilience, anxiety, and stress or stress perception at post-intervention.\nThe heterogeneous interventions, the paucity of short-, medium- or long-term data, and the geographical distribution restricted to high-income countries limit the generalisability of results. Conclusions should therefore be drawn cautiously. Since the findings suggest positive effects of resilience training for healthcare students with very-low certainty evidence, high-quality replications and improved study designs (e.g. a consensus on the definition of resilience, the assessment of individual stressor exposure, more attention controls, and longer follow-up periods) are clearly needed.\n\nGiven the provided abstract, please respond to the following query: \"Search dates\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to June 2019. The results of an updated search of four key databases in June 2020 have not yet been included in the review."
}
]
|
query-laysum | 5632 | [
{
"role": "user",
"content": "Abstract: Background\nAntipsychotic drugs are the mainstay treatment for schizophrenia, yet they are associated with diverse and potentially dose-related side effects which can reduce quality of life. For this reason, the lowest possible doses of antipsychotics are generally recommended, but higher doses are often used in clinical practice. It is still unclear if and how antipsychotic doses could be reduced safely in order to minimise the adverse-effect burden without increasing the risk of relapse.\nObjectives\nTo assess the efficacy and safety of reducing antipsychotic dose compared to continuing the current dose for people with schizophrenia.\nSearch methods\nWe conducted a systematic search on 10 February 2021 at the Cochrane Schizophrenia Group's Study-Based Register of Trials, which is based on CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO, PubMed, ClinicalTrials.gov, ISRCTN, and WHO ICTRP. We also inspected the reference lists of included studies and previous reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing any dose reduction against continuation in people with schizophrenia or related disorders who were stabilised on their current antipsychotic treatment.\nData collection and analysis\nAt least two review authors independently screened relevant records for inclusion, extracted data from eligible studies, and assessed the risk of bias using RoB 2. We contacted study authors for missing data and additional information. Our primary outcomes were clinically important change in quality of life, rehospitalisations and dropouts due to adverse effects; key secondary outcomes were clinically important change in functioning, relapse, dropouts for any reason, and at least one adverse effect. We also examined scales measuring symptoms, quality of life, and functioning as well as a comprehensive list of specific adverse effects. We pooled outcomes at the endpoint preferably closest to one year. We evaluated the certainty of the evidence using the GRADE approach.\nMain results\nWe included 25 RCTs, of which 22 studies provided data with 2635 participants (average age 38.4 years old). The median study sample size was 60 participants (ranging from 18 to 466 participants) and length was 37 weeks (ranging from 12 weeks to 2 years). There were variations in the dose reduction strategies in terms of speed of reduction (i.e. gradual in about half of the studies (within 2 to 16 weeks) and abrupt in the other half), and in terms of degree of reduction (i.e. median planned reduction of 66% of the dose up to complete withdrawal in three studies). We assessed risk of bias across outcomes predominantly as some concerns or high risk.\nNo study reported data on the number of participants with a clinically important change in quality of life or functioning, and only eight studies reported continuous data on scales measuring quality of life or functioning. 
There was no difference between dose reduction and continuation on scales measuring quality of life (standardised mean difference (SMD) −0.01, 95% confidence interval (CI) −0.17 to 0.15, 6 RCTs, n = 719, I2 = 0%, moderate certainty evidence) and scales measuring functioning (SMD 0.03, 95% CI −0.10 to 0.17, 6 RCTs, n = 966, I2 = 0%, high certainty evidence).\nDose reduction in comparison to continuation may increase the risk of rehospitalisation based on data from eight studies with estimable effect sizes; however, the 95% CI does not exclude the possibility of no difference (risk ratio (RR) 1.53, 95% CI 0.84 to 2.81, 8 RCTs, n = 1413, I2 = 59% (moderate heterogeneity), very low certainty evidence). Similarly, dose reduction increased the risk of relapse based on data from 20 studies (RR 2.16, 95% CI 1.52 to 3.06, 20 RCTs, n = 2481, I2 = 70% (substantial heterogeneity), low certainty evidence).\nMore participants in the dose reduction group in comparison to the continuation group left the study early due to adverse effects (RR 2.20, 95% CI 1.39 to 3.49, 6 RCTs with estimable effect sizes, n = 1079, I2 = 0%, moderate certainty evidence) and for any reason (RR 1.38, 95% CI 1.05 to 1.81, 12 RCTs, n = 1551, I2 = 48% (moderate heterogeneity), moderate certainty evidence).\nLastly, there was no difference between the dose reduction and continuation groups in the number of participants with at least one adverse effect based on data from four studies with estimable effect sizes (RR 1.03, 95% CI 0.94 to 1.12, 5 RCTs, n = 998 (4 RCTs, n = 980 with estimable effect sizes), I2 = 0%, moderate certainty evidence).\nAuthors' conclusions\nThis review synthesised the latest evidence on the reduction of antipsychotic doses for stable individuals with schizophrenia. There was no difference between dose reduction and continuation groups in quality of life, functioning, and number of participants with at least one adverse effect. However, there was a higher risk for relapse and dropouts, and potentially for rehospitalisations, with dose reduction. Of note, the majority of the trials focused on relapse prevention rather potential beneficial outcomes on quality of life, functioning, and adverse effects, and in some studies there was rapid and substantial reduction of doses. Further well-designed RCTs are therefore needed to provide more definitive answers.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 25 studies involving a total of 2721 participants with schizophrenia. Twenty-two studies (2635 participants) provided data for the analyses. The studies lasted between 12 weeks and 2 years. They were conducted all over the world, including in the USA, the UK, Europe, and Asia. Fourteen studies were sponsored by public institutions, five by pharmaceutical companies, two by public institutions and pharmaceutical companies jointly, and four studies did not provide clear information on funding.\nWe found that dose reduction:\n- probably has little to no effect on quality of life;\n- makes no difference in readmission to hospital, but we are very uncertain about the results;\n- probably increases the number of participants leaving the study early due to side effects;\n- has little to no effect on functioning;\n- may increase the number of participants with a relapse;\n- probably increases the number of participants leaving the study early for any reason;\n- probably has little to no effect on the number of participants with at least one side effect."
}
]
|
query-laysum | 5633 | [
{
"role": "user",
"content": "Abstract: Background\nHistologic assessment of mucosal disease activity has been increasingly used in clinical trials of treatment for Crohn's disease. However, the operating properties of the currently existing histologic scoring indices remain unclear.\nObjectives\nA systematic review was undertaken to evaluate the development and operating characteristics of available histologic disease activity indices in Crohn's disease.\nSearch methods\nElectronic searches of MEDLINE, EMBASE, PubMed, and the Cochrane Library (CENTRAL) databases from inception to 20 July 2016 were supplemented by manual reviews of bibliographies and abstracts submitted to major gastroenterology meetings (Digestive Disease Week, United European Gastroenterology Week, European Crohn's and Colitis Organisation).\nSelection criteria\nAny study design (e.g. randomised controlled trial, cohort study, case series) that evaluated a histologic disease activity index in patients with Crohn’s disease was considered for inclusion. Study participants included adult patients (> 16 years), diagnosed with Crohn’s disease using conventional clinical, radiographic or endoscopic criteria.\nData collection and analysis\nTwo authors independently reviewed the titles and abstracts of the studies identified from the literature search. The full text of potentially relevant citations were reviewed for inclusion and the study investigators were contacted as needed for clarification. Any disagreements regarding study eligibility were resolved by discussion and consensus with a third author.\nTwo authors independently extracted and recorded data using a standard form. The following data were recorded from each eligible study: number of patients enrolled; number of patients per treatment arm; patient characteristics: age and gender distribution; description of histologic disease activity index utilized; and outcomes such as content validity, construct validity, criterion validity, responsiveness, intra-rater reliability, inter-rater reliability, and feasibility.\nMain results\nSixteen reports of 14 studies describing 14 different numerical histological indices fulfilled the inclusion criteria.\nInter-rater reliability was assessed in one study. For the Naini and Cortina Score, estimates of correlation were 'almost perfect', ranging from r = 0.94 to 0.96. The methodological quality of this study with respect to reliability was 'good'.\nWith respect to validity, correlation estimates between various histological scoring systems and Crohn's disease activity as measured by objective markers of inflammation (including C-reactive protein, erythrocyte sedimentation rate, fecal calprotectin and fecal lactoferrin); endoscopic disease activity scores; clinical disease activity scores; and quality of life questionnaires were reported. Comparisons between histologic scoring indices and endoscopic scoring indices ranged from no correlation to 'substantial' (r = 0.779). The methodological quality of the studies that explored validity ranged form 'poor' to 'good'.\nResponsiveness data were available in seven studies. After subjects were administered a treatment of known efficacy, statistically significant change in the index score was demonstrated in five studies with respect to six indices. Two studies failed to indicate whether there was statistically significant change in the index score post-treatment. 
With regard to methodological quality, six of the studies were rated as 'poor' and one of the studies was rated as 'fair'.\nFeasibility was assessed by one study. The Naini and Cortina Score was shown to be simple to use and feasible for every given case.\nAuthors' conclusions\nCurrently there is no fully validated histological scoring index for evaluation of Crohn's disease activity. Development of a validated histological scoring index for Crohn’s disease is a clinical and research priority.\n\nGiven the provided abstract, please respond to the following query: \"What did the researchers investigate?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is important that histological scoring indices measure what they are supposed to measure (validity); that they detect change after treatment (responsiveness); that the scores are consistently reproducible (reliability); and that they can be easily utilized (feasibility). The researchers investigated whether studies have assessed the validity, responsiveness, reliability and feasibility of histological scoring indices."
}
]
|
query-laysum | 5634 | [
{
"role": "user",
"content": "Abstract: Background\nDespite improvements in the management of rheumatoid arthritis (RA), pain control is often inadequate even when inflammation is well controlled.\nObjectives\nTo assess the efficacy and safety of opioid analgesics for treating pain in patients with RA.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE and EMBASE for studies to May 2010. We also searched the 2008 to 2009 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) abstracts and performed a handsearch of the reference lists of articles.\nSelection criteria\nStudies were included if they were randomized or quasi-randomized controlled trials (RCTs or CCTs) which compared opioid therapy to another therapy (active or placebo) for pain in patients with RA. Outcomes of interest were pain, adverse effects, function and quality of life.\nData collection and analysis\nTwo review authors independently selected the studies for inclusion, extracted the data, and performed a risk of bias assessment.\nMain results\nEleven studies (672 participants) were included in the review. Four studies assessed the efficacy of single doses of various opioid and non-opioid analgesics; a pooled analysis of these studies was not performed but in each study opioids reduced pain more than placebo. There were no differences between analgesic drugs in these studies.\nSeven studies were between one and six weeks in duration and assessed six different oral opioids (dextropropoxyphene, codeine, tramadol, tilidine, pentazocine, morphine), either alone or combined with non-opioid analgesics. The only strong opioid investigated was controlled-release morphine sulphate, in a single study with 20 participants. Six studies compared an opioid to placebo. Opioids were superior to placebo in patient-reported global impression of change (3 studies, 324 participants: relative risk (RR) 1.44, 95% CI 1.03 to 2.03) but not for the number of withdrawals due to inadequate analgesia (4 studies, 345 participants: RR 0.82, 95% CI 0.34 to 2.0). Adverse events (most commonly nausea, vomiting, dizziness and constipation) were more frequent in patients receiving opioids compared to placebo (4 studies, 371 participants: odds ratio 3.90, 95% CI 2.31 to 6.56); the pooled risk ratio for withdrawal due to adverse events was 2.67 (3 studies, 331 participants: 95% CI 0.52 to 13.75). One study compared an opioid (codeine with paracetamol) to an NSAID (diclofenac) and found no difference in efficacy or safety between interventions.\nAuthors' conclusions\nThere is limited evidence that weak oral opioids may be effective analgesics for some patients with RA, but adverse effects are common and may offset the benefits of this class of medications. There is insufficient evidence to draw conclusions regarding the use of weak opioids for longer than six weeks, or the role of strong opioids.\n\nGiven the provided abstract, please respond to the following query: \"The review shows that in people with rheumatoid arthritis treated with weak opioids for up to six weeks\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- Weak opioids may reduce pain compared with placebo\n- Treatment with weak opioids may result in more side-effects compared with placebo.\nThere were no studies in people with rheumatoid arthritis that looked at the effects of weak opioids taken for more than six weeks. There were not enough studies of strong opioids to draw conclusions about their effects in rheumatoid arthritis."
}
]
|
query-laysum | 5635 | [
{
"role": "user",
"content": "Abstract: Background\nVenous thromboembolism (VTE), which comprises deep vein thrombosis (DVT) and pulmonary embolism (PE), is the leading cause of preventable death in hospitalised people and the third most common cause of mortality in surgical patients. People undergoing bariatric surgery have the additional risk factor of being overweight. Although VTE prophylaxis in surgical patients is well established, the best way to prevent VTE in those undergoing bariatric surgery is less clear.\nObjectives\nTo evaluate the benefits and harms of pharmacological interventions (alone or in combination) on venous thromboembolism and other health outcomes in people undergoing bariatric surgery compared to the same pharmacological intervention administered at a different dose or frequency, the same pharmacological intervention or started at a different time point, another pharmacological intervention, no intervention or placebo.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 1 November 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs in males and females of any age undergoing bariatric surgery comparing pharmacological interventions for VTE (alone or in combination) with the same pharmacological intervention administered at a different dose or frequency, the same pharmacological intervention started at a different time point, a different pharmacological intervention, no treatment or placebo.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. VTE and 2. major bleeding. Our secondary outcomes were 1. all-cause mortality, 2. VTE-related mortality, 3. PE, 4. DVT, 5. adverse effects and 6. quality of life. We used GRADE to assess certainty of evidence for each outcome.\nMain results\nWe included seven RCTs with 1045 participants. Data for meta-analysis were available from all participants.\nFour RCTs (597 participants) compared higher-dose heparin to standard-dose heparin: one of these studies (139 participants) used unfractionated heparin (UFH) and the other three (458 participants) used low-molecular-weight heparin (LMWH). One study compared heparin versus pentasaccharide (198 participants), and one study compared starting heparin before versus after bariatric surgery (100 participants). One study (150 participants) compared combined mechanical and pharmacological (enoxaparin) prophylaxis versus mechanical prophylaxis alone. The duration of the interventions ranged from seven to 15 days, and follow-up ranged from 10 to 180 days.\nHigher-dose heparin versus standard-dose heparin\nCompared to standard-dose heparin, higher-dose heparin may result in little or no difference in the risk of VTE (RR 0.55, 95% CI 0.05 to 5.99; 4 studies, 597 participants) or major bleeding (RR 1.19, 95% CI 0.48 to 2.96; I2 = 8%; 4 studies, 597 participants; low-certainty) in people undergoing bariatric surgery. The evidence on all-cause mortality, VTE-related mortality, PE, DVT and adverse events (thrombocytopenia) is uncertain (effect not estimable or very low-certainty evidence).\nHeparin versus pentasaccharide\nHeparin compared to a pentasaccharide after bariatric surgery may result in little or no difference in the risk of VTE (RR 0.83, 95% CI 0.19 to 3.61; 1 study, 175 participants) or DVT (RR 0.83, 95% CI 0.19 to 3.61; 1 study, 175 participants). 
The evidence on major bleeding, PE and mortality is uncertain (effect not estimable or very low-certainty evidence).\nHeparin started before versus after the surgical procedure\nStarting prophylaxis with heparin 12 hours before surgery versus after surgery may result in little or no difference in the risk of VTE (RR 0.11, 95% CI 0.01 to 2.01; 1 study, 100 participants) or DVT (RR 0.11, 95% CI 0.01 to 2.01; 1 study, 100 participants). The evidence on major bleeding, all-cause mortality and VTE-related mortality is uncertain (effect not estimable or very low-certainty evidence). We were unable to assess the effect of this intervention on PE or adverse effects, as the study did not measure these outcomes.\nCombined mechanical and pharmacological prophylaxis versus mechanical prophylaxis alone\nCombining mechanical and pharmacological prophylaxis (started 12 hours before surgery) may reduce VTE events in people undergoing bariatric surgery compared to mechanical prophylaxis alone (RR 0.05, 95% CI 0.00 to 0.89; number needed to treat for an additional beneficial outcome (NNTB) = 9; 1 study, 150 participants; low-certainty). We were unable to assess the effect of this intervention on major bleeding or mortality (effect not estimable), or on PE or adverse events (not measured).\nNo studies measured quality of life.\nAuthors' conclusions\nHigher-dose heparin may make little or no difference to venous thromboembolism or major bleeding in people undergoing bariatric surgery when compared to standard-dose heparin.\nHeparin may make little or no difference to venous thromboembolism in people undergoing bariatric surgery when compared to pentasaccharide. There are inadequate data to draw conclusions about the effects of heparin compared to pentasaccharide on major bleeding.\nStarting prophylaxis with heparin 12 hours before bariatric surgery may make little or no difference to venous thromboembolism in people undergoing bariatric surgery when compared to starting heparin after bariatric surgery. There are inadequate data to draw conclusions about the effects of heparin started before versus after surgery on major bleeding.\nCombining mechanical and pharmacological prophylaxis (started 12 hours before surgery) may reduce VTE events in people undergoing bariatric surgery when compared to mechanical prophylaxis alone. No data are available relating to major bleeding.\nThe certainty of the evidence is limited by small sample sizes, few or no events, and risk of bias concerns. Future trials must be sufficiently large to enable analysis of relevant clinical outcomes, and should standardise the time of treatment and follow-up. They should also address the effect of direct oral anticoagulants and antiplatelets, preferably grouping them according to the type of intervention.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This evidence is up-to-date to November 2021."
}
]
|
query-laysum | 5636 | [
{
"role": "user",
"content": "Abstract: Background\nAntibiotics have been considered to treat ulcerative colitis (UC) due to their antimicrobial properties against intestinal bacteria linked to inflammation. However, there are concerns about their efficacy and safety.\nObjectives\nTo determine whether antibiotic therapy is safe and effective for the induction and maintenance of remission in people with UC.\nSearch methods\nWe searched five electronic databases on 10 December 2021 for randomised controlled trials (RCTs) comparing antibiotic therapy to placebo or an active comparator.\nSelection criteria\nWe considered people with UC of all ages, treated with antibiotics of any type, dose, and route of administration for inclusion. Induction studies required a minimum duration of two weeks for inclusion. Maintenance studies required a minimum duration of three months to be considered for inclusion.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Our primary outcome for induction studies was failure to achieve remission and for maintenance studies was relapse, as defined by the primary studies.\nMain results\nWe included 12 RCTs (847 participants). One maintenance of remission study used sole antibiotic therapy compared with 5-aminosalicylic acid (5-ASA). All other trials used concurrent medications or standard care regimens and antibiotics as an adjunct therapy or compared antibiotics with other adjunct therapies to examine the effect on induction of remission.\nThere is high certainty evidence that antibiotics (154/304 participants) compared to placebo (175/304 participants) result in no difference in failure to achieve clinical remission (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.74 to 1.06). A subgroup analysis found no differences when steroids, steroids plus 5-ASA, or steroids plus 5-ASA plus probiotics were used as additional therapies to antibiotics and placebo.\nThere is low certainty evidence that antibiotics (102/168 participants) compared to placebo (121/175 participants) may result in no difference in failure to achieve clinical response (RR 0.75, 95% CI 0.47 to 1.22). A subgroup analysis found no differences when steroids or steroids plus 5-ASA were used as additional therapies to antibiotics and placebo.\nThere is low certainty evidence that antibiotics (6/342 participants) compared to placebo (5/349 participants) may result in no difference in serious adverse events (RR 1.19, 95% CI 0.38 to 3.71). A subgroup analysis found no differences when steroids were additional therapies to antibiotics and placebo.\nThere is low certainty evidence that antibiotics (3/342 participants) compared to placebo (1/349 participants) may result in no difference in withdrawals due to adverse events (RR 2.06, 95% CI 0.27 to 15.72). A subgroup analysis found no differences when steroids or steroids plus 5-ASA were additional therapies to antibiotics and placebo.\nIt is unclear if there is any difference between antibiotics in combination with probiotics compared to no treatment or placebo for failure to achieve clinical remission (RR 0.68, 95% CI 0.39 to 1.19), serious adverse events (RR 1.00, 95% CI 0.07 to 15.08), or withdrawals due to adverse events (RR 1.00, 95% CI 0.07 to 15.08). The certainty of the evidence is very low.\nIt is unclear if there is any difference between antibiotics compared to 5-ASA for failure to achieve clinical remission (RR 2.20, 95% CI 1.17 to 4.14). 
The certainty of the evidence is very low.\nIt is unclear if there is any difference between antibiotics compared to probiotics for failure to achieve clinical remission (RR 0.47, 95% CI 0.23 to 0.94). The certainty of the evidence is very low.\nIt is unclear if there is any difference between antibiotics compared to 5-ASA for failure to maintain clinical remission (RR 0.71, 95% CI 0.47 to 1.06). The certainty of the evidence is very low.\nIt is unclear if there is any difference between antibiotics compared to no treatment for failure to achieve clinical remission in a mixed population of people with active and inactive disease (RR 0.56, 95% CI 0.29 to 1.07). The certainty of the evidence is very low.\nFor all other outcomes, no effects could be estimated due to a lack of data.\nAuthors' conclusions\nThere is high certainty evidence that there is no difference between antibiotics and placebo in the proportion of people who achieve clinical remission at the end of the intervention period. However, there is evidence that there may be a greater proportion of people who achieve clinical remission and probably a greater proportion who achieve clinical response with antibiotics when compared with placebo at 12 months. There may be no difference in serious adverse events or withdrawals due to adverse events between antibiotics and placebo.\nNo clear conclusions can be drawn for any other comparisons.\nA clear direction for future research appears to be comparisons of antibiotics and placebo (in addition to standard therapies) with longer-term measurement of outcomes. Additionally, as there were single studies of other head-to-head comparisons, there may be scope for future studies in this area.\n\nGiven the provided abstract, please respond to the following query: \"What is ulcerative colitis?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "UC is a chronic (long term) inflammatory disease that affects the large bowel. The most common symptoms include bloody diarrhoea, abdominal pain, and a sudden almost uncontrollable urge to pass stool. Some people have other symptoms including sore joints, mouth sores, and inflammation in their eyes. When someone is experiencing symptoms of UC, they are said to have 'active' disease. When symptoms improve with treatment, the disease is said to be responding to therapy. When symptoms stop, the disease is said to be in 'remission'. People with UC in remission are often given therapy with medicines to try and prolong (maintain) their remission or response to therapy."
}
]
|
query-laysum | 5637 | [
{
"role": "user",
"content": "Abstract: Background\nSince the previous Cochrane Review on this topic in 2016, debate has continued surrounding a potential role for vitamin D in reducing risk of asthma exacerbation and improving asthma control. We therefore conducted an updated meta-analysis to include data from new trials completed since this date.\nObjectives\nTo evaluate the effectiveness and safety of administration of vitamin D or its hydroxylated metabolites in reducing the risk of severe asthma exacerbations (defined as those requiring treatment with systemic corticosteroids) and improving asthma symptom control.\nSearch methods\nWe searched the Cochrane Airways Group Trial Register and reference lists of articles. We contacted the authors of studies in order to identify additional trials. Date of last search: 8 September 2022.\nSelection criteria\nWe included double-blind, randomised, placebo-controlled trials of vitamin D in children and adults with asthma evaluating exacerbation risk or asthma symptom control, or both.\nData collection and analysis\nFour review authors independently applied study inclusion criteria, extracted the data, and assessed risk of bias. We obtained missing data from the authors where possible. We reported results with 95% confidence intervals (CIs). The primary outcome was the incidence of severe asthma exacerbations requiring treatment with systemic corticosteroids. Secondary outcomes included the incidence of asthma exacerbations precipitating an emergency department visit or requiring hospital admission, or both, end-study childhood Asthma Control Test (cACT) or Asthma Control Test (ACT) scores, and end-study % predicted forced expiratory volume in one second (FEV1).\nWe performed subgroup analyses to determine whether the effect of vitamin D on risk of asthma exacerbation was modified by baseline vitamin D status, vitamin D dose, frequency of dosing regimen, form of vitamin D given, and age of participants.\nMain results\nWe included 20 studies in this review; 15 trials involving a total of 1155 children and five trials involving a total of 1070 adults contributed data to analyses. Participant ages ranged from 1 to 84 years, with two trials providing data specific to participants under five years (n = 69) and eight trials providing data specific to participants aged 5 to 16 (n = 766). Across the trials, 1245 participants were male and 1229 were female, with two studies not reporting sex distribution. Fifteen trials contributed to the primary outcome analysis of exacerbations requiring systemic corticosteroids. The duration of trials ranged from three to 40 months; all but two investigated effects of administering cholecalciferol (vitamin D3). As in the previous Cochrane Review, the majority of participants had mild to moderate asthma, and profound vitamin D deficiency (25-hydroxyvitamin D (25(OH)D) < 25 nmol/L) at baseline was rare.\nAdministration of vitamin D or its hydroxylated metabolites did not reduce or increase the proportion of participants experiencing one or more asthma exacerbations treated with systemic corticosteroids (odds ratio (OR) 1.04, 95% CI 0.81 to 1.34; I2 = 0%; 14 studies, 1778 participants; high-quality evidence). 
This equates to an absolute risk of 226 per 1000 (95% CI 185 to 273) in the pooled vitamin D group, compared to a baseline risk of 219 participants per 1000 in the pooled placebo group.\nWe also found no effect of vitamin D supplementation on the rate of exacerbations requiring systemic corticosteroids (rate ratio 0.86, 95% CI 0.62 to 1.19; I2 = 60%; 10 studies, 1599 participants; high-quality evidence), or the time to first exacerbation (hazard ratio 0.82, 95% CI 0.59 to 1.15; I2 = 22%; 3 studies, 850 participants; high-quality evidence). Subgroup analysis did not reveal any evidence of effect modification by baseline vitamin D status, vitamin D dose, frequency of dosing regimen, or age. A single trial investigating administration of calcidiol reported a benefit of the intervention for the primary outcome of asthma control.\nVitamin D supplementation did not influence any secondary efficacy outcome meta-analysed, which were all based on moderate- or high-quality evidence. We observed no effect on the incidence of serious adverse events (OR 0.89, 95% CI 0.56 to 1.41; I2 = 0%; 12 studies, 1556 participants; high-quality evidence). The effect of vitamin D on fatal asthma exacerbations was not estimable, as no such events occurred in any trial. Six studies reported adverse reactions potentially attributable to vitamin D. These occurred across treatment and control arms and included hypercalciuria, hypervitaminosis D, kidney stones, gastrointestinal symptoms and mild itch. In one trial, we could not ascertain the total number of participants with hypercalciuria from the trial report.\nWe assessed three trials as being at high risk of bias in at least one domain; none of these contributed data to the analysis of the outcomes reported above. Sensitivity analyses that excluded these trials from each outcome to which they contributed did not change the null findings.\nAuthors' conclusions\nIn contrast to findings of our previous Cochrane Review on this topic, this updated review does not find evidence to support a role for vitamin D supplementation or its hydroxylated metabolites to reduce risk of asthma exacerbations or improve asthma control. Participants with severe asthma and those with baseline 25(OH)D concentrations < 25 nmol/L were poorly represented, so further research is warranted here. A single study investigating effects of calcidiol yielded positive results, so further studies investigating effects of this metabolite are needed.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• People with severe asthma and those with very low vitamin D levels prior to supplementation were poorly represented, so we cannot assess whether vitamin D supplements might help these individuals.\n• A single study investigating effects of calcidiol, an alternative form of vitamin D, showed a protective effect. Further investigation of this form of vitamin D is needed."
}
]
|
query-laysum | 5638 | [
{
"role": "user",
"content": "Abstract: Background\nBefore extraction and synthetic chemistry were invented, musculoskeletal complaints were treated with preparations from medicinal plants. They were either administered orally or topically. In contrast to the oral medicinal plant products, topicals act in part as counterirritants or are toxic when given orally.\nObjectives\nTo update the previous Cochrane review of herbal therapy for osteoarthritis from 2000 by evaluating the evidence on effectiveness for topical medicinal plant products.\nSearch methods\nDatabases for mainstream and complementary medicine were searched using terms to include all forms of arthritis combined with medicinal plant products. We searched electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, AMED, CINAHL, ISI Web of Science, World Health Organization Clinical Trials Registry Platform) to February 2013, unrestricted by language. We also searched the reference lists from retrieved trials.\nSelection criteria\nRandomised controlled trials of herbal interventions used topically, compared with inert (placebo) or active controls, in people with osteoarthritis were included.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, assessed the risk of bias of included studies and extracted data.\nMain results\nSeven studies (seven different medicinal plant interventions; 785 participants) were included. Single studies (five studies, six interventions) and non-comparable studies (two studies, one intervention) precluded pooling of results.\nModerate evidence from a single study of 174 people with hand osteoarthritis indicated that treatment with Arnica extract gel probably results in similar benefits as treatment with ibuprofen (non-steroidal anti-inflammatory drug) with a similar number of adverse events. Mean pain in the ibuprofen group was 44.2 points on a 100 point scale; treatment with Arnica gel reduced the pain by 4 points after three weeks: mean difference (MD) -3.8 points (95% confidence intervals (CI) -10.1 to 2.5), absolute reduction 4% (10% reduction to 3% increase). Hand function was 7.5 points on a 30 point scale in the ibuprofen-treated group; treatment with Arnica gel reduced function by 0.4 points (MD -0.4, 95% CI -1.75 to 0.95), absolute improvement 1% (6% improvement to 3% decline)). Total adverse events were higher in the Arnica gel group (13% compared to 8% in the ibuprofen group): relative risk (RR) 1.65 (95% CI 0.72 to 3.76).\nModerate quality evidence from a single trial of 99 people with knee osteoarthritis indicated that compared with placebo, Capsicum extract gel probably does not improve pain or knee function, and is commonly associated with treatment-related adverse events including skin irritation and a burning sensation. At four weeks follow-up, mean pain in the placebo group was 46 points on a 100 point scale; treatment with Capsicum extract reduced pain by 1 point (MD -1, 95% CI -6.8 to 4.8), absolute reduction of 1% (7% reduction to 5% increase). Mean knee function in the placebo group was 34.8 points on a 96 point scale at four weeks; treatment with Capsicum extract improved function by a mean of 2.6 points (MD -2.6, 95% CI -9.5 to 4.2), an absolute improvement of 3% (10% improvement to 4% decline). Adverse event rates were greater in the Capsicum extract group (80% compared with 20% in the placebo group, rate ratio 4.12, 95% CI 3.30 to 5.17). 
The number needed to treat to result in adverse events was 2 (95% CI 1 to 2).\nModerate evidence from a single trial of 220 people with knee osteoarthritis suggested that comfrey extract gel probably improves pain without increasing adverse events. At three weeks, the mean pain in the placebo group was 83.5 points on a 100 point scale. Treatment with comfrey reduced pain by a mean of 41.5 points (MD -41.5, 95% CI -48 to -34), an absolute reduction of 42% (34% to 48% reduction). Function was not reported. Adverse events were similar: 6% (7/110) reported adverse events in the comfrey group compared with 14% (15/110) in the placebo group (RR 0.47, 95% CI 0.20 to 1.10).\nAlthough evidence from a single trial indicated that adhesive patches containing Chinese herbal mixtures FNZG and SJG may improve pain and function, the clinical applicability of these findings is uncertain because participants were only treated and followed up for seven days. We are also uncertain if other topical herbal products (Marhame-Mafasel compress, stinging nettle leaf) improve osteoarthritis symptoms due to the very low quality evidence from single trials.\nNo serious side effects were reported.\nAuthors' conclusions\nAlthough the mechanism of action of the topical medicinal plant products provides a rational basis for their use in the treatment of osteoarthritis, the quality and quantity of current research studies of effectiveness are insufficient. Arnica gel probably improves symptoms as effectively as a gel containing a non-steroidal anti-inflammatory drug, but with no better (and possibly worse) adverse event profile. Comfrey extract gel probably improves pain, and Capsicum extract gel probably will not improve pain or function at the doses examined in this review. Further high quality, fully powered studies are required to confirm the trends of effectiveness identified in studies so far.\n\nGiven the provided abstract, please respond to the following query: \"What is osteoarthritis and what is herbal therapy?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Osteoarthritis (OA) is a disease of the joints (commonly knee, hip, hands). When joints lose cartilage, bone grows to try to repair the damage. Instead of making things better, however, the bone grows abnormally and makes things worse. For example, the bone can become misshapen and make the joint painful and limit movement. OA can affect your physical function, particularly your ability to use your joints.\nHerbal medicines are defined as finished, labeled medicinal products that contain as active ingredients aerial or underground parts of plants, other plant material, or combinations thereof, whether in the crude state or as plant preparations (for example oils, tinctures)."
}
]
|
query-laysum | 5639 | [
{
"role": "user",
"content": "Abstract: Background\nTramadol is an opioid analgesic licensed for use in moderate to severe pain. It is considered as a low risk for abuse, so control regulations are not as stringent as for 'strong' opioids such as morphine. It has a potential role as a step 2 option of the World Health Organization (WHO) analgesic ladder.\nObjectives\nTo assess the benefits and adverse effects of tramadol with or without paracetamol (acetaminophen) for cancer-related pain.\nSearch methods\nWe searched the following databases using a wide range of search terms: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and LILACS. We also searched three clinical trials registry databases. The date of the last search was 2 November 2016.\nSelection criteria\nWe selected studies that were randomised, with placebo or active controls, or both, and included a minimum of 10 participants per treatment arm. We were interested particularly in blinded studies, but also included open studies.\nWe excluded non-randomised studies, studies of experimental pain, case reports, and clinical observations.\nData collection and analysis\nTwo review authors independently extracted data using a standard form and checked for agreement before entry into Review Manager 5. We included information about the number of participants treated and demographic details, type of cancer, drug and dosing regimen, study design (placebo or active control) and methods, study duration and follow-up, analgesic outcome measures and results, withdrawals, and adverse events. We collated multiple reports of the same study, so that each study, rather than each report, was the unit of interest in the review. We assessed the evidence using GRADE and created a 'Summary of findings' table.\nThe main outcomes of interest for benefit were pain reduction of 30% or greater and 50% or greater from baseline, participants with pain no worse than mild, and participants feeling much improved or very much improved.\nMain results\nWe included 10 studies (12 reports) with 958 adult participants. All the studies enrolled participants with chronic malignant tumour-related pain who were experiencing pain intensities described as moderate to severe, with most experiencing at least 4/10 with current treatment. The mean ages were 59 to 70 years, with participants aged between 24 and 87 years. Study length ranged from one day to six months. Five studies used a cross-over design. Tramadol doses ranged from 50 mg as single dose to 600 mg per day; doses of 300 mg per day to 400 mg per day were most common.\nNine studies were at high risk of bias for one to four criteria (only one high risk of bias for size). We judged all the results to be very low quality evidence because of widespread lack of blinding of outcome assessment, inadequately described sequence generation, allocation concealment, and small numbers of participants and events. Important outcomes were poorly reported. There were eight different active comparators and one comparison with placebo. There was little information available for any comparison and no firm conclusions could be drawn for any outcome.\nSingle comparisons of oral tramadol with codeine plus paracetamol, of dihydrocodeine, and of rectal versus oral tramadol provided no data for key outcomes. One study used tramadol combined with paracetamol; four participants received this intervention. One study compared tramadol with flupirtine - a drug that is no longer available. 
One study compared tramadol with placebo and a combination of cobrotoxin, tramadol, and ibuprofen, but the dosing schedule was poorly explained.\nTwo studies (191 participants) compared tramadol with buprenorphine. One study (131 participants) reported a similar proportion of no or mild pain at 14 days.\nThree studies (300 participants) compared tramadol with morphine. Only one study, combining tramadol, tramadol plus paracetamol, and paracetamol plus codeine as a single weak-opioid group, reported results. Weak opioid produced reduction in pain of at least 30% from baseline in 55/117 (47%) participants, compared with 91/110 (82%) participants with morphine. Weak opioid produced reduction in pain of at least 50% in 49/117 (42%) participants, compared with 83/110 (75%) participants with morphine.\nThere was no useful information for any other outcome of benefit or harm.\nAuthors' conclusions\nThere is limited, very low quality, evidence from randomised controlled trials that tramadol produced pain relief in some adults with pain due to cancer and no evidence at all for children. There is very low quality evidence that it is not as effective as morphine. This review does not provide a reliable indication of the likely effect. The likelihood that the effect will be substantially different is very high. The place of tramadol in managing cancer pain and its role as step 2 of the WHO analgesic ladder is unclear.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In November 2016, we found 10 studies with 958 adult participants and no studies in children. The studies were often small and compared different tramadol preparations with different comparator drugs, and did not report important outcomes well. This made it difficult to work out whether tramadol was as good or better or worse than any other drug for cancer pain."
}
]
|
query-laysum | 5640 | [
{
"role": "user",
"content": "Abstract: Background\nIron-deficiency anaemia is highly prevalent among non-pregnant women of reproductive age (menstruating women) worldwide, although the prevalence is highest in lower-income settings. Iron-deficiency anaemia has been associated with a range of adverse health outcomes, which restitution of iron stores using iron supplementation has been considered likely to resolve. Although there have been many trials reporting effects of iron in non-pregnant women, these trials have never been synthesised in a systematic review.\nObjectives\nTo establish the evidence for effects of daily supplementation with iron on anaemia and iron status, as well as on physical, psychological and neurocognitive health, in menstruating women.\nSearch methods\nIn November 2015 we searched CENTRAL, Ovid MEDLINE, EMBASE, and nine other databases, as well as four digital thesis repositories. In addition, we searched the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and reference lists of relevant reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs comparing daily oral iron supplementation with or without a cointervention (folic acid or vitamin C), for at least five days per week at any dose, to control or placebo using either individual- or cluster-randomisation. Inclusion criteria were menstruating women (or women aged 12 to 50 years) reporting on predefined primary (anaemia, haemoglobin concentration, iron deficiency, iron-deficiency anaemia, all-cause mortality, adverse effects, and cognitive function) or secondary (iron status measured by iron indices, physical exercise performance, psychological health, adherence, anthropometric measures, serum/plasma zinc levels, vitamin A status, and red cell folate) outcomes.\nData collection and analysis\nWe used the standard methodological procedures of Cochrane.\nMain results\nThe search strategy identified 31,767 records; after screening, 90 full-text reports were assessed for eligibility. We included 67 trials (from 76 reports), recruiting 8506 women; the number of women included in analyses varied greatly between outcomes, with endpoint haemoglobin concentration being the outcome with the largest number of participants analysed (6861 women). Only 10 studies were considered at low overall risk of bias, with most studies presenting insufficient details about trial quality.\nWomen receiving iron were significantly less likely to be anaemic at the end of intervention compared to women receiving control (risk ratio (RR) 0.39 (95% confidence interval (CI) 0.25 to 0.60, 10 studies, 3273 women, moderate quality evidence). Women receiving iron had a higher haemoglobin concentration at the end of intervention compared to women receiving control (mean difference (MD) 5.30, 95% CI 4.14 to 6.45, 51 studies, 6861 women, high quality evidence). Women receiving iron had a reduced risk of iron deficiency compared to women receiving control (RR 0.62, 95% CI 0.50 to 0.76, 7 studies, 1088 women, moderate quality evidence). Only one study (55 women) specifically reported iron-deficiency anaemia and no studies reported mortality. Seven trials recruiting 901 women reported on 'any side effect' and did not identify an overall increased prevalence of side effects from iron supplements (RR 2.14, 95% CI 0.94 to 4.86, low quality evidence). 
Five studies recruiting 521 women identified an increased prevalence of gastrointestinal side effects in women taking iron (RR 1.99, 95% CI 1.26 to 3.12, low quality evidence). Six studies recruiting 604 women identified an increased prevalence of loose stools/diarrhoea (RR 2.13, 95% CI 1.10, 4.11, high quality evidence); eight studies recruiting 1036 women identified an increased prevalence of hard stools/constipation (RR 2.07, 95% CI 1.35 to 3.17, high quality evidence). Seven studies recruiting 1190 women identified evidence of an increased prevalence of abdominal pain among women randomised to iron (RR 1.55, 95% CI 0.99 to 2.41, low quality evidence). Eight studies recruiting 1214 women did not find any evidence of an increased prevalence of nausea among women randomised to iron (RR 1.19, 95% CI 0.78 to 1.82). Evidence that iron supplementation improves cognitive performance in women is uncertain, as studies could not be meta-analysed and individual studies reported conflicting results. Iron supplementation improved maximal and submaximal exercise performance, and appears to reduce symptomatic fatigue. Although adherence could not be formally meta-analysed due to differences in reporting, there was no evident difference in adherence between women randomised to iron and control.\nAuthors' conclusions\nDaily iron supplementation effectively reduces the prevalence of anaemia and iron deficiency, raises haemoglobin and iron stores, improves exercise performance and reduces symptomatic fatigue. These benefits come at the expense of increased gastrointestinal symptomatic side effects.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included studies comparing the effects of iron compared with no iron when given at least five days per week to menstruating women. We identified 67 trials recruiting 8506 women eligible for inclusion in the review. Most trials lasted between one and three months. The most commonly used iron form was ferrous sulphate."
}
]
|
query-laysum | 5641 | [
{
"role": "user",
"content": "Abstract: Background\nOsteoradionecrosis (ORN) of the jaws is among the most serious oral complications of head and neck cancer radiotherapy, arising from radiation-induced fibro-atrophic tissue injury, manifested by necrosis of osseous tissues and failure to heal, often secondary to operative interventions in the oral cavity. It is associated with considerable morbidity and has important quality of life ramifications. Since ORN is very difficult to treat effectively, preventive measures to limit the onset of this disease are needed; however, the effects of various preventive interventions has not been adequately quantified.\nObjectives\nTo assess the effects of interventions for preventing ORN of the jaws in adult patients with head and neck cancer undergoing curative or adjuvant (i.e. non-palliative) radiotherapy.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 5 November 2019), the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 10) in the Cochrane Library (searched 5 November 2019), MEDLINE Ovid (1946 to 5 November 2019), Embase Ovid (1980 to 5 November 2019), Allied and Complementary Medicine (AMED) Ovid (1985 to 5 November 2019), Scopus (1966 to 5 November 2019), Proquest Dissertations and Theses International (1861 to 5 November 2019) and Web of Science Conference Proceedings (1990 to 5 November 2019). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe selected randomised controlled trials (RCTs) or quasi-RCTs of adult patients 18 years or older with head and neck cancer who had undergone curative or adjuvant radiotherapy to the head and neck, who had received an intervention to prevent the onset of ORN. Eligible patients were those subjected to pre- or post-irradiation dental evaluation. Management of these patients was to be with interventions independent of their cancer therapy, including but not limited to local, systemic, or behavioural interventions.\nData collection and analysis\nTwo review authors independently selected trials from search results, assessed risk of bias, and extracted relevant data for inclusion in the review. Authors of included studies were contacted to request missing data. We used standard methodological procedures expected by Cochrane.\nMain results\nFour studies were identified that met pre-determined eligibility criteria, evaluating a total of 342 adults. From the four studies, all assessed as at high risk of bias, three broad interventions were identified that may potentially reduce the risk of ORN development: one study showed no reduction in ORN when using platelet-rich plasma placed in the extraction sockets of prophylactically removed healthy mandibular molar teeth prior to radiotherapy (odds ratio (OR) 3.32, 95% confidence interval (CI) 0.58 to 19.09; one trial, 44 participants; very low-certainty evidence). Another study involved comparing fluoride gel and high-content fluoride toothpaste (1350 parts per million (ppm)) in prevention of post-radiation caries, and found no difference between their use as no cases of ORN were reported (one trial, 220 participants; very low-certainty evidence). 
The other two studies involved the use of perioperative hyperbaric oxygen (HBO) therapy and antibiotics. One study showed that treatment with HBO caused a reduction in the development of ORN in comparison to patients treated with antibiotics following dental extractions (risk ratio (RR) 0.18, 95% CI 0.43 to 0.76; one trial, 74 participants; very low-certainty evidence). Another study found no difference between combined HBO and antibiotics compared to antibiotics alone prior to dental implant placement (RR 3.00, 95% CI 0.14 to 65.16; one trial, 26 participants; very low-certainty evidence).\nAdverse effects of the different interventions were not reported clearly or were not important.\nAuthors' conclusions\nGiven the suboptimal reporting and inadequate sample sizes of the included studies, evidence regarding the interventions evaluated by the trials included in this review is uncertain. More well-designed RCTs with larger samples are required to make conclusive statements regarding the efficacy of these interventions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "ORN of the jaws is a serious problem which can happen after people have radiotherapy as part of their treatment for head and neck cancer. ORN refers to the death and exposure of bone that result from damage to the tissues due to being exposed to radiation. ORN is very difficult to treat and it is important that steps are taken to prevent it. In this review, we looked at any treatments that have been used to prevent ORN."
}
]
|
query-laysum | 5642 | [
{
"role": "user",
"content": "Abstract: Background\nOxygen (O2) is widely used in people with acute myocardial infarction (AMI). Previous systematic reviews concluded that there was insufficient evidence to know whether oxygen reduced, increased or had no effect on heart ischaemia or infarct size. Our first Cochrane review in 2010 also concluded there was insufficient evidence to know whether oxygen should be used. Since 2010, the lack of evidence to support this widely used intervention has attracted considerable attention, prompting further trials of oxygen therapy in myocardial infarction patients. It is thus important to update this Cochrane review.\nObjectives\nTo assess the effects of routine use of inhaled oxygen for acute myocardial infarction (AMI).\nSearch methods\nWe searched the following bibliographic databases on 6 June 2015: the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE (OVID), Embase (OVID), CINAHL (EBSCO) and Web of Science (Thomson Reuters). LILACS (Latin American and Caribbean Health Sciences Literature) was last searched in September 2016. We also contacted experts to identify eligible studies. We applied no language restrictions.\nSelection criteria\nRandomised controlled trials in people with suspected or proven AMI (ST-segment elevation myocardial infarction (STEMI) or non-STEMI) within 24 hours after onset, in which the intervention was inhaled oxygen (at normal pressure) compared to air, regardless of co-therapies provided to participants in both arms of the trial.\nData collection and analysis\nTwo authors independently reviewed the titles and abstracts of identified studies to see if they met the inclusion criteria and independently undertook the data extraction. We assessed the quality of studies and the risk of bias according to guidance in the Cochrane Handbook for Systematic Reviews of Interventions. The primary outcome was death. The measure of effect used was the risk ratio (RR) with a 95% confidence interval (CI). We used the GRADE approach to evaluate the quality of the evidence and the GRADE profiler (GRADEpro) to import data from Review Manager 5 and create 'Summary of findings' tables.\nMain results\nThe updated search yielded one new trial, for a total of five included studies involving 1173 participants, 32 of whom died. The pooled risk ratio (RR) of all-cause mortality in the intention-to-treat analysis was 0.99 (95% CI 0.50 to 1.95; 4 studies, N = 1123; I2 = 46%; quality of evidence: very low) and 1.02 (95% CI 0.52 to 1.98; 4 studies, N = 871; I2 = 49%; quality of evidence: very low) when only analysing participants with confirmed AMI. One trial measured pain directly, and two others measured it by opiate usage. The trial showed no effect, with a pooled RR of 0.97 for the use of opiates (95% CI 0.78 to 1.20; 2 studies, N = 250). The result on mortality and pain are inconclusive. There is no clear effect for oxygen on infarct size (the evidence is inconsistent and low quality).\nAuthors' conclusions\nThere is no evidence from randomised controlled trials to support the routine use of inhaled oxygen in people with AMI, and we cannot rule out a harmful effect. 
Given the uncertainty surrounding the effect of oxygen therapy on all-cause mortality and on other outcomes critical for clinical decision, well-conducted, high quality randomised controlled trials are urgently required to inform guidelines in order to give definitive recommendations about the routine use of oxygen in AMI.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Since there is no evidence whether the oxygen is good or harmful in this clinical condition, it is important to test oxygen in a big trial as soon as possible to be sure that this common treatment is doing more good than harm in people who are having a heart attack."
}
]
|
query-laysum | 5643 | [
{
"role": "user",
"content": "Abstract: Background\nOral cancer is an important global healthcare problem, its incidence is increasing and late-stage presentation is common. Screening programmes have been introduced for a number of major cancers and have proved effective in their early detection. Given the high morbidity and mortality rates associated with oral cancer, there is a need to determine the effectiveness of a screening programme for this disease, either as a targeted, opportunistic or population-based measure. Evidence exists from modelled data that a visual oral examination of high-risk individuals may be a cost-effective screening strategy and the development and use of adjunctive aids and biomarkers is becoming increasingly common.\nObjectives\nTo assess the effectiveness of current screening methods in decreasing oral cancer mortality.\nSearch methods\nWe searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 22 July 2013), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 6), MEDLINE via OVID (1946 to 22 July 2013), EMBASE via OVID (1980 to 22 July 2013) and CANCERLIT via PubMed (1950 to 22 July 2013). There were no restrictions on language in the search of the electronic databases.\nSelection criteria\nRandomised controlled trials (RCTs) of screening for oral cancer or potentially malignant disorders using visual examination, toluidine blue, fluorescence imaging or brush biopsy.\nData collection and analysis\nTwo review authors screened the results of the searches against inclusion criteria, extracted data and assessed risk of bias independently and in duplicate. We used mean differences (MDs) and 95% confidence intervals (CIs) for continuous data and risk ratios (RRs) with 95% CIs for dichotomous data. Meta-analyses would have been undertaken using a random-effects model if the number of studies had exceeded a minimum of three. Study authors were contacted where possible and where deemed necessary for missing information.\nMain results\nA total of 3239 citations were identified through the searches. Only one RCT, with 15-year follow-up met the inclusion criteria (n = 13 clusters: 191,873 participants). There was no statistically significant difference in the oral cancer mortality rates for the screened group (15.4/100,000 person-years) and the control group (17.1/100,000 person-years), with a RR of 0.88 (95% CI 0.69 to 1.12). A 24% reduction in mortality was reported between the screening group (30/100,000 person-years) and the control group (39.0/100,000) for high-risk individuals who used tobacco or alcohol or both, which was statistically significant (RR 0.76; 95% CI 0.60 to 0.97). No statistically significant differences were found for incidence rates. A statistically significant reduction in the number of individuals diagnosed with stage III or worse oral cancer was found for those in the screening group (RR 0.81; 95% CI 0.70 to 0.93). No harms were reported. The study was assessed as at high risk of bias.\nAuthors' conclusions\nThere is evidence that a visual examination as part of a population-based screening programme reduces the mortality rate of oral cancer in high-risk individuals. In addition, there is a stage shift and improvement in survival rates across the population as a whole. However, the evidence is limited to one study, which has a high risk of bias and did not account for the effect of cluster randomisation in the analysis. 
There was no evidence to support the use of adjunctive technologies like toluidine blue, brush biopsy or fluorescence imaging as a screening tool to reduce oral cancer mortality. Further RCTs are recommended to assess the efficacy and cost-effectiveness of a visual examination as part of a population-based screening programme in low, middle and high-income countries.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Oral cancer is increasing worldwide and it is the sixth most common cancer overall. The highest rates of oral cancer occur in the most disadvantaged sections of the population. Important risk factors in the development of the disease are tobacco, alcohol, age, gender and sunlight although a role for candida (which causes thrush) and the human papillomavirus (which causes warts) has also been documented. People who are heavy drinkers and also smoke have 38 times the risk of developing oral cancer compared with people who do neither. These factors are considered to be especially important in the development of the disease in young people, a group experiencing an increasing incidence of the disease, particularly in countries with a high incidence of it.\nGeographic variation in the occurrence of oral cancer around the world is wide. For example it is the most common cancer for men in India, Sri Lanka and Pakistan and 30% of all new cases of cancer in these countries is oral cancer whereas only 3% of new cases of cancer in the United Kingdom are oral cancer.\nWhen people first seek medical help, their oral cancer is usually at a late or advanced stage and the effects of the condition as well as the treatment for it can be extremely debilitating. Death rates from oral cancer and the negative effects of the disease are high and increasing rather than declining as for other cancers such as breast and colon.\nPrevention screening programmes for other cancers have proven to be effective in early detection. However, whilst there maybe advantages to screening there are disadvantages because screening has the potential to produce either false positive or false negative results. Screening can be targeted at high-risk groups, it can be opportunistic, for example when people attend health services for other reasons, or can be done by looking at statistics across the population as a whole.\nThe aim of preventive screening for early detection of oral cancer is to screen individuals for pre-cancerous conditions which are lesions such as leukoplakia. The most common screening method is visual inspection by a clinician but other techniques include the use of a special blue dye, the use of imaging techniques and measuring biochemical changes to normal calls."
}
]
|
query-laysum | 5644 | [
{
"role": "user",
"content": "Abstract: Background\nWith the increased demand for whiter teeth, home-based bleaching products, either dentist-prescribed or over-the-counter products have been exponentially increasing in the past few decades. This is an update of a Cochrane Review first published in 2006.\nObjectives\nTo evaluate the effects of home-based tooth whitening products with chemical bleaching action, dispensed by a dentist or over-the-counter.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 12 June 2018), the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 6) in the Cochrane Library (searched 12 June 2018), MEDLINE Ovid (1946 to 12 June 2018), and Embase Ovid (1980 to 12 June 2018). The US National Institutes of Health Ongoing Trials Register ClinicalTrials.gov (12 June 2018) and the World Health Organization International Clinical Trials Registry Platform (12 June 2018) were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included in our review randomised controlled trials (RCTs) which involved adults who were 18 years and above, and compared dentist-dispensed or over-the-counter tooth whitening (bleaching) products with placebo or other comparable products.\nQuasi-randomised trials, combination of in-office and home-based treatments, and home-based products having physical removal of stains were excluded.\nData collection and analysis\nTwo review authors independently selected trials. Two pairs of review authors independently extracted data and assessed risk of bias. We estimated risk ratios (RRs) for dichotomous data, and mean differences (MDs) or standardised mean difference (SMD) for continuous data, with 95% confidence intervals (CIs). We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 71 trials in the review with 26 studies (1398 participants) comparing a bleaching agent to placebo and 51 studies (2382 participants) comparing a bleaching agent to another bleaching agent. Two studies were at low overall risk of bias; two at high overall risk of bias; and the remaining 67 at unclear overall risk of bias.\nThe bleaching agents (carbamide peroxide (CP) gel in tray, hydrogen peroxide (HP) gel in tray, HP strips, CP paint-on gel, HP paint-on gel, sodium hexametaphosphate (SHMP) chewing gum, sodium tripolyphosphate (STPP) chewing gum, and HP mouthwash) at different concentrations with varying application times whitened teeth compared to placebo over a short time period (from 2 weeks to 6 months), however the certainty of the evidence is low to very low.\nIn trials comparing one bleaching agent to another, concentrations, application method and application times, and duration of use varied widely. Most of the comparisons were reported in single trials with small sample sizes and event rates and certainty of the evidence was assessed as low to very low. Therefore the evidence currently available is insufficient to draw reliable conclusions regarding the superiority of home-based bleaching compositions or any particular method of application or concentration or application time or duration of use.\nTooth sensitivity and oral irritation were the most common side effects which were more prevalent with higher concentrations of active agents though the effects were mild and transient. 
Tooth whitening did not have any effect on oral health-related quality of life.\nAuthors' conclusions\nWe found low to very low-certainty evidence over short time periods to support the effectiveness of home-based chemically-induced bleaching methods compared to placebo for all the outcomes tested.\nWe were unable to draw any conclusions regarding the superiority of home-based bleaching compositions or any particular method of application or concentration or application time or duration of use, as the overall evidence generated was of very low certainty. Well-planned RCTs need to be conducted by standardising methods of application, concentrations, application times, and duration of treatment.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall certainty of the evidence was low to very low for all comparisons. This was because most of the comparisons were reported in single trials with small sample sizes and event rates. There was an unclear risk of bias in most of the trials."
}
]
|
query-laysum | 5645 | [
{
"role": "user",
"content": "Abstract: Background\nPresbyopia occurs when the lens of the eyes loses its elasticity leading to loss of accommodation. The lens may also progress to develop cataract, affecting visual acuity and contrast sensitivity. One option of care for individuals with presbyopia and cataract is the use of multifocal or extended depth of focus intraocular lens (IOL) after cataract surgery. Although trifocal and bifocal IOLs are designed to restore three and two focal points respectively, trifocal lens may be preferable because it restores near, intermediate, and far vision, and may also provide a greater range of useful vision and allow for greater spectacle independence in individuals with presbyopia.\nObjectives\nTo assess the effectiveness and safety of implantation with trifocal versus bifocal IOLs during cataract surgery among people with presbyopia.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2022, Issue 3); Ovid MEDLINE; Embase.com; PubMed; ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 31 March 2022.\nSelection criteria\nWe included randomized controlled trials that compared trifocal and bifocal IOLs among participants 30 years of age or older with presbyopia undergoing cataract surgery.\nData collection and analysis\nWe used standard Cochrane methodology and graded the certainty of the body of evidence according to the GRADE classification.\nMain results\nWe identified seven studies conducted in Europe and Turkey with a total of 331 participants. All included studies assessed visual acuity using a logarithm of the minimum angle of resolution (LogMAR chart). Of them, six (86%) studies assessed uncorrected distance visual acuity (the primary outcome of this review). Some studies also examined our secondary outcomes including uncorrected near, intermediate, and best-corrected distance visual acuity, as well as contrast sensitivity.\nStudy characteristics\nAll participants had bilateral cataracts with no pre-existing ocular pathologies or ocular surgery. Participants' mean age ranged from 55 to 74 years. Three studies reported on gender of participants, and they were mostly women. We assessed all of the included studies as being at unclear risk of bias for most domains. Two studies received financial support from manufacturers of lenses evaluated in this review, and at least one author of another study reported receiving payments for delivering lectures with lens manufacturers.\nFindings\nAll studies compared trifocal versus bifocal IOL implantation on visual acuity outcomes measured on a LogMAR scale. At one year, trifocal IOL showed no evidence of effect on uncorrected distance visual acuity (mean difference (MD) 0.00, 95% confidence interval (CI) −0.04 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence) and uncorrected near visual acuity (MD 0.01, 95% CI −0.04 to 0.06; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). 
Trifocal IOL implantation may improve uncorrected intermediate visual acuity at one year (MD −0.16, 95% CI −0.22 to −0.10; I2 = 0%; 2 studies, 107 participants; low-certainty evidence), but showed no evidence of effect on best-corrected distance visual acuity at one year (MD 0.00, 95% CI −0.03 to 0.04; I2 = 0%; 2 studies, 107 participants; low-certainty evidence). No study reported on contrast sensitivity or quality of life at one-year follow-up. Data from one study at three months suggest that contrast sensitivity did not differ between groups under photopic conditions, but may be worse in the trifocal group in one of the four frequencies under mesopic conditions (MD −0.19, 95% CI −0.33 to −0.05; 1 study; I2 = 0%, 25 participants; low-certainty evidence). One study examined vision-related quality of life using the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25) at six months, and suggested no evidence of a difference between trifocal and bifocal IOLs (MD 1.41, 95% CI −1.78 to 4.60; 1 study, 40 participants; low-certainty evidence).\nAdverse events\nAdverse events reporting varied among studies. Of five studies reporting information on adverse events, two studies observed no intraoperative and postoperative complications or no posterior capsular opacification at six months. One study reported that glare and halos were similar to the preoperative measurements. One study reported that 4 (20%) and 10 (50%) participants had glare complaints at 6 months in trifocal and bifocal group, respectively (risk ratio 0.40, 95% CI 0.15 to 1.07; 40 participants). One study reported that four eyes (11.4%) in the bifocal group and three eyes (7.5%) in the trifocal group developed significant posterior capsular opacification requiring YAG capsulotomy at one year. The certainty of the evidence for adverse events was low.\nAuthors' conclusions\nWe found low-certainty of evidence that compared with bifocal IOL, implantation of trifocal IOL may improve uncorrected intermediate visual acuity at one year. However, there was no evidence of a difference between trifocal and bifocal IOL for uncorrected distance visual acuity, uncorrected near visual acuity, and best-corrected visual acuity at one year. Future research should include the comparison of both trifocal IOL and specific bifocal IOLs that correct intermediate visual acuity to evaluate important outcomes such as contrast sensitivity, quality of life, and vision-related adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that people who receive trifocal lens after their cataract surgery may experience improvement in uncorrected intermediate sharpness of vision (visual acuity) at one year compared with those who had received bifocal lens, but we have little confidence in the evidence. There was no evidence of a difference between trifocal and bifocal lenses for uncorrected distance visual acuity, uncorrected near visual acuity, and best-corrected distance visual acuity at one year. The effects of trifocal and bifocal lenses on quality of life and the ability to distinguish between fine increments of light and dark (contrast sensitivity) remain uncertain."
}
]
|
query-laysum | 5646 | [
{
"role": "user",
"content": "Abstract: Background\nInspection systems are used in healthcare to promote quality improvements (i.e. to achieve changes in organisational structures or processes, healthcare provider behaviour and patient outcomes). These systems are based on the assumption that externally promoted adherence to evidence-based standards (through inspection/assessment) will result in higher quality of healthcare. However, the benefits of external inspection in terms of organisational-, provider- and patient-level outcomes are not clear. This is the first update of the original Cochrane review, published in 2011.\nObjectives\nTo evaluate the effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes.\nSearch methods\nWe searched the following electronic databases for studies up to 1 June 2015: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Database of Abstracts of Reviews of Effectiveness, HMIC, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. There was no language restriction and we included studies regardless of publication status. We also searched the reference lists of included studies and contacted authors of relevant papers, accreditation bodies and the International Organization for Standardization (ISO), regarding any further published or unpublished work. We also searched an online database of systematic reviews (PDQ-evidence.org).\nSelection criteria\nWe included randomised controlled trials (RCTs), non-randomised trials (NRCTs), interrupted time series (ITSs) and controlled before-after studies (CBAs) evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes in hospitals, primary healthcare organisations and other community-based healthcare organisations.\nData collection and analysis\nTwo review authors independently applied eligibility criteria, extracted data and assessed the risk of bias of each included study. Since meta-analysis was not possible, we produced a narrative results summary. We used the GRADE tool to assess the certainty of the evidence.\nMain results\nWe did not identify any new eligible studies in this update. One cluster RCT involving 20 South African public hospitals and one ITS involving all acute hospital trusts in England, met the inclusion criteria. A trust is a National Health Service hospital which has opted to withdraw from local authority control and be managed by a trust instead.\nThe cluster RCT reported mixed effects of external inspection on compliance with COHSASA (Council for Health Services Accreditation for South Africa) accreditation standards and eight indicators of hospital quality. Improved total compliance score with COHSASA accreditation standards was reported for 21/28 service elements: mean intervention effect was 30% (95% confidence interval (CI) 23% to 37%) (P < 0.001). The score increased from 48% to 78% in intervention hospitals, while remaining the same in control hospitals (43%). 
The median intervention effect for the indicators of hospital quality of care was 2.4% (range -1.9% to +11.8%).\nThe ITS study evaluated compliance with policies to address healthcare-acquired infections and reported a mean reduction in MRSA (methicillin-resistant Staphylococcus aureus) infection rates of 100 cases per quarter (95% CI -221.0 to 21.5, P = 0.096) at three months' follow-up and an increase of 70 cases per quarter (95% CI -250.5 to 391.0; P = 0.632) at 24 months' follow-up. Regression analysis showed similar MRSA rates before and after the external inspection (difference in slope 24.27, 95% CI -10.4 to 58.9; P = 0.147).\nNeither included study reported data on unanticipated/adverse consequences or economic outcomes. The cluster RCT reported mainly outcomes related to healthcare organisation change, and no patient reported outcomes other than patient satisfaction.\nThe certainty of the included evidence from both studies was very low. It is uncertain whether external inspection accreditation programmes lead to improved compliance with accreditation standards. It is also uncertain if external inspection infection programmes lead to improved compliance with standards, and if this in turn influences healthcare-acquired MRSA infection rates.\nAuthors' conclusions\nThe review highlights the paucity of high-quality controlled evaluations of the effectiveness and the cost-effectiveness of external inspection systems. If policy makers wish to understand the effectiveness of this type of intervention better, there needs to be further studies across a range of settings and contexts and studies reporting outcomes important to patients.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to find out if external inspection of compliance with standards can improve improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes. Cochrane researchers collected and analysed all relevant studies to answer this question and found two studies."
}
]
|
query-laysum | 5647 | [
{
"role": "user",
"content": "Abstract: Background\nEchocardiogram is the reference standard for the diagnosis of haemodynamically significant patent ductus arteriosus (hsPDA) in preterm infants. A simple blood assay for brain natriuretic peptide (BNP) or amino-terminal pro-B-type natriuretic peptide (NT-proBNP) may be useful in the diagnosis and management of hsPDA, but a summary of the diagnostic accuracy has not been reviewed recently.\nObjectives\nPrimary objective: To determine the diagnostic accuracy of the cardiac biomarkers BNP and NT-proBNP for diagnosis of haemodynamically significant patent ductus arteriosus (hsPDA) in preterm neonates. Our secondary objectives were: to compare the accuracy of BNP and NT-proBNP; and to explore possible sources of heterogeneity among studies evaluating BNP and NT-proBNP, including type of commercial assay, chronological age of the infant at testing, gestational age at birth, whether used to initiate medical or surgical treatment, test threshold, and criteria of the reference standard (type of echocardiographic parameter used for diagnosis, clinical symptoms or physical signs if data were available).\nSearch methods\nWe searched the following databases in September 2021: MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and Web of Science. We also searched clinical trial registries and conference abstracts. We checked references of included studies and conducted cited reference searches of included studies. We did not apply any language or date restrictions to the electronic searches or use methodological filters, so as to maximise sensitivity.\nSelection criteria\nWe included prospective or retrospective, cohort or cross-sectional studies, which evaluated BNP or NT-proBNP (index tests) in preterm infants (participants) with suspected hsPDA (target condition) in comparison with echocardiogram (reference standard).\nData collection and analysis\nTwo authors independently screened title/abstracts and full-texts, resolving any inclusion disagreements through discussion or with a third reviewer. We extracted data from included studies to create 2 × 2 tables. Two independent assessors performed quality assessment using the Quality Assessment of Diagnostic-Accuracy Studies-2 (QUADAS 2) tool. We excluded studies that did not report data in sufficient detail to construct 2 × 2 tables, and where this information was not available from the primary investigators. We used bivariate and hierarchical summary receiver operating characteristic (HSROC) random-effects models for meta-analysis and generated summary receiver operating characteristic space (ROC) curves. Since both BNP and NTproBNP are continuous variables, sensitivity and specificity were reported at multiple thresholds. We dealt with the threshold effect by reporting summary ROC curves without summary points.\nMain results\nWe included 34 studies: 13 evaluated BNP and 21 evaluated NT-proBNP in the diagnosis of hsPDA. Studies varied by methodological quality, type of commercial assay, thresholds, age at testing, gestational age and whether the assay was used to initiate medical or surgical therapy. We noted some variability in the definition of hsPDA among the included studies.\nFor BNP, the summary curve is reported in the ROC space (13 studies, 768 infants, low-certainty evidence). 
The estimated specificities from the ROC curve at fixed values of sensitivities at median (83%), lower and upper quartiles (79% and 92%) were 93.6% (95% confidence interval (CI) 77.8 to 98.4), 95.5% (95% CI 83.6 to 98.9) and 81.1% (95% CI 50.6 to 94.7), respectively. Subgroup comparisons revealed differences by type of assay and better diagnostic accuracy at lower threshold cut-offs (< 250 pg/ml compared to ≥ 250 pg/ml), testing at gestational age < 30 weeks and chronological age at testing at one to three days. Data were insufficient for subgroup analysis of whether the BNP testing was indicated for medical or surgical management of PDA.\nFor NT-proBNP, the summary ROC curve is reported in the ROC space (21 studies, 1459 infants, low-certainty evidence). The estimated specificities from the ROC curve at fixed values of sensitivities at median (92%), lower and upper quartiles (85% and 94%) were 83.6% (95% CI 73.3 to 90.5), 90.6% (95% CI 83.8 to 94.7) and 79.4% (95% CI 67.5 to 87.8), respectively. Subgroup analyses by threshold (< 6000 pg/ml and ≥ 6000 pg/ml) did not reveal any differences. Subgroup analysis by mean gestational age (< 30 weeks vs 30 weeks and above) showed better accuracy with < 30 weeks, and chronological age at testing (days one to three vs over three) showed testing at days one to three had better diagnostic accuracy. Data were insufficient for subgroup analysis of whether the NTproBNP testing was indicated for medical or surgical management of PDA.\nWe performed meta-regression for BNP and NT-proBNP using the covariates: assay type, threshold, mean gestational age and chronological age; none of the covariates significantly affected summary sensitivity and specificity.\nAuthors' conclusions\nLow-certainty evidence suggests that BNP and NT-proBNP have moderate accuracy in diagnosing hsPDA and may work best as a triage test to select infants for echocardiography. The studies evaluating the diagnostic accuracy of BNP and NT-proBNP for hsPDA varied considerably by assay characteristics (assay kit and threshold) and infant characteristics (gestational and chronological age); hence, generalisability between centres is not possible. We recommend that BNP or NT-proBNP assays be locally validated for specific populations and outcomes, to initiate therapy or follow response to therapy.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review looked at studies, which measured blood levels of BNP and NT-proBNP in preterm infants and compared results to an echo."
}
]
|
query-laysum | 5648 | [
{
"role": "user",
"content": "Abstract: Background\nElectromechanical and robot-assisted arm training devices are used in rehabilitation, and may help to improve arm function after stroke.\nObjectives\nTo assess the effectiveness of electromechanical and robot-assisted arm training for improving activities of daily living, arm function, and arm muscle strength in people after stroke. We also assessed the acceptability and safety of the therapy.\nSearch methods\nWe searched the Cochrane Stroke Group's Trials Register (last searched January 2018), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2018, Issue 1), MEDLINE (1950 to January 2018), Embase (1980 to January 2018), CINAHL (1982 to January 2018), AMED (1985 to January 2018), SPORTDiscus (1949 to January 2018), PEDro (searched February 2018), Compendex (1972 to January 2018), and Inspec (1969 to January 2018). We also handsearched relevant conference proceedings, searched trials and research registers, checked reference lists, and contacted trialists, experts, and researchers in our field, as well as manufacturers of commercial devices.\nSelection criteria\nRandomised controlled trials comparing electromechanical and robot-assisted arm training for recovery of arm function with other rehabilitation or placebo interventions, or no treatment, for people after stroke.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, assessed trial quality and risk of bias, used the GRADE approach to assess the quality of the body of evidence, and extracted data. We contacted trialists for additional information. We analysed the results as standardised mean differences (SMDs) for continuous variables and risk differences (RDs) for dichotomous variables.\nMain results\nWe included 45 trials (involving 1619 participants) in this update of our review. Electromechanical and robot-assisted arm training improved activities of daily living scores (SMD 0.31, 95% confidence interval (CI) 0.09 to 0.52, P = 0.0005; I² = 59%; 24 studies, 957 participants, high-quality evidence), arm function (SMD 0.32, 95% CI 0.18 to 0.46, P < 0.0001, I² = 36%, 41 studies, 1452 participants, high-quality evidence), and arm muscle strength (SMD 0.46, 95% CI 0.16 to 0.77, P = 0.003, I² = 76%, 23 studies, 826 participants, high-quality evidence). Electromechanical and robot-assisted arm training did not increase the risk of participant dropout (RD 0.00, 95% CI -0.02 to 0.02, P = 0.93, I² = 0%, 45 studies, 1619 participants, high-quality evidence), and adverse events were rare.\nAuthors' conclusions\nPeople who receive electromechanical and robot-assisted arm training after stroke might improve their activities of daily living, arm function, and arm muscle strength. However, the results must be interpreted with caution although the quality of the evidence was high, because there were variations between the trials in: the intensity, duration, and amount of training; type of treatment; participant characteristics; and measurements used.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the effects of electromechanical and robot-assisted arm training for improving arm function in people who have had a stroke."
}
]
|
query-laysum | 5649 | [
{
"role": "user",
"content": "Abstract: Background\nBetween 10% to 18% of people undergoing cholecystectomy for gallstones have common bile duct stones. Treatment of the bile duct stones can be conducted as open cholecystectomy plus open common bile duct exploration or laparoscopic cholecystectomy plus laparoscopic common bile duct exploration (LC + LCBDE) versus pre- or post-cholecystectomy endoscopic retrograde cholangiopancreatography (ERCP) in two stages, usually combined with either sphincterotomy (commonest) or sphincteroplasty (papillary dilatation) for common bile duct clearance. The benefits and harms of the different approaches are not known.\nObjectives\nWe aimed to systematically review the benefits and harms of different approaches to the management of common bile duct stones.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL, Issue 7 of 12, 2013) in The Cochrane Library, MEDLINE (1946 to August 2013), EMBASE (1974 to August 2013), and Science Citation Index Expanded (1900 to August 2013).\nSelection criteria\nWe included all randomised clinical trials which compared the results from open surgery versus endoscopic clearance and laparoscopic surgery versus endoscopic clearance for common bile duct stones.\nData collection and analysis\nTwo review authors independently identified the trials for inclusion and independently extracted data. We calculated the odds ratio (OR) or mean difference (MD) with 95% confidence interval (CI) using both fixed-effect and random-effects models meta-analyses, performed with Review Manager 5.\nMain results\nSixteen randomised clinical trials with a total of 1758 randomised participants fulfilled the inclusion criteria of this review. Eight trials with 737 participants compared open surgical clearance with ERCP; five trials with 621 participants compared laparoscopic clearance with pre-operative ERCP; and two trials with 166 participants compared laparoscopic clearance with postoperative ERCP. One trial with 234 participants compared LCBDE with intra-operative ERCP. There were no trials of open or LCBDE versus ERCP in people without an intact gallbladder. All trials had a high risk of bias.\nThere was no significant difference in the mortality between open surgery versus ERCP clearance (eight trials; 733 participants; 5/371 (1%) versus 10/358 (3%) OR 0.51;95% CI 0.18 to 1.44). Neither was there a significant difference in the morbidity between open surgery versus ERCP clearance (eight trials; 733 participants; 76/371 (20%) versus 67/358 (19%) OR 1.12; 95% CI 0.77 to 1.62). Participants in the open surgery group had significantly fewer retained stones compared with the ERCP group (seven trials; 609 participants; 20/313 (6%) versus 47/296 (16%) OR 0.36; 95% CI 0.21 to 0.62), P = 0.0002.\nThere was no significant difference in the mortality between LC + LCBDE versus pre-operative ERCP +LC (five trials; 580 participants; 2/285 (0.7%) versus 3/295 (1%) OR 0.72; 95% CI 0.12 to 4.33). Neither was there was a significant difference in the morbidity between the two groups (five trials; 580 participants; 44/285 (15%) versus 37/295 (13%) OR 1.28; 95% CI 0.80 to 2.05). There was no significant difference between the two groups in the number of participants with retained stones (five trials; 580 participants; 24/285 (8%) versus 31/295 (11%) OR 0.79; 95% CI 0.45 to 1.39).\nThere was only one trial assessing LC + LCBDE versus LC+intra-operative ERCP including 234 participants. 
There was no reported mortality in either of the groups. There was no significant difference in the morbidity, retained stones, or procedure failure rates between the two intervention groups.\nTwo trials assessed LC + LCBDE versus LC + post-operative ERCP. There was no reported mortality in either of the groups. There was no significant difference in the morbidity between laparoscopic surgery and postoperative ERCP groups (two trials; 166 participants; 13/81 (16%) versus 12/85 (14%) OR 1.16; 95% CI 0.50 to 2.72). There was a significant difference in the retained stones between laparoscopic surgery and postoperative ERCP groups (two trials; 166 participants; 7/81 (9%) versus 21/85 (25%) OR 0.28; 95% CI 0.11 to 0.72; P = 0.008).\nIn total, seven trials including 746 participants compared single-stage LC + LCBDE versus two-stage pre-operative ERCP + LC or LC + post-operative ERCP. There was no significant difference in the mortality between single and two-stage management (seven trials; 746 participants; 2/366 versus 3/380 OR 0.72; 95% CI 0.12 to 4.33). There was no significant difference in the morbidity (seven trials; 746 participants; 57/366 (16%) versus 49/380 (13%) OR 1.25; 95% CI 0.83 to 1.89). There were fewer retained stones in the single-stage group (31/366 participants; 8%) compared with the two-stage group (52/380 participants; 14%), but the difference was not statistically significant (OR 0.59; 95% CI 0.37 to 0.94).\nThere was no significant difference in the conversion rates of LCBDE to open surgery when compared with pre-operative, intra-operative, and postoperative ERCP groups. Meta-analysis of the outcomes duration of hospital stay, quality of life, and cost of the procedures could not be performed due to lack of data.\nAuthors' conclusions\nOpen bile duct surgery seems superior to ERCP in achieving common bile duct stone clearance based on the evidence available from the early endoscopy era. There is no significant difference in the mortality and morbidity between laparoscopic bile duct clearance and the endoscopic options. There is no significant reduction in the number of retained stones and failure rates in the laparoscopy groups compared with the pre-operative and intra-operative ERCP groups. There is no significant difference in the mortality, morbidity, retained stones, and failure rates between the single-stage laparoscopic bile duct clearance and two-stage endoscopic management. More randomised clinical trials without risks of systematic and random errors are necessary to confirm these findings.\n\nGiven the provided abstract, please respond to the following query: \"Review questions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We analysed results from randomised clinical trials in the literature to assess the benefits and harms of these procedures"
}
]
|
query-laysum | 5650 | [
{
"role": "user",
"content": "Abstract: Background\nDental caries is a major public health problem in most industrialised countries, affecting 60% to 90% of school children. Community water fluoridation was initiated in the USA in 1945 and is currently practised in about 25 countries around the world; health authorities consider it to be a key strategy for preventing dental caries. Given the continued interest in this topic from health professionals, policy makers and the public, it is important to update and maintain a systematic review that reflects contemporary evidence.\nObjectives\nTo evaluate the effects of water fluoridation (artificial or natural) on the prevention of dental caries.\nTo evaluate the effects of water fluoridation (artificial or natural) on dental fluorosis.\nSearch methods\nWe searched the following electronic databases: The Cochrane Oral Health Group's Trials Register (to 19 February 2015); The Cochrane Central Register of Controlled Trials (CENTRAL; Issue 1, 2015); MEDLINE via OVID (1946 to 19 February 2015); EMBASE via OVID (1980 to 19 February 2015); Proquest (to 19 February 2015); Web of Science Conference Proceedings (1990 to 19 February 2015); ZETOC Conference Proceedings (1993 to 19 February 2015). We searched the US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization's WHO International Clinical Trials Registry Platform for ongoing trials. There were no restrictions on language of publication or publication status in the searches of the electronic databases.\nSelection criteria\nFor caries data, we included only prospective studies with a concurrent control that compared at least two populations - one receiving fluoridated water and the other non-fluoridated water - with outcome(s) evaluated at at least two points in time. For the assessment of fluorosis, we included any type of study design, with concurrent control, that compared populations exposed to different water fluoride concentrations. We included populations of all ages that received fluoridated water (naturally or artificially fluoridated) or non-fluoridated water.\nData collection and analysis\nWe used an adaptation of the Cochrane 'Risk of bias' tool to assess risk of bias in the included studies.\nWe included the following caries indices in the analyses: decayed, missing and filled teeth (dmft (deciduous dentition) and DMFT (permanent dentition)), and proportion caries free in both dentitions. For dmft and DMFT analyses we calculated the difference in mean change scores between the fluoridated and control groups. For the proportion caries free we calculated the difference in the proportion caries free between the fluoridated and control groups.\nFor fluorosis data we calculated the log odds and presented them as probabilities for interpretation.\nMain results\nA total of 155 studies met the inclusion criteria; 107 studies provided sufficient data for quantitative synthesis.\nThe results from the caries severity data indicate that the initiation of water fluoridation results in reductions in dmft of 1.81 (95% CI 1.31 to 2.31; 9 studies at high risk of bias, 44,268 participants) and in DMFT of 1.16 (95% CI 0.72 to 1.61; 10 studies at high risk of bias, 78,764 participants). This translates to a 35% reduction in dmft and a 26% reduction in DMFT compared to the median control group mean values. 
There were also increases in the percentage of caries free children of 15% (95% CI 11% to 19%; 10 studies, 39,966 participants) in deciduous dentition and 14% (95% CI 5% to 23%; 8 studies, 53,538 participants) in permanent dentition. The majority of studies (71%) were conducted prior to 1975 and the widespread introduction of the use of fluoride toothpaste.\nThere is insufficient information to determine whether initiation of a water fluoridation programme results in a change in disparities in caries across socioeconomic status (SES) levels.\nThere is insufficient information to determine the effect of stopping water fluoridation programmes on caries levels.\nNo studies that aimed to determine the effectiveness of water fluoridation for preventing caries in adults met the review's inclusion criteria.\nWith regard to dental fluorosis, we estimated that for a fluoride level of 0.7 ppm the percentage of participants with fluorosis of aesthetic concern was approximately 12% (95% CI 8% to 17%; 40 studies, 59,630 participants). This increases to 40% (95% CI 35% to 44%) when considering fluorosis of any level (detected under highly controlled, clinical conditions; 90 studies, 180,530 participants). Over 97% of the studies were at high risk of bias and there was substantial between-study variation.\nAuthors' conclusions\nThere is very little contemporary evidence, meeting the review's inclusion criteria, that has evaluated the effectiveness of water fluoridation for the prevention of caries.\nThe available data come predominantly from studies conducted prior to 1975, and indicate that water fluoridation is effective at reducing caries levels in both deciduous and permanent dentition in children. Our confidence in the size of the effect estimates is limited by the observational nature of the study designs, the high risk of bias within the studies and, importantly, the applicability of the evidence to current lifestyles. The decision to implement a water fluoridation programme relies upon an understanding of the population's oral health behaviour (e.g. use of fluoride toothpaste), the availability and uptake of other caries prevention strategies, their diet and consumption of tap water and the movement/migration of the population. There is insufficient evidence to determine whether water fluoridation results in a change in disparities in caries levels across SES. We did not identify any evidence, meeting the review's inclusion criteria, to determine the effectiveness of water fluoridation for preventing caries in adults.\nThere is insufficient information to determine the effect on caries levels of stopping water fluoridation programmes.\nThere is a significant association between dental fluorosis (of aesthetic concern or all levels of dental fluorosis) and fluoride level. The evidence is limited due to high risk of bias within the studies and substantial between-study variation.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We assessed each study for the quality of the methods used and how thoroughly the results were reported. We had concerns about the methods used, or the reporting of the results, in the vast majority (97%) of the studies. For example, many did not take full account of all the factors that could affect children’s risk of tooth decay or dental fluorosis. There was also substantial variation between the results of the studies, many of which took place before the introduction of fluoride toothpaste. This makes it difficult to be confident of the size of the effects of water fluoridation on tooth decay or the numbers of people likely to have dental fluorosis at different levels of fluoride in the water."
}
]
|
query-laysum | 5651 | [
{
"role": "user",
"content": "Abstract: Background\nElectromechanical- and robot-assisted gait-training devices are used in rehabilitation and might help to improve walking after stroke. This is an update of a Cochrane Review first published in 2007 and previously updated in 2017.\nObjectives\nPrimary\n• To determine whether electromechanical- and robot-assisted gait training versus normal care improves walking after stroke\nSecondary\n• To determine whether electromechanical- and robot-assisted gait training versus normal care after stroke improves walking velocity, walking capacity, acceptability, and death from all causes until the end of the intervention phase\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (last searched 6 January 2020); the Cochrane Central Register of Controlled Trials (CENTRAL; 2020 Issue 1), in the Cochrane Library; MEDLINE in Ovid (1950 to 6 January 2020); Embase (1980 to 6 January 2020); the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 20 November 2019); the Allied and Complementary Medicine Database (AMED; 1985 to 6 January 2020); Web of Science (1899 to 7 January 2020); SPORTDiscus (1949 to 6 January 2020); the Physiotherapy Evidence Database (PEDro; searched 7 January 2020); and the engineering databases COMPENDEX (1972 to 16 January 2020) and Inspec (1969 to 6 January 2020). We handsearched relevant conference proceedings, searched trials and research registers, checked reference lists, and contacted trial authors in an effort to identify further published, unpublished, and ongoing trials.\nSelection criteria\nWe included all randomised controlled trials and randomised controlled cross-over trials in people over the age of 18 years diagnosed with stroke of any severity, at any stage, in any setting, evaluating electromechanical- and robot-assisted gait training versus normal care.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, assessed methodological quality and risk of bias, and extracted data. We assessed the quality of evidence using the GRADE approach. The primary outcome was the proportion of participants walking independently at follow-up.\nMain results\nWe included in this review update 62 trials involving 2440 participants. Electromechanical-assisted gait training in combination with physiotherapy increased the odds of participants becoming independent in walking (odds ratio (random effects) 2.01, 95% confidence interval (CI) 1.51 to 2.69; 38 studies, 1567 participants; P < 0.00001; I² = 0%; high-quality evidence) and increased mean walking velocity (mean difference (MD) 0.06 m/s, 95% CI 0.02 to 0.10; 42 studies, 1600 participants; P = 0.004; I² = 60%; low-quality evidence) but did not improve mean walking capacity (MD 10.9 metres walked in 6 minutes, 95% CI -5.7 to 27.4; 24 studies, 983 participants; P = 0.2; I² = 42%; moderate-quality evidence). Electromechanical-assisted gait training did not increase the risk of loss to the study during intervention nor the risk of death from all causes. Results must be interpreted with caution because (1) some trials investigated people who were independent in walking at the start of the study, (2) we found variation between trials with respect to devices used and duration and frequency of treatment, and (3) some trials included devices with functional electrical stimulation. 
Post hoc analysis showed that people who are non-ambulatory at the start of the intervention may benefit but ambulatory people may not benefit from this type of training. Post hoc analysis showed no differences between the types of devices used in studies regarding ability to walk but revealed differences between devices in terms of walking velocity and capacity.\nAuthors' conclusions\nPeople who receive electromechanical-assisted gait training in combination with physiotherapy after stroke are more likely to achieve independent walking than people who receive gait training without these devices. We concluded that eight patients need to be treated to prevent one dependency in walking. Specifically, people in the first three months after stroke and those who are not able to walk seem to benefit most from this type of intervention. The role of the type of device is still not clear. Further research should consist of large definitive pragmatic phase 3 trials undertaken to address specific questions about the most effective frequency and duration of electromechanical-assisted gait training, as well as how long any benefit may last. Future trials should consider time post stroke in their trial design.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Electronic and robotic devices plus physiotherapy help people walk independently again after a stroke. They may particularly benefit people in the first three months after a stroke, as well as people who are unable to walk.\nWe need more research to find out how often, and for how long, these devices should be used."
}
]
|
query-laysum | 5652 | [
{
"role": "user",
"content": "Abstract: Background\nIndividuals dying of coronavirus disease 2019 (COVID-19) may experience distressing symptoms such as breathlessness or delirium. Palliative symptom management can alleviate symptoms and improve the quality of life of patients. Various treatment options such as opioids or breathing techniques have been discussed for use in COVID-19 patients. However, guidance on symptom management of COVID-19 patients in palliative care has often been derived from clinical experiences and guidelines for the treatment of patients with other illnesses. An understanding of the effectiveness of pharmacological and non-pharmacological palliative interventions to manage specific symptoms of COVID-19 patients is required.\nObjectives\nTo assess the efficacy and safety of pharmacological and non-pharmacological interventions for palliative symptom control in individuals with COVID-19.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (including Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (PubMed), Embase, ClinicalTrials.gov, World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), medRxiv); Web of Science Core Collection (Science Citation Index Expanded, Emerging Sources); CINAHL; WHO COVID-19 Global literature on coronavirus disease; and COAP Living Evidence on COVID-19 to identify completed and ongoing studies without language restrictions until 23 March 2021.\nWe screened the reference lists of relevant review articles and current treatment guidelines for further literature.\nSelection criteria\nWe followed standard Cochrane methodology as outlined in the Cochrane Handbook for Systematic Reviews of Interventions.\nWe included studies evaluating palliative symptom management for individuals with a confirmed diagnosis of COVID-19 receiving interventions for palliative symptom control, with no restrictions regarding comorbidities, age, gender, or ethnicity. Interventions comprised pharmacological as well as non-pharmacological treatment (e.g. acupressure, physical therapy, relaxation, or breathing techniques). We searched for the following types of studies: randomized controlled trials (RCT), quasi-RCTs, controlled clinical trials, controlled before-after studies, interrupted time series (with comparison group), prospective cohort studies, retrospective cohort studies, (nested) case-control studies, and cross-sectional studies.\nWe searched for studies comparing pharmacological and non-pharmacological interventions for palliative symptom control with standard care.\nWe excluded studies evaluating palliative interventions for symptoms caused by other terminal illnesses. If studies enrolled populations with or exposed to multiple diseases, we would only include these if the authors provided subgroup data for individuals with COVID-19. We excluded studies investigating interventions for symptom control in a curative setting, for example patients receiving life-prolonging therapies such as invasive ventilation.\nData collection and analysis\nWe used a modified version of the Newcastle Ottawa Scale for non-randomized studies of interventions (NRSIs) to assess bias in the included studies. 
We included the following outcomes: symptom relief (primary outcome); quality of life; symptom burden; satisfaction of patients, caregivers, and relatives; serious adverse events; and grade 3 to 4 adverse events.\nWe rated the certainty of evidence using the GRADE approach.\nAs meta-analysis was not possible, we used tabulation to synthesize the studies and histograms to display the outcomes.\nMain results\nOverall, we identified four uncontrolled retrospective cohort studies investigating pharmacological interventions for palliative symptom control in hospitalized patients and patients in nursing homes. None of the studies included a comparator. We rated the risk of bias high across all studies. We rated the certainty of the evidence as very low for the primary outcome symptom relief, downgrading mainly for high risk of bias due to confounding and unblinded outcome assessors.\nPharmacological interventions for palliative symptom control\nWe identified four uncontrolled retrospective cohort studies (five references) investigating pharmacological interventions for palliative symptom control. Two references used the same register to form their cohorts, and study investigators confirmed a partial overlap of participants. We therefore do not know the exact number of participants, but individual reports included 61 to 2105 participants. Participants received multimodal pharmacological interventions: opioids, neuroleptics, anticholinergics, and benzodiazepines for relieving dyspnea (breathlessness), delirium, anxiety, pain, audible upper airway secretions, respiratory secretions, nausea, cough, and unspecified symptoms.\nPrimary outcome: symptom relief\nAll identified studies reported this outcome. For all symptoms (dyspnea, delirium, anxiety, pain, audible upper airway secretions, respiratory secretions, nausea, cough, and unspecified symptoms), a majority of interventions were rated as completely or partially effective by outcome assessors (treating clinicians or nursing staff). Interventions used in the studies were opioids, neuroleptics, anticholinergics, and benzodiazepines.\nWe are very uncertain about the effect of pharmacological interventions on symptom relief (very low-certainty evidence). The initial rating of the certainty of evidence was low since we only identified uncontrolled NRSIs. Our main reason for downgrading the certainty of evidence was high risk of bias due to confounding and unblinded outcome assessors. We therefore did not find evidence to confidently support or refute whether pharmacological interventions may be effective for palliative symptom relief in COVID-19 patients.\nSecondary outcomes\nWe planned to include the following outcomes: quality of life; symptom burden; satisfaction of patients, caregivers, and relatives; serious adverse events; and grade 3 to 4 adverse events.\nWe did not find any data for these outcomes, or any other information on the efficacy and safety of used interventions.\nNon-pharmacological interventions for palliative symptom control\nNone of the identified studies used non-pharmacological interventions for palliative symptom control.\nAuthors' conclusions\nWe found very low certainty evidence for the efficacy of pharmacological interventions for palliative symptom relief in COVID-19 patients. We found no evidence on the safety of pharmacological interventions or efficacy and safety of non-pharmacological interventions for palliative symptom control in COVID-19 patients. 
The evidence presented here has no specific implications for palliative symptom control in COVID-19 patients because we cannot draw any conclusions about the effectiveness or safety based on the identified evidence. More evidence is needed to guide clinicians, nursing staff, and caregivers when treating symptoms of COVID-19 patients at the end of life. Specifically, future studies ought to investigate palliative symptom control in prospectively registered studies, using an active-controlled setting, assess patient-reported outcomes, and clearly define interventions.\nThe publication of the results of ongoing studies will necessitate an update of this review. The conclusions of an updated review could differ from those of the present review and may allow for a better judgement regarding pharmacological and non-pharmacological interventions for palliative symptom control in COVID-19 patients.\n\nGiven the provided abstract, please respond to the following query: \"What was the aim of our review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To explore how well different interventions (drugs and non-drugs) work for the treatment of palliative symptoms in COVID-19 patients at the end of life. We included patients of all ages and with all comorbidities (additional medical conditions)."
}
]
|
query-laysum | 5653 | [
{
"role": "user",
"content": "Abstract: Background\nAlcohol use in young people is a risk factor for a range of short- and long-term harms and is a cause of concern for health services, policy-makers, youth workers, teachers, and parents.\nObjectives\nTo assess the effectiveness of universal, selective, and indicated family-based prevention programmes in preventing alcohol use or problem drinking in school-aged children (up to 18 years of age).\nSpecifically, on these outcomes, the review aimed:\n• to assess the effectiveness of universal family-based prevention programmes for all children up to 18 years (‘universal interventions’);\n• to assess the effectiveness of selective family-based prevention programmes for children up to 18 years at elevated risk of alcohol use or problem drinking (‘selective interventions’); and\n• to assess the effectiveness of indicated family-based prevention programmes for children up to 18 years who are currently consuming alcohol, or who have initiated use or regular use (‘indicated interventions’).\nSearch methods\nWe identified relevant evidence from the Cochrane Central Register of Controlled Trials (CENTRAL), in the Cochrane Library, MEDLINE (Ovid 1966 to June 2018), Embase (1988 to June 2018), Education Resource Information Center (ERIC; EBSCOhost; 1966 to June 2018), PsycINFO (Ovid 1806 to June 2018), and Google Scholar. We also searched clinical trial registers and handsearched references of topic-related systematic reviews and the included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster RCTs (C-RCTs) involving the parents of school-aged children who were part of the general population with no known risk factors (universal interventions), were at elevated risk of alcohol use or problem drinking (selective interventions), or were already consuming alcohol (indicated interventions). Psychosocial or educational interventions involving parents with or without involvement of children were compared with no intervention, or with alternate (e.g. child only) interventions, allowing experimental isolation of parent components.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 46 studies (39,822 participants), with 27 classified as universal, 12 as selective, and seven as indicated. We performed meta-analyses according to outcome, including studies reporting on the prevalence, frequency, or volume of alcohol use. The overall quality of evidence was low or very low, and there was high, unexplained heterogeneity.\nUpon comparing any family intervention to no intervention/standard care, we found no intervention effect on the prevalence (standardised mean difference (SMD) 0.00, 95% confidence interval (CI) -0.08 to 0.08; studies = 12; participants = 7490; I² = 57%; low-quality evidence) or frequency (SMD -0.31, 95% CI -0.83 to 0.21; studies = 8; participants = 1835; I² = 96%; very low-quality evidence) of alcohol use in comparison with no intervention/standard care. 
The effect of any parent/family interventions on alcohol consumption volume compared with no intervention/standard care was very small (SMD -0.14, 95% CI -0.27 to 0.00; studies = 5; participants = 1825; I² = 42%; low-quality evidence).\nWhen comparing parent/family and adolescent interventions versus interventions with young people alone, we found no difference in alcohol use prevalence (SMD -0.39, 95% CI -0.91 to 0.14; studies = 4; participants = 5640; I² = 99%; very low-quality evidence) or frequency (SMD -0.16, 95% CI -0.42 to 0.09; studies = 4; participants = 915; I² = 73%; very low-quality evidence). For this comparison, no trials reporting on the volume of alcohol use could be pooled in meta-analysis.\nIn general, the results remained consistent in separate subgroup analyses of universal, selective, and indicated interventions. No adverse effects were reported.\nAuthors' conclusions\nThe results of this review indicate that there are no clear benefits of family-based programmes for alcohol use among young people. Patterns differ slightly across outcomes, but overall, the variation, heterogeneity, and number of analyses performed preclude any conclusions about intervention effects. Additional independent studies are required to strengthen the evidence and clarify the marginal effects observed.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Alcohol use puts young people at increased risk for a range of short- and long-term harms and is a cause of concern for health services, policy-makers, youth workers, teachers, and parents."
}
]
|
query-laysum | 5654 | [
{
"role": "user",
"content": "Abstract: Background\nChordoma is a rare primary bone tumour with a high propensity for local recurrence. Surgical resection is the mainstay of treatment, but complete resection is often morbid due to tumour location. Similarly, the dose of radiotherapy (RT) that surrounding healthy organs can tolerate is frequently below that required to provide effective tumour control. Therefore, clinicians have investigated different radiation delivery techniques, often in combination with surgery, aimed to improve the therapeutic ratio.\nObjectives\nTo assess the effects and toxicity of proton and photon adjuvant radiotherapy (RT) in people with biopsy-confirmed chordoma.\nSearch methods\nWe searched CENTRAL (2021, Issue 4); MEDLINE Ovid (1946 to April 2021); Embase Ovid (1980 to April 2021) and online registers of clinical trials, and abstracts of scientific meetings up until April 2021.\nSelection criteria\nWe included adults with pathologically confirmed primary chordoma, who were irradiated with curative intent, with protons or photons in the form of fractionated RT, SRS (stereotactic radiosurgery), SBRT (stereotactic body radiotherapy), or IMRT (intensity modulated radiation therapy). We limited analysis to studies that included outcomes of participants treated with both protons and photons.\nData collection and analysis\nThe primary outcomes were local control, mortality, recurrence, and treatment-related toxicity. We followed current standard Cochrane methodological procedures for data extraction, management, and analysis. We used the ROBINS-I tool to assess risk of bias, and GRADE to assess the certainty of the evidence.\nMain results\nWe included six observational studies with 187 adult participants. We judged all studies to be at high risk of bias. Four studies were included in meta-analysis.\nWe are uncertain if proton compared to photon therapy worsens or has no effect on local control (hazard ratio (HR) 5.34, 95% confidence interval (CI) 0.66 to 43.43; 2 observational studies, 39 participants; very low-certainty evidence).\nMedian survival time ranged between 45.5 months and 66 months. We are uncertain if proton compared to photon therapy reduces or has no effect on mortality (HR 0.44, 95% CI 0.13 to 1.57; 4 observational studies, 65 participants; very low-certainty evidence).\nMedian recurrence-free survival ranged between 3 and 10 years. We are uncertain whether proton compared to photon therapy reduces or has no effect on recurrence (HR 0.34, 95% CI 0.10 to 1.17; 4 observational studies, 94 participants; very low-certainty evidence).\nOne study assessed treatment-related toxicity and reported that four participants on proton therapy developed radiation-induced necrosis in the temporal bone, radiation-induced damage to the brainstem, and chronic mastoiditis; one participant on photon therapy developed hearing loss, worsening of the seventh cranial nerve paresis, and ulcerative keratitis (risk ratio (RR) 1.28, 95% CI 0.17 to 9.86; 1 observational study, 33 participants; very low-certainty evidence). There is no evidence that protons led to reduced toxicity.\nThere is very low-certainty evidence to show an advantage for proton therapy in comparison to photon therapy with respect to local control, mortality, recurrence, and treatment related toxicity.\nAuthors' conclusions\nThere is a lack of published evidence to confirm a clinical difference in effect with either proton or photon therapy for the treatment of chordoma. 
As radiation techniques evolve, multi-institutional data should be collected prospectively and published, to help identify persons that would most benefit from the available radiation treatment techniques.\n\nGiven the provided abstract, please respond to the following query: \"What are the main findings?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included six studies that enrolled 187 adults with chordoma, four of which we included in a meta-analysis. We are uncertain whether proton radiation therapy compared to conventional photon radiation treatment improves local control, because our confidence in the evidence was very low (2 observational studies, 39 participants). We are uncertain whether proton therapy compared to photon therapy reduces mortality (4 observational studies, 65 participants), recurrence (4 observational studies, 94 participants), or treatment-related toxicity (1 observational study, 33 participants)."
}
]
|
query-laysum | 5655 | [
{
"role": "user",
"content": "Abstract: Background\nIron-deficiency anaemia is common during childhood. Iron administration has been claimed to increase the risk of malaria.\nObjectives\nTo evaluate the effects and safety of iron supplementation, with or without folic acid, in children living in areas with hyperendemic or holoendemic malaria transmission.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), published in the Cochrane Library, MEDLINE (up to August 2015) and LILACS (up to February 2015). We also checked the metaRegister of Controlled Trials (mRCT) and World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) up to February 2015. We contacted the primary investigators of all included trials, ongoing trials, and those awaiting assessment to ask for unpublished data and further trials. We scanned references of included trials, pertinent reviews, and previous meta-analyses for additional references.\nSelection criteria\nWe included individually randomized controlled trials (RCTs) and cluster RCTs conducted in hyperendemic and holoendemic malaria regions or that reported on any malaria-related outcomes that included children younger than 18 years of age. We included trials that compared orally administered iron, iron with folic acid, and iron with antimalarial treatment versus placebo or no treatment. We included trials of iron supplementation or fortification interventions if they provided at least 80% of the Recommended Dietary Allowance (RDA) for prevention of anaemia by age. Antihelminthics could be administered to either group, and micronutrients had to be administered equally to both groups.\nData collection and analysis\nThe primary outcomes were clinical malaria, severe malaria, and death from any cause. We assessed the risk of bias in included trials with domain-based evaluation and assessed the quality of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We performed a fixed-effect meta-analysis for all outcomes and random-effects meta-analysis for hematological outcomes, and adjusted analyses for cluster RCTs. We based the subgroup analyses for anaemia at baseline, age, and malaria prevention or management services on trial-level data.\nMain results\nThirty-five trials (31,955 children) met the inclusion criteria. Overall, iron does not cause an excess of clinical malaria (risk ratio (RR) 0.93, 95% confidence intervals (CI) 0.87 to 1.00; 14 trials, 7168 children, high quality evidence). Iron probably does not cause an excess of clinical malaria in both populations where anaemia is common and those in which anaemia is uncommon. In areas where there are prevention and management services for malaria, iron (with or without folic acid) may reduce clinical malaria (RR 0.91, 95% CI 0.84 to 0.97; seven trials, 5586 participants, low quality evidence), while in areas where such services are unavailable, iron (with or without folic acid) may increase the incidence of malaria, although the lower CIs indicate no difference (RR 1.16, 95% CI 1.02 to 1.31; nine trials, 19,086 participants, low quality evidence). Iron supplementation does not cause an excess of severe malaria (RR 0.90, 95% CI 0.81 to 0.98; 6 trials, 3421 children, high quality evidence). We did not observe any differences for deaths (control event rate 1%, low quality evidence). 
Iron and antimalarial treatment reduced clinical malaria (RR 0.54, 95% CI 0.43 to 0.67; three trials, 728 children, high quality evidence). Overall, iron resulted in fewer anaemic children at follow-up, and the final average change in haemoglobin from baseline was higher with iron.\nAuthors' conclusions\nIron treatment does not increase the risk of clinical malaria when regular malaria prevention or management services are provided. Where resources are limited, iron can be administered without screening for anaemia or for iron deficiency, as long as malaria prevention or management services are provided efficiently.\n\nGiven the provided abstract, please respond to the following query: \"Main findings of the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cochrane researchers searched the available evidence up to 30 August 2015 and included 35 trials (31,955 children). Iron did not increase the risk of malaria, indicated by fever and the presence of parasites in the blood (high quality evidence). There was no increased risk of death among children treated with iron, although the quality of the evidence for this was low. Among children treated with iron, there was no increased risk of severe malaria (high quality evidence). Although it is hypothesized that iron supplementation might harm children who do not have anaemia living in malarial areas, there is probably no increased risk for malaria in these children (moderate quality evidence). In areas where health services are sufficient to help prevent and treat malaria, giving iron supplements (with or without folic acid) may reduce clinical malaria. In areas where these services are not available, iron supplementation (with or without folic acid) may increase the number of children with clinical malaria (low quality evidence). Overall, iron resulted in fewer anaemic children at follow up, and the end average change in haemoglobin from base line was higher with iron."
}
]
|
query-laysum | 5656 | [
{
"role": "user",
"content": "Abstract: Background\nEarly discharge of stable preterm infants still requiring gavage feeds offers the benefits of uniting families sooner and reducing healthcare and family costs compared with discharge home when on full sucking feeds. Potential disadvantages of early discharge include increased care burden for the family and risk of complications related to gavage feeding.\nObjectives\nTo determine the effects of a policy of early discharge of stable preterm infants with home support of gavage feeding compared with a policy of discharge of such infants when they have reached full sucking feeds.\nWe planned subgroup analyses to determine whether safety and efficacy outcomes are altered by the type of support received (outpatient visits vs home support) or by the maturity of the infants discharged (gestational age ≤ 28 weeks at birth or birth weight ≤ 1000 grams).\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group, together with searches of the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 3), the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to March 2015), EMBASE (1980 to March 2015) and MEDLINE (1950 to March 2015). We found no new trials.\nSelection criteria\nWe included all randomised and quasi-randomised trials among infants born at < 37 weeks and requiring no intravenous nutrition at the point of discharge. Trials were required to compare early discharge home with gavage feeds and healthcare support versus later discharge home when full sucking feeds were attained.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. We conducted study authors for additional information. We performed data analysis in accordance with the standards of the Cochrane Neonatal Review Group.\nMain results\nWe included in the review data from one quasi-randomised trial with 88 infants from 75 families. Infants in the early discharge programme with home gavage feeding had a mean hospital stay that was 9.3 days shorter (mean difference (MD) -9.3, 95% confidence interval (CI) -18.49 to -0.11) than that of infants in the control group. Infants in the early discharge programme also had lower risk of clinical infection during the home gavage period compared with those in the control group spending corresponding time in hospital (risk ratio 0.35, 95% CI 0.17 to 0.69). No significant differences were noted between groups in duration and extent of breast feeding, weight gain, re-admission within the first 12 months post discharge from the home gavage programme or from hospital, scores reflecting parental satisfaction or overall health service use.\nAuthors' conclusions\nExperimental evidence on the benefits and risks for preterm infants of early discharge from hospital with home gavage feeding compared with later discharge upon attainment of full sucking feeds is limited to the results of one small quasi-randomised controlled trial. High-quality trials with concealed allocation, complete follow-up of all randomly assigned infants and adequate sample size are needed before practice recommendations can be made.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Infants in the early discharge programme with home gavage feeds had a mean hospital stay that was approximately nine days shorter than that of infants in the control group. Infants in the early discharge programme also had lower risk of clinical infection during the home gavage period compared with infants in the control group during the corresponding time in hospital. No significant differences were observed between groups in duration and extent of breast feeding, weight gain, re-admission within the first 12 months post discharge from the home gavage programme or from hospital, scores reflecting parental satisfaction or health service use."
}
]
|
query-laysum | 5657 | [
{
"role": "user",
"content": "Abstract: Background\nMyelodysplastic syndrome (MDS) is one of the most frequent haematologic malignancies of the elderly population and characterised by progenitor cell dysplasia with ineffective haematopoiesis and a high rate of transformation to acute myeloid leukaemia (AML). Thrombocytopenia represents a common problem for patients with MDS. ranging from mild to serious bleeding events and death. To manage thrombocytopenia, the current standard treatment includes platelet transfusion, unfortunately leading to a range of side effects. Thrombopoietin (TPO) mimetics represent an alternative treatment option for MDS patients with thrombocytopenia. However, it remains unclear, whether TPO mimetics influence the increase of blast cells and therefore to premature progression to AML.\nObjectives\nTo evaluate the efficacy and safety of thrombopoietin (TPO) mimetics for patients with MDS.\nSearch methods\nWe searched for randomised controlled trials in the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (January 2000 to August 2017), trials registries (ISRCTN, EU clinical trials register and clinicaltrials.gov) and conference proceedings. We did not apply any language restrictions. Two review authors independently screened search results, disagreements were solved by discussion.\nSelection criteria\nWe included randomised controlled trials comparing TPO mimetics with placebo, no further treatment or another TPO mimetic in patients with MDS of all risk groups, without gender, age or ethnicity restrictions. Additional chemotherapeutic treatment had to be equal in both arms.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of trials, disagreements were resolved by discussion. Risk ratio (RR) was used to analyse mortality during study, transformation to AML, incidence of bleeding events, transfusion requirement, all adverse events, adverse events >= grade 3, serious adverse events and platelet response. Overall survival (OS) and progression-free survival (PFS) have been extracted as hazard ratios, but could not be pooled as results were reported in heterogenous ways. Health-related quality of life and duration of thrombocytopenia would have been analysed as standardised mean differences, but no trial reported these outcomes.\nMain results\nWe did not identify any trial comparing one TPO mimetic versus another. We analysed six eligible trials involving 746 adult patients. All trials were reported as randomised and double-blind trials including male and female patients. Two trials compared TPO mimetics (romiplostim or eltrombopag) with placebo, one trial evaluated eltrombopag in addition to the hypomethylating agent azacitidine, two trials analysed romiplostim additionally to a hypomethylating agent (azacitidine or decitabine) and one trial evaluated romiplostim in addition to the immunomodulatory drug lenalidomide. There are more data on romiplostim (four included, completed, full-text trials) than on eltrombopag (two trials included: one full-text publication, one abstract publication). Due to small sample sizes and imbalances in baseline characteristics in three trials and premature termination of two studies, we judged the potential risk of bias of all included trials as high.\nDue to heterogenous reporting, we were not able to pool data for OS. Instead of that, we analysed mortality during study. 
There is little or no evidence for a difference in mortality during study for thrombopoietin mimetics compared to placebo (RR 0.97, 95% confidence interval (CI) 0.73 to 1.27, N = 6 trials, 746 patients, low-quality evidence). It is unclear whether the use of TPO mimetics induces an acceleration of transformation to AML (RR 1.02, 95% CI 0.59 to 1.77, N = 5 trials, 372 patients, very low-quality evidence). Thrombopoietin mimetics probably reduce the incidence of bleeding events (RR 0.92, 95% CI 0.86 to 0.99, N = 5 trials, 390 patients, moderate-quality evidence). This means that in the study population, 713 out of 1000 in the placebo arm will have a bleeding event, compared to 656 of 1000 (95% CI 613 to 699) in the TPO mimetics arm. There is little or no evidence that TPO mimetics significantly diminish the rate of transfusion requirement (RR 0.83, 95% CI 0.66 to 1.05, N = 4 trials, 358 patients, low-quality evidence). No studies were found that looked at quality of life or duration of thrombocytopenia.\nThere is no evidence that patients given TPO mimetics suffer from more adverse events overall (RR 1.01, 95% CI 0.96 to 1.07, N = 5 trials, 390 patients, moderate-quality evidence). There is uncertainty whether the number of serious adverse events decreases under therapy with TPO mimetics (RR 0.89, 95% CI 0.54 to 1.46, N = 4 trials, 356 patients, very low-quality evidence).\nWe identified one ongoing study and one study marked as completed (March 2015), but without publication of results for MDS patients (only results reported for AML and MDS patients together). Both studies evaluate MDS patients receiving eltrombopag in comparison to placebo.\nAuthors' conclusions\nNo trial evaluated one TPO mimetic versus another.\nSix trials including adult patients analysed one TPO mimetic versus placebo, sometimes combined with standard therapy in both arms. Given the uncertainty of the quality of evidence, meta-analyses show that there is little or no evidence for a difference in mortality during study and premature progression to AML. However, these assumptions have to be further explored. Treatment with TPO mimetics resulted in a lower number of MDS patients suffering from bleeding events. There is no evidence for a difference between study groups regarding transfusion requirement. Enlarged sample sizes and a longer follow-up of future trials should improve the estimate of safety and efficacy of TPO mimetics; moreover, health-related quality of life should be evaluated. As two ongoing studies currently investigate eltrombopag (one already completed, but without published results), we are awaiting results for this drug.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of evidence for the outcomes incidence of bleeding events and all adverse events is moderate due to high risk of bias (small sample sizes and imbalances in baseline characteristics in three trials, premature closure of two trials, selective reporting in one trial, industrial sponsorship). The quality of evidence for the outcomes mortality during study and transfusion requirements is low, due to imprecision through small number of events and high risk of bias. It is very low for the outcomes transformation to AML and serious adverse events, because of very small number of events, high risk of bias and heterogeneity between trials."
}
]
|