Three trials, conducted between 1982 and 1984, met the inclusion criteria (n=17,965). Two trials examined the effect of driver education on licensing. In the trial by Stock (USA) 87% of students in the driver education group obtained their driving licence as compared to 84.3% in the control group (RR 1.04; 95% CI 1.02 to 1.05). In the trial by Wynne-Jones (New Zealand) the time from trial enrolment to licensing was 111 days in males receiving driver education compared with 300 days in males who did not receive driver education, and 105 days in females receiving driver education compared with 415 days in females who did not receive driver education. All three trials examined the effect of driver education on road traffic crashes. In the trial by Strang (Australia), 42% of students in each group had one or more crashes since being licensed (RR 1.01, 95% CI 0.83 to 1.23). In the trial by Stock, the number of students involved in one or more crashes as a driver was 27.5% in the driver education group compared to 26.7% in the control group (RR 1.03; 95% CI 0.98 to 1.09). In the trial by Wynne-Jones, the number of students who experienced crashes was 16% in the driver education group as compared to 14.5% in the control group (RR 1.10; 95% CI 0.76 to 1.59). The results show that driver education leads to early licensing. They provide no evidence that driver education reduces road crash involvement, and suggest that it may lead to a modest but potentially important increase in the proportion of teenagers involved in traffic crashes.
The results of this systematic review show that driver education in schools leads to early licensing. They provide no evidence that driver education reduces road crash involvement, and suggest that it may lead to a modest but potentially important increase in the proportion of teenagers involved in traffic crashes.
We identified four randomised controlled trials, using different intraoperative imaging technologies: iMRI (2 trials including 58 and 14 participants, respectively); fluorescence-guided surgery with 5-aminolevulinic acid (5-ALA) (1 trial, 322 participants); and neuronavigation (1 trial, 45 participants). We identified one ongoing trial assessing iMRI with a planned sample size of 304 participants for which results are expected to be published around autumn 2018. We identified no trials for ultrasound. Meta-analysis was not appropriate due to differences in the tumours included (eloquent versus non-eloquent locations) and variations in the image guidance tools used in the control arms (usually selective utilisation of neuronavigation). There were significant concerns regarding risk of bias in all the included studies. All studies included people with high-grade glioma only. Extent of resection was increased in one trial of iMRI (risk ratio (RR) of incomplete resection 0.13, 95% confidence interval (CI) 0.02 to 0.96; 1 study, 49 participants; very low-quality evidence) and in the trial of 5-ALA (RR of incomplete resection 0.55, 95% CI 0.42 to 0.71; 1 study, 270 participants; low-quality evidence). The other trial assessing iMRI was stopped early after an unplanned interim analysis including 14 participants, therefore the trial provides very low-quality evidence. The trial of neuronavigation provided insufficient data to evaluate the effects on extent of resection. Reporting of adverse events was incomplete and suggestive of significant reporting bias (very low-quality evidence). Overall, reported events were low in most trials. There was no clear evidence of improvement in overall survival with 5-ALA (hazard ratio 0.83, 95% CI 0.62 to 1.07; 1 study, 270 participants; low-quality evidence). Progression-free survival data were not available in an appropriate format for analysis. 
Data for quality of life were only available for one study and suffered from significant attrition bias (very low-quality evidence). Intra-operative imaging technologies, specifically iMRI and 5-ALA, may be of benefit in maximising extent of resection in participants with high grade glioma. However, this is based on low to very low quality evidence, and is therefore very uncertain. The short- and long-term neurological effects are uncertain. Effects of image-guided surgery on overall survival, progression-free survival, and quality of life are unclear. A brief economic commentary found limited economic evidence for the equivocal use of iMRI compared with conventional surgery. In terms of costs, a non-systematic review of economic studies suggested that compared with standard surgery use of image-guided surgery has an uncertain effect on costs and that 5-aminolevulinic acid was more costly. Further research, including studies of ultrasound-guided surgery, is needed.
Our search strategy is up to date as of July 2017. We found four trials looking at three different types of tools to help improve the amount of tumour that is removed. The tumour being evaluated was high-grade glioma. Imaging interventions used during surgery included:
• magnetic resonance imaging (iMRI) during surgery to assess the amount of remaining tumour;
• fluorescent dye (5-aminolevulinic acid) to mark out the tumour; or
• imaging before surgery to map out the location of a tumour, which was then used at the time of surgery to guide the surgery (neuronavigation).
All the studies had compromised methods, which could mean their conclusions were biased. Some studies were funded by the manufacturers of the image guidance technology being evaluated. We found low- to very low-quality evidence that use of image-guided surgery may result in more of the tumour being removed surgically in some people. The short- and long-term neurological effects are uncertain. We did not have the data to determine whether any of the evaluated technologies affect overall survival, time until disease progression, or quality of life. There was very low-quality evidence for neuronavigation, and we identified no trials for ultrasound guidance. In terms of costs, a non-systematic review of economic studies suggested that compared with standard surgery, use of image-guided surgery has an uncertain effect on costs and that 5-aminolevulinic acid was more costly than conventional surgery. Evidence for intraoperative imaging technology for use in removing brain tumours is sparse and of low to very low quality. Further research is needed to assess three main questions.
1. Is removing more of the tumour better for the patient in the long term?
2. What are the risks of causing a patient to have worse symptoms by taking out more of the tumour?
3. How does resection affect a patient's quality of life?
Two randomised controlled trials contributed to this literature, enrolling 586 participants, and found no critical difference between TDF and AZT with regard to serious adverse events or virologic response. The trials did find higher rates of adherence and immunologic response in TDF-containing regimens compared with those containing AZT. The quality of the literature to support this conclusion is moderate to high. Drug resistance was more common for TDF than AZT, but the quality of this literature is low, with only one study reporting this outcome. It should be noted that the two studies compared two different drugs in addition to TDF and AZT; one had lamivudine (3TC) and nevirapine (NVP) and the other had emtricitabine (FTC) and efavirenz (EFV). We conclude that for the critical outcomes of virologic response and serious adverse events, initial ART regimens containing TDF are equivalent to those containing AZT. However, TDF is superior to AZT in terms of immunologic response and adherence, although resistance emerged more frequently with TDF. How much the other drugs in the regimens contributed to these findings is unclear, and true head-to-head trials are still warranted. The role of each drug in initial ART will likely be driven by their specific toxicities.
The purpose of this review was to assess which of these two medications was the best for initial treatment for people living with HIV, and through our search we identified two randomised controlled trials. We did not find any critical difference between the two medications with regard to serious adverse events or virologic response, but did find that TDF is superior to AZT in terms of immunologic response and adherence, although resistance emerged more frequently with TDF. However, these two studies are not directly comparable because they used two different additional drugs alongside TDF and AZT. Future studies and recommendations should focus on specific toxicities and tolerability when comparing these two medications.
We included 16 randomised trials. The number of deaths was 254 in the tacrolimus group (1899 patients) and 302 in the cyclosporin group (1914 patients). At one year, mortality (RR 0.85, 95% CI 0.73 to 0.99) and graft loss (RR 0.73, 95% CI 0.61 to 0.86) were significantly reduced in tacrolimus-treated recipients. Tacrolimus reduced the number of recipients with acute rejection (RR 0.81, 95% CI 0.75 to 0.88), and steroid-resistant rejection (RR 0.54, 95% CI 0.47 to 0.74) in the first year. Differences were not seen with respect to lymphoproliferative disorder or de-novo dialysis rates, but more de-novo insulin-requiring diabetes mellitus (RR 1.38, 95% CI 1.01 to 1.86) occurred in the tacrolimus group. More patients were withdrawn from cyclosporin therapy than from tacrolimus (RR 0.57, 95% CI 0.49 to 0.66). Tacrolimus is superior to cyclosporin in improving survival (patient and graft) and preventing acute rejection after liver transplantation, but it increases the risk of post-transplant diabetes. Treating 100 recipients with tacrolimus instead of cyclosporin would avoid acute rejection and steroid-resistant rejection in nine and seven patients, respectively, and graft loss and death in five and two patients, respectively, but four additional patients would develop diabetes after liver transplantation.
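The "per 100 recipients" figures above are absolute risk differences. As a minimal sketch of the arithmetic, the death figure can be reproduced from the reported event counts (254/1899 deaths with tacrolimus versus 302/1914 with cyclosporin); the helper function name is ours, not the review's:

```python
def events_avoided_per_100(events_treat, n_treat, events_ctrl, n_ctrl):
    """Absolute risk difference (control minus treatment), per 100 patients."""
    risk_treat = events_treat / n_treat
    risk_ctrl = events_ctrl / n_ctrl
    return round((risk_ctrl - risk_treat) * 100)

# Deaths: 254/1899 with tacrolimus vs 302/1914 with cyclosporin.
print(events_avoided_per_100(254, 1899, 302, 1914))  # 2 deaths avoided per 100
```

The other per-100 figures (rejection, steroid-resistant rejection, graft loss, diabetes) rest on the same calculation, applied to the corresponding pooled event rates.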
This is a review of the clinical trials that compared patients initially prescribed one of the two anti-rejection drugs after liver transplantation. Sixteen trials (3813 participants) were included. The review shows that tacrolimus is marginally better than cyclosporin at preventing patient death and graft loss. Tacrolimus is substantially better than cyclosporin at preventing rejection. No differences were seen between the drugs with respect to adverse events (renal failure, lymphoproliferative disorder) except for diabetes mellitus, which was more common with tacrolimus. After liver transplantation more patients stayed on tacrolimus than on cyclosporin. Tacrolimus is more beneficial than cyclosporin and should be considered the treatment of choice after liver transplantation. This review does not evaluate the benefit or harm of switching from one anti-rejection drug to another.
We examined 1430 records, scrutinized 14 full-text publications and included four RCTs. Altogether 1305 participants entered the four trials, 543 participants were randomised to TT and 762 participants to ST. A total of 98% and 97% of participants finished the trials in the TT and ST groups, respectively. Two trials had a duration of follow-up between 12 and 39 months and two trials a follow-up of 5 and 10 years, respectively. Risk of bias across studies was mainly unknown for selection, performance and detection bias. Attrition bias was generally low and reporting bias high for some outcomes. In the short-term postoperative period no deaths were reported in either the TT or the ST group. However, longer-term data on all-cause mortality were not reported (1284 participants; 4 trials; moderate quality evidence). Goitre recurrence was lower in the TT group compared to ST. Goitres recurred in 0.2% (1/425) of the TT group compared to 8.4% (53/632) of the ST group (OR 0.05 (95% CI 0.01 to 0.21); P < 0.0001; 1057 participants; 3 trials; moderate quality evidence). Re-intervention due to goitre recurrence was lower in the TT group compared to ST. Re-intervention was necessary in 0.5% (1/191) of TT patients compared to 0.8% (3/379) of ST patients (OR 0.66 (95% CI 0.07 to 6.38); P = 0.72; 570 participants; 1 trial; low quality evidence). The incidence of permanent recurrent laryngeal nerve palsy was lower for ST compared with TT. Permanent recurrent laryngeal nerve palsy occurred in 0.8% (6/741) of ST patients compared to 0.7% (4/543) of TT patients (OR 1.28 (95% CI 0.38 to 4.36); P = 0.69; 1275 participants; 4 trials; low quality evidence). The incidence of permanent hypoparathyroidism was lower for ST compared with TT. Permanent hypoparathyroidism occurred in 0.1% (1/741) of ST patients compared to 0.6% (3/543) of TT patients (OR 3.09 (95% CI 0.45 to 21.36); P = 0.25; 1275 participants; 4 trials; low quality evidence).
The incidence of thyroid cancer was lower for ST compared with TT. Thyroid cancer occurred in 6.1% (41/669) of ST patients compared to 7.3% (34/465) of TT patients (OR 1.32 (95% CI 0.81 to 2.15); P = 0.27; 1134 participants; 3 trials; low quality evidence). No data on health-related quality of life or socioeconomic effects were reported in the included studies. The body of evidence on TT compared with ST is limited. Goitre recurrence is reduced following TT. The effects on other key outcomes such as re-interventions due to goitre recurrence, adverse events and thyroid cancer incidence are uncertain. New long-term RCTs with additional data such as surgeons' level of experience, treatment volume of surgical centres and details on techniques used are needed.
We included four randomised controlled trials with a total of 1305 participants. A total of 543 participants were randomised to total or near-total thyroidectomy and 762 participants to subtotal thyroidectomy. Two trials had a duration of follow-up between 12 and 39 months and two trials a follow-up of 5 and 10 years, respectively. Most participants were women and the average age was around 50 years. In the short-term period after surgery no deaths were reported in either the total thyroidectomy or the subtotal thyroidectomy group; however, longer-term data on all-cause mortality were not reported. Goitre recurrence was lower for total thyroidectomy compared to subtotal thyroidectomy: the risk for goitre recurrence was 84 per 1000 trial participants for subtotal thyroidectomy and 5 per 1000 participants (with a possible range of 1 to 19) for total thyroidectomy. There was no clear benefit or harm of either surgical technique for re-operations because of goitre recurrence, side effects like permanent recurrent laryngeal nerve palsy or development of thyroid cancer. No data on health-related quality of life or socioeconomic effects were reported in the included trials. The overall quality was low to moderate, mainly because of the small number of studies and participants as well as low rates of events, which makes it difficult to distinguish between the harms and benefits of the two surgical techniques. This evidence is up to date as of June 2015.
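The absolute figures quoted above (84 per 1000 with subtotal thyroidectomy falling to 5 per 1000, possible range 1 to 19) follow from applying the review's odds ratio of 0.05 (95% CI 0.01 to 0.21) to the baseline risk. A minimal sketch of that conversion (the function name is ours, for illustration only):

```python
def risk_from_odds_ratio(baseline_risk, odds_ratio):
    """Convert a baseline risk to odds, apply the odds ratio, convert back to risk."""
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

baseline = 84 / 1000  # goitre recurrence risk with subtotal thyroidectomy
for odds_ratio in (0.05, 0.01, 0.21):  # point estimate and 95% CI bounds
    print(round(risk_from_odds_ratio(baseline, odds_ratio) * 1000))
# prints 5, then 1, then 19 (per 1000 participants)
```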
We included 59 studies with a total of 13,342 participants. Compared to no-treatment control, MI showed a significant effect on substance use, which was strongest at post-intervention (SMD 0.79, 95% CI 0.48 to 1.09) and weaker at short (SMD 0.17, 95% CI 0.09 to 0.26) and medium follow-up (SMD 0.15, 95% CI 0.04 to 0.25). For long follow-up, the effect was not significant (SMD 0.06, 95% CI -0.16 to 0.28). There were no significant differences between MI and treatment as usual at post-intervention, short or medium follow-up. MI did better than assessment and feedback for medium follow-up (SMD 0.38, 95% CI 0.10 to 0.66). For short follow-up, there was no significant effect. For other active interventions there were no significant effects at any follow-up. There were not enough data to draw conclusions about the effects of MI on the secondary outcomes. MI can reduce the extent of substance abuse compared to no intervention. The evidence is mostly of low quality, so further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
We searched for studies that had included people with alcohol or drug problems and that had divided them by chance into MI or a control group that either received nothing or some other treatment. We included only studies that had checked video or sound recordings of the therapies in order to be certain that what was given really was MI. The results in this review are based on 59 studies. The results show that people who have received MI have reduced their use of substances more than people who have not received any treatment. However, it seems that other active treatments, treatment as usual and being assessed and receiving feedback can be as effective as motivational interviewing. There were not enough data to draw conclusions about the effects of MI on retention in treatment, readiness to change, or repeat convictions. The quality of the research forces us to be careful about our conclusions, and new research may change them.
Two studies met the inclusion criteria. No study looked at our main outcome: dog bite rates. The included studies were randomised controlled trials conducted in kindergarten and primary schools. Their methodology was of moderate quality. One study showed that the intervention group showed less 'inappropriate behaviour' when observed in the presence of a dog after a 30-minute educational intervention. Another study showed an increase in knowledge and in caution after an information programme. There is no direct evidence that educational programmes can reduce dog bite rates in children and adolescents. Educating children who are less than 10 years old in school settings could improve their knowledge, attitude and behaviour towards dogs. Educating children and adolescents in settings other than schools should also be evaluated. There is a need for high quality studies that measure dog bite rates as an outcome. To date, evidence does not suggest that educating children and adolescents is effective as a unique public health strategy to reduce dog bite injuries and their consequences.
Two studies were included in this review. Both were of moderate methodological quality and evaluated the effectiveness of educating children on preventing dog bite injuries. Both studies involved a 30-minute lesson. One study additionally compared the effect of educating the children's parents through a leaflet. One study videotaped children when they were exposed to an unknown dog and assessed their behaviour. The main outcome reported in both studies was a change in behaviour. It is unclear from this review whether educating children can reduce dog bite injuries, as dog bite rates were not reported as an outcome in either of the included studies. The effect of educating children and adolescents in settings other than schools has not been evaluated. There is a general lack of evidence about the impact of education to prevent dog bites in children and adolescents, therefore further studies that look at dog bite rates after an intervention are recommended. Education of children and adolescents should not be the only public health strategy to reduce dog bites and their dramatic consequences.
We included in this systematic review 4745 participants who were randomly assigned in 21 trials. Trials were conducted in a wide variety of clinical settings. Most trials included participants with mild to moderate anaemia and excluded participants who were allergic to iron therapy. All trials were at high risk of bias for one or more domains. We compared both oral iron and parenteral iron versus inactive controls and compared different iron preparations. The comparison between oral iron and inactive control revealed no evidence of clinical benefit in terms of mortality (RR 1.05, 95% CI 0.68 to 1.61; four studies, N = 659; very low-quality evidence). The point estimate of the mean difference in haemoglobin levels in individual studies ranged from 0.3 to 3.1 g/dL higher in the oral iron group than in the inactive control group. The proportion of participants who required blood transfusion was lower with oral iron than with inactive control (RR 0.74, 95% CI 0.55 to 0.99; three studies, N = 546; very low-quality evidence). Evidence was inadequate for determination of the effect of parenteral iron on mortality versus oral iron (RR 1.49, 95% CI 0.56 to 3.94; 10 studies, N = 2141; very low-quality evidence) or inactive control (RR 1.04, 95% CI 0.63 to 1.69; six studies, N = 1009; very low-quality evidence). Haemoglobin levels were higher with parenteral iron than with oral iron (MD -0.50 g/dL, 95% CI -0.73 to -0.27; six studies, N = 769; very low-quality evidence). The point estimate of the mean difference in haemoglobin levels in individual studies ranged between 0.3 and 3.0 g/dL higher in the parenteral iron group than in the inactive control group. 
Differences in the proportion of participants requiring blood transfusion between parenteral iron and oral iron groups (RR 0.61, 95% CI 0.24 to 1.58; two studies, N = 371; very low-quality evidence) or between parenteral iron groups and inactive controls (RR 0.84, 95% CI 0.66 to 1.06; eight studies, N = 1315; very low-quality evidence) were imprecise. Average blood volume transfused was less in the parenteral iron group than in the oral iron group (MD -0.54 units, 95% CI -0.96 to -0.12; very low-quality evidence) based on one study involving 44 people. Differences between therapies in quality of life or in the proportion of participants with serious adverse events were imprecise (very low-quality evidence). No trials reported severe allergic reactions due to parenteral iron, suggesting that these are rare. Adverse effects related to oral iron treatment included nausea, diarrhoea and constipation; most were mild. Comparisons of one iron preparation over another for mortality, haemoglobin or serious adverse events were imprecise. No information was available on quality of life. Thus, little evidence was found to support the use of one preparation or regimen over another. Subgroup analyses did not reveal consistent results; therefore we were unable to determine whether iron is useful in specific clinical situations, or whether iron therapy might be useful for people who are receiving erythropoietin.
• Very low-quality evidence suggests that oral iron might decrease the proportion of people who require blood transfusion, and no evidence indicates that it decreases mortality. Oral iron might be useful in adults who can tolerate the adverse events, which are usually mild.
• Very low-quality evidence suggests that intravenous iron results in a modest increase in haemoglobin levels compared with oral iron or inactive control without clinical benefit.
• No evidence can be found to show any advantage of one iron preparation or regimen over another.
• Additional randomised controlled trials with low risk of bias and powered to measure clinically useful outcomes such as mortality, quality of life and blood transfusion requirements are needed.
We included 4745 participants from 21 trials who received iron injections, iron tablets or no treatment. Clinical settings of these trials included loss of blood, cancer, anaemia before surgery for various reasons and heart failure, among others. Most trials included participants with mild to moderate anaemia and excluded participants who were allergic to iron therapy. Comparisons between iron tablets and no treatment revealed no evidence of clinical benefit in terms of a decrease in death or in quality of life. However, a reduction in the proportion of participants who required blood transfusion was noted among those who received iron tablets versus no treatment. Haemoglobin levels were higher in participants receiving iron tablets versus no treatment. With regard to iron injections, haemoglobin levels were higher after iron injections compared with levels reported after iron tablets or no treatment, but no evidence showed clinical benefit in terms of a decrease in death, in the number of participants requiring blood transfusion or in quality of life of participants. Although the average amount of blood transfused was less in the iron injection group than in the iron tablet group, only one trial reported this outcome, introducing significant doubt about this finding. Differences in serious complications between people who received iron versus no treatment were imprecise. No trials reported severe allergic reactions due to iron injections, suggesting that these are rare. Most of the adverse events related to iron tablet treatment were mild; effects such as nausea, diarrhoea and constipation were reported. Comparisons of the clinical benefit of one iron preparation over another were imprecise. We were unable to determine whether iron is useful in specific clinical situations because available information was not clearly presented.
In summary, no evidence is currently available to support the routine use of iron injections in anaemic adult men or in anaemic, non-pregnant adult women who have not recently given birth. Iron tablets might be useful in anaemic adult men and adult women who can tolerate the side effects. No evidence suggests any advantage of one iron preparation over another. Additional randomised controlled trials are required to determine whether iron treatment decreases death and blood transfusion requirements and improves quality of life. Such trials should be appropriately designed and should include a sufficiently large number of participants, to decrease the chance of erroneous conclusions.
We included 17 RCTs involving 3488 children, of which 16 RCTs were included in the meta-analyses. Of the 16 RCTs that reported the mean age of children, mean age overall was 2.4 years; in 4 RCTs the mean age of children participating in the trial was less than 1 year old; in 2 RCTs the mean age was between 1 and 2 years old; and in 10 RCTs the mean age was older than 2 years. Probiotic strains evaluated by the trials varied, with 11 of the included RCTs evaluating Lactobacillus-containing probiotics, and six RCTs evaluating Streptococcus-containing probiotics. The proportion of children experiencing one or more episodes of AOM during the treatment was lower for those taking probiotics (RR 0.77, 95% confidence interval (CI) 0.63 to 0.93; 16 trials; 2961 participants; number needed to treat for an additional beneficial outcome (NNTB) = 10; moderate-certainty evidence). Post hoc subgroup analysis found that among children not prone to otitis media, a lower proportion of children receiving probiotics experienced AOM (RR 0.64, 95% CI 0.49 to 0.84; 11 trials; 2227 participants; NNTB = 9; moderate-certainty evidence). However, among children who were otitis prone, there was no difference between probiotic and comparator groups (RR 0.97, 95% CI 0.85 to 1.11; 5 trials; 734 participants; high-certainty evidence). The test for subgroup differences was significant (P = 0.007). None of the included trials reported on the severity of AOM. The proportion of children experiencing adverse events did not differ between the probiotic and comparator groups (OR 1.54, 95% CI 0.60 to 3.94; 4 trials; 395 participants; low-certainty evidence). Probiotics decreased the proportion of children taking antibiotics for any infection (RR 0.66, 95% CI 0.51 to 0.86; 8 trials; 1768 participants; NNTB = 8; moderate-certainty evidence).
Test for subgroup differences (use of antibiotic specifically for AOM, use of antibiotic for infections other than AOM) was not significant. There was no difference in the mean number of school days lost (MD −0.95, 95% CI −2.47 to 0.57; 5 trials; 1280 participants; moderate-certainty evidence). There was no difference between groups in the level of compliance in taking the intervention (RR 1.02, 95% CI 0.99 to 1.05; 5 trials; 990 participants). Probiotics decreased the proportion of children having other infections (RR 0.75, 95% CI 0.65 to 0.87; 11 trials; 3610 participants; NNTB = 12; moderate-certainty evidence). Test for subgroup differences (acute respiratory infections, gastrointestinal infections) was not significant. Probiotic strains trialled and their dose, frequency, and duration of administration varied considerably across studies, which likely contributed to the substantial levels of heterogeneity. Sensitivity testing of funnel plots did not reveal publication bias. Probiotics may prevent AOM in children not prone to AOM, but the inconsistency of the subgroup analyses suggests caution in interpreting these results. Probiotics decreased the proportion of children taking antibiotics for any infection. The proportion of children experiencing adverse events did not differ between the probiotic and comparator groups. The optimal strain, duration, frequency, and timing of probiotic administration still need to be established.
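The NNTB values reported above tie the risk ratio to the control-group event rate: NNTB is the reciprocal of the absolute risk reduction. The pooled control rate is not stated in the text, so the 43% rate in this sketch is an assumed, illustrative value chosen only to show how an RR of 0.77 can yield an NNTB of about 10:

```python
def nntb(control_event_rate, risk_ratio):
    """Number needed to treat for benefit: 1 / absolute risk reduction."""
    arr = control_event_rate * (1 - risk_ratio)  # absolute risk reduction
    return round(1 / arr)

# 0.43 is an illustrative control event rate, not a figure from the review.
print(nntb(0.43, 0.77))  # 10
```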
We searched and identified 17 randomised controlled trials (studies in which participants are assigned to one of two or more treatment groups using a random method), published before October 2018. All were conducted in Europe, and collectively included 3488 children. Twelve trials included children who were not prone to acute middle ear infections, whilst five trials included children who were prone to such infections. One-third fewer children not prone to acute middle ear infection who took probiotics experienced acute middle ear infections compared to children not taking probiotics. However, probiotics may not benefit children prone to acute middle ear infection. Taking probiotics did not impact on the number of days of school that children missed. None of the studies reported on the impact of probiotics on the severity of acute middle ear infection. There was no difference between the group taking probiotics and the group not taking probiotics in the number of children experiencing adverse events (harms). The quality (or certainty) of the evidence was generally moderate (meaning that further research may change our estimates) or high (further research is unlikely to change our estimates). However, the trials differed in terms of types of probiotics evaluated, how often and for how long they were taken, and how the trial results were reported.
We have included 18 studies with 1087 participants of whom 545 received HS compared to 542 who received IS. All participants were over 18 years of age and all trials excluded high-risk patients (ASA IV). All trials assessed haematological parameters peri-operatively and up to three days post-operatively. There were three (< 1%) deaths reported in the IS group and four (< 1%) in the HS group, as assessed at 90 days in one study. There were no reports of serious adverse events. Most participants were in a positive fluid balance postoperatively (4.4 L IS and 2.5 L HS), with the excess significantly less in HS participants (MD -1.92 L, 95% confidence interval (CI) -2.61 to -1.22 L; P < 0.00001). IS participants received a mean volume of 2.4 L and HS participants received 1.49 L, significantly less fluid than IS-treated participants (MD -0.91 L, 95% CI -1.24 to -0.59 L; P < 0.00001). The maximum average serum sodium ranged between 138.5 and 159 meq/L in HS groups compared to between 136 and 143 meq/L in the IS groups. The maximum serum sodium was significantly higher in HS participants (MD 7.73, 95% CI 5.84 to 9.62; P < 0.00001), although the level remained within normal limits (136 to 146 meq/L). A high degree of heterogeneity appeared to be related to considerable differences in the dose of HS between studies. The quality of the evidence for the outcomes reported ranged from high to very low. The risk of bias for many of the studies could not be determined for performance and detection bias, criteria that we assess as likely to impact the study outcomes. HS reduces the volume of intravenous fluid required to maintain people undergoing surgery but transiently increases serum sodium. It is not known if HS affects survival and morbidity, but this should be examined in randomised controlled trials that are designed and powered to test these outcomes.
We included 18 trials that compared HS to IS in people undergoing surgery. The trials included 1087 participants: 545 received HS and 542 received IS during their operations. The participants were randomly assigned to their groups. The studies took place in 11 countries. Study participants were over the age of 18. All studies excluded people with serious health risks from participating. All studies monitored fluid levels during the operation and up to three days after. There were seven deaths in total, three (less than 1%) from the IS group and four (less than 1%) from the HS group. The risk of death was very low in these studies. The studies did not report the occurrence of serious adverse events. Thirteen studies reported the amount of fluid given. The IS group received a mean of 2.4 L and the HS group received 0.91 L less (1.49 L). The highest amount of sodium in the blood over the course of the study was reported by 16 studies. The IS group had a median of 139 meq/L and the HS group was 7.73 meq/L higher. The normal acceptable range is 136 to 146 meq/L. For deaths and adverse events, the trials lacked sufficient size and duration to adequately assess differences. We assessed the quality of evidence for deaths to be very low, and future studies are likely to change the result reported here. We assessed the evidence on the highest amount of sodium in the blood as moderate quality: measuring blood sodium during an operation is routine, and the results are unlikely to be misreported.
We included five completed trials, involving 4197 participants; all tested transdermal glyceryl trinitrate (GTN), an NO donor. The assessed risk of bias was low across the included studies; one study was double-blind, one was open-label and three were single-blind. All included studies had blinded outcome assessment. Overall, GTN did not improve the primary outcome of death or dependency at the end of trial (modified Rankin Scale (mRS) > 2, OR 0.97, 95% CI 0.86 to 1.10, 4195 participants, high-quality evidence). GTN did not improve secondary outcomes, including death (OR 0.78, 95% CI 0.40 to 1.50) and quality of life (MD -0.01, 95% CI -0.17 to 0.15) at the end of trial overall (high-quality evidence). Systolic and diastolic blood pressure (BP) were lower in people treated with GTN (MD -7.2 mmHg, 95% CI -8.6 to -5.9, and MD -3.3 mmHg, 95% CI -4.2 to -2.5, respectively) and heart rate was higher (MD 2.0 beats per minute, 95% CI 1.1 to 2.9). Headache was more common in those randomised to GTN (OR 2.37, 95% CI 1.55 to 3.62). We did not find any trials assessing other nitrates, L-arginine, or NOS-I. There is currently insufficient evidence to recommend the use of NO donors, L-arginine or NOS-I in acute stroke, and only one drug (GTN) has been assessed. In people with acute stroke, GTN reduces blood pressure and increases heart rate and headache, but does not alter clinical outcome (all based on high-quality evidence).
This review is up-to-date as of September 2016. We included five trials involving 4197 people; all trials assessed glyceryl trinitrate, a drug that is given as a skin patch and which releases nitric oxide. One study was international, while the remainder were single-centre studies. Not all trials contributed data to all outcomes. We used both unpublished and published information, where available. We did not find any trials assessing other nitrate drugs, L-arginine, or nitric oxide synthase inhibitors. Overall, glyceryl trinitrate did not improve the rate of death or dependency after acute stroke compared with no glyceryl trinitrate. Glyceryl trinitrate did not improve other outcomes, including death and quality of life. Glyceryl trinitrate lowers blood pressure, and increases heart rate and headache, in people with acute stroke. The key results are based on high-quality evidence. There is currently insufficient evidence to recommend the use of drugs affecting nitric oxide production in acute stroke. Overall, glyceryl trinitrate is inexpensive, lowers blood pressure, and increases heart rate and headache, but does not change clinical outcomes in people who have had a stroke.
Two cohort studies and four case-control studies met the inclusion criteria for the primary prevention review. The two cohort studies found no effect of exposure to calcium channel blockers on the risk of developing PD. Three case-control studies looked at the effects of exposure to calcium channel blockers and beta blockers on the risk of developing PD, but the assessment periods of exposure prior to PD onset were markedly different, and different subclasses of drugs were examined, so the results were not comparable. A protective effect of centrally acting calcium channel blockers was found in one study. Two trials and one ongoing trial met the inclusion criteria for the secondary prevention review. Each completed trial examined a different class of anti-hypertensive drug. The ongoing trial, which follows an earlier tolerability study, is examining the effects of the calcium channel blocker isradipine on motor symptoms and disease progression; results are due in 2012. Adverse effects, including intolerance of the drugs and worsening of PD symptoms, were noted in all included trials. There is currently a lack of evidence for the use of antihypertensive drugs for either the primary or secondary prevention of PD. More observational studies are required to identify potential drugs to go forward for safety and tolerability studies in people with early PD. The results of the ongoing trial will help inform further research.
Epidemiological or observational studies were used to see whether taking a blood pressure lowering drug reduces the risk of developing PD in the general population. Only six studies were found. Three studies of the same design looked at the same classes of blood pressure lowering drugs but used different methods, so the results could not be combined. Clinical trials were used to see whether taking a blood pressure lowering drug when you already have PD reduces symptoms or slows disease progression. Only two completed trials were found, each reporting the effects of a different class of drug. One trial of a drug belonging to the group called calcium channel blockers is still underway and the results, which are due in 2012, will help inform further studies. Very little information was found for either primary or secondary prevention. More studies are needed to look at different blood pressure lowering drugs and their effects on the risk of developing PD. From these studies, particular potential drugs will be identified to go forward for clinical trials in patients who have PD, to see if they improve symptoms or slow down the disease. It is important that we have more information about the side effects for people with PD who are taking these drugs and that any benefits far outweigh any harms.
A total of 15 studies of good methodological quality met the inclusion criteria by randomly assigning 7814 participants with predominantly poorly reversible, severe COPD. Data were most plentiful for the FPS combination. Exacerbation rates were significantly reduced with combination therapies (rate ratio 0.87, 95% CI 0.80 to 0.94, 6 studies, N = 5601) compared with ICS alone. The mean exacerbation rate in the control (ICS) arms of the six included studies was 1.21 exacerbations per participant per year (range 0.88 to 1.60), and we would expect this to be reduced to a rate of 1.05 (95% CI 0.97 to 1.14) among those given combination therapy. Mortality was also lower with the combination (odds ratio (OR) 0.78, 95% CI 0.64 to 0.94, 12 studies, N = 7518) than with ICS alone, but this was heavily weighted by a three-year study of FPS. When this study was removed, no significant mortality difference was noted. The reduction in exacerbations did not translate into significantly reduced rates of hospitalisation due to COPD exacerbation (OR 0.93, 95% CI 0.80 to 1.07, 10 studies, N = 7060). Lung function data favoured combination treatment in the FPS, BDF and MF/F trials, but the improvement was small. Small improvements in health-related quality of life were measured on the St George's Respiratory Questionnaire (SGRQ) with FPS or BDF compared with ICS, but this was well below the minimum clinically important difference. Adverse event profiles were similar between the two treatment arms, and rates of pneumonia when it was diagnosed by chest x-ray (CXR) were lower than those reported in earlier trials. Combination ICS and LABA offer some clinical benefits in COPD compared with ICS alone, especially for reduction in exacerbations. This review does not support the use of ICS alone when LABAs are available. Adverse events were not significantly different between treatments.
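The expected exacerbation rate quoted above is simple arithmetic: the pooled rate ratio (and its confidence limits) applied to the mean control-arm rate. A minimal sketch in Python, using only the numbers reported in this review (the helper function name is ours, for illustration; it is not part of the review's methods):

```python
# Apply a pooled rate ratio and its 95% CI bounds to a baseline rate.
# Numbers from the review text: control-arm rate 1.21 exacerbations per
# participant per year; pooled rate ratio 0.87 (95% CI 0.80 to 0.94).

def expected_rate(baseline, ratio, ci_low, ci_high):
    """Return the expected event rate and its CI bounds under a rate ratio."""
    return baseline * ratio, baseline * ci_low, baseline * ci_high

rate, low, high = expected_rate(1.21, 0.87, 0.80, 0.94)
print(f"expected rate {rate:.2f} (95% CI {low:.2f} to {high:.2f})")
# -> expected rate 1.05 (95% CI 0.97 to 1.14)
```

This reproduces the figure of 1.05 (95% CI 0.97 to 1.14) exacerbations per participant per year stated above.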
Further long-term assessments using practical outcomes of current and new 24-hour LABAs will help determine their efficacy and safety. For robust comparisons as to their relative effects, long-term head-to-head comparisons are needed.
Our review found 15 studies that compared a combination of ICS/LABA with ICS alone. We found that on the whole, combination inhalers reduced the frequency of flare-ups (not including hospitalisations) compared with ICS alone. The studies showed that on average, the number of exacerbations per participant was reduced, as was the probability of death, during treatment. Quality of life and lung function showed improvement with combination treatment compared with ICS, but no difference between them was noted in terms of adverse effects, or the likelihood of having no flare-ups at all. Future research should assess the efficacy of BDF and MF/F because most evidence gathered to date, including for mortality, has been drawn from FPS studies.
Three trials were included, all of which evaluated doxycycline use. Trial quality suffered from a lack of intention-to-treat analysis and variability across trials in methodology and targeted outcomes. One trial assessed post-exposure prophylaxis in an indigenous population after a flood, without apparent efficacy in reducing clinically diagnosed or laboratory-identified Leptospira infection. Two trials assessed pre-exposure prophylaxis, one among deployed soldiers and another in an indigenous population. Despite an odds ratio of 0.05 (95% CI 0.01 to 0.36) for laboratory-identified infection among deployed soldiers on doxycycline in one of these two trials, pooled data showed no statistically significant reduction in Leptospira infection among participants (odds ratio 0.28, 95% CI 0.01 to 7.48). Minor adverse events (predominantly nausea and vomiting) were more common among those on doxycycline, with an odds ratio of 11 (95% CI 2.1 to 60). Regular use of weekly oral doxycycline 200 mg increases the odds of nausea and vomiting, with unclear benefit in reducing Leptospira seroconversion or the clinical consequences of infection.
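As background for the pooled estimates above: an odds ratio and its 95% confidence interval are conventionally derived from a 2x2 table of events using the standard Woolf (log-odds) method. The sketch below illustrates that calculation with hypothetical counts; these are NOT the counts from the doxycycline trials, and the review's own pooling used meta-analytic weighting across trials rather than a single table:

```python
import math

# Odds ratio and 95% CI from a 2x2 table via the Woolf (log-odds) method.
# The counts below are hypothetical, chosen only to illustrate the formula;
# they are not taken from the trials summarised above.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: events/non-events in the treated group; c/d: in the control group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(1, 99, 10, 90)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> OR 0.09 (95% CI 0.01 to 0.72)
```

A very wide interval, such as the pooled 0.01 to 7.48 reported above, reflects the small number of events and is why no statistically significant reduction could be claimed.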
This is a systematic review of clinical research testing whether taking the antibiotic doxycycline can prevent infection with a water-borne bacterium called Leptospira. The trials had conflicting results, and they targeted different kinds of people - travellers and people who live in at-risk areas, including soldiers, farmers, and students. Taken together, the data do not support the practice in all cases, though short-term travellers with a potential for high-risk exposure may be helped. People who took doxycycline were more likely to have stomach pain, nausea, and vomiting, but the medication had to be stopped in only a few participants.
Two RCTs involving 112 participants were eligible for inclusion in this review. One study compared autologous bone marrow-mesenchymal stem cells (BM-MSC) plus riluzole versus control (riluzole only), while the other study compared combined intramuscular and intrathecal administration of autologous mesenchymal stem cells secreting neurotrophic factors (MSC-NTF) to placebo. The latter study was reported as an abstract and provided no numerical data. Both studies were funded by biotechnology companies. The only study that contributed outcome data to the review involved 64 participants, comparing BM-MSC plus riluzole versus control (riluzole only). It reported outcomes after four to six months. It had a low risk of selection bias, detection bias and reporting bias, but a high risk of performance bias and attrition bias. The certainty of evidence was low for all major efficacy outcomes, with imprecision as the main downgrading factor, because the range of plausible estimates, as shown by the 95% confidence intervals (CIs), encompassed a range that would likely result in different clinical decisions. Functional impairment, expressed as the mean change in the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) score from baseline to six months after cell injection, was slightly reduced (better) in the BM-MSC group compared to the control group (mean difference (MD) 3.38, 95% CI 1.22 to 5.54; 1 RCT, 56 participants; low-certainty evidence). ALSFRS-R has a range from 48 (normal) to 0 (maximally impaired); a change of 4 or more points is considered clinically important. The trial did not report outcomes at 12 months.
There was no clear difference between the BM-MSC and the no treatment group in change in respiratory function (per cent predicted forced vital capacity; FVC%; MD –0.53, 95% CI –5.37 to 4.31; 1 RCT, 56 participants; low-certainty evidence); overall survival at six months (risk ratio (RR) 1.07, 95% CI 0.94 to 1.22; 1 RCT, 64 participants; low-certainty evidence); risk of total adverse events (RR 0.86, 95% CI 0.62 to 1.19; 1 RCT, 64 participants; low-certainty evidence) or serious adverse events (RR 0.47, 95% CI 0.13 to 1.72; 1 RCT, 64 participants; low-certainty evidence). The study did not measure muscle strength. Currently, there is a lack of high-certainty evidence to guide practice on the use of cell-based therapy to treat ALS/MND. Uncertainties remain as to whether this mode of therapy is capable of restoring muscle function, slowing disease progression, and improving survival in people with ALS/MND. Although one RCT provided low-certainty evidence that BM-MSC may slightly reduce functional impairment measured on the ALSFRS-R after four to six months, this was a small phase II trial that cannot be used to establish efficacy. We need large, prospective RCTs with long-term follow-up to establish the efficacy and safety of cellular therapy and to determine patient-, disease- and cell treatment-related factors that may influence the outcome of cell-based therapy. The major goals of future research are to determine the appropriate cell source, phenotype, dose and method of delivery, as these will be key elements in designing an optimal cell-based therapy programme for people with ALS/MND. Future research should also explore novel treatment strategies, including combinations of cellular therapy and standard or novel neuroprotective agents, to find the best possible approach to prevent or reverse the neurological deficit in ALS/MND, and to prolong survival in this debilitating and fatal condition.
Cochrane review authors searched medical databases for clinical trials. They found two completed RCTs that assessed the effects of cell-based therapy over a six-month follow-up period. One study was not fully published and did not provide numerical data. Both studies were funded by stem cell companies. One study, which included 64 people with ALS/MND, provided data. The people taking part in the trial had an average time since symptom onset of about two years. They had mild to moderate problems with motor function (ability to perform physical tasks) at the start of the trial (with an average of 35 on the ALS Functional Rating Scale-revised, on which a score of 0 indicates greatest impairment and 48 is normal function). The study provided low-quality evidence that stem cells obtained from people's own bone marrow (the cells in the centre of bone) did not result in significant side effects. The cell implantation procedure was well tolerated. Based on evidence from this trial, stem cell treatment may slightly reduce decline in motor function at six months, but may not improve breathing or quality of life at four months, or overall survival at six months. Based on the very limited evidence available, any benefit is uncertain, as there was only one small study with methodological limitations and results within it varied. We urgently need large, well-designed clinical trials to establish whether or not cell-based therapies have a clear clinical benefit in ALS/MND. Major goals of future research are to identify the right type and amount of cells to use, and how best to administer them. The evidence is up to date as of July 2019.
Thirteen randomised clinical trials assessed milk thistle in 915 patients with alcoholic and/or hepatitis B or C virus liver diseases. The methodological quality was low: only 23% of the trials reported adequate allocation concealment and only 46% were considered adequately double-blinded. Milk thistle versus placebo or no intervention had no significant effect on mortality (RR 0.78, 95% CI 0.53 to 1.15), complications of liver disease (RR 0.95, 95% CI 0.83 to 1.09), or liver histology. Liver-related mortality was significantly reduced by milk thistle in all trials (RR 0.50, 95% CI 0.29 to 0.88), but not in high-quality trials (RR 0.57, 95% CI 0.28 to 1.19). Milk thistle was not associated with a significantly increased risk of adverse events (RR 0.83, 95% CI 0.46 to 1.50). Our results question the beneficial effects of milk thistle for patients with alcoholic and/or hepatitis B or C virus liver diseases and highlight the lack of high-quality evidence to support this intervention. Adequately conducted and reported randomised clinical trials on milk thistle versus placebo are needed.
Several trials have studied the effects of milk thistle for patients with liver diseases. This systematic review could not demonstrate significant effects of milk thistle on mortality or complications of liver diseases in patients with alcoholic and/or hepatitis B or C liver diseases combining all trials or high-quality trials. Low-quality trials suggested beneficial effects. High-quality randomised clinical trials on milk thistle versus placebo are needed.
We included 57 RCTs (74 randomised comparisons) involving 3002 participants in this review (some appearing in more than one comparison). Twenty-seven randomised comparisons (1620 participants) assessed SLT versus no SLT; SLT resulted in clinically and statistically significant benefits to patients' functional communication (standardised mean difference (SMD) 0.28, 95% confidence interval (CI) 0.06 to 0.49, P = 0.01), reading, writing, and expressive language, but (based on smaller numbers) benefits were not evident at follow-up. Nine randomised comparisons (447 participants) assessed SLT with social support and stimulation; meta-analyses found no evidence of a difference in functional communication, but more participants withdrew from social support interventions than from SLT. Thirty-eight randomised comparisons (1242 participants) assessed two approaches to SLT. Functional communication was significantly better in people with aphasia who received therapy at a high intensity, high dose, or over a long duration compared to those who received therapy at a lower intensity, lower dose, or over a shorter period of time. The benefits of a high intensity or a high dose of SLT were confounded by a significantly higher dropout rate in these intervention groups. Generally, trials randomised small numbers of participants across a range of characteristics (age, time since stroke, and severity profiles), interventions, and outcomes. Our review provides evidence of the effectiveness of SLT for people with aphasia following stroke in terms of improved functional communication, reading, writing, and expressive language compared with no therapy. There is some indication that therapy at high intensity, high dose or over a longer period may be beneficial. High-intensity and high-dose interventions may not be acceptable to all.
The evidence is current to September 2015. We found and included 57 studies involving 3002 people with aphasia in our review. We reviewed all SLT types, regimens, and methods of delivery. Based on 27 studies (and 1620 people with aphasia), speech and language therapy benefits functional use of language, language comprehension (for example listening or reading), and language production (speaking or writing), when compared with no access to therapy, but it was unclear how long these benefits may last. There was little information available to compare SLT with social support. Information from nine trials (447 people with aphasia) suggests there may be little difference in measures of language ability. However, more people stopped taking part in social support than in SLT. Thirty-eight studies compared two different types of SLT (involving 1242 people with aphasia). Studies compared SLT that differed in therapy regimen (intensity, dosage and duration), delivery models (group, one-to-one, volunteer, computer-facilitated), and approach. We need more information on these comparisons. Many hours of therapy over a short period of time (high intensity) appeared to help participants' language use in daily life and reduced the severity of their aphasia problems. However, more people stopped attending these highly intensive treatments (up to 15 hours a week) than those who had a less intensive therapy schedule. Generally, the quality of the studies conducted and reported could be improved. Key quality features were only reported by half of the latest trials. Thus, it is unclear whether this was the result of poorly conducted studies or poorly reported studies. Most comparisons we made would benefit from the availability of more studies involving more people with aphasia.
Only one RCT, providing very low-quality evidence, was identified and included. Thirteen patients received HBO therapy while another 13 did not. Two to six implants were placed in people with fully edentulous mandibles to be rehabilitated with bar-retained overdentures. One year after implant loading, four patients in each group had died. One patient, treated with HBO, developed osteoradionecrosis and lost all implants, so the prosthesis could not be provided. Five patients in the HBO group had at least one implant failure versus two in the control group. There were no statistically significant differences for prosthesis and implant failures, postoperative complications and patient satisfaction between the two groups. Despite the limited amount of clinical research available, it appears that HBO therapy in irradiated patients requiring dental implants may not offer any appreciable clinical benefits. There is a definite need for more RCTs to ascertain the effectiveness of HBO in irradiated patients requiring dental implants. These trials ought to be of a high quality and reported as recommended by the CONSORT statement (www.consort-statement.org/). Each clinical centre may have limited numbers of patients and it is likely that trials will need to be multicentred.
The evidence on which this review is based was up-to-date as of 1 July 2013. One small study carried out at a head and neck cancer clinic based at a university in the Netherlands was found. The study included 26 adults who had been treated for head and neck cancer either with radiotherapy or a combination of radiotherapy and surgery. All participants were missing all their teeth in the lower jaw and were experiencing problems retaining a denture. The participants were split into two groups: 13 of them were treated with HBO and the other 13 were not. Only one small trial that was at high risk of bias compared treatment with HBO with treatment without HBO. The results failed to demonstrate a benefit of HBO therapy in preventing failure of dental implants or other serious complications such as the death of bone in the jaw caused by radiotherapy treatment. More reliable studies are needed to provide the final answer to this question. The quality of evidence was very low as it was based on one small trial at high risk of bias.
We included 52 studies with 41,331 participants; two studies were quasi-randomised and the remaining studies were RCTs. All studies included participants undergoing surgery under general anaesthesia. Three studies recruited only participants who were at high risk of intraoperative awareness, whilst two studies specifically recruited an unselected participant group. We analysed the data according to two comparison groups: BIS versus clinical signs; and BIS versus ETAG. Forty-eight studies used clinical signs as a comparison method, which included titration of anaesthesia according to criteria such as blood pressure or heart rate, and six studies used ETAG to guide anaesthesia. Whilst BIS target values differed between studies, all were within the range 40 to 60. BIS versus clinical signs: we found low-certainty evidence that BIS-guided anaesthesia may reduce the risk of intraoperative awareness in a surgical population that was unselected or at high risk of awareness (Peto odds ratio (OR) 0.36, 95% CI 0.21 to 0.60; I2 = 61%; 27 studies; 9765 participants). However, events were rare, with only five of 27 studies reporting incidences; the incidence of intraoperative awareness when BIS was used was three per 1000 (95% CI 2 to 6 per 1000) compared to nine per 1000 when anaesthesia was guided by clinical signs. Of the five studies with event data, one included participants at high risk of awareness and one included unselected participants; four used a structured questionnaire for assessment, and two used an adjudication process to identify confirmed or definite awareness. Early recovery times were also improved when BIS was used.
We found low-certainty evidence that BIS may reduce the time to eye opening (mean difference (MD) -1.78 minutes, 95% CI -2.53 to -1.03; 22 studies; 1494 participants), the time to orientation (MD -3.18 minutes, 95% CI -4.03 to -2.33; 6 studies; 273 participants), and the time to discharge from the postanaesthesia care unit (PACU) (MD -6.86 minutes, 95% CI -11.72 to -2.00; 13 studies; 930 participants). BIS versus ETAG: again, events of intraoperative awareness were extremely rare, and we found no evidence of a difference in the incidence of intraoperative awareness according to whether anaesthesia was guided by BIS or by ETAG in a surgical population that was unselected or at high risk of awareness (Peto OR 1.13, 95% CI 0.56 to 2.26; I2 = 37%; 5 studies; 26,572 participants; low-certainty evidence). Incidences of intraoperative awareness were one per 1000 in both groups. Only three of five studies reported events; two included participants at high risk of awareness and one included unselected participants, and all used a structured questionnaire for assessment and an adjudication process to identify confirmed or definite awareness. One large study (9376 participants) reported a reduced time to discharge from the PACU (a median of three minutes less), and we judged the certainty of this evidence to be low. No studies measured or reported the time to eye opening or the time to orientation. Certainty of the evidence: we used GRADE to downgrade the evidence for all outcomes to low certainty. The incidence of intraoperative awareness is so infrequent that, despite the inclusion of some large multi-centre studies in the analyses, we believed the effect estimates were imprecise. In addition, the analyses included studies that we judged to have limitations owing to high or unclear risk of bias, and in all studies it was not possible to blind anaesthetists to the different methods of monitoring depth of anaesthesia.
Studies often did not report a clear definition of intraoperative awareness. Time points of measurement differed, and the methods used to identify intraoperative awareness also differed; we expected that some assessment tools were more comprehensive than others. Intraoperative awareness is infrequent and, despite our identifying a large number of eligible studies, the evidence for the effectiveness of using BIS to guide anaesthetic depth is imprecise. We found that BIS-guided anaesthesia, compared to clinical signs, may reduce the risk of intraoperative awareness and improve early recovery times in people undergoing surgery under general anaesthesia, but we found no evidence of a difference between BIS-guided anaesthesia and ETAG-guided anaesthesia. We found six studies awaiting classification and two ongoing studies; inclusion of these studies in future updates may increase the certainty of the evidence.
The evidence is current to 26 March 2019. We found 52 studies with 41,331 participants. Six studies are awaiting classification (because we did not have sufficient information to assess them), and two studies are ongoing. All studies included people having surgery under general anaesthesia. Three studies included only people who were at high risk of intraoperative awareness, and two studies included only people who were not selected according to high risk of intraoperative awareness. Forty-eight studies compared BIS-guided anaesthesia with anaesthesia guided by clinical signs, and six studies compared BIS-guided anaesthesia with ETAG-guided anaesthesia. We found low-certainty evidence that BIS-guided anaesthesia may reduce the risk of intraoperative awareness. However, events were rare and only five of 27 studies reported incidences. When BIS-guided anaesthesia was used, the incidence of intraoperative awareness was three per 1000, compared to nine per 1000 when anaesthesia was guided by clinical signs. In addition, we found low-certainty evidence that BIS may improve recovery - the time for people to open their eyes was less, as was the time for orientation, and the time to be discharged from the post-anaesthesia care unit. We found no evidence of a difference in incidences of intraoperative awareness according to whether anaesthesia was guided by BIS or by ETAG, although, again, there were few incidences of awareness (1 per 1000 in each group). Only one study that compared BIS with ETAG-guided anaesthesia measured recovery times; this low-certainty evidence showed that discharge from the postanaesthesia care unit was earlier if anaesthesia was BIS-guided. No studies that compared BIS with ETAG-guided anaesthesia measured the time to eye opening or the time to orientation. We used GRADE to downgrade the evidence for all outcomes to low certainty.
The incidence of intraoperative awareness is so rare that, even though we found some large studies, we concluded that the evidence was still imprecise. In addition, we judged many studies to have limitations because of high or unclear risks of bias. For example, all of the anaesthetists were aware of using an additional BIS monitor and we could not be certain how this affected the anaesthetists' standard practice. In addition, we noted that some studies did not report a clear definition of intraoperative awareness. Time points of measurement differed, and the methods used to identify intraoperative awareness also differed; we expected that some assessment tools were more comprehensive than others. Intraoperative awareness is rare, and despite finding a large number of eligible studies, evidence for the effectiveness of using BIS to guide anaesthetic depth is imprecise. We found low-certainty evidence that BIS-guided anaesthesia compared to anaesthesia guided by clinical signs may reduce the risk of intraoperative awareness and improve early recovery times in people having surgery under general anaesthesia. We found no evidence of a difference between BIS-guided anaesthesia and ETAG-guided anaesthesia, and we also judged this evidence to be low certainty.
Thirty-four trials reporting on 46 treatment comparisons were identified. All trials published results for tumour response and 27 trials published time-to-event data for overall survival. The observed 4244 deaths in 5605 randomised women did not demonstrate a statistically significant difference in survival between regimens that contained antitumour antibiotics and those that did not (HR 0.96, 95% CI 0.90 to 1.02, P = 0.22), with no significant heterogeneity. Antitumour antibiotic regimens were favourably associated with time to progression (HR 0.84, 95% CI 0.77 to 0.91) and tumour response rates (odds ratio (OR) 1.33, 95% CI 1.21 to 1.48), although statistically significant heterogeneity was observed for these outcomes. These associations were consistent when the analysis was restricted to the 30 trials that reported on anthracyclines. Patients receiving anthracycline-containing regimens were also more likely to experience toxic events than patients receiving non-antitumour antibiotic regimens. No statistically significant difference was observed in any outcome between mitoxantrone-containing and non-antitumour antibiotic-containing regimens. Compared to regimens without antitumour antibiotics, regimens that contained these agents showed a statistically significant advantage for tumour response and time to progression in women with metastatic breast cancer but were not associated with an improvement in overall survival. The favourable effect on tumour response and time to progression observed in anthracycline-containing regimens was also associated with greater toxicity.
This review sought to identify and review the randomised evidence comparing courses of chemotherapy containing antitumour antibiotics against courses not containing antitumour antibiotics. It identified 34 eligible trials involving 5605 women. The review found that women with advanced breast cancer who took antitumour antibiotics did not survive longer than women who took other types of chemotherapy drugs. Despite the lack of evidence of a survival benefit, the review showed that women taking these drugs had an advantage in time to progression (the length of time it takes for the cancer to progress after taking the drug) and tumour response (shrinking of the tumour) compared to women who did not take the antitumour antibiotic drugs. However, the risks of side effects including cardiotoxicity, leukopenia and nausea/vomiting were all significantly increased in the women taking the antitumour antibiotics. Given that this review found no survival benefit for women taking this group of drugs, but a higher rate of side effects, the use of these drugs in the management of metastatic breast cancer must be weighed carefully against the risk of these side effects.
One study (186 infants up to two years old) comparing five monthly doses of palivizumab (N = 92) to placebo (N = 94) over one respiratory syncytial virus season was identified and met our inclusion criteria. We judged there to be a low risk of bias with respect to the concealment of the randomisation schedule (although it was not clear how this was generated) and to blinding of participants and study personnel. We also judged there to be a low risk of bias with regard to incomplete outcome data. However, we judged there to be a high risk of bias from selective reporting (summary statements presented but no data) and from the fact that this industry-supported study has not been published as a full report in a peer-reviewed journal. At six months' follow-up, one participant in each group was hospitalised due to respiratory syncytial virus; there were no deaths in either group. In the palivizumab and placebo groups, 86 and 90 children experienced any adverse event, while five and four children had related adverse events, respectively. Nineteen children receiving palivizumab and 16 receiving placebo suffered serious adverse events; one participant receiving palivizumab discontinued due to this. At 12 months' follow-up, there were no significant differences between groups in the number of Pseudomonas bacterial colonisations or change in weight-to-height ratio. We identified one randomised controlled trial comparing five monthly doses of palivizumab to placebo in infants up to two years old with cystic fibrosis. While the overall incidence of adverse events was similar in both groups, it is not possible to draw firm conclusions on the safety and tolerability of respiratory syncytial virus prophylaxis with palivizumab in infants with cystic fibrosis. Six months after treatment, the authors reported no clinically meaningful differences in outcomes. Additional randomised studies are needed to establish the safety and efficacy of palivizumab in children with cystic fibrosis.
We found one study with 186 participants (infants with cystic fibrosis up to two years of age) which was run across 40 centres in the USA. One infant (out of 92) who received palivizumab and one infant (out of 94) who received placebo were admitted to hospital due to infection with respiratory syncytial virus. No infants died. Overall, the number of adverse events in the palivizumab group was similar to that in the placebo group. No serious adverse events were reported to be related to palivizumab. Over the longer term (12 months), weight gain and the number of infections with Pseudomonas aeruginosa (a common bacterial infection in cystic fibrosis) were similar between groups. The limitation of all these findings is that we only identified one study. More research is needed on the use of palivizumab in children with cystic fibrosis. We thought there was a low risk that it would be known which treatment group the next participant would be put into, although it was not clear how this order was generated. We also thought that participants and study personnel were sufficiently blinded to the treatment to avoid bias and that any missing data were unlikely to bias the study results. However, we did have concerns about bias from selective reporting (summary statements were presented but without any data) and the fact that this industry-supported study has not been published as a full report in a peer-reviewed journal.
We included nine trials with 1007 participants. Three trials compared rotator cuff repair with subacromial decompression followed by exercises versus exercise alone. These trials included 339 participants with full-thickness rotator cuff tears diagnosed with magnetic resonance imaging (MRI) or ultrasound examination. One of the three trials also provided up to three glucocorticoid injections in the exercise group. All surgery groups received tendon repair with subacromial decompression, and the postoperative exercises were similar to the exercises provided for the non-operative groups. Five trials (526 participants) compared repair with acromioplasty versus repair alone; and one trial (142 participants) compared repair with subacromial decompression versus subacromial decompression alone. The mean age of trial participants ranged between 56 and 68 years, and females comprised 29% to 56% of the participants. Symptom duration varied from a mean of 10 months up to 28 months. Two trials excluded tears with traumatic onset of symptoms. One trial defined a minimum duration of symptoms of six months and required a trial of conservative therapy before inclusion. The trials included mainly repairable full-thickness supraspinatus tears; six trials specifically excluded tears involving the subscapularis tendon. All trials were at risk of bias for several criteria, most notably due to lack of participant and personnel blinding, but also for other reasons such as unclearly reported methods of random sequence generation or allocation concealment (six trials), incomplete outcome data (three trials), selective reporting (six trials), and other biases (six trials). Our main comparison was rotator cuff repair with or without subacromial decompression versus non-operative treatment.
We identified three trials for this comparison, comparing rotator cuff repair with subacromial decompression followed by exercises versus exercise alone (with or without glucocorticoid injections); results are reported here for the 12-month follow-up. At one year, moderate-certainty evidence (downgraded for bias) from 3 trials with 258 participants indicates that surgery probably provides little or no improvement in pain; mean pain (range 0 to 10, higher scores indicate more pain) was 1.6 points with non-operative treatment and 0.87 points better (0.43 better to 1.30 better) with surgery. Mean function (zero to 100, higher score indicating better outcome) was 72 points with non-operative treatment and 6 points better (2.43 better to 9.54 better) with surgery (3 trials; 269 participants), low-certainty evidence (downgraded for bias and imprecision). Participant-rated global success rate was 48/55 after non-operative treatment and 52/55 after surgery, corresponding to a risk ratio (RR) of 1.08 (95% confidence interval (CI) 0.96 to 1.22); low-certainty evidence (downgraded for bias and imprecision). Health-related quality of life was 57.5 points (SF-36 mental component score, 0 to 100, higher score indicating better quality of life) with non-operative treatment and 1.3 points worse (4.5 worse to 1.9 better) with surgery (1 trial; 103 participants), low-certainty evidence (downgraded for bias and imprecision). We were unable to estimate the risk of adverse events and serious adverse events as only one event was reported across the trials (very low-certainty evidence; downgraded once due to bias and twice due to very serious imprecision).
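As a brief illustration of where figures such as the participant-rated global success estimate come from, the sketch below recomputes the risk ratio and its 95% confidence interval from the raw counts reported above (52/55 after surgery versus 48/55 after non-operative treatment) using the standard log-risk-ratio formula. This is a minimal sketch under the assumption of a simple two-arm comparison; the helper function name is ours, not part of the review's methods.

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Hypothetical helper: risk ratio with a Wald 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) via the usual delta-method formula
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Counts from the text: 52/55 with surgery vs 48/55 with non-operative treatment
rr, lo, hi = risk_ratio_ci(52, 55, 48, 55)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR 1.08, 95% CI 0.96 to 1.22
```

The interval is computed on the log scale because the sampling distribution of log(RR) is approximately normal, which is why the resulting interval (0.96 to 1.22) is asymmetric around 1.08.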
At the moment, we are uncertain whether rotator cuff repair surgery provides clinically meaningful benefits to people with symptomatic tears; it may provide little or no clinically important benefit with respect to pain, function, overall quality of life or participant-rated global assessment of treatment success when compared with non-operative treatment. Surgery may not improve shoulder pain or function compared with exercises, with or without glucocorticoid injections. The included trials have methodological concerns and none included a placebo control. They included participants with mostly small degenerative tears involving the supraspinatus tendon, so the conclusions of this review may not be applicable to traumatic tears, large tears involving the subscapularis tendon, or young people. Furthermore, the trials did not assess whether surgery could prevent arthritic changes over long-term follow-up. Further well-designed trials in this area that include a placebo-surgery control group and long follow-up are needed to increase certainty about the effects of surgery for rotator cuff tears.
This Cochrane Review is current to January 2019. We found nine trials with 1007 participants. Participants' mean age was 56 to 68 years, and females comprised 29% to 56% of the participants. The participants had symptoms for several months or years and were diagnosed with a full-thickness tear by magnetic resonance imaging or ultrasound examination. Studies were conducted in Finland, Norway, Canada, USA, France, the Netherlands, Italy and South Korea. Our primary analysis included three trials with 339 participants who received either surgery (tendon repair and removal of bone from the undersurface of the acromion) or non-operative therapy (exercises with or without glucocorticoid injection). Three studies received funding; however, none of them reported using the funds directly for these trials. Compared with non-operative treatment, surgery resulted in little or no benefit in people with rotator cuff tears for up to one year.
Pain (lower scores mean less pain): improved by 9% (4% better to 13% better), or 0.9 points on a zero to 10 scale
• People who had non-operative treatment rated their pain as 1.6 points
• People who had surgery rated their pain as 0.7 points
Function (zero to 100; higher scores mean better function): improved by 6% (2% better to 10% better), or 6 points on a zero to 100 scale
• People who had non-operative treatment scored 72 points
• People who had surgery scored 78 points
Participant-rated global treatment success (participants satisfied with the outcome): 7% more people rated their treatment a success (4% fewer to 13% more), or seven more people out of 100.
• 48/55 (873/1000) of people considered treatment successful with non-operative treatment
• 51/54 (943/1000) of people considered treatment successful with surgery
Overall quality of life (higher scores mean better quality of life): worsened by 1% (4% worse to 2% better), or 1.3 points on a zero to 100 scale
• People who had non-operative treatment rated their quality of life as 58
• People who had surgery (subacromial decompression) rated their quality of life as 57
Adverse events
• One adverse event (frozen shoulder) was reported in the trials, in the exercise group; thus, we are unable to estimate comparative risk.
Serious adverse events
• No serious adverse events were reported in the trials.
Compared with non-operative treatment, moderate-certainty evidence (downgraded due to risk of bias) indicates that surgery (rotator cuff repair with or without subacromial decompression) probably provides little or no benefit in pain, and low-certainty evidence indicates that it may provide little or no improvement in function, participant-rated global treatment success or overall quality of life (downgraded due to bias and imprecision) in people with rotator cuff tears. Because only one adverse event was reported across the trials, we cannot estimate whether there is a higher risk of adverse events after either treatment (very low-certainty evidence).
We included no new trials in this update. The results are unchanged from the previous review and are based on the data of six RCTs with 223 participants. All six RCTs compared cognitive rehabilitation with a usual care control. Meta-analyses demonstrated no convincing effect of cognitive rehabilitation on subjective measures of attention either immediately after treatment (standardised mean difference (SMD) 0.53, 95% confidence interval (CI) –0.03 to 1.08; P = 0.06; 2 studies, 53 participants; very low-quality evidence) or at follow-up (SMD 0.16, 95% CI –0.23 to 0.56; P = 0.41; 2 studies, 99 participants; very low-quality evidence). In people receiving cognitive rehabilitation (compared with control), measures of divided attention recorded immediately after treatment may improve (SMD 0.67, 95% CI 0.35 to 0.98; P < 0.0001; 4 studies, 165 participants; low-quality evidence), but it is uncertain whether these effects persisted (SMD 0.36, 95% CI –0.04 to 0.76; P = 0.08; 2 studies, 99 participants; very low-quality evidence). There was no evidence for immediate or persistent effects of cognitive rehabilitation on alertness, selective attention, and sustained attention. There was no convincing evidence for immediate or long-term effects of cognitive rehabilitation for attentional problems on functional abilities, mood, and quality of life after stroke. The effectiveness of cognitive rehabilitation for attention deficits following stroke remains unconfirmed. The results suggest there may be an immediate effect on attentional abilities after treatment, but future studies need to assess what helps this effect persist and generalise to attentional skills in daily life. Trials also need to have higher methodological quality and better reporting.
We identified six studies that compared cognitive rehabilitation with a control group who received their usual care (but not cognitive rehabilitation) for people with attention problems following stroke. We did not consider listening to music, meditation, yoga, or mindfulness to be a form of cognitive rehabilitation. The six studies involved 223 participants who demonstrated attentional problems or reported having such problems following stroke. The evidence is current to February 2019. We found no evidence that cognitive rehabilitation improved general (global) measures of attention. The group that received cognitive rehabilitation performed better than the control group on tasks that required people to divide attention. However, this benefit was only seen immediately after the rehabilitation period, with no suggestion that the benefits persist for longer. There was no evidence to suggest that cognitive rehabilitation was beneficial for other types of attention problems, or for daily life activities, mood, or quality of life. More research is needed. The very low to moderate methodological quality of the identified studies, and the small number of studies, mean that we cannot draw firm conclusions about the effect of cognitive rehabilitation for attention following stroke.
We included nine studies (1614 participants) in this review. Symptomatic UTI (RR 1.11, 95% CI 0.51 to 2.43), complications (RR 0.78, 95% CI 0.35 to 1.74), and death (RR 0.99, 95% CI 0.70 to 1.41) were similar between the antibiotic and placebo or no treatment arms. Antibiotics were more effective for bacteriological cure (RR 2.67, 95% CI 1.85 to 3.85), but more adverse events also developed in this group (RR 3.77, 95% CI 1.40 to 10.15). No decline in kidney function was observed across the studies; minimal data were available on the emergence of resistant strains after antimicrobial treatment. The included studies were of medium and high quality and used different treatments, different durations of treatment and follow-up, and different populations, but this did not appear to influence the results of the review. No differences were observed between antibiotics and no treatment of asymptomatic bacteriuria for the development of symptomatic UTI, complications or death. Antibiotics were superior to no treatment for bacteriological cure, but with significantly more adverse events. There was no clinical benefit from treating asymptomatic bacteriuria in the studies included in this review.
Nine studies of medium to high quality, enrolling 1614 institutionalised participants or outpatients assigned to antibiotics or placebo/no treatment for asymptomatic bacteriuria, with different durations of treatment and follow-up, were included in this review. The evidence is current to February 2015. No clinical benefit was found for antibiotic treatment. Antibiotics eradicated the growth of bacteria in more participants, but at the cost of more adverse events than in the no treatment groups.
Twelve trials of six to 14 months' duration involving 350 people were included. Eleven trials compared levothyroxine replacement with placebo; one study compared levothyroxine replacement with no treatment. We did not identify any trial that assessed (cardiovascular) mortality or morbidity. Seven studies evaluated symptoms, mood and quality of life, with no statistically significant improvement. One study showed a statistically significant improvement in cognitive function. Six studies assessed serum lipids; there was a trend towards reduction in some parameters following levothyroxine replacement. Some echocardiographic parameters improved after levothyroxine replacement therapy, such as myocardial relaxation (as indicated by a significant prolongation of the isovolumic relaxation time) as well as diastolic dysfunction. Only four studies reported adverse events, with no statistically significant differences between groups. In current RCTs, levothyroxine replacement therapy for subclinical hypothyroidism did not result in improved survival or decreased cardiovascular morbidity. Data on health-related quality of life and symptoms did not demonstrate significant differences between intervention groups. Some evidence indicates that levothyroxine replacement improves some parameters of lipid profiles and left ventricular function.
To answer this question, twelve studies of six to 14 months' duration involving 350 people were analysed. Thyroid hormone therapy for subclinical hypothyroidism did not result in improved survival or decreased cardiovascular morbidity (for example, fewer heart attacks or strokes). Data on health-related quality of life and symptoms did not demonstrate significant differences between placebo and thyroid hormone therapy. Some evidence indicated that thyroid hormone had some effects on blood lipids and technical measurements of heart function. Adverse effects were inadequately addressed in most of the included studies and need to be investigated urgently in future studies, especially in older patients.
Sixteen randomised controlled trials, recruiting a total of 2027 perimenopausal or postmenopausal women, were identified. All studies used oral monopreparations of black cohosh at a median daily dose of 40 mg, for a mean duration of 23 weeks. Comparator interventions included placebo, hormone therapy, red clover and fluoxetine. Reported outcomes included vasomotor symptoms, vulvovaginal symptoms, menopausal symptom scores and adverse effects. There was no significant difference between black cohosh and placebo in the frequency of hot flushes (mean difference (MD) 0.07 flushes per day; 95% confidence interval (CI) -0.43 to 0.56 flushes per day; P = 0.79; 393 women; three trials; moderate heterogeneity: I2 = 47%) or in menopausal symptom scores (standardised mean difference (SMD) -0.10; 95% CI -0.32 to 0.11; P = 0.34; 357 women; four trials; low heterogeneity: I2 = 21%). Compared to black cohosh, hormone therapy significantly reduced daily hot flush frequency (three trials; data not pooled) and menopausal symptom scores (SMD 0.32; 95% CI 0.13 to 0.51; P = 0.0009; 468 women; five trials; substantial heterogeneity: I2 = 69%). These findings should be interpreted with caution given the heterogeneity between studies. Comparisons of the effectiveness of black cohosh and other interventions were either inconclusive (because of considerable heterogeneity or an insufficient number of studies) or not statistically significant. Similarly, evidence on the safety of black cohosh was inconclusive, owing to poor reporting. There were insufficient data to pool results for health-related quality of life, sexuality, bone health, vulvovaginal atrophic symptoms and night sweats. No trials reported cost-effectiveness data. The quality of included trials was generally unclear, owing to inadequate reporting. There is currently insufficient evidence to support the use of black cohosh for menopausal symptoms. However, there is adequate justification for conducting further studies in this area.
The uncertain quality of identified trials highlights the need for improved reporting of study methods, particularly with regards to allocation concealment and the handling of incomplete outcome data. The effect of black cohosh on other important outcomes, such as health-related quality of life, sexuality, bone health, night sweats and cost-effectiveness also warrants further investigation.
The herb black cohosh was traditionally used by Native Americans to treat menstrual irregularity, with many experimental studies indicating a possible use for black cohosh in menopause. This review set out to evaluate the effectiveness of black cohosh for controlling the symptoms of menopause. The review of 16 studies (involving 2027 women) found insufficient evidence to support the use of black cohosh for menopausal symptoms. Given the uncertain quality of most studies included in the review, further research investigating the effectiveness of black cohosh for menopausal symptoms is warranted. Such trials need to give greater consideration to the use of other important outcomes (such as quality of life, bone health, night sweats and cost-effectiveness), stringent study design and the quality reporting of study methods.
The experimental intervention was cup feeding and the control intervention was bottle feeding in all five studies included in this review. One study reported weight gain as g/kg/day and there was no statistically significant difference between the two groups (MD −0.60, 95% CI −3.21 to 2.01); a second study reported weight gain in the first seven days as grams/day and also showed no statistically significant difference between the two groups (MD −0.10, 95% CI −0.36 to 0.16). There was substantial variation in results for the majority of breastfeeding outcomes, except for not breastfeeding at three months (three studies) (typical RR 0.83, 95% CI 0.71 to 0.97), which favoured cup feeding. Where there was moderate heterogeneity, meta-analysis was performed: not breastfeeding at six months (two studies) (typical RR 0.83, 95% CI 0.72 to 0.95); not fully breastfeeding at hospital discharge (four studies) (typical RR 0.61, 95% CI 0.52 to 0.71). Two studies reported average time to feed, which showed no difference between the two groups. Two studies assessed length of hospital stay and there was considerable variation in results and in the direction of effect. Only one study has reported gestational age at discharge, which showed no difference between the two groups (MD −0.10, 95% CI −0.54 to 0.34). As the majority of infants in the included studies were preterm infants, no recommendations can be made for cup feeding term infants due to the lack of evidence in this population. From the studies of preterm infants, cup feeding may have some benefits for late preterm infants and for breastfeeding rates up to six months of age. Self-reported breastfeeding status and compliance with supplemental interventions may over-report exclusivity and compliance, as societal expectations of breastfeeding and not wishing to disappoint healthcare professionals may influence responses at interview and on questionnaires.
The results for length of stay are mixed: the study involving preterm infants of lower gestational age found that those fed by cup spent approximately 10 days longer in hospital, whereas in the study involving preterm infants of higher gestational age, who did not commence cup feeding until 35 weeks' gestation, cup-fed infants did not have a longer length of stay, with both groups staying on average 26 days. This finding may have been influenced by gestational age at birth, gestational age at commencement of cup feeding, and mothers' visits (a large number of mothers of these late preterm infants lived regionally from the hospital and could visit at least twice per week). Compliance with the intervention of cup feeding remains a challenge. The two largest studies both reported non-compliance, with one study analysing data by intention to treat and the other excluding those infants from the analysis. This may have influenced the findings of these trials. Non-compliance issues need consideration before further large randomised controlled trials are undertaken, as non-compliance influences the power of a study and therefore the statistical results. In addition, larger studies with better-quality (especially blinded) outcome assessment and 100% follow-up are needed.
Our search for eligible studies, conducted on 31 January 2016, revealed five studies, all comparing cup and bottle feeding in newborn infants, which we were able to include in this review. These studies were conducted in neonatal and maternity units in hospitals in Australia, the United Kingdom, Brazil and Turkey. The mean gestational age of the infants in most of the studies was similar at the time of entry into the study. In four of the studies the intervention (cup or bottle) commenced from the time of enrolment into the study, when the infants first needed a supplemental feed and were as young as 30 weeks' gestation. In the study conducted in Turkey, supplemental feeding was not commenced on enrolment into the study at the time of the first supplemental feed, but was delayed until infants were at least 35 weeks' gestation. For some of the outcomes, the results of the different studies could not be combined. This included not breastfeeding at hospital discharge; not exclusively breastfeeding at three months and at six months; the average time taken for a feed; and the number of days spent in hospital. For each of these outcomes, the results from some studies favoured cup feeding, while the results from other studies favoured bottle feeding. For some of the outcomes, the results of the different studies could be combined: there was no difference in weight gain or gestational age at discharge between those infants who received supplemental feeds by cup compared to bottle. However, those infants who received supplemental feeds by cup were more likely to be exclusively breastfeeding at hospital discharge and were more likely to be receiving some breastfeeds at three and six months of age. As the studies mostly included preterm infants and few term infants, no recommendations can be made for cup feeding term infants.
The quality of evidence for weight gain, length of stay, not breastfeeding at hospital discharge and at six months of age and exclusively breastfeeding at hospital discharge and at six months of age is graded very low to low. In the studies included in this review, it is reported that many infants who were to receive supplemental feeds by cup received supplemental feeds by other means as either the parents or nurses did not like cup feeding.
Seventeen trials including 1817 patients were identified. Vasoactive drugs were vasopressin (one trial), terlipressin (one trial), somatostatin (five trials), and octreotide (ten trials). No significant differences were found comparing sclerotherapy with each vasoactive drug for any outcome. Combining all the trials irrespective of the vasoactive drug, the risk differences (95% confidence intervals) were: failure to control bleeding -0.02 (-0.06 to 0.02), five-day failure rate -0.05 (-0.10 to 0.01), rebleeding 0.01 (-0.03 to 0.05), mortality (17 randomised trials, 1817 patients) -0.02 (-0.06 to 0.02), and transfused blood units (8 randomised trials, 849 patients) (weighted mean difference) -0.24 (-0.54 to 0.07). Adverse events 0.08 (0.03 to 0.14) and serious adverse events 0.05 (0.02 to 0.08) were significantly more frequent with sclerotherapy. We found no convincing evidence to support the use of emergency sclerotherapy for variceal bleeding in cirrhosis as the first, single treatment when compared with vasoactive drugs. Vasoactive drugs may be safe and effective whenever endoscopic therapy is not promptly available and seem to be associated with fewer adverse events than emergency sclerotherapy. Other meta-analyses and guidelines advocate that combined vasoactive drugs and endoscopic therapy is superior to either intervention alone.
All of the identified randomised clinical trials comparing emergency sclerotherapy with vasopressin (+/- intravenous or transdermal nitroglycerin), terlipressin, somatostatin, or octreotide have been reviewed. A total of 17 randomised trials including 1817 patients were included. Sclerotherapy did not appear to be superior to the vasoactive drugs in terms of control of bleeding, number of transfusions, 42-day rebleeding and mortality, or rebleeding and mortality before other elective treatments. However, adverse events were significantly more frequent and severe with sclerotherapy than with vasoactive drugs.
Three studies were identified, two published as full papers and one in abstract form (537 patients). The risk of bias was low for the three included studies. There was no statistically significant difference in the proportion of patients (GM-CSF 25.3% versus placebo 17.5%) who achieved clinical remission (RR 1.67; 95% CI 0.80 to 3.50; P = 0.17; 3 studies; 537 patients). There was no statistically significant difference in the proportion of patients (GM-CSF 38.3% versus placebo 24.8%) who achieved a 100-point clinical response (RR 1.71; 95% CI 0.98 to 2.97; P = 0.06; 3 studies; 537 patients). There was no statistically significant difference in the proportion of patients (GM-CSF 54.3% versus placebo 44.2%) who achieved a 70-point clinical response (RR 1.23; 95% CI 0.83 to 1.82; P = 0.30; 1 study; 124 patients). There was no statistically significant difference in the proportion of patients (GM-CSF 95.8% versus placebo 89.3%) who experienced at least one adverse event (RR 1.07; 95% CI 0.99 to 1.16; P = 0.08; 2 studies; 251 patients), or serious adverse events (GM-CSF 12.0% versus placebo 4.8%; RR 2.21; 95% CI 0.84 to 5.81; P = 0.11; 2 studies; 251 patients). The incidence of bone pain, musculoskeletal chest pain, and dyspnea was higher in patients treated with sargramostim compared to placebo. Other adverse events commonly associated with sargramostim, such as pulmonary capillary leak syndrome, pulmonary edema, heart failure, fever, and neurotoxicity, were not reported in these studies. Sargramostim does not appear to be more effective than placebo for induction of clinical remission or clinical improvement in patients with active Crohn's disease. However, the GRADE analysis indicates that the overall quality of the evidence for the primary (clinical remission) and secondary outcomes (clinical response) was low, indicating that further research is likely to have an impact on the effect estimates.
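For a single-study comparison such as the 70-point clinical response above, the risk ratio is simply the ratio of the two reported event proportions (sargramostim 54.3% versus placebo 44.2%). The sketch below reproduces that point estimate; because the per-arm participant counts are not reported here, the confidence interval cannot be recomputed and is omitted.

```python
# Minimal sketch: in a single trial, RR is the ratio of event proportions.
# Proportions are taken from the text above; per-arm counts are not reported,
# so no confidence interval is computed.
p_sargramostim = 0.543  # 54.3% achieved a 70-point clinical response
p_placebo = 0.442       # 44.2% achieved a 70-point clinical response
rr = p_sargramostim / p_placebo
print(f"RR {rr:.2f}")  # RR 1.23
```

Note that the pooled estimates from multiple studies (e.g. the RR of 1.67 for remission) do not reduce to a simple ratio of the overall proportions, because meta-analysis weights each study's contribution.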
This review of three studies did not show any difference in effectiveness between sargramostim and placebo (a fake drug) for induction of remission or clinical improvement in patients with active Crohn's disease. Side effects associated with sargramostim treatment included bone pain, musculoskeletal chest pain, and dyspnea (shortness of breath). Because there were only a small number of trials in this area, and some of them gave opposite results, the authors concluded that while sargramostim does not appear to be more effective than placebo, more research is needed to determine whether this drug provides a benefit for the treatment of active Crohn's disease.
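The risk ratios (RR) and 95% confidence intervals quoted throughout these abstracts can be reproduced from a 2x2 table of event counts. Below is a minimal sketch using the standard log-scale Wald interval; the counts are hypothetical and are not taken from any of the included trials:

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR) from the four cell counts
    se = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/120 events on treatment vs 20/115 on placebo
rr, lo, hi = risk_ratio(30, 120, 20, 115)
```

A confidence interval that spans 1 (as in this sketch) corresponds to the "no statistically significant difference" wording used in the review.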
We included 41 studies (4224 participants). Participants underwent ambulatory or short length of stay surgical procedures, and were predominantly American Society of Anesthesiology (ASA) class I or II. There is one study awaiting classification and three ongoing studies. All studies took place in surgical centres, and were conducted in geographically diverse settings. Risk of bias was generally unclear across all domains. Supplemental intravenous crystalloid administration probably reduces the cumulative risk of postoperative nausea (PON) (risk ratio (RR) 0.62, 95% confidence interval (CI) 0.51 to 0.75; 18 studies; 1766 participants; moderate-certainty evidence). When the postoperative period was divided into early (first six hours postoperatively) and late (at the time point closest to or including 24 hours postoperatively) time points, the intervention reduced the risk of early PON (RR 0.67, 95% CI 0.58 to 0.78; 20 studies; 2310 participants; moderate-certainty evidence) and late PON (RR 0.47, 95% CI 0.32 to 0.69; 17 studies; 1682 participants; moderate-certainty evidence). Supplemental intravenous crystalloid administration probably reduces the risk of postoperative vomiting (POV) (RR 0.50, 95% CI 0.40 to 0.63; 20 studies; 1970 participants; moderate-certainty evidence). The intervention specifically reduced both early POV (RR 0.56, 95% CI 0.41 to 0.76; 19 studies; 1998 participants; moderate-certainty evidence) and late POV (RR 0.48, 95% CI 0.29 to 0.79; 15 studies; 1403 participants; moderate-certainty evidence). Supplemental intravenous crystalloid administration probably reduces the need for pharmacologic treatment of PONV (RR 0.62, 95% CI 0.51 to 0.76; 23 studies; 2416 participants; moderate-certainty evidence). The effect of supplemental intravenous crystalloid administration on the risk of unplanned postoperative admission to hospital is unclear (RR 1.05, 95% CI 0.77 to 1.43; 3 studies; 235 participants; low-certainty evidence). 
No studies reported serious adverse events that may occur following supplemental perioperative intravenous crystalloid administration (i.e. admission to a high-dependency unit, postoperative cardiac or respiratory complication, or death). There is moderate-certainty evidence that supplemental perioperative intravenous crystalloid administration reduces PON and POV in ASA class I to II patients receiving general anaesthesia for ambulatory or short length of stay surgical procedures. The intervention probably also reduces the risk of pharmacologic treatment for PONV. The effect of the intervention on the risk of unintended postoperative admission to hospital is unclear. The risk of serious adverse events resulting from supplemental perioperative intravenous crystalloid administration is unknown, as no studies reported this outcome. The one study awaiting classification may alter the conclusions of the review once assessed.
We looked at studies where people had general anaesthesia for surgery, received larger or smaller amounts of intravenous fluid, and were later checked to see whether they developed nausea and vomiting after their surgery. We found 41 studies, with 4224 participants analysed in our review. Our review suggests that giving people extra intravenous fluid during surgery under general anaesthesia probably decreases the risk of having either nausea or vomiting after surgery, and probably reduces the need for medication to treat nausea. It is unclear how giving extra intravenous fluid affects the risk of unexpectedly needing hospital admission after minor surgery. No studies looked at whether extra intravenous fluid makes other complications worse. There are two reasons why the conclusions of this review may not be exactly correct. First, many of the studies were not designed perfectly. Second, the studies did not agree on exactly how helpful the extra intravenous fluids were for preventing nausea and vomiting, although most studies did find them at least somewhat helpful.
Fifteen studies (733 patients) were suitable for analysis. All studies were small and had variable methodology. Fish oil did not significantly affect patient or graft survival, acute rejection rates, or calcineurin inhibitor toxicity when compared to placebo. Overall serum creatinine (SCr) was significantly lower in the fish oil group compared to placebo (5 studies, 237 participants: MD -30.63 µmol/L, 95% CI -59.74 to -1.53; I2 = 88%). In the subgroup analysis, this was only significant in the long-course (six months or more) group (4 studies, 157 participants: MD -37.41 µmol/L, 95% CI -69.89 to -4.94; I2 = 82%). Fish oil treatment was associated with a lower diastolic blood pressure (4 studies, 200 participants: MD -4.53 mm Hg, 95% CI -7.60 to -1.45) compared to placebo. Patients receiving fish oil for more than six months had a modest increase in HDL cholesterol (5 studies, 178 participants: MD 0.12 mmol/L, 95% CI 0.03 to 0.21; I2 = 47%) compared to placebo. Fish oil effects on lipids were not significantly different from those of low-dose statins. There were insufficient data to analyse cardiovascular outcomes. A fishy aftertaste and gastrointestinal upset were common but did not result in significant patient drop-out. There is insufficient evidence from currently available RCTs to recommend fish oil therapy to improve kidney function, rejection rates, patient survival or graft survival. The improvements in HDL cholesterol and diastolic blood pressure were too modest to recommend routine use. To determine a benefit in clinical outcomes, future RCTs will need to be adequately powered with these outcomes in mind.
This review set out to assess any benefit or harm in using fish oil to reduce the risk of kidney damage and heart disease in people who have had a kidney transplant and are receiving standard drugs to prevent rejection. Information from 15 studies was used and showed that fish oils provide a slight improvement in HDL cholesterol and diastolic blood pressure. These studies did not provide enough information on the differences in the risk of death, heart disease, kidney transplant rejection or kidney function between patients receiving fish oils and those receiving placebo. There appeared to be no harmful effects of taking fish oil. The benefits of taking fish oil after a kidney transplant are a mild improvement in some heart disease risk factors. There was not enough information to show any benefit in preventing heart disease or reduction in kidney function. Larger, better studies are needed before regular use of fish oil can be recommended.
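The pooled mean differences (MD) and I² heterogeneity figures reported for outcomes such as SCr can be illustrated with a fixed-effect inverse-variance meta-analysis. This is a generic sketch only: the per-study mean differences and standard errors below are hypothetical, and the review itself may have used a different (e.g. random-effects) model:

```python
import math

def pool_fixed(mds, ses):
    """Fixed-effect inverse-variance pooled mean difference,
    with Cochran's Q and the I^2 heterogeneity statistic."""
    w = [1 / se**2 for se in ses]                      # inverse-variance weights
    pooled = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))
    q = sum(wi * (md - pooled)**2 for wi, md in zip(w, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Hypothetical per-study MDs in SCr (µmol/L) and standard errors
pooled, ci, i2 = pool_fixed([-40.0, -10.0, -35.0], [8.0, 10.0, 9.0])
```

An I² above roughly 50% is conventionally read as substantial heterogeneity, which is why the review flags the high I² values alongside its pooled estimates.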
We included 19 studies (six paediatric, 13 adult). None of the paediatric studies could be combined for meta-analysis. A single RCT in infants found that PPI (compared to placebo) was not efficacious for cough outcomes (favouring placebo OR 1.61; 95% CI 0.57 to 4.55), but those on PPI had significantly increased adverse events (OR 5.56; 95% CI 1.18 to 26.25) (number needed to treat for harm in four weeks was 11 (95% CI 3 to 232)). In adults, analysis of H2 antagonists, motility agents and conservative treatment for GORD was not possible (lack of data) and there were no controlled studies of fundoplication. We analysed nine adult studies comparing PPI (two to three months) to placebo for various outcomes in the meta-analysis. Using intention-to-treat, pooled data from studies resulted in no significant difference between treatment and placebo in total resolution of cough (OR 0.46; 95% CI 0.19 to 1.15). Pooled data revealed no overall significant improvement in cough outcomes (end of trial or change in cough scores). We only found significant differences in sensitivity analyses. We found a significant improvement in change of cough scores at end of intervention (two to three months) in those receiving PPI (standardised mean difference -0.41; 95% CI -0.75 to -0.07) using generic inverse variance analysis on cross-over trials. Two studies reported improvement in cough after five days to two weeks of treatment. PPI is not efficacious for cough associated with GORD symptoms in very young children (including infants) and should not be used for cough outcomes. There are insufficient data in older children to draw any valid conclusions. In adults, there is insufficient evidence to conclude definitively that GORD treatment with PPI is universally beneficial for cough associated with GORD. Clinicians should be cognisant of the period effect (natural resolution with time) and the placebo effect in studies that utilise cough as an outcome measure.
Future paediatric and adult studies should be double-blind, randomised controlled trials of parallel design, using treatments for at least two months, with validated subjective and objective cough outcomes, and should include ascertainment of time to response as well as assessment of acid and/or non-acid reflux.
Nineteen studies fulfilled our predetermined criteria but only six could be combined in meta-analysis. We obtained additional data from trialists. We were not able to combine results in children due to limited data. Thickened feeds had an inconsistent effect. Proton pump inhibitors (PPIs) did not reduce cough and should not be used for cough in young children. In adults with cough and GORD, no significant difference in clinical cure was found between PPIs and placebo, and there was also no significant difference using other outcomes. This review also highlights a large placebo effect and time period effect (natural resolution with time) in treatment for chronic cough. In adults, the benefit of treatment with PPIs for cough associated with GORD is inconsistent and variable. There were insufficient data to draw any conclusion on other therapies for cough associated with GORD.
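The "number needed to treat for harm" (NNTH) quoted for the infant PPI trial is simply the reciprocal of the absolute risk increase between the two arms. The sketch below uses hypothetical adverse-event risks, not the trial's raw data:

```python
def nnt_harm(risk_trt, risk_ctl):
    """Number needed to treat to harm (NNTH): the reciprocal of the
    absolute risk increase between treatment and control."""
    return 1.0 / (risk_trt - risk_ctl)

# Hypothetical risks: 28% adverse events on PPI vs 18% on placebo,
# i.e. an absolute risk increase of 10 percentage points
nnt = nnt_harm(0.28, 0.18)  # about 10: one extra harm per ~10 treated
```

The wide CI around the review's NNTH of 11 (3 to 232) reflects the imprecision of the underlying risk difference in a single small trial.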
Thirty-seven studies (2185 participants) were included in this review. There was no significant difference in the risk of CMV disease (16 studies, 770 patients: RR 0.80, 95% CI 0.61 to 1.05), CMV infection (14 studies, 775 patients: RR 0.94, 95% CI 0.80 to 1.10) or all-cause mortality (8 studies, 502 patients: RR 0.57, 95% CI 0.32 to 1.03) with IgG compared with placebo/no treatment. However, IgG significantly reduced the risk of death from CMV disease (6 studies, 346 patients: RR 0.33, 95% CI 0.14 to 0.80). There was no difference in the risk of CMV disease (4 studies, 298 patients: RR 1.17, 95% CI 0.74 to 1.86), CMV infection (4 studies, 298 patients: RR 1.16, 95% CI 0.89 to 1.52) or all-cause mortality (2 studies, 217 patients: RR 0.92, 95% CI 0.37 to 2.29) between antiviral medication combined with IgG and antiviral medication alone. There was no significant difference in the risk of CMV disease with anti-CMV vaccines or interferon compared with placebo or no treatment. Currently there are no indications for IgG in the prophylaxis of CMV disease in recipients of solid organ transplants.
This review looked at the benefits and harms of IgG, anti-CMV vaccines and interferon to prevent CMV disease in solid organ transplant recipients. Thirty-seven studies (2185 participants) were identified. This review shows that IgG did not reduce the risk of CMV disease or all-cause mortality compared with placebo or no treatment. The combination of IgG with antiviral medications (aciclovir or ganciclovir) was not more effective than antiviral medications alone in reducing the risk of CMV disease or all-cause mortality. Anti-CMV vaccines and interferon did not reduce the risk of CMV disease compared with placebo or no treatment. Currently there are no indications for IgG in the prevention of CMV disease in recipients of solid organ transplants.
We included seven trials (1789 participants). Four studies had a high risk of bias and the risk of bias in the other three trials was unclear. In addition, it was difficult to assess possible reporting bias. We pooled 1070 participants from four RCTs to evaluate the development of corporal atrophy, revealing a non-significantly increased OR of 1.50 (95% CI 0.59 to 3.80; P value = 0.39; low-quality evidence) for long-term PPI users relative to non-PPI users. In five eligible trials, corporal intestinal metaplasia was assessed among 1408 participants, also with uncertain results (OR 1.46; 95% CI 0.43 to 5.03; P value = 0.55; low-quality evidence). However, by pooling data of 1705 participants from six RCTs, our meta-analysis showed that participants on PPI maintenance treatment were more likely to experience either diffuse (simple) (OR 5.01; 95% CI 1.54 to 16.26; P value = 0.007; very-low-quality evidence) or linear/micronodular (focal) enterochromaffin-like (ECL) cell hyperplasia (OR 3.98; 95% CI 1.31 to 12.16; P value = 0.02; low-quality evidence) than controls. No participant showed any dysplastic or neoplastic change in any included study. There is presently no clear evidence that the long-term use of PPIs can cause or accelerate the progression of corpus gastric atrophy or intestinal metaplasia, although results were imprecise. People on PPI maintenance treatment may have a higher possibility of experiencing either diffuse (simple) or linear/micronodular (focal) ECL cell hyperplasia. However, the clinical importance of this outcome is currently uncertain.
We searched databases in August 2013 for randomised controlled trials (clinical trials where people are randomly allocated to one of two or more treatment groups) conducted in adults (aged 18 years or over) who did not have gastric cancer at the start of the trial. Treatment had to be with a PPI for six months or more and be compared with no treatment, surgery/endoscopic treatment (where a tube is passed down the food pipe and into the stomach), or any other antacid treatment. We found seven randomised controlled trials with 1789 participants. Some trials only partially reported gastric pre-cancerous lesions, and there was a substantial proportion of participants with missing data. We concluded that there was no clear evidence to support the notion that the long-term use of PPIs could promote the development of pre-cancerous lesions. However, there was a potentially elevated risk of developing a thickening of the stomach lining (hyperplasia) among participants with long-term PPI use, which is considered a possible pre-condition of gastric carcinoid (a relatively benign (non-cancerous) tumour that develops within the stomach lining). The currently available evidence was of low or very low quality, due to study design limitations and the large proportion of missing data. We therefore suggest that well-designed clinical trials should be performed in future to provide a better understanding of this question.
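The odds ratios (OR) reported in this review follow the same logic as the risk ratio but use the cross-product of a 2x2 table. A minimal sketch with hypothetical counts (not the review's data):

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI.
    a/b = events/non-events on treatment; c/d = events/non-events on control."""
    or_ = (a * d) / (b * c)                       # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 12 of 100 PPI users vs 4 of 100 controls
# with the histological finding of interest
or_, lo, hi = odds_ratio(12, 88, 4, 96)
```

As with the review's ECL hyperplasia results, a wide interval that nonetheless excludes 1 is statistically significant but imprecise.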
Four RCTs were included and analysed. The methodological quality of the included studies was considered low when scored according to GRADE methodology, and the total number of included patients was limited. The trials included in the primary analysis reported 237 patients (119 ERAS versus 118 conventional). Baseline characteristics were comparable. The primary outcome measure, complications, showed a significant risk reduction for all complications (RR 0.50; 95% CI 0.35 to 0.72). This difference was not due to a reduction in major complications. Length of hospital stay was significantly reduced in the ERAS group (MD -2.94 days; 95% CI -3.69 to -2.19), and readmission rates were equal in both groups. Other outcome parameters were unsuitable for meta-analysis, but seemed to favour ERAS. The quantity and especially the quality of the data are low. Analysis shows a reduction in overall complications, but major complications were not reduced. Length of stay was reduced significantly. ERAS seems safe, but the quality of the trials and the lack of sufficient other outcome parameters do not justify implementation of ERAS as the standard of care. Within the ERAS protocols included, no answer was found regarding the role of minimally invasive surgery (i.e. laparoscopy). Furthermore, protocol compliance within ERAS programmes has not been investigated, although this is a known problem in the field. Therefore, larger and more specific RCTs are needed.
This review investigated whether this intervention is safe and whether it is more effective than the traditional treatment. In order to answer this question, four randomised trials comparing these two interventions were found. We found that ERAS can be viewed as safe, i.e. not resulting in more complications or deaths, and at the same time decreases the days spent in hospital following major bowel surgery. However, the data are of low quality and therefore do not yet justify implementation of ERAS as the standard method of care. More research on other outcome parameters, such as economic evaluation and quality of life, is necessary.
A total of 16 randomised trials with 1326 patients were included. One trial with 42 participants compared phyllanthus with placebo. The trial found no significant difference in HBeAg seroconversion after the end of treatment (RR 0.9; 95% CI 0.73 to 1.25) or follow-up (RR 1.00; 95% CI 0.63 to 1.60). No other outcomes could be assessed. Fifteen trials compared phyllanthus plus an antiviral drug such as interferon alpha, lamivudine, adefovir dipivoxil, thymosin, vidarabine, or conventional treatment with the same antiviral drug alone. Phyllanthus did significantly affect serum HBV DNA (RR 0.69; 95% CI 0.52 to 0.91, P = 0.008; I2 = 71%), serum HBeAg (RR 0.70; 95% CI 0.60 to 0.81, P < 0.00001; I2 = 68%), and HBeAg seroconversion (RR 0.77; 95% CI 0.63 to 0.92, P = 0.005; I2 = 78%), but the heterogeneity was substantial. The result obtained for serum HBV DNA was not supported by trial sequential analysis. None of the trials reported mortality, hepatitis B-related morbidity, quality of life, or liver histology. Only two trials reported the number of adverse events, with no significant differences between groups. No serious adverse events were reported. There is no convincing evidence that phyllanthus compared with placebo benefits patients with chronic HBV infection. Phyllanthus plus an antiviral drug may be better than the same antiviral drug alone. However, heterogeneity, systematic errors, and random errors question the validity of the results. Clinical trials with large sample sizes and low risk of bias are needed to confirm our findings. The species of phyllanthus should be reported in future trials, and a dose-finding design is warranted.
The objective of this review was to evaluate the benefits and harms of phyllanthus species for patients with chronic HBV infection. Phyllanthus species appear to be safe and may potentially have effects on the clearance of viral markers in patients with HBV infection. However, all of the trials evaluated in this review were of low methodological quality, i.e. at high risk of bias, and there was a risk of random errors in the majority of comparisons. Furthermore, all analyses showed substantial heterogeneity. Accordingly, randomised clinical trials with low risk of bias and large sample sizes should be conducted to confirm the effects of phyllanthus species before clinical use is considered.
We identified seven RCTs, one register, and five ongoing trials from a total of 347 references. The published trials of VEGF-targeting drugs in MBC were limited to bevacizumab. Four trials, including a total of 2886 patients, were available for the comparison of first-line chemotherapy with versus without bevacizumab. PFS (HR 0.67; 95% confidence interval (CI) 0.61 to 0.73) and response rate were significantly better for patients treated with bevacizumab, with moderate heterogeneity regarding the magnitude of the effect on PFS. For second-line chemotherapy, a smaller but still significant benefit in terms of PFS could be demonstrated for patients treated with bevacizumab (HR 0.85; 95% CI 0.73 to 0.98), as well as a benefit in tumour response. However, OS did not differ significantly in either first-line (HR 0.93; 95% CI 0.84 to 1.04) or second-line therapy (HR 0.98; 95% CI 0.83 to 1.16). Quality of life (QoL) was evaluated in four trials but results were published for only two of these, with no relevant impact. Subgroup analysis suggested a significantly greater benefit for patients with previous (taxane) chemotherapy and patients with hormone-receptor-negative status. Regarding toxicity, data from RCTs and registry data were consistent and in line with the known toxicity profile of bevacizumab. While significantly higher rates of grade III/IV adverse events (AEs) (odds ratio (OR) 1.77; 95% CI 1.44 to 2.18) and serious adverse events (SAEs) (OR 1.41; 95% CI 1.13 to 1.75) were observed in patients treated with bevacizumab, rates of treatment-related deaths were lower in patients treated with bevacizumab (OR 0.60; 95% CI 0.36 to 0.99). The overall patient benefit from adding bevacizumab to first- and second-line chemotherapy in metastatic breast cancer can at best be considered modest. It is dependent on the type of chemotherapy used and limited to a prolongation of PFS and response rates in both first- and second-line therapy, both surrogate parameters.
In contrast, bevacizumab has no significant impact on the patient-related secondary outcomes of OS or QoL, which indicate a direct patient benefit. For this reason, the clinical value of bevacizumab for metastatic breast cancer remains controversial.
One of these drugs is bevacizumab (Avastin), which has been studied in clinical trials in metastatic breast cancer. Trials with other drugs are ongoing. Data are available from seven randomised trials, which evaluated the effect of bevacizumab on the primary endpoint in a total of 4032 patients with metastatic breast cancer. These patients were either hormone-receptor negative or had progressed on hormonal treatment. The primary endpoint was progression-free survival, and secondary endpoints included overall survival, response rate measuring the change in size of the tumour, quality of life and toxicity of the treatment. Progression-free survival is considered a surrogate endpoint, i.e. a substitute for overall survival as an endpoint. The addition of bevacizumab to chemotherapy significantly prolongs progression-free survival and response rates both in patients who have had previous chemotherapy for metastatic disease and in those who have not. The magnitude of this benefit is dependent on the type of chemotherapy used. The best results have been observed for the combination of weekly paclitaxel and bevacizumab in patients without prior chemotherapy for metastatic disease. Although progression-free survival was significantly longer with bevacizumab, there was no significant effect observed on either overall survival or quality of life. Quality of life is a direct measure of benefit to the patient. Adverse effects of bevacizumab in breast cancer are generally manageable, but may be serious and include increased frequencies of high blood pressure, blood clots in arteries and bowel perforations. However, overall rates of treatment-related deaths were lower in patients treated with bevacizumab. Because of the lack of effect on overall survival and quality of life, it remains controversial whether bevacizumab is associated with a true patient benefit in spite of the increase in progression-free survival.
We included eight studies in this updated review but could retain in the analysis only seven studies on 742 operated eyes of 617 participants. Two cross-over trials included 125 participants, and five parallel trials included 492 participants. These studies were published between 1997 and 2005. The mean age of participants varied from 71.5 years to 83.5 years. The proportion of female participants varied from 54% to 76%. Compared with sub-Tenon's anaesthesia, topical anaesthesia (with or without intracameral injection) for cataract surgery increases intraoperative pain but decreases postoperative pain at 24 hours. The amplitude of the effect (equivalent to 1.1 on a score from 0 to 10 for intraoperative pain, and to 0.2 on the same scale for postoperative pain at 24 hours), although statistically significant, was probably too small to be of clinical relevance. The quality of the evidence was rated as high for intraoperative pain and moderate for pain at 24 hours. We also found differences in pain during administration of the local anaesthetic (low level of evidence), and indications that surgeon satisfaction (low level of evidence) and participant satisfaction (moderate level of evidence) were lower with topical anaesthesia. There was not enough evidence to say that one technique would result in a higher or lower incidence of intraoperative complications compared with the other. Both topical anaesthesia and sub-Tenon's anaesthesia are accepted and safe methods of providing anaesthesia for cataract surgery. An acceptable degree of intraoperative discomfort has to be expected with either of these techniques. Randomized controlled trials on the effects of various strategies to prevent intraoperative pain during cataract surgery could prove useful.
We included eight randomized controlled trials in the review, and we based our analysis on seven of these: two cross-over trials that included 125 participants, and five parallel trials involving 492 participants. The mean age of participants varied from 71.5 years to 83.5 years. Oral sedation was used for two trials only. No trial used oral analgesics before the operation, and no trials mentioned their source of funding. This review showed that sub-Tenon’s anaesthesia provided slightly better pain relief than topical anaesthesia during cataract surgery. The difference was equal to 1.1 on a scale from 0 to 10. Pain on the day after surgery was slightly lower for participants who received topical anaesthesia, and the difference was equivalent to 0.2 on a scale from 0 to 10. Both surgeons and participants preferred sub-Tenon’s anaesthesia. However, all trials were performed at a time when surgeons were only starting to use topical anaesthesia. There was not enough evidence from included trials to say whether one anaesthetic technique would be associated with a lower or higher incidence of important surgical complications during surgery (posterior capsular tear, iris prolapse) that may lead to postoperative complications and eventually to poorer vision. Topical anaesthesia and sub-Tenon’s anaesthesia therefore are accepted and safe methods of providing anaesthesia for cataract surgery.
We included four randomised controlled trials involving 1933 participants. For the primary outcome category of health, there was moderate quality evidence from one study that women who received prenatal support via mobile phone messages had significantly higher satisfaction than those who did not receive the messages, both in the antenatal period (mean difference (MD) 1.25, 95% confidence interval (CI) 0.78 to 1.72) and perinatal period (MD 1.19, 95% CI 0.37 to 2.01). Their confidence level was also higher (MD 1.12, 95% CI 0.51 to 1.73) and anxiety level was lower (MD -2.15, 95% CI -3.42 to -0.88) than in the control group in the antenatal period. In this study, no further differences were observed between groups in the perinatal period. There was low quality evidence that the mobile phone messaging intervention did not affect pregnancy outcomes (gestational age at birth, infant birth weight, preterm delivery and route of delivery). For the primary outcome category of health behaviour, there was moderate quality evidence from one study that mobile phone message reminders to take vitamin C for preventive reasons resulted in higher adherence (risk ratio (RR) 1.41, 95% CI 1.14 to 1.74). There was high quality evidence from another study that participants receiving mobile phone messaging support had a significantly higher likelihood of quitting smoking than those in a control group at 6 weeks (RR 2.20, 95% CI 1.79 to 2.70) and at 12 weeks follow-up (RR 1.55, 95% CI 1.30 to 1.84). At 26 weeks, there was only a significant difference between groups if, for participants with missing data, the last known value was carried forward. There was very low quality evidence from one study that mobile phone messaging interventions for self-monitoring of healthy behaviours related to childhood weight control did not have a statistically significant effect on physical activity, consumption of sugar-sweetened beverages or screen time. 
For the secondary outcome of acceptability, there was very low quality evidence from one study that user evaluation of the intervention was similar between groups. There was moderate quality evidence from one study of no difference in adverse effects of the intervention, measured as rates of pain in the thumb or finger joints, and car crash rates. None of the studies reported the secondary outcomes of health service utilisation or costs of the intervention. We found very limited evidence that in certain cases mobile phone messaging interventions may support preventive health care and improve health status and health behaviour outcomes. However, because of the low number of participants in three of the included studies, combined with study limitations of risk of bias and lack of demonstrated causality, the evidence for these effects is of low to moderate quality. The evidence is of high quality only for interventions aimed at smoking cessation. Furthermore, there are significant information gaps regarding the long-term effects, risks and limitations of, and user satisfaction with, such interventions.
There was moderate quality evidence from one study which showed that pregnant women who received supportive, informative text messages experienced higher satisfaction and confidence, and lower anxiety levels in the antenatal period than women who did not receive these. There was low quality evidence that there was no difference in pregnancy outcomes. We found one trial that provided high quality evidence that regular support messages sent by text message can help people to quit smoking, at least in the short-term. One study assessing whether mobile phone messaging promoted use of preventive medication reported moderate quality evidence of higher self-reported adherence by people receiving the messages. A fourth study on healthy behaviours in children found very low quality evidence showing that the interventions had no effect. There was very low quality evidence from one study that people's evaluation of the intervention was similar between groups. There was moderate quality evidence from one study of no difference in harms of the intervention, measured as rates of pain in the thumb or finger joints, and car crash rates. There were no studies reporting outcomes related to health service utilisation or costs. Although we find that, overall, mobile phone messaging can be helpful for some aspects of preventive health care, much is not yet known about the long-term effects or potential negative consequences.
Five studies (338 patients) were included; four studies compared ESWL to PCNL and one compared ESWL with RIRS. Random sequence generation was reported in three studies and unclear in two. Allocation concealment was not reported in any of the included studies. Blinding of participants and investigators could not be undertaken due to the nature of the interventions; blinding of outcome assessors was not reported. Reporting bias was judged to be low risk in all studies. One study was funded by industry and in one study the number of participants in each group was unbalanced. The success of treatment at three months was significantly greater in the PCNL group compared to the ESWL group (3 studies, 201 participants: RR 0.46, 95% CI 0.35 to 0.62). Re-treatment (1 study, 122 participants: RR 1.81, 95% CI 0.66 to 4.99) and the use of auxiliary procedures (2 studies, 184 participants: RR 9.06, 95% CI 1.20 to 68.64) were increased in the ESWL group compared to PCNL. The efficiency quotient (EQ; used to assess the effectiveness of procedures) was higher for PCNL than ESWL; however, the EQ decreased as stone size increased. Duration of treatment (MD -36.00 min, 95% CI -54.10 to -17.90) and hospital stay (1 study, 49 participants: MD -3.30 days, 95% CI -5.45 to -1.15) were significantly shorter in the ESWL group. Overall, more complications were reported with PCNL; however, we were unable to meta-analyse the included studies due to the differing outcomes reported and the timing of the outcome measurements. One study compared ESWL versus RIRS for lower pole kidney stones. The success of treatment was not significantly different at the end of the third month (58 participants: RR 0.91, 95% CI 0.64 to 1.30). Mean procedural time and mean hospital stay were reported to be longer in the RIRS group. Results from five small studies with low methodological quality indicated that ESWL is less effective for kidney stones than PCNL but not significantly different from RIRS.
Hospital stay and duration of treatment were shorter with ESWL. Larger RCTs of high methodological quality are required to investigate the effectiveness and complications of ESWL for kidney stones compared to PCNL, particularly if there is technological progress in the non-invasive elimination of residual fragments. Moreover, further research is required on the outcomes of ESWL and RIRS for lower and non-lower pole stones, including comparisons of PCNL versus RIRS.
This review aimed to compare the effectiveness and complications of ESWL with stone removal using a nephroscope passed through the skin at kidney level (PCNL) or a ureteroscope passed through the bladder and ureter to the kidney (RIRS). Five small randomised studies (338 patients) were included. Four studies compared ESWL with PCNL and one study compared ESWL with RIRS. Patients with kidney stones who undergo PCNL have a higher success rate than those who undergo ESWL, whereas RIRS was not significantly different from ESWL. However, ESWL patients spent less time in hospital, duration of treatment was shorter, and there were fewer complications.
We included 20 RCTs involving a total of 3057 participants. The number of participants per trial ranged between 15 and 645. Follow-up ranged between 24 weeks and two years. Eighteen trials were parallel RCTs and two were cluster RCTs. Twelve RCTs had two comparisons and eight RCTs had three comparisons. The interventions varied widely; the duration, content, delivery and follow-up of the interventions were heterogeneous. The comparators also differed. This review categorised the comparisons into four groups: parent-only versus parent-child, parent-only versus waiting list controls, parent-only versus minimal contact interventions, and parent-only versus other parent-only interventions. Trial quality was generally low, with a large proportion of trials rated as high risk of bias on individual risk of bias criteria. In trials comparing a parent-only intervention with a parent-child intervention, the body mass index (BMI) z score change showed a mean difference (MD) at the longest follow-up period (10 to 24 months) of -0.04 (95% confidence interval (CI) -0.15 to 0.08); P = 0.56; 267 participants; 3 trials; low quality evidence. In trials comparing a parent-only intervention with a waiting list control, the BMI z score change in favour of the parent-only intervention at the longest follow-up period (10 to 12 months) had an MD of -0.10 (95% CI -0.19 to -0.01); P = 0.04; 136 participants; 2 trials; low quality evidence. The BMI z score change of parent-only interventions compared with minimal contact control interventions at the longest follow-up period (9 to 12 months) showed an MD of 0.01 (95% CI -0.07 to 0.09); P = 0.81; 165 participants; 1 trial; low quality evidence. There were few similarities between interventions and comparators across the included trials in the parent-only versus other parent-only comparison, and we did not pool these data. Generally, these trials did not show substantial differences between their respective parent-only groups on BMI outcomes. 
Other outcomes such as behavioural measures, parent-child relationships and health-related quality of life were reported inconsistently. Adverse effects of the interventions were generally not reported; two trials stated that there were no serious adverse effects. No trials reported on all-cause mortality, morbidity or socioeconomic effects. All results need to be interpreted cautiously because of their low quality, the heterogeneous interventions and comparators, and the high rates of non-completion. Parent-only interventions may be an effective treatment option for overweight or obese children aged 5 to 11 years when compared with waiting list controls. Parent-only interventions had similar effects compared with parent-child interventions and with minimal contact controls. However, the evidence is at present limited; some of the trials had a high risk of bias, with loss to follow-up being a particular issue, and there was a lack of evidence for several important outcomes. The systematic review identified 10 ongoing trials that have a parent-only arm, which will contribute to future updates. These trials will improve the robustness of the analyses by type of comparator, and may permit subgroup analysis by intervention component and setting. Trial reports should provide adequate details about the interventions to allow them to be replicated by others. There is a need to conduct and report cost-effectiveness analyses in future trials in order to establish whether parent-only interventions are more cost-effective than parent-child interventions.
We found 20 randomised controlled trials (clinical studies where people are randomly put into one of two or more treatment groups) comparing diet, physical activity and behavioural (where habits are changed or improved) treatments (interventions) to a variety of control groups (who did not receive treatment) delivered to parents only of 3057 children aged 5 to 11 years. There were few similarities between the trials in the nature and types of interventions used. We grouped the trials by the type of comparisons. Our systematic review reported on the effects of the parent-only interventions compared with parent and child interventions, waiting list controls (where the intervention was delayed until the end of the trial), other interventions with only minimal information or contact and other types of parent-only interventions. The children in the included trials were monitored (called follow-up) for between six months and two years. This evidence is up to date as of March 2015. The most reported outcome was the body mass index (BMI). This is a measure of body fat and is calculated by dividing weight (in kilograms) by the square of the body height measured in metres (kg/m2). The studies measured BMI in ways that took account of gender, weight and height as the children grew older (such as the BMI z score and the BMI percentile). When compared with a waiting list control, there was limited evidence that parental interventions helped to reduce BMI. In looking at the longest follow-up periods of the included trials, we did not find firm evidence of an advantage or disadvantage of parent-only interventions when compared with either parent and child interventions, or when compared with limited information. Our review found very little information about how different types of parental interventions compared. 
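The BMI calculation described above (weight in kilograms divided by the square of height in metres) can be sketched in a few lines. This is a minimal illustration; the function name and the example weight and height are hypothetical, not data from the review:

```python
# Minimal sketch of the BMI formula described above: weight (kg) divided by
# the square of height (m). The example values are hypothetical.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(36.0, 1.35), 1))  # e.g. a 1.35 m child weighing 36 kg → 19.8
```

The BMI z score used in the trials additionally standardises this value against age- and gender-specific reference distributions, which is beyond this sketch.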
No trial reported on death from any cause, illness or socioeconomic effects (such as whether parent-only interventions are lower in costs compared with parent and child interventions). Two trials reported no serious side effects and the rest of the trials did not report whether side effects occurred or not. Information on parent-child relationships and health-related quality of life was rarely reported. The overall quality of the evidence was low, mainly because there were just a few trials per measurement or the number of the included children was small. In addition, many children left the trials before they had finished.
There were two small randomised controlled trials, including a total of 23 infants, eligible for inclusion in the review. Only one trial, involving 16 infants included in the analysis, reported on either of the primary outcomes of the review. It found no difference in failure of modality between NIV-NAVA and NIPPV (RR 0.33, 95% CI 0.02 to 7.14; RD −0.13, 95% CI −0.41 to 0.16; 1 study, 16 infants; heterogeneity not applicable). Both trials reported on secondary outcomes of the review specific to cross-over trials (total 22 infants; 1 excluded due to failure of initial modality). One study involving seven infants reported a significant reduction in maximum FiO₂ with NIV-NAVA compared to NIPPV (MD −4.29, 95% CI −5.47 to −3.11; heterogeneity not applicable). In a meta-analysis of two studies involving a total of 22 infants, there was no difference in maximum electric activity of the diaphragm (Edi) signal between modalities (MD −1.75, 95% CI −3.75 to 0.26; I² = 0%) and a significant increase in respiratory rate with NIV-NAVA compared to NIPPV (MD 7.22, 95% CI 0.21 to 14.22; I² = 72%). The included studies did not report on other outcomes of interest. Due to limited data and very low certainty evidence, we were unable to determine whether diaphragm-triggered non-invasive respiratory support is effective or safe in preventing respiratory failure in preterm infants. Large, adequately powered randomised controlled trials are needed to establish its effectiveness and safety.
We found 15 studies that assessed the effect of diaphragm-triggered non-invasive respiratory support in infants through searches of medical databases up to 10 May 2019. Of these 15, two studies (involving a total of 23 preterm infants) were eligible for inclusion in the review. Ten studies were either awaiting publication or are ongoing. There are limited data from randomised controlled trials to determine the effect of diaphragm-triggered non-invasive respiratory support on important outcomes. We were able to include only two small randomised controlled trials in the review. Both studies involved infants switching from one type of support to the other and focused on short-term changes in breathing patterns. We were not able to draw any meaningful conclusions in this review due to limited data and very low quality evidence. Large, high-quality studies are needed to determine whether diaphragm-triggered non-invasive respiratory support can prevent respiratory failure.
We included nine trials that recruited 727 participants. Four of the nine trials compared an antipsychotic to a nonantipsychotic drug or placebo, and seven compared a typical to an atypical antipsychotic. The study populations included hospitalised medical, surgical, and palliative patients. No trial reported on duration of delirium. Antipsychotic treatment did not reduce delirium severity compared to nonantipsychotic drugs (standardised mean difference (SMD) -1.08, 95% CI -2.55 to 0.39; four studies; 494 participants; very low-quality evidence); nor was there a difference between typical and atypical antipsychotics (SMD -0.17, 95% CI -0.37 to 0.02; seven studies; 542 participants; low-quality evidence). There was no evidence that antipsychotics resolved delirium symptoms compared to nonantipsychotic drug regimens (RR 0.95, 95% CI 0.30 to 2.98; three studies; 247 participants; very low-quality evidence); nor was there a difference between typical and atypical antipsychotics (RR 1.10, 95% CI 0.79 to 1.52; five studies; 349 participants; low-quality evidence). The pooled results indicated that antipsychotics did not alter mortality compared to nonantipsychotic regimens (RR 1.29, 95% CI 0.73 to 2.27; three studies; 319 participants; low-quality evidence), nor was there a difference between typical and atypical antipsychotics (RR 1.71, 95% CI 0.82 to 3.35; four studies; 342 participants; low-quality evidence). No trial reported on hospital length of stay, hospital discharge disposition, or health-related quality of life. Adverse event reporting was limited and measured with inconsistent methods; in those trials reporting events, the number of events was low. No trial reported on physical restraint use, long-term cognitive outcomes, cerebrovascular events, or QTc prolongation (i.e. increased time in the heart's electrical cycle). Only one trial reported on arrhythmias and seizures, with no difference between typical and atypical antipsychotics. 
We found that antipsychotics did not have a higher risk of extrapyramidal symptoms (EPS) compared to nonantipsychotic drugs (RR 1.70, 95% CI 0.04 to 65.57; three studies; 247 participants; very low-quality evidence); pooled results showed no increased risk of EPS with typical antipsychotics compared to atypical antipsychotics (RR 12.16, 95% CI 0.55 to 269.52; two studies; 198 participants; very low-quality evidence). There were no data to determine whether antipsychotics altered the duration of delirium, length of hospital stay, discharge disposition, or health-related quality of life, as studies did not report on these outcomes. From the poor-quality data available, we found that antipsychotics did not reduce delirium severity, resolve symptoms, or alter mortality. Adverse effects were poorly or rarely reported in the trials. Extrapyramidal symptoms were not more frequent with antipsychotics compared to nonantipsychotic drug regimens, and no different for typical compared to atypical antipsychotics.
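As background to the standardised mean differences quoted above: an SMD divides the difference in group means by the pooled standard deviation, so results measured on different scales can be combined. A hedged sketch in the Cohen's d form (all input numbers below are hypothetical, not trial data):

```python
import math

# Standardised mean difference (Cohen's d form): difference in means divided
# by the pooled standard deviation. All inputs below are hypothetical.
def pooled_sd(s1: float, n1: int, s2: float, n2: int) -> float:
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def smd(m1, s1, n1, m2, s2, n2):
    return (m1 - m2) / pooled_sd(s1, n1, s2, n2)

# Hypothetical severity scores: mean 12 (SD 4) vs mean 14 (SD 4), 50 per arm
print(round(smd(12.0, 4.0, 50, 14.0, 4.0, 50), 2))  # → -0.5
```

A negative SMD here means lower (better) severity scores in the first group; by convention, values around 0.2, 0.5 and 0.8 are often read as small, medium and large effects.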
We found nine studies with 727 participants testing antipsychotics for delirium treatment; four trials compared an antipsychotic to another drug class or placebo, and seven of the nine trials compared a typical antipsychotic to an atypical antipsychotic. We found no evidence to support or refute the suggestion that antipsychotics shorten the course of delirium in hospitalised patients. Based on the available studies, antipsychotics do not reduce the severity of delirium or resolve symptoms compared to nonantipsychotic drugs or placebo, nor do they lower the risk of dying. We found no evidence to support or refute the suggestion that antipsychotics shorten hospital length of stay or improve health-related quality of life. Side effects were rarely reported in the studies. It is important to note that many clinically relevant outcomes were not reported in the studies and the overall quality of the available evidence was poor. Funding: Canadian Frailty Network (previously Technology Evaluation in the Elderly Network [TVN]) (www.cfn-nce.ca/), Canada.
Twenty-seven trials (2485 analyzed/3005 randomized participants) met our inclusion criteria. For acute neck pain only, no evidence was found. For chronic neck pain, moderate quality evidence supports: 1) cervico-scapulothoracic and upper extremity strength training to improve pain by a moderate to large amount immediately post treatment [pooled SMD (SMDp) -0.71 (95% CI -1.33 to -0.10)] and at short-term follow-up; 2) scapulothoracic and upper extremity endurance training for a slight beneficial effect on pain immediately post treatment and at short-term follow-up; 3) combined cervical, shoulder and scapulothoracic strengthening and stretching exercises, which varied from a small to large magnitude of beneficial effect on pain immediately post treatment [SMDp -0.33 (95% CI -0.55 to -0.10)] and up to long-term follow-up, and a medium magnitude of effect improving function both immediately post treatment and at short-term follow-up [SMDp -0.45 (95% CI -0.72 to -0.18)]; 4) cervico-scapulothoracic strengthening/stabilization exercises to improve pain and function at intermediate term [SMDp -14.90 (95% CI -22.40 to -7.39)]; 5) mindfulness exercises (Qigong), which minimally improved function but not global perceived effect at short term. Low quality evidence suggests that 1) breathing exercises; 2) general fitness training; 3) stretching alone; and 4) feedback exercises combined with pattern synchronization may not change pain or function from immediately post treatment to short-term follow-up. Very low quality evidence suggests neuromuscular eye-neck co-ordination/proprioceptive exercises may improve pain and function at short-term follow-up. For chronic cervicogenic headache, moderate quality evidence supports static-dynamic cervico-scapulothoracic strengthening/endurance exercises, including pressure biofeedback, immediately post treatment, and probably improves pain, function and global perceived effect at long-term follow-up. Low grade evidence supports sustained natural apophyseal glides (SNAG) exercises. 
For acute radiculopathy, low quality evidence suggests a small benefit for pain reduction immediately post treatment with cervical stretch/strengthening/stabilization exercises. No high quality evidence was found, indicating that there is still uncertainty about the effectiveness of exercise for neck pain. Using specific strengthening exercises as part of routine practice for chronic neck pain, cervicogenic headache and radiculopathy may be beneficial. Research showed that the use of strengthening and endurance exercises for the cervico-scapulothoracic region and shoulder may be beneficial in reducing pain and improving function. However, when only stretching exercises were used, no beneficial effects can be expected. Future research should explore optimal dosage.
The evidence is current to May 2014. We found 27 trials (with a total of 2485 participants) examining whether exercise can help reduce neck pain and disability and improve function, global perceived effect, patient satisfaction and/or quality of life. In these trials, exercise was compared to either a placebo treatment, or no treatment (waiting list), or exercise combined with another intervention was compared with that same intervention (which could include manipulation, education/advice, acupuncture, massage, heat or medications). Twenty-four of the 27 trials evaluating neck pain reported on the duration of the disorder: 1 acute; 1 acute to chronic; 1 subacute; 4 subacute/chronic; and 16 chronic. One study reported on neck disorder with acute radiculopathy; two trials investigated subacute to chronic cervicogenic headache. Results showed that exercise is safe, with temporary and benign side effects, although more than half of the trials did not report on adverse effects. An exercise classification system was used to ensure similarity between protocols when looking at the effects of different types of exercises. Some types of exercise did show an advantage over the comparison groups. There appears to be a role for strengthening exercises in the treatment of chronic neck pain, cervicogenic headache and cervical radiculopathy if these exercises are focused on the neck, shoulder and shoulder blade region. Furthermore, the use of strengthening exercises combined with endurance or stretching exercises has also been shown to be beneficial. There is some evidence to suggest beneficial effects of specific exercises (e.g. sustained natural apophyseal glides) for cervicogenic headaches and mindfulness exercises (e.g. Qigong) for chronic mechanical neck pain. There appears to be minimal effect on neck pain and function when only stretching or endurance type exercises are used for the neck, shoulder and shoulder blade region. 
No high quality evidence was found, indicating that there is still uncertainty about the effectiveness of exercise for neck pain; future research is likely to have an important impact on the effect estimate. There were a number of challenges with this review; for example, the number of participants in most trials was small, more than half of the included studies were either of low or very low quality, and there was limited evidence on optimum dosage requirements.
We included 20 RCTs involving a total of 1681 individual participants and 1172 individual legs (2853 analytic units). Of these 20 trials, 10 included patients undergoing general surgery; six included patients undergoing orthopaedic surgery; three individual trials included patients undergoing neurosurgery, cardiac surgery, and gynaecological surgery, respectively; and only one trial included medical patients. Graduated compression stockings were applied on the day before surgery or on the day of surgery and were worn up until discharge or until the participants were fully mobile. In the majority of the included studies DVT was identified by the radioactive I125 uptake test. Duration of follow-up ranged from seven to 14 days. The included studies were at an overall low risk of bias. We were able to pool the data from 20 studies reporting the incidence of DVT. In the GCS group, 134 of 1445 units developed DVT (9%) in comparison to the control group (without GCS), in which 290 of 1408 units developed DVT (21%). The Peto odds ratio (OR) was 0.35 (95% confidence interval (CI) 0.28 to 0.43; 20 studies; 2853 units; high-quality evidence), showing an overall effect favouring treatment with GCS (P < 0.001). Based on results from eight included studies, the incidence of proximal DVT was 7 of 517 (1%) units in the GCS group and 28 of 518 (5%) units in the control group. The Peto OR was 0.26 (95% CI 0.13 to 0.53; 8 studies; 1035 units; moderate-quality evidence) with an overall effect favouring treatment with GCS (P < 0.001). Combining results from five studies, all based on surgical patients, the incidence of PE was 5 of 283 (2%) participants in the GCS group and 14 of 286 (5%) in the control group. The Peto OR was 0.38 (95% CI 0.15 to 0.96; 5 studies; 569 participants; low-quality evidence) with an overall effect favouring treatment with GCS (P = 0.04). 
We downgraded the quality of the evidence for proximal DVT and PE due to low event rate (imprecision) and lack of routine screening for PE (inconsistency). We carried out subgroup analysis by speciality (surgical or medical patients). Combining results from 19 trials focusing on surgical patients, 134 of 1365 (9.8%) units developed DVT in the GCS group compared to 282 of 1328 (21.2%) units in the control group. The Peto OR was 0.35 (95% CI 0.28 to 0.44; high-quality evidence), with an overall effect favouring treatment with GCS (P < 0.001). Based on results from seven included studies, the incidence of proximal DVT was 7 of 437 units (1.6%) in the GCS group and 28 of 438 (6.4%) in the control group. The Peto OR was 0.26 (95% CI 0.13 to 0.53; 875 units; moderate-quality evidence) with an overall effect favouring treatment with GCS (P < 0.001). We downgraded the evidence for proximal DVT due to low event rate (imprecision). Based on the results from one trial focusing on medical patients admitted following acute myocardial infarction, 0 of 80 (0%) legs developed DVT in the GCS group and 8 of 80 (10%) legs developed DVT in the control group. The Peto OR was 0.12 (95% CI 0.03 to 0.51; low-quality evidence) with an overall effect favouring treatment with GCS (P = 0.004). None of the medical patients in either group developed a proximal DVT, and the incidence of PE was not reported. Limited data were available to accurately assess the incidence of adverse effects and complications with the use of GCS as these were not routinely quantitatively reported in the included studies. There is high-quality evidence that GCS are effective in reducing the risk of DVT in hospitalised patients who have undergone general and orthopaedic surgery, with or without other methods of background thromboprophylaxis, where clinically appropriate. There is moderate-quality evidence that GCS probably reduce the risk of proximal DVT, and low-quality evidence that GCS may reduce the risk of PE. 
However, there remains a paucity of evidence to assess the effectiveness of GCS in diminishing the risk of DVT in medical patients.
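For readers who want to reproduce the headline arithmetic: a crude (unstratified) risk ratio with a log-scale 95% CI can be computed from the pooled counts quoted above (134/1445 DVT events with GCS versus 290/1408 without). This is only an illustrative sketch; the review itself pools Peto odds ratios study by study, which is why its reported OR of 0.35 differs from the crude RR below:

```python
import math

# Crude risk ratio with a 95% CI on the log scale. This is an illustrative
# calculation on the pooled counts, not the Peto odds ratio method the
# review actually uses (which weights each study separately).
def risk_ratio_ci(a: int, n1: int, c: int, n2: int, z: float = 1.96):
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(134, 1445, 290, 1408)  # GCS group vs control
print(round(rr, 2), round(lo, 2), round(hi, 2))  # → 0.45 0.37 0.55
```

An RR below 1 favours the GCS group; the interval excluding 1 is consistent with the review's finding of a protective effect.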
We identified 20 randomised controlled trials (studies in which participants are assigned to a treatment group using a random method) (2853 analytic units consisting of 1681 individual patients and 1172 individual legs) in our most recent search on 12 June 2018. Nine trials compared wearing stockings to no stockings, and 11 compared stockings plus another method with that method alone. The other methods used were dextran 70, aspirin, heparin, and mechanical sequential compression. Of the 20 trials, 10 included patients undergoing general surgery; six included patients undergoing orthopaedic surgery; three individual trials included patients undergoing neurosurgery, cardiac surgery, and gynaecological surgery, respectively; and only one trial included medical patients (patients who were admitted to the hospital for reasons other than surgery). The compression stockings were applied on the day before surgery or on the day of surgery and were worn up until discharge or until the patients were fully mobile. Thigh-length stockings were used in the vast majority of included studies. The included studies were of good quality overall. We found that wearing GCS reduced the overall risk of developing DVT, and probably also DVT in the thighs. We found that GCS may also reduce the risk of PE amongst patients undergoing surgery. As only one trial included medical patients, results for this population are limited. The occurrence of problems associated with wearing GCS was poorly reported in the included studies. Our review confirmed that GCS are effective in reducing the risk of DVT in hospitalised surgical patients (high-quality evidence). It also demonstrated that GCS probably reduce the risk of developing DVT in the thighs (proximal DVT, moderate-quality evidence) and PE (low-quality evidence). Reasons for downgrading the quality of the evidence included low event rate (i.e. 
small number of participants who developed DVT) and uncertainty due to only a small number of patients being routinely screened for proximal DVT or PE. Limited evidence was available for hospitalised medical patients, with only one study suggesting that GCS may prevent DVT in such patients.
We identified 26 trials (2736 patients). Twenty trials investigated pTE (thymostimulin or thymosin fraction 5) and six trials investigated sTP (thymopentin or thymosin α1). Twenty-one trials reported results for OS, six for DFS, 14 for TR, nine for AE and 10 for safety of pTE and sTP. Addition of pTE conferred no benefit on OS (RR 1.00, 95% CI 0.79 to 1.25); DFS (RR 0.97, 95% CI 0.82 to 1.16); or TR (RR 1.07, 95% CI 0.92 to 1.25). Heterogeneity was moderate to high for all these outcomes. For thymosin α1 the pooled RR for OS was 1.21 (95% CI 0.94 to 1.56, P = 0.14), with low heterogeneity, and 3.37 (95% CI 0.66 to 17.30, P = 0.15) for DFS, with moderate heterogeneity. pTE reduced the risk of severe infectious complications (RR 0.54, 95% CI 0.38 to 0.78, P = 0.0008; I² = 0%). The RR for severe neutropenia in patients treated with thymostimulin was 0.55 (95% CI 0.25 to 1.23, P = 0.15). Tolerability of pTE and sTP was good. Most of the trials had at least a moderate risk of bias. Overall, we found no evidence that the addition of pTE to antineoplastic treatment reduced the risk of death or disease progression, or that it improved the rate of tumour responses to antineoplastic treatment. For thymosin α1, there was a trend towards a reduced risk of dying and improved DFS. There was preliminary evidence that pTE lowered the risk of severe infectious complications in patients undergoing chemotherapy or radiotherapy.
This review looked at the evidence from 26 clinical trials with a total of 2736 adult cancer patients. Many of the trials were small and of moderate quality. Only three studies were less than 10 years old. Thymosin α1 is a synthetic peptide that shows some promise as a treatment option for patients with metastatic melanoma when used in addition to chemotherapy. Severe problems occur during chemotherapy and radiotherapy due to low white blood cell counts and infections. These were reduced by using purified thymus extracts. However, the use of purified thymus extracts should be investigated more thoroughly before the extracts are used routinely in patients. The findings were not conclusive and caution is advised. Overall, thymic peptides seem to be well tolerated.
Six trials (1343 participants) of risperidone as monotherapy or as adjunctive treatment to lithium or an anticonvulsant were identified. Permitted doses were consistent with those recommended by the manufacturers of Haldol (haloperidol) and Risperdal (risperidone) for the treatment of mania, and trials involving haloperidol allowed antiparkinsonian treatment. Risperidone monotherapy was more effective than placebo in reducing manic symptoms, as measured on the Young Mania Rating Scale (YMRS) (weighted mean difference (WMD) -5.75, 95% confidence interval (CI) -7.46 to -4.04, P < 0.00001; 2 trials), and in leading to response, remission and sustained remission. Effect sizes for monotherapy and adjunctive treatment comparisons were similar. Low levels of baseline depression precluded reliable assessment of efficacy for the treatment of depressive symptoms. Risperidone as monotherapy and as adjunctive treatment was more acceptable than placebo, with a lower incidence of failure to complete treatment (RR 0.66, 95% CI 0.52 to 0.82, P = 0.0003; 5 trials). Overall, risperidone caused more weight gain, extrapyramidal disorder, sedation and increase in prolactin level than placebo. There was no evidence of a difference in efficacy between risperidone and haloperidol either as monotherapy or as adjunctive treatment. The acceptability of risperidone and haloperidol, measured as the incidence of failure to complete treatment, was comparable. Overall, risperidone caused more weight gain than haloperidol but less extrapyramidal disorder and comparable sedation. Risperidone, as monotherapy and adjunctive treatment, is effective in reducing manic symptoms. The main adverse effects are weight gain, extrapyramidal effects and sedation. Risperidone is comparable in efficacy to haloperidol. Higher quality trials are required to provide more reliable and precise estimates of its costs and benefits.
This review included six trials and investigated the efficacy and tolerability of risperidone, an atypical antipsychotic, as a treatment for mania compared to placebo or other medicines. High withdrawal rates from the trials limit the confidence that can be placed in the results. Risperidone, both as monotherapy and combined with lithium or an anticonvulsant, was more effective at reducing manic symptoms than placebo, but caused more weight gain, sedation and elevation of prolactin levels. The efficacy of risperidone was comparable to that of haloperidol, both as monotherapy and as adjunctive treatment to lithium or an anticonvulsant. Risperidone caused fewer movement disorders than haloperidol, but there was some evidence of greater weight gain.
Along with the three RCTs included in the original version of this review (2013), we identified an additional five RCTs in this update, resulting in a total of eight RCTs involving 450 participants (180 (40%) females). The risk of selection bias in the included studies was low and the risk of performance bias high. Six studies explored the effects of combined aerobic and resistance training; one explored the effects of combined aerobic and inspiratory muscle training; and one explored the effects of combined aerobic, resistance, inspiratory muscle and balance training. On completion of the intervention period, exercise capacity, expressed as the peak rate of oxygen uptake (VO2peak) and six-minute walk distance (6MWD), was greater in the intervention group than in the control group (VO2peak: MD 2.97 mL/kg/min, 95% confidence interval (CI) 1.93 to 4.02 mL/kg/min, 4 studies, 135 participants, moderate-certainty evidence; 6MWD: MD 57 m, 95% CI 34 to 80 m, 5 studies, 182 participants, high-certainty evidence). One adverse event (hip fracture) related to the intervention was reported in one of the included studies. The intervention group also achieved greater improvements in the physical component of general HRQoL (MD 5.0 points, 95% CI 2.3 to 7.7 points, 4 studies, 208 participants, low-certainty evidence); improved force-generating capacity of the quadriceps muscle (SMD 0.75, 95% CI 0.4 to 1.1, 4 studies, 133 participants, moderate-certainty evidence); and less dyspnoea (SMD −0.43, 95% CI −0.81 to −0.05, 3 studies, 110 participants, very low-certainty evidence). We observed uncertain effects on the mental component of general HRQoL, disease-specific HRQoL, handgrip force, fatigue, and lung function. There were insufficient data to comment on the effect of exercise training on maximal inspiratory and expiratory pressures or feelings of anxiety and depression. Mortality was not reported in the included studies. 
Exercise training increased exercise capacity and quadriceps muscle force of people following lung resection for NSCLC. Our findings also suggest improvements in the physical component score of general HRQoL and decreased dyspnoea. This systematic review emphasises the importance of exercise training as part of the postoperative management of people with NSCLC.
We included three studies from the 2013 review and an additional five new studies from the current review, for a total of eight studies with 450 participants (180 women). The number of participants in the included studies ranged between 17 and 131; the mean age of participants was between 63 and 71 years. Six studies explored the effects of combined aerobic and strengthening exercises; one explored the effects of combined aerobic exercise and inspiratory muscle training; and one explored the effects of combined aerobic exercise, strengthening exercise inspiratory muscle training and balance training. The length of the exercise programmes ranged from four to 20 weeks, with exercises performed twice to five days a week. Our results showed that people with NSCLC who exercised after lung surgery had better fitness level (measured using both a cycling test and the six-minute walk test) and strength in their leg muscles compared to those that did not exercise. We also showed initial evidence for better quality of life and less breathlessness in those who exercised. One adverse event (hip fracture) related to the intervention was reported in one study. The effect of exercise training after lung surgery on grip strength, fatigue, and lung function was uncertain. We found insufficient evidence for improvements in the strength of breathing muscles or feelings of anxiety and depression. Overall the quality (certainty) of evidence for the outcomes was moderate, ranging between very low (for breathlessness) and high (for fitness level measured via the six-minute walk test).
We included ten RCTs, six in adults (628 participants) and four in children (366 participants). We found no clear evidence of a difference in treatment failure between the outpatient and inpatient groups, either in adults (RR 1.23, 95% CI 0.82 to 1.85, I2 0%; six studies; moderate-certainty evidence) or children (RR 1.04, 95% CI 0.55 to 1.99, I2 0%; four studies; moderate-certainty evidence). For mortality, we also found no clear evidence of a difference either in studies in adults (RR 1.04, 95% CI 0.29 to 3.71; six studies; 628 participants; moderate-certainty evidence) or in children (RR 0.63, 95% CI 0.15 to 2.70; three studies; 329 participants; moderate-certainty evidence). According to the type of intervention (early discharge or exclusively outpatient), meta-analysis of treatment failure in four RCTs in adults with early discharge (RR 1.48, 95% CI 0.74 to 2.95; P = 0.26, I2 0%; 364 participants; moderate-certainty evidence) was similar to the results of the exclusively outpatient meta-analysis (RR 1.15, 95% CI 0.62 to 2.13; P = 0.65, I2 19%; two studies; 264 participants; moderate-certainty evidence). Regarding the secondary outcome measures, we found no clear evidence of a difference between outpatient and inpatient groups in duration of fever (adults: mean difference (MD) 0.2, 95% CI -0.36 to 0.76, 1 study, 169 participants; low-certainty evidence) (children: MD -0.6, 95% CI -0.84 to 0.71, 3 studies, 305 participants; low-certainty evidence) and in duration of neutropaenia (adults: MD 0.1, 95% CI -0.59 to 0.79, 1 study, 169 participants; low-certainty evidence) (children: MD -0.65, 95% CI -1.86 to 0.55, 2 studies, 268 participants; moderate-certainty evidence). 
With regard to adverse drug reactions, although they were more frequent in the outpatient group, we found no clear evidence of a difference when compared to the inpatient group, either in adult participants (RR 8.39, 95% CI 0.38 to 187.15; three studies; 375 participants; low-certainty evidence) or children (RR 1.90, 95% CI 0.61 to 5.98; two studies; 156 participants; low-certainty evidence). Four studies compared the hospitalisation time and found that the mean number of days of hospital stay was lower in the outpatient-treated group by 1.64 days in adults (MD -1.64, 95% CI -2.22 to -1.06; 3 studies, 251 participants; low-certainty evidence) and by 3.9 days in children (MD -3.90, 95% CI -5.37 to -2.43; 1 study, 119 participants; low-certainty evidence). In the three RCTs of children in which days of antimicrobial treatment were analysed, we found no difference between outpatient and inpatient groups (MD -0.07, 95% CI -1.26 to 1.12; 305 participants; low-certainty evidence). We identified two studies that measured QoL: one in adults and one in children. QoL was slightly better in the outpatient group than in the inpatient group in both studies, but there was no consistency in the domains included. Outpatient treatment for low-risk febrile neutropaenia in people with cancer probably makes little or no difference to treatment failure and mortality compared with the standard hospital (inpatient) treatment and may reduce the time that patients need to be treated in hospital.
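The I² statistics quoted in the abstract above (e.g. "I2 0%", "I2 19%") quantify between-trial heterogeneity. As a rough illustration of how I² is derived from Cochran's Q, here is a hedged sketch; the helper function and the example numbers are invented for illustration and are not the review's data:

```python
def i_squared(estimates, std_errors):
    """I^2 heterogeneity statistic from Cochran's Q.

    Q is the weighted sum of squared deviations of each trial's
    estimate from the fixed-effect pooled estimate; I^2 expresses
    the excess of Q over its degrees of freedom as a percentage
    of Q, floored at zero.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    return 0.0 if q <= df else (q - df) / q * 100.0

# Identical trial results -> no heterogeneity (I^2 = 0%):
print(i_squared([0.2, 0.2, 0.2], [0.1, 0.1, 0.1]))
# Widely spread results -> substantial heterogeneity (~96%):
print(i_squared([0.0, 0.5, 1.0], [0.1, 0.1, 0.1]))
```

An I² near 0% (as in the treatment-failure meta-analyses above) indicates that the trials' results are mutually consistent, which supports pooling them.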
Ten studies (994 participants) provided information for the review. These ten studies compared outpatient antibiotic therapy (491 participants) versus inpatient therapy (503 participants) in people with cancer who developed febrile neutropaenia. Six studies were conducted in adults (628 participants) and four studies were in children (366 participants). These ten trials compared effectiveness in terms of the disappearance of signs of infection (mainly fever) and nine studies assessed the effect on mortality (death). Eight studies recorded the number of treatment days for the fever to resolve. Five studies compared the duration of neutropaenia between out- and inpatients. Five studies analysed duration of antibiotics usage and six looked at the duration of hospitalisation. Two studies assessed quality of life for patients. In eight of the 10 studies, outpatient antibiotic therapy was part of an early discharge programme, i.e. antibiotics were given for a few days in the hospital and then the participants were discharged home. In the other two studies, the antibiotics were started at home. Outpatient antibiotic therapy is probably as effective as inpatient therapy in people with cancer (both adults and children) who develop febrile neutropaenia for improving the signs of infection, including reducing fever. There was probably little or no difference in mortality between outpatient and inpatient therapy, as well as in the duration of treatment with antibiotics, or the frequency of adverse events related to the use of antibiotics. Treatment as an outpatient may reduce the number of days patients need to be treated in hospital. In general, the evidence was of moderate certainty.
Our prespecified primary outcomes were maternal and neonatal mortality, maternal and neonatal severe infection, and duration of maternal and neonatal hospital stay. We included 11 studies (involving 1296 women) and assessed them as having low to moderate risk of bias - mainly because allocation concealment methods were not adequately reported, most studies were open, and outcome reporting was incomplete. The quality of the evidence was low to very low for most outcomes, as per the GRADE approach. The following antibiotics were assessed in the included trials: ampicillin, ampicillin/sulbactam, gentamicin, clindamycin, and cefotetan. During labor: meta-analysis of two studies found no clear differences in rates of neonatal sepsis (163 neonates; risk ratio (RR) 1.07, 95% confidence interval (CI) 0.40 to 2.86; I² = 9%; low quality of evidence), treatment failure (endometritis) (163 participants; RR 0.86, 95% CI 0.27 to 2.70; I² = 0%; low quality of evidence), and postpartum hemorrhage (RR 1.39, 95% CI 0.76 to 2.56; I² = 0%; low quality of evidence) when two different dosages/regimens of gentamicin were assessed. No clear differences between groups were found for any reported maternal or neonatal outcomes. The review did not identify data for a comparison of antibiotics versus no treatment/placebo. Postpartum: meta-analysis of two studies that evaluated use of antibiotics versus placebo after vaginal delivery showed no significant differences between groups in rates of treatment failure or postpartum endometritis. No significant differences were found in rates of neonatal death and postpartum endometritis when use of antibiotics was compared with no treatment. 
Four trials assessing two different dosages/regimens of gentamicin or dual-agent therapy versus triple-agent therapy, or comparing antibiotics, found no significant differences in most reported neonatal or maternal outcomes; the duration of hospital stay showed a difference in favor of the group of women who received short-duration antibiotics (one study, 292 women; mean difference (MD) -0.90 days, 95% CI -1.64 to -0.16; moderate quality of evidence). Intrapartum versus postpartum: one small study (45 women) evaluating use of ampicillin/gentamicin during intrapartum versus immediate postpartum treatment found significant differences favoring the intrapartum group in the mean number of days of maternal postpartum hospital stay (one trial, 45 women; MD -1.00 days, 95% CI -1.94 to -0.06; very low quality of evidence) and the mean number of days of neonatal hospital stay (one trial, 45 neonates; MD -1.90 days, 95% CI -3.91 to -0.49; very low quality of evidence). Although no significant differences were found in the rate of maternal bacteremia or early neonatal sepsis, for the outcome of neonatal pneumonia or sepsis we observed a significant difference favoring intrapartum treatment (one trial, 45 neonates; RR 0.06, 95% CI 0.00 to 0.95; very low quality of evidence). This review included 11 studies (having low to moderate risk of bias). The quality of the evidence was low to very low for most outcomes, as per the GRADE approach. Only one outcome (duration of hospital stay) was considered to provide moderate quality of evidence when antibiotics (short duration) were compared with antibiotics (long duration) during postpartum management of intra-amniotic infection. Our main reasons for downgrading the quality of evidence were limitations in study design or execution (risk of bias), imprecision, and inconsistency of results.
Currently, limited evidence is available to reveal the most appropriate antimicrobial regimen for the treatment of patients with intra-amniotic infection; whether antibiotics should be continued during the postpartum period; and which antibiotic regimen or what treatment duration should be used. Also, no evidence was found on adverse effects of the intervention (not reported in any of the included studies). One small RCT showed that use of antibiotics during the intrapartum period is superior to their use during the postpartum period in reducing the number of days of maternal and neonatal hospital stay.
A total of 11 studies were identified with 1296 women; most studies were conducted in the USA. Four studies evaluated the use of antibiotics before the birth (antepartum); six studies evaluated the use of antibiotics after birth (postpartum); and one compared antibiotic administration both before and after birth. The quality of the evidence was ranked low to very low, mainly because many studies had methodological limitations, with outcome results based on limited numbers of trials and participants that could be pooled. Based on the findings of one study, treatment during labor was found to be more effective than treatment after labor; however, this finding relates only to maternal and neonatal length of hospital stay and to neonatal severe infection. No evidence indicated that a higher dose of antibiotics before birth was superior to a lower dose. Immediately following birth, no evidence showed that different types of antibiotics or longer or shorter treatment duration improved the health of the mother and her newborn. All women who participated in the postpartum trials received antibiotics before the time of birth. Therefore, insufficient information was available from randomized controlled trials to reveal the most appropriate regimen of antibiotics for the treatment of patients with intra-amniotic infection, whether antibiotics should be continued during the postpartum period, and which antibiotic regimen should be used and for what duration. None of the included studies reported information related to adverse effects of the intervention.
This update included 49 trials containing 12,045 participants. Risk of treatment failure was higher with short courses of antibiotics (OR 1.34, 95% CI 1.15 to 1.55) at one month after initiation of therapy (21% failure with short-course treatment and 18% with long-course; absolute difference of 3% between groups). There were no differences found when examining treatment with ceftriaxone for less than seven days (30% failure in those receiving ceftriaxone and 27% in short-acting antibiotics administered for seven days or more) or azithromycin for less than seven days (18% failure in both those receiving azithromycin and short-acting antibiotics administered for seven days or more) with respect to risk of treatment failure at one month or less. Significant reductions in gastrointestinal adverse events were observed for treatment with short-acting antibiotics and azithromycin. Clinicians need to evaluate whether the minimal short-term benefit from longer treatment of antibiotics is worth exposing children to a longer course of antibiotics.
This review of 49 trials found that treating children with a short course (less than seven days) of antibiotics, compared to treatment with a long course (seven days or greater) of antibiotics, increases the likelihood of treatment failure in the short term. No differences were seen one month later. The number of gastrointestinal adverse events was lower with a shorter course of antibiotics.
We included five trials with 879 participants which investigated B vitamin supplements. In four trials, the intervention was a combination of vitamins B6, B12, and folic acid; in one, it was folic acid only. Doses varied. We considered there to be some risks of performance and attrition bias and of selective outcome reporting among these trials. Our primary efficacy outcomes were the incidence of dementia and scores on measures of overall cognitive function. None of the trials reported the incidence of dementia and the evidence on overall cognitive function was of very low-quality. There was probably little or no effect of B vitamins taken for six to 24 months on episodic memory, executive function, speed of processing, or quality of life. The evidence on our other secondary clinical outcomes, including harms, was very sparse or very low-quality. There was evidence from one study that there may be a slower rate of brain atrophy over two years in participants taking B vitamins. The same study reported subgroup analyses based on the level of serum homocysteine (tHcy) at baseline and found evidence that B vitamins may improve episodic memory in those with tHcy above the median at baseline. We included one trial (n = 516) of vitamin E supplementation. Vitamin E was given as 1000 IU of alpha-tocopherol twice daily. We considered this trial to be at risk of attrition and selective reporting bias. There was probably no effect of vitamin E on the probability of progression from MCI to Alzheimer's dementia over three years (HR 1.02; 95% CI 0.74 to 1.41; n = 516; 1 study, moderate-quality evidence). There was also no evidence of an effect at intermediate time points. 
The available data did not allow us to conduct analyses, but the authors reported no significant effect of three years of supplementation with vitamin E on overall cognitive function, episodic memory, speed of processing, clinical global impression, functional performance, adverse events, or mortality (five deaths in each group). We considered this to be low-quality evidence. We included one trial (n = 256) of combined vitamin E and vitamin C supplementation and one trial (n = 26) of supplementation with chromium picolinate. In both cases, there was a single eligible cognitive outcome, but we considered the evidence to be very low-quality and so could not be sure of any effects. The evidence on vitamin and mineral supplements as treatments for MCI is very limited. Three years of treatment with high-dose vitamin E probably does not reduce the risk of progression to dementia, but we have no data on this outcome for other supplements. Only B vitamins have been assessed in more than one RCT. There is no evidence for beneficial effects on cognition of supplementation with B vitamins for six to 24 months. Evidence from a single study of a reduced rate of brain atrophy in participants taking vitamin B and a beneficial effect of vitamin B on episodic memory in those with higher tHcy at baseline warrants attempted replication.
We found eight randomised controlled trials (RCTs), which investigated four different types of vitamin or mineral pills by comparing them to a placebo (a dummy pill). The vitamins tested were B vitamins (vitamin B6, vitamin B12 and folic acid), vitamin E, and vitamin E and C given together. The only mineral tested was chromium.
Vitamin B combination versus placebo
Five trials with a total of 879 participants compared B vitamins with placebo. Four used combinations of vitamin B6, vitamin B12, and folic acid; one small study tested folic acid on its own. None of these studies reported whether or not participants developed dementia. These studies did not find that memory or thinking skills differed between the group of people who took vitamin B supplements and those who took placebo after treatment lasting six months to two years. Our confidence in the results on different tests used in the studies varied from moderate to very low. Two years of vitamin B supplements did seem to help memory in a small subgroup of participants in one study who could be identified by a particular blood test at the start of the trial. One study found that there was probably no effect on participants' quality of life. One study scanned the brains of some participants and reported that B vitamins may slow the rate of brain shrinkage. Harmful effects and deaths were reported in very few participants and we cannot conclude whether or not there are harms from taking these or similar combinations of B vitamins.
Vitamin E versus placebo
One study with 516 participants compared a relatively high dose of vitamin E (2000 IU a day) to placebo in people who were also taking a multivitamin containing 15 IU of vitamin E (the daily requirement for vitamin E is approximately 30 IU). The risk of developing dementia due to Alzheimer’s disease (the commonest form of dementia) is probably not affected by three years of treatment with high-dose vitamin E.
The quality of the evidence for other outcomes was lower, but there may also be no effect of this dose of vitamin E on specific memory or thinking skills or on how well people could manage their daily activities.
Vitamin E and C versus placebo
One study with 256 participants compared a combination of vitamins C and E with placebo. It found no effect on overall memory and thinking skills, but we had little confidence in this result because of the quality of the evidence.
Chromium picolinate versus placebo
Only one very small study with 26 participants investigated the effect of chromium supplements. This study was too small for us to be able to draw any conclusions. The amount and quality of research evidence about vitamin and mineral supplements for treating MCI in people without nutritional deficiency is limited. At the moment, it is not possible to identify any supplements which can reduce the risk of people with MCI developing dementia or which can effectively treat their symptoms. More research is needed before we can answer our review question.
We found 28 published studies and one unpublished study. Only two studies were sufficiently similar to allow pooling of data for statistical analyses. Studies were divided into three groups; children, older people and the general population/mixed age group. None of the studies focusing on children or older people demonstrated a reduction in injuries that were a direct result of environmental modification in the home. One study in older people demonstrated a reduction in falls and one a reduction in falls and injurious falls that may have been due to hazard reduction. One meta-analysis was performed which examined the effects on falls of multifactorial interventions consisting of home hazard assessment and modification, medication review, health and bone assessment and exercise (RR 1.09, 95% CI 0.97 to 1.23). There is insufficient evidence to determine whether interventions focused on modifying environmental home hazards reduce injuries. Further interventions to reduce hazards in the home should be evaluated by adequately designed randomised controlled trials measuring injury outcomes. Recruitment of large study samples to measure effect must be a major consideration for future trials. Researchers should also consider using factorial designs to allow the evaluation of individual components of multifactorial interventions.
The review found that there is insufficient evidence from studies to show that such changes reduce the number of injuries in the home but does not conclude that these interventions are ineffective. Home alterations need to be evaluated by larger and better designed studies which include injuries in their outcomes.
Twenty-nine trials were included, with 23,019 participants, among whom 1503 vascular deaths and 3438 fatal and non-fatal vascular events occurred during follow up. Compared with control, dipyridamole had no clear effect on vascular death (relative risk (RR) 0.99, 95% confidence interval (CI) 0.87 to 1.12). This result was not influenced by the dose of dipyridamole or type of presenting vascular disease. Compared with control, dipyridamole appeared to reduce the risk of vascular events (RR 0.88, 95% CI 0.81 to 0.95). This effect was only statistically significant in patients presenting with cerebral ischaemia. For patients who presented with arterial vascular disease, there was no evidence that dipyridamole, in the presence or absence of another antiplatelet drug, reduced the risk of vascular death, though it reduced the risk of further vascular events. This benefit was found only in patients presenting after cerebral ischaemia. There was no evidence that dipyridamole alone was more efficacious than aspirin.
This review included 29 studies involving 23,019 participants. When we compared the effects of dipyridamole (alone or together with aspirin) with aspirin alone there was no evidence of an effect on death from vascular causes. When we compared the effects on the occurrence of vascular events (strokes, heart attacks, and deaths from vascular diseases) the combination of aspirin and dipyridamole had an advantage over aspirin alone. This result holds particularly true for patients with ischaemic stroke.
One trial with 103 participants studied the effect of impaction of the fracture at the time of surgery. The only outcome measure reported was bone scintimetry. There was some evidence that impaction, particularly of displaced fractures, resulted in a reduction of blood flow to the femoral head as assessed by bone scintimetry. One quasi-randomised trial with 220 participants compared compression of the fracture with no compression. Results for 156 individuals at one year showed a tendency to a lower incidence of non-union for those fractures treated without compression. Two trials, one involving 102 young adults under 50 years old and the other involving 49 older people aged 65 years or over, compared open versus closed reduction of the fracture. Both found open reduction significantly increased length of surgery. None of the other differences between open and closed reduction in the outcomes reported by the two trials were statistically significant. Insufficient evidence exists from randomised trials to confirm the relative effects of open versus closed reduction of intracapsular fractures, or the effects of intra-operative impaction or compression of an intracapsular fracture treated by internal fixation.
This review examines the effects of different surgical techniques. It found insufficient evidence from randomised trials to assess the effects of compression or of impaction of the fracture during surgery. It found limited evidence that open reduction (surgically exposed) as compared with closed reduction (under X-ray control) resulted in a greater length of surgery. The lack of evidence showing benefit of open reduction supports the use of closed reduction of these fractures.
We included 11 studies with 212 participants with cervical SCI. The meta-analysis revealed a statistically significant effect of RMT for three outcomes: vital capacity (MD mean end point 0.4 L, 95% CI 0.12 to 0.69), maximal inspiratory pressure (MD mean end point 10.50 cmH2O, 95% CI 3.42 to 17.57), and maximal expiratory pressure (MD mean end point 10.31 cmH2O, 95% CI 2.80 to 17.82). There was no effect on forced expiratory volume in one second or dyspnoea. We could not combine the results from quality of life assessment tools from three studies for meta-analysis. Respiratory complication outcomes were infrequently reported and thus we could not include them in the meta-analysis. Instead, we described the results narratively. We identified no adverse effects as a result of RMT in cervical SCI. In spite of the relatively small number of studies included in this review, meta-analysis of the pooled data indicates that RMT is effective for increasing respiratory muscle strength and perhaps also lung volumes for people with cervical SCI. Further research is needed on functional outcomes following RMT, such as dyspnoea, cough efficacy, respiratory complications, hospital admissions, and quality of life. In addition, longer-term studies are needed to ascertain optimal dosage and determine any carryover effects of RMT on respiratory function, quality of life, respiratory morbidity, and mortality.
This review compared any type of respiratory muscle training with standard care or sham treatments. We reviewed 11 studies (including 212 people with cervical spinal cord injury); the results suggest that for people with cervical spinal cord injury there is a small beneficial effect of respiratory muscle training on lung volume and on the strength of the muscles used to take a breath in and to breathe air out and cough. No effect was seen on the maximum amount of air that can be pushed out in one breath, or on shortness of breath. An insufficient number of studies had examined the effect of respiratory muscle training on the frequency of lung infections or quality of life, so we could not assess these outcomes in the review. We identified no adverse effects of training the breathing muscles for people with a cervical spinal cord injury.
Of the 51 studies identified, three met the inclusion criteria, including 524 people with sickle cell disease aged between 12 and 65 years. One study tested the effectiveness of zinc sulphate as compared to placebo and the remaining two assessed senicapoc versus placebo. No deaths were seen in any of the studies (low-quality evidence). The zinc sulphate study showed a significant reduction in painful crises (in a total of 145 participants) over one and a half years, mean difference -2.83 (95% confidence interval -3.51 to -2.15) (moderate-quality evidence). However, analysis was restricted due to limited statistical data. Changes to red blood cell parameters and blood counts were inconsistent (very low-quality evidence). No serious adverse events were noted in the study. The Phase II dose-finding study of senicapoc (a Gardos channel blocker) compared to placebo showed that high-dose senicapoc significantly improved the change in hemoglobin level, the number and proportion of dense red blood cells, red blood cell count and indices, and hematocrit value (very low-quality evidence). The results with low-dose senicapoc were similar to those in the high-dose senicapoc group but of lesser magnitude. There was no difference in the frequency of painful crises between the three groups (low-quality evidence). A subsequent Phase III study of senicapoc was terminated early since there was no difference observed between the treatment and control groups in the primary end point of painful crises. While the results of zinc for reducing sickle-related crises are encouraging, larger and longer-term multicenter studies are needed to evaluate the effectiveness of this therapy for people with sickle cell disease. While the Phase II and the prematurely terminated Phase III studies of senicapoc showed that the drug improved red blood cell survival (depending on dose), this did not lead to fewer painful crises.
Given this is no longer an active area of research, this review will no longer be regularly updated.
The review included three studies with 524 people with sickle cell disease aged between 12 and 65 years. The intervention in one study was zinc sulphate and in two studies was senicapoc. Each drug was compared to a placebo group (a substance which contains no medication). In each study, people were randomly allocated to one treatment or the other. The studies lasted from three months to 18 months. The study with zinc sulphate showed that this drug may be able to reduce the number of sickle cell crises without causing toxic effects (low-quality evidence). There were 145 participants in this study and results showed a significant reduction in the total number of serious sickle-related crises over one and a half years, mean difference -2.83 (95% confidence interval -3.51 to -2.15) (moderate-quality evidence). However, our analysis was limited since not all data were reported. Changes to red blood cell measurements and blood counts were not consistent (very low-quality evidence). No serious adverse events were noted in the study. The two studies with senicapoc demonstrated that this drug increases red blood cell survival and has a role in preventing red blood cell dehydration in people with sickle cell disease (very low-quality evidence). The higher dose of the drug was more effective compared to the lower dose. But these changes in the red blood cells did not translate into positive clinical outcomes in terms of reduction in the number of sickle cell crises (low-quality evidence). Senicapoc had a favourable safety profile. More longer-term research is needed on these drugs and others that might prevent water loss in red blood cells. Given this is no longer an active area of research, this review will no longer be regularly updated. The quality of the evidence was mixed across outcomes.
Eighty-six trials (10,716 participants) were included. Ten trials (4,950 participants) were considered to be low risk of bias. Pooled analysis of all trials showed that off-pump CABG increased all-cause mortality compared with on-pump CABG (189/5,180 (3.7%) versus 160/5,144 (3.1%); RR 1.24, 95% CI 1.01 to 1.53; P = .04). In the trials at low risk of bias the effect was corroborated (154/2,485 (6.2%) versus 113/2,465 (4.5%), RR 1.35, 95% CI 1.07 to 1.70; P = .01). TSA showed that the risk of random error on the result was unlikely. Off-pump CABG resulted in fewer distal anastomoses (MD -0.28; 95% CI -0.40 to -0.16, P < .00001). No significant differences in myocardial infarction, stroke, renal insufficiency, or coronary re-intervention were observed. Off-pump CABG reduced post-operative atrial fibrillation compared with on-pump CABG, however, in trials at low risk of bias, the estimated effect was not significantly different. Our systematic review did not demonstrate any significant benefit of off-pump compared with on-pump CABG regarding mortality, stroke, or myocardial infarction. In contrast, we observed better long-term survival in the group of patients undergoing on-pump CABG with the use of cardiopulmonary bypass and cardioplegic arrest. Based on the current evidence, on-pump CABG should continue to be the standard surgical treatment. However, off-pump CABG may be acceptable when there are contraindications for cannulation of the aorta and cardiopulmonary bypass. Further randomised clinical trials should address the optimal treatment in such patients.
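The risk ratios above are pooled across trials, but the log-RR method behind each trial-level confidence interval is simple arithmetic. A minimal sketch follows; applying it to the aggregated mortality counts quoted above yields only a *crude* RR (about 1.17), which differs from the review's pooled RR 1.24 because a meta-analysis weights each trial separately rather than summing counts:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio with a 95% CI via the standard log-RR method.

    The standard error of log(RR) is
    sqrt(1/a - 1/n_a + 1/b - 1/n_b).
    """
    rr = (events_a / total_a) / (events_b / total_b)
    se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Aggregated mortality counts from the abstract (off-pump vs on-pump);
# crude only -- not how the review's pooled RR 1.24 was obtained:
rr, lo, hi = risk_ratio(189, 5180, 160, 5144)
print(f"crude RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The same formula underlies the RR confidence intervals reported throughout these abstracts.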
Systematic review of 86 randomised clinical trials including 10,716 patients and statistical analyses of the data showed that coronary artery bypass surgery performed on the beating heart results in an increased risk of death. No firm evidence for benefit or harm was found regarding the outcome measures myocardial infarction, stroke, atrial fibrillation, renal insufficiency, or coronary reintervention. Our data raise a warning regarding coronary artery bypass surgery on the beating heart; surgery using cardiac arrest and cardiopulmonary bypass seems less risky. In patients with contraindications for cannulation of the aorta and cardiopulmonary bypass, coronary artery bypass surgery on the beating heart may be a solution, but we need randomised clinical trials in these patients to identify the most beneficial approach.
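The unadjusted risk ratios reported for this review can be reproduced from the raw event counts. A minimal sketch, using the standard log-scale Wald interval for a single 2x2 table (the review's pooled estimates come from formal meta-analysis with trial weighting, so figures agree only up to rounding):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio with a Wald-type 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a single 2x2 table
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Trials at low risk of bias: 154/2,485 deaths off-pump vs 113/2,465 on-pump
rr, lo, hi = risk_ratio(154, 2485, 113, 2465)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # review reports RR 1.35, 95% CI 1.07 to 1.70
```

This recovers RR 1.35 with a CI matching the published interval to within rounding.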
Eleven studies met the criteria, providing 14 comparisons, and capturing data on 4317 participants. Seven studies were RCTs, three were cluster non-RCTs, and one was a randomised cross-over design. Seven studies were carried out among school children (N = 3636), three among women of reproductive age (N = 648), and one among infants (N = 33). The studies used diverse types of food as vehicles for iodine delivery: biscuits, milk, fish sauce, drinking water, yoghourt, fruit beverage, seasoning powder, and infant formula milk. Daily amounts of iodine provided ranged from 35 µg/day to 220 µg/day; trial duration ranged from 11 days to 48 weeks. Five studies examined the effect of iodine fortification alone, two against the same unfortified food, and three against no intervention. Six studies evaluated the effect of cofortification of iodine with other micronutrients versus the same food without iodine but with different levels of other micronutrients. We assessed one study to be at low risk of bias for all bias domains, three at low risk of bias for all domains apart from selective reporting, and seven at an overall rating of high risk of bias. No study assessed the primary outcomes of death, mental development, cognitive function, cretinism, or hypothyroidism, or secondary outcomes of TSH or serum thyroglobulin concentration. Two studies reported the effects on goitre, one on physical development measures, and one on adverse effects. All studies assessed urinary iodine concentration.
The effects of iodine fortification compared to control on goitre prevalence (OR 1.60, 95% CI 0.60 to 4.31; 1 non-RCT, 83 participants; very low-quality evidence), and five physical development measures were uncertain (1 non-RCT, 83 participants; very low-quality evidence): weight (MD 0.23 kg, 95% CI -6.30 to 6.77); height (MD -0.66 cm, 95% CI -4.64 to 3.33); weight-for-age (MD 0.05, 95% CI -0.59 to 0.69); height-for-age (MD -0.30, 95% CI -0.75 to 0.15); and weight-for-height (MD -0.21, 95% CI -0.51 to 0.10). One study reported that there were no adverse events observed during the cross-over trial (low-quality evidence). Pooled results from RCTs showed that urinary iodine concentration significantly increased following iodine fortification (SMD 0.59, 95% CI 0.37 to 0.81; 6 RCTs, 2032 participants; moderate-quality evidence). This is equivalent to an increase of 38.32 µg/L (95% CI 24.03 to 52.61 µg/L). This effect was not observed in the meta-analysis of non-RCTs (SMD 0.25, 95% CI -0.16 to 0.66; 3 non-RCTs, 262 participants; very low-quality evidence). Sensitivity analyses did not change the effect observed in the primary analyses. The evidence on the effect of iodine fortification of foods, beverages, condiments, or seasonings other than salt on reducing goitre, improving physical development measures, and any adverse effects is uncertain. However, our findings suggest that the intervention likely increases urinary iodine concentration. Additional, adequately powered, high-quality studies on the effects of iodine fortification of foods on these, and other important outcomes, as well as its efficacy and safety, are required.
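The conversion above from the pooled standardised mean difference (SMD) to µg/L is a simple rescaling, MD = SMD × SD, where SD is a representative standard deviation of urinary iodine concentration. The SD used below (≈64.95 µg/L) is not stated in this summary; it is back-calculated from the reported figures (38.32 / 0.59), so treat it as an illustrative assumption:

```python
# Back-converting the pooled SMD for urinary iodine concentration to µg/L.
# MD = SMD * SD; the SD here (~64.95 µg/L) is inferred from the reported
# conversion, not stated directly in the review summary.
sd = 64.95  # µg/L, assumed representative standard deviation

for label, smd in [("point estimate", 0.59), ("lower 95% CI", 0.37), ("upper 95% CI", 0.81)]:
    print(f"{label}: SMD {smd} -> {smd * sd:.2f} µg/L")  # review reports 38.32 (24.03 to 52.61)
```

The three rescaled values match the review's reported 38.32 µg/L (95% CI 24.03 to 52.61 µg/L).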
We searched for articles from different sources including published research papers, unpublished reports, and through direct communication with experts and organisations working to address iodine and micronutrient deficiency. We last searched the databases in January 2018. Eleven studies, which captured data on 4317 participants (3636 children, 648 women of reproductive age, and 33 infants), met our inclusion criteria. The types of food used as vehicles to deliver iodine differed between the studies, and included biscuits, milk, fish sauce, drinking water, yoghourt, fruit beverage, seasoning powder, and infant formula milk. The amount of iodine provided to participants ranged from 35 µg/day to 220 µg/day, and study duration ranged from 11 days to 48 weeks. Of the 11 studies included, five examined the effect of adding iodine alone to foods compared to either no intervention or the same foods without iodine; while six evaluated the effect of adding iodine plus other micronutrients to foods compared to the same foods without iodine, but with different levels of other micronutrients. No study evaluated the effect of adding iodine to foods on death, mental development, cognitive function, cretinism (a condition characterised by impaired control of physical movement and intellectual disability), hypothyroidism (underactive thyroid), thyroid-stimulating hormone concentration, or serum thyroglobulin concentration (these are biological markers that indicate the presence of iodine deficiency when concentration in the blood is high). Two studies reported on the effect of the intervention on goitre, one study assessed five physical development measures (weight, height, weight-for-age, height-for-age, and weight-for-height scores), and one examined adverse effects.
All studies assessed urinary iodine concentration (the concentration of iodine excreted in the urine, which indicates the presence of iodine deficiency when concentration is low in a population group, rather than in an individual). We combined the data from these studies that met our requirements in a meta-analysis. We are uncertain of the effects of iodine fortification on the proportion of participants with goitre, or on any of the five physical development measures. One study reported narratively that no adverse effects were observed during the trial. We found a significant increase of 38.32 µg/L in urinary iodine concentration after adding iodine to foods, compared to the groups that did not have iodine added, from studies of higher quality. Using GRADE, we rated the quality of the evidence as very low for goitre and physical development measures, due to study limitations (risk of bias) and imprecise results, and low for adverse events due to indirectness and imprecise results. We rated the quality of the evidence for urinary iodine concentration, from studies in which participants were allocated to treatment groups at random (gold standard design for clinical research), as moderate. On the other hand, the quality of the evidence for urinary iodine concentration from studies without this random element was rated as very low, due to study limitations and imprecise results. Overall, there is no clear evidence on the effect of the intervention on reducing the proportion of people with goitre, improving physical growth, or adverse events. However, our results show that adding iodine to foods likely increases urinary iodine concentration. Additional studies to better quantify the effect of the intervention on these outcomes, as well as other outcomes, are needed.
Three RCTs (68 participants) were identified. All treatment comparisons contained only one study. No significant difference was found between prednisone compared with placebo for complete (RR 1.44, CI 0.95 to 2.19) and partial remission (RR 1.00, CI 0.07 to 14.45) of the nephrotic syndrome due to minimal change disease. There was no difference between intravenous methylprednisolone plus oral prednisone compared with oral prednisone alone for complete remission (RR 0.74, CI 0.50 to 1.08). Prednisone, compared with short-course intravenous methylprednisolone, increased the number of subjects who achieved complete remission (RR 4.95, CI 1.15 to 21.26). The lack of statistical evidence of efficacy associated with prednisone therapy was based on data derived from a single study that compared 'alternate-day prednisone' to 'no immunosuppression', with only a small number of participants in each group. No RCTs were identified comparing regimens in adults with a steroid-dependent or relapsing disease course or comparing treatments comprising alkylating agents, cyclosporine, tacrolimus, levamisole, or mycophenolate mofetil. Further comparative studies are required to examine the efficacy of immunosuppressive agents for achievement of sustained remission of nephrotic syndrome caused by minimal change disease. Studies are also needed to evaluate treatments for adults with steroid-dependent or relapsing disease.
This review identified three small studies (68 participants) comparing: 1) intravenous plus oral steroid treatment versus oral steroids; 2) oral versus short-course intravenous steroid treatment; and 3) oral steroid treatment versus placebo. Only oral steroid treatment (compared to short-course intravenous steroid treatment) showed an increase in the number of patients who achieved complete remission. However, the lack of available studies leaves important treatment questions unanswered: what is the optimal dose and duration of steroid treatment in new-onset adult minimal change disease; how are relapses following steroid-induced remission prevented and treated; and what are the appropriate treatments for steroid-dependent or treatment-resistant minimal change disease?
Twenty trials involving 902 people were included.

Oral medications

There was evidence from individual small trials that people with Parkinson's disease had a statistically significant improvement in the number of bowel motions or successful bowel care routines per week when fibre (psyllium) (mean difference (MD) -2.2 bowel motions, 95% confidence interval (CI) -3.3 to -1.4) or oral laxative (isosmotic macrogol electrolyte solution) (MD 2.9 bowel motions per week, 95% CI 1.48 to 4.32) are used compared with placebo. One trial in people with spinal cord injury showed statistically significant improvement in total bowel care time comparing intramuscular neostigmine-glycopyrrolate (anticholinesterase plus an anticholinergic drug) with placebo (MD 23.3 minutes, 95% CI 4.68 to 41.92). Five studies reported the use of cisapride and tegaserod in people with spinal cord injuries or Parkinson's disease. These drugs have since been withdrawn from the market due to adverse effects; as they are no longer available they have been removed from this review.

Rectal stimulants

One small trial in people with spinal cord injuries compared two bisacodyl suppositories, one polyethylene glycol-based (PGB) and one hydrogenated vegetable oil-based (HVB). The trial found that the PGB bisacodyl suppository significantly reduced the mean defaecation period (PGB 20 minutes versus HVB 36 minutes, P < 0.03) and mean total time for bowel care (PGB 43 minutes versus HVB 74.5 minutes, P < 0.01) compared with the HVB bisacodyl suppository.

Physical interventions

There was evidence from one small trial with 31 participants that abdominal massage statistically improved the number of bowel motions in people who had a stroke compared with no massage (MD 1.7 bowel motions per week, 95% CI 2.22 to 1.18). A small feasibility trial including 30 individuals with multiple sclerosis also found evidence to support the use of abdominal massage.
Constipation scores were statistically better with the abdominal massage during treatment although this was not supported by a change in outcome measures (for example the neurogenic bowel dysfunction score). One small trial in people with spinal cord injury showed statistically significant improvement in total bowel care time using electrical stimulation of abdominal muscles compared with no electrical stimulation (MD 29.3 minutes, 95% CI 7.35 to 51.25). There was evidence from one trial with a low risk of bias that, for people with spinal cord injury, transanal irrigation, compared against conservative bowel care, statistically improved constipation scores, neurogenic bowel dysfunction score, faecal incontinence score and total time for bowel care (MD 27.4 minutes, 95% CI 7.96 to 46.84). Patients were also more satisfied with this method.

Other interventions

In one trial in stroke patients, there appeared to be a short-term benefit (less than six months) to patients in terms of the number of bowel motions per week with a one-off educational intervention from nurses (a structured nurse assessment leading to targeted education versus routine care), but this did not persist at 12 months. A trial in individuals with spinal cord injury found that a stepwise protocol did not reduce the need for oral laxatives and manual evacuation of stool. Finally, one further trial reported in abstract form showed that oral carbonated water (rather than tap water) improved constipation scores in people who had had a stroke. There is still remarkably little research on this common and, to patients, very significant issue of bowel management. The available evidence is almost uniformly of low methodological quality.
The clinical significance of some of the research findings presented here is difficult to interpret, not least because each intervention has only been addressed in individual trials, compared against control conditions rather than against each other, and the interventions are very different from each other. There was very limited evidence from individual trials in favour of a bulk-forming laxative (psyllium), an isosmotic macrogol laxative, abdominal massage, electrical stimulation and an anticholinesterase-anticholinergic drug combination (neostigmine-glycopyrrolate) compared to no treatment or controls. There was also evidence in favour of transanal irrigation (compared to conservative management), oral carbonated (rather than tap) water and abdominal massage with lifestyle advice (compared to lifestyle advice alone). However, these findings need to be confirmed by larger well-designed controlled trials which should include evaluation of the acceptability of the intervention to patients and the effect on their quality of life.
While there is a great deal of information on the causes of neurogenic bowel dysfunction (NBD), there are few studies that focus on how to practically manage the problem. Currently the usual advice is to have a good fluid intake, a balanced diet, as much physical exercise as is practical and a regular planned bowel routine. A bowel routine may include use of oral laxative medicines, suppositories or enemas; abdominal massage; digital rectal stimulation and digital evacuation of stool. The steps used will depend on the needs of each individual and some degree of trial and error is usually needed to achieve a satisfactory routine. Only research studies where participants were allocated to either the control group (who received either no intervention or usual care) or the treatment group by chance (called randomisation) were included in this review as these studies provide the most reliable evidence. Fifteen new studies have been added in this update. Five have been removed because the drugs the studies reported on (cisapride and tegaserod) have been withdrawn from the market and are no longer available. Most of the 20 randomised studies in this review included very small numbers of participants and the study reports did not always give the information needed to be sure that the study findings were reliable. Some oral laxatives were found to improve bowel function including psyllium, a stool-bulking laxative (one study), and an isosmotic macrogol (one study), which were both studied in individuals with Parkinson's disease. Some suppositories and micro-enemas used to help the bowel to open produced faster results than others (three studies) and the timing of suppository use may affect the response of the bowel (one study). Digital evacuation of stool may be more effective than oral or rectal medication (one study). The use of transanal irrigation in individuals with spinal cord injury improved bowel control, constipation and quality of life measures (one study).
Three studies found that abdominal massage was helpful in reducing constipation. One study found that patients may benefit from even one educational contact with a nurse. This review shows that there is still remarkably little research on this common problem which is so important to patients. The research evidence found by the review is generally very poor because the way the studies were carried out and reported means that the results are not reliable. It is not possible to make recommendations for care based on these studies. Managing NBD will continue to rely on trial and error until more high quality studies with larger numbers of participants and which examine the most important aspects of this problem are carried out.
In total, we included 23 RCTs (N = 1309), 13 of which (56%) had low risk of bias (RoB). We included both men and women with a mean age of 50.6 years. We assessed the overall quality of the evidence as very low to moderate. Twelve studies examined suspected facet joint pain, five studies disc pain, two studies sacroiliac (SI) joint pain, two studies radicular chronic low back pain (CLBP), one study suspected radiating low back pain and one study CLBP with or without suspected radiation. Overall, moderate evidence suggests that facet joint radiofrequency (RF) denervation has a greater effect on pain compared with placebo over the short term (mean difference (MD) -1.47, 95% confidence interval (CI) -2.28 to -0.67). Low-quality evidence indicates that facet joint RF denervation is more effective than placebo for function over the short term (MD -5.53, 95% CI -8.66 to -2.40) and over the long term (MD -3.70, 95% CI -6.94 to -0.47). Evidence of very low to low quality shows that facet joint RF denervation is more effective for pain than steroid injections over the short (MD -2.23, 95% CI -2.38 to -2.08), intermediate (MD -2.13, 95% CI -3.45 to -0.81), and long term (MD -2.65, 95% CI -3.43 to -1.88). RF denervation used for disc pain produces conflicting results, with no effects for RF denervation compared with placebo over the short and intermediate term, and small effects for RF denervation over the long term for pain relief (MD -1.63, 95% CI -2.58 to -0.68) and improved function (MD -6.75, 95% CI -13.42 to -0.09). Lack of evidence of short-term effectiveness undermines the clinical plausibility of intermediate-term or long-term effectiveness. When RF denervation is used for SI joint pain, low-quality evidence reveals no differences from placebo in effects on pain (MD -2.12, 95% CI -5.45 to 1.21) and function (MD -14.06, 95% CI -30.42 to 2.30) over the short term, and one study shows a small effect on both pain and function over the intermediate term. RF denervation is an invasive procedure that can cause a variety of complications.
The quality and size of original studies were inadequate to permit assessment of how often complications occur. The review authors found no high-quality evidence suggesting that RF denervation provides pain relief for patients with CLBP. Similarly, we identified no convincing evidence to show that this treatment improves function. Overall, the current evidence for RF denervation for CLBP is very low to moderate in quality; high-quality evidence is lacking. High-quality RCTs with larger patient samples are needed, as are data on long-term effects.
The evidence is current to May 2014. This review includes 23 randomised controlled trials with a total of 1309 participants whose chronic low back pain was evaluated with nerve blocks or other diagnostic tests. Both men and women, with a mean age of 50.6 years, were included. Patients with a positive response to a diagnostic block or to discography were given radiofrequency denervation, a placebo or a comparison treatment. No high-quality evidence shows that radiofrequency denervation provides pain relief for patients with chronic low back pain. Similarly, no convincing evidence suggests that this treatment improves function. Moderate-quality evidence suggests that radiofrequency denervation might better relieve facet joint pain and improve function over the short term when compared with placebo. Evidence of very low to low quality shows that radiofrequency denervation might relieve facet joint pain as well as steroid injections. For patients with disc pain, only small long-term effects on pain relief and improved function are shown. For patients with SI joint pain, radiofrequency denervation had no effect over the short term and a smaller effect (based on one study) one to six months after treatment when compared with placebo. For low back pain suspected to arise from other sources, the results were inconclusive. Radiofrequency denervation is an invasive procedure that can cause a variety of complications. The studies in this review were not of adequate quality and size to document how often complications occur. Given the poor quality of the evidence, large, high-quality studies are urgently needed to determine whether radiofrequency denervation is safe and effective.
For this update, 15 additional studies fulfilled selection criteria. We included in this review 33 randomised controlled trials with 4477 participants; 21 compared different prostanoids versus placebo, seven compared prostanoids versus other agents, and five conducted head-to-head comparisons using two different prostanoids. We found low-quality evidence that suggests no clear difference in the incidence of cardiovascular mortality between patients receiving prostanoids and those given placebo (risk ratio (RR) 0.81, 95% confidence interval (CI) 0.41 to 1.58). We found high-quality evidence showing that prostanoids have no effect on the incidence of total amputations when compared with placebo (RR 0.97, 95% CI 0.86 to 1.09). Adverse events were more frequent with prostanoids than with placebo (RR 2.11, 95% CI 1.79 to 2.50; moderate-quality evidence). The most commonly reported adverse events were headache, nausea, vomiting, diarrhoea, flushing, and hypotension. We found moderate-quality evidence showing that prostanoids reduced rest-pain (RR 1.30, 95% CI 1.06 to 1.59) and promoted ulcer healing (RR 1.24, 95% CI 1.04 to 1.48) when compared with placebo, although these small beneficial effects were diluted when we performed a sensitivity analysis that excluded studies at high risk of bias. Additionally, the evidence on the effects of prostanoids versus other active agents or versus other prostanoids was of low to very low quality, because studies conducting these comparisons were few and we judged them to be at high risk of bias. None of the included studies assessed quality of life. We found high-quality evidence showing that prostanoids have no effect on the incidence of total amputations when compared with placebo. Moderate-quality evidence showed small beneficial effects of prostanoids for rest-pain relief and ulcer healing when compared with placebo.
Additionally, moderate-quality evidence showed a greater incidence of adverse effects with the use of prostanoids, and low-quality evidence suggests that prostanoids have no effect on cardiovascular mortality when compared with placebo. None of the included studies reported quality of life measurements. The balance between benefits and harms associated with use of prostanoids in patients with critical limb ischaemia with no chance of reconstructive intervention is uncertain; therefore therapeutic alternatives should be carefully assessed. Main reasons for downgrading the quality of evidence were high risk of attrition bias and imprecision of effect estimates.
We searched published and unpublished studies up to January 2017. We found 33 clinical trials with a total of 4477 participants; most were published in the 1980s and 1990s and were carried out in European countries. Eleven out of 33 studies received funding from pharmaceutical companies. Most studies included patients over 60 years old who had severe blocking of arteries of the leg; many also had diabetes. Follow-up was usually less than 1 year. We found that, when compared with placebo, prostanoids provided a small beneficial effect by alleviating pain in the leg at rest and improving ulcer healing. Prostanoids did not reduce deaths or the need for an amputation. We found that no studies evaluated the quality of life of people with this condition. We found insufficient evidence to compare effects of prostanoids against those of other medications or other prostanoids. Our findings suggest that taking prostanoids can cause harm. When 1000 patients are treated with prostanoids, on average 674 (572 to 798) will experience adverse events, compared with 319 given placebo. Adverse events usually include nausea, vomiting, diarrhoea, headache, dizziness, and flushing. More severe adverse events include low blood pressure, chest pain, and abnormalities in heart rhythm. When evaluating effects of prostanoids on rest-pain, ulcer healing, and adverse events, researchers provided moderate-quality evidence; review authors downgraded this in most cases because of loss of participants to follow-up. Evaluating cardiovascular mortality yielded evidence of low quality related to loss of participants to follow-up and small numbers of reported events. On the other hand, the quality of evidence on risk of amputation was high.
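The per-1000 figures in this summary follow from multiplying the control-group risk by the pooled risk ratio for adverse events (RR 2.11, 95% CI 1.79 to 2.50). A minimal sketch, assuming a control risk of 319 per 1000 as quoted; the small differences from the quoted 674 (572 to 798) reflect rounding of the underlying baseline risk:

```python
# Expected adverse events per 1000 treated = control risk per 1000 * RR.
# Control risk of 319/1000 is taken from the summary; the RR and its
# 95% CI come from the pooled analysis (RR 2.11, 95% CI 1.79 to 2.50).
control_per_1000 = 319

for label, rr in [("point estimate", 2.11), ("lower 95% CI", 1.79), ("upper 95% CI", 2.50)]:
    print(f"{label}: {control_per_1000 * rr:.0f} per 1000")
```

With the rounded baseline this yields roughly 673 (571 to 798) per 1000, matching the summary's figures to within one event per 1000.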
Results should be interpreted with caution because the risk of bias of the included trials was variable. We included eleven new trials for this update; there are now 32 included studies, and one trial is ongoing. Thirty trials involving 9015 women contributed to the analysis. Comparisons include any upright position, birth or squat stool, birth cushion, and birth chair versus supine positions. In all women studied (primigravid and multigravid), when compared with supine positions, the upright position was associated with a reduction in the duration of the second stage (MD -6.16 minutes, 95% CI -9.74 to -2.59 minutes; 19 trials; 5811 women; P = 0.0007; random-effects; I² = 91%; very low-quality evidence); however, this result should be interpreted with caution due to large differences in size and direction of effect in individual studies. Upright positions were also associated with no clear difference in the rates of caesarean section (RR 1.22, 95% CI 0.81 to 1.81; 16 trials; 5439 women; low-quality evidence), a reduction in assisted deliveries (RR 0.75, 95% CI 0.66 to 0.86; 21 trials; 6481 women; moderate-quality evidence), a reduction in episiotomies (average RR 0.75, 95% CI 0.61 to 0.92; 17 trials; 6148 women; random-effects; I² = 88%), a possible increase in second degree perineal tears (RR 1.20, 95% CI 1.00 to 1.44; 18 trials; 6715 women; I² = 43%; low-quality evidence), no clear difference in the number of third or fourth degree perineal tears (RR 0.72, 95% CI 0.32 to 1.65; 6 trials; 1840 women; very low-quality evidence), increased estimated blood loss greater than 500 mL (RR 1.48, 95% CI 1.10 to 1.98; 15 trials; 5615 women; I² = 33%; moderate-quality evidence), fewer abnormal fetal heart rate patterns (RR 0.46, 95% CI 0.22 to 0.93; 2 trials; 617 women), and no clear difference in the number of babies admitted to neonatal intensive care (RR 0.79, 95% CI 0.51 to 1.21; 4 trials; 2565 infants; low-quality evidence).
On sensitivity analysis excluding trials with high risk of bias, these findings were unchanged except that there was no longer a clear difference in duration of second stage of labour (MD -4.34, 95% CI -9.00 to 0.32; 21 trials; 2499 women; I² = 85%). The main reasons for downgrading in the GRADE assessment were that several studies had design limitations (inadequate randomisation and allocation concealment) with high heterogeneity and wide CIs. The findings of this review suggest several possible benefits for upright posture in women without epidural anaesthesia, such as a very small reduction in the duration of second stage of labour (mainly from the primigravid group), reduction in episiotomy rates and assisted deliveries. However, there is an increased risk of blood loss greater than 500 mL and there may be an increased risk of second degree tears, though we cannot be certain of this. In view of the variable risk of bias of the trials reviewed, further trials using well-designed protocols are needed to ascertain the true benefits and risks of various birth positions.
We searched for evidence up to 30 November 2016. This review now includes data from 30 randomised controlled trials involving 9015 pregnant women who gave birth without epidural anaesthesia. Overall, evidence was not of good quality. When women gave birth in an upright position, as compared with lying on their backs, the length of time they were pushing (second stage of labour) was reduced by around six minutes (19 trials, 5811 women; very low-quality evidence). Fewer women had an assisted delivery, for example with forceps (21 trials, 6481 women; moderate-quality evidence). The number of women having a caesarean section did not differ (16 trials, 5439 women; low-quality evidence). Fewer women had an episiotomy (a surgical cut to the perineum to enlarge the opening for the baby to pass through) although there was a tendency for more women to have perineal tears (low-quality evidence). There was no difference in number of women with serious perineal tears (6 trials, 1840 women; very low-quality evidence) between those giving birth upright or supine. Women were more likely to have a blood loss of 500 mL or more (15 trials, 5615 women; moderate-quality evidence) in the upright position but this may be associated with more accurate ways of measuring the blood loss. Fewer babies had problems with fast or irregular heart beats that indicate distress (2 trials, 617 women) when women gave birth in an upright position although the number of admissions to the neonatal unit was no different (4 trials, 2565 infants; low-quality evidence). This review found that there could be benefits for women who choose to give birth in an upright position. The length of time they had to push may be reduced but the effect was very small and these women might lose more blood. The results should be interpreted with caution because of poorly conducted studies, variations between trials and in how the findings were analysed. 
More research into the benefits and risks of different birthing positions would help us to say with greater certainty which birth position is best for most women and their babies. Overall, women should be encouraged to give birth in whatever position they find comfortable.
Thirteen trials met the inclusion criteria, comprising women with stress urinary incontinence (SUI), urgency urinary incontinence (UUI) or mixed urinary incontinence (MUI); they compared PFMT added to another active treatment (585 women) with the same active treatment alone (579 women). The pre-specified comparisons were reported by single trials, except bladder training, which was reported by two trials, and electrical stimulation, which was reported by three trials. However, only two of the three trials reporting electrical stimulation could be pooled, as one of the trials did not report any relevant data. We considered the included trials to be at unclear risk of bias for most of the domains, predominantly due to the lack of adequate information in a number of trials. This affected our rating of the quality of evidence. The majority of the trials did not report the primary outcomes specified in the review (cure or improvement, quality of life) or measured the outcomes in different ways. Effect estimates from small, single trials across a number of comparisons were indeterminate for key outcomes relating to symptoms, and we rated the quality of evidence, using the GRADE approach, as either low or very low. More women reported cure or improvement of incontinence in two trials comparing PFMT added to electrical stimulation to electrical stimulation alone, in women with SUI, but this was not statistically significant (9/26 (35%) versus 5/30 (17%); risk ratio (RR) 2.06, 95% confidence interval (CI) 0.79 to 5.38). We judged the quality of the evidence to be very low. There was moderate-quality evidence from a single trial investigating women with SUI, UUI or MUI that a higher proportion of women who received a combination of PFMT and a heat- and steam-generating sheet reported a cure compared to those who received the sheet alone (19/37 (51%) versus 8/37 (22%); RR 2.38, 95% CI 1.19 to 4.73).
More women reported cure or improvement of incontinence in another trial comparing PFMT added to vaginal cones with vaginal cones alone, but this was not statistically significant (14/15 (93%) versus 14/19 (75%); RR 1.27, 95% CI 0.94 to 1.71). We judged the quality of the evidence to be very low. Only one trial evaluating PFMT when added to drug therapy provided information about adverse events (RR 0.84, 95% CI 0.45 to 1.60; very low-quality evidence). With regard to condition-specific quality of life, there were no statistically significant differences between women (with SUI, UUI or MUI) who received PFMT added to bladder training and those who received bladder training alone at three months after treatment, on either the Incontinence Impact Questionnaire-Revised scale (mean difference (MD) -5.90, 95% CI -35.53 to 23.73) or the Urogenital Distress Inventory scale (MD -18.90, 95% CI -37.92 to 0.12). A similar pattern of results was observed between women with SUI who received PFMT plus either a continence pessary or duloxetine and those who received the continence pessary or duloxetine alone. In all these comparisons, the quality of the evidence for the reported critical outcomes ranged from moderate to very low. This systematic review found insufficient evidence to state whether or not there were additional effects of adding PFMT to other active treatments, when compared with the same active treatment alone, for urinary incontinence (SUI, UUI or MUI) in women. These results should be interpreted with caution as most of the comparisons were investigated in small, single trials. None of the trials in this review were large enough to provide reliable evidence. Also, none of the included trials reported data on adverse events associated with the PFMT regimen itself, thereby making it very difficult to evaluate the safety of PFMT.
In this review, we included 13 trials that compared a combination of pelvic floor muscle training and another active treatment (585 women) with the same active treatment alone (579 women) to treat all types of urine leakage. There was not enough evidence to say whether or not the addition of pelvic floor muscle training to another active treatment would result in more reports of a cure or improvement in urine leakage and better quality of life, when compared to the same active treatment alone. There was also insufficient evidence to evaluate the adverse events associated with the addition of PFMT to other active treatments, as none of the included trials reported data on adverse events associated with the PFMT regimen. Most of the comparisons were investigated by single trials, which were small. None of the trials included in this systematic review were large enough to answer the questions they were designed to answer. The quality of the evidence was rated as either low or very low for the outcomes of interest. The main limitations of the evidence were poor reporting of study methods and lack of precision in the findings for the outcome measures.
The search identified 1366 references. Twenty-two RCTs meeting the inclusion criteria of this review were identified. The results of the meta-analyses (across all age groups) indicate a benefit of CGM for patients starting on CGM sensor-augmented insulin pump therapy compared to patients using multiple daily injections of insulin (MDI) and self-monitoring of blood glucose (SMBG). After six months there was a significantly larger decline in HbA1c level for real-time CGM users starting insulin pump therapy compared to patients using MDI and SMBG (mean difference (MD) in change in HbA1c level -0.7%, 95% confidence interval (CI) -0.8% to -0.5%, 2 RCTs, 562 patients, I2 = 84%). The risk of hypoglycaemia was increased for CGM users, but CIs were wide and included unity (4/43 versus 1/35; RR 3.26, 95% CI 0.38 to 27.82 and 21/247 versus 17/248; RR 1.24, 95% CI 0.67 to 2.29). One study reported the occurrence of ketoacidosis from baseline to six months; there was, however, only one event. Both RCTs were in patients with poorly controlled diabetes. For patients starting with CGM only, the average decline in HbA1c level six months after baseline was also statistically significantly larger for CGM users compared to SMBG users, but much smaller than for patients starting to use an insulin pump and CGM at the same time (MD change in HbA1c level -0.2%, 95% CI -0.4% to -0.1%, 6 RCTs, 963 patients, I2 = 55%). On average, there was no significant difference in risk of severe hypoglycaemia or ketoacidosis between CGM and SMBG users. The confidence intervals, however, were wide and included a decreased as well as an increased risk for CGM users compared to the control group (severe hypoglycaemia: 36/411 versus 33/407; RR 1.02, 95% CI 0.65 to 1.62, 4 RCTs, I2 = 0% and ketoacidosis: 8/411 versus 8/407; RR 0.94, 95% CI 0.36 to 2.40, 4 RCTs, I2 = 0%). Health-related quality of life was reported in five of the 22 studies. None of these studies found a significant difference between CGM and SMBG.
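The I2 values quoted alongside the pooled estimates (e.g. I2 = 84%) quantify between-study heterogeneity. A minimal sketch of how I2 follows from Cochran's Q under an inverse-variance model, using hypothetical effect estimates and standard errors (not the trials' actual data):

```python
# Hypothetical log-scale effect estimates and standard errors for two studies;
# illustrative numbers only, not data from the trials above.
effects = [-0.8, -0.5]
ses = [0.05, 0.06]

weights = [1 / se ** 2 for se in ses]  # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))  # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100  # I² as a percentage
print(f"I2 = {i2:.0f}%")
```

Values above roughly 75% are conventionally read as considerable heterogeneity, which is one reason pooled estimates with I2 = 84% should be interpreted cautiously.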
Diabetes complications, death and costs were not measured. There were no studies in pregnant women with type 1 diabetes or in patients with hypoglycaemia unawareness. There is limited evidence for the effectiveness of real-time continuous glucose monitoring (CGM) use in children, adults and patients with poorly controlled diabetes. The largest improvements in glycaemic control were seen for sensor-augmented insulin pump therapy in patients with poorly controlled diabetes who had not used an insulin pump before. The risk of severe hypoglycaemia or ketoacidosis was not significantly increased for CGM users, but as these events occurred infrequently, these results have to be interpreted cautiously. There are indications that higher compliance with wearing the CGM device improves the glycosylated haemoglobin A1c (HbA1c) level to a larger extent.
In this review 22 studies were included. These studies randomised 2883 patients with type 1 diabetes to receive a form of CGM or to use self-measurement of blood glucose (SMBG) by fingerprick. The duration of follow-up varied between 3 and 18 months; most studies reported results for six months of CGM use. This review shows that CGM helps in lowering the glycosylated haemoglobin A1c (HbA1c) value (a measure of glycaemic control). In most studies the HbA1c value decreased (denoting improvement of glycaemic control) in both the CGM and the SMBG users, but more in the CGM group. The difference in change in HbA1c levels between the groups was on average 0.7% for patients starting on an insulin pump with integrated CGM and 0.2% for patients starting with CGM alone. The most important adverse events, severe hypoglycaemia and ketoacidosis, did not occur frequently in the studies, and absolute numbers were low (9% of the patients, measured over six months). Diabetes complications, death from any cause and costs were not measured. There are no data on pregnant women with type 1 diabetes or patients with diabetes who are not aware of hypoglycaemia.
We included 78 studies (59,216 women) and excluded 34 studies. There was no statistically significant difference in maternal mortality for misoprostol compared with control groups overall (31 studies; 11/19,715 versus 4/20,076 deaths; risk ratio (RR) 2.08, 95% confidence interval (CI) 0.82 to 5.28); or for the trials of misoprostol versus placebo: 10 studies, 6/4626 versus 1/4707; RR 2.70; 95% CI 0.72 to 10.11; or for misoprostol versus other uterotonics: 21 studies, 5/15,089 versus 3/15,369 (19/100,000); RR 1.54; 95% CI 0.40 to 5.92. All 11 deaths in the misoprostol arms occurred in studies of misoprostol ≥ 600 µg. There was a statistically significant difference in the composite outcome ‘maternal death or severe morbidity’ for the comparison of misoprostol versus placebo (12 studies; average RR 1.70, 95% CI 1.02 to 2.81; Tau² = 0.00, I² = 0%) but not for the comparison of misoprostol versus other uterotonics (17 studies; average RR 1.50, 95% CI 0.50 to 4.52; Tau² = 1.81, I² = 69%). When we excluded hyperpyrexia from the composite outcome in exploratory analyses, there was no significant difference in either of these comparisons. Pyrexia > 38°C was increased with misoprostol compared with controls (56 studies, 2776/25,647 (10.8%) versus 614/26,800 (2.3%); average RR 3.97, 95% CI 3.13 to 5.04; Tau² = 0.47, I² = 80%). The effect was greater for trials using misoprostol 600 µg or more (27 studies; 2197/17,864 (12.3%) versus 422/18,161 (2.3%); average RR 4.64; 95% CI 3.33 to 6.46; Tau² = 0.51, I² = 86%) than for those using misoprostol 400 µg or less (31 studies; 525/6751 (7.8%) versus 185/7668 (2.4%); average RR 3.07; 95% CI 2.25 to 4.18; Tau² = 0.29, I² = 58%). Misoprostol does not appear to increase or reduce severe morbidity (excluding hyperpyrexia) when used to prevent or treat PPH. Misoprostol did not increase or decrease maternal mortality. However, misoprostol is associated with an increased risk of pyrexia, particularly in dosages of 600 µg or more.
Given that misoprostol is used prophylactically in very large numbers of healthy women, the greatest emphasis should be placed on limiting adverse effects. In this context, the findings of this review support the use of the lowest effective dose. As for any new medication being used on a large scale, continued vigilance for adverse effects is essential and there is a need for large randomised trials to further elucidate both the relative effectiveness and the risks of various dosages of misoprostol.
This review investigated whether giving misoprostol to women after birth to prevent or treat excessive bleeding reduces maternal deaths and severe complications other than blood loss (which is covered in separate reviews). We included 78 randomised controlled studies involving 59,216 women. The variety of study designs, populations studied, routes of administration and co-interventions, as well as the exceptionally high incidence of hyperpyrexia in Ecuador were limiting factors. Maternal deaths, and the combined outcome, death or severe illness resulting in major surgery, admission to intensive care or vital organ failure (excluding very high fever) were not reduced by misoprostol. The known side effects of misoprostol (fever and very high fever) were worse with dosages of 600 µg or more than with lower dosages. Therefore, the review supports the use of the lowest effective misoprostol dose to prevent or treat maternal bleeding after the birth of the baby, and calls for more research to find out the optimal dosage, with continued surveillance for serious side effects.
We included six trials involving 1270 participants in the review: three trials involving 686 participants compared routine shunting with no shunting, one trial involving 200 participants compared routine shunting with selective shunting, one trial involving 253 participants compared selective shunting with and without near-infrared refractory spectroscopy monitoring, and the other trial involving 131 participants compared shunting with a combination of electroencephalographic and carotid pressure measurement with shunting by carotid pressure measurement alone. In general, reporting of methodology in the included studies was poor. For most studies, the blinding of outcome assessors and the report of prespecified outcomes were unclear. For routine versus no shunting, there was no significant difference in the rate of all stroke, ipsilateral stroke or death up to 30 days after surgery, although data were limited. No significant difference in postoperative neurological deficit was found between selective shunting with and without near-infrared refractory spectroscopy monitoring. However, this analysis was inadequately powered to reliably detect the effect. There was no significant difference between the risk of ipsilateral stroke in participants selected for shunting with the combination of electroencephalographic and carotid pressure assessment compared with pressure assessment alone, although again the data were limited. This review concluded that the data available were too limited to either support or refute the use of routine or selective shunting in carotid endarterectomy. Large-scale randomised trials of routine shunting versus selective shunting are required. No method of monitoring in selective shunting has been shown to produce better outcomes.
We identified six studies, up to August 2013, for inclusion in the review. These studies included a total of 1270 participants. Three of the trials compared routine shunting with no shunting, one trial compared routine shunting versus selective shunting, and another two trials compared different methods of monitoring in selective shunting. We have not yet identified any trials that compared selective shunting with no shunting. All the included trials assessed the use of shunting in people undergoing endarterectomy under general anaesthetic. The age of the participants ranged from 40 to 89 years, and overall, there were more male than female participants. Where reported, participants were followed up for no longer than 30 days. There is still no definitive evidence for or against the use of a carotid shunt during carotid endarterectomy. This review suggests a possible benefit from the use of a shunt, but the overall results were not statistically significant. More trials are needed. There were significant problems with the quality of the randomised trials and, overall, the reporting of study methodology was poor.
Four studies involving 1642 participants made three eligible comparisons: (i) personal assistance versus usual care, (ii) personal assistance versus nursing homes, and (iii) personal assistance versus 'cluster care'. One was an RCT, three were non-randomised. Personal assistance was generally preferred over other services; however, some people prefer other models of care. This review indicates that personal assistance probably has some benefits for some recipients and caregivers. Paid assistance probably substitutes for informal care and may cost government more than alternatives; however, the total costs to recipients and society are currently unknown. Research in this field is limited. Personal assistance is expensive and difficult to organise, especially in places that do not already have services in place. When implementing new programmes, recipients could be randomly assigned to different forms of assistance. While advocates may support personal assistance for myriad reasons, this review demonstrates that further studies are required to determine which models of personal assistance are most effective and efficient.
This review investigated the effectiveness of personal assistance versus any other form of care for older adults (65+). An exhaustive literature search identified four studies, including 1642 participants, that met the inclusion criteria. They suggested that personal assistance may be preferred over other services; however, some people prefer other models of care. This review indicates that personal assistance probably has some benefits for some recipients and their informal caregivers. Paid assistance might substitute for informal care and cost government more than alternative arrangements; however, the relative total costs to recipients and society are unknown.
We included 42 studies involving 1453 participants. The trials included participants who had some residual motor power of the paretic arm, the potential for further motor recovery and limited pain or spasticity, but who tended to use the limb little, if at all. The majority of studies were underpowered (median number of included participants was 29) and we cannot rule out small-trial bias. Eleven trials (344 participants) assessed disability immediately after the intervention, indicating a non-significant standardised mean difference (SMD) of 0.24 (95% confidence interval (CI) -0.05 to 0.52) favouring CIMT compared with conventional treatment. For the most frequently reported outcome, arm motor function (28 studies involving 858 participants), the SMD was 0.34 (95% CI 0.12 to 0.55), showing a significant effect (P value 0.004) in favour of CIMT. Three studies involving 125 participants explored disability after a few months of follow-up and found no significant difference (SMD -0.20, 95% CI -0.57 to 0.16) in favour of conventional treatment. CIMT is a multi-faceted intervention where restriction of the less affected limb is accompanied by increased exercise tailored to the person’s capacity. We found that CIMT was associated with limited improvements in motor impairment and motor function, but that these benefits did not convincingly reduce disability. This differs from the result of our previous meta-analysis, where there was a suggestion that CIMT might be superior to traditional rehabilitation. Information about the long-term effects of CIMT is scarce. Further trials studying the relationship between participant characteristics and improved outcomes are required.
We, a team of Cochrane researchers, searched widely through the medical literature and identified 42 relevant studies involving 1453 participants. The evidence is current to January 2015. The participants in these studies had some control of their affected arm and were generally able to open their affected hand by extending the wrist and fingers. CIMT treatments varied between studies in terms of the time for which the participants' unaffected arm was constrained each day, and the amount of active exercise that the affected arm was required to do. CIMT was compared mainly to active physiotherapy treatments, and sometimes to no treatment. The 42 studies assessed different aspects of recovery from stroke, and not all measured the same things. Eleven studies (with 344 participants) assessed the effect of CIMT on disability (the effective use of the arm in daily living) and found that the use of CIMT did not lead to improvement in ability to manage everyday activities such as bathing, dressing, eating, and toileting. Twenty-eight trials (with 858 participants) tested whether CIMT improved the ability to use the affected arm. CIMT appeared to be more effective at improving arm movement than active physiotherapy treatments or no treatment. The quality of the evidence for each outcome is limited due to small numbers of study participants and poor reporting of study details. We considered the quality of the evidence to be low for disability and very low for the ability to use the affected arm.
Fifteen RCTs were included in the review. Six of these had been included in the previous review of RO. The studies included participants from a variety of settings, interventions that were of varying duration and intensity, and were from several different countries. The quality of the studies was generally low by current standards but most had taken steps to ensure assessors were blind to treatment allocation. Data were entered in the meta-analyses for 718 participants (407 receiving cognitive stimulation, 311 in control groups). The primary analysis was on changes that were evident immediately at the end of the treatment period. A few studies provided data allowing evaluation of whether any effects were subsequently maintained. A clear, consistent benefit on cognitive function was associated with cognitive stimulation (standardised mean difference (SMD) 0.41, 95% CI 0.25 to 0.57). This remained evident at follow-up one to three months after the end of treatment. In secondary analyses with smaller total sample sizes, benefits were also noted on self-reported quality of life and well-being (SMD 0.38, 95% CI 0.11 to 0.65) and on staff ratings of communication and social interaction (SMD 0.44, 95% CI 0.17 to 0.71). No differences in relation to mood (self-report or staff-rated), activities of daily living, general behavioural function or problem behaviour were noted. In the few studies reporting family caregiver outcomes, no differences were noted. Importantly, there was no indication of increased strain on family caregivers in the one study where they were trained to deliver the intervention. There was consistent evidence from multiple trials that cognitive stimulation programmes benefit cognition in people with mild to moderate dementia over and above any medication effects. However, the trials were of variable quality with small sample sizes and only limited details of the randomisation method were apparent in a number of the trials.
Other outcomes need more exploration but improvements in self-reported quality of life and well-being were promising. Further research should look into the potential benefits of longer term cognitive stimulation programmes and their clinical significance.
This review included 15 trials with a total of 718 participants. The findings suggested that cognitive stimulation has a beneficial effect on the memory and thinking test scores of people with dementia. Although based on a smaller number of studies, there was evidence that the people with dementia who took part reported improved quality of life. They were reported to communicate and interact better than previously. No evidence was found of improvements in the mood of participants or their ability to care for themselves or function independently, and there was no reduction in behaviour found difficult by staff or caregivers. Family caregivers, including those who were trained to deliver the intervention, did not report increased levels of strain or burden. The trials included people in the mild to moderate stages of dementia and the intervention does not appear to be appropriate for people with severe dementia. More research is needed to find out how long the effects of cognitive stimulation last and for how long it is beneficial to continue the stimulation. Involving family caregivers in the delivery of cognitive stimulation is an interesting development and merits further evaluation.
Of a total of 216 citations identified by the systematic literature search, we included six randomised controlled trials (reported in nine publications), with a total of 576 participants. We identified moderate heterogeneity in the methodological quality and risk of bias of the included trials. None of the pooled results for our main outcomes of interest showed significant differences: delayed gastric emptying (OR 0.60; 95% CI 0.31 to 1.18; P = 0.14), mortality (RD -0.01; 95% CI -0.03 to 0.02; P = 0.72), postoperative pancreatic fistula (OR 0.98; 95% CI 0.65 to 1.47; P = 0.92), postoperative haemorrhage (OR 0.79; 95% CI 0.38 to 1.65; P = 0.53), intra-abdominal abscess (OR 0.93; 95% CI 0.52 to 1.67; P = 0.82), bile leakage (OR 0.89; 95% CI 0.36 to 2.15; P = 0.79), reoperation rate (OR 0.59; 95% CI 0.27 to 1.31; P = 0.20), and length of hospital stay (MD -0.67; 95% CI -2.85 to 1.51; P = 0.55). Furthermore, the perioperative outcomes duration of operation, intraoperative blood loss and time to nasogastric tube (NGT) removal showed no relevant differences. Only one trial reported quality of life, on a subgroup of participants, also without a significant difference between the two groups at any time point. The overall quality of the evidence was only low to moderate, due to heterogeneity, some inconsistency and risk of bias in the included trials. There was low- to moderate-quality evidence suggesting no significant differences in morbidity, mortality, length of hospital stay, or quality of life between antecolic and retrocolic reconstruction routes for gastro- or duodenojejunostomy. Due to heterogeneity in definitions of the endpoints between trials, and differences in postoperative management, future research should be based on clearly defined endpoints and standardised perioperative management, to potentially elucidate differences between these two procedures. Novel strategies should be evaluated for prophylaxis and treatment of common complications, such as delayed gastric emptying.
We included six randomised controlled trials (reported in nine publications), reporting data on a total of 576 adult participants, who underwent pancreaticoduodenectomy for any pancreatic disease. The evidence is current to September 2015. We did not identify significant differences in delayed gastric emptying; postoperative mortality; postoperative pancreatic fistula, or other complications; reoperations; or length of hospital stay. Quality of life, only reported for a subset of participants in one trial, did not differ between the two groups. Our results do not suggest any relevant differences between antecolic and retrocolic reconstruction of the gastro- or duodenojejunostomy after partial pancreaticoduodenectomy. The quality of the evidence was only low to moderate, due to clinical and statistical differences between individual trials, and risk of bias, due to shortcomings in the way the trials were conducted. Therefore, the results should be viewed with caution.
Thirty-one trials involving 17,771 women are included in this review. This review found that folic acid supplementation has no impact on pregnancy outcomes such as preterm birth (risk ratio (RR) 1.01, 95% confidence interval (CI) 0.73 to 1.38; three studies, 2959 participants) and stillbirths/neonatal deaths (RR 1.33, 95% CI 0.96 to 1.85; three studies, 3110 participants). However, improvements were seen in mean birthweight (mean difference (MD) 135.75 g, 95% CI 47.85 to 223.68), and mean pre-delivery serum folate levels were higher with supplementation (standardised mean difference (SMD) 2.03, 95% CI 0.80 to 3.27; eight studies, 1250 participants; random-effects). On the other hand, the review found no impact on improving pre-delivery anaemia (average RR 0.62, 95% CI 0.35 to 1.10; eight studies, 4149 participants; random-effects), mean pre-delivery haemoglobin level (MD -0.03, 95% CI -0.25 to 0.19; 12 studies, 1806 participants), or mean pre-delivery red cell folate levels (SMD 1.59, 95% CI -0.07 to 3.26; four studies, 427 participants; random-effects). However, a significant reduction was seen in the incidence of megaloblastic anaemia (RR 0.21, 95% CI 0.11 to 0.38; four studies, 3839 participants). We found no conclusive evidence of benefit of folic acid supplementation during pregnancy on pregnancy outcomes.
The review authors found 31 trials (involving 17,771 women) that looked at the impact of providing folic acid supplementation during pregnancy. The data showed that taking folate during pregnancy was not associated with reducing the chance of preterm births, stillbirths, neonatal deaths, low birthweight babies, pre-delivery anaemia in the mother or low pre-delivery red cell folate, although pre-delivery serum folate levels were improved. The review showed an improvement in mean birthweight, but no impact of folate supplementation on the mother’s mean haemoglobin levels during pregnancy compared with taking a placebo. Overall, the review showed some benefit in indicators of folate status in the mother. The evidence provided so far from these trials did not find conclusive results for any overall benefit of folic acid supplementation during pregnancy. Most of the studies were conducted over 30 to 45 years ago.
We included 16 studies out of the 243 identified. Most of the included studies showed methodological weaknesses that hamper the strength and reliability of their findings. When fees were introduced or increased, we found that the use of health services decreased significantly in most studies. Two studies found increases in health service use when quality improvements were introduced at the same time as user fees; however, these studies were at high risk of bias. We found no evidence of effects on health outcomes or health expenditure. The review suggests that reducing or removing user fees increases the utilisation of certain healthcare services. However, emerging evidence suggests that such a change may have unintended consequences on utilisation of preventive services and service quality. The review also found that introducing or increasing fees can have a negative impact on health services utilisation, although some evidence suggests that when implemented with quality improvements these interventions could be beneficial. Most of the included studies suffered from important methodological weaknesses. More rigorous research is needed to inform debates on the desirability and effects of user fees.
The studies in this review took place in 12 different countries. They evaluated either the effects of introducing user fees, removing fees, or increasing or decreasing fees. The studies varied according to the type of health services and the level and nature of payment. While some of the studies looked at the impact of large-scale national reforms, other studies looked at small-scale pilot projects. All of the evidence was of very low quality and the studies showed mixed results.

When user fees were introduced or increased:
- People’s use of preventive healthcare services decreased.
- People’s use of curative services generally decreased. However, when quality improvements were made to the health services at the same time as fees were introduced, people’s use of curative services increased. In addition, poor parts of the population began to use health care services more.

When user fees were removed:
- There was usually no immediate impact on people’s use of preventive healthcare services, but in several cases, people’s use of these services did increase after some time.
- There was some increase in the number of outpatient visits, but no increase in the number of inpatient visits.

When user fees were decreased:
- There was an increase in the use of preventive and curative healthcare services, ranging from a very small to a large increase.

To summarise, results were mixed and the quality of the evidence was very low. We are therefore uncertain about the effects of user fees on health service use.
Six trials involving 849 patients satisfied the inclusion criteria. Pharmacological interventions included aprotinin, desmopressin, recombinant factor VIIa, antithrombin III, and tranexamic acid. One or two trials could be included under most comparisons. All trials had a high risk of bias. There was no significant difference in peri-operative mortality, survival at maximal follow-up, liver failure, or other peri-operative morbidity. The risk ratio of requiring allogeneic blood transfusion was significantly lower in the aprotinin and tranexamic acid groups than in the respective control groups. Other interventions did not show significant decreases in allogeneic transfusion requirements. None of the interventions seemed to decrease peri-operative morbidity or offer any long-term survival benefit. Aprotinin and tranexamic acid show promise in the reduction of blood transfusion requirements in liver resection surgery. However, there is a high risk of type I errors (erroneously concluding that an intervention is beneficial when it is actually not beneficial) and type II errors (erroneously concluding that an intervention is not beneficial when it is actually beneficial) because of the few trials included, the small sample size in each trial, and the high risk of bias. Further randomised clinical trials with low risk of bias and random errors assessing clinically important outcomes such as peri-operative mortality are necessary to assess any pharmacological interventions aimed at decreasing blood loss and blood transfusion requirements in liver resections. Trials need to be designed to assess the effect of a combination of different interventions in liver resections.
This systematic review aimed to determine whether any medical treatment decreased blood loss and allogeneic blood transfusion requirements in patients undergoing liver resections. This systematic review included six trials with 849 patients. All trials had a high risk of bias ('systematic error') as well as of play of chance ('random error'). The trials compared medicines (such as aprotinin, desmopressin, recombinant factor VIIa, antithrombin III, and tranexamic acid) with controls (no medicines). There was no difference in death, complications due to surgery, or long-term survival in any of the comparisons. Fewer patients required transfusion of blood donated by others when aprotinin or tranexamic acid was compared to controls not receiving the interventions. The other comparisons showed no decrease in transfusion requirements. However, there is a high risk of type I errors (erroneously concluding that an intervention is beneficial when it is actually not beneficial) and type II errors (erroneously concluding that an intervention is not beneficial when it is actually beneficial) because of the few trials included and the small sample size in each trial, as well as the inherent risk of bias (systematic errors). Aprotinin and tranexamic acid show promise in the reduction of blood transfusion requirements in liver resections. Further randomised clinical trials with low risk of bias (systematic errors) and low risk of play of chance (random errors), which assess clinically important outcomes (such as death and complications due to the operation), are necessary to assess any pharmacological interventions aimed at decreasing blood loss and blood transfusion requirements in liver resections. Trials need to be designed to assess the effect of a combination of different interventions in liver resections.
We included seven trials involving 521 patients in this review. The sample size of the trials varied from 12 to 180 patients. All the trials were at high risk of systematic errors and of random errors. Four trials included patients who underwent liver resection only. In the remaining three trials, patients underwent liver resection combined with extrahepatic biliary resection resulting in a biliary-enteric anastomosis. Four trials included only major liver resections. The remaining three trials included a mixture of major and minor liver resections. It appears that the proportion of cirrhotic patients in the trials was very low. The comparisons performed included whether antibiotics are necessary routinely during the peri-operative period of liver resection, the duration of antibiotics, the use of prebiotics and probiotics in the peri-operative period, the use of recombinant bactericidal/permeability-increasing protein 21 (rBPI21), and the use of topical povidone-iodine gel at the time of wound closure. Only one or two trials were included under each comparison. There were no significant differences in mortality or severe morbidity in any of the comparisons. Quality of life was not reported in any of the trials. There is currently no evidence to support or refute the use of any treatment to reduce infectious complications after liver resections. Further well-designed trials with low risk of systematic errors and low risk of random errors are necessary.
We included seven trials involving 521 patients in this review. The number of patients included in the trials varied from 12 to 180. The comparisons performed included whether antibiotics are necessary routinely during the peri-operative period of liver resection, the duration of antibiotics, and the use of other agents to improve the body's general resistance to infection. There was no difference in the risk of death or in major complication rates between the compared groups in any of the comparisons. Quality of life was not reported in any of the trials. All the trials were at high risk of systematic errors (i.e. there was a potential to arrive at wrong conclusions because of the way the trial was conducted) and random errors (there was a potential to arrive at wrong conclusions because of the play of chance). We are unable to advocate or refute any method of decreasing infectious complications after liver resection. Further well-designed trials with low risk of systematic errors and low risk of random errors are necessary.
Five trials involving 836 participants randomised to peritoneal closure (410 participants) and no peritoneal closure (426 participants) were included in this review. All the trials were at high risk of bias. All the trials included participants undergoing laparotomy (open surgery). Four of the five trials used catgut or chromic catgut for peritoneal closure. Three trials involved vertical incisions and two trials involved transverse incisions. None of the trials reported 30-day mortality. There was no significant difference in the one-year mortality between the two groups (RR 1.11; 95% CI 0.56 to 2.19) in the only trial that reported this outcome. The only serious peri-operative adverse event reported was burst abdomen, which was reported by three trials. Overall, 10/663 (1.5%) of participants developed burst abdomen. There was no significant difference in the proportion of participants who developed burst abdomen between the two groups (RR 0.71; 95% CI 0.22 to 2.35). The same three trials reported the proportion of participants who developed incisional hernia. Details of the follow-up period were only available for one trial, and so we were unable to calculate the incidence rate. Overall, 17/663 (2.5%) of participants developed incisional hernia. There was no significant difference in the proportion of participants who developed incisional hernia between the two groups (RR 0.92; 95% CI 0.37 to 2.28). None of the trials reported quality of life; the incidence rate of, or proportion of participants who developed, intestinal obstruction due to adhesions; or re-operation due to incisional hernia or adhesions. Only one trial reported the length of hospital stay, and this trial did not include readmissions in its calculations. There was no significant difference in the length of hospital stay between the two groups (MD 0.40 days; 95% CI -0.51 to 1.31). There is no evidence for any short-term or long-term advantage in peritoneal closure for non-obstetric operations. 
If further trials are performed on this topic, they should have an adequate period of follow-up and adequate measures should be taken to ensure that the results are not subject to bias.
We identified five trials involving 836 participants who had open abdominal operations. Peritoneal closure was done in 410 participants and not done in 426. All trials had a high risk of bias. Only one trial reported the proportion of participants who died up to one year after the operation, and there was no significant difference between the closure and non-closure groups. Three trials reported major wound breakdown (burst abdomen), which requires emergency surgery. Overall, 10/663 participants (1.5%) developed burst abdomen, with no significant difference in proportions between the two groups. Three trials reported minor wound breakdown (incisional hernia), which may require surgery. Overall, 17/663 participants (2.5%) developed incisional hernia; again there was no significant difference between the two groups. None of the trials reported on important outcomes, such as quality of life; the occurrence of intestinal obstruction (caused by intestines sticking to each other and to the abdominal wall (adhesions)); or the proportion of participants who had surgery to fix incisional hernia or adhesions. Only one trial reported length of hospital stay and showed no significant difference between the groups, but did not include readmissions in its calculations. There does not appear to be any evidence of a short-term or long-term advantage of peritoneal closure in operations not related to childbirth. However, the trials were at high risk of bias, which can lead to false conclusions. Interestingly, our findings are similar to those of another research group who performed a similar review for operations related to childbirth.
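Risk ratios with 95% confidence intervals, as reported throughout these reviews, can be reproduced from the underlying 2x2 counts. A minimal sketch in Python, using hypothetical counts (not the trial data above) and the standard log-transformation (Katz) method:

```python
import math

# Hypothetical, illustrative counts only (not the review's data):
# events / total in the intervention and control arms.
a, n1 = 4, 330   # e.g. burst abdomen with peritoneal closure
c, n2 = 6, 333   # e.g. burst abdomen without peritoneal closure

# Risk ratio: ratio of the two event proportions.
rr = (a / n1) / (c / n2)

# Katz log method: standard error of ln(RR), then exponentiate the limits.
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # → RR 0.67, 95% CI 0.19 to 2.36
```

The interval is computed on the log scale because the sampling distribution of a risk ratio is skewed; exponentiating the limits returns them to the ratio scale. A confidence interval that spans 1, as here, is what the reviews describe as "no significant difference".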
Twenty-three trials are included; allocation concealment was adequate in nine. Sixteen trials compared artemisinin drugs with quinine in 2653 patients. Artemisinin drugs were associated with better survival (mortality odds ratio 0.61, 95% confidence interval 0.46 to 0.82, random-effects model). In trials where concealment of allocation was adequate (2261 patients), this was barely statistically significant (odds ratio 0.72, 95% CI 0.54 to 0.96, random-effects model). In 1939 patients with cerebral malaria, mortality was also lower with artemisinin drugs overall (odds ratio 0.63, 95% CI 0.44 to 0.88, random-effects model). However, the difference was not significant when only trials reporting adequate concealment of allocation were analysed (odds ratio 0.78, 95% CI 0.55 to 1.10, random-effects model), based on 1607 patients. No difference in neurological sequelae was shown. Compared with quinine, artemisinin drugs showed faster parasite clearance from the blood and similar adverse effects. The evidence suggests that artemisinin drugs are no worse than quinine in preventing death in severe or complicated malaria. No artemisinin derivative appears to be better than the others. This review summarises trials up to 1999. For the reasons given in the 'What's new' section, this review will no longer be updated.
The review shows that treatment with artemisinin drugs may be better than quinine at preventing death in adults and children with severe and complicated malaria. There is no evidence so far against early treatment with suppositories in rural areas whilst patients are transferred to hospital. Few side effects have been reported with these drugs.
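The pooled odds ratios above come from weighting each trial's result by its precision. A simplified fixed-effect inverse-variance sketch with hypothetical trial values (not the review's data; a true random-effects model, as used in the review, additionally inflates each variance by an estimated between-trial heterogeneity term):

```python
import math

# Hypothetical log odds ratios and their variances from three trials
# (illustrative values only, not the review's data).
trials = [(math.log(0.55), 0.04), (math.log(0.70), 0.09), (math.log(0.62), 0.06)]

# Fixed-effect inverse-variance pooling: weight each trial by 1/variance.
weights = [1 / v for _, v in trials]
pooled = sum(w * y for w, (y, _) in zip(weights, trials)) / sum(weights)
se = math.sqrt(1 / sum(weights))

lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled OR {math.exp(pooled):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# → pooled OR 0.60, 95% CI 0.46 to 0.79
```

Pooling is done on the log scale, where the estimates are approximately normal; the pooled confidence interval is narrower than any single trial's, which is the point of meta-analysis.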
We included five trials (involving 960 women). In three trials of 471 women, we found no significant difference in the incidence of mastitis between use of antibiotics and no antibiotics (risk ratio (RR) 0.43; 95% confidence interval (CI) 0.11 to 1.61), nor in one trial of 99 women comparing two doses (RR 0.38; 95% CI 0.02 to 9.18). We found no significant differences for mastitis in three further trials: specialist breastfeeding education versus usual care (one trial); anti-secretory factor cereal (one trial); and mupirocin, fusidic acid ointment or breastfeeding advice (one trial). Generally we found no differences in any of the trials for breastfeeding initiation or duration, or for symptoms of mastitis. There was insufficient evidence to show the effectiveness of any of the interventions, including breastfeeding education, pharmacological treatments and alternative therapies, regarding the occurrence of mastitis or breastfeeding exclusivity and duration. While the studies all reported the incidence of mastitis, they all used different interventions. Caution needs to be applied when considering the findings of this review, as the conclusions are based on a small number of studies, often with small sample sizes. Further adequately powered research is urgently needed to conclusively determine the effectiveness of these interventions.
This review found five randomised controlled trials that involved a total of 960 women. They looked at a variety of preventive interventions, including breastfeeding education, antibiotic medication, topical ointments and anti-secretory factor cereal. On this limited evidence, none of the therapies made any difference in reducing breast infections or in breastfeeding exclusivity and duration. The studies were generally of low quality, with limited findings, highlighting the need for better quality research in this area.
Five studies met the inclusion criteria. In all five studies, published trials showed an overall greater treatment effect than grey trials. This difference was statistically significant in one of the five studies. Data could be combined for three of the five studies. This showed that, on average, published trials showed a 9% greater treatment effect than grey trials (ratio of odds ratios for grey versus published trials 1.09; 95% CI 1.03 to 1.16). Overall there were more published trials included in the meta-analyses than grey trials (median 224 (IQR 108 to 365) versus 45 (IQR 40 to 102)). Published trials also had more participants on average. The most common types of grey literature were abstracts (55%) and unpublished data (30%). There is limited evidence to show whether grey trials are of poorer methodological quality than published trials. This review shows that published trials tend to be larger and to show an overall greater treatment effect than grey trials. This has important implications for reviewers, who need to ensure they identify grey trials in order to minimise the risk of introducing bias into their review.
This methodology review identified five studies which investigated the effect of including trials found in the grey literature in systematic reviews of health care interventions. They showed that trials found in the published literature tend to be larger and show larger effects of a health care intervention than those trials found in the grey literature. There was limited evidence to show whether grey trials are of poorer methodological quality than published trials. This means that those carrying out systematic reviews need to search for trials in both the published and grey literature in order to help minimise the effects of publication bias in their review.
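The "ratio of odds ratios" summary statistic above compares the pooled treatment effect estimated from grey trials against that from published trials. A minimal sketch with hypothetical pooled values (not the review's data):

```python
import math

# Hypothetical pooled odds ratios for the same harmful outcome,
# estimated separately from published and grey trials
# (illustrative values only, not the review's data).
or_published = 0.60   # published trials: larger apparent benefit
or_grey = 0.65        # grey trials: smaller apparent benefit

# Ratio of odds ratios (ROR), grey versus published. When both ORs are
# below 1, an ROR above 1 means published trials overstate the benefit.
ror = or_grey / or_published

# On the log scale the ROR is simply a difference of log odds ratios.
assert math.isclose(math.log(ror), math.log(or_grey) - math.log(or_published))
print(f"ROR = {ror:.2f}")  # → ROR = 1.08
```

In practice the log-ROR is reported with a confidence interval derived from the standard errors of both pooled estimates; the review's pooled value of 1.09 corresponds to a 9% relative difference in treatment effect.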
Thirteen studies (254 individuals) met our inclusion criteria. These included 75 individuals with type 1 diabetes and 158 individuals with type 2 diabetes. The median reduction in urinary sodium was 203 mmol/24 h (11.9 g/day of salt) in type 1 diabetes and 125 mmol/24 h (7.3 g/day of salt) in type 2 diabetes. The median duration of salt restriction was one week in both type 1 and type 2 diabetes. BP was reduced in both type 1 and type 2 diabetes. In type 1 diabetes (56 individuals), salt restriction reduced BP by 7.11/3.13 mm Hg (systolic/diastolic; 95% CI: systolic BP (SBP) -9.13 to -5.10; diastolic BP (DBP) -4.28 to -1.98). In type 2 diabetes (56 individuals), salt restriction reduced BP by 6.90/2.87 mm Hg (95% CI: SBP -9.84 to -3.95; DBP -4.39 to -1.35). There was a greater reduction in BP in normotensive patients, possibly due to a larger decrease in salt intake in this group. Although the studies are not extensive, this meta-analysis shows a large fall in BP with salt restriction, similar to that of single-drug therapy. All people with diabetes should consider reducing salt intake at least to less than 5-6 g/day, in keeping with current recommendations for the general population, and may consider lowering salt intake to lower levels, although further studies are needed.
This review found 13 studies including 254 patients with type 1 and type 2 diabetes. Reducing salt intake by 8.5 g/day lowered BP by 7/3 mm Hg. Public health guidelines recommend reducing dietary salt intake to less than 5-6 g/day and people with diabetes would benefit from reducing salt in their diet to at least this level.
We included five studies with a total of 1296 participants under two years of age hospitalised with bronchiolitis. Two studies with low risk of bias compared 4 mg montelukast (a leukotriene inhibitor) daily use from admission until discharge with a matching placebo. Both selected length of hospital stay as a primary outcome and clinical severity score as a secondary outcome. However, the effects of leukotriene inhibitors on length of hospital stay and clinical severity score were uncertain due to considerable heterogeneity between the study results and wide confidence intervals around the estimated effects (hospital stay: mean difference (MD) -0.95 days, 95% confidence interval (CI) -3.08 to 1.19, P value = 0.38, low quality evidence; clinical severity score on day two: MD -0.57, 95% CI -2.37 to 1.23, P value = 0.53, low quality evidence; clinical severity score on day three: MD 0.17, 95% CI -1.93 to 2.28, P value = 0.87, low quality evidence). The other three studies compared montelukast for several weeks for preventing post-bronchiolitis symptoms with placebo. We assessed one study as low risk of bias, whereas we assessed the other two studies as having a high risk of attrition bias. Due to the significant clinical heterogeneity in severity of disease, duration of treatment, outcome measurements and timing of assessment, we did not pool the results. Individual analyses of these studies did not show significant differences between the leukotriene inhibitors group and the control group in symptom-free days and incidence of recurrent wheezing. One study of 952 children reported two deaths in the leukotriene inhibitors group: neither was determined to be drug-related. No data were available on the percentage of children requiring ventilation, oxygen saturation and respiratory rate. Finally, three studies reported adverse events including diarrhoea, wheezing shortly after administration and rash. No differences were reported between the study groups. 
The current evidence does not allow definitive conclusions to be made about the effects of leukotriene inhibitors on length of hospital stay and clinical severity score in infants and young children with bronchiolitis. The quality of the evidence was low due to inconsistency (unexplained high levels of statistical heterogeneity) and imprecision arising from small sample sizes and wide confidence intervals, which did not rule out a null effect or harm. Data on symptom-free days and incidence of recurrent wheezing were from single studies only. Further large studies are required. We identified one registered ongoing study, which may make a contribution in the updates of this review.
This evidence is current to May 2014. We identified five randomised controlled trials (1296 participants under two years of age) comparing montelukast (a leukotriene inhibitor) with placebo in infants and young children hospitalised with bronchiolitis. Our primary outcomes were length of hospital stay and all-cause mortality. Secondary outcomes included clinical severity score, percentage of symptom-free days, percentage of children requiring ventilation, recurrent wheezing, oxygen saturation, respiratory rate and clinical adverse effects. The effects of montelukast on length of hospital stay and clinical severity score were uncertain due to considerable heterogeneity (differences) between the study results and wide confidence intervals around the estimated effects. Data on symptom-free days and incidence of recurrent wheezing were from single studies only and individual analyses of these studies did not show significant differences between the intervention group and the control group. One study of 952 children reported two deaths in the leukotriene inhibitors group: neither was determined to be drug-related. No data were available on the percentage of children requiring ventilation, oxygen saturation and respiratory rate. Finally, three studies reported adverse events including diarrhoea, wheezing shortly after administration and rash. No differences were reported between the study groups. We assessed the quality of the evidence for length of hospital stay and clinical severity score as low due to inconsistency and imprecision arising from small sample sizes and wide confidence intervals, which did not rule out no effect or harm. Overall, the current evidence does not allow definitive conclusions to be made about the effect and safety of leukotriene inhibitors in infants and young children with bronchiolitis.
We identified two eligible studies, both comparing the use of one model of SAD, the ProSeal laryngeal mask airway (PLMA), with a TT, with a total study population of 232. One study population underwent laparoscopic surgery. The included studies were generally of high quality, but there was an unavoidable high risk of bias in the main airway variables, such as change of device or laryngospasm, as the intubator could not be blinded. Many outcomes included data from one study only. A total of 5/118 (4.2%) participants randomly assigned to PLMA across both studies were changed to TT insertion because of failed or unsatisfactory placement of the device. Postoperative episodes of hypoxaemia (oxygen saturation < 92% whilst breathing air) were less common in the PLMA groups (RR 0.27, 95% CI 0.10 to 0.72). We found a significant postoperative difference in mean oxygen saturation, with saturation 2.54% higher in the PLMA group (95% CI 1.09% to 4.00%). This analysis showed high levels of heterogeneity between results (I2 = 71%). The leak fraction was significantly higher in the PLMA group, with the largest difference seen during abdominal insufflation: a 6.4% increase in the PLMA group (95% CI 3.07% to 9.73%). No cases of pulmonary aspiration of gastric contents, mortality or serious respiratory complications were reported in either study. We are therefore unable to present effect estimates for these outcomes. In all, 2/118 participants with a PLMA suffered laryngospasm or bronchospasm compared with 4/114 participants with a TT. The pooled estimate shows a non-significant reduction in laryngospasm in the PLMA group (RR 0.48, 95% CI 0.09 to 2.59). Postoperative coughing was less common in the PLMA group (RR 0.10, 95% CI 0.03 to 0.31), and there was no significant difference in the risk of sore throat or dysphonia (RR 0.25, 95% CI 0.03 to 2.13). On average, PLMA placement took 5.9 seconds longer than TT placement (95% CI 3 seconds to 8.8 seconds).
There was no significant difference in the proportion of successful first placements of a device, with 33/35 (94.2%) first-time successes in the PLMA group and 32/35 (91.4%) in the TT group. We have inadequate information to draw conclusions about safety, and we can only comment on one design of SAD (the PLMA) in obese patients. We conclude that during routine and laparoscopic surgery, PLMAs may take a few seconds longer to insert, but this is unlikely to be a matter of clinical importance. A failure rate of 3% to 5% can be anticipated in obese patients. However, once fitted, PLMAs provide at least as good oxygenation, with the caveat that the leak fraction may increase, although in the included studies, this did not affect ventilation. We found significant improvement in oxygenation during and after surgery, indicating better pulmonary performance of the PLMA, and reduced postoperative coughing, suggesting better recovery for patients.
We searched the databases to September 2012 to find controlled trials that had randomly assigned obese participants (with body mass index (BMI) greater than 30 kg/m2) undergoing general anaesthesia to TT or SAD for airway management. We wanted to investigate the effect of airway type on risk of failed placement; serious complications and death; oxygenation of the blood during and after surgery; coughing, sore throat or hoarseness during or after placement; and time taken and number of attempts needed to fit the airway. We found two randomised studies with a total of 232 obese participants, both of which studied one model of SAD, the ProSeal laryngeal mask airway (PLMA). No relevant outcomes for death or other serious complications occurred in these studies. We found that in 3% to 5% of obese participants, it was not possible to fit a PLMA, and a change of device to a TT was required. The proportion of successful first attempts at airway placement did not differ between PLMA and TT, although it took approximately six seconds longer to place an SAD than a TT. We found a significant postoperative reduction of almost 75% in episodes of low oxygen saturation and an improvement in mean oxygen saturation of 2.5% during recovery in the PLMA group. Postoperative cough was less common among participants in the PLMA group. Our findings are consistent with both increased and decreased risks of both sore throat and hoarseness in the PLMA group. Identifying optimal anaesthetic techniques for obese patients is a priority for research. We could not establish the safety of SAD use in obese patients. Large databases created from medical records may be needed to clarify this issue.
A total of 14 trials were included in this review; results from 4970 patients were analysed. Four trials evaluating vitamin K antagonists (VKA) versus no VKA suggested that oral anticoagulation may favour autologous venous, but not artificial, graft patency, as well as limb salvage and survival. Two other studies comparing VKA with aspirin (ASA) or aspirin and dipyridamole provided evidence to support a positive effect of VKA on the patency of venous but not artificial grafts. Three trials comparing low molecular weight heparin (LMWH) to unfractionated heparin (UFH) failed to demonstrate a significant difference in patency. One trial comparing LMWH with placebo found no significant improvement in graft patency over the first postoperative year in a population receiving aspirin. One trial showed an advantage for LMWH versus aspirin and dipyridamole at one year for patients undergoing limb salvage procedures. Perioperative administration of ancrod showed no greater benefit when compared to unfractionated heparin. Dextran 70 showed graft patency rates similar to LMWH, but a significantly higher proportion of patients developed heart failure with dextran. Patients undergoing infrainguinal venous grafting are more likely to benefit from treatment with VKA than from platelet inhibitors. Patients receiving an artificial graft benefit from platelet inhibitors (aspirin). However, the evidence is not conclusive. Randomised controlled trials with larger patient numbers are needed in the future to compare antithrombotic therapies with either placebo or antiplatelet therapies.
Surgery to bypass the blockage uses either a piece of vein from another part of the person's body or a synthetic graft. The bypass may help improve blood supply to the leg, but the graft can also become blocked, even in the first year. To help prevent this, people are given aspirin (an antiplatelet drug) or a vitamin K antagonist (an anticoagulant, or antithrombotic, drug), or both, to try to keep blood flowing through the graft (maintain its patency). The review of trials found that patients undergoing venous grafts were more likely to benefit from treatment with vitamin K antagonists than from platelet inhibitors. Patients receiving an artificial graft may benefit from platelet inhibitors (aspirin). However, the evidence is not conclusive. Although a total of 14 randomised controlled trials involving 4970 patients were included in the review, trials with larger patient numbers are needed. This is because there was considerable variation between the included trials in whether patients received both types of drugs, in anticoagulation levels and how they were measured, and in the indications for surgery (intermittent claudication or critical limb ischaemia).
Fifteen trials involving 3781 participants were included. No studies reported the effect of meglitinides on mortality or morbidity. In the eleven studies comparing meglitinides to placebo, both repaglinide and nateglinide resulted in reductions in glycosylated haemoglobin (0.1% to 2.1% reduction in HbA1c for repaglinide; 0.2% to 0.6% for nateglinide). Only two trials compared repaglinide to nateglinide (342 participants), with a greater reduction in glycosylated haemoglobin in those receiving repaglinide. Repaglinide (248 participants in three trials) reduced glycosylated haemoglobin to a similar degree as metformin. Nateglinide had a similar or slightly less marked effect on glycosylated haemoglobin than metformin (one study, 355 participants). Weight gain was generally greater in those treated with meglitinides compared with metformin (up to three kg in three months). With meglitinides, diarrhoea occurred less frequently and hypoglycaemia more frequently, but rarely severely enough to require assistance. Meglitinides may offer an alternative oral hypoglycaemic agent of similar potency to metformin, and may be indicated where the side effects of metformin are intolerable or where metformin is contraindicated. However, there is no evidence available to indicate what effect meglitinides will have on important long-term outcomes, particularly mortality.
No studies reported the effect of meglitinides on mortality or diabetes-related complications. In the eleven studies comparing meglitinides to placebo, both repaglinide and nateglinide resulted in improved blood sugar control. Weight gain was generally greater in those treated with meglitinides compared with metformin (up to three kg in three months), another oral antidiabetic drug. With meglitinides, diarrhoea occurred less frequently and hypoglycaemia more frequently, but rarely severely enough to require assistance. Meglitinides may offer an alternative oral hypoglycaemic agent of similar potency to metformin, and may be indicated where the side effects of metformin are intolerable (in particular persistent diarrhoea) or where metformin is contraindicated. However, there is no evidence available yet to indicate what effect meglitinides will have on important long-term outcomes, in particular on mortality. As yet, experience with meglitinides in terms of side effects is limited. Results from other Cochrane review groups may provide additional information about the potential role of meglitinides in the management of type 2 diabetes mellitus.
We found six RCTs (1412 participants overall) conducted to evaluate the effects of therapeutic hypothermia - five on neurological outcome and survival, one on only neurological outcome. The quality of the included studies was generally moderate, and risk of bias was low in three out of six studies. When we compared conventional cooling methods versus no cooling (four trials; 437 participants), we found that participants in the conventional cooling group were more likely to reach a favourable neurological outcome (risk ratio (RR) 1.94, 95% confidence interval (CI) 1.18 to 3.21). The quality of the evidence was moderate. Across all studies that used conventional cooling methods rather than no cooling (three studies; 383 participants), we found a 30% survival benefit (RR 1.32, 95% CI 1.10 to 1.65). The quality of the evidence was moderate. Across all studies, the incidence of pneumonia (RR 1.15, 95% CI 1.02 to 1.30; two trials; 1205 participants) and hypokalaemia (RR 1.38, 95% CI 1.03 to 1.84; two trials; 975 participants) was slightly increased among participants receiving therapeutic hypothermia, and we observed no significant differences in reported adverse events between hypothermia and control groups. Overall the quality of the evidence was moderate (pneumonia) to low (hypokalaemia). Evidence of moderate quality suggests that conventional cooling methods provided to induce mild therapeutic hypothermia improve neurological outcome after cardiac arrest, specifically with better outcomes than occur with no temperature management. We obtained available evidence from studies in which the target temperature was 34°C or lower. This is consistent with current best medical practice as recommended by international resuscitation guidelines for hypothermia/targeted temperature management among survivors of cardiac arrest. 
We found insufficient evidence to show the effects of therapeutic hypothermia on participants with in-hospital cardiac arrest, asystole or non-cardiac causes of arrest.
We included in our analysis six studies (1412 people overall), four of which (437 people) examined effects of cooling the body by conventional methods after successful resuscitation for cardiac arrest. One study that used haemofiltration (cooling the blood externally - similar to dialysis) as the cooling method and one study in which cooling to 33°C was compared with temperature management at 36°C were treated separately in the review. The study that used external cooling was supported by a dialysis-related company. Of the five studies included in the main analysis, two received funding from government or non-profit organizations; three studies did not provide information on funding. When we compared people whose bodies were cooled to 32°C to 34°C after resuscitation versus those whose bodies were not cooled at all, we found that 63% of those receiving cooling would suffer no, or only minor, brain damage, while only 33% of those not cooled would suffer no, or only minor, brain damage. Cooling had an important effect on simple survival, with or without brain damage: 57% would survive if their bodies were cooled compared with 42% if their bodies were not cooled at all. No serious side effects occurred, but cooling the body was associated with increased risk of pneumonia (49% vs 42% of those studied) and increased risk of low concentrations of potassium in the blood (18% vs 13%). Some studies had quality shortcomings including small numbers of participants and use of inadequate methods to balance participants between intervention and control groups. However, when differences between studies are acknowledged (heterogeneity), it is clear that these shortcomings had no major impact on the main results.
We included eight parallel-design RCTs, involving a total of 640 participants. We did not assess any of the studies as being at low risk of bias across all domains, the main limitation being lack of blinding. Using GRADE methodology, the quality of the evidence ranged from very low to low. Long-term GnRH agonist therapy versus no pretreatment: we are uncertain whether long-term GnRH agonist therapy affects the live birth rate (RR 0.48, 95% CI 0.26 to 0.87; 1 RCT, n = 147; I2 not calculable; very low-quality evidence) or the overall complication rate (Peto OR 1.23, 95% CI 0.37 to 4.14; 3 RCTs, n = 318; I2 = 73%; very low-quality evidence) compared to standard IVF/ICSI. Further, we are uncertain whether this intervention affects the clinical pregnancy rate (RR 1.13, 95% CI 0.91 to 1.41; 6 RCTs, n = 552; I2 = 66%; very low-quality evidence), multiple pregnancy rate (Peto OR 0.14, 95% CI 0.03 to 0.56; 2 RCTs, n = 208; I2 = 0%; very low-quality evidence), miscarriage rate (Peto OR 0.45, 95% CI 0.10 to 2.00; 2 RCTs, n = 208; I2 = 0%; very low-quality evidence), mean number of oocytes (MD 0.72, 95% CI 0.06 to 1.38; 4 RCTs, n = 385; I2 = 81%; very low-quality evidence) or mean number of embryos (MD -0.76, 95% CI -1.33 to -0.19; 2 RCTs, n = 267; I2 = 0%; very low-quality evidence). Long-term GnRH agonist therapy versus long-term continuous COC: no studies reported on this comparison. Long-term GnRH agonist therapy versus surgical therapy of endometrioma: no studies reported on this comparison. This review raises important questions regarding the merit of long-term GnRH agonist therapy compared to no pretreatment prior to standard IVF/ICSI in women with endometriosis. Contrary to previous findings, we are uncertain whether long-term GnRH agonist therapy has an impact on the live birth rate or indeed the complication rate compared to standard IVF/ICSI.
Further, we are uncertain whether this intervention impacts on the clinical pregnancy rate, multiple pregnancy rate, miscarriage rate, mean number of oocytes and mean number of embryos. In light of the paucity and very low quality of existing data, particularly for the primary outcomes examined, further high-quality trials are required to definitively determine the impact of long-term GnRH agonist therapy on IVF/ICSI outcomes, not only compared to no pretreatment, but also compared to other proposed alternatives to endometriosis management.
We found eight randomised controlled trials comparing long-term GnRH agonist therapy with no pretreatment, including a total of 640 women with endometriosis prior to IVF/ICSI. The evidence is current to January 2019. Compared to no pretreatment, we are uncertain whether long-term GnRH agonist therapy prior to IVF/ICSI in women with endometriosis affects the live birth rate. The evidence suggests that if the chance of live birth is assumed to be 36% with no pretreatment, the chance following long-term GnRH agonist therapy would be between 9% and 31%. We are also uncertain whether this intervention affects the complication rate, clinical pregnancy rate, multiple pregnancy rate, miscarriage rate, mean number of oocytes or mean number of embryos. No studies compared long-term GnRH agonist therapy to long-term continuous COC therapy or to surgery to remove endometriomas. The evidence was of very low quality. The main limitations in the evidence were lack of blinding (the process whereby the women participating in the trial, as well as the research staff, are not aware of the intervention used), inconsistency (differences between studies) and imprecision (random error and the small size of each study).
Fifteen trials including a total of 1219 participants met the inclusion criteria. No trial to date has measured the effect of chitosan on mortality or morbidity. Analyses indicated that chitosan preparations result in a significantly greater weight loss (weighted mean difference -1.7 kg; 95% confidence interval (CI) -2.1 to -1.3 kg, P < 0.00001), decrease in total cholesterol (-0.2 mmol/L [95% CI -0.3 to -0.1], P < 0.00001), and a decrease in systolic and diastolic blood pressure compared with placebo. There were no clear differences between intervention and control groups in terms of frequency of adverse events or in faecal fat excretion. However, the quality of many studies was sub-optimal and analyses restricted to studies that met allocation concealment criteria, were larger, or of longer duration showed that such trials produced substantially smaller decreases in weight and total cholesterol. There is some evidence that chitosan is more effective than placebo in the short-term treatment of overweight and obesity. However, many trials to date have been of poor quality and results have been variable. Results obtained from high quality trials indicate that the effect of chitosan on body weight is minimal and unlikely to be of clinical significance.
Fifteen studies, lasting between 4 and 24 weeks and including a total of 1219 participants, were analysed. Trials of chitosan to date have varied considerably in quality. The review suggests that chitosan may have a small effect on body weight, but results from high-quality trials indicate that this effect is likely to be minimal.
Six studies were included, with a total of 440 participants. The interventions examined were cognitive therapy (CT), behavioural therapy (BT), cognitive behavioural therapy (CBT), behavioural stress management (BSM) and psychoeducation. All forms of psychotherapy except psychoeducation showed a significant improvement in hypochondriacal symptoms compared to waiting list control (SMD (random) [95% CI] = -0.86 [-1.25 to -0.46]). For some therapies, significant improvements were found in the secondary outcomes of general functioning (CBT), resource use (psychoeducation), anxiety (CT, BSM), depression (CT, BSM) and physical symptoms (CBT). These secondary outcome findings were based on smaller numbers of participants, and there was significant heterogeneity between studies. Cognitive therapy, behavioural therapy, cognitive behavioural therapy and behavioural stress management are effective in reducing symptoms of hypochondriasis. However, the studies included in the review used small numbers of participants and did not allow estimation of effect size, comparison between different types of psychotherapy, or assessment of whether people are "cured". Most long-term outcome data were uncontrolled. Further studies should make use of validated rating scales, assess treatment acceptability and effect on resource use, and determine the active ingredients and nonspecific factors that are important in psychotherapy for hypochondriasis.
The objective of this review was to assess whether any form of psychotherapy is effective in the management of people suffering from hypochondriasis. Six studies were included in the review. Analysis of data suggested that, compared to being on a waiting list, forms of cognitive and behaviour therapy, or a non-specific therapy called behavioural stress management all improve the symptoms of hypochondriasis. However, the numbers of people in the studies were small and it was not possible to tell how much of an improvement each therapy made. It is possible that the improvements seen were due to non-specific factors involved in regular contact with a therapist rather than specific properties of these forms of psychotherapy. It was also not possible to make comparisons between the different types of psychotherapy. A study of psychoeducation was not considered to be sufficient evidence that this form of psychotherapy is effective.
This review identified three studies (from four reports) involving a total of 22 children that investigated the efficacy of NSOMT as adjunctive treatment to conventional speech intervention versus conventional speech intervention for children with speech sound disorders. One study, a randomised controlled trial (RCT), included four boys aged seven years one month to nine years six months - all had speech sound disorders, and two had additional conditions (one was diagnosed as "communication impaired" and the other as "multiply disabled"). Of the two quasi-randomised controlled trials, one included 10 children (six boys and four girls), aged five years eight months to six years nine months, with speech sound disorders as a result of tongue thrust, and the other study included eight children (four boys and four girls), aged three to six years, with moderate to severe articulation disorder only. Two studies did not find NSOMT as adjunctive treatment to be more effective than conventional speech intervention alone, as both intervention and control groups made similar improvements in articulation after receiving treatments. One study reported a change in postintervention articulation test results but used an inappropriate statistical test and did not report the results clearly. None of the included studies examined the effects of NSOMTs on any other primary outcomes, such as speech intelligibility, speech physiology and adverse effects, or on any of the secondary outcomes such as listener acceptability. The RCT was judged at low risk for selection bias. The two quasi-randomised trials used randomisation but did not report the method for generating the random sequence and were judged as having unclear risk of selection bias. The three included studies were deemed to have high risk of performance bias as, given the nature of the intervention, blinding of participants was not possible. 
Only one study implemented blinding of outcome assessment and was at low risk for detection bias. One study showed high risk of other bias, as the baseline characteristics of participants seemed to be unequal. The sample size of each of the included studies was very small, which means it is highly likely that the participants in these studies were not representative of the target population. In the light of these serious limitations in methodology, the overall quality of the evidence provided by the included trials is judged to be low. Therefore, further research is very likely to have an important impact on our confidence in the estimate of treatment effect and is likely to change the estimate. The three included studies were small in scale and had a number of serious methodological limitations. In addition, they covered limited types of NSOMTs for treating children with speech sound disorders of unknown origin affecting the sounds /s/ and /z/. Hence, we judged the overall applicability of the evidence as limited and incomplete. Results of this review are consistent with those of previous reviews: currently, no strong evidence suggests that NSOMTs are an effective treatment or an effective adjunctive treatment for children with developmental speech sound disorders. The lack of strong evidence regarding the treatment efficacy of NSOMTs has implications for clinicians when they make decisions about treatment plans. Well-designed research is needed to carefully investigate NSOMT as a type of treatment for children with speech sound disorders.
The evidence is current to April 2014. We found three studies (from four reports) involving a total of 22 children aged three to nine years who received a combination of NSOMTs and articulation or phonological therapy (intervention group), or articulation or phonological therapy alone (control group). One study was a randomised controlled trial in which four boys with speech sound disorders were randomly assigned to one of the two groups. In this study, each participant received 16 × 30-minute individual therapy sessions, twice per week over eight weeks, to treat the speech sound 's'. For the intervention group, NSOMT (oral placement therapy) was conducted in the first 10 minutes of each session, followed by 20-minute articulation therapy. The other two studies used randomisation, but the method used to generate the random sequence was not reported. In one of these studies, six boys and four girls, all with speech sound disorders due to tongue thrust, were randomly assigned to one of the two groups. Each participant received 22 × 30-minute individual sessions conducted weekly in the first six weeks, and twice a week in the following eight weeks, to treat 's' and 'z' sounds. The intervention group received NSOMT (Hanson's 1977 approach) in the first six weeks and alternating sessions of NSOMT and articulation therapy in the following eight weeks. The final study randomly assigned four boys and four girls with moderate to severe articulation disorder alone to either intervention group or control group. Each participant received 9 × 20-minute group therapy sessions (two participants in each group), conducted twice a week over five weeks. For the intervention group, NSOMT (oral motor exercises for speech clarity) was conducted during the first 10 minutes of each session. Speech errors associated with the 's' sound were treated for the intervention group; however, the speech sound(s) treated for the control group were not detailed. 
None of the studies reported funding support. Two studies (one that used oral placement therapy and one that used Hanson's 1977 approach) did not find NSOMT as an adjunctive treatment to be more effective than conventional speech intervention only, as both intervention and control groups had made similar improvements in articulation after treatment (i.e. fewer speech errors or increased percentage of correct articulation). The study that used oral motor exercises for speech clarity as the NSOMT reported a change in articulation test results after treatment, but used an inappropriate statistical test and did not report the results clearly. The three included studies were small in scale and had a number of serious methodological limitations. Moreover, these studies covered limited types of NSOMTs for treating just one class of speech sounds - 's' with or without 'z' - in children with speech sound disorders. Hence, the overall applicability of the evidence is limited, and the evidence is believed to be incomplete and of low quality. To conclude, currently no strong evidence indicates whether NSOMTs are effective as treatment or adjunctive treatment for children with developmental speech sound disorders.
Four RCTs, 13 prospective cohort studies, and one unpublished ongoing cohort study met our inclusion criteria, including a total of 2098 participants with TBM. None of the included RCTs directly compared six months versus longer regimens, so we analysed all data as individual cohorts to obtain relapse rates in each set of cohorts. We included 20 cohorts reported in 18 studies. One of these was reported separately, leaving 19 cohorts in the main analysis. We included seven cohorts of participants treated for six months, with a total of 458 participants. Three studies were conducted in Thailand, two in South Africa, and one each in Ecuador and Papua New Guinea, between the 1980s and 2009. We included 12 cohorts of participants treated for longer than six months (ranging from eight to 16 months), with a total of 1423 participants. Four studies were conducted in India, three in Thailand, and one each in China, South Africa, Romania, Turkey, and Vietnam, between the late 1970s and 2011. The unpublished ongoing cohort study is being conducted in India and includes 217 participants. The proportion of participants classified as having stage III (severe) disease was higher in the cohorts treated for six months (33.2% versus 16.9%), but the proportion with known concurrent HIV was higher in the cohorts treated for longer (0/458 versus 122/1423). Although there were variations in the treatment regimens, most cohorts received isoniazid, rifampicin, and pyrazinamide during the intensive phase. Investigators achieved follow-up beyond 18 months after completing treatment in three of the seven cohorts treated for six months, and in five of the 12 cohorts treated for eight to 16 months. All studies had potential sources of bias in their estimation of the relapse rate, and comparisons between the cohorts could be confounded.
Relapse was an uncommon event across both groups of cohorts (3/369 (0.8%) with six months of treatment versus 7/915 (0.8%) with longer treatment), with only one death attributed to relapse in each group. Overall, the proportion of participants who died was higher in the cohorts treated for longer than six months (447/1423 (31.4%) versus 58/458 (12.7%)). However, most deaths occurred during the first six months in both treatment cohorts, which suggested that the difference in death rate was not directly related to duration of ATT but was due to confounding. Clinical cure was higher in the group of cohorts treated for six months (408/458; 89.1%) than in those treated for longer than six months (984/1336; 73.7%), consistent with the observations for deaths. Few participants defaulted from six months of treatment (4/370 (1.1%)) versus longer treatment (8/355 (2.3%)), and adherence was not well reported. In all cohorts most deaths occurred in the first six months, and relapse was uncommon in all participants irrespective of the regimen. Further inferences are probably inappropriate given that these are observational data and confounding is likely. These data are almost all from participants who were HIV-negative, and thus the inferences will not apply to the efficacy and safety of six-month regimens in HIV-positive people. Well-designed RCTs, or large prospective cohort studies, comparing six months with longer treatment regimens, with long follow-up periods established at initiation of ATT, are needed to resolve the uncertainty regarding the safety and efficacy of six-month regimens for TBM.
This Cochrane review assessed the effects of six-month regimens for treating people with TBM compared with longer regimens. Cochrane researchers examined the available evidence up to 31 March 2016 and included 18 studies. None of the included studies directly compared people with TBM treated for six months with people with TBM treated for longer. Two of the included studies analysed two groups of participants each, treated for six months and for longer than six months. Therefore, the review authors included information from seven groups of people treated for six months (458 people), 12 groups of people treated for longer than six months (1423 people), and one ongoing study of 217 people, which was analysed separately due to methodological concerns. Although the treatment regimens in the included studies varied, most participants received standard first-line antituberculous drugs and were followed up for more than a year after the end of treatment. The studies included adults and children with TBM, but few participants were HIV-positive. Relapse was an uncommon event across both groups of studies, with only one death attributed to relapse in each group. Most deaths occurred during the first six months of treatment in both groups of studies, which showed that treatment duration did not have a direct impact on the risk of death in these studies. There was a higher death rate in participants treated for longer than six months, and this probably reflects differences between the participants in the two groups of studies. Few participants defaulted from treatment, and adherence was not clearly documented. The review authors found no evidence of high relapse rates in people treated for six months, and relapse was uncommon in all participants irrespective of regimen. There may be differences between the participants treated for six months and longer than six months that could have led to bias (confounding factors), so further research would help determine whether shorter regimens are safe.
Most of the data were in patients without HIV, and so these inferences do not apply to patients who are HIV-positive.
Fifteen trials involving 2022 people were included. Compared to placebo, haloperidol was more effective at reducing manic symptoms, both as monotherapy (weighted mean difference (WMD) -5.85, 95% confidence interval (CI) -7.69 to -4.00) and as adjunctive treatment to lithium or valproate (WMD -5.20, 95% CI -9.26 to -1.14). There was a statistically significant difference, with haloperidol being less effective than aripiprazole (relative risk (RR) 1.45, 95% CI 1.22 to 1.73). No significant differences between haloperidol and risperidone, olanzapine, carbamazepine or valproate were found. Compared with placebo, a statistically significant difference in favour of haloperidol in failure to complete treatment (RR 0.74, 95% CI 0.57 to 0.96) was reported. Haloperidol was associated with less weight gain than olanzapine (RR 0.28, 95% CI 0.12 to 0.67), but with a higher incidence of tremor (RR 3.01, 95% CI 1.55 to 5.84) and other movement disorders. There is some evidence that haloperidol is an effective treatment for acute mania. From the limited data available, there was no difference in overall efficacy between haloperidol and olanzapine or risperidone. Some evidence suggests that haloperidol could be less effective than aripiprazole. With regard to tolerability, given the poor quality of the evidence comparing drugs, clinicians and patients should weigh the differing side effect profiles as an important consideration when choosing a treatment.
Fifteen trials met the inclusion criteria and are included in the review. Interpretation of the results was hindered by the small total sample size and by the low quality of reporting of the included trials. There was some evidence that haloperidol was more efficacious than placebo in terms of reduction of manic and psychotic symptom scores, when used both as monotherapy and as add-on treatment to lithium or valproate. There was no evidence of a difference in efficacy between haloperidol and risperidone, olanzapine, valproate, carbamazepine, sultopride and zuclopenthixol. There was a statistically significant difference, with haloperidol probably being less effective than aripiprazole. No comparative efficacy data with quetiapine, lithium or chlorpromazine were reported. Haloperidol caused more extrapyramidal symptoms (EPS) than placebo, and more movement disorders and EPS but less weight gain than olanzapine. Haloperidol caused more EPS than valproate, but no difference was found between haloperidol and lithium, carbamazepine, sultopride and risperidone in terms of side effect profile.
The five included trials led to the following results: 1. There was no significant improvement in agitation among haloperidol-treated patients compared with controls. 2. Aggression decreased among patients with agitated dementia treated with haloperidol; other aspects of agitation were not affected significantly in treated patients compared with controls. 3. Although two studies showed increased drop-outs due to adverse effects among haloperidol patients, there was no significant difference in drop-out rates when comparing all haloperidol-treated patients with controls. 4. The data were insufficient to examine response to treatment in relation to length of treatment, degree of dementia, age or sex of patients, and cause of dementia. The authors drew the following conclusions: 1. Evidence suggests that haloperidol was useful in reducing aggression but was associated with adverse effects; there was no evidence to support the routine use of this drug for other manifestations of agitation in dementia. 2. Similar drop-out rates among haloperidol- and placebo-treated patients suggested that poorly controlled symptoms, or other factors, may be important in causing treatment discontinuation. 3. Variations in degree of dementia, dosage and length of haloperidol treatment, and in ways of assessing response to treatment suggested caution in the interpretation of reported effects of haloperidol in the management of agitation in dementia. 4. This review confirmed that haloperidol should not be used routinely to treat patients with agitated dementia. Treatment of agitated dementia with haloperidol should be individualized, and patients should be monitored for adverse effects of therapy.
In the present study haloperidol treatment was associated with a lower degree of aggression than was placebo. Adverse effects occurred more frequently in haloperidol treated patients than controls, but similar drop-out rates among treated and control patients suggested that for some patients adverse effects may have been tolerated because of better control of behaviour. Our findings indicated that there is little evidence to support a benefit of haloperidol on manifestations of agitation other than aggression, and that haloperidol should not be used routinely to treat patients with agitated dementia. Treatment of agitated dementia should be individualized, with careful monitoring of benefits and adverse effects.
Five randomised trials and four controlled before and after studies were included. The interventions were complex. Five studies added an additional component, or linked a new component, to an existing service, for example, adding family planning or HIV counselling and testing to routine services. The evidence from these studies indicated that adding on services probably increases service utilisation but probably does not improve health status outcomes, such as incident pregnancies. Four studies compared integrated services to single, special services. Based on the included studies, fully integrating sexually transmitted infection (STI) and family planning, and maternal and child health services into routine care as opposed to delivering them as special 'vertical' services may decrease utilisation, client knowledge of and satisfaction with the services and may not result in any difference in health outcomes, such as child survival. Integrating HIV prevention and control at facility and community level improved the effectiveness of certain services (STI treatment in males) but resulted in no difference in health seeking behaviour, STI incidence, or HIV incidence in the population. There is some evidence that 'adding on' services (or linkages) may improve the utilisation and outputs of healthcare delivery. However, there is no evidence to date that a fuller form of integration improves healthcare delivery or health status. Available evidence suggests that full integration probably decreases the knowledge and utilisation of specific services and may not result in any improvements in health status. More rigorous studies of different strategies to promote integration over a wider range of services and settings are needed. These studies should include economic evaluation and the views of clients as clients' views will influence the uptake of integration strategies at the point of delivery and the effectiveness on community health of these strategies.
This updated review included nine studies that evaluated integrated care or linkages in care. The studies made two types of comparison. 1) Integration of care, by adding a service to an existing service (tuberculosis (TB) or sexually transmitted infection (STI) patients were offered HIV testing and counselling; mothers attending an immunisation clinic were encouraged to use family planning services). 2) Integrated services versus single, special services (family planning, maternal and child health delivered as a special vertical programme or integrated into routine healthcare delivery). There was some evidence from the included studies that adding on services or creating linkages to an existing service improved its use and the delivery of health care, but little or no evidence that fuller integration of primary healthcare services improved people's health status in low- or middle-income countries. People should be aware that integration may not improve service delivery or health status. If policy makers and planners consider integrating healthcare services, they should monitor and evaluate them using good study designs. A summary of this review for policy-makers is available.
We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as at low risk of bias and nine trials as at high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56, 95% CI 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effect of BCAA on nausea or vomiting in a random-effects meta-analysis, or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses.
In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
We identified 16 randomised clinical trials (trials where participants are randomly allocated to treatment groups) including 827 participants. The included people had cirrhosis often due to alcoholic liver disease or viral hepatitis (liver infection due to a virus). The trials compared BCAA with placebo (a pretend treatment), no intervention, diets, lactulose (a liquid sugar often used to treat constipation), or neomycin (an antibiotic). The evidence is current to October 2014. The analyses found no effect on mortality, but that BCAA had a beneficial effect on symptoms and signs of hepatic encephalopathy. BCAA did not increase the risk of serious adverse events, but was associated with nausea and diarrhoea. When excluding trials on lactulose or neomycin, BCAA had a beneficial effect on hepatic encephalopathy. When analysing trials with a lactulose or neomycin control, we found no beneficial or detrimental effect of BCAA. We assessed the quality of the evidence to evaluate aspects that can lead to errors in the judgment of intervention effects. We concluded that we had high quality evidence in our analyses about the effect of BCAA on hepatic encephalopathy. We concluded that we had moderate or low quality evidence in the remaining analyses because the number of participants in the trials was too small and the risk of bias (systematic errors) was unclear or high.
We included four trials in this review and did not identify new studies from the search in April 2015. Home-based end-of-life care increased the likelihood of dying at home compared with usual care (risk ratio (RR) 1.33, 95% confidence interval (CI) 1.14 to 1.55, P = 0.0002; Chi² = 1.72, df = 2, P = 0.42, I² = 0%; 3 trials; N = 652; high-quality evidence). Admission to hospital while receiving home-based end-of-life care varied between trials, and this was reflected by a high level of statistical heterogeneity in this analysis (range RR 0.62 to RR 2.61; 4 trials; N = 823; moderate-quality evidence). Home-based end-of-life care may slightly improve patient satisfaction at one-month follow-up and reduce it at six-month follow-up (2 trials; low-quality evidence). The effect on caregivers is uncertain (2 trials; low-quality evidence). The intervention may slightly reduce healthcare costs (2 trials; low-quality evidence). No trial reported costs to patients and caregivers. The evidence included in this review supports the use of home-based end-of-life care programmes for increasing the number of people who will die at home, although the number of people admitted to hospital while receiving end-of-life care should be monitored. Future research should systematically assess the impact of home-based end-of-life care on caregivers.
We searched the literature until April 2015 and found no new trials for this update. We found four trials for the previous updates. We included four trials in our review and report that people receiving end-of-life care at home are more likely to die at home. It is unclear whether home-based end-of-life care increases or decreases the probability of being admitted to hospital. Admission to hospital while receiving home-based end-of-life care varied between trials. People who receive end-of-life care at home may be slightly more satisfied after one month and less satisfied after six months. It is unclear whether home-based end-of-life care reduces or increases caregiver burden. Healthcare costs are uncertain, and no data on costs to participants and their families were reported. People who receive end-of-life care at home are more likely to die at home. There were few data on the impact of home-based end-of-life services on family members and lay caregivers.
Twenty-six trials met the inclusion criteria. The number, site and dosage of injections varied widely between studies. The number of participants per trial ranged from 20 to 114 (median 52 participants). Methodological quality was variable. For rotator cuff disease, subacromial steroid injection demonstrated a small benefit over placebo in some trials; however, no benefit of subacromial steroid injection over NSAIDs was demonstrated in the pooled results of three trials. For adhesive capsulitis, two trials suggested a possible early benefit of intra-articular steroid injection over placebo, but there were insufficient data for pooling of any of the trials. One trial suggested a short-term benefit of intra-articular corticosteroid injection over physiotherapy (success at seven weeks RR 1.66, 95% CI 1.21 to 2.28). Despite many RCTs of corticosteroid injections for shoulder pain, their small sample sizes, variable methodological quality and heterogeneity mean that there is little overall evidence to guide treatment. Subacromial corticosteroid injection for rotator cuff disease and intra-articular injection for adhesive capsulitis may be beneficial, although their effect may be small and not well maintained. There is a need for further trials investigating the efficacy of corticosteroid injections for shoulder pain. Other important issues that remain to be clarified include whether the accuracy of needle placement, anatomical site, frequency, dose and type of corticosteroid influence efficacy.
The available evidence from randomized controlled trials supports the use of subacromial corticosteroid injection for rotator cuff disease, although its effect may be small and short-lived, and it may be no better than non-steroidal anti-inflammatory drugs. Similarly, intra-articular steroid injection may be of limited, short-term benefit for adhesive capsulitis. Further trials investigating the efficacy of corticosteroid injections for shoulder pain are needed. Important issues that need clarification include whether the accuracy of needle placement, anatomical site, frequency, dose and type of corticosteroid influence efficacy.
Our update search identified 465 citations, which we assessed for eligibility. Three new studies met the criteria for inclusion, giving a total of 14 included studies (n = 3370). The definition of partner varied among the studies. We compared partner support versus control interventions at six- to nine-month follow-up and at 12 or more months follow-up. We also examined outcomes among three subgroups: interventions targeting relatives, friends or coworkers; interventions targeting spouses or cohabiting partners; and interventions targeting fellow cessation programme participants. All studies gave self-reported smoking cessation rates, with limited biochemical verification of abstinence. The pooled risk ratio (RR) for abstinence was 0.97 (95% confidence interval (CI) 0.83 to 1.14; 12 studies; 2818 participants) at six to nine months, and 1.04 (95% CI 0.88 to 1.22; 7 studies; 2573 participants) at 12 months or more post-treatment. Of the 11 studies that measured partner support at follow-up, only two reported a significant increase in partner support in the intervention groups. In one of these two studies, partner support increased significantly in the intervention group, but smokers' reports of the support they received did not differ significantly. We judged one of the included studies to be at high risk of selection bias, but a sensitivity analysis suggests that this did not have an impact on the results. There were also potential issues with detection bias due to a lack of validation of abstinence in five of the 14 studies; however, this is not apparent in the statistically homogeneous results across studies. Using the GRADE system we rated the overall quality of the evidence for the two primary outcomes as low. We downgraded due to the risk of bias, as we judged studies with a high weighting in analyses to be at a high risk of detection bias. In addition, one study included in both analyses was inadequately randomised.
We also downgraded the quality of the evidence for indirectness, as very few studies provided any evidence that the interventions tested actually increased the amount of partner support received by participants in the relevant intervention group. Interventions that aim to enhance partner support appear to have no impact on long-term abstinence from smoking. However, most trials that measured partner support found no evidence that the interventions achieved their aim of increasing support from partners for smoking cessation. Future research should therefore focus on developing behavioural interventions that actually increase partner support, and test these in small-scale studies, before large trials assessing the impact on smoking cessation can be justified.
This is an update of previous reviews. We searched for studies published up to April 2018, and found three new studies that we could include, giving a total of 14 studies with 3370 participants. Studies had to be randomised controlled trials that recruited smokers trying to quit, and measured whether participants had quit smoking at least six months after the beginning of the study. The study had to include at least one group who were part of a stop-smoking programme to increase partner support, and at least one group who received a comparable stop-smoking programme without partner support. Most of the studies were conducted in the USA. At recruitment, the average number of cigarettes smoked per day ranged from 13 to 29 across studies. The smoking status of partners providing support varied, but most were non-smokers. Intervention techniques ranged from low to high intensity; in some cases support was delivered via a self-help booklet and in others through face-to-face counselling. In some studies researchers did not make direct contact with 'partners' and the smokers themselves were encouraged to find a 'buddy', but in other studies both the smoker and their 'buddy' received face-to-face support. We combined 12 studies (2818 participants) to measure successful quitting at six to nine months follow-up, and seven studies (2573 participants) to measure quitting at 12-month follow-up. Partner support did not increase the chances of stopping smoking at either time point. We also split the studies in each analysis based on the type of partner giving support (relatives/friends/co-workers versus spouses/cohabiting partners versus fellow cessation-programme participants). There was no difference in quit rates between study groups, regardless of the type of partner providing the support. Only one study reported that partner support improved more in the group given the partner-support intervention than in the group where no partner-support intervention was provided.
Another study reported that partner support improved more in a more intensive partner-support intervention than a less intensive partner-support intervention. We rated the overall quality of the evidence as low. This is because there were problems with the design of some of the studies. A number of important studies only used participant self-report to measure if people had quit smoking, and there is a chance that these reports may have been inaccurate. Also, very few studies found that the intervention actually increased the level of partner support that participants received. This review therefore cannot tell us whether receiving more support from a partner can help a person to give up smoking.
Eight trials are included in this review; seven used methylprednisolone. Methylprednisolone sodium succinate has been shown to improve neurologic outcome up to one year post-injury if administered within eight hours of injury in a dose regimen of: bolus 30 mg/kg over 15 minutes, with a maintenance infusion of 5.4 mg/kg per hour for 23 hours. The initial North American trial results were replicated in a Japanese trial but not in the one from France. Data were obtained from the latter studies to permit appropriate meta-analysis of all three trials. This indicated significant recovery in motor function after methylprednisolone therapy when administration commenced within eight hours of injury. A more recent trial indicates that, if methylprednisolone therapy is given for an additional 24 hours (a total of 48 hours), additional improvement in motor neurologic function and functional status is observed, particularly if treatment cannot be started until between three and eight hours after injury. The same methylprednisolone therapy has been found effective in whiplash injuries, and a modified regimen was found to improve recovery after surgery for lumbar disc disease. The risk of bias was low in the largest methylprednisolone trials. Overall, there was no evidence of significantly increased complications or mortality from the 23- or 48-hour therapy. High-dose methylprednisolone steroid therapy is the only pharmacologic therapy shown to have efficacy in a phase three randomized trial when administered within eight hours of injury. One trial indicates additional benefit from extending the maintenance dose from 24 to 48 hours if the start of treatment must be delayed to between three and eight hours after injury. There is an urgent need for more randomized trials of pharmacologic therapy for acute spinal cord injury.
The review looked for studies that examined the effectiveness of this treatment in improving movement and reducing the death rate. Nearly all the research (seven trials) has involved just one steroid, methylprednisolone. The results show that treatment with this steroid does improve movement, but it must start soon after the injury has happened, within no more than eight hours, and should be continued for 24 to 48 hours. Different dose rates of the drug have been given, and the so-called high-dose rate is the most effective. The treatment does not, however, restore a normal amount of movement, and more research is necessary with steroids, possibly combining them with other drugs.
Five RCTs with a total of 292 babies were included in the review. Comparisons made within the RCTs were squeezable versus rigid feeding bottles (two studies), breastfeeding versus spoon-feeding (one study), and maxillary plate versus no plate (two studies). No statistically significant differences were shown for any of the primary outcomes when comparing bottle types, although squeezable bottles were less likely to require modification. No difference was shown for infants fitted with a maxillary plate compared to no plate. However, there was some evidence of an effect on weight at six weeks post-surgery in favour of breastfeeding when compared to spoon-feeding (mean difference 0.47; 95% confidence interval 0.20 to 0.74). Squeezable bottles appear easier to use than rigid feeding bottles for babies born with clefts of the lip and/or palate; however, there is no evidence of a difference in growth outcomes between the bottle types. There is weak evidence that breastfeeding is better than spoon-feeding following surgery for cleft. There was no evidence to suggest that maxillary plates assist growth in babies with clefts of the palate. No evidence was found to assess the use of any types of maternal advice and/or support for these babies.
Maternal advice and support on feeding techniques and breastfeeding positions are often provided, but no studies evaluated the effectiveness of this intervention. Squeezable rather than rigid bottles may be easier to use for feeding babies with cleft lip and/or palate, and breastfeeding may have growth advantages over spoon-feeding following cleft lip surgery. Only five studies (including 292 babies) compared the effects of feeding interventions in babies with cleft lip and/or palate on growth, development or parental satisfaction. Evidence for breastfeeding rather than spoon-feeding following surgery was weak and there was a suggestion that squeezable bottles may be more manageable than rigid ones. No evidence was found to support the use of maxillary plates in babies with unilateral clefts and no studies assessed the effects of maternal advice or support. Further research is required to assess the most effective feeding interventions to prevent developmental delays in infants with cleft lip and/or palate.