We included seven studies involving 2492 participants. Peer support telephone calls were associated with an increase in mammography screening: 49% of women in the intervention group versus 34% in the control group received a mammogram after the start of the intervention (P ≤ 0.001). In another study, peer telephone support calls maintained mammography screening uptake among women who were adherent at baseline (P = 0.029). For post-myocardial infarction patients, peer support telephone calls were associated with dietary change at six months in 54% of the intervention group versus 44% of the usual care group (P = 0.03). In another study of post-myocardial infarction patients, there were no significant differences between groups for self-efficacy, health status, or mental health outcomes. Peer support telephone calls were associated with greater continuation of breastfeeding in mothers at three months postpartum (P = 0.01). Peer support telephone calls were also associated with reduced depressive symptoms in mothers with postnatal depression (Edinburgh Postnatal Depression Scale (EPDS) > 12): the peer support intervention significantly decreased depressive symptomatology at the 4-week assessment (odds ratio (OR) 6.23, 95% confidence interval (CI) 1.15 to 33.77; P = 0.02) and the 8-week assessment (OR 6.23, 95% CI 1.40 to 27.84; P = 0.01). One study investigated the use of peer support for patients with poorly controlled diabetes; there were no significant differences between groups for self-efficacy, HbA1c, cholesterol level, or body mass index. Whilst this review provides some evidence that peer support telephone calls can be effective for certain health-related concerns, few of the studies were of high quality, so results should be interpreted cautiously. The many methodological limitations also restrict the generalisability of the findings.
Overall, there is a need for further well-designed randomised controlled studies to clarify the cost and clinical effectiveness of peer support telephone calls for improving health and health-related behaviour.
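As a purely illustrative calculation (the review itself reports only the percentages above, not a number needed to treat), the mammography screening proportions can be converted into an approximate absolute risk difference and number needed to treat:

```python
# Illustrative arithmetic only, using the screening percentages reported above
# (49% screened with peer support vs 34% with usual care). The review does not
# report an absolute risk difference or number needed to treat (NNT).

def absolute_risk_difference(p_intervention: float, p_control: float) -> float:
    """Absolute difference in event proportions between the two groups."""
    return p_intervention - p_control

def number_needed_to_treat(arr: float) -> float:
    """NNT = 1 / absolute risk difference (conventionally rounded up)."""
    return 1.0 / arr

arr = absolute_risk_difference(0.49, 0.34)   # 15 percentage points
nnt = number_needed_to_treat(arr)            # about 6.7, i.e. roughly 7 women
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")
```

On these figures, roughly seven women would need to receive peer support telephone calls for one additional woman to be screened, under the strong assumption that the pooled percentages generalise.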
Seven randomised controlled trials conducted in the USA, UK, Canada and Australia covered a range of conditions and target populations, and provided some evidence of efficacy. Peer support telephone calls may increase mammography screening in women over 40 years, may help patients change their diet and stop smoking after a heart attack, and may help reduce depressive symptoms among mothers with postnatal depression. Findings need to be interpreted cautiously. There is a need for well-designed randomised controlled studies to clarify which elements of peer telephone interventions work best to improve health and health-related behaviour.
We found no randomised controlled trials on this topic. We included six non-randomised studies (five retrospective) that compared laparoscopic versus open transhiatal oesophagectomy (334 patients: laparoscopic = 154 patients; open = 180 patients); five studies (326 patients: laparoscopic = 151 patients; open = 175 patients) provided information for one or more outcomes. Most studies included a mixture of adenocarcinoma and squamous cell carcinoma and different stages of oesophageal cancer, without metastases. All the studies were at unclear or high risk of bias; the overall quality of evidence was very low for all the outcomes. The differences between laparoscopic and open transhiatal oesophagectomy were imprecise for short-term mortality (laparoscopic = 0/151 (adjusted proportion based on meta-analysis estimate: 0.5%) versus open = 2/175 (1.1%); RR 0.44; 95% CI 0.05 to 4.09; participants = 326; studies = 5; I² = 0%); long-term mortality (HR 0.97; 95% CI 0.81 to 1.16; participants = 193; studies = 2; I² = 0%); anastomotic stenosis (laparoscopic = 4/36 (11.1%) versus open = 3/37 (8.1%); RR 1.37; 95% CI 0.33 to 5.70; participants = 73; studies = 1); short-term recurrence (laparoscopic = 1/16 (6.3%) versus open = 0/4 (0%); RR 0.88; 95% CI 0.04 to 18.47; participants = 20; studies = 1); long-term recurrence (HR 1.00; 95% CI 0.84 to 1.18; participants = 173; studies = 2); proportion of people who required blood transfusion (laparoscopic = 0/36 (0%) versus open = 6/37 (16.2%); RR 0.08; 95% CI 0.00 to 1.35; participants = 73; studies = 1); proportion of people with positive resection margins (laparoscopic = 15/102 (15.8%) versus open = 27/111 (24.3%); RR 0.65; 95% CI 0.37 to 1.12; participants = 213; studies = 3; I² = 0%); and the number of lymph nodes harvested during surgery (median difference between the groups varied from 12 less to 3 more lymph nodes in the laparoscopic compared to the open group; participants = 326; studies = 5). 
The proportion of patients with serious adverse events was lower in the laparoscopic group (10/99; 10.3%) than in the open group (24/114; 21.1%) (RR 0.49; 95% CI 0.24 to 0.99; participants = 213; studies = 3; I² = 0%), as was the proportion with adverse events of any kind (laparoscopic = 37/99 (39.9%) versus open = 71/114 (62.3%); RR 0.64; 95% CI 0.48 to 0.86; participants = 213; studies = 3; I² = 0%). The median length of hospital stay was significantly shorter in the laparoscopic group than in the open group (three days less in all three studies that reported this outcome; participants = 266). It was unclear whether the median difference in the quantity of blood transfused was statistically significant in favour of laparoscopic oesophagectomy in the only study that reported this information. None of the studies reported post-operative dysphagia, health-related quality of life, time to return to normal activity (return to pre-operative mobility without caregiver support), or time to return to work. There are currently no randomised controlled trials comparing laparoscopic with open transhiatal oesophagectomy for patients with oesophageal cancer. In observational studies, laparoscopic transhiatal oesophagectomy is associated with fewer overall complications and shorter hospital stays than open transhiatal oesophagectomy. However, this association is unlikely to be causal, and there is currently no information with which to establish a causal link between the observed differences and the surgical approach. Randomised controlled trials comparing laparoscopic transhiatal oesophagectomy with other methods of oesophagectomy are required to determine the optimal method of oesophagectomy.
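A risk ratio such as the one for serious adverse events above can be reproduced approximately from the raw counts using the standard log (Katz) method. This is a crude single-stratum sketch: the review's figures are pooled Mantel-Haenszel estimates across three studies, so they differ slightly from this calculation.

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Crude risk ratio with a 95% CI via the standard log (Katz) method.
    A single-stratum approximation; pooled Mantel-Haenszel estimates,
    as reported in the review, will differ slightly."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Serious adverse events: laparoscopic 10/99 versus open 24/114
rr, lo, hi = risk_ratio_ci(10, 99, 24, 114)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The crude result (RR ≈ 0.48, 95% CI ≈ 0.24 to 0.95) is close to the review's pooled RR 0.49 (95% CI 0.24 to 0.99), illustrating how the interval crosses or approaches 1 when event counts are small.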
Randomised controlled trials are the best type of study for finding out whether one treatment is better than another, since they ensure that similar types of people receive the new and the old treatment. We did not find any randomised controlled trials, but we identified six relevant non-randomised studies, with a total of 334 patients, which compared laparoscopic and open surgery. One study did not provide usable results, so five studies, with 326 patients, provided information for this review (laparoscopic surgery = 151 patients; open surgery = 175 patients). In four of these studies, historical information was collected from hospital records; in one study, new information was collected. In general, newly collected information is considered more reliable than information from hospital records. The differences between laparoscopic and open transhiatal oesophagectomy were imprecise for: deaths in the short term and long term; the percentage of people with major complications; narrowing of the new junction in the gut created after removing the oesophagus; cancer returning in the short term and long term; and the proportion of people who required blood transfusion. The proportion of patients with any complications and the average length of hospital stay were lower in the keyhole surgery group than in the open surgery group. It was unclear whether there was a difference in the amount of blood transfused between the two groups. None of the studies reported difficulty in swallowing after surgery, health-related quality of life, the time taken to return to normal activity (the same mobility as before surgery), or the time taken to return to work. The quality of the evidence was very low, mainly because it was not clear whether participants who received laparoscopic surgery were similar to those who had open surgery. This makes the findings unreliable.
Well-designed randomised controlled trials are necessary to obtain high-quality evidence on the best method to perform oesophagectomy.
We included 32 studies (6597 women) in this review. Forceps were less likely than the ventouse to fail to achieve a vaginal birth with the allocated instrument (risk ratio (RR) 0.65, 95% confidence interval (CI) 0.45 to 0.94). However, with forceps there was a trend to more caesarean sections, and significantly more third- or fourth-degree tears (with or without episiotomy), vaginal trauma, use of general anaesthesia, and flatus incontinence or altered continence. Facial injury was more likely with forceps (RR 5.10, 95% CI 1.12 to 23.25). Using a random-effects model because of heterogeneity between studies, there was a trend towards fewer cases of cephalhaematoma with forceps (average RR 0.64, 95% CI 0.37 to 1.11). Among different types of ventouse, the metal cup was more likely to result in a successful vaginal birth than the soft cup, with more cases of scalp injury and cephalhaematoma. The hand-held ventouse was associated with more failures than the metal ventouse, and a trend to fewer than the soft ventouse. Overall forceps or the metal cup appear to be most effective at achieving a vaginal birth, but with increased risk of maternal trauma with forceps and neonatal trauma with the metal cup. There is a recognised place for forceps and all types of ventouse in clinical practice. The role of operator training with any choice of instrument must be emphasised. The increasing risks of failed delivery with the chosen instrument from forceps to metal cup to hand-held to soft cup vacuum, and trade-offs between risks of maternal and neonatal trauma identified in this review need to be considered when choosing an instrument.
This review of 32 studies (6597 women) looks at assisted or instrumental vaginal deliveries in women in the second stage of labour. The review is important because instrumental delivery is a frequent intervention in childbirth and in some cases may result in harmful outcomes for the mother, the baby, or both. The main comparisons are between forceps and the ventouse; there are also comparisons between different types of ventouse. The outcomes analysed are the success of the particular instrument in achieving the delivery and the rate of complications for both mother and baby. Not all studies considered all outcomes, and in particular there were differences in the types of complications encountered by mothers and babies. In addition, we identified no studies for some comparisons. The results showed that forceps was the better instrument in terms of achieving a successful delivery. However, it was also associated with higher rates of complications for the mother: perineal trauma, tears, requirements for pain relief, and incontinence. There were risks of injury to the baby with both types of instrument. Comparisons between different types of ventouse revealed that the metal cup was better at achieving successful delivery than the soft cup, but with more risk of injury to the baby. There were no significant differences between the handheld and the standard vacuum. Decisions about which instrument is best will therefore depend on the individual situation, where the urgency with which the baby needs to be delivered is balanced against the potential risks to mother and baby.
Five trials recruited a total of 439 participants and between them compared different types of VNS stimulation therapy. Baseline phases ranged from 4 to 12 weeks and double-blind treatment phases from 12 to 20 weeks across the five trials. Overall, two studies were rated as having a low risk of bias and three an unclear risk of bias, due to a lack of reported information about study design. Effective blinding in studies of VNS is difficult because of the frequency of stimulation-related side effects such as voice alteration; this may limit the validity of the observed treatment effects. Four trials compared high-frequency stimulation to low-frequency stimulation and were included in quantitative syntheses (meta-analyses). The overall risk ratio (95% CI) for 50% or greater reduction in seizure frequency across all studies was 1.73 (1.13 to 2.64), indicating that participants receiving high-frequency VNS were over one and a half times as likely to achieve this reduction as those receiving low-frequency VNS. For this outcome, we rated the evidence as moderate in quality due to incomplete outcome data in one included study; however, results did not vary substantially and remained statistically significant in both the best- and worst-case scenarios. The risk ratio (RR) for treatment withdrawal was 2.56 (0.51 to 12.71); evidence for this outcome was rated as low quality due to imprecision of the result and incomplete outcome data in one included study. The RRs for adverse effects were as follows: (a) voice alteration and hoarseness 2.17 (99% CI 1.49 to 3.17); (b) cough 1.09 (99% CI 0.74 to 1.62); (c) dyspnea 2.45 (99% CI 1.07 to 5.60); (d) pain 1.01 (99% CI 0.60 to 1.68); (e) paresthesia 0.78 (99% CI 0.39 to 1.53); (f) nausea 0.89 (99% CI 0.42 to 1.90); (g) headache 0.90 (99% CI 0.48 to 1.69). Evidence on adverse effects was rated as moderate to low quality due to imprecision of the results and/or incomplete outcome data in one included study. No important heterogeneity between studies was found for any of the outcomes.
VNS for partial seizures appears to be an effective and well-tolerated treatment, based on the 439 participants included from five trials. Results of the overall efficacy analysis show that VNS using the high-stimulation paradigm was significantly better than low stimulation in reducing seizure frequency. Results for the outcome "withdrawal of allocated treatment" suggest that VNS is well tolerated, as withdrawals were rare. No significant difference was found in withdrawal rates between the high and low stimulation groups; however, limited information was available from the evidence included in this review, so important differences between high and low stimulation cannot be excluded. Adverse effects associated with implantation and stimulation were primarily hoarseness, cough, dyspnea, pain, paresthesia, nausea and headache, with hoarseness and dyspnea more likely to occur with high stimulation than with low stimulation. However, the evidence on these outcomes is limited and of moderate to low quality. Further high-quality research is needed to fully evaluate the efficacy and tolerability of VNS for drug-resistant partial seizures.
Overall, the five multi-centre randomised controlled trials (RCTs) recruited a total of 439 participants and between them compared different types of VNS therapy. Three trials compared high-frequency stimulation to low-frequency stimulation in participants aged 12 to 60 years, and another trial examined high-frequency versus low-frequency stimulation in children. Additionally, one trial examined three different stimulation frequencies. The review of these trials found that vagus nerve stimulation, when used with one or more antiepileptic drugs, is effective in reducing the number of seizures for people whose epilepsy does not respond to drugs alone. Common side effects were voice alteration and hoarseness, pain, shortness of breath, cough, nausea, tingling sensation, headache, and infection at the site of the operation, with shortness of breath, voice alteration and hoarseness more common in people receiving high-frequency stimulation than in people receiving low-frequency stimulation. Of the five included studies, two were rated as being of high quality and the other three as being of unclear quality, due to a lack of reported information in the study papers about study design. The evidence for the effectiveness and side effects of VNS therapy was limited and imprecise, coming from the small number of studies included in this review, and so was rated as being of moderate to low quality. Further large, high-quality studies are required to provide more information about the effectiveness and side effects of VNS therapy.
Four studies enrolling 450 participants met our inclusion criteria. Overall risk of bias was judged to be low in one study, unclear in two, and high in one. There were no significant differences between the use of preoperative imaging technologies and standard care (no imaging) in the number of fistulas that were successfully created (4 studies, 433 patients: RR 1.06, 95% CI 0.95 to 1.28; I² = 76%); the number of fistulas that matured at six months (3 studies, 356 participants: RR 1.11, 95% CI 0.98 to 1.25; I² = 0%); the number of fistulas that were used successfully for dialysis (2 studies, 286 participants: RR 1.12, 95% CI 0.99 to 1.28; I² = 0%); the number of patients initiating dialysis with a catheter (1 study, 214 patients: RR 0.66, 95% CI 0.42 to 1.04); or the rate of interventions required to maintain patency (1 study, 70 patients: MD 14.70 interventions/1000 patient-days, 95% CI -7.51 to 36.91). Based on four small studies, preoperative vessel imaging did not improve fistula outcomes compared with standard care. Adequately powered prospective studies are required to fully answer this question.
For this review, we searched the literature published up to April 2015 and found four studies that involved 450 participants which met our inclusion criteria. The included studies compared the proportions of fistulas that matured when evaluation was carried out before surgery using medical imaging techniques with standard care (no imaging). Our analysis found that vessel imaging before surgery did not improve the rate of fistulas that matured. Further research in this area involving more participants may be beneficial to better understand if imaging before surgery could help to increase the success of fistulas for people who need haemodialysis.
Twelve studies were eligible for inclusion. Allergic disease and/or food hypersensitivity outcomes were assessed by six studies enrolling 2080 infants, but outcomes were reported for only 1549 infants. The studies generally had adequate randomisation, allocation concealment, and blinding of treatment. However, the findings of this review should be treated with caution because of excess losses to follow-up (17% to 61%). Meta-analysis of five studies reporting the outcomes of 1477 infants found a significant reduction in infant eczema (typical RR 0.82, 95% CI 0.70 to 0.95). However, there was significant and substantial heterogeneity between studies. One study reported that the difference in eczema between groups persisted to 4 years of age. When the analysis was restricted to studies reporting atopic eczema (confirmed by skin prick test or specific IgE), the findings were no longer significant (typical RR 0.80, 95% CI 0.62 to 1.02). All studies reporting significant benefits used probiotic supplements containing L. rhamnosus and enrolled infants at high risk of allergy. No benefits were reported for any other allergic disease or food hypersensitivity outcome. There is insufficient evidence to recommend the addition of probiotics to infant feeds for prevention of allergic disease or food hypersensitivity. Although there was a reduction in clinical eczema in infants, this effect was not consistent between studies, and caution is advised in view of methodological concerns regarding the included studies. Further studies are required to determine whether the findings are reproducible.
This review found that probiotics added to infant feeds may help prevent infant eczema, with one study suggesting the benefit may persist to four years of age. However, concerns regarding the quality of studies, inconsistency of findings between studies, and the fact that the benefits did not persist if restricted to infants with evidence of sensitisation to allergens, suggests that further studies are needed to confirm these results.
We included five trials in this analysis, all performed in the prehospital setting. The risk of bias was low in four of these studies (n = 1186). The trials accumulated 1254 participants. Aminophylline was found to have no effect on survival to hospital discharge (risk ratio (RR) 0.58, 95% confidence interval (CI) 0.12 to 2.74) or on the secondary survival outcomes (survival to hospital admission: RR 0.92, 95% CI 0.61 to 1.39; return of spontaneous circulation: RR 1.15, 95% CI 0.89 to 1.49). Survival was rare (6/1254), making data on neurological outcomes and adverse events quite limited. The planned subgroup analysis for early administration of aminophylline included 37 participants; no one in this subgroup survived to hospital discharge. The prehospital administration of aminophylline in bradyasystolic arrest is not associated with improved return of circulation, survival to admission, or survival to hospital discharge. The benefits of aminophylline administered early in resuscitative efforts are not known.
We found five studies that included 1254 patients who had this type of cardiac arrest in the prehospital setting. Four of the five studies (1186 patients) were well-designed studies with low risk of bias. Although no adverse events were reported, aminophylline showed no advantage when it was added to the standard resuscitation practice of paramedics when compared with placebo in these patients. It is not known whether giving aminophylline sooner would be helpful.
We included 14 randomised trials involving 4596 operations, of which 3526 were from the single largest trial (GALA). In general, reporting of methodology in the included studies was poor. No study was able to blind patients and surgical teams to the randomised treatment allocation, and in most studies the blinding of outcome assessors was unclear. There was no statistically significant difference in the incidence of stroke within 30 days of surgery between the local anaesthesia group and the general anaesthesia group: the incidence of stroke was 3.2% with local anaesthesia compared with 3.5% with general anaesthesia (Peto OR 0.92, 95% CI 0.67 to 1.28). There was no statistically significant difference in the proportion of patients who had a stroke or died within 30 days of surgery: 3.6% of patients with local anaesthesia compared with 4.2% with general anaesthesia (Peto OR 0.85, 95% CI 0.63 to 1.16). There was a non-significant trend towards lower operative mortality with local anaesthesia: 0.9% of patients died within 30 days of surgery compared with 1.5% under general anaesthesia (Peto OR 0.62, 95% CI 0.36 to 1.07). However, neither the GALA trial nor the pooled analysis was adequately powered to reliably detect an effect on mortality. The proportion of patients who had a stroke or died within 30 days of surgery did not differ significantly between the two anaesthetic techniques used during carotid endarterectomy. This systematic review provides evidence to suggest that patients and surgeons can choose either anaesthetic technique, depending on the clinical situation and their own preferences.
This review includes 14 randomised trials, involving 4596 operations, comparing the use of local anaesthetic to general anaesthetic for carotid endarterectomy. There was no statistically significant difference between the anaesthetic techniques in the percentage of patients who had a stroke or died within 30 days of surgery. This systematic review provides evidence to suggest that patients and surgeons can choose either anaesthetic technique, depending on the clinical situation and their own preferences.
Five studies were included. No new eligible studies have been found since the review was initially conducted. Method discontinuation was similar between groups in all trials. Bleeding patterns and side effects were similar in trials that compared immediate with conventional start. In a study of depot medroxyprogesterone acetate (DMPA), immediate start of DMPA showed fewer pregnancies than a 'bridge' method before DMPA (OR 0.36; 95% CI 0.16 to 0.84). Further, more women in the immediate-DMPA group were very satisfied versus those with a 'bridge' method (OR 1.99; 95% CI 1.05 to 3.77). A trial of two immediate-start methods showed the vaginal ring group had less prolonged bleeding (OR 0.42; 95% CI 0.20 to 0.89) and less frequent bleeding (OR 0.23; 95% CI 0.05 to 1.03) than COC users. The ring group also reported fewer side effects. Also, more immediate ring users were very satisfied than immediate COC users (OR 2.88; 95% CI 1.59 to 5.22). We found limited evidence that immediate start of hormonal contraception reduces unintended pregnancies or increases method continuation. However, the pregnancy rate was lower with immediate start of DMPA versus another method. Some differences were associated with contraceptive type rather than initiation method, i.e., immediate ring versus immediate COC. More studies are needed of immediate versus conventional start of the same hormonal contraceptive.
In August 2012, we did computer searches for randomized controlled trials of the quick-start method for pills and other hormonal birth control. We contacted researchers to find other studies. We included trials that compared quick start to the usual start of birth control, as well as studies that compared quick start of different types of hormonal birth control with each other. Birth control methods could contain the hormones estrogen and progestin (combined hormonal birth control) or just progestin. Five studies were included. In a study of 'depo,' which is given as a shot, fewer women with quick start of depo became pregnant than those who used another method for 21 days before depo. The numbers of women who stopped using their birth control method early were similar between groups in all trials. In the depo trial, more women with quick start of depo were very satisfied. A trial of two quick-start methods showed that women with the vaginal ring had less prolonged bleeding and less frequent bleeding than those taking pills. For six side effects, including changes in breasts, mood, and nausea, quick start of the ring showed fewer problems than quick start of pills. For satisfaction in that trial, more women in the ring group were very satisfied with their method of birth control. We found little evidence that quick start leads to fewer pregnancies or fewer women stopping early. However, fewer women on quick start of depo became pregnant than women who started with another method. Other differences were between types of birth control rather than start times: women using the vaginal ring had fewer problems than women using birth control pills. More studies are needed comparing quick start versus usual start of the same hormonal birth control method.
We identified three RCTs for inclusion. One of these studies had serious problems with allocation of the study drug and placebo, so we could not analyse its data for intervention effects. The remaining two RCTs recruited 104 participants. One randomized 65 participants to receive linezolid or not, in addition to a background regimen; the other randomized 39 participants to addition of linezolid to a background regimen immediately, or after a delay of two months. We included 14 non-randomized cohort studies (two prospective, 12 retrospective), with a total of 1678 participants. Settings varied in terms of income and tuberculosis burden. One RCT and seven of the 14 non-randomized studies commenced recruitment in or after 2009. All RCT participants and 38.7% of non-randomized participants were reported to have XDR-TB. Dosing and duration of linezolid were variable and inconsistently reported. Daily doses ranged from 300 mg to 1200 mg; some studies planned dose reduction for all participants after a set time, others incompletely reported dose reductions for some participants, and most did not report the number of participants receiving each dose. The mean or median duration of linezolid therapy was longer than 90 days in eight of the 14 non-randomized cohorts that reported this information. Duration of participant follow-up varied between RCTs, and only five of the 14 non-randomized studies reported follow-up duration. Both RCTs were at low risk of reporting bias and unclear risk of selection bias. One RCT was at high risk of performance and detection bias, and low risk of attrition bias, for all outcomes. The other RCT was at low risk of detection and attrition bias for the primary outcome, with unclear risk of detection and attrition bias for non-primary outcomes, and unclear risk of performance bias for all outcomes. Overall risk of bias was critical for three of the non-randomized studies and serious for the remaining 11.
One RCT reported higher cure (risk ratio (RR) 2.36, 95% confidence interval (CI) 1.13 to 4.90, very low-certainty evidence), lower failure (RR 0.26, 95% CI 0.10 to 0.70, very low-certainty evidence), and higher sputum culture conversion at 24 months (RR 2.10, 95% CI 1.30 to 3.40, very low-certainty evidence), amongst the linezolid-treated group than controls, with no differences in other primary and secondary outcomes. This study also found more anaemia (17/33 versus 2/32), nausea and vomiting, and neuropathy (14/33 versus 1/32) events amongst linezolid-receiving participants. Linezolid was discontinued early and permanently in two of 33 (6.1%) participants who received it. The other RCT reported higher sputum culture conversion four months after randomization (RR 2.26, 95% CI 1.19 to 4.28), amongst the group who received linezolid immediately compared to the group who had linezolid initiation delayed by two months. Linezolid was discontinued early and permanently in seven of 39 (17.9%) participants who received it. Linezolid discontinuation occurred in 22.6% (141/624; 11 studies), of participants in the non-randomized studies. Total, serious, and linezolid-attributed adverse events could not be summarized quantitatively or comparatively, due to incompleteness of data on duration of follow-up and numbers of participants experiencing events. We found some evidence of efficacy of linezolid for drug-resistant pulmonary tuberculosis from RCTs in participants with XDR-TB but adverse events and discontinuation of linezolid were common. Overall, there is a lack of comparative data on efficacy and safety. Serious risk of bias and heterogeneity in conducting and reporting non-randomized studies makes the existing, mostly retrospective, data difficult to interpret. 
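The review reports raw adverse-event counts (e.g. anaemia 17/33 versus 2/32) without effect estimates. Purely as an illustration of why such small denominators preclude precise comparisons, a crude odds ratio with a Woolf (log) confidence interval can be computed from those counts; these figures are not from the review itself.

```python
import math

def odds_ratio_ci(a, n1, c, n2, z=1.96):
    """Crude odds ratio with a Woolf (log) confidence interval.
    Illustrative only: the review reports raw counts, not an OR."""
    b, d = n1 - a, n2 - c            # non-events in each group
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Anaemia events from the first RCT: linezolid 17/33 versus control 2/32
or_, lo, hi = odds_ratio_ci(17, 33, 2, 32)
print(f"OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

With only two events in the control arm, the interval spans more than an order of magnitude (roughly 3 to 78), showing how imprecise harm estimates from such small trials are even when the point estimate looks large.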
Further prospective cohort studies or RCTs in high tuberculosis burden low-income and lower-middle-income countries would be useful to inform policymakers and clinicians of the efficacy and safety of linezolid as a component of drug-resistant TB treatment regimens.
We searched for evidence up to 13 July 2018. We analysed data from two trials, one of which randomly allocated 65 people with drug-resistant tuberculosis to either a linezolid-containing or linezolid-free drug combination, and another that randomly allocated 39 participants to receive linezolid as part of their treatment from the start or have it added after a delay of two months. We also included 14 studies, including 1678 people, in which some participants received linezolid but others did not, but this was not determined at random. One trial showed a higher likelihood of cure and lower risk of treatment failure in participants receiving linezolid compared to those who did not. The second trial showed that participants who received linezolid immediately had a higher chance of tuberculosis being cleared from their sputum four months after the start of the study than those who added linezolid after a two-month delay. When they examined safety, the first trial found a higher risk of developing low red blood cell counts, nausea and vomiting, and nerve damage in people receiving linezolid. From 11 of the non-randomized studies that reported this, 22.6% of people had to stop linezolid due to adverse effects (side effects), though further comparisons of harmful effects were not possible due to incomplete reporting in the non-randomized studies. Overall, although there is some evidence of benefit, we have very low certainty in its accuracy. More high-quality studies are required before we can be certain how effective and safe linezolid is for drug-resistant tuberculosis. This review is current up to 13 July 2018.
We included 13 trials with 567,476 participants randomised to pre- or post-exposure prophylaxis. The trials had high risk of bias. The trials were heterogeneous in terms of study setting, participants, interventions, and outcome measures. Our meta-analysis of six randomised trials showed that immunoglobulins, when used for pre-exposure prophylaxis, significantly reduced the number of adult patients with hepatitis A at 6 to 12 months (1020/286503 versus 761/134529; RR 0.53; 95% CI 0.40 to 0.70; random-effects model) in comparison with no intervention or inactive control. Four trials showed a similar effect in children aged 3 to 17 at 6 to 12 months' follow-up (917/210822 versus 677/78960; RR 0.45; 95% CI 0.34 to 0.59). Comparing different doses of immunoglobulins, higher dosage was generally more effective than lower dosage (1.5 ml better than 0.75 ml and 0.75 ml better than 0.1 ml) in preventing hepatitis A. No significant systemic adverse events were reported. One trial showed that immunoglobulin was more effective than placebo for post-exposure prophylaxis. There appeared to be no significant difference between immunoglobulins and inactivated hepatitis A vaccine in seroconversion to hepatitis A vaccine antibodies at four weeks (RR 1.16; 95% CI 0.98 to 1.38), but immunoglobulins were significantly less effective than vaccine regarding antibody levels at 8, 12, or 24 weeks. Immunoglobulins seem to be effective for pre-exposure and post-exposure prophylaxis of hepatitis A. However, caution is warranted regarding the positive findings due to the limited number of trials, the years in which they were conducted, and the risk of bias. The conduct of further rigorous trials seems justified.
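For readers who want to check the direction of the pooled estimate, a crude (unweighted) risk ratio can be computed from the raw adult counts quoted above. This is an illustrative sketch only: it will not reproduce the review's random-effects RR of 0.53, because a meta-analysis weights each trial separately rather than pooling raw counts across trials:

```python
# Crude (unweighted) risk ratio from the pooled adult counts quoted above:
# 1020/286,503 hepatitis A cases with immunoglobulins vs 761/134,529 controls.
risk_ig = 1020 / 286503
risk_ctrl = 761 / 134529
crude_rr = risk_ig / risk_ctrl
print(f"crude RR = {crude_rr:.2f}")
```

The crude value (about 0.63) still favours immunoglobulins but differs from the weighted 0.53, a reminder that pooling raw counts across heterogeneous trials can mislead.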
This review concludes that immunoglobulins seem effective for preventing hepatitis A in both children and adults. However, the evidence on which the conclusion is based is not strong, as the included trials appear to have risk of bias and their number is insufficient. Because there is a potential risk of transmission of blood-borne diseases, such as human immunodeficiency virus, from immunoglobulin preparations, and because of the availability of hepatitis A vaccine, the use of immunoglobulins has become limited. However, their use is still required in some specific populations, such as persons with compromised immune function, children under one year of age, or persons who have not developed a full response to vaccine immunisation. Future clinical trials should address the benefit and harm of immunoglobulins in these populations.
We identified six high quality, up to date Cochrane reviews. Four of these related to the safety of regular formoterol or salmeterol (as monotherapy or combination therapy) and these included 19 studies in children. We added data from two recent studies on salmeterol combination therapy in 689 children which were published after the relevant Cochrane review had been completed, making a total of 21 trials on 7474 children (from four to 17 years of age). The two remaining reviews compared the safety of formoterol with salmeterol from trials randomising participants to one or other treatment, but the reviews only included a single trial in children in which there were 156 participants. Only one child died across all the trials, so impact on mortality could not be assessed. We found a statistically significant increase in the odds of suffering a non-fatal serious adverse event of any cause in children on formoterol monotherapy (Peto odds ratio (OR) 2.48; 95% confidence interval (CI) 1.27 to 4.83, I2 = 0%, 5 trials, N = 1335, high quality) and smaller increases in odds which were not statistically significant for salmeterol monotherapy (Peto OR 1.30; 95% CI 0.82 to 2.05, I2 = 17%, 5 trials, N = 1333, moderate quality), formoterol combination therapy (Peto OR 1.60; 95% CI 0.80 to 3.28, I2 = 32%, 7 trials, N = 2788, moderate quality) and salmeterol combination therapy (Peto OR 1.20; 95% CI 0.37 to 2.91, I2 = 0%, 5 trials, N = 1862, moderate quality). We compared the pooled results of the monotherapy and combination therapy trials. There was no significant difference between the pooled ORs of children with a serious adverse event (SAE) from long-acting beta2-agonist (LABA) monotherapy (Peto OR 1.60; 95% CI 1.10 to 2.33, 10 trials, N = 2668) and combination trials (Peto OR 1.50; 95% CI 0.82 to 2.75, 12 trials, N = 4650).
However, there were fewer children with an SAE in the regular inhaled corticosteroid (ICS) control group (0.7%) than in the placebo control group (3.6%). As a result, there was an absolute increase of an additional 21 children (95% CI 4 to 45) suffering such an SAE of any cause for every 1000 children treated over six months with either regular formoterol or salmeterol monotherapy, whilst for combination therapy the increased risk was an additional three children (95% CI 1 fewer to 12 more) per 1000 over three months. We only found a single trial in 156 children comparing the safety of regular salmeterol to regular formoterol monotherapy, and even with the additional evidence from indirect comparisons between the combination formoterol and salmeterol trials, the CI around the effect on SAEs is too wide to tell whether there is a difference in the comparative safety of formoterol and salmeterol (OR 1.26; 95% CI 0.37 to 4.32). We do not know if regular combination therapy with formoterol or salmeterol in children alters the risk of dying from asthma. Regular combination therapy is likely to be less risky than monotherapy in children with asthma, but we cannot say that combination therapy is risk free. There are probably an additional three children per 1000 who suffer a non-fatal serious adverse event on combination therapy in comparison to ICS over three months. This is currently our best estimate of the risk of using LABA combination therapy in children and has to be balanced against the symptomatic benefit obtained for each child. We await the results of large on-going surveillance studies to further clarify the risks of combination therapy in children and adolescents with asthma. The relative safety of formoterol in comparison to salmeterol remains unclear, even when all currently available direct and indirect trial evidence is combined.
We looked at previous Cochrane reviews on long-acting beta2-agonists and also searched for additional trials on long-acting beta2-agonists in children. We found a total of 21 trials involving 7318 children that provided information on the safety of formoterol or salmeterol given alone or combined with corticosteroids. We also found one trial on 156 children which directly compared formoterol to salmeterol. There were more non-fatal serious adverse events in children taking formoterol or salmeterol compared to those on placebo; for every 1000 children treated with formoterol or salmeterol over six months, 21 extra children suffered a non-fatal event in comparison with placebo. There was a smaller and non-significant increase in serious adverse events in children on formoterol or salmeterol and corticosteroids compared to corticosteroids alone: for every 1000 children treated with combination therapy over three months, three extra children suffered a non-fatal event in comparison with corticosteroids alone. This number illustrates the average difference between combination therapy and corticosteroids. Our analyses showed that in fact the true answer could be between 1 fewer and 12 more children who would experience a non-fatal event. We did not have enough numbers from the small trial comparing formoterol to salmeterol, or from information in the other trials, to tell whether one long-acting beta2-agonist treatment is safer than the other. There was only one death across all the trials, so we did not have enough information to tell whether formoterol or salmeterol increases the risk of death.
Thirty-two controlled clinical trials met the selection criteria; two were duplicate articles. The treatment drugs were intravenous lidocaine (16 trials), mexiletine (12 trials), lidocaine plus mexiletine sequentially (one trial), and tocainide (one trial). Twenty-one trials were crossover studies, and nine were parallel. Lidocaine and mexiletine were superior to placebo (weighted mean difference (WMD) = -11; 95% CI: -15 to -7; P < 0.00001), and limited data showed no difference in efficacy (WMD = -0.6; 95% CI: -7 to 6) or adverse effects versus carbamazepine, amantadine, gabapentin or morphine. In these trials, systemic local anesthetics were safe, with no deaths or life-threatening toxicities. Sensitivity analysis identified data distribution in three trials as a probable source of heterogeneity. There was no publication bias. Lidocaine and oral analogs were safe drugs in controlled clinical trials for neuropathic pain, were better than placebo, and were as effective as other analgesics. Future trials should enrol participants with specific diseases and test novel lidocaine analogs with better toxicity profiles. More emphasis is necessary on outcomes measuring patient satisfaction to assess whether statistically significant pain relief is clinically meaningful.
The authors reviewed all randomized studies comparing these drugs with placebo or with other analgesics and found that: local anesthetics were superior to placebo in decreasing intensity of neuropathic pain; limited data showed no difference in efficacy or adverse effects between local anesthetics and carbamazepine, amantadine, gabapentin or morphine; local anesthetics had more adverse effects than placebo; and local anesthetics were safe.
Seventy-seven studies met the entry criteria and randomised 21,248 participants (4625 children and 16,623 adults). Participants were generally symptomatic at baseline with moderate airway obstruction despite their current ICS regimen. Formoterol or salmeterol were most frequently added to low-dose ICS (200 to 400 µg/day of beclomethasone (BDP) or equivalent) in 49% of the studies. The addition of a daily LABA to ICS reduced the risk of exacerbations requiring oral steroids by 23%, from 15% to 11% (RR 0.77, 95% CI 0.68 to 0.87, 28 studies, 6808 participants). The number needed to treat with the addition of LABA to prevent one use of rescue oral corticosteroids is 41 (95% CI 29 to 72), although the event rates in the ICS groups varied between 0% and 38%. Studies recruiting adults dominated the analysis (6203 adult participants versus 605 children). The subgroup estimate for paediatric studies was not statistically significant (RR 0.89, 95% CI 0.58 to 1.39) and includes the possibility of the superiority of ICS alone in children. Higher than usual doses of LABA were associated with significantly less benefit. The difference in the relative risk of serious adverse events with LABA was not statistically significant from that of ICS alone (RR 1.06, 95% CI 0.87 to 1.30). The addition of LABA led to a significantly greater improvement in FEV1 (0.11 litres, 95% CI 0.09 to 0.13) and in the proportion of symptom-free days (11.88%, 95% CI 8.25 to 15.50) compared to ICS monotherapy. It was also associated with a reduction in the use of rescue short-acting β2-agonists (-0.58 puffs/day, 95% CI -0.80 to -0.35), fewer withdrawals due to poor asthma control (RR 0.50, 95% CI 0.41 to 0.61), and fewer withdrawals due to any reason (RR 0.80, 95% CI 0.75 to 0.87). There was no statistically significant group difference in the risk of overall adverse effects (RR 1.00, 95% CI 0.97 to 1.04), withdrawals due to adverse health events (RR 1.04, 95% CI 0.86 to 1.26) or any of the specific adverse health events.
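The number needed to treat (NNT) quoted above depends on the control-group event rate as well as the relative risk, which matters here because the ICS-group event rates ranged from 0% to 38%. A minimal sketch of that sensitivity (the control rates below are illustrative, not the review's own pooled calculation, which yielded an NNT of 41):

```python
def nnt(control_event_rate, rr):
    """Number needed to treat, given a control event rate and relative risk."""
    arr = control_event_rate * (1 - rr)  # absolute risk reduction
    return 1 / arr

# With the review's pooled RR of 0.77 for exacerbations needing oral steroids,
# the NNT varies widely across plausible control (ICS-only) event rates:
for cer in (0.05, 0.15, 0.38):  # illustrative; the review saw rates of 0%-38%
    print(f"control rate {cer:.0%}: NNT = {nnt(cer, 0.77):.0f}")
```

This is why a single NNT should always be read alongside the baseline risk it assumes: the same RR of 0.77 implies an NNT near 90 at a 5% control rate but close to 11 at 38%.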
In adults who are symptomatic on low to high doses of ICS monotherapy, the addition of a LABA at licensed doses reduces the rate of exacerbations requiring oral steroids, improves lung function and symptoms and modestly decreases use of rescue short-acting β2-agonists. In children, the effects of this treatment option are much more uncertain. The absence of group difference in serious adverse health events and withdrawal rates in both groups provides some indirect evidence of the safety of LABAs at usual doses as add-on therapy to ICS in adults, although the width of the confidence interval precludes total reassurance.
The purpose of this review was to assess the efficacy and safety of adding long-acting β2-agonists to inhaled corticosteroids in asthmatic children and adults. Based on the identified randomised trials, in people who remain symptomatic while on inhaled corticosteroids, the addition of long-acting β2-agonists improves lung function and reduces the risk of asthma exacerbations compared to ongoing treatment with a similar dose of inhaled corticosteroids alone in adults. We could not find evidence of increased serious adverse events or withdrawal rates due to adverse health events with the combination of long-acting β2-agonists at usual doses and inhaled corticosteroids in adults. This provides some indirect evidence, but not total reassurance, regarding the short- and medium-term safety of this treatment strategy. There have not been enough children studied to assess the risks and benefits of adding LABAs in this age group.
We included 35 trials (13,872 adult participants). Seven included studies were at low risk of bias. We identified eight studies as awaiting classification since we could not obtain the full texts, and had insufficient information to include or exclude them. We included data from 24 trials for quantitative synthesis. The results of meta-analyses showed that nitrous oxide-based techniques increased the incidence of pulmonary atelectasis (odds ratio (OR) 1.57, 95% confidence interval (CI) 1.18 to 2.10, P = 0.002), but had no effects on the inhospital case fatality rate, the incidence of pneumonia, myocardial infarction, stroke, severe nausea and vomiting, venous thromboembolism, wound infection, or the length of hospital stay. The sensitivity analyses suggested that the results of the meta-analyses were all robust except for the outcomes of pneumonia, and severe nausea and vomiting. Two trials reported length of intensive care unit (ICU) stay but the data were skewed so were not pooled. Both trials reported that nitrous oxide-based techniques had no effects on the length of ICU stay. We rated the quality of evidence for two outcomes (pulmonary atelectasis, myocardial infarction) as high, four outcomes (inhospital case fatality rate, stroke, venous thromboembolism, length of hospital stay) as moderate, and three (pneumonia, severe nausea and vomiting, wound infection rate) as low. Given the evidence from this Cochrane review, the avoidance of nitrous oxide may be reasonable in participants with pre-existing poor pulmonary function or at high risk of postoperative nausea and vomiting. Since there are eight studies awaiting classification, selection bias may exist in our systematic review.
We examined the evidence available up to 17 October 2014. We included 35 trials involving 13,872 adult participants, all of whom were randomized to either receive nitrous oxide or no nitrous oxide. The trials covered a variety of situations during general anaesthesia. We found that general anaesthesia with nitrous oxide increased the risk of pulmonary atelectasis (i.e. failure of the lungs to expand fully). When we restricted the results to the highest quality studies only, we found evidence that nitrous oxide may potentially increase the risk of pneumonia and severe nausea and vomiting. However, nitrous oxide had no effect on the patients' survival, the incidence of heart attack, stroke, wound infection, the occurrence of blood clots within veins, the length of hospital stay, or the length of intensive care unit stay. The evidence related to survival of participants was of moderate quality because we did not have enough data. The evidence related to some harmful effects, such as failure of the lungs to expand fully and heart attack, was of high quality, while for other harmful effects, such as stroke and the occurrence of blood clots within veins, the evidence was of moderate quality. For others, such as pneumonia, severe nausea and vomiting, and wound infection, the evidence was of low quality. The evidence related to the length of time spent in hospital was of moderate quality. The avoidance of nitrous oxide may be reasonable in participants with pre-existing poor pulmonary function or at high risk of postoperative nausea and vomiting.
We identified seven trials that met the inclusion criteria. Out of these, six trials provided data for the meta-analyses. A total of 488 participants with acute cholecystitis and fit to undergo laparoscopic cholecystectomy were randomised to early laparoscopic cholecystectomy (ELC) (244 people) and delayed laparoscopic cholecystectomy (DLC) (244 people) in the six trials. Blinding was not performed in any of the trials and so all the trials were at high risk of bias. Other than blinding, three of the six trials were at low risk of bias in the other domains such as sequence generation, allocation concealment, incomplete outcome data, and selective outcome reporting. The proportion of females ranged between 43.3% and 80% in the trials that provided this information. The average age of participants ranged between 40 years and 60 years. There was no mortality in any of the participants in five trials that reported mortality. There was no significant difference in the proportion of people who developed bile duct injury in the two groups (ELC 1/219 (adjusted proportion 0.4%) versus DLC 2/219 (0.9%); Peto OR 0.49; 95% CI 0.05 to 4.72 (5 trials)). There was no significant difference between the two groups (ELC 14/219 (adjusted proportion 6.5%) versus DLC 11/219 (5.0%); RR 1.29; 95% CI 0.61 to 2.72 (5 trials)) in terms of other serious complications. None of the trials reported quality of life from the time of randomisation. There was no significant difference between the two groups in the proportion of people who required conversion to open cholecystectomy (ELC 49/244 (adjusted proportion 19.7%) versus DLC 54/244 (22.1%); RR 0.89; 95% CI 0.63 to 1.25 (6 trials)). The total hospital stay was shorter in the early group than the delayed group by four days (MD -4.12 days; 95% CI -5.22 to -3.03 (4 trials; 373 people)). There was no significant difference in the operating time between the two groups (MD -1.22 minutes; 95% CI -3.07 to 0.64 (6 trials; 488 people)). 
Only one trial reported return to work. The people belonging to the ELC group returned to work earlier than the DLC group (MD -11.00 days; 95% CI -19.61 to -2.39 (1 trial; 36 people)). Four trials did not report any gallstone-related morbidity during the waiting period. One trial reported five gallstone-related morbidities (cholangitis: two; biliary colic not requiring urgent operation: one; acute cholecystitis not requiring urgent operation: two). There were no reports of pancreatitis during the waiting time. Gallstone-related morbidity was not reported in the remaining trial. Forty (18.3%) of the people belonging to the delayed group had either non-resolution of symptoms or recurrence of symptoms before their planned operation and had to undergo emergency laparoscopic cholecystectomy in five trials. The proportion with conversion to open cholecystectomy was 45% (18/40) in this group of people. We found no significant difference between early and delayed laparoscopic cholecystectomy on our primary outcomes. However, trials with high risk of bias indicate that early laparoscopic cholecystectomy during acute cholecystitis seems safe and may shorten the total hospital stay. The majority of the important outcomes occurred rarely, and hence the confidence intervals are wide. It is unlikely that future randomised clinical trials will be powered to measure differences in bile duct injury and other serious complications since this might involve performing a trial of more than 50,000 people, but several smaller randomised trials may answer the questions through meta-analyses.
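Peto odds ratios are used above because events such as bile duct injury are rare. As a hedged, single-table sketch applied to the adjusted counts of 1/219 (early) versus 2/219 (delayed) — the review's 0.49 pools five trials, so this crude value differs slightly:

```python
import math

def peto_or(a, n1, c, n2):
    """Single 2x2-table Peto odds ratio, suited to rare events."""
    n = n1 + n2
    m1 = a + c                       # total events across both groups
    e = m1 * n1 / n                  # expected events in group 1
    # Hypergeometric variance of the group-1 event count
    v = m1 * (n - m1) * n1 * n2 / (n ** 2 * (n - 1))
    return math.exp((a - e) / v)

# Bile duct injury: 1/219 after early vs 2/219 after delayed cholecystectomy
print(f"Peto OR = {peto_or(1, 219, 2, 219):.2f}")
```

With only three events among 438 people, the point estimate (about 0.51 here, 0.49 pooled) is far less informative than the very wide CI (0.05 to 4.72) reported above.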
Six trials providing information on the review question were identified. A total of 488 people with acute cholecystitis were included. Laparoscopic cholecystectomy was performed early (within seven days of people presenting to the doctor with symptoms) in 244 people while it was performed after at least six weeks in the remaining 244 people. The proportion of females ranged between 43.3% and 80% in the trials that provided this information. The average age of participants ranged between 40 years and 60 years. All the trials were at high risk of bias (and might have overestimated the benefits or underestimated the harms of either early laparoscopic cholecystectomy or delayed laparoscopic cholecystectomy). All the people included in the trials were discharged home alive after operation in the five trials from which this information was available. There was no significant difference in the proportion of people who developed bile duct injury, surgical complications, or who required conversion from key-hole to open operation between the two groups. None of the trials reported quality of life from the time of randomisation. The total hospital stay was shorter in the early group than the delayed group by four days. There was no significant difference in the operating time between the two groups. Only one trial reported the time taken for employed people to return to work. The people belonging to the early laparoscopic cholecystectomy group returned to work 11 days, on average, earlier than the delayed laparoscopic cholecystectomy group. Four trials did not report any gallstone-related complications during the waiting period. One trial reported five gallstone-related complications, including two people with cholangitis. There were no reports of pancreatitis during the waiting time. Gallstone-related morbidity was not reported in the remaining trial. 
Approximately one-sixth of people belonging to the delayed group had either non-resolution of symptoms or recurrence of symptoms before their planned operation and had to undergo emergency laparoscopic cholecystectomy in five trials. Based on information from trials with varying numbers of participants and at high risk of bias, early laparoscopic cholecystectomy during acute cholecystitis appears safe and shortens the total hospital stay. The majority of the important outcomes occurred rarely and hence one cannot rule out that future trials may show that one treatment or another may be better in terms of complications. However, the trial size required to show such differences involves a clinical trial of more than 50,000 people, and so it is unlikely that such large trials will be performed. Several smaller randomised trials may answer the questions through meta-analyses.
Of the three included studies (101 participants), one evaluated continuous arteriovenous haemodialysis and two investigated continuous venovenous haemofiltration; all included conventional therapy as control. We found significant decreases in myoglobin in patients in whom continuous renal replacement therapy (CRRT) was initiated on days four, eight, and 10 (day 4: MD -11.00 μg/L, 95% CI -20.65 to -1.35; day 8: MD -23.00 μg/L, 95% CI -30.92 to -15.08; day 10: MD -341.87 μg/L, 95% CI -626.15 to -57.59) compared with those who underwent conventional therapy. Although CRRT was associated with improved serum creatinine, blood urea nitrogen, and potassium levels; reduced duration of the oliguria phase; and reduced time in hospital, no significant differences were found in mortality rates compared with conventional therapy (RR 0.17, 95% CI 0.02 to 1.37). The included studies did not report on long-term outcomes or prevention of acute kidney injury (AKI). Overall, we found that study quality was suboptimal: blinding and randomisation allocation were not reported by any of the included studies, leading to the possibility of selection, performance and detection bias. Although CRRT may provide some benefits for people with rhabdomyolysis, the poor methodological quality of the included studies and lack of data relating to clinically important outcomes limited our findings about the effectiveness of CRRT for people with rhabdomyolysis. There was insufficient evidence to discern any likely benefits of CRRT over conventional therapy for people with rhabdomyolysis and prevention of rhabdomyolysis-induced AKI.
We searched the literature published before 6 January 2014 and, after assessment, included three small studies that involved 101 participants. Our analysis found that although CRRT showed limited advantages over conventional treatment in improving some aspects of kidney function and muscle tissue loss, there were no significant benefits in reducing the risk of death. The small body of available evidence demonstrated poor methodological quality and was insufficient to enable us to make any robust conclusions about the effectiveness of CRRT for people with rhabdomyolysis. Larger and better designed studies would be needed to investigate whether CRRT is beneficial for people with rhabdomyolysis.
We included seven trials reported in 30 references in the review (354 participants). In all trials, G-CSF was compared with placebo preparations. Dosage of G-CSF varied among studies, ranging from 2.5 to 10 microgram/kg/day. Regarding overall risk of bias, data regarding the generation of the randomization sequence and incomplete outcome data were at low risk of bias; however, data regarding blinding of personnel were not conclusive. The rate of mortality was not different between the two groups (RR 0.64, 95% CI 0.15 to 2.80, P = 0.55). Regarding safety, the limited amount of evidence is inadequate to reach any conclusions regarding the safety of G-CSF therapy. Moreover, the results did not show any beneficial effects of G-CSF in patients with AMI regarding left ventricular function parameters, including left ventricular ejection fraction (MD 3.41, 95% CI -0.61 to 7.44, P = 0.1), end systolic volume (MD -1.35, 95% CI -4.68 to 1.99, P = 0.43) and end diastolic volume (MD -4.08, 95% CI -8.28 to 0.12, P = 0.06). It should also be noted that the review was limited because the included trials lacked sufficiently long follow-up durations. Limited evidence from small trials suggested a lack of benefit of G-CSF therapy in patients with AMI. Since data on the risk of bias regarding blinding of personnel were not conclusive, larger RCTs with appropriate power calculations and longer follow-up durations are required in order to address current uncertainties regarding the clinical efficacy and therapy-related adverse events of G-CSF treatment.
In this review, analysis of seven included studies using G-CSF to improve the function of the damaged heart in patients with heart attack failed to show any beneficial effects of this treatment. The rate of mortality was not different between the two groups (RR 0.64, 95% CI 0.15 to 2.80, P = 0.55). Also, left ventricular parameters, including left ventricular ejection fraction (MD 3.41, 95% CI -0.61 to 7.44, P = 0.1), end systolic volume (MD -1.35, 95% CI -4.68 to 1.99, P = 0.43) and end diastolic volume (MD -4.08, 95% CI -8.28 to 0.12, P = 0.06), did not show significant changes between the treatment and the control groups. There was no evidence that G-CSF treatment was associated with serious adverse effects; however, it should be noted that the review was limited because the included trials lacked sufficiently long follow-up durations. Additionally, four studies had either high or unclear risk of bias for blinding. Therefore, based on the results of the current review, G-CSF treatment should not be administered to patients with heart attack.
A total of 29 trials were included in this review and 21 trials (17,276 women) provided data that could be included in an analysis. The quality of the trials was variable.

1. Non-closure of visceral and parietal peritoneum versus closure of both peritoneal layers
Sixteen trials involving 15,480 women were included and analysed, comparing non-closure of both peritoneal layers with closure of both peritoneal surfaces. Postoperative adhesion formation was assessed in only four trials with 282 women, and no difference was found between groups (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.76 to 1.29). There was a significant reduction in operative time (mean difference (MD) -5.81 minutes, 95% CI -7.68 to -3.93). The duration of hospital stay, in a total of 13 trials involving 14,906 women, was also reduced (MD -0.26 days, 95% CI -0.47 to -0.05). In a trial involving 112 women, reduced chronic pelvic pain was found in the peritoneal non-closure group.

2. Non-closure of visceral peritoneum only versus closure of both peritoneal surfaces
Three trials involving 889 women were analysed. There was an increase in adhesion formation (two trials involving 157 women, RR 2.49, 95% CI 1.49 to 4.16), which was limited to one trial with high risk of bias. There were reductions in operative time, postoperative days in hospital, and wound infection. There was no significant reduction in postoperative pyrexia.

3. Non-closure of parietal peritoneum only versus closure of both peritoneal layers
The two identified trials involved 573 women. Neither study reported on postoperative adhesion formation. There were reductions in operative time and postoperative pain, with no difference in the incidence of postoperative pyrexia, endometritis, postoperative duration of hospital stay, or wound infection.
In only one study, postoperative day one wound pain assessed by the numerical rating scale (MD -1.60, 95% CI -1.97 to -1.23) and chronic abdominal pain assessed by the visual analogue score (MD -1.10, 95% CI -1.39 to -0.81) were reduced in the non-closure group.

4. Non-closure versus closure of visceral peritoneum when parietal peritoneum is closed
There were reductions in all the major urinary symptoms of frequency, urgency and stress incontinence when the visceral peritoneum was left unsutured. There was a reduction in operative time across all the subgroups. There was also a reduction in the period of hospitalisation post-caesarean section, except in the subgroup where only the parietal peritoneum was not sutured, where there was no difference in the period of hospitalisation. The evidence on adhesion formation was limited and inconsistent. There is currently insufficient evidence of benefit to justify the additional time and use of suture material necessary for peritoneal closure. More robust evidence on long-term pain, adhesion formation and infertility is needed.
There are many ways of performing a caesarean section and the techniques used depend on a number of factors, including the clinical situation and the preference of the operator. The peritoneum is a thin membrane of cells supported by a thin layer of connective tissue, and during caesarean section these peritoneal surfaces have to be cut through in order to reach the uterus and for the baby to be born. Following a caesarean section, it has been standard practice to close the peritoneum by stitching (suturing) the two layers of tissue that line the abdomen and cover the internal organs, to restore the anatomy. It has however been suggested that peritoneal adhesions may be more likely rather than less likely when the peritoneum is sutured, possibly as a result of a tissue reaction to the suture material. This review of trials sought to address whether to routinely suture these thin layers of tissue or not after delivering a baby by caesarean section. Twenty-nine randomised controlled trials were identified, with differences in their methodological quality; 21 trials involving over 17,000 women contributed data to the review. Several minutes were saved when the peritoneum was not stitched, and most of the women had a shorter period of hospital stay. Postoperative adhesion formation was assessed in only four trials with 282 women, and no difference was found when leaving both layers of peritoneum unclosed was compared with closure of both. Longer-term outcomes were not adequately assessed, particularly adhesion formation, subfertility and ease of other surgeries in later life. Although the methodological quality of trials was variable, the results were in general consistent between the trials of better and poorer quality. Further studies are needed to assess all these outcomes.
Twenty double-blind RCTs evaluated the BP-lowering efficacy of beta-blockers as a second-line drug in 3744 hypertensive patients (baseline BP of 158/102 mmHg; mean duration of 7 weeks). The BP reduction from adding a beta-blocker as the second drug was estimated by comparing the difference in BP reduction between the combination and monotherapy groups. A reduction in BP was seen when adding a beta-blocker to thiazide diuretics or calcium channel blockers at doses as low as 0.25 times the manufacturer's recommended starting dose. The BP-lowering efficacy of beta-blockers as a second drug was 6/4 mmHg at 1 times the starting dose and 8/6 mmHg at 2 times the starting dose. Beta-blockers reduced heart rate by 10 beats/min at 1 to 2 times the starting dose. Beta-blockers did not statistically significantly increase withdrawals due to adverse effects, but this was likely due to the lack of reporting of this outcome in 35% of the included RCTs. Addition of a beta-blocker to diuretics or calcium channel blockers reduces BP by 6/4 mmHg at 1 times the starting dose and by 8/6 mmHg at 2 times the starting dose. When the blood pressure-lowering effect of beta-blockers from this review was compared to that of thiazide diuretics from our previous review (Chen 2009), second-line beta-blockers reduce systolic BP to the same extent as second-line thiazide diuretics, but reduce diastolic BP to a greater degree. The different effect on diastolic BP means that beta-blockers have little or no effect on pulse pressure, whereas thiazides cause a significant dose-related decrease in pulse pressure. This difference in the pattern of BP lowering with beta-blockers as compared to thiazides might explain why beta-blockers appear to be less effective at reducing adverse cardiovascular outcomes than thiazide diuretics, particularly in older individuals.
In this review, we asked how much beta-blockers reduce BP when used as the second drug to treat hypertension. Twenty trials lasting an average of 7 weeks were found in the world scientific literature to answer this question. The data showed that the addition of a beta-blocker to thiazide diuretics or calcium channel blockers reduced BP by 8/6 mmHg when given at doses 2 times the recommended starting dose. When we compared these results with our previous review of the blood pressure-lowering effect of thiazide diuretics as a second-line drug, we found that beta-blockers have a different pattern of BP lowering. This different pattern of effect on blood pressure might explain why first-line beta-blockers appear to be less effective at reducing adverse cardiovascular outcomes than first-line thiazide diuretics, particularly in older individuals.
We included 3684 patients from 53 studies. SEMS insertion was safer and more effective than plastic tube insertion. Thermal and chemical ablative therapy provided comparable dysphagia palliation but carried an increased requirement for re-interventions and more adverse effects. Anti-reflux stents provided comparable dysphagia palliation to conventional metal stents. Some anti-reflux stents might have reduced gastro-oesophageal reflux and complications. Newly-designed double-layered nitinol (Niti-S) stents were preferable due to longer survival time and fewer complications compared to simple Niti-S stents. Brachytherapy might be a suitable alternative to SEMS in providing a survival advantage and possibly a better quality of life, and might provide better results when combined with argon plasma coagulation or external beam radiation therapy. Self-expanding metal stent insertion is safe, effective and quicker in palliating dysphagia compared to other modalities. However, high-dose intraluminal brachytherapy is a suitable alternative and might provide additional survival benefit with a better quality of life. Some anti-reflux stents and newly-designed stents lead to longer survival and fewer complications compared to conventional stents. Combinations of brachytherapy with self-expanding metal stent insertion or radiotherapy are preferable due to the reduced requirement for re-interventions. Rigid plastic tube insertion, dilatation alone or in combination with other modalities, and chemotherapy alone are not recommended for palliation of dysphagia due to a high incidence of delayed complications and recurrent dysphagia.
The review included randomised controlled studies comparing the use of different interventions to improve dysphagia among patients with inoperable or unresectable primary oesophageal cancer. To find new studies for this updated review, in January 2014 we searched, according to the Cochrane Upper Gastrointestinal and Pancreatic Diseases model, the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE and CINAHL, and major conference proceedings (up to January 2014). The review updates the previous version but still does not demonstrate any obvious superiority of one technique over another among the different kinds of interventions. Self-expanding metal stents provided safer and more effective relief of dysphagia compared to rigid plastic stents. Other techniques, such as radiotherapy or brachytherapy, were also suitable alternatives and might be favourable in improving quality of life and prolonging survival. Individual differences should be emphasised when the type of intervention is determined. Half of the studies included in this review were of high quality. Most studies did not state the methods used to seek and report quality of life outcomes and adverse effects.
Seven trials met the inclusion criteria and had a total of 4526 women. Five were multi-site studies. Four trials were conducted in the USA, while Nigeria and Zambia were represented by one study each, and one trial was done in both Jamaica and India. Two trials provided multiple sessions for participants. In one study that examined contraceptive choice, women in the expanded program were more likely to choose sterilization (OR 4.26; 95% CI 2.46 to 7.37) or use a modern contraceptive method (OR 2.35; 95% CI 1.82 to 3.03), i.e., sterilization, pills, injectable, intrauterine device or barrier method. For the other study, the groups received educational interventions with differing format and intensity. Both groups reportedly had increases in contraceptive use, but they did not differ significantly by six months in consistent use of an effective contraceptive, i.e., sterilization, IUD, injectable, implant, and consistent use of oral contraceptives, diaphragm, or male condoms. Five trials provided one session and focused on testing educational material or media. In one study, knowledge gain favored a slide-and-sound presentation versus a physician's oral presentation (MD -19.00; 95% CI -27.52 to -10.48). In another trial, a table with contraceptive effectiveness categories led to more correct answers than a table based on pregnancy numbers [ORs were 2.42 (95% CI 1.43 to 4.12) and 2.19 (95% CI 1.21 to 3.97)] or a table with effectiveness categories and pregnancy numbers [ORs were 2.58 (95% CI 1.5 to 4.42) and 2.03 (95% CI 1.13 to 3.64)]. Still another trial provided structured counseling with a flipchart on contraceptive methods. The intervention and usual-care groups did not differ significantly in choice of contraceptive method (by effectiveness category) or in continuation of the chosen method at three months. 
Lastly, a study with couples used videos to communicate contraceptive information (control, motivational, contraceptive methods, and both motivational and methods videos). The analyses showed no significant difference between the groups in the types of contraceptives chosen. These trials varied greatly in the types of participants and interventions to communicate contraceptive effectiveness. Therefore, we cannot say overall what would help consumers choose an appropriate contraceptive method. For presenting pregnancy risk data, one trial showed that effectiveness categories were better than pregnancy numbers. In another trial, audiovisual aids worked better than the usual oral presentation. Strategies should be tested in clinical settings and measured for their effect on contraceptive choice. More detailed reporting of intervention content would help in interpreting results. Reports could also include whether the instruments used to assess knowledge or attitudes were tested for validity or reliability. Follow-up should be incorporated to assess retention of knowledge over time. The overall quality of evidence was considered to be low for this review, given that five of the seven studies provided low or very low quality evidence.
Through February 2013, we did computer searches for randomized trials of ways to inform people about how well family planning methods prevent pregnancy. We wrote to researchers to find other trials. The new program could be compared to the usual practice or to another program or means of informing people. We found seven trials with a total of 4526 women. Two had several sessions for participants. One of those looked at the choice of birth control method. Women in the test program more often chose to be sterilized or to use modern birth control than women with the usual counseling. In the other study, the groups had different sessions on family planning. Both groups increased their birth control use. However, the groups were similar at six months in using methods that work well to prevent pregnancy. Five trials had a single session for each group. In one, women learned more from a slide-and-sound format than from having a doctor talk to them. Another trial found that effectiveness categories were better than pregnancy numbers for comparing the methods. Still another study provided structured counseling using a flipchart on family planning methods. The groups were similar in choice of birth control and in numbers who still used their chosen method at three months. The last study used videos to inform couples about family planning. The groups were mostly similar in birth control use after the videos. But those who watched videos on motivation and on family planning did not choose pills or an injectable method as often as those who watched only the family planning video. The studies had different types of participants and programs. We cannot say overall what would help consumers choose their method of birth control. Ways to inform women about family planning options should be tested in clinics. Trials should look at the choice of birth control method, along with how much consumers remember later.
We included 24 observational cluster studies (359 hospitals) in this review. We did not find any randomised controlled trials or formal economic evaluations. In 22 studies, the people who used the intervention (CSTD plus safe handling) and control (safe handling alone) were pharmacists or pharmacy technicians; in the other two studies, the people who used the intervention and control were nurses, pharmacists, or pharmacy technicians. Therefore, the evidence is mainly applicable to pharmacists or pharmacy technicians. The CSTD used in the studies were PhaSeal (13 studies), Tevadaptor (2 studies), SpikeSwan (1 study), PhaSeal and Tevadaptor (1 study), varied (5 studies), and not stated (2 studies). Therefore, the evidence is mainly applicable for PhaSeal. The studies' descriptions of the control groups were varied. Twenty-two studies provide data on one or more outcomes for this systematic review. All the studies are at serious risk of bias. The quality of evidence is very low for all the outcomes. Very low certainty evidence from small studies is insufficient to determine whether there is any important difference between CSTD and control groups in the proportion of people with positive urine tests for exposure for any of the drugs: cyclophosphamide alone (RR 0.83, 95% CI 0.46 to 1.52; I² = 12%; 2 studies; 2 hospitals; 20 participants; CSTD: 76.1% versus control: 91.7%); cyclophosphamide or ifosfamide (RR 0.09, 95% CI 0.00 to 2.79; 1 study; 1 hospital; 14 participants; CSTD: 6.4% versus control: 71.4%); and cyclophosphamide, ifosfamide, or gemcitabine (RR not estimable; 1 study; 1 hospital; 36 participants; 0% in both groups). Very low certainty evidence from small studies is insufficient to determine whether there is any important difference between CSTD and control groups in the proportion of surfaces contaminated or the quantity of contamination.
Overall, out of 24 comparisons in pharmacy areas or patient-care areas, there was a reduction in the proportion of surfaces contaminated in only one comparison; out of 15 comparisons in pharmacy areas or patient-care areas, there was a reduction in the quantity of contamination in only two comparisons. None of the studies report on atmospheric contamination, blood tests, or other measures of exposure to infusional hazardous drugs such as urine mutagenicity, chromosomal aberrations, sister chromatid exchanges, or micronuclei induction. None of the studies report short-term health outcomes such as reduction in skin rashes, medium-term reproductive health outcomes such as fertility and parity, or long-term health outcomes related to the development of any type of cancer or adverse events. Five studies (six hospitals) report the potential cost savings through the use of CSTD. The studies used different methods of calculating the costs, and the results were not reported in a format that could be pooled via meta-analysis. There is significant variability between the studies in terms of whether CSTD resulted in cost savings (the point estimates of the average potential cost savings ranged from (2017) USD −642,656 to (2017) USD 221,818). The healthcare professionals in the studies that provide data were mostly pharmacists or pharmacy technicians. Therefore, the evidence is mainly applicable to pharmacists and pharmacy technicians. Most of the studies that provide information for this review evaluated the use of PhaSeal; therefore the findings are mostly applicable to PhaSeal. Currently, no firm conclusions can be drawn on the effect of CSTD combined with safe handling versus safe handling alone due to very low certainty evidence available for the main outcomes. Multicentre randomised controlled trials may be feasible depending upon the proportion of people with exposure. The next best study design is interrupted time-series.
Future studies should evaluate exposure to a relevant selection of hazardous drugs used in the hospital, and they should measure direct short-term health outcomes.
Short- or long-term health outcomes were not reported in any studies. We found very low quality evidence (the best evidence available currently) that there is no considerable difference in exposure between CSTD plus safe handling versus safe handling alone. We also found very low quality evidence (the best evidence available currently) that there is no considerable effect of CSTD on the percentage of surfaces contaminated or the amount of contamination in pharmacy and patient-care areas for most drugs, even though there was a small effect on surface contamination in one comparison out of 24 and on the amount of drug contamination in two comparisons out of 15. Therefore, no firm conclusions can be drawn on the effect of CSTD plus safe handling versus safe handling alone due to the very low quality evidence available. Since most of the studies were conducted with pharmacy technicians and pharmacists and the CSTD used was PhaSeal, the evidence is applicable mainly to pharmacy technicians and pharmacists, and to PhaSeal. What was studied in the review? We included all types of studies that compared CSTD plus safe handling ('CSTD group') and safe handling alone ('control group'). What are the main results of the review? We included 24 studies (359 hospitals) in this review, none of which used the gold standard study design (randomised controlled trial) or explored a treatment's value for money. In 22 studies, the people who used the CSTD and safe handling were pharmacists or pharmacy technicians. Nineteen studies provide information that could be included for this study. No firm conclusion can be drawn on the effects of using CSTD on indirect measures of exposure, such as the presence of the hazardous drug in the urine of the healthcare professionals, or on the contamination of surfaces or the floor.
There is significant variability between the studies in terms of whether the use of CSTD resulted in cost savings, with some studies reporting increased costs and others reporting decreased costs after introducing CSTD. None of the studies report on short or long-term health outcomes such as reduction in skin rashes, infertility, miscarriage, development of any type of cancer, or adverse events. We judged the certainty of evidence for all outcomes to be very low because all the studies had one or more significant limitations in their design. Therefore, the reported effects of interventions are uncertain. How up-to-date is this review? We searched for studies up until 26 October 2017.
Four trials (1646 women) were included. The method of randomisation was unclear in all four trials and allocation concealment was reported in only one trial. Two trials used blinding of participants and outcomes. Vitamin B6 as oral capsules or lozenges resulted in a decreased risk of dental decay in pregnant women (capsules: risk ratio (RR) 0.84; 95% confidence interval (CI) 0.71 to 0.98; one trial, n = 371, low-quality evidence; lozenges: RR 0.68; 95% CI 0.56 to 0.83; one trial, n = 342, low-quality evidence). A small trial showed reduced mean birthweights with vitamin B6 supplementation (mean difference -0.23 kg; 95% CI -0.42 to -0.04; n = 33; one trial). We did not find any statistically significant differences in the risk of eclampsia (capsules: n = 1242; three trials; lozenges: n = 944; one trial), pre-eclampsia (capsules: n = 1197; two trials, low-quality evidence; lozenges: n = 944; one trial, low-quality evidence) or low Apgar scores at one minute (oral pyridoxine: n = 45; one trial), between supplemented and non-supplemented groups. No differences were found in Apgar scores at five minutes, or breastmilk production between controls and women receiving oral (n = 24; one trial) or intramuscular (n = 24; one trial) loading doses of pyridoxine at labour. Overall, the risk of bias was judged as unclear. The quality of the evidence using GRADE was low for both pre-eclampsia and dental decay. The other primary outcomes, preterm birth before 37 weeks and low birthweight, were not reported in the included trials. There were few trials, reporting few clinical outcomes and mostly with unclear trial methodology and inadequate follow-up. There is not enough evidence to detect clinical benefits of vitamin B6 supplementation in pregnancy and/or labour other than one trial suggesting protection against dental decay.
Future trials assessing this and other outcomes such as orofacial clefts, cardiovascular malformations, neurological development, preterm birth, pre-eclampsia and adverse events are required.
This review could not provide evidence from randomised controlled trials that routine supplementation with vitamin B6 during pregnancy is of any benefit, other than one trial suggesting protection against dental decay. It may cause harm if too much is taken, as amounts well above the recommended daily allowance are associated with numbness and difficulty in walking. Vitamin B6 is a water-soluble vitamin which plays vital roles in numerous metabolic processes in the human body and helps with the development of the nervous system. Vitamin B6 is contained in many foods including meat, poultry, fish, vegetables, and bananas. It is thought that B6 may play a role in the prevention of pre-eclampsia, where the mother’s blood pressure is high with large amounts of protein in the urine or other organ dysfunction, and in babies being born too early (preterm birth). Vitamin B6 may be helpful for reducing nausea in pregnancy. This review of four trials (involving 1646 pregnant women) assessed routine B6 supplementation during pregnancy with the aim of reducing the chances of pre-eclampsia and preterm birth. Vitamin B6 as oral capsules or lozenges resulted in a decreased risk of dental decay in pregnant women in one trial. Lozenges had a greater effect, suggesting a local or topical effect of pyridoxine within the oral cavity. We did not find any clear differences in the risk of eclampsia or pre-eclampsia (three trials and two trials, respectively, low quality evidence). The studies did not have enough data to be able to make any other useful assessments. The included trials were conducted between 1960 and 1983 and did not include important newborn outcomes that have only recently been associated with vitamin B6, such as decreases in cardiovascular malformations and orofacial clefts. The trials began at different times during pregnancy, most had high rates of loss to follow-up, and adverse effects of vitamin B6 (pyridoxine) use were not assessed. 
Further research assessing outcomes such as orofacial clefts, cardiovascular malformations, neurological development, preterm birth, pre-eclampsia and adverse events would be helpful.
We included five RCTs (733 women) comparing exercise with no active treatment, exercise with yoga and exercise with HT. The evidence was of low quality: limitations in study design were noted, along with inconsistency and imprecision. In the comparison of exercise versus no active treatment (three studies, n = 454 women), no evidence was found of a difference between groups in frequency or intensity of vasomotor symptoms (SMD -0.10, 95% CI -0.33 to 0.13, three RCTs, 454 women, I² = 30%, low-quality evidence). Nor was any evidence found of a difference between groups in the frequency or intensity of vasomotor symptoms when exercise was compared with yoga (SMD -0.03, 95% CI -0.45 to 0.38, two studies, n = 279 women, I² = 61%, low-quality evidence). It was not possible to include one of the trials in the meta-analyses; this trial compared three groups: exercise plus soy milk, soy milk only and control; results favoured exercise relative to the comparators, but study numbers were small. One trial compared exercise with HT, and the HT group reported significantly fewer flushes in 24 hours than the exercise group (mean difference 5.8, 95% CI 3.17 to 8.43, 14 participants). None of the trials found evidence of a difference between groups with respect to adverse effects, but data were very scanty. Evidence was insufficient to show whether exercise is an effective treatment for vasomotor menopausal symptoms. One small study suggested that HT is more effective than exercise. Evidence was insufficient to show the relative effectiveness of exercise when compared with HT or yoga.
Five studies randomly assigned 762 women experiencing hot flushes/night sweats. Three trials and two trials, respectively, were included in pooled comparisons of exercise versus control (n = 454 women) and exercise versus yoga (n = 279 women). One small study (14 women) compared exercise versus hormone therapy. When exercise was compared with no intervention, no evidence was found of any difference in their effect on hot flushes. One small study suggested that HT is more effective than exercise. Evidence was insufficient to show whether exercise was more effective than yoga. None of the trials found any evidence of differences between groups with respect to adverse effects, but data were very scanty. The methodological quality of the studies was variable. We assessed the evidence as of low quality: the main limitations were poor reporting of study methods, inconsistent results and lack of precision.
Four trials, in which a total of 1800 infants participated, compared oral/topical non-absorbed antifungal prophylaxis (nystatin or miconazole) with placebo or no drug. These trials had various methodological weaknesses including quasi-randomisation, lack of allocation concealment, and lack of blinding of intervention and outcomes assessment. The incidence of invasive fungal infection was very high in the control groups of three of these trials. Meta-analysis found a statistically significant reduction in the incidence of invasive fungal infection (typical risk ratio 0.20, 95% confidence interval 0.14 to 0.27; risk difference −0.18, −0.21 to −0.15) but substantial statistical heterogeneity was present. We did not find a statistically significant effect on mortality (typical risk ratio 0.87, 0.72 to 1.05; risk difference −0.03, −0.06 to 0.01). None of the trials assessed posthospital discharge outcomes. Three trials (N = 326) assessed the effect of oral/topical non-absorbed versus systemic antifungal prophylaxis. Meta-analyses did not find any statistically significant differences in the incidences of invasive fungal infection or all-cause mortality. The finding of a reduction in risk of invasive fungal infection in very low birth weight infants treated with oral/topical non-absorbed antifungal prophylaxis should be interpreted cautiously because of methodological weaknesses in the included trials. Further large randomised controlled trials in current neonatal practice settings are needed to resolve this uncertainty. These trials might compare oral/topical non-absorbed antifungal agents with placebo, with each other, or with systemic antifungal agents and should include an assessment of effect on long-term neurodevelopmental outcomes.
Study characteristics: Four trials, in which a total of 1800 infants participated, examined whether giving VLBW infants a drug to prevent fungi growing on the skin or in the gut reduced the risk of bloodstream or other severe infection. The trials used one of two commonly available drugs (nystatin or miconazole) and compared these with either a placebo ("dummy" drug) or no drug. These trials, however, had some design weaknesses that make it less certain that their results can be taken at face value. Key results: The overall analysis suggested that this treatment might reduce severe infection rates in VLBW infants but there was no evidence of a reduction in the risk of dying. Conclusions: Larger and higher quality trials are needed to resolve this uncertainty.
We included four trials with 271 patients undergoing open liver resections. The patients were randomised to ischaemic preconditioning (n = 135) and no ischaemic preconditioning (n = 136) prior to continuous vascular occlusion (portal triad clamping in three trials and hepatic vascular exclusion in one trial). All the trials excluded cirrhotic patients. We assessed all four trials as having high risk of bias. There was no difference in mortality, liver failure, other peri-operative morbidity, hospital stay, intensive therapy unit stay, or operating time between the two groups. The proportion of patients requiring blood transfusion was lower in the ischaemic preconditioning group. There was also a trend towards a lower amount of red cell transfusion favouring the ischaemic preconditioning group. There was no difference in the haemodynamic changes, blood loss, bilirubin, or prothrombin activity between the two groups. The enzyme markers of liver injury were lower in the ischaemic preconditioning group on the first post-operative day. Currently, there is no evidence to suggest a protective effect of ischaemic preconditioning in non-cirrhotic patients undergoing liver resection under continuous vascular occlusion. Ischaemic preconditioning reduces the blood transfusion requirements in patients undergoing liver resection.
The aim of this review was to assess the role of ischaemic preconditioning in liver resections performed utilising vascular occlusion. Four randomised clinical trials including 271 patients undergoing open liver resections fulfilled the inclusion criteria of this review. The patients were randomised to ischaemic preconditioning (n = 135) and no ischaemic preconditioning (n = 136) prior to continuous vascular occlusion. All the trials excluded cirrhotic patients. We assessed all four trials as having high risk of bias (high risk of systematic error). There was no difference in mortality, liver failure, post-operative complications, hospital stay, intensive therapy unit stay, or operating time between the two groups. The proportion of patients requiring blood transfusion was lower in the ischaemic preconditioning group. The reasons for this are not clear. There was no difference in blood loss or enzyme markers of liver function between the two groups. The enzyme markers of liver injury were lower in the ischaemic preconditioning group on the first post-operative day. Currently, there is no evidence to suggest a protective effect of ischaemic preconditioning in non-cirrhotic patients undergoing liver resection under continuous vascular occlusion. Ischaemic preconditioning reduces the blood transfusion requirements in patients undergoing liver resection. Further high quality randomised clinical trials are necessary to assess the role of ischaemic preconditioning. Further studies are necessary to understand the mechanism of ischaemic preconditioning.
The review includes a total of 42 trials, and 40 of these trials contributed data on 4240 participants. Twenty studies, involving 1918 women, compared clindamycin plus an aminoglycoside (gentamicin for all studies except for one that used tobramycin) with another regimen. When assessing the individual subgroups of other antibiotic regimens (i.e. cephalosporins, monobactams, penicillins, and quinolones), there were fewer treatment failures in those treated with clindamycin plus an aminoglycoside as compared to those treated with cephalosporins (RR 0.69, 95% CI 0.49 to 0.99; participants = 872; studies = 8; low quality evidence) or penicillins (RR 0.65, 95% CI 0.46 to 0.90; participants = 689; studies = 7, low quality evidence). For the remaining subgroups for the primary analysis, the differences were not significant. There were significantly fewer wound infections in those treated with clindamycin plus an aminoglycoside versus cephalosporins (RR 0.53, 95% CI 0.30 to 0.93; participants = 500; studies = 4; low quality evidence). Similarly, there were more treatment failures in those treated with gentamicin/penicillin when compared to those treated with gentamicin/clindamycin (RR 2.57, 95% CI 1.48 to 4.46; participants = 200; studies = 1). There were fewer treatment failures when an agent with a longer half-life that is administered less frequently was used (RR 0.61, 95% CI 0.40 to 0.92; participants = 484; studies = 2) as compared to using cefoxitin. There were more treatment failures (RR 1.94, 95% CI 1.38 to 2.72; participants = 774; studies = 7) and wound infections (RR 1.88, 95% CI 1.17 to 3.02; participants = 740; studies = 6) in those treated with a regimen with poor activity against penicillin-resistant anaerobic bacteria as compared to those treated with a regimen with good activity against penicillin-resistant anaerobic bacteria.
Once-daily dosing was associated with a shorter length of hospital stay (MD -0.73, 95% CI -1.27 to -0.20; participants = 322; studies = 3). There were no differences between groups with respect to severe complications, and no trials reported any maternal deaths. Regarding the secondary outcomes, three studies that compared continued oral antibiotic therapy after intravenous therapy with no oral therapy found no differences in recurrent endometritis or other outcomes. There were no differences between groups for the outcome of allergic reactions. The overall risk of bias was unclear in most of the studies. The quality of the evidence using GRADE comparing clindamycin and an aminoglycoside with another regimen (cephalosporins or penicillins) was low to very low for therapeutic failure, severe complications, wound infection and allergic reaction. The combination of clindamycin and gentamicin is appropriate for the treatment of endometritis. Regimens with good activity against penicillin-resistant anaerobic bacteria are better than those with poor activity against penicillin-resistant anaerobic bacteria. There is no evidence that any one regimen is associated with fewer side-effects. Following clinical improvement of uncomplicated endometritis which has been treated with intravenous therapy, the use of additional oral therapy has not been proven to be beneficial.
There were more treatment failures in women treated with a penicillin plus gentamicin (one study) compared with those treated with clindamycin plus gentamicin. Seven trials showed that an antibiotic treatment that had poor activity against bacteria resistant to penicillin had a higher failure rate and more wound infections than an antibiotic treatment that had good activity against these bacteria. There was no evidence that any of the antibiotic combinations had fewer adverse effects - including allergic reaction - than other antibiotic combinations. If the endometritis was uncomplicated and improved with intravenous antibiotics, there did not appear to be a need to follow the intravenous antibiotics with a course of oral antibiotics. Overall the reliability of the studies' results was unclear, the numbers of women studied were often small and data on other outcomes were limited; furthermore, a number of the studies had been funded by drug companies that conceivably would have had a vested interest in the results.
We included two studies, with a total of 64 randomised participants (50 and 14 participants) aged 18 years or over, with a perianal abscess. In both studies, participants were enrolled on the first post-operative day and randomised to continued packing by community district nursing teams or to no packing. Participants in the non-packing group managed their own wounds in the community and used absorbent dressings to cover the area. Fortnightly follow-up was undertaken until the cavity closed and the skin re-epithelialised, which constituted healing. For non-attenders, telephone follow-up was conducted. Both studies were at high risk of attrition, performance and detection bias. It was not possible to pool the two studies for the outcome of time to healing. It is unclear whether continued post-operative packing of the cavity of perianal abscesses affects time to complete healing. One study reported a mean time to wound healing of 26.8 days (95% confidence interval (CI) 22.7 to 30.7) in the packing group and 19.5 days (95% CI 13.6 to 25.4) in the non-packing group (it was not clear if all participants healed). We re-analysed the data and found no clear difference in the time to healing (7.30 days longer in the packing group, 95% CI -2.24 to 16.84; 14 participants). This was assessed as very low quality evidence (downgraded three levels for very serious imprecision and serious risk of bias). The second study reported a median time to complete wound healing of 24.5 days (range 10 to 150 days) in the packing group and 21 days (range 8 to 90 days) in the non-packed group. There was insufficient information to be able to recreate the analysis, and the original analysis was inappropriate (it did not account for censoring). This second study also provided very low quality evidence (downgraded four levels for serious risk of bias, serious indirectness and very serious imprecision).
There was very low quality evidence (downgraded for risk of bias, indirectness and imprecision) of no difference in wound pain scores at the initial dressing change. Both studies also reported patients' retrospective judgement of wound pain over the preceding two weeks (visual analogue scale, VAS) as lower in the non-packed group than in the packed group (very low quality evidence), but we have been unable to reproduce these analyses as no variance data were published. There was no clear evidence of a difference in the number of post-operative fistulae detected between the packed and non-packed groups (risk ratio (RR) 2.31, 95% CI 0.56 to 9.45, I2 = 0%) (very low quality evidence downgraded three levels for very serious imprecision and serious risk of bias). There was no clear evidence of a difference in the number of abscess recurrences between the packed and non-packed groups over the variable follow-up periods (RR 0.72, 95% CI 0.22 to 2.37, I2 = 0%) (very low quality evidence downgraded three levels for serious risk of bias and very serious imprecision). No study reported participant health-related quality of life/health status, incontinence rates, time to return to work or normal function, resource use in terms of number of dressing changes or visits to a nurse, or change in wound size. It is unclear whether using internal dressings (packing) for the healing of perianal abscess cavities influences time to healing, wound pain, development of fistulae, abscess recurrence or other outcomes. Despite this absence of evidence, the practice of packing abscess cavities is commonplace. Given the lack of high quality evidence, decisions to pack may be based on local practices or patient preferences. Further clinical research is needed to assess the effects and patient experience of packing.
After extensive searching to find relevant studies, we found only two randomised controlled trials (RCTs) that were eligible for this review (RCTs provide more robust results than other trial types). The studies were small, with a total of 64 participants randomised, all over 18 years of age, with a perianal abscess. In the studies, participants received either packing by community nursing teams or no packing. Participants in the non-packing group managed their own wounds by using absorbent dressings to cover the area with no internal dressing. Participants were seen fortnightly until the cavity had healed. It is not clear whether time to complete wound healing is affected by packing of the cavity (and what evidence exists is very low quality). There was very low quality evidence that packing made no difference to wound pain at the first dressing change. There was very low quality evidence that, on judging the wound pain over the preceding two weeks, participants in the packing group had experienced more pain than those in the non-packing group. It is not clear whether packing or not affects the number of post-operative fistulae or abscess recurrences. We did not find any RCTs that compared participant health-related quality of life/health status, incontinence rates, time to return to work or normal function, resource use in terms of number of dressing changes or visits to a nurse, or change in wound size. There is no high quality evidence for the use of packing for healing perianal abscess cavities. The evidence is up to date to 17 May 2016.
Three small studies evaluating 154 infants were included in this review. One study reported a significant reduction in the risk of hyperbilirubinaemia and rate of treatment with phototherapy associated with enteral supplementation with prebiotics (risk ratio (RR) 0.75, 95% confidence interval (95% CI) 0.58 to 0.97; one study, 50 infants; low-quality evidence). Meta-analyses of two studies showed no significant difference in maximum plasma unconjugated bilirubin levels in infants with prebiotic supplementation (mean difference (MD) 0.14 mg/dL, 95% CI -0.91 to 1.20, I² = 81%, P = 0.79; two studies, 78 infants; low-quality evidence). There was no evidence of a significant difference in duration of phototherapy between the prebiotic and control groups, which was only reported by one study (MD 0.10 days, 95% CI -2.00 to 2.20; one study, 50 infants; low-quality evidence). The meta-analyses of two studies demonstrated a significant reduction in the length of hospital stay (MD -10.57 days, 95% CI -17.81 to -3.33; 2 studies, 78 infants; I² = 0%, P = 0.004; low-quality evidence). Meta-analysis of the three studies showed a significant increase in stool frequency in the prebiotic groups (MD 1.18, 95% CI 0.90 to 1.46, I² = 90%; 3 studies, 154 infants; high-quality evidence). No significant difference in mortality during hospital stay after enteral supplementation with prebiotics was reported (typical RR 0.94, 95% CI 0.14 to 6.19; I² = 6%, P = 0.95; 2 studies; 78 infants; low-quality evidence). There were no reports of the need for exchange transfusion and incidence of acute bilirubin encephalopathy, chronic bilirubin encephalopathy, and major neurodevelopmental disability in the included studies. None of the included studies reported any side effects. Current studies are unable to provide reliable evidence about the effectiveness of prebiotics on hyperbilirubinaemia. 
Additional large, well-designed RCTs should be undertaken in neonates to compare the effects of enteral supplementation of milk with prebiotics on neonatal hyperbilirubinaemia against a placebo (particularly distilled water) or no supplementation.
We included three small studies (with a total of 154 infants) that compared the effects of feeding supplementation with prebiotics on neonatal jaundice to a placebo (such as distilled water). The evidence is up to date as of 14 June 2018. There is inadequate evidence to assess the effectiveness of prebiotics on neonatal jaundice. According to the available data, the incidence of neonatal hyperbilirubinaemia (low-quality evidence) and treatment with phototherapy (low-quality evidence) were decreased by feeding supplementation with prebiotics, but only one small study reported on these outcomes. The meta-analyses of these small studies demonstrated a significant reduction in the length of hospital stay (low-quality evidence) and a significant increase in stool frequency (high-quality evidence) in infants with prebiotic supplementation versus placebo. Furthermore, meta-analyses showed no significant difference in maximum plasma bilirubin levels (low-quality evidence), duration of phototherapy (low-quality evidence) and neonatal mortality (low-quality evidence) between groups. The review found only three randomised clinical trials that compared prebiotic supplementation with a placebo. More research is needed.
Two trials with a total of 78 participants met the inclusion criteria. Both the populations and the 'standard' treatments differed in the two studies.
Oral steroids as an adjunct to intranasal corticosteroids
One trial in adults with nasal polyps included 30 participants. All participants used intranasal corticosteroids and were randomised to either short-course oral steroids (oral methylprednisolone, 1 mg/kg and reduced progressively over a 21-day treatment course) or no additional treatment. None of the primary outcome measures of interest in this review were reported by the study. There may have been an important reduction in the size of the polyps (measured by the nasal polyps score, a secondary outcome measure) in patients receiving oral steroids and intranasal corticosteroids, compared to intranasal corticosteroids alone (mean difference (MD) -0.46, 95% confidence interval (CI) -0.87 to -0.05; 30 participants; scale 1 to 4) at the end of treatment (21 days). This corresponds to a large effect size, but we are very uncertain about this estimate as we judged the study to be at high risk of bias. Moreover, longer-term data were not available and the other outcomes of interest were not reported.
Oral steroids as an adjunct to antibiotics
One trial in children (mean age of eight years) without nasal polyps included 48 participants. The trial compared oral corticosteroids (oral methylprednisolone, 1 mg/kg and reduced progressively over a 15-day treatment course) with placebo in participants who also received a 30-day course of antibiotics. This study addressed one of the primary outcome measures (disease severity) and one secondary outcome (CT score). For disease severity the four key symptoms used to define chronic rhinosinusitis in children (nasal blockage, nasal discharge, facial pressure, cough) were combined into one score.
There was a greater improvement in symptom severity 30 days after the start of treatment in patients who received oral steroids and antibiotics compared with placebo and antibiotics (MD -7.10, 95% CI -9.59 to -4.61; 45 participants; scale 0 to 40). The observed mean difference corresponds to a large effect size. At the same time point there was a difference in CT scan score (MD -2.90, 95% CI -4.91 to -0.89; 45 participants; scale 0 to 24). We assessed the quality of the evidence to be low. There were no data available for the longer term (three months). There might be an improvement in symptom severity, polyp size and condition of the sinuses when assessed using CT scans in patients taking oral corticosteroids when these are used as an adjunct therapy to antibiotics or intranasal corticosteroids, but the quality of the evidence supporting this is low or very low (we are uncertain about the effect estimate; the true effect may be substantially different from the estimate of the effect). It is unclear whether the benefits of oral corticosteroids as an adjunct therapy are sustained beyond the short follow-up period reported (up to 30 days), as no longer-term data were available. There were no data in this review about the adverse effects associated with short courses of oral corticosteroids as an adjunct therapy. More research in this area, particularly research evaluating longer-term outcomes and adverse effects, is required.
This review includes evidence up to 11 August 2015. We included two randomised controlled trials with a total of 78 participants. One trial involved 30 adults with nasal polyps. Participants received either intranasal corticosteroids and oral corticosteroids or only intranasal corticosteroids. The only result reported of interest to this review was whether the size of the nasal polyps was reduced when these treatments were completed (three weeks). One trial involved 48 children (mean age of eight years) with chronic rhinosinusitis but no nasal polyps. Participants received either antibiotics and oral corticosteroids or only antibiotics and a placebo (sugar pill). The oral corticosteroids and placebo were given for 15 days and the antibiotics were given for 30 days. The trial reported findings when the antibiotic treatment was completed (at one month). At the end of a three-week treatment course, people who took both intranasal corticosteroids and oral steroids may have had smaller nasal polyps than people who just received intranasal corticosteroids. The trial did not follow up people to determine whether the polyp size increased after the end of the trial. The trial did not provide information on adverse events or other outcomes important to patients, such as symptom severity or quality of life. Children who received both antibiotics and oral corticosteroids seemed to have a lower total symptom score and better computerised tomography (CT) scan score after treatment compared with children who received antibiotics and control treatment. The reporting of adverse effects in this trial was not very clear and so it is difficult to tell if any participant experienced gastrointestinal disturbances, mood changes or difficulty in sleeping.
We judged the quality of the evidence for oral steroids plus intranasal steroids for adults with nasal polyps to be very low (we are very uncertain about the estimate) as the evidence comes from one trial that has a low number of participants. The trial had a high risk of bias due to the way it was conducted. The trial did not report adverse events and did not report results after the end of treatment. We judged the quality of the evidence for oral steroids plus antibiotics for children to be low (further research is very likely to have an important impact on our confidence in the effect estimate and is likely to change the estimate) as the evidence comes from one small trial. The trial did not have a high risk of bias, but it only included children without nasal polyps, who might not have the same results as adults with nasal polyps. The trial did not report results after the end of treatment and the adverse effects of treatment were not well reported.
We included three trials involving a total of 209 participants. The studies were at moderate to high risk of bias. All included studies had differences in participant selection criteria, length of follow-up and outcome measurement, precluding a meta-analysis. The participants were all adults over 18 years with subjective tinnitus, but one study conducted in 2013 (n = 109) included only elderly patients.
Improvement in tinnitus severity and disability
Only the study in elderly patients used a validated instrument (Tinnitus Handicap Questionnaire) for this primary outcome. The authors of this cross-over study did not report the results of the two phases separately and found no significant differences in the proportion of patients reporting tinnitus improvement at four months of follow-up: 5% (5/93) versus 2% (2/94) in the zinc and placebo groups, respectively (risk ratio (RR) 2.53, 95% confidence interval (CI) 0.50 to 12.70; very low-quality evidence). None of the included studies reported any significant adverse effects.
Secondary outcomes
For the secondary outcome change in tinnitus loudness, one study reported no significant difference between the zinc and placebo groups after eight weeks: mean difference in tinnitus loudness -9.71 dB (95% CI -25.53 to 6.11; very low-quality evidence). Another study also measured tinnitus loudness but used a 0- to 100-point scale. The authors of this second study reported no significant difference between the zinc and placebo groups after four months: mean difference in tinnitus loudness rating scores 0.50 (95% CI -5.08 to 6.08; very low-quality evidence). Two studies used unvalidated instruments to assess tinnitus severity.
One (with 50 participants) reported the severity of tinnitus using a non-validated scale (0 to 7 points) and found no significant difference in subjective tinnitus scores between the zinc and placebo groups at the end of eight weeks of follow-up (mean difference (MD) -1.41, 95% CI -2.97 to 0.15; very low-quality evidence). A third trial (n = 50) also evaluated the improvement of tinnitus using a non-validated instrument (a 0 to 10 scale: 10 = severe and unbearable tinnitus). In this study, after eight weeks there was no difference in the proportion of patients with improvement in their tinnitus, 8.7% (2/23) treated with zinc versus 8% (2/25) of those who received a placebo (RR 1.09, 95% CI 0.17 to 7.10, very low-quality evidence). None of the included studies reported any of our other secondary outcomes (quality of life, change in socioeconomic impact associated with work, change in anxiety and depression disorders, change in psychoacoustic parameters or change in thresholds on pure tone audiometry). We found no evidence that the use of oral zinc supplementation improves symptoms in adults with tinnitus.
We included a total of three trials involving 209 participants who were treated with oral zinc pills or placebo. All patients were adults over 18 years who had subjective tinnitus. All three studies investigated improvement in tinnitus as their primary outcome. One study assessed adverse effects and our secondary outcome 'change in overall severity of tinnitus'. Two studies assessed tinnitus loudness. Only one study, which enrolled only elderly patients, used a validated instrument (the Tinnitus Handicap Questionnaire (THQ)) to measure the primary outcome. The other two studies measured tinnitus using scales (from 0 to 7 and from 0 to 10), but these scales were not validated instruments for studying tinnitus. All three included studies had differences in their participant selection, length of follow-up and outcome measurement, which prevented a meta-analysis (combining of results). Only one trial (conducted in 2013) used a validated instrument (the THQ) to measure improvement in tinnitus, our primary outcome. The authors reported no significant difference between the groups. Another study (2003) reported the severity of tinnitus using a non-validated scale (0 to 7) and found a significant difference in the subjective tinnitus scores, which favoured the zinc group. However, this result may be biased because losses to follow-up were unbalanced and higher in the placebo group. A third study (1991) also evaluated improvement of tinnitus using a non-validated instrument (a scale of 0 to 10) and found no significant difference between groups. There were no severe adverse effects associated with zinc. Three cases of mild adverse effects were reported in different participants (e.g. mild gastric symptoms). Two studies (2003 and 2013) assessed change in tinnitus loudness (one of our secondary outcomes), but did not find a difference between patients treated with zinc compared to those who took a placebo. Two studies assessed change in the overall severity of tinnitus.
One study, published in 1991, did not find any difference for this outcome between the groups. The second study, published in 2003, reported a significant reduction in subjective tinnitus score in the zinc group and no difference in the placebo group. However, both studies used a non-validated scale. The quality of the evidence is very low. We found no evidence that the use of oral zinc supplementation improves symptoms in adults with tinnitus. This evidence is up to date to 14 July 2016.
A total of 75 trials were included. Overall, the use of cell salvage reduced the rate of exposure to allogeneic RBC transfusion by a relative 38% (RR 0.62; 95% CI 0.55 to 0.70). The absolute reduction in risk (ARR) of receiving an allogeneic RBC transfusion was 21% (95% CI 15% to 26%). In orthopaedic procedures the RR of exposure to RBC transfusion was 0.46 (95% CI 0.37 to 0.57) compared to 0.77 (95% CI 0.69 to 0.86) for cardiac procedures. The use of cell salvage resulted in an average saving of 0.68 units of allogeneic RBC per patient (WMD -0.68; 95% CI -0.88 to -0.49). Cell salvage did not appear to impact adversely on clinical outcomes. The results suggest cell salvage is efficacious in reducing the need for allogeneic red cell transfusion in adult elective cardiac and orthopaedic surgery. The use of cell salvage did not appear to impact adversely on clinical outcomes. However, the methodological quality of trials was poor. As the trials were unblinded and lacked adequate concealment of treatment allocation, transfusion practices may have been influenced by knowledge of the patients' treatment status potentially biasing the results in favour of cell salvage.
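The relationship between the relative reduction (RR 0.62, i.e. a 38% relative reduction) and the absolute risk reduction (21 percentage points) reported above can be sketched with a short calculation. This is illustrative only: the counts below are hypothetical numbers chosen so that the results roughly match the review's pooled estimates, not data from any included trial.

```python
# Illustrative sketch: risk ratio (RR) and absolute risk reduction (ARR)
# from two-group event counts. The counts are hypothetical, chosen so the
# outputs approximate the review's pooled RR of 0.62 and ARR of 21%.

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk in the treatment group divided by risk in the control group."""
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

def absolute_risk_reduction(events_tx, n_tx, events_ctrl, n_ctrl):
    """Control-group risk minus treatment-group risk (in proportion units)."""
    return events_ctrl / n_ctrl - events_tx / n_tx

# Hypothetical: 34 of 100 cell-salvage patients transfused vs 55 of 100 controls.
rr = risk_ratio(34, 100, 55, 100)
arr = absolute_risk_reduction(34, 100, 55, 100)
print(round(rr, 2))   # 0.62  -> a 38% relative reduction
print(round(arr, 2))  # 0.21  -> 21 percentage points absolute reduction
```

Note that the same RR can correspond to very different ARRs depending on the baseline transfusion rate, which is why the review reports both.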
The authors found 75 studies investigating the effectiveness of cell salvage in orthopaedic (36 studies), cardiac (33 studies), and vascular (6 studies) surgery. Overall, the findings show that cell salvage reduces the need for transfusions of donated blood. The authors conclude that there appears to be sufficient evidence to support the use of cell salvage in cardiac and orthopaedic surgery. Cell salvage does not appear to cause any adverse clinical outcomes. As the methodological quality of the trials was poor, the findings may be biased in favour of cell salvage. Large trials of high methodological quality that assess the relative effectiveness, safety, and cost-effectiveness of cell salvage in different surgical procedures should be the focus of future research in this area.
Fifty-one studies met our inclusion criteria, involving approximately 800,000 participants. The studies included were diverse, including observational studies, between- and within-participant experimental studies, cohort and cross-sectional studies, and time-series analyses. Few studies assessed behavioural outcomes in youth and non-smokers. Five studies assessed the primary outcomes: one observational study assessed smoking prevalence among 700,000 participants until one year after standardised packaging in Australia; four studies assessed consumption in 9394 participants, including a series of Australian national cross-sectional surveys of 8811 current smokers, in addition to three smaller studies. No studies assessed uptake, cessation, or relapse prevention. Two studies assessed quit attempts. Twenty studies examined other behavioural outcomes and 45 studies examined non-behavioural outcomes (e.g. appeal, perceptions of harm). In line with the challenges inherent in evaluating standardised tobacco packaging, a number of methodological limitations were apparent in the included studies, and overall we judged most studies to be at high or unclear risk of bias in at least one domain. The one included study assessing the impact of standardised tobacco packaging on smoking prevalence in Australia found a 3.7% reduction in odds when comparing before to after the packaging change, or a 0.5 percentage point drop in smoking prevalence, when adjusting for confounders. Confidence in this finding is limited, due to the nature of the evidence available, and is therefore rated low by GRADE standards. Findings were mixed amongst the four studies assessing consumption, with some studies finding no difference and some studies finding evidence of a decrease; certainty in this outcome was rated very low by GRADE standards due to the limitations in study design.
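A reduction in the *odds* of smoking is not the same as a reduction in *prevalence*, and the two figures above (a 3.7% odds reduction versus a 0.5 percentage point prevalence drop) can be reconciled with a short conversion. The sketch below is illustrative only: the 17% baseline prevalence is an assumed figure chosen for the arithmetic, not a value reported by the review.

```python
# Illustrative sketch: converting a relative reduction in the odds of
# smoking into a percentage-point change in prevalence.
# The baseline prevalence (17%) is an assumption for illustration.

def prevalence_after_odds_reduction(baseline_prevalence, odds_reduction):
    """Apply a relative reduction to the odds, then convert back to prevalence."""
    odds = baseline_prevalence / (1 - baseline_prevalence)  # p / (1 - p)
    new_odds = odds * (1 - odds_reduction)                  # e.g. 3.7% lower odds
    return new_odds / (1 + new_odds)                        # odds back to proportion

p0 = 0.17  # assumed baseline smoking prevalence (hypothetical)
p1 = prevalence_after_odds_reduction(p0, 0.037)
drop_pp = (p0 - p1) * 100  # change in percentage points
print(round(drop_pp, 1))   # 0.5
```

At baseline prevalences in this range, a 3.7% odds reduction corresponds to roughly a half percentage point fall in prevalence, consistent with the figures reported for the Australian study.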
One national study of Australian adult smoker cohorts (5441 participants) found that quit attempts increased from 20.2% prior to the introduction of standardised packaging to 26.6% one year post-implementation. A second study of calls to quitlines provides indirect support for this finding, with a 78% increase observed in the number of calls after the implementation of standardised packaging. Here again, certainty is low. Studies of other behavioural outcomes found evidence of increased avoidance behaviours when using standardised packs, reduced demand for standardised packs and reduced craving. Evidence from studies measuring eye-tracking showed increased visual attention to health warnings on standardised compared to branded packs. Corroborative evidence for the latter finding came from studies assessing non-behavioural outcomes, which in general found greater warning salience when viewing standardised than branded packs. There was mixed evidence for quitting cognitions, whereas findings with youth generally pointed towards standardised packs being less likely to motivate smoking initiation than branded packs. We found the most consistent evidence for appeal, with standardised packs rating lower than branded packs. Tobacco in standardised packs was also generally perceived as worse-tasting and lower quality than tobacco in branded packs. Standardised packaging also appeared to reduce misperceptions that some cigarettes are less harmful than others, but only when dark colours were used for the uniform colour of the pack. The available evidence suggests that standardised packaging may reduce smoking prevalence. Only one country had implemented standardised packaging at the time of this review, so evidence comes from one large observational study that provides evidence for this effect. A reduction in smoking behaviour is supported by routinely collected data by the Australian government.
Data on the effects of standardised packaging on non-behavioural outcomes (e.g. appeal) are clearer and provide plausible mechanisms of effect consistent with the observed decline in prevalence. As standardised packaging is implemented in different countries, research programmes should be initiated to capture long term effects on tobacco use prevalence, behaviour, and uptake. We did not find any evidence suggesting standardised packaging may increase tobacco use.
We searched nine databases for articles evaluating standardised packaging that had been already reviewed by academics and published before January 2016. We also checked references in those papers to other studies and contacted the authors where necessary. We found 51 studies involving approximately 800,000 participants. These studies varied considerably. Some studies focused on the effect of standardised packaging in Australia, and included looking at overall smoking levels, whether smokers altered their behaviour such as by cutting down the number of cigarettes they smoked, and whether smokers were making more quit attempts. We also included experiments in which people used or viewed standardised tobacco packs and examined their responses, compared to when they were viewing branded packs. We also included studies that assessed people’s eye movements when they looked at different packs and how willing people were to buy, and how much they were willing to pay for, standardised compared to branded packs. Only five studies looked at our key outcomes. One study in Australia looked at data from 700,000 people before and after standardised packaging was introduced. This study found that there was a half a percentage point drop in the proportion of people who used tobacco after the introduction of standardised packaging, compared to before, when adjusting for other factors which could affect this. Four other studies looked at whether current smokers changed the number of cigarettes they smoked. Two studies from Australia looked at this, one using surveys which included 8811 current smokers, and found no change in the number of cigarettes smoked. The three smaller studies found mixed results. Two further studies looked at quit attempts and observed increases in these in Australia after standardised packaging was introduced. 
The remainder of the studies looked at other outcomes, and the most consistent finding was that standardised packaging reduced how appealing people found the packs compared with branded packs. No studies reported the number of people who quit using tobacco, the number of people who started using tobacco, or the number of people who returned to using tobacco after quitting. Certainty in these findings is limited for several reasons, including the difficulties involved in studying national policies like standardised packaging. However, findings suggesting standardised packaging may decrease tobacco use are supported by routine data from the Australian government and studies looking at other outcomes. For example, in our included studies people consistently found standardised packs less appealing than branded packs. We did not find any evidence suggesting standardised packaging may increase tobacco use.
Four studies involving 971 participants were included. All results were reported using a zero to 10 pain scale. Three studies (four treatment arms) involving 498 participants compared subcutaneous lignocaine with control, with no significant difference in pain scores; MD 0.12 (95% confidence interval (CI) -0.46 to 0.69). Two studies (three treatment arms) involving 399 participants compared intravenous pain regimens with control. A significant reduction in pain score was observed with intravenous opioid and anxiolytic; MD -0.90 (95% CI -1.54 to -0.27). One study involving 60 participants compared levobupivacaine with placebo. Longer-acting local anaesthetic significantly lowered the pain score by an MD of -1.10 (95% CI -1.26 to -0.94). The data are insufficient to identify any influence of pain regimens on vascular and procedural complication rates. No studies reported appropriate blinding for all treatment arms. The largest study, comprising 661 participants, was unblinded with a quality score of two out of five. No new studies have been found since the last version of this review and the conclusions therefore remain the same. Intravenous pain regimens and levobupivacaine may have greater efficacy when compared to control for the management of pain related to femoral sheath removal. However, a definitive study is still required because the clinical difference is small. There is no evidence to support the use of subcutaneous lignocaine. There is insufficient evidence to determine if pain relief influences the rate of complications. One new study has been included as a 'study awaiting assessment' as we await further information from the study authors.
In this systematic review of randomised controlled trials four studies were reviewed. Three studies involving 498 participants compared subcutaneous lignocaine, a short-acting local anaesthetic, with a control group (participants received either no pain relief or an inactive substance known as a placebo). Two studies involving 399 people compared intravenous opioids (fentanyl or morphine) and an anxiolytic (midazolam) with a control group. One study involving 60 people compared subcutaneous levobupivacaine, a long-acting local anaesthetic, with a control group. Intravenous pain regimens and subcutaneous levobupivacaine appear to reduce the pain experienced during femoral sheath removal. However, the size of the reduction was small. A significant reduction in pain was not experienced by participants who received subcutaneous lignocaine or who were in the control group. There were insufficient data to determine a correlation between pain relief administration and either adverse events or complications. Some patients may benefit from routine pain relief using levobupivacaine or intravenous pain regimens. Identifying who may potentially benefit from pain relief requires clinical judgement and consideration of patient preference. The mild level of pain generally experienced during this procedure should not influence the decision as some people can experience moderate levels of pain with the conventional wound care.
Only one quasi-randomized controlled study with 121 study participants comparing two types of IPC devices met the inclusion criteria. The authors found no cases of symptomatic DVT or PE in either the calf-thigh compression group or the plantar compression group during the first three weeks after the THR. The calf-thigh pneumatic compression was more effective than plantar compression for reducing thigh swelling during the early postoperative stage. The strength of the evidence in this review is weak as only one trial was included and it was classified as having a high risk of bias. There is a lack of evidence from randomized controlled trials to make an informed choice of IPC device for preventing venous thromboembolism (VTE) following total hip replacement. More research is urgently required, ideally a multicenter, properly designed RCT including a sufficient number of participants. Clinically relevant outcomes such as mortality, imaging-diagnosed asymptomatic VTE and major complications must be considered.
We looked for randomized controlled trials which compared different types of IPC devices for preventing venous thromboembolism in patients after THR. We found one study with 121 participants comparing a calf-thigh compression device with a foot (plantar) compression device. There were no cases of symptomatic DVT or PE either in the calf-thigh compression group or the plantar compression group in the first three weeks after the THR. The calf-thigh pneumatic compression was more effective than plantar compression for reducing thigh swelling one week following surgery. The postoperative swelling in the calf-thigh pump group was reduced earlier than in the plantar pump group. However, other outcomes such as imaging-diagnosed asymptomatic VTE were not determined and it is not possible to draw reliable conclusions from this single study with a high risk of bias. We therefore suggest that more primary research is required to allow an informed choice of IPC device for preventing venous thromboembolism following THR.
No additional studies were included or excluded in the updated review. Twenty papers detailing 18 trials were considered. Only three trials were randomised controlled trials and were included in the review. The remaining fifteen studies were excluded for various reasons. All three included trials had a small sample size and reported the trial design, outcome measures and analysis poorly. There were also variations in the outcome measures used between the trials. In addition, there was no consistency in the reporting of means and medians for blood loss during the operation. It was therefore not possible to pool the data to perform meta-analysis. However, the reported blood loss when using a tourniquet was between 0 and 16 ml compared to between 107 and 133 ml when not using a tourniquet (P < 0.01). Although there were significant quality issues with the available evidence, the use of a tourniquet would appear to reduce blood loss during surgery. There were no reported differences between the use or non-use of a tourniquet in terms of complications and morbidity. However, the available trials were not of sufficient size to detect rarer complications such as nerve damage.
This review found that the amount of blood loss was clearly reduced when a tourniquet was used during surgery for varicose veins, with no overall increase in operative time, reported adverse events or change in patient reported pain and activity after surgery. Three trials were included in the review, in which a total of 176 men and women (211 legs) were randomised to either use or non-use of a tourniquet. All trials took place in the UK between 1989 and 2000. Those patients who did not have a tourniquet had a wider range of total blood loss and patients in the upper limits lost a significant amount of blood. A reduction in blood loss may also result in a reduction in post-operative bruising but only one of the trials (50 patients) looked at this. It found a clear reduction in the area of bruising with the use of a tourniquet. The trials did not have a large enough number of participants to determine any rarer complications of surgery with the use of a tourniquet such as nerve damage or arterial injury, especially in older patients.
One hundred and twenty-one trials with 6700 participants were included. In most trials, PRT was performed two to three times per week and at a high intensity. PRT resulted in a small but significant improvement in physical ability (33 trials, 2172 participants; SMD 0.14, 95% CI 0.05 to 0.22). Functional limitation measures also showed improvements: e.g. there was a modest improvement in gait speed (24 trials, 1179 participants, MD 0.08 m/s, 95% CI 0.04 to 0.12); and a moderate to large effect for getting out of a chair (11 trials, 384 participants, SMD -0.94, 95% CI -1.49 to -0.38). PRT had a large positive effect on muscle strength (73 trials, 3059 participants, SMD 0.84, 95% CI 0.67 to 1.00). Participants with osteoarthritis reported a reduction in pain following PRT (6 trials, 503 participants, SMD -0.30, 95% CI -0.48 to -0.13). There was no evidence from 10 other trials (587 participants) that PRT had an effect on bodily pain. Adverse events were poorly recorded but adverse events related to musculoskeletal complaints, such as joint pain and muscle soreness, were reported in many of the studies that prospectively defined and monitored these events. Serious adverse events were rare, and no serious events were reported to be directly related to the exercise programme. This review provides evidence that PRT is an effective intervention for improving physical functioning in older people, including improving strength and the performance of some simple and complex activities. However, some caution is needed with transferring these exercises for use with clinical populations because adverse events are not adequately reported.
Evidence from 121 randomised controlled trials (6,700 participants) shows that older people who exercise their muscles against a force or resistance become stronger. They also improve their performance of simple activities such as walking, climbing steps, or standing up from a chair more quickly. The improvement in activities such as getting out of a chair or stair climbing is generally greater than the improvement in walking speed. Moreover, these progressive resistance training (PRT) exercises also improved older people's physical abilities, including more complex daily activities such as bathing or preparing a meal. PRT also reduced pain in people with osteoarthritis. There was insufficient evidence to comment on the risks of PRT or its long term effects.
Twenty-three trials with a total of 1821 children and young people were included. Generally, the trials were small, and only one was assessed to have a low risk of bias. Thirteen trials compared exercise alone with no intervention. Eight were included in the meta-analysis, and overall the results were heterogeneous. One study with a low risk of bias showed a standardised mean difference (SMD) of 1.33 (95% CI 0.43 to 2.23), while the SMDs for the three studies with a moderate risk of bias and the four studies with a high risk of bias were 0.21 (95% CI -0.17 to 0.59) and 0.57 (95% CI 0.11 to 1.04), respectively. Twelve trials compared exercise as part of a comprehensive programme with no intervention. Only four provided data sufficient to calculate overall effects, and the results indicate a moderate short-term difference in self-esteem in favour of the intervention (SMD 0.51, 95% CI 0.15 to 0.88). The results indicate that exercise has positive short-term effects on self-esteem in children and young people. Since there are no known negative effects of exercise and many positive effects on physical health, exercise may be an important measure in improving children's self-esteem. These conclusions are based on several small low-quality trials.
This review of trials suggests that exercise has positive short-term effects on self-esteem in children and young people, and concludes that exercise may be an important measure in improving children's self-esteem. However, the reviewers note that the trials included in the review were small-scale, and recognise the need for further well-designed research in this area.
We found no RCTs comparing pre-admission antibiotics versus no pre-admission antibiotics or placebo. We included one open-label, non-inferiority RCT with 510 participants, conducted during an epidemic in Niger, evaluating a single dose of intramuscular ceftriaxone versus a single dose of intramuscular long-acting (oily) chloramphenicol. Ceftriaxone was not inferior to chloramphenicol in reducing mortality (RR 1.21, 95% CI 0.57 to 2.56; N = 503; 308 confirmed meningococcal meningitis; 26 deaths; moderate-quality evidence), clinical failures (RR 0.83, 95% CI 0.32 to 2.15; N = 477; 18 clinical failures; moderate-quality evidence), or neurological sequelae (RR 1.29, 95% CI 0.63 to 2.62; N = 477; 29 with sequelae; low-quality evidence). No adverse effects of treatment were reported. Estimated treatment costs were similar. No data were available on disease burden due to sequelae. We found no reliable evidence to support the use of pre-admission antibiotics for suspected cases of non-severe meningococcal disease. Moderate-quality evidence from one RCT indicated that single intramuscular injections of ceftriaxone and long-acting chloramphenicol were equally effective, safe, and economical in reducing serious outcomes. The choice between these antibiotics should be based on affordability, availability, and patterns of antibiotic resistance. Further RCTs comparing different pre-admission antibiotics, accompanied by intensive supportive measures, are ethically justified in people with less severe illness, and are needed to provide reliable evidence in different clinical settings.
We searched for studies comparing giving versus not giving empiric antibiotics or comparing different antibiotics in those with suspected meningococcal disease. We found one randomised trial comparing single intramuscular doses of two different long-acting antibiotics. The evidence is current to January 2017. The included study was conducted in nine primary care facilities in Niger during an outbreak of meningococcal disease in 2003. Of 510 adults and children studied, 251 received ceftriaxone and 259 received chloramphenicol. The study was funded by Médecins Sans Frontières. There was no difference between the two antibiotics in the number of people who died, did not respond to treatment, or developed neurological disabilities. The results were similar in those in whom the diagnosis was subsequently confirmed. Neither antibiotic had significant adverse effects. Although the study was well conducted, the overall quality of the evidence was only moderate for death and treatment failures because the study excluded children less than two months old, pregnant women, and the severely ill. The quality of evidence was lower for neurological disabilities because of the short duration of follow-up. Since meningococcal disease has serious consequences, not giving antibiotics empirically would be unethical. However, future research comparing different antibiotics in people of all ages and illness severity is required to provide reliable evidence in different clinical settings.
We included seven studies, which involved 723 participants. We assessed four of the seven studies as being at high risk of bias and three had an unclear risk of bias; the quality of the evidence was difficult to assess as there was often insufficient detail reported to enable any conclusions to be drawn about the methodological rigour of the studies. Four trials involving 285 participants measured cognitive development and we synthesised these data in a meta-analysis. Compared to the control group, there was no statistically significant impact of the intervention on cognitive development (standardised mean difference (SMD) 0.30; 95% confidence interval -0.18 to 0.78). Only three studies reported socioemotional outcomes and there were insufficient data to combine into a meta-analysis. No study reported on adverse effects. This review does not provide evidence of the effectiveness of home-based interventions that are specifically targeted at improving developmental outcomes for preschool children from socially disadvantaged families. Future studies should endeavour to better document and report their methodological processes.
The purpose of this review was to look at whether home-based parenting programmes, which aim to improve child development by showing parents how to provide a better quality home environment for their child, are effective in doing so. Seven randomised controlled trials (RCTs) met the inclusion criteria for this review. It was possible to combine the results from four of the seven studies, which showed that children who received the programme did not have better cognitive development than a control group. Socioemotional development was measured in three studies but we could not combine these data to help reach a conclusion about effectiveness. None of the studies measured adverse effects. The quality of the evidence in the studies was difficult to assess due to poor reporting. More high quality research is needed.
Nine studies, including data from 1228 women, were included in the review evaluating the effect of either LMWH (enoxaparin or nadroparin in varying doses) or aspirin or a combination of both, on the chance of live birth in women with recurrent miscarriage, with or without inherited thrombophilia. Studies were heterogeneous with regard to study design and treatment regimen and three studies were considered to be at high risk of bias. Two of these three studies at high risk of bias showed a benefit of one treatment over the other, but in sensitivity analyses (in which studies at high risk of bias were excluded) anticoagulants did not have a beneficial effect on live birth, regardless of which anticoagulant was evaluated (risk ratio (RR) for live birth in women who received aspirin compared to placebo 0.94 (95% confidence interval (CI) 0.80 to 1.11; n = 256); in women who received LMWH compared to aspirin RR 1.08 (95% CI 0.93 to 1.26; n = 239); and in women who received LMWH and aspirin compared to no treatment RR 1.01 (95% CI 0.87 to 1.16; n = 322)). Obstetric complications such as preterm delivery, pre-eclampsia, intrauterine growth restriction and congenital malformations were not significantly affected by any treatment regimen. In the included studies, aspirin did not increase the risk of bleeding, but treatment with LMWH and aspirin increased the risk of bleeding significantly in one study. Local skin reactions (pain, itching, swelling) to injection of LMWH were reported in almost 40% of patients in the same study. There is a limited number of studies on the efficacy and safety of aspirin and heparin in women with a history of at least two unexplained miscarriages with or without inherited thrombophilia. The quality of the nine reviewed studies varied, different treatments were studied and, of the studies at low risk of bias, only one was placebo-controlled. No beneficial effect of anticoagulants was found in studies at low risk of bias.
Therefore, this review does not support the use of anticoagulants in women with unexplained recurrent miscarriage. The effect of anticoagulants in women with unexplained recurrent miscarriage and inherited thrombophilia needs to be assessed in further randomised controlled trials; at present there is no evidence of a beneficial effect.
Irrespective of the type or combination of anticoagulant, no benefit of anticoagulant treatment was found for live births. Obstetric complications were not clearly affected by any treatment regimen. Injection of low molecular weight heparin caused local skin reactions (pain, itching, swelling) in one study (side effects were not regularly reported in all studies). The quality of the nine reviewed studies varied, and different treatments were studied. Three studies were considered at high risk of bias. The number of studies on this topic remains limited. Thrombophilia refers to blood clotting disorders associated with a predisposition to thrombosis and thus increased risk for thrombotic events. It can be inherited as well as acquired, as is the case in the antiphospholipid syndrome. Both inherited and acquired thrombophilia are associated with vascular thrombosis as well as pregnancy complications including recurrent miscarriage and premature delivery.
Twenty-six trials fulfilled the inclusion criteria of the review. All of the 26 trials, except one trial with 30 participants, were at high risk of bias. Nineteen of the trials with 1263 randomised participants provided data for this review. Ten of the 19 trials compared local anaesthetic wound infiltration versus inactive control. One of the 19 trials compared local anaesthetic wound infiltration with two inactive controls, normal saline and no intervention. Two of the 19 trials had four arms comparing local anaesthetic wound infiltration with inactive controls in the presence and absence of co-interventions to decrease pain after laparoscopic cholecystectomy. Four of the 19 trials had three or more arms that could be included for the comparison of local anaesthetic wound infiltration versus inactive control and different methods of local anaesthetic wound infiltration. The remaining two trials compared different methods of local anaesthetic wound infiltration. Most trials included only low anaesthetic risk people undergoing elective laparoscopic cholecystectomy. Seventeen trials randomised a total of 1095 participants to local anaesthetic wound infiltration (587 participants) versus no local anaesthetic wound infiltration (508 participants). Various anaesthetic agents were used but bupivacaine was the commonest local anaesthetic used. There was no mortality in either group in the seven trials that reported mortality (0/280 (0%) in the local anaesthetic infiltration group versus 0/259 (0%) in the control group). The effect of local anaesthetic on the proportion of people who developed serious adverse events was imprecise and compatible with an increase or no difference in serious adverse events (seven trials; 539 participants; 2/280 (0.8%) in the local anaesthetic group versus 1/259 (0.4%) in the control group; RR 2.00; 95% CI 0.19 to 21.59; very low quality evidence). None of the serious adverse events were related to local anaesthetic wound infiltration.
None of the trials reported patient quality of life. The proportion of participants who were discharged as day surgery patients was higher in the local anaesthetic infiltration group than in the no local anaesthetic infiltration group (one trial; 97 participants; 33/50 (66.0%) in the local anaesthetic group versus 20/47 (42.6%) in the control group; RR 1.55; 95% CI 1.05 to 2.28; very low quality evidence). The effect of local anaesthetic on the length of hospital stay was compatible with a decrease, increase, or no difference in the length of hospital stay between the two groups (four trials; 327 participants; MD -0.26 days; 95% CI -0.67 to 0.16; very low quality evidence). The pain scores as measured by the visual analogue scale (0 to 10 cm) were lower in the local anaesthetic infiltration group than the control group at 4 to 8 hours (13 trials; 806 participants; MD -1.33 cm on the VAS; 95% CI -1.54 to -1.12; very low quality evidence) and 9 to 24 hours (12 trials; 756 participants; MD -0.36 cm on the VAS; 95% CI -0.53 to -0.20; very low quality evidence). The effect of local anaesthetic on the time taken to return to normal activity between the two groups was imprecise and compatible with a decrease, increase, or no difference in the time taken to return to normal activity (two trials; 195 participants; MD 0.14 days; 95% CI -0.59 to 0.87; very low quality evidence). None of the trials reported on return to work. Four trials randomised a total of 149 participants to local anaesthetic wound infiltration prior to skin incision (74 participants) versus local anaesthetic wound infiltration at the end of surgery (75 participants). Two trials randomised a total of 176 participants to four different local anaesthetics (bupivacaine, levobupivacaine, ropivacaine, neosaxitoxin). Although there were differences between the groups in some outcomes, the changes were not consistent.
There was no evidence to support the preference of one local anaesthetic over another, or to prefer administration of local anaesthetic at one time point over another. Serious adverse events were rare in studies evaluating local anaesthetic wound infiltration (very low quality evidence). There is very low quality evidence that infiltration reduces pain in low anaesthetic risk people undergoing elective laparoscopic cholecystectomy. However, the clinical importance of this reduction in pain is likely to be small. Further randomised clinical trials at low risk of systematic and random errors are necessary. Such trials should include important clinical outcomes such as quality of life and time to return to work in their assessment.
We identified 19 randomised clinical trials in this review. Most participants in the trials were low anaesthetic risk people undergoing planned laparoscopic cholecystectomy. A total of 1095 participants were randomised to local anaesthetic wound infiltration (587 participants) or no local anaesthetic wound infiltration (508 participants) in 17 trials. The choice of whether the participants received local anaesthetic agents (or not) was determined by a method similar to the toss of a coin so that the treatments were compared in groups of patients who were as similar as possible. There were no deaths in either group in the seven trials (539 participants) that reported deaths. The difference in serious complications between the groups was imprecise. There were no local anaesthetic-related complications in nearly 450 participants who received local anaesthetic wound infiltration in the different trials that reported complications. None of the trials reported quality of life or the time taken to return to work. The proportion of participants who were discharged as day surgery patients was higher in the local anaesthetic group than in the control group in the only trial that reported this information. The difference in the length of hospital stay or the time taken to return to normal activity was imprecise. Pain was lower in the participants who received local anaesthetic wound infiltration compared with those in the control groups at four to eight hours and at nine to 24 hours, as measured by the visual analogue scale (a chart which rates the amount of pain on a scale of 0 to 10). In the comparisons of different methods of local anaesthetic infiltration, there were differences between the groups in some outcomes but the changes were not consistent. There is, therefore, no evidence to prefer any particular drug or method of administering local anaesthetics. Serious adverse events were rare in studies evaluating local anaesthetic wound infiltration.
There is very low quality evidence that infiltration reduces pain in low anaesthetic risk people undergoing elective laparoscopic cholecystectomy. However, the clinical importance of this reduction in pain is likely to be small. Most of the trials were at high risk of bias, that is there is a possibility of arriving at wrong conclusions by overestimating the benefits or underestimating the harms of one method over another because of the way a study was conducted. The overall quality of evidence was very low. Further trials are necessary. Such trials should include outcomes such as quality of life, hospital stay, the time taken to return to normal activity, and the time taken to return to work, which are important for the person undergoing laparoscopic cholecystectomy and the people who provide funds for the treatment.
Eight studies (225 participants) were included. In general, children included in the studies were young (average age less than two years in the majority of included studies). Severity of croup was described as moderate to severe in all included studies. Six studies took place in the inpatient setting, one in the ED and one setting was not specified. Six of the eight studies were deemed to have a low risk of bias and the risk of bias was unclear in the remaining two studies. Nebulized epinephrine was associated with croup score improvement 30 minutes post-treatment (three RCTs, standardized mean difference (SMD) -0.94; 95% confidence interval (CI) -1.37 to -0.51; I2 statistic = 0%). This effect was not significant two and six hours post-treatment. Nebulized epinephrine was associated with significantly shorter hospital stay than placebo (one RCT, MD -32.0 hours; 95% CI -59.1 to -4.9). Comparing racemic and L-epinephrine, no difference in croup score was found after 30 minutes (SMD 0.33; 95% CI -0.42 to 1.08). After two hours, L-epinephrine showed significant reduction compared with racemic epinephrine (one RCT, SMD 0.87; 95% CI 0.09 to 1.65). There was no significant difference in croup score between administration of nebulized epinephrine via IPPB versus nebulization alone at 30 minutes (one RCT, SMD -0.14; 95% CI -1.24 to 0.95) or two hours (SMD -0.72; 95% CI -1.86 to 0.42). None of the studies sought or reported data on adverse effects. Nebulized epinephrine is associated with clinically and statistically significant transient reduction of symptoms of croup 30 minutes post-treatment. Evidence does not favor racemic epinephrine or L-epinephrine, or IPPB over simple nebulization. The authors note that data and analyses were limited by the small number of relevant studies and total number of participants and thus most outcomes contained data from very few or even single studies.
This review looked at trials of inhaled epinephrine for the treatment of children with croup and comprises only eight studies with 225 participants. Of the eight included studies, six were assessed as having low risk of bias and two as unclear risk of bias (based upon assessment of adequate random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, completeness of outcome data, and selective reporting). Studies assessed a variety of outcome measures and few studies examined the same outcomes; therefore, most outcomes contained data from a maximum of three studies, and in some cases only single studies. Compared to no medication, inhaled epinephrine improved croup symptoms in children at 30 minutes following treatment (three studies, 94 children). This treatment effect disappeared two hours after treatment (one study, 20 children). However, children's symptoms did not become worse than prior to treatment. No study measured adverse events. The evidence is current to July 2013.
This review included five studies (with a total of 303 to 305 participants) comparing the effects of betahistine with placebo in adults with subjective idiopathic tinnitus. Four studies were parallel-group RCTs and one had a cross-over design. The risk of bias was unclear in all of the included studies. Due to heterogeneity in the outcomes measured and measurement methods used, very limited data pooling was possible. When we pooled the data from two studies for the primary outcome tinnitus loudness, the mean difference on a 0- to 10-point visual analogue scale at one-month follow-up was not significant between betahistine and placebo (-0.16, 95% confidence interval (CI) -1.01 to 0.70; 81 participants) (very low-quality evidence). There were no reports of upper gastrointestinal discomfort (a significant adverse effect) in any study. As a secondary outcome, one study found no difference in the change in the Tinnitus Severity Index between betahistine and placebo (mean difference at 12 weeks 0.02, 95% CI -1.05 to 1.09; 50 participants) (moderate-quality evidence). None of the studies reported the other secondary outcomes of changes in depressive symptoms or depression, anxiety symptoms or generalised anxiety, health-related quality of life as measured by a validated instrument, or tinnitus intrusiveness. Other adverse effects that were reported were not treatment-related. There is an absence of evidence to suggest that betahistine has an effect on subjective idiopathic tinnitus when compared to placebo. The evidence suggests that betahistine is generally well tolerated with a similar risk of adverse effects to placebo treatments. The quality of evidence for the reported outcomes, using GRADE, ranged from moderate to very low. If future research into the effectiveness of betahistine in patients with tinnitus is felt to be warranted, it should use rigorous methodology.
Randomisation and blinding should be of the highest quality, given the subjective nature of tinnitus and the strong likelihood of a placebo response. The CONSORT statement should be used in the design and reporting of future studies. We also recommend the development of validated, patient-centred outcome measures for research in the field of tinnitus.
Our review identified five randomised controlled trials with a total of 303 to 305 participants who suffered from tinnitus. These studies compared participants receiving betahistine to those receiving a placebo. Four studies allocated participants into parallel groups. In one study, each participant took all study medications in a pre-defined sequence (a cross-over design). The outcomes that we evaluated included tinnitus loudness and intrusiveness, tinnitus symptoms and side effects. The included studies did not show differences in tinnitus loudness, severity of tinnitus symptoms or side effects between participants receiving betahistine and participants receiving a placebo. No significant side effects were reported. We had planned to evaluate changes in tinnitus intrusiveness, depression and anxiety and quality of life, but these were not measured. The evidence suggests that betahistine is generally well tolerated with a similar risk of side effects to placebo. The quality of the evidence ranged from moderate to very low. The risk of bias in all of the included studies was unclear. The results were drawn from one or two studies only. In some studies, the participants who were included did not fully represent the entire population of people with tinnitus and so we cannot draw general conclusions.
Eighteen RCTs enrolling a total of 10,499 participants were eligible for the review. The results from 17 of 18 of these RCTs, published between 1995 and 2011, were suitable for meta-analysis and allowed us to quantify the therapeutic efficacy of interferon in terms of disease-free survival (17 trials) and overall survival (15 trials). Adjuvant interferon was associated with significantly improved disease-free survival (HR (hazard ratio) = 0.83; 95% CI (confidence interval) 0.78 to 0.87; P value < 0.00001) and overall survival (HR = 0.91; 95% CI 0.85 to 0.97; P value = 0.003). We detected no significant between-study heterogeneity (disease-free survival: I² statistic = 16%, Q-test P value = 0.27; overall survival: I² statistic = 6%; Q-test P value = 0.38). Considering that the 5-year overall survival rate for TNM stage II–III cutaneous melanoma is 60%, the number needed to treat (NNT) is 35 participants (95% CI = 21 to 108 participants) in order to prevent 1 death. The results of subgroup analysis failed to answer the question of whether some treatment features (i.e. dosage, duration) might have an impact on interferon efficacy or whether some participant subgroups (i.e. with or without lymph node positivity) might benefit differently from interferon adjuvant treatment. Grade 3 and 4 toxicity was observed in a minority of participants: in some trials, no-one had fever or fatigue of Grade 3 severity, but in other trials, up to 8% had fever and up to 23% had fatigue of Grade 3 severity. Less than 1% of participants had fever and fatigue of Grade 4 severity. Although it impaired quality of life, toxicity disappeared after treatment discontinuation. The results of this meta-analysis support the therapeutic efficacy of adjuvant interferon alpha for the treatment of people with high-risk (AJCC TNM stage II-III) cutaneous melanoma in terms of both disease-free survival and, though to a lower extent, overall survival.
Interferon is also valid as a reference treatment in RCTs investigating new therapeutic agents for the adjuvant treatment of this participant population. Further investigation is required to select people who are most likely to benefit from this treatment.
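The number needed to treat quoted above can be recovered from the 5-year baseline survival and the pooled hazard ratio. A minimal Python sketch of that arithmetic, assuming proportional hazards (treated survival equals baseline survival raised to the power of the hazard ratio); the function name is ours, not from the review:

```python
def nnt_from_hr(baseline_survival: float, hazard_ratio: float) -> float:
    """Number needed to treat, assuming proportional hazards:
    treated survival = baseline_survival ** hazard_ratio."""
    treated_survival = baseline_survival ** hazard_ratio
    absolute_risk_reduction = treated_survival - baseline_survival
    return 1.0 / absolute_risk_reduction

# Review figures: 60% 5-year overall survival, pooled HR 0.91
nnt = nnt_from_hr(0.60, 0.91)
print(round(nnt))  # 35, matching the review's quoted NNT
```

Applying the same formula at the confidence limits of the HR (0.85 and 0.97) reproduces the quoted NNT range of roughly 21 to 108 participants.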
Although not every individual study demonstrated a survival benefit for patients treated with interferon, combining the available evidence we found that the use of postoperative interferon improves the survival of those with high-risk melanoma. On average, the toxicity associated with interferon administration (such as fever and fatigue) is limited; moreover, it is reversible when the treatment is stopped. Since interferon alpha is the only approved drug after surgery for those with high-risk melanoma, efforts to identify those who might benefit most from this treatment are very important in order to avoid unnecessary toxicity for those who would not benefit from interferon alpha treatment. Combination of interferon with novel drugs is another field of ongoing research to improve the life expectancy of people with high-risk melanoma.
We included 73 RCTs, with 12,212 participants, comparing GnRH antagonist to long-course GnRH agonist protocols. The quality of the evidence was moderate: limitations were poor reporting of study methods. Live birth: There was no evidence of a difference in live birth rate between GnRH antagonist and long-course GnRH agonist (OR 1.02, 95% CI 0.85 to 1.23; 12 RCTs, n = 2303, I2 = 27%, moderate quality evidence). The evidence suggested that if the chance of live birth following GnRH agonist is assumed to be 29%, the chance following GnRH antagonist would be between 25% and 33%. OHSS: GnRH antagonist was associated with a lower incidence of any grade of OHSS than GnRH agonist (OR 0.61, 95% CI 0.51 to 0.72; 36 RCTs, n = 7944, I2 = 31%, moderate quality evidence). The evidence suggested that if the risk of OHSS following GnRH agonist is assumed to be 11%, the risk following GnRH antagonist would be between 6% and 9%. Other adverse effects: There was no evidence of a difference in miscarriage rate per woman randomised between the GnRH antagonist group and the GnRH agonist group (OR 1.03, 95% CI 0.82 to 1.29; 34 RCTs, n = 7082, I2 = 0%, moderate quality evidence). With respect to cycle cancellation, GnRH antagonist was associated with a lower incidence of cycle cancellation due to high risk of OHSS (OR 0.47, 95% CI 0.32 to 0.69; 19 RCTs, n = 4256, I2 = 0%). However, cycle cancellation due to poor ovarian response was higher in women who received GnRH antagonist than in those who were treated with GnRH agonist (OR 1.32, 95% CI 1.06 to 1.65; 25 RCTs, n = 5230, I2 = 68%; moderate quality evidence). There is moderate quality evidence that the use of GnRH antagonist compared with long-course GnRH agonist protocols is associated with a substantial reduction in OHSS without reducing the likelihood of achieving live birth.
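The absolute-risk ranges quoted above follow from combining the assumed baseline risk with the limits of the odds-ratio confidence interval. A minimal Python sketch of that conversion (the function name is ours; the one-point discrepancy with the quoted 25% lower bound is rounding):

```python
def risk_from_or(baseline_risk: float, odds_ratio: float) -> float:
    """Convert an odds ratio to an absolute risk, given a baseline risk."""
    baseline_odds = baseline_risk / (1.0 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1.0 + new_odds)

# Live birth: assumed baseline chance 29%, OR 95% CI 0.85 to 1.23
low = risk_from_or(0.29, 0.85)
high = risk_from_or(0.29, 1.23)
print(f"{low:.0%} to {high:.0%}")  # 26% to 33% (the review quotes 25% to 33%)
```

The same conversion with an assumed 11% baseline and the OHSS CI limits (0.51 to 0.72) gives roughly 6% to 8%, in the same ballpark as the quoted 6% to 9%.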
We found 73 randomised controlled trials comparing GnRH antagonist with GnRH agonist in a total of 12,212 women undergoing ART. The evidence is current to May 2015. There was no evidence of a difference between the groups in live birth rates (i.e. rates at conclusion of a course of treatment). The evidence suggested that if the chance of live birth following GnRH agonist is assumed to be 29%, the chance following GnRH antagonist would be between 25% and 33%. However, the OHSS rates were much higher after GnRH agonist. The evidence suggested that if the risk of OHSS following GnRH agonist is assumed to be 11%, the risk following GnRH antagonist would be between 6% and 9%. The evidence was of moderate quality for both live birth and OHSS. The main limitations of the evidence were the possibility of publication bias for live birth (with small studies likely to report favourable outcomes for GnRH antagonist) and poor reporting of study methods for OHSS.
We included 15 studies with 498 participants. Ten studies compared physical activity to minimal dietary and behavioural intervention or no intervention. Five studies compared combined dietary, exercise and behavioural intervention to minimal intervention. One study compared behavioural intervention to minimal intervention. Risk of bias varied: eight studies had adequate sequence generation, seven had adequate clinician or outcome assessor blinding, seven had adequate allocation concealment, six had complete outcome data and six were free of selective reporting. No studies assessed the fertility primary outcomes of live birth or miscarriage. No studies reported the secondary reproductive outcome of menstrual regularity, as defined in this review. Lifestyle intervention may improve a secondary (endocrine) reproductive outcome, the free androgen index (FAI) (MD -1.11, 95% confidence interval (CI) -1.96 to -0.26, 6 RCTs, N = 204, I2 = 71%, low-quality evidence). Lifestyle intervention may reduce weight (kg) (MD -1.68 kg, 95% CI -2.66 to -0.70, 9 RCTs, N = 353, I2 = 47%, low-quality evidence). Lifestyle intervention may reduce body mass index (BMI) (kg/m2) (MD -0.34 kg/m2, 95% CI -0.68 to -0.01, 12 RCTs, N = 434, I2 = 0%, low-quality evidence). We are uncertain of the effect of lifestyle intervention on glucose tolerance (glucose outcomes in oral glucose tolerance test) (mmol/L/minute) (SMD -0.02, 95% CI -0.38 to 0.33, 3 RCTs, N = 121, I2 = 0%, low-quality evidence). Lifestyle intervention may improve the free androgen index (FAI), weight and BMI in women with PCOS. We are uncertain of the effect of lifestyle intervention on glucose tolerance. There were no studies that looked at the effect of lifestyle intervention on live birth, miscarriage or menstrual regularity. Most studies in this review were of low quality mainly due to high or unclear risk of bias across most domains and high heterogeneity for the FAI outcome.
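The I² percentages reported above quantify between-study heterogeneity and are derived from Cochran's Q statistic, which the review does not report. The sketch below uses hypothetical Q values, chosen only so that one result lands near the 71% quoted for the FAI outcome (6 RCTs):

```python
def i_squared(q: float, num_studies: int) -> float:
    """Higgins' I-squared (%): the share of variability across studies
    beyond chance, computed from Cochran's Q and the number of studies."""
    df = num_studies - 1
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical Q values (not reported in the review), for illustration:
print(round(i_squared(17.2, 6)))  # 71 -> substantial heterogeneity
print(round(i_squared(3.0, 9)))   # 0  -> Q below df is truncated to zero
```

By convention, I² near 0% suggests little inconsistency between studies, while values above about 50% to 75% suggest substantial heterogeneity.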
We found 15 studies that included 498 participants. Ten studies compared physical activity to minimal dietary and behavioural intervention or no intervention. Five studies compared combined dietary, exercise and behavioural intervention to minimal intervention. One study compared behavioural intervention to minimal intervention. The risk of bias in the studies varied and was generally unclear. The evidence is current to March 2018. There were no studies that investigated the effect of a healthy lifestyle on live birth, miscarriage or regularity of menstrual cycles. Adopting a healthy lifestyle may result in weight loss or reduction in male hormone levels in some individuals. Diet and exercise may not have an effect on the body's ability to maintain normal blood glucose levels. The evidence was of low quality. The main limitations in the evidence were inconsistent and imprecise findings, and poor reporting of the methods used in the studies.
We included 10 trials, with a total of 422 participants, that addressed two of the outcomes of interest to this review: swelling (oedema) and bruising (ecchymosis). Nine studies on rhinoplasty used a variety of different types, and doses, of corticosteroids. Overall, the results of the included studies showed that there is some evidence that perioperative administration of corticosteroids decreases formation of oedema over the first two postoperative days. Meta-analysis was only possible for two studies, with a total of 60 participants, and showed that a single perioperative dose of 10 mg dexamethasone decreased oedema formation in the first two days after surgery (SMD = -1.16, 95% CI -1.71 to -0.61, low quality evidence). The evidence for ecchymosis was less consistent across the studies, with some contradictory results, but overall there was some evidence that perioperatively administered corticosteroids decreased ecchymosis formation over the first two days after surgery (SMD = -1.06, 95% CI -1.47 to -0.65, two studies, 60 participants, low quality evidence). The difference was not maintained after this initial period. One study, with 40 participants, showed that high doses of methylprednisolone (over 250 mg) decreased both ecchymosis and oedema between the first and seventh postoperative days. The only study that assessed facelift surgery identified no positive effect on oedema with preoperative administration of corticosteroids. Five trials did not report on harmful (adverse) effects; four trials reported that there were no adverse effects; and one trial reported adverse effects in two participants treated with corticosteroids as well as in four participants treated with placebo. None of the studies reported recovery time, patient satisfaction or quality of life. The studies included were all at an unclear risk of selection bias and at low risk of bias for other domains.
There is limited evidence for rhinoplasty that a single perioperative dose of corticosteroids decreases oedema and ecchymosis formation over the first two postoperative days, but the difference is not maintained after this period. There is also limited evidence that high doses of corticosteroids decrease both ecchymosis and oedema between the first and seventh postoperative days. The clinical significance of this decrease is unknown and there is little evidence available regarding the safety of this intervention. More studies are needed because at present the available evidence does not support the use of corticosteroids for prevention of complications following facial plastic surgery.
The review authors searched the medical literature up to January 2014, and identified 10 relevant medical trials, with a total of 422 participants. Nine of these studies were on people having rhinoplasty (surgery to reshape the nose) and one was on people having a facelift. The trials investigated a variety of corticosteroid medicines, as well as different doses of corticosteroids. People in the studies were assessed for swelling and bruising for up to 10 days after surgery. None of the studies stated the funding source. There was some low quality evidence that a single dose of corticosteroid administered prior to surgery might reduce swelling and bruising over the first two days after surgery, but this advantage was not maintained beyond two days. One study, with 40 participants, showed that high doses of corticosteroid decreased both swelling and bruising between the first and seventh postoperative days. The usefulness of these results is uncertain and there is currently no evidence regarding the safety of the treatment. Five trials did not report on harmful (adverse) effects; four trials reported that there were no adverse effects; and one trial reported adverse effects in two participants treated with corticosteroids as well as in four participants treated with placebo. None of the studies reported recovery time, patient satisfaction or quality of life. Therefore, the current evidence does not support use of corticosteroids as a routine treatment in facial plastic surgery. More trials will need to be conducted before it can be established whether this treatment works and is safe.
We found 17 studies involving 3212 patients. The methods of 15 studies were at high risk of bias; in only two studies was the risk of bias low. Trials used "positive drugs", whose efficacy was not known, as controls. Different Chinese herbal preparations were tested in nearly all trials; in only one trial was a Chinese herbal preparation tested twice. In seven trials, six herbal preparations were found to be more effective at enhancing recovery than the control preparations. In the other 10 studies, seven herbal preparations were not shown to be significantly different from the control. One study did not describe the difference between the intervention and control groups. Chinese herbal medicines may shorten the symptomatic phase in patients with the common cold. However, the lack of trials at low enough risk of bias, or using a placebo or a drug clearly identified as a control, means that there is too much uncertainty to recommend any kind of Chinese medicinal herbs for the common cold.
Although we included 17 trials, involving 3212 patients, in this review, the risk of bias was so high that the evidence did not support using any Chinese herbal preparation(s) for the common cold. Well-designed clinical trials are required.
Five studies provided results from adequate parallel-group data. Barnhart 1999 and Dayal 2006 enrolled perimenopausal women with complaints of decreased well-being and, using three cognitive measures, found no significant effect of DHEA compared with placebo at 3 months. Wolf 1998b enrolled 75 healthy volunteers (37 women and 38 men aged 59-81) in a study of the effect of DHEA supplements on cognitive impairment induced by stress; after two weeks of treatment, placebo group performance deteriorated significantly on a test of selective attention following a psychosocial stressor (P < 0.05), while deterioration was not evident in the DHEA group (P = 0.85). However, when compared with placebo, DHEA was associated with significant impairment on a visual memory recall test (P < 0.01) following the stressor. No significant effects were found on a third cognitive task, and no effects were found on tasks administered in the absence of a stressor. van Niekerk 2001 found no effect of three months of DHEA supplementation on cognitive function in 46 men aged 62-76. Nair 2006 enrolled 57 women and 87 men with low levels of sulphated DHEA in a 24-month study; no significant changes in quality of life measures were found for either sex. In Von Muhlen 2008, one year of DHEA showed no benefit on cognitive performance in 225 healthy older people. Reduced performance on a visual memory recall test was observed in one trial, and a significant drop-out rate in favour of placebo emerged in another. What little evidence there is from controlled trials does not support a beneficial effect of DHEA supplementation on cognitive function of non-demented middle-aged or elderly people. There is inconsistent evidence from the controlled trials about adverse effects of DHEA.
In view of growing public enthusiasm for DHEA supplementation, particularly in the USA, and the theoretical possibility of long-term neuroprotective effects of DHEA there is a need for further high quality trials in which the duration of DHEA treatment is longer than one year, and the number of participants is large enough to provide adequate statistical power. Cognitive outcomes should be assessed in all trials.
In the USA there is growing public enthusiasm for DHEA supplementation as a means of retarding ageing and age-associated cognitive impairment, but there is very little evidence from controlled trials. In two trials DHEA was associated with deleterious effects: on visual memory after a psychosocial stressor, and on quality of life measures. However, there is inconsistent systematic evidence of adverse effects from DHEA. Longer-term randomized placebo-controlled trials are needed for low and high doses.
Four trials involving 888 participants with previously untreated OAG were included. Surgery was Scheie's procedure in one trial and trabeculectomy in three trials. In three trials, primary medication was usually pilocarpine, in one trial it was a beta-blocker. The most recent trial included participants with on average mild OAG. At five years, the risk of progressive visual field loss, based on a three unit change of a composite visual field score, was not significantly different according to initial medication or initial trabeculectomy (odds ratio (OR) 0.74, 95% confidence interval (CI) 0.54 to 1.01). In an analysis based on mean difference (MD) as a single index of visual field loss, the between treatment group difference in MD was -0.20 decibel (dB) (95% CI -1.31 to 0.91). For a subgroup with more severe glaucoma (MD -10 dB), findings from an exploratory analysis suggest that initial trabeculectomy was associated with marginally less visual field loss at five years than initial medication (mean difference 0.74 dB, 95% CI -0.00 to 1.48). Initial trabeculectomy was associated with lower average intraocular pressure (IOP) (mean difference 2.20 mmHg, 95% CI 1.63 to 2.77) but more eye symptoms than medication (P = 0.0053). Beyond five years, visual acuity did not differ according to initial treatment (OR 1.48, 95% CI 0.58 to 3.81). From three trials in more severe OAG, there is some evidence that medication was associated with more progressive visual field loss and 3 to 8 mmHg less IOP lowering than surgery. In the longer-term (two trials) the risk of failure of the randomised treatment was greater with medication than trabeculectomy (OR 3.90, 95% CI 1.60 to 9.53; hazard ratio (HR) 7.27, 95% CI 2.23 to 25.71). Medications and surgery have evolved since these trials were undertaken. In three trials the risk of developing cataract was higher with trabeculectomy (OR 2.69, 95% CI 1.64 to 4.42).
Evidence from one trial suggests that, beyond five years, the risk of needing cataract surgery did not differ according to initial treatment policy (OR 0.63, 95% CI 0.15 to 2.62). Methodological weaknesses were identified in all the trials. Primary surgery lowers IOP more than primary medication but is associated with more eye discomfort. One trial suggests that visual field restriction at five years is not significantly different whether initial treatment is medication or trabeculectomy. There is some evidence from two small trials in more severe OAG that initial medication (pilocarpine, now rarely used as first-line medication) is associated with more glaucoma progression than surgery. Beyond five years, there is no evidence of a difference in the need for cataract surgery according to initial treatment. The clinical and cost-effectiveness of contemporary medication (prostaglandin analogues, alpha2-agonists and topical carbonic anhydrase inhibitors) compared with primary surgery is not known. Further RCTs of current medical treatments compared with surgery are required, particularly for people with severe glaucoma and in black ethnic groups. Outcomes should include those reported by patients. Economic evaluations are required to inform treatment policy.
It is not clear whether medication or surgery is the better treatment for OAG. The purpose of this review was to assess evidence from randomised studies comparing medical with surgical treatment in terms of how well they work, their relative safety and their cost-effectiveness. Four relevant trials were identified, treating 888 people. Three studies were in the UK and one in the US. These trials were initiated over many years, from 1968 up to the most recent trial in 1993. The earlier trials used medications, and in one trial surgical techniques, that are now rarely used. Findings of these studies suggest that, in mild OAG, worsening of the condition was not different whether first treatment was medication or surgery, but surgery was associated with more eye discomfort at five years. In more severe glaucoma, surgery lowered IOP significantly more than medications (not widely used anymore) and reduced the risk of progressive loss of visual field. In three trials the risk of developing cataract was higher with surgery (trabeculectomy), although in one trial with follow-up beyond five years there was no difference in the number of cataract surgeries between treatment groups. There was insufficient evidence to determine how well more recently available medications work compared with surgery in more severe OAG, and which was the more cost-effective treatment option. More research is required.
Thirteen trials met the inclusion criteria and are included in the review. All studies had at least one domain with unclear risk of bias. Some studies were at high risk of bias for random sequence generation (two trials), allocation concealment (two trials), blinding of outcome assessors (one trial) and incomplete outcome data (one trial). Duration and content of multiple risk factor interventions varied across the trials. Two trials recruited healthy participants and the other 11 trials recruited people with varying risks of CVD, such as participants with known hypertension and type 2 diabetes. Only one study reported CVD outcomes and multiple risk factor interventions did not reduce the incidence of cardiovascular events (RR 0.57, 95% CI 0.11 to 3.07, 232 participants, low-quality evidence); the result is imprecise (a wide confidence interval and small sample size) and makes it difficult to draw a reliable conclusion. None of the included trials reported all-cause mortality. The pooled effect indicated a reduction in systolic blood pressure (MD -6.72 mmHg, 95% CI -9.82 to -3.61, I² = 91%, 4868 participants, low-quality evidence), diastolic blood pressure (MD -4.40 mmHg, 95% CI -6.47 to -2.34, I² = 92%, 4701 participants, low-quality evidence), body mass index (MD -0.76 kg/m², 95% CI -1.29 to -0.22, I² = 80%, 2984 participants, low-quality evidence) and waist circumference (MD -3.31, 95% CI -4.77 to -1.86, I² = 55%, 393 participants, moderate-quality evidence) in favour of multiple risk factor interventions, but there was substantial heterogeneity. There was insufficient evidence to determine the effect of these interventions on consumption of fruit or vegetables, smoking cessation, glycated haemoglobin, fasting blood sugar, high density lipoprotein (HDL) cholesterol, low density lipoprotein (LDL) cholesterol and total cholesterol. None of the included trials reported on adverse events. 
Due to the limited evidence currently available, we can draw no conclusions as to the effectiveness of multiple risk factor interventions on combined CVD events and mortality. There is some evidence that multiple risk factor interventions may lower blood pressure levels, body mass index and waist circumference in populations in LMIC settings at high risk of hypertension and diabetes. There was considerable heterogeneity between the trials, the trials were small, and at some risk of bias. Larger studies with longer follow-up periods are required to confirm whether multiple risk factor interventions lead to reduced CVD events and mortality in LMIC settings.
We performed a thorough search of the medical literature up to June 2014. We identified 13 trials that recruited 7310 participants. Two trials recruited healthy participants and the other 11 trials recruited people at varying risk of CVD, such as participants with known hypertension ("high blood pressure") and type 2 diabetes, and randomly assigned them to either a multiple risk factor intervention or to no intervention. The trials were conducted between 2001 and 2010, and published between 2004 and 2012. Three trials were conducted in Turkey. Two trials each were conducted in China and Mexico. One trial recruited participants from both China and Nigeria. The other trials were conducted in Brazil, India, Pakistan, Romania and Jordan. The content of the interventions varied across the trials; most of the trials included dietary advice and advice on physical activity. The trials followed up the participants for between six months and 30 months (the average follow-up period was 13.3 months). We found that evidence for effects on cardiovascular disease events was scarce, with only one trial reporting these. None of the included trials reported deaths from any cause. Multiple risk factor interventions may lower systolic blood pressure, diastolic blood pressure, body mass index and waist circumference. We found no difference for eating more fruit and vegetables, rates of smoking cessation, glycated haemoglobin (a measure of blood sugar over the past two to three months), fasting blood sugar, high density lipoprotein (HDL) cholesterol, low density lipoprotein (LDL) cholesterol and total cholesterol. None of the included trials reported on harms. Overall, the studies included in this review were at some risk of bias and there was variation between the results of the studies when we analysed the data. Our findings should be treated with some caution.
Data on 2281 participants from eight RCTs were available from reports of single-agent doxorubicin versus doxorubicin-based combination chemotherapy. Meta-analysis using the fixed-effect model detected a higher tumour response rate with combination chemotherapy compared with single-agent chemotherapy (odds ratio (OR) = 1.29; 95% confidence interval (CI) 1.03 to 1.60; P = 0.03), but the OR from a pooled analysis using the random-effects model and the same data did not achieve statistical significance (OR = 1.26; 95% CI 0.96 to 1.67; P = 0.10). No significant difference between the two regimens was detected in the pooled one-year mortality rate (OR = 0.87; 95% CI 0.73 to 1.05; P = 0.14) or two-year mortality rate (OR = 0.84; 95% CI 0.67 to 1.06; P = 0.13) (N = 2097). Although reporting of adverse effects was limited and inconsistent among trials (making pooling of data for this outcome impossible), adverse effects such as nausea/vomiting and hematologic toxic effects were consistently reported as being worse with combination chemotherapy across the eight eligible studies. Compared to single-agent doxorubicin, the combination chemotherapy regimens evaluated, given in conventional doses, produced only marginal increases in response rates, at the expense of increased toxic effects and with no improvements in overall survival.
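The divergence above between fixed-effect and random-effects results comes from how each model weights the study-level log odds ratios. A minimal Python sketch of inverse-variance fixed-effect pooling, using hypothetical study results (not the review's data); a random-effects model would add an estimate of between-study variance to each study's variance before weighting, widening the pooled interval:

```python
from math import exp, log, sqrt

def pool_fixed_effect(ors, cis):
    """Fixed-effect (inverse-variance) pooling of odds ratios.
    ors: per-study ORs; cis: per-study (low, high) 95% CIs."""
    total_weight, weighted_sum = 0.0, 0.0
    for or_i, (lo, hi) in zip(ors, cis):
        se = (log(hi) - log(lo)) / (2 * 1.96)  # SE of log OR from CI width
        weight = 1.0 / se ** 2                 # inverse-variance weight
        total_weight += weight
        weighted_sum += weight * log(or_i)
    pooled_log = weighted_sum / total_weight
    pooled_se = sqrt(1.0 / total_weight)
    return (exp(pooled_log),
            exp(pooled_log - 1.96 * pooled_se),
            exp(pooled_log + 1.96 * pooled_se))

# Three hypothetical studies, none individually conclusive:
pooled, lo, hi = pool_fixed_effect(
    [1.10, 1.45, 1.30],
    [(0.80, 1.51), (1.00, 2.10), (0.95, 1.78)])
print(f"pooled OR {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these made-up inputs the pooled estimate is about OR 1.26 (95% CI roughly 1.04 to 1.53): individually inconclusive studies can yield a significant fixed-effect pooled result, mirroring the pattern reported above.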
This review was conducted to find out if combining doxorubicin with other drugs is more effective than doxorubicin alone. Eight studies were considered together, which showed that when combination chemotherapy is given: (1) tumour shrinkage was marginally better than in patients treated with doxorubicin alone; (2) survival was no different; and (3) side effects were worse than for patients treated with doxorubicin alone.
Twenty-one trials met the inclusion criteria. A total of 2048 patients were initially enrolled in the trials (1148 received paracetamol, and 892 the placebo) and of these 1968 (96%) were included in the meta-analysis (1133 received paracetamol, and 835 the placebo). Paracetamol provided a statistically significant benefit when compared with placebo for pain relief and pain intensity at both 4 and 6 hours: risk ratios for pain relief were 2.85 (95% confidence interval (CI) 1.89 to 4.29) at 4 hours and 3.32 (95% CI 1.88 to 5.87) at 6 hours. Most studies were found to have moderate risk of bias, with poorly reported allocation concealment being the main problem. A statistically significant difference was also found between doses below 1000 mg and the 1000 mg dose, the higher dose giving greater benefit for each measure at both time points. There was no statistically significant difference in the number of patients who reported adverse events, this being 19% in the paracetamol group and 16% in the placebo group overall. Paracetamol is a safe, effective drug for the treatment of postoperative pain following the surgical removal of lower wisdom teeth.
Twenty-one trials (with over 2000 participants) were included. Paracetamol provided a statistically significant benefit when compared with placebo for pain relief at both 4 and 6 hours after taking the drug. It is most effective at a 1000 mg dose, and can be taken at six-hourly intervals without compromising safety. There was no statistically significant difference in the number of patients who reported adverse events, this being 19% in the paracetamol group and 16% in the placebo group overall. It should be noted that most of the studies were found to have some limitations, mainly due to poor reporting of information. However, the review concludes that paracetamol is a safe, effective drug for the treatment of postoperative pain following the surgical removal of lower wisdom teeth.
We included 12 studies (784 participants) in this review; sample sizes ranged from 10 to 187 participants (median 56.5). One study had three arms that were all relevant to this review and all the other studies had two arms. One study was a within-participant comparison. All studies were industry funded. Two studies provided unpublished data for healing. Nine of the included studies compared PMM treatments with other treatments and reported results for the primary outcomes. All treatments were dressings. All studies also gave the participants compression bandaging. Seven of these studies were in participants described as having 'non-responsive' or 'hard-to-heal' ulcers. Results, reported at short, medium and long durations and as time-to-event data, are summarised for the comparison of any dressing regimen incorporating PMM versus any other dressing regimen. The majority of the evidence was of low or very low certainty, and was mainly downgraded for risk of bias and imprecision. It is uncertain whether PMM dressing regimens heal VLUs quicker than non-PMM dressing regimens (low-certainty evidence from 1 trial with 100 participants) (HR 1.21, 95% CI 0.74 to 1.97). In the short term (four to eight weeks) it is unclear whether there is a difference between PMM dressing regimens and non-PMM dressing regimens in the probability of healing (very low-certainty evidence, 2 trials involving 207 participants). In the medium term (12 weeks), it is unclear whether PMM dressing regimens increase the probability of healing compared with non-PMM dressing regimens (low-certainty evidence from 4 trials with 192 participants) (RR 1.28, 95% CI 0.95 to 1.71). Over the longer term (6 months), it is also unclear whether there is a difference between PMM dressing regimens and non-PMM dressing regimens in the probability of healing (low-certainty evidence, 1 trial, 100 participants) (RR 1.06, 95% CI 0.80 to 1.41).
It is uncertain whether there is a difference in adverse events between PMM dressing regimens and non-PMM dressing regimens (low-certainty evidence from 5 trials, 363 participants) (RR 1.03, 95% CI 0.75 to 1.42). It is also unclear whether resource use is lower for PMM dressing regimens (low-certainty evidence, 1 trial involving 73 participants), or whether mean total costs in a German healthcare setting are different (low-certainty evidence, 1 trial in 187 participants). One cost-effectiveness analysis was not included because effectiveness was not based on complete healing. The evidence is generally of low certainty, particularly because of risk of bias and imprecision of effects. Within these limitations, we are unclear whether PMM dressing regimens influence venous ulcer healing relative to dressing regimens without PMM activity. It is also unclear whether there is a difference in rates of adverse events between PMM and non-PMM treatments. It is uncertain whether either resource use (products and staff time) or total costs associated with PMM dressing regimens are different from those for non-PMM dressing regimens. More research is needed to clarify the impact of PMM treatments on venous ulcer healing.
In September 2016 we searched for as many relevant studies as we could find that had a reliable design (randomised controlled trials) and had compared PMM treatments with other treatments for venous leg ulcers. We found 12 studies involving a total of 784 people. Ten studies gave results we could use and all treatments were dressings. All these studies gave all the participants compression therapy as well as the dressings. Most of the people in the trials had wounds that were not getting better or had been there a long time. Findings from four trials are unclear as to whether there is a benefit of PMM dressings on venous ulcer healing compared with other dressings. Five trials reported on wound side effects and their results are unclear as to whether there is a difference in rates of side effects between PMM dressings and other dressings. It is also unclear whether PMM dressings result in decreases in the amount of saline used and the time taken during dressing changes, and whether there is an effect on total costs. Overall, the certainty of the evidence was judged to be low: most studies we found were small and could have been better conducted, so it was difficult to be sure how meaningful the results were. The next step would be to do more research of better quality to see whether PMM dressings do heal venous ulcers more quickly than other dressings. This plain language summary is up to date as of September 2016.
Massage interventions improved daily weight gain by 5.1 g (95% CI 3.5, 6.7 g). There is no evidence that gentle, still touch is of benefit (increase in daily weight gain 0.2 g; 95% CI -1.2, 1.6 g). Massage interventions also appeared to reduce length of stay by 4.5 days (95% CI 2.4, 6.5), though there are methodological concerns about the blinding of this outcome. There was also some evidence that massage interventions have a slight, positive effect on postnatal complications and weight at 4 to 6 months. However, serious concerns about the methodological quality of the included studies, particularly with respect to selective reporting of outcomes, weaken confidence in these findings. Evidence that massage for preterm infants is of benefit for developmental outcomes is weak and does not warrant wider use of preterm infant massage. Where massage is currently provided by nurses, consideration should be given as to whether this is a cost-effective use of time. Future research should assess the effects of massage interventions on clinical outcome measures, such as medical complications or length of stay, and on process-of-care outcomes, such as care-giver or parental satisfaction.
The review only included randomized controlled trials, studies in which a group of babies received massage and was compared with a similar group which did not. The authors searched the medical literature and contacted experts and found 14 studies. In most of these studies babies were rubbed or stroked for about 15 minutes, three or four times a day, usually for five or ten days. Some studies also included "still, gentle touch", in which nurses put their hands on babies but did not rub or stroke them. On average, the studies found that when compared to babies who were not touched, babies receiving massage, but not "still, gentle touch", gained more weight each day (about 5 grams). They spent less time in hospital, had slightly better scores on developmental tests and had slightly fewer postnatal complications, although there were problems with how reliable these findings are. The studies did not show any negative effects of massage. Massage is time consuming for nurses to provide, but parents can perform massage without extensive training.
We included 53 trials consisting of 1139 participants. Forty-eight studies used a cross-over design, and five were performed in accordance with a parallel-group design. Forty-five studies addressed the effect of a single beta2-agonist administration, and eight focused on long-term treatment. We addressed these two different intervention regimens as different comparisons. Among primary outcomes for short-term administration, data on maximum fall in forced expiratory volume in 1 second (FEV1) showed a significant protective effect for both short-acting beta-agonists (SABA) and long-acting beta-agonists (LABA) compared with placebo, with a mean difference of -17.67% (95% confidence interval (CI) -19.51% to -15.84%, P = 0.00001, 799 participants from 72 studies). The subgroup analysis of studies performed in adults compared with those performed in children showed high heterogeneity confined to children, despite the comparable mean bronchoprotective effect. Secondary outcomes on other pulmonary function parameters confirmed a more positive and protective effect of beta2-agonists on EIA compared with placebo. Occurrence of side effects was not significantly different between beta2-agonists and placebo. Overall evaluation of the included long-term studies suggests a beta2-agonist bronchoprotective effect for the first dose of treatment. However, long-term use of both SABA and LABA induced the onset of tolerance and decreased the duration of drug effect, even after a short treatment period. Evidence of low to moderate quality shows that beta2-agonists, both SABA and LABA, when administered in a single dose, are effective and safe in preventing EIA. Long-term regular administration of inhaled beta2-agonists induces tolerance and lacks sufficient safety data. 
This finding appears to be of particular clinical relevance in view of the potential for prolonged regular use of beta2-agonists as monotherapy in the pretreatment of EIA, despite the warnings of drug agencies (FDA, EMA) regarding LABA.
We found 53 trials consisting of 1139 participants. Forty-eight studies used a cross-over design, which meant that each person in the trial received two or more treatments, one or more active treatments (the beta2-agonist) and a placebo, in random order. The rest were parallel-group trials, meaning that people received either the active treatment or a placebo. Most of the studies addressed the effect of giving a single beta2-agonist treatment before exercise and recorded the effect on lung function following exercise. Only eight focused on longer treatment; longer treatments would be needed to assess whether these treatments were harmful over the longer term. Studies in which people received a single administration of a beta2-agonist showed that FEV1 (a measure of lung function) fell significantly less for people taking SABA or LABA compared with placebo (mean difference (MD) -17.67%; 95% confidence interval (CI) -19.51% to -15.84%). Other lung function measures confirmed that beta2-agonists were more beneficial compared with placebo. No significant difference in the number of side effects was noted in people taking SABA or LABA compared with people taking placebo. However, it is unlikely that people would be prescribed an inhaler for a single treatment, so we must consider longer-term studies to get a true measure of the side effects that inhalers can cause. The included longer-term studies showed that beta2-agonists were helpful in terms of lung function for the first dose of treatment. However, studies that provided longer-term treatment with SABA or LABA showed that over time, people built up a tolerance to the effects of treatments, and the beneficial effects lasted for shorter periods of time. Overall, we believe that the evidence was of low to moderate quality. This review shows that beta2-agonists, both SABA and LABA, when administered in a single dose, are effective and safe in preventing the symptoms of EIA. 
Longer-term administration of inhaled beta2-agonists induces tolerance and lacks sufficient safety data. It is important to note that taking LABA without background inhaled steroids is considered unsafe and is not currently recommended in most of the clinical guidelines for asthma. We recommend that more studies are needed to determine whether it is safe to administer inhaled beta2-agonists alone to people who experience asthma symptoms when exercising. This review is current as of August 2013.
We identified five randomised clinical trials involving 398 participants. All trials included only participants with cirrhosis as the underlying cause of hepatic encephalopathy. Trials included participants with covert or overt hepatic encephalopathy. All trials were conducted in Italy by a single team and assessed acetyl-L-carnitine compared with placebo. Oral intervention was the most frequent route of administration. All trials were at high risk of bias and were underpowered. None of the trials were sponsored by the pharmaceutical industry. None of the identified trials reported information on all-cause mortality, serious adverse events, or days of hospitalisation. Only one trial assessed quality of life using the Short Form (SF)-36 scale (67 participants; very low-quality evidence). The effects of acetyl-L-carnitine compared with placebo on general health at 90 days are uncertain (MD -6.20 points, 95% confidence interval (CI) -9.51 to -2.89). Results for additional domains of the SF-36 are also uncertain. One trial assessed fatigue using the Wessely and Powell test (121 participants; very low-quality evidence). The effects are uncertain in people with moderate-grade hepatic encephalopathy (mental fatigue: MD 0.40 points, 95% CI -0.21 to 1.01; physical fatigue: MD -0.20 points, 95% CI -0.92 to 0.52) and mild-grade hepatic encephalopathy (mental fatigue: -0.80 points, 95% CI -1.48 to -0.12; physical fatigue: 0.20 points, 95% CI -0.72 to 1.12). Meta-analysis showed a reduction in blood ammonium levels favouring acetyl-L-carnitine versus placebo (MD -13.06 mg/dL, 95% CI -17.24 to -8.99; 387 participants; 5 trials; very low-quality evidence). It is unclear whether acetyl-L-carnitine versus placebo increases the risk of non-serious adverse events (8/126 (6.34%) vs 3/120 (2.50%); RR 2.51, 95% CI 0.68 to 9.22; 2 trials; very low-quality evidence). Overall, adverse events data were poorly reported and harms may have been underestimated. 
This Cochrane systematic review analysed a heterogeneous group of five trials at high risk of bias and with high risk of random errors, conducted by only one research team. We assessed acetyl-L-carnitine versus placebo in participants with cirrhosis with covert or overt hepatic encephalopathy. Hence, we have no data on the drug for hepatic encephalopathy in acute liver failure. We found no information about all-cause mortality, serious adverse events, or days of hospitalisation. We found no clear differences in effect between acetyl-L-carnitine and placebo regarding quality of life, fatigue, and non-serious adverse events. Acetyl-L-carnitine reduces blood ammonium levels compared with placebo. We rated all evidence as of very low quality due to pitfalls in design and execution, inconsistency, small sample sizes, and very few events. The harms profile for acetyl-L-carnitine is presently unclear. Accordingly, we need further randomised clinical trials to assess acetyl-L-carnitine versus placebo, conducted according to the SPIRIT statement and reported according to the CONSORT statement.
Review authors searched the medical literature up to 10 September 2018, and identified five relevant randomised clinical trials, including a total of 398 participants. All trials were performed in Italy by only one team of investigators. All were considered at high risk of bias and included small numbers of participants, which makes potential overestimation of benefits and underestimation of harms likely. The pharmaceutical industry did not sponsor any trial. Trials tested acetyl-L-carnitine given orally or intravenously versus placebo. The drug did not seem to have effects on quality of life, fatigue, or non-serious adverse events when compared with placebo (inactive sham drug). None of the included trials reported data on participants’ all-cause mortality, serious adverse events, or days of hospitalisation. Researchers poorly reported harms caused by acetyl-L-carnitine, so the harms profile remains unclear. Risks of bias, imprecision, and outcome reporting bias all make the certainty of evidence low or very low. A reduction in blood ammonium levels favoured participants receiving acetyl-L-carnitine, but study authors observed no clinical benefits. It is clear that additional randomised clinical trials are required to assess the benefits and harms of acetyl-L-carnitine compared with placebo in the treatment of people with hepatic encephalopathy. These trials should be well designed, conducted by independent researchers, and collaborative, and should include large numbers of participants.
We included nine randomised controlled trials involving a total of 816 women with PCOS. When metformin was compared with placebo there was no clear evidence of a difference between the groups in live birth rates (OR 1.39, 95% CI 0.81 to 2.40, five RCTs, 551 women, I2 = 52%, low-quality evidence). Our findings suggest that for a woman with a 32% chance of achieving a live birth using placebo, the corresponding chance using metformin treatment would be between 28% and 53%. When metformin was compared with placebo or no treatment, clinical pregnancy rates were higher in the metformin group (OR 1.52; 95% CI 1.07 to 2.15; eight RCTs, 775 women, I2 = 18%, moderate-quality evidence). This suggests that for a woman with a 31% chance of achieving a clinical pregnancy using placebo or no treatment, the corresponding chance using metformin treatment would be between 32% and 49%. The risk of ovarian hyperstimulation syndrome was lower in the metformin group (OR 0.29; 95% CI 0.18 to 0.49, eight RCTs, 798 women, I2 = 11%, moderate-quality evidence). This suggests that for a woman with a 27% risk of having OHSS without metformin the corresponding chance using metformin treatment would be between 6% and 15%. Side effects (mostly gastrointestinal) were more common in the metformin group (OR 4.49, 95% CI 1.88 to 10.72, four RCTs, 431 women, I2 = 57%, low-quality evidence). The overall quality of the evidence was moderate for the outcomes of clinical pregnancy, OHSS and miscarriage, and low for other outcomes. The main limitations in the evidence were imprecision and inconsistency. This review found no conclusive evidence that metformin treatment before or during ART cycles improved live birth rates in women with PCOS. However, the use of this insulin-sensitising agent increased clinical pregnancy rates and decreased the risk of OHSS.
The review included nine randomised controlled trials involving a total of 816 women who were randomised to receive metformin (411) versus placebo or no treatment (405). The trials were conducted in the Czech Republic, Italy, Jordan, Norway, Turkey and the United Kingdom. The evidence is current to October 2014. When metformin was compared with placebo or no treatment, there was no conclusive evidence of a difference between the groups in live birth rates, but pregnancy rates were higher in the metformin group, and the risk of OHSS was lower. We estimated that for a woman with a 32% chance of achieving a live birth using placebo, the corresponding chance using metformin would be between 28% and 53%. For a woman with a 31% chance of achieving a clinical pregnancy without metformin, the corresponding chance using metformin would be between 32% and 49%. For a woman with a 27% risk of ovarian hyperstimulation syndrome (OHSS) without metformin, the corresponding chance using metformin would be between 6% and 15%. Side effects (mostly gastrointestinal) were more common in the metformin group, though only four studies reported this outcome. The overall quality of the evidence was moderate for the outcomes of clinical pregnancy, OHSS and miscarriage, and low for other outcomes. The main limitations in the evidence were imprecision and inconsistency. We found no conclusive evidence that metformin treatment before or during ART cycles improved live birth rates in women with PCOS. However, the use of this insulin-sensitising agent increased clinical pregnancy rates and decreased the risk of OHSS.
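The "corresponding chance" figures quoted above follow from applying the odds ratio (and its confidence limits) to the assumed baseline risk, then converting the resulting odds back to a probability. A minimal sketch of that arithmetic (the function name is ours, for illustration only):

```python
def or_to_risk(baseline_risk: float, odds_ratio: float) -> float:
    """Convert a baseline risk and an odds ratio into the corresponding
    absolute risk in the intervention group."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

# Live birth: 32% baseline chance, OR 95% CI 0.81 to 2.40
live_birth_low = or_to_risk(0.32, 0.81)   # ~0.28 (28%)
live_birth_high = or_to_risk(0.32, 2.40)  # ~0.53 (53%)

# OHSS: 27% baseline risk, OR 95% CI 0.18 to 0.49
ohss_low = or_to_risk(0.27, 0.18)   # ~0.06 (6%)
ohss_high = or_to_risk(0.27, 0.49)  # ~0.15 (15%)
```

Applying the same conversion to the clinical pregnancy comparison (31% baseline, OR 95% CI 1.07 to 2.15) reproduces the review's 32% to 49% range.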
We included two trials. One study was conducted in the USA in 1981 (250 people randomised and completed trial) and one study conducted in Spain in 2001 (1034 randomised, 935 completed trial). Both trials used extracapsular cataract extraction techniques that are not commonly used in higher income countries now. Most of the data in this review came from the larger trial, which we judged to be at low risk of bias. The mean change in visual acuity (in Snellen lines) of the operated eye four months postoperatively was similar in people given day care surgery (mean 4.1 lines standard deviation (SD) 2.3, 464 participants) compared to people treated as in-patients (mean 4.1 lines, SD 2.2, 471 participants) (P value = 0.74). No data were available from either study on intra-operative complications. Wound leakage, intraocular pressure (IOP) and corneal oedema were reported in the first day postoperatively and at four months after surgery. There was an increased risk of high IOP in the day care group in the first day after surgery (risk ratio (RR) 3.33, 95% confidence intervals (CI) 1.21 to 9.16, 935 participants) but not at four months (RR 0.61, 95% CI 0.14 to 2.55, 935 participants). The findings for the other outcomes were inconclusive with wide CIs. There were two cases of endophthalmitis observed at four months in the day care group and none in the in-patient group. The smaller study stated that there were no infections or severe hyphaemas. In a subset of participants evaluated for quality of life (VF14 questionnaire) similar change in quality of life before and four months after surgery was observed (mean change in VF14 score: day care group 25.2, SD 21.2, 150 participants; in-patient group: 23.5, SD 25.7, 155 participants; P value = 0.30). 
Subjective assessment of patient satisfaction in the smaller study suggested that participants preferred to recuperate at home, were more comfortable in their familiar surroundings and enjoyed the family support that they received at home. Costs were 20% more for the in-patient group and this was attributed to higher costs for overnight stay. This review provides evidence that there is cost saving with day care cataract surgery compared to in-patient cataract surgery. Although effects on visual acuity and quality of life appeared similar, the evidence with respect to postoperative complications was inconclusive because the effect estimates were imprecise. Given the widespread adoption of day care cataract surgery, future research in cataract clinical pathways should focus on evidence provided by high quality clinical databases (registers), which would enable clinicians and healthcare planners to agree clinical and social indications for in-patient care and so make better use of resources.
The review included two trials (up to August 2015) conducted in Spain and the USA, involving 1284 people with cataract. A total of 68 people were treated as day cases, while 598 stayed overnight in the hospital. The mean age of the participants was about 70 years and there were slightly more women than men. The studies were not funded by a drug company. The two studies in this review found that in developed countries at least, there was some evidence that day surgery for this type of cataract extraction may be not only cheaper but just as effective as hospitalisation and overnight stay for cataract extraction. Although the evidence on complications after surgery such as swelling of the cornea, leaking of the wound and temporary increased pressure within the eye was inconclusive, there appeared to be little difference in visual acuity and improvements in quality of life. One of the two studies showed limitations in study design and the way it was run, probably as it was an old study and reported in a less robust way. It provided fewer data for the review. The people included in the studies were representative of the group we were interested in.
We identified three new studies in this update of the review. In total seven studies were eligible for inclusion: three were ongoing RCTs and four were completed studies. The four completed studies were included in the original review and the three ongoing studies were included in this update. We did not identify any RCTs that compared TXA with EACA. Of the four completed studies, one cross-over TXA study (eight participants) was excluded from the outcome analysis because it had very flawed study methodology. The other three studies were all at unclear risk of bias due to lack of reporting of study methodology. Three studies (two of TXA (12 to 56 participants) and one of EACA (18 participants)), reported in four articles (published 1983 to 1995), were included in the narrative review. All three studies compared the drug with placebo. All three studies included adults with acute leukaemia receiving chemotherapy. One study (12 participants) only included participants with acute promyelocytic leukaemia. None of the studies included children. One of the three studies reported funding sources and this study was funded by a charity. We are uncertain whether antifibrinolytics reduce the risk of bleeding (three studies; 86 participants; very low-quality evidence). Only one study reported the number of bleeding events per participant and there was no difference in the number of bleeding events seen during induction or consolidation chemotherapy between TXA and placebo (induction; 38 participants; mean difference (MD) 1.70 bleeding events, 95% confidence interval (CI) -0.37 to 3.77: consolidation; 18 participants; MD -1.50 bleeding events, 95% CI -3.25 to 0.25; very low-quality evidence). The two other studies suggested bleeding was reduced in the antifibrinolytic study arm, but this was statistically significant in only one of these two studies. Two studies reported thromboembolism and no events occurred (68 participants, very low-quality evidence). 
All three studies reported a reduction in platelet transfusion usage (three studies, 86 participants; very low-quality evidence), but this was reported in different ways and no meta-analysis could be performed. No trials reported the number of platelet transfusions per participant. Only one study reported the number of platelet components per participant and there was a reduction in the number of platelet components per participant during consolidation chemotherapy but not during induction chemotherapy (consolidation; 18 participants; MD -5.60 platelet units, 95% CI -9.02 to -2.18: induction; 38 participants, MD -1.00 platelet units, 95% CI -9.11 to 7.11; very low-quality evidence). Only one study reported adverse events of TXA as an outcome measure and none occurred. One study stated side effects of EACA were minimal but no further information was provided (two studies, 74 participants, very low-quality evidence). None of the studies reported on the following pre-specified outcomes: overall mortality, adverse events of transfusion, disseminated intravascular coagulation (DIC) or quality of life (QoL). Our results indicate that the evidence available for the use of antifibrinolytics in haematology patients is very limited. The trials were too small to assess whether or not antifibrinolytics decrease bleeding. No trials reported the number of platelet transfusions per participant. The trials were too small to assess whether or not antifibrinolytics increased the risk of thromboembolic events or other adverse events. There are three ongoing RCTs (1276 participants) due to be completed in 2017 and 2020.
The evidence is current to March 2016. In this update, seven randomised controlled trials were identified. Three trials are either not yet recruiting or still recruiting participants and have not been completed. Four randomised controlled trials with a total of 95 participants were reviewed. These trials were conducted between 1983 and 1995. Data from one of the trials (eight participants) were excluded from the outcome analysis because the conduct of the study was so flawed. All three trials (86 participants) included in the outcome analysis were of adults with acute leukaemia receiving chemotherapy. None of the studies included children. One of these three studies reported funding sources and this study was funded by a charity. In people with haematological disorders who have a low platelet count and would usually be treated with platelet transfusions, we are uncertain whether antifibrinolytics decrease the risk of bleeding and the use of platelet transfusions. We are uncertain whether antifibrinolytics increase the risk of developing a clot. We are uncertain whether antifibrinolytics increase the risk of adverse events. None of the studies reported several of this review's outcomes including overall mortality, adverse events of transfusion, and quality of life. The quality of the evidence was very low, making it difficult to draw conclusions or make recommendations regarding the usefulness and safety of antifibrinolytics. The only evidence available is for adults with acute leukaemia receiving chemotherapy. We await the results of the three ongoing trials that are expected to recruit 1276 participants in total by 2020.
We included 22 randomised controlled trials (2858 women), most of which had high risk of bias in several domains. We performed 13 comparisons. Many comparisons are based on a small number of studies with small sample sizes. No analysis of our primary outcomes contained more than two studies. Intravenous iron was compared to oral iron in 10 studies (1553 women). Fatigue was reported in two studies and improved significantly, favouring the intravenously treated group, in one of the studies. Other anaemia symptoms were not reported. One woman died from cardiomyopathy (risk ratio (RR) 2.95; 95% confidence interval (CI) 0.12 to 71.96; two studies; one event; 374 women; low quality evidence). One woman developed arrhythmia. Both cardiac complications occurred in the intravenously treated group. Allergic reactions occurred in three women treated with intravenous iron; this difference was not statistically significant (average RR 2.78; 95% CI 0.31 to 24.92; eight studies; 1454 women; I² = 0%; low quality evidence). Gastrointestinal events were less frequent in the intravenously treated group (average RR 0.31; 95% CI 0.20 to 0.47; eight studies; 169 events; 1307 women; I² = 0%; very low quality evidence). One study evaluated red blood cell transfusion versus non-intervention. General fatigue improved significantly more in the transfusion group at three days (MD -0.80; 95% CI -1.53 to -0.07; 388 women; low quality evidence), but no difference between groups was seen at six weeks. Maternal mortality was not reported. The remaining comparisons evaluated oral iron (with or without other food substances) versus placebo (three studies), intravenous iron with oral iron versus oral iron (two studies) and erythropoietin (alone or combined with iron) versus placebo or iron (seven studies). These studies did not investigate fatigue. Maternal mortality was rarely reported. 
The body of evidence did not allow us to reach a clear conclusion regarding the efficacy of the interventions on postpartum iron deficiency anaemia. The quality of evidence was low. Clinical outcomes were rarely reported. Laboratory values may not be reliable indicators for efficacy, as they do not always correlate with clinical treatment effects. It remains unclear which treatment modality is most effective in alleviating symptoms of postpartum anaemia. Intravenous iron was superior regarding gastrointestinal harms; however, anaphylaxis and cardiac events occurred and more data are needed to establish whether these were caused by intravenous iron. The clinical significance of some temporarily improved fatigue scores in women treated with blood transfusion is uncertain and this modest effect should be balanced against known risks, e.g. maternal mortality (not reported) and maternal immunological sensitisation, which can potentially harm future pregnancies. When comparing oral iron to placebo it remains unknown whether efficacy (relief of anaemia symptoms) outweighs the documented gastrointestinal harms. We could not draw conclusions regarding erythropoietin treatment due to lack of evidence. Further research should evaluate treatment effect through clinical outcomes, i.e. presence and severity of anaemia symptoms, balanced against harms, i.e. survival and severe morbidity.
We included 22 randomised controlled studies with 2858 women and performed 13 comparisons, many of which were based on few studies involving small numbers of women. The overall quality of evidence was low. Most trials were conducted in high-income countries. Ten studies, including 1553 women, compared intravenous iron with oral iron. Only one study showed a temporary positive effect on fatigue for intravenous iron. Other anaemia symptoms were not reported. One woman died from heart complications in the intravenous group. Only two studies reported on maternal deaths. Allergic reactions occurred in three women, and heart complications in two women in the intravenous group. Gastrointestinal symptoms were frequent in the oral group and caused some participants to abandon treatment. One study compared red blood cell transfusion with no transfusion. Some (but not all) fatigue scores temporarily improved in the transfused women. Maternal mortality was not reported. When comparing oral iron to placebo (three studies), anaemia symptoms were not reported. It remains unknown whether benefits of oral iron outweigh documented gastrointestinal harms. Other treatment options were compared in other studies, which did not investigate fatigue. Very few studies reported on relief of anaemia symptoms, although this is perhaps the most important purpose of treatment. The body of evidence did not allow us to fully evaluate the efficacy of the treatments on iron deficiency anaemia after childbirth and further research is needed.
From 6168 studies identified in the searches, 41 RCTs with a total of 6858 participants were included. Methodological quality ratings ranged from 1 to 9 out of 12, and 13 of the 41 included studies were assessed as being at low risk of bias. Pooled estimates from 16 RCTs provided moderate to low quality evidence that MBR is more effective than usual care in reducing pain and disability, with standardised mean differences (SMDs) in the long term of 0.21 (95% CI 0.04 to 0.37) and 0.23 (95% CI 0.06 to 0.40) respectively. The range across all time points equated to approximately 0.5 to 1.4 units on a 0 to 10 numerical rating scale for pain and 1.4 to 2.5 points on the Roland Morris disability scale (0 to 24). There was moderate to low quality evidence of no difference on work outcomes (odds ratio (OR) at long term 1.04, 95% CI 0.73 to 1.47). Pooled estimates from 19 RCTs provided moderate to low quality evidence that MBR was more effective than physical treatment for pain and disability, with SMDs in the long term of 0.51 (95% CI -0.01 to 1.04) and 0.68 (95% CI 0.16 to 1.19) respectively. Across all time points this translated to approximately 0.6 to 1.2 units on the pain scale and 1.2 to 4.0 points on the Roland Morris scale. There was moderate to low quality evidence of an effect on work outcomes (OR at long term 1.87, 95% CI 1.39 to 2.53). There was insufficient evidence to assess whether MBR interventions were associated with more adverse events than usual care or physical interventions. Sensitivity analyses did not suggest that the pooled estimates were unduly influenced by the results from low quality studies. Subgroup analyses were inconclusive regarding the influence of baseline symptom severity and intervention intensity. Patients with chronic LBP receiving MBR are likely to experience less pain and disability than those receiving usual care or a physical treatment. MBR also has a positive influence on work status compared to physical treatment. 
Effects are of a modest magnitude and should be balanced against the time and resource requirements of MBR programs. More intensive interventions were not responsible for effects that were substantially different to those of less intensive interventions. While we were not able to determine if symptom intensity at presentation influenced the likelihood of success, it seems appropriate that only those people with indicators of significant psychosocial impact are referred to MBR.
We collected all the published studies up to February 2014; there were 41 studies (with 6858 participants) that compared multidisciplinary treatment to other treatments. Most studies compared a multidisciplinary treatment to usual care (such as care by a general practitioner) or to treatments that only addressed physical factors (such as exercise or physiotherapy). All the people in the studies had LBP for more than three months and most had received some other sort of treatment previously. There was moderate quality evidence that multidisciplinary treatment results in larger improvements in pain and daily function than usual care or treatments aimed only at physical factors. The difference was not very large, about 1 point on a 10 point scale for pain, but this may be important for people whose symptoms have not responded to other treatments. There was also moderate evidence that multidisciplinary treatment doubled the likelihood that people were able to work in the next 6 to 12 months compared to treatments aimed at physical factors. While these programs seem to be more effective than alternatives, the effects need to be balanced against their costs in terms of money, resources and time. Multidisciplinary treatment programs are often quite intensive and expensive, so they are probably most appropriate for people with quite severe or complex problems.
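The scale-point figures quoted alongside the standardised mean differences (SMDs) come from multiplying the SMD by a pooled standard deviation for the scale in question. A minimal sketch of that conversion, using SD values that are our illustrative assumptions (roughly 2.4 points for the 0 to 10 pain scale and 6 points for the 0 to 24 Roland Morris scale), not figures reported by the review:

```python
def smd_to_points(smd: float, scale_sd: float) -> float:
    """Re-express a standardised mean difference (SMD) in a scale's own
    units by multiplying by the pooled standard deviation of that scale."""
    return smd * scale_sd

# Long-term SMDs from the MBR-versus-usual-care comparison; the SD values
# below are illustrative assumptions chosen to show how the arithmetic works.
pain_points = smd_to_points(0.21, 2.4)        # ~0.5 points on the 0 to 10 pain scale
disability_points = smd_to_points(0.23, 6.0)  # ~1.4 points on the 0 to 24 Roland Morris scale
```

Because the assumed SD drives the result, the same SMD maps to a larger or smaller raw-score difference depending on how variable the outcome is in the pooled studies, which is why the review reports a range of equivalent points across time points.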
We included in this review 19 trials involving 516 participants. Seven of the included studies (N = 222) had not been evaluated in the previous review. Investigators compared several positions: prone versus supine, prone alternant versus supine, prone versus lateral right, lateral right versus supine, lateral left versus supine, lateral alternant versus supine, lateral right versus lateral left, quarter turn from prone versus supine, quarter turn from prone versus prone and good lung dependent versus good lung uppermost. Apart from two studies that compared lateral alternant versus supine, one comparing lateral right versus supine and two comparing prone or prone alternant versus the supine position, all included studies had a cross-over design. In five studies, infants were ventilated with continuous positive airway pressure (CPAP); in the other studies, infants were treated with conventional ventilation (CV). Risks of bias did not differ substantially for different comparisons and outcomes. This update detects a moderate to high grade of inconsistency, similar to previous versions. However, for the analysed outcomes, the direction of effect was the same in all studies. Therefore, we consider that this inconsistency had little effect on the conclusions of the meta-analysis. When comparing prone versus supine position, we observed an increase in arterial oxygen tension (PO2) in the prone position (mean difference (MD) 5.49 mmHg, 95% confidence interval (CI) 2.92 to 8.05 mmHg; three trials; 116 participants; I2 = 0%). When percent haemoglobin oxygen saturation was measured with pulse oximetry (SpO2), improvement in the prone position was between 1.13% and 3.24% (typical effect based on nine trials with 154 participants; I2 = 89%).
The subgroup ventilated with CPAP (three trials; 59 participants) showed a trend towards improving SpO2 in the prone position compared with the supine position, although the mean difference (1.91%) was not significant (95% CI -1.14 to 4.97) and heterogeneity was extreme (I2 = 95%). Sensitivity analyses restricted to studies with low risk of selection bias showed homogeneous results and verified a small but significant effect (MD 0.64, 95% CI 0.26 to 1.02; four trials; 92 participants; I2 = 0%). We also noted a slight improvement in the number of episodes of desaturation; it was not possible to establish whether this effect continued once the intervention was stopped. Few studies examined adverse effects of the interventions in sufficient detail. Two studies analysed tracheal cultures of neonates after five days on MV, reporting lower bacterial colonisation in the alternating lateral position than in the supine posture. Other effects - positive or negative - cannot be excluded in light of the relatively small numbers of neonates studied. This update of our last review in 2013 supports previous conclusions. Evidence of low to moderate quality favours the prone position for slightly improved oxygenation in neonates undergoing mechanical ventilation. However, we found no evidence to suggest that particular body positions during mechanical ventilation of the neonate are effective in producing sustained and clinically relevant improvement.
We included in this review 19 trials involving 516 participants. Comparisons included supine position versus prone and different lateral positions (right, left, alternating, or quarter turn from prone). The outcome most often reported in these studies was change in oxygenation. We found no clear evidence that particular body positions in newborn babies who need assisted ventilation are effective in producing relevant and sustained improvement. However, putting infants who receive assisted ventilation in the face-down (prone) position for a short time slightly improves levels of oxygen in the blood (evidence of moderate quality), and these infants undergo fewer episodes of poor oxygenation (evidence of low quality). Researchers described no adverse effects for any of the positions compared, although studies did not last long enough for investigators to detect all possible effects. What's more, most of the babies participating in the studies were placed in alternate positions. For this reason, medium- or long-term adverse effects cannot be attributed to a given position. Confidence in review conclusions depends on the characteristics of included studies such as risk of bias (design limitations), consistency (heterogeneity across studies), precision (small confidence interval) and directness (same effect), and requires that all included studies were published independently of their outcomes. The quality of evidence for these outcomes allows us to have very low to moderate confidence in our conclusions.
We included 20 trials with 3791 participants. Studies were heterogeneous in study design, population, antibiotic regimens, and outcomes. We grouped the sixteen different antibiotic agents studied into six categories: 1) anti-pseudomonal penicillins (three trials); 2) broad-spectrum penicillins (one trial); 3) cephalosporins (two trials); 4) carbapenems (four trials); 5) fluoroquinolones (six trials); 6) other antibiotics (four trials). Only 9 of the 20 trials protected against detection bias with blinded outcome assessment. Only one-third of the trials provided enough information to enable a judgement about whether the randomisation sequence was adequately concealed. Eighteen of the 20 trials received funding from pharmaceutical industry sponsors. The included studies reported the following findings for clinical resolution of infection: there is evidence from one large trial at low risk of bias that patients receiving ertapenem with or without vancomycin are more likely to have resolution of their foot infection than those receiving tigecycline (RR 0.92, 95% confidence interval (CI) 0.85 to 0.99; 955 participants). It is unclear if there is a difference in rates of clinical resolution of infection between: 1) two alternative anti-pseudomonal penicillins (one trial); 2) an anti-pseudomonal penicillin and a broad-spectrum penicillin (one trial) or a carbapenem (one trial); 3) a broad-spectrum penicillin and a second-generation cephalosporin (one trial); 4) cephalosporins and other beta-lactam antibiotics (two trials); 5) carbapenems and anti-pseudomonal penicillins or broad-spectrum penicillins (four trials); 6) fluoroquinolones and anti-pseudomonal penicillins (four trials) or broad-spectrum penicillins (two trials); 7) daptomycin and vancomycin (one trial); 8) linezolid and a combination of aminopenicillins and beta-lactamase inhibitors (one trial); and 9) clindamycin and cephalexin (one trial).
Carbapenems combined with anti-pseudomonal agents produced fewer adverse effects than anti-pseudomonal penicillins (RR 0.27, 95% CI 0.09 to 0.84; 1 trial). An additional trial did not find significant differences in the rate of adverse events between a carbapenem alone and an anti-pseudomonal penicillin, but the rate of diarrhoea was lower for participants treated with a carbapenem (RR 0.58, 95% CI 0.36 to 0.93; 1 trial). Daptomycin produced fewer adverse effects than vancomycin or other semi-synthetic penicillins (RR 0.61, 95% CI 0.39 to 0.94; 1 trial). Linezolid produced more adverse effects than ampicillin-sulbactam (RR 2.66, 95% CI 1.49 to 4.73; 1 trial), as did tigecycline compared to ertapenem with or without vancomycin (RR 1.47, 95% CI 1.34 to 1.60; 1 trial). There was no evidence of a difference in safety for the other comparisons. The evidence for the relative effects of different systemic antibiotics for the treatment of foot infections in diabetes is very heterogeneous and generally at unclear or high risk of bias. Consequently, it is not clear if any one systemic antibiotic treatment is better than others in resolving infection or in terms of safety. One non-inferiority trial suggested that ertapenem with or without vancomycin is more effective in achieving clinical resolution of infection than tigecycline. Otherwise the relative effects of different antibiotics are unclear. The quality of the evidence is low due to limitations in the design of the included trials and important differences between them in terms of the diversity of antibiotics assessed, duration of treatments, and time points at which outcomes were assessed. Any further studies in this area should have a blinded assessment of outcomes, use standardised criteria to classify severity of infection, define clear outcome measures, and establish the duration of treatment.
We identified 20 relevant randomised controlled trials, with a total of 3791 participants. Eighteen of the 20 studies were funded by pharmaceutical companies. All trials compared systemic antibiotics with other systemic antibiotics. It is unclear whether any particular antibiotic is better than any other for curing infection or avoiding amputation. One trial suggested that ertapenem (an antibiotic) with or without vancomycin (another antibiotic) is more effective than tigecycline (another antibiotic) for resolving DFI. It is also generally unclear whether different antibiotics are associated with more or fewer adverse effects. The following differences were identified: 1. carbapenems (a class of antibiotic) combined with anti-pseudomonal agents (antibiotics that kill Pseudomonas bacteria) produced fewer adverse effects than anti-pseudomonal penicillins (another class of antibiotic); 2. daptomycin (an antibiotic) caused fewer adverse effects than vancomycin or other semi-synthetic penicillins (a class of antibiotic); 3. linezolid (an antibiotic) caused more harm than ampicillin-sulbactam (a combination of antibiotics); 4. tigecycline produced more adverse effects than the combination of ertapenem with or without vancomycin. There were important differences between the trials in terms of the diversity of antibiotics assessed, the duration of treatments, and the point at which the results were measured. The included studies had limitations in the way they were designed or performed; as a result of these differences and design limitations, our confidence in the findings of this review is low.
Six studies met our inclusion criteria and included a total of 5143 women. Of three studies with self-reported pregnancy data, two showed pregnancy to be less likely in the experimental group than in the comparison group (OR 0.48, 95% CI 0.27 to 0.87; OR 0.60, 95% CI 0.41 to 0.87). The interventions included a clinic-based counseling program and a community-based communication project. All studies showed some association of the intervention with contraceptive use. Two showed that treatment-group women were more likely to use a modern method than the control group: ORs were 1.77 (95% CI 1.08 to 2.89) and 3.08 (95% CI 2.36 to 4.02). In another study, treatment-group women were more likely than control-group women to use pills (OR 1.78, 95% CI 1.26 to 2.50) or an intrauterine device (IUD) (OR 3.72, 95% CI 1.27 to 10.86) but less likely to use an injectable method (OR 0.23, 95% CI 0.05 to 1.00). One study used a score for method effectiveness. The methods of the special-intervention group scored higher than those of the comparison group at three months (MD 13.26, 95% CI 3.16 to 23.36). A study emphasizing IUDs showed women in the intervention group were more likely to use an IUD (OR 1.79, 95% CI 1.20 to 2.69) and less likely to use no method (OR 0.48, 95% CI 0.31 to 0.75). In another study, contraceptive use was more likely among women in a health service intervention compared to women in a community awareness program at four months (OR 1.79, 95% CI 1.40 to 2.30) or women receiving standard care at 10 to 12 months (OR 2.08, 95% CI 1.58 to 2.74). That study was the only one with a specific component on the lactational amenorrhea method (LAM) that had sufficient data on LAM use. Women in the health service group were more likely than those in the community awareness group to use LAM (OR 41.36, 95% CI 10.11 to 169.20). We considered the quality of evidence to be very low. The studies had limitations in design, analysis, or reporting.
Three did not adjust for potential confounding, and only two had sufficient information on intervention fidelity. Outcomes were self-reported, and definitions of contraceptive use varied. All studies had adequate follow-up periods, but most had high losses, as often occurs in contraception studies.
We ran computer searches up to 3 November 2014 for studies of programs to improve family planning among postpartum women. We wrote to researchers for missing data. Programs had to have contact within six weeks postpartum. The special program was compared with a different program, usual care, or no service. Our main outcomes were birth control use and pregnancy. We found six studies with a total of 5143 women. Of three studies with pregnancy data, two showed fewer pregnancies in the treatment group compared to the control group. The programs in those studies were clinic counseling and community education. All studies showed the special program was related to more birth control use. In two studies, more women in the treatment group used a modern method of birth control than those in the control group. In another study, women in the treatment group were more likely to use pills or an IUD but less likely to use an injectable method. One study used a score for how well the birth control method usually worked. The methods of the treatment group scored higher than those of the control group. A study focused on IUDs showed more IUDs in the treatment group and less use of no method. Women in a health service program used birth control more often than those in a community education program or those getting standard care. Also, women in the health service group were more likely to use the lactation method. We believe the data were very low quality for pregnancy and birth control use. The studies had problems in design, analysis, and reporting. Some did not adjust for factors that could affect the results. They had self-reported outcomes and used different measures for the outcomes. All studies had good follow-up times but most lost many women to follow up.
We included five studies conducted in Europe and North America. Four separate trials compared grid laser to no treatment, sham treatment, intravitreal bevacizumab and intravitreal triamcinolone. One further trial compared subthreshold to threshold laser. Two of these trials were judged to be at high risk of bias in one or more domains. In one trial of grid laser versus observation, people receiving grid laser were more likely to gain visual acuity (VA) (10 or more ETDRS letters) at 36 months (RR 1.75, 95% confidence interval (CI) 1.08 to 2.84, 78 participants, moderate-quality evidence). The effect of grid laser on loss of VA (10 or more letters) was uncertain as the results were imprecise (RR 0.68, 95% CI 0.23 to 2.04, 78 participants, moderate-quality evidence). On average, people receiving grid laser had better improvement in VA (mean difference (MD) 0.11 logMAR, 95% CI 0.05 to 0.17, high-quality evidence). In a trial of early and delayed grid laser treatment versus sham laser (n = 108, data available for 99 participants), no participant gained or lost VA (15 or more ETDRS letters). At 12 months, there was no evidence for a difference in change in VA (from baseline) between early grid laser and sham laser (MD -0.03 logMAR, 95% CI -0.07 to 0.01, 68 participants, low-quality evidence) or between delayed grid laser and sham laser (MD 0.00, 95% CI -0.04 to 0.04, 66 participants, low-quality evidence). The relative effects of subthreshold and threshold laser were uncertain. In one trial, the RR for gain of VA (15 or more letters) at 12 months was 1.68 (95% CI 0.57 to 4.95, 36 participants, moderate-quality evidence); the RR for loss of VA (15 or more letters) was 0.56 (95% CI 0.06 to 5.63, moderate-quality evidence); and at 24 months the change in VA from baseline was MD 0.07 (95% CI -0.10 to 0.24, moderate-quality evidence). The relative effects of macular grid laser and intravitreal bevacizumab were uncertain.
In one trial, the RR for gain of 15 or more letters at 12 months was 0.67 (95% CI 0.39 to 1.14, 30 participants, low-quality evidence). Loss of 15 or more letters was not reported. Change in VA at 12 months was MD 0.11 logMAR (95% CI -0.36 to 0.14, low-quality evidence). The relative effects of grid laser and 1 mg triamcinolone were uncertain at 12 months. RR for gain of VA (15 or more letters) was 1.13 (95% CI 0.75 to 1.71, 1 RCT, 242 participants, moderate-quality evidence); RR for loss of VA (15 or more letters) was 1.20 (95% CI 0.63 to 2.27, moderate-quality evidence); MD for change in VA was -0.03 letters (95% CI -0.12 to 0.06, moderate-quality evidence). Similar results were seen for the comparison with 4 mg triamcinolone. Beyond 12 months, the visual outcomes were in favour of grid laser at 24 months and 36 months, with people in the macular grid group gaining more VA. Four studies reported on adverse effects. Laser photocoagulation appeared to be well tolerated in the studies. One participant (out of 71) suffered a perforation of Bruch's membrane, but this did not affect visual acuity. Moderate-quality evidence from one RCT supports the use of grid laser photocoagulation to treat macular oedema following BRVO. There was insufficient evidence to support the use of early grid laser or subthreshold laser. There was insufficient evidence to show a benefit of intravitreal triamcinolone or anti-vascular endothelial growth factor (VEGF) over macular grid laser photocoagulation in BRVO. With recent interest in the use of intravitreal anti-VEGF or steroid therapy, assessment of treatment efficacy (change in visual acuity and foveal or central macular thickness using optical coherence tomography (OCT)) and the number of treatments needed for maintenance and long-term safety will be important for future studies.
We included five studies with a total of 715 participants. Three studies were from Italy and two were from the USA. Key results: We looked primarily at the proportion of participants gaining or losing significant vision. The trial comparing grid laser to no laser showed a clear benefit for grid laser. The result of the trial comparing early grid laser to delayed grid laser for macular BRVO (a subgroup of BRVO in which the occlusion is limited to a small vessel draining a sector of the macular region) was uncertain, and the quality of the evidence was low. We could not be certain that bevacizumab injections were better than grid laser treatment, because the effect was imprecise and the quality of the evidence was low. We could not be certain if subthreshold diode laser treatment was better than threshold laser treatment because the results were imprecise. The trial comparing grid laser treatment to triamcinolone (steroid) injection was imprecise, but there was a suggestion of a benefit for grid laser over 1 mg triamcinolone at 36 months and a benefit for grid laser over 4 mg triamcinolone at 24 months. Two of the five studies were at risk of bias, meaning that there were problems with the design and execution of these two studies which raised questions about their validity. Four of the five studies reported on adverse outcomes. Grid laser was well tolerated within these studies. One participant had an apparent perforation of Bruch's membrane (a membrane under the macula) following laser but this did not affect their vision. Bevacizumab injection was also well tolerated, with only minor local side effects (transient red eye and superficial bleeding). Participants receiving triamcinolone injection were at risk of developing a raised eye pressure that required medication or surgery, at risk of developing a cataract, and at risk of developing a serious eye infection (endophthalmitis).
Quality of the evidence: Good-quality evidence was available from one trial to support macular grid laser treatment for macular swelling following a blocked vein. There is insufficient evidence to recommend early grid laser, subthreshold laser, bevacizumab injections or triamcinolone injections over grid laser. Anti-VEGF and steroid treatments are becoming increasingly popular for treating eye conditions. However, more studies are needed to assess the longer-term outcome of these treatments against grid laser treatment in the management of macular oedema after branch retinal vein occlusions.
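The visual acuity results in this review are reported in two units: ETDRS letters and logMAR. On an ETDRS chart each line contains 5 letters and represents 0.1 logMAR, so a logMAR difference divides by 0.02 to give an approximate letter count. A small sketch of this standard conversion (the 0.11 value is the grid laser versus observation mean difference quoted above; the conversion itself is generic, not specific to this review):

```python
def logmar_to_etdrs_letters(logmar_change: float) -> float:
    """Convert a change in logMAR to approximate ETDRS letters.

    One ETDRS chart line = 5 letters = 0.1 logMAR, i.e. 0.02 logMAR
    per letter, so letters = logMAR change / 0.02.
    """
    return logmar_change / 0.02

# MD 0.11 logMAR corresponds to roughly 5-6 ETDRS letters.
print(round(logmar_to_etdrs_letters(0.11), 1))
```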
Tiotropium plus LABA/ICS versus tiotropium: We included six studies (1902 participants) with low risk of bias that compared tiotropium in addition to inhaled corticosteroid and long-acting beta2-agonist combination therapy versus tiotropium alone. We found no statistically significant differences between treatments in mortality (odds ratio (OR) 1.80, 95% confidence interval (CI) 0.55 to 5.91; two studies; 961 participants) or in all-cause hospitalisations (OR 0.84, 95% CI 0.53 to 1.33; two studies; 961 participants). The effect on exacerbations was heterogeneous among trials and was not meta-analysed. Health-related quality of life measured by St. George’s Respiratory Questionnaire (SGRQ) showed a statistically significant improvement in total scores with use of tiotropium + LABA/ICS compared with tiotropium alone (mean difference (MD) -3.46, 95% CI -5.05 to -1.87; four studies; 1446 participants). Lung function was significantly different in the combined therapy (tiotropium + LABA/ICS) group, although the average benefit with this therapy was small. None of the included studies assessed exercise tolerance as an outcome. A pooled estimate of these studies did not show a statistically significant difference in adverse events (OR 1.16, 95% CI 0.92 to 1.47; four studies; 1363 participants), serious adverse events (OR 0.86, 95% CI 0.57 to 1.30; four studies; 1758 participants) or pneumonia (Peto OR 1.62, 95% CI 0.54 to 4.82; four studies; 1758 participants). Tiotropium plus LABA/ICS versus LABA/ICS: One of the six studies (60 participants) also compared combined therapy (tiotropium + LABA/ICS) versus LABA/ICS therapy alone. This study was affected by lack of power; therefore results did not allow us to draw conclusions for this comparison. This review update includes three additional studies and provides new low quality evidence supporting the finding that tiotropium + LABA/ICS-based therapy improves disease-specific quality of life.
The current evidence is insufficient to support the benefit of tiotropium + LABA/ICS-based therapy for mortality, hospital admission or exacerbations (moderate and low quality evidence). Compared with use of tiotropium alone, tiotropium + LABA/ICS-based therapy does not seem to increase undesirable effects or serious non-fatal adverse events.
This review found six studies, involving 1902 participants, comparing the long-term efficacy and side effects of tiotropium combined with combination inhalers for treatment of patients with COPD. Not all of the people included in these studies had COPD that was severe enough to be recommended for combined therapy according to current guidelines. Current evidence shows potential benefits of treatment with tiotropium in addition to inhaled corticosteroid and long-acting beta2-agonist combination therapy through increased health-related quality of life and a small improvement in lung function in patients receiving this combined therapy. However, this evidence does not allow us to draw conclusions about the effects of these treatments on mortality, hospitalisation for all causes and exacerbations. The frequency of serious and non-serious adverse events was not increased in either of the two groups. Overall, we assessed the evidence presented in this review to be of moderate or low quality, which means we are reasonably confident in some of the findings, but less confident in others.
We included 29 trials (n = 2431) in this review. The study sample sizes ranged from 20 to 323 participants. We considered a total of 76.6% of the included trials to have a low risk of bias, representing 86% of all participants. There is low to high quality evidence that MCE is not clinically more effective than other exercises for all follow-up periods and outcomes tested. When compared with minimal intervention, there is low to moderate quality evidence that MCE is effective for improving pain at short, intermediate and long-term follow-up with medium effect sizes (long-term, MD -12.97; 95% CI -18.51 to -7.42). There was also a clinically important difference for the outcomes function and global impression of recovery compared with minimal intervention. There is moderate to high quality evidence that there is no clinically important difference between MCE and manual therapy for all follow-up periods and outcomes tested. Finally, there is very low to low quality evidence that MCE is clinically more effective than exercise and electrophysical agents (EPA) for pain, disability, global impression of recovery and quality of life with medium to large effect sizes (pain at short term, MD -30.18; 95% CI -35.32 to -25.05). Minor or no adverse events were reported in the included trials. There is very low to moderate quality evidence that MCE has a clinically important effect compared with a minimal intervention for chronic low back pain. There is very low to low quality evidence that MCE has a clinically important effect compared with exercise plus EPA. There is moderate to high quality evidence that MCE provides similar outcomes to manual therapies and low to moderate quality evidence that it provides similar outcomes to other forms of exercises. Given the evidence that MCE is not superior to other forms of exercise, the choice of exercise for chronic LBP should probably depend on patient or therapist preferences, therapist training, costs and safety.
In total, 2431 participants were enrolled in 29 trials. The study sample sizes ranged from 20 to 323 participants, and most of them were middle-aged people recruited from primary or tertiary care. The duration of the treatment programmes ranged from 20 days to 12 weeks, and the number of treatment sessions ranged from one to five sessions per week. Sixteen trials compared MCE with other types of exercises, seven trials compared MCE with minimal intervention, five trials compared MCE with manual therapy, three trials compared MCE with a combination of exercise and electrophysical agents, and one trial compared MCE with telerehabilitation based on home exercises. MCE probably provides better improvements in pain, function and global impression of recovery than minimal intervention at all follow-up periods. MCE may provide slightly better improvements than exercise and electrophysical agents for pain, disability, global impression of recovery and the physical component of quality of life in the short and intermediate term. There is probably little or no difference between MCE and manual therapy for all outcomes and follow-up periods. Little or no difference is observed between MCE and other forms of exercise. Given the minimal evidence that MCE is superior to other forms of exercise, the choice of exercise for chronic LBP should probably depend on patient or therapist preferences, therapist training, costs and safety.
Ten prospective, parallel-group randomised controlled trials, involving a total of 577 participants with type 1 and type 2 diabetes mellitus, were identified. Risk of bias was high or unclear in all but two trials, which were assessed as having moderate risk of bias. Risk of bias in some domains was high in 50% of trials. Oral monopreparations of cinnamon (predominantly Cinnamomum cassia) were administered at a mean dose of 2 g daily, for a period ranging from 4 to 16 weeks. The effect of cinnamon on fasting blood glucose level was inconclusive. No statistically significant difference in glycosylated haemoglobin A1c (HbA1c), serum insulin or postprandial glucose was found between cinnamon and control groups. There were insufficient data to pool results for insulin sensitivity. No trials reported health-related quality of life, morbidity, mortality or costs. Adverse reactions to oral cinnamon were infrequent and generally mild in nature. There is insufficient evidence to support the use of cinnamon for type 1 or type 2 diabetes mellitus. Further trials, which address the issues of allocation concealment and blinding, are now required. The inclusion of other important endpoints, such as health-related quality of life, diabetes complications and costs, is also needed.
Cinnamon bark has been shown in a number of animal studies to improve blood sugar levels, though its effect in humans is less clear. Hence, the review authors set out to determine the effect of oral cinnamon extract on blood sugar and other outcomes. The authors identified 10 randomised controlled trials, which involved 577 participants with diabetes mellitus. Cinnamon was administered in tablet or capsule form, at a mean dose of 2 g daily, for four to 16 weeks. Generally, the studies were not well conducted and were of low methodological quality. The review authors found cinnamon to be no more effective than placebo, another active medication or no treatment in reducing glucose levels and glycosylated haemoglobin A1c (HbA1c), a long-term measurement of glucose control. None of the trials looked at health-related quality of life, morbidity, death from any cause or costs. Adverse reactions to cinnamon treatment were generally mild and infrequent. Further trials investigating long-term benefits and risks of the use of cinnamon for diabetes mellitus are required. Rigorous study design, quality reporting of study methods, and consideration of important outcomes such as health-related quality of life and diabetes complications, are key areas in need of attention.
Thirty studies (2047 participants) were included. We categorised studies by intervention type: supportive interventions during follow-up, educational interventions and behavioural therapy. Across all three intervention classes, most studies incorporated elements of more than one intervention. For the purposes of this systematic review, we categorised them by the prevailing type of intervention, which we expected would have the greatest impact on the study outcome. Baseline Epworth Sleepiness Scale (ESS) scores indicated that most participants experienced daytime sleepiness, and CPAP was indicated on the basis of sleep disturbance indices. A vast majority of recruited participants had not used CPAP previously. Most of the studies were at an unclear risk of bias overall, although because of the nature of the intervention, blinding of both study personnel and participants was not feasible, and this affected a number of key outcomes. Adverse events were not reported in these studies. Low- to moderate-quality evidence showed that all three types of interventions led to increased machine usage in CPAP-naive participants with moderate to severe OSA syndrome. Compared with usual care, supportive ongoing interventions increased machine usage by about 50 minutes per night (0.82 hours, 95% confidence interval (CI) 0.36 to 1.27, N = 803, 13 studies; low-quality evidence), increased the number of participants who used their machines for longer than four hours per night from 59 to 75 per 100 (odds ratio (OR) 2.06, 95% CI 1.22 to 3.47, N = 268, four studies; low-quality evidence) and reduced the likelihood of study withdrawal (OR 0.65, 95% CI 0.44 to 0.97, N = 903, 12 studies; moderate-quality evidence). With the exception of study withdrawal, considerable variation was evident between the results of individual studies across these outcomes. 
Evidence of an effect on symptoms and quality of life was statistically imprecise (ESS score -0.60 points, 95% CI -1.81 to 0.62, N = 501, eight studies; very low-quality evidence; Functional Outcomes of Sleep Questionnaire 0.98 units, 95% CI -0.84 to 2.79, N = 70, two studies; low-quality evidence, respectively). Educational interventions increased machine usage by about 35 minutes per night (0.60 hours, 95% CI 0.27 to 0.93, N = 508, seven studies; moderate-quality evidence), increased the number of participants who used their machines for longer than four hours per night from 57 to 70 per 100 (OR 1.80, 95% CI 1.09 to 2.95, N = 285, three studies; low-quality evidence) and reduced the likelihood of withdrawal from the study (OR 0.67, 95% CI 0.45 to 0.98, N = 683, eight studies; low-quality evidence). Participants experienced a small improvement in symptoms, the size of which may not be clinically significant (ESS score -1.17 points, 95% CI -2.07 to -0.26, N = 336, five studies). Behavioural therapy led to substantial improvement in average machine usage of 1.44 hours per night (95% CI 0.43 to 2.45, N = 584, six studies; low-quality evidence) and increased the number of participants who used their machines for longer than four hours per night from 28 to 47 per 100 (OR 2.23, 95% CI 1.45 to 3.45, N = 358, three studies; low-quality evidence) but with high levels of statistical heterogeneity. The estimated lower rate of withdrawal with behavioural interventions was imprecise and did not reach statistical significance (OR 0.85, 95% CI 0.57 to 1.25, N = 609, five studies, very low-quality evidence). In CPAP-naive people with severe sleep apnoea, low-quality evidence indicates that supportive interventions that encourage people to continue to use their CPAP machines increase usage compared with usual care. Moderate-quality evidence shows that a short-term educational intervention results in a modest increase in CPAP usage. 
Low-quality evidence indicates that behavioural therapy leads to a large increase in CPAP machine usage. The impact of improved CPAP usage on daytime sleepiness, quality of life and long-term cardiovascular risks remains unclear. For outcomes reflecting machine usage, we downgraded for risk of bias and inconsistency. An additional limitation for daytime sleepiness and quality of life measures was imprecision. Trials in people who have struggled to persist with treatment are needed, as currently little evidence is available for this population. Optimal timing and duration and long-term effectiveness of interventions remain uncertain. The relationship between improved machine usage and effect on symptoms and quality of life requires further assessment. Studies addressing the choice of interventions that best match individual patient needs and therefore result in the most successful and cost-effective therapy are needed.
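The "per 100 participants" figures quoted above follow from the reported baseline rates and odds ratios: convert the baseline rate to odds, scale by the OR, and convert back to a proportion. A minimal sketch of that arithmetic (the function name is ours, for illustration only; the behavioural-therapy figure lands at 46 rather than the review's 47 per 100 because the published inputs are rounded):

```python
# Sketch: converting a baseline rate and an odds ratio into the
# "per 100 participants" figures quoted above. Illustrative only;
# small discrepancies arise from rounding of the published inputs.

def risk_after_or(baseline_per_100, odds_ratio):
    p0 = baseline_per_100 / 100
    odds0 = p0 / (1 - p0)          # baseline odds
    odds1 = odds0 * odds_ratio     # odds under the intervention
    p1 = odds1 / (1 + odds1)       # back-transform to a proportion
    return round(p1 * 100)

print(risk_after_or(59, 2.06))  # supportive interventions: 75 per 100
print(risk_after_or(57, 1.80))  # educational interventions: 70 per 100
print(risk_after_or(28, 2.23))  # behavioural therapy: ~46-47 per 100
```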
We looked at evidence from randomised, parallel-group studies. Following a comprehensive literature search and assessment of existing trials, we have included 30 studies with a total of 2047 participants. The vast majority of the participants suffered from excessive daytime sleepiness and severe OSA. Duration of studies ranged from four weeks to 12 months. The evidence is current to January 2013. In combining the results from all trials, we found that all three types of interventions increased CPAP usage to varying degrees. Ongoing supportive interventions were more successful than usual care in increasing CPAP usage by about 50 minutes per night. Educational interventions resulted in a modest improvement of about 35 minutes per night. Behavioural therapy increased machine usage by just under one and a half hours per night. Some inconsistency was noted between the results of individual studies, and this introduces some uncertainty about the size of the difference that might be anticipated in practice. It is unclear whether any of these interventions also led to meaningful improvement of daytime symptoms or quality of life. Studies generally recruited people who were new to CPAP, and currently little evidence is available on people who have struggled to persist with treatment. The cost-effectiveness of the interventions has not been explored, and it is unclear which intervention is best suited for individual patients. Overall, the quality of evidence presented is low because of issues with study design and some inconsistency in results across studies. The quality of evidence for symptoms and quality of life was affected by the low number of studies that measured these outcomes.
We included 12 studies involving 1954 participants. We identified ten studies as being of methodologically poor quality and two studies as being of medium quality. We did not perform a meta-analysis but reported the results separately. Six formulations were shown to be superior to the control in improving recovery: Ertong Qingyan Jiere Koufuye was more effective than Fufang Shuanghua Koufuye for acute pharyngitis (odds ratio (OR) 2.52; 95% confidence interval (CI) 1.11 to 5.74); Yanhouling mixture was more effective than gentamicin atomised inhalation for acute pharyngitis (OR 5.39; 95% CI 2.69 to 10.81); Qinganlan Liyan Hanpian was more effective than Fufang Caoshanhu Hanpian for acute pharyngitis (OR 2.25; 95% CI 1.08 to 4.67); sore throat capsules were more effective than antibiotics (intravenous cefalexin) for acute pharyngitis or acute tonsillitis (OR 2.36; 95% CI 1.01 to 5.51); compound dandelion soup was more effective than sodium penicillin for acute purulent tonsillitis (OR 5.06; 95% CI 1.70 to 15.05); and eliminating heat by nourishing yin and relieving sore-throat methods combined with Dikuiluqan Hanpian was more effective than Dikuiluqan Hanpian alone for children with chronic pharyngitis (OR 2.63; 95% CI 1.02 to 6.79). Another six formulations were shown to be as efficacious as the control. Based on the existing evidence in this review, some Chinese herbal medicines for treating sore throat appeared efficacious. However, due to the lack of high quality clinical trials, the efficacy of Chinese herbal medicine for treating sore throat is controversial and questionable. Therefore we cannot recommend any kind of Chinese medical herbal formulation as an effective remedy for sore throat.
In this updated review, we included a total of 12 studies (including five new studies), involving 1954 participants. Six Chinese herbal medicines may facilitate the improvement of symptoms and increase the rate of recovery. Two studies separately reported one case of diarrhoea and one case of mild nausea; two trials reported no adverse events in the treatment group; and other studies did not report any adverse events. We identified ten studies as being of poor methodological quality and only two studies as being of medium methodological quality. Chinese medicinal herbs may be the treatment choice for sore throat, but we cannot recommend any particular preparation or formulation over another as we did not find any well-designed studies to provide strong evidence to conclusively support or reject the use of Chinese traditional herbal medicines in the treatment of sore throat. Enhancing the quality of research into Chinese medicinal herbs for sore throat is imperative, and stronger evidence from high quality, randomised controlled trials (RCTs) is needed.
We included three trials with 330 participants. We judged the quality of the evidence as very low for all the outcomes. The quality of the data was limited by the lack of complete outcome reporting, unclear risk of bias in the methods in which the studies were conducted, and the age of the studies (> 20 years). The methods of cancer staging and types of surgical procedures, which do not reflect current practice, reduced our confidence in the estimation of the effect. Two studies compared surgery to radiation therapy, and in one study chemotherapy was administered to both arms. One study administered initial chemotherapy, then responders were randomised to surgery versus control; subsequently, both groups underwent chest and whole brain irradiation. Due to the clinical heterogeneity of the trials, we were unable to pool results for meta-analysis. All three studies reported overall survival. One study reported a mean overall survival of 199 days in the surgical arm, compared to 300 days in the radiotherapy arm (P = 0.04). One study reported overall survival as 4% in the surgical arm, compared to 10% in the radiotherapy arm at two years. Conversely, one study reported overall survival at two years as 52% in the surgical arm, compared to 18% in the radiotherapy arm. However, this difference was not statistically significant (P = 0.12). One study reported early postoperative mortality as 7% for the surgical arm, compared to 0% mortality in the radiotherapy arm. One study reported the difference in mean degree of dyspnoea as −1.2 comparing surgical intervention to radiotherapy, indicating that participants undergoing radiotherapy are likely to experience more dyspnoea. This was measured using a non-validated scale. Evidence from currently available RCTs does not support a role for surgical resection in the management of limited-stage small-cell lung cancer; however our conclusions are limited by the quality of the available evidence and the lack of contemporary data. 
The results of the trials included in this review may not be generalisable to patients with clinical stage 1 small-cell lung cancer carefully staged using contemporary staging methods. Although some guidelines currently recommend surgical resection in clinical stage 1 small-cell lung cancer, prospective randomised controlled trials are needed to determine if there is any benefit in terms of short- and long-term mortality and quality of life compared with chemo-radiotherapy alone.
We searched for clinical trials up to 11 January 2017, and we included three studies with 330 people who had been diagnosed with small-cell lung cancer which had not spread outside the chest. Some were given surgery only, and some were not. Also, some were given chemotherapy and radiotherapy along with their surgery, and some were given chemotherapy and radiotherapy without surgery. We looked for a difference in how long people lived, and if their treatment caused any side effects. Key findings The data were all of very low quality. All three studies were quite different so could not be combined. One study reported that people lived longer without surgery (but with radiotherapy) than with surgery. One study reported 4% of people surviving at two years with surgery compared to 10% of people surviving with radiotherapy. One study reported 52% of people surviving with surgery compared to 18% of people surviving with radiotherapy. Our evidence does not support the use of surgery for people with small-cell lung cancer, but the quality of data is low and from more than 20 years ago. Better trials are needed to properly compare surgery with no surgery in people with small-cell lung cancer. Quality of the evidence We rated the quality of the evidence using one of the following grades: very low, low, moderate, or high. Very low quality evidence means we are uncertain about the results. High-quality evidence means we are very certain about the results. For this Cochrane Review, we found that the evidence was of very low quality for all the outcomes studied. We could not combine the trials as they were all very different, and the trials were very old. Some trials did not give enough information about their quality.
Thirty-six trials were included in the review, reporting on 5908 participants randomly allocated to azapirones and/or placebo, benzodiazepines, antidepressants, psychotherapy or kava kava. Azapirones, including buspirone, were superior to placebo in treating GAD. The calculated number needed to treat for azapirones using the Clinical Global Impression scale was 4.4 (95% confidence interval (CI) 2.16 to 15.4). Azapirones may be less effective than benzodiazepines and we were unable to conclude if azapirones were superior to antidepressants, kava kava or psychotherapy. Azapirones appeared to be well tolerated. Fewer participants stopped taking benzodiazepines compared to azapirones. The length of studies ranged from four to nine weeks, with one study lasting 14 weeks. Azapirones appeared to be useful in the treatment of GAD, particularly for those participants who had not been on a benzodiazepine. Azapirones may not be superior to benzodiazepines and do not appear as acceptable as benzodiazepines. Side effects appeared mild and non-serious in the azapirone treated group. Longer term studies are needed to show that azapirones are effective in treating GAD, which is a chronic long-term illness.
This systematic review evaluates the effectiveness of azapirones compared to other treatments. From the results of 36 randomized controlled trials, azapirones appear to be superior to placebo in short-term studies (four to nine weeks) but may not be superior to benzodiazepines. We were unable to conclude if azapirones were superior to antidepressants, psychotherapy or kava kava. As GAD is generally chronic in nature, conclusions about azapirones' long-term efficacy cannot be drawn, and longer-term trials are needed.
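For readers unfamiliar with the number needed to treat quoted above: the NNT is the reciprocal of the absolute risk reduction between arms. A minimal sketch of that calculation, using hypothetical response rates (the rates below are ours, chosen only for illustration; the review itself reports NNT 4.4, 95% CI 2.16 to 15.4, on the Clinical Global Impression scale):

```python
# Sketch: number needed to treat (NNT) as the reciprocal of the
# absolute risk reduction. The response rates used below are
# hypothetical illustrations, not figures from the review.

def nnt(responders_treatment, n_treatment, responders_control, n_control):
    arr = responders_treatment / n_treatment - responders_control / n_control
    return 1 / arr  # absolute risk reduction -> NNT

# Hypothetical example: 55% respond on azapirone vs 32% on placebo
print(round(nnt(55, 100, 32, 100), 1))  # ~4.3 with these made-up rates
```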
This update includes 15 new eligible treatment-comparisons from 12 studies. In total, 28 treatment-comparisons, involving 4418 women, from 24 studies are now included in one or more meta-analyses. Of the 28 treatment-comparisons, 19 and 16 had published or provided extractable time-to-event data on overall survival (OS) or progression-free survival/time to progression (PFS/TTP), respectively. All 28 treatment-comparisons provided overall tumour response rate (OTRR) data that could be included in meta-analyses. Most women recruited to the studies were not selected on the basis of mTNBC status. In a subgroup of three treatment-comparisons assessing women with mTNBC, platinum-containing regimens may have provided a survival benefit (HR 0.75, 95% CI 0.57 to 1.00; low-quality evidence). In women unselected for intrinsic subtypes such as mTNBC, there was little or no effect on survival (HR 1.01, 95% CI 0.92 to 1.12; high-quality evidence). This effect was similar to the combined analysis of survival data for both populations (HR 0.98, 95% CI 0.89 to 1.07; I2 = 39%, 1868 deaths, 2922 women; 19 trials). The difference in treatment effects between mTNBC women compared with unselected women was of borderline statistical significance (P = 0.05). Data from three treatment-comparisons with mTNBC participants showed that platinum regimens may improve PFS/TTP (HR 0.59, 95% CI 0.49 to 0.72; low-quality evidence). Thirteen treatment-comparisons of unselected metastatic participants showed that there was probably a small PFS/TTP benefit for platinum recipients, although the confidence interval included no difference (HR 0.92, 95% CI 0.84 to 1.01; moderate-quality evidence). Combined analysis of data from an estimated 1772 women who progressed or died out of 2136 women selected or unselected for mTNBC indicated that platinum-containing regimens improved PFS/TTP (HR 0.85, 95% CI 0.78 to 0.93). There was marked evidence of heterogeneity (P = 0.0004; I2 = 63%). 
The larger treatment benefit in mTNBC women compared with unselected women was statistically significant (P < 0.0001). There was low-quality evidence of better tumour response in both subgroups of women with mTNBC and unselected women (RR 1.33, 95% CI 1.13 to 1.56; RR 1.11, 95% CI 1.04 to 1.19, respectively). Combined analysis of both populations was closer to the effect in unselected women (RR 1.15, 95% CI 1.08 to 1.22; 4130 women). There was considerable evidence of heterogeneity (P < 0.0001; I2 = 64%), which may reflect between-study differences and general difficulties in assessing response, as well as the varying potencies of the comparators. Compared with women receiving non-platinum regimens: rates of grade 3 and 4 nausea/vomiting were probably higher among women receiving cisplatin- (RR 2.65, 95% CI 2.10 to 3.34; 1731 women; moderate-quality evidence) but the effect from carboplatin-containing regimens was less certain (RR 0.77, 95% CI 0.47 to 1.26; 1441 women; moderate-quality evidence); rates of grade 3 and 4 anaemia were higher among women receiving cisplatin- (RR 3.72, 95% CI 2.36 to 5.88; 1644 women; high-quality evidence) and carboplatin-containing regimens (RR 1.72, 95% CI 1.10 to 2.70; 1441 women; high-quality evidence); rates of grade 3 and 4 hair loss (RR 1.41, 95% CI 1.26 to 1.58; 1452 women; high-quality evidence) and leukopenia (RR 1.38, 95% CI 1.21 to 1.57; 3176 women; moderate-quality evidence) were higher among women receiving platinum-containing regimens (regardless of platinum agent). In women with metastatic breast cancer who do not have triple-negative disease, there is high-quality evidence of little or no survival benefit and excess toxicity from platinum-based regimens. There is preliminary low-quality evidence of a moderate survival benefit from platinum-based regimens for women with mTNBC. Further randomised trials of platinum-based regimens in this subpopulation of women with metastatic breast cancer are required.
Twenty-four studies involving 4418 women were included. The evidence is current to May 2015. Five of the 24 studies specifically assessed women with mTNBC while the other 19 studies assessed women with metastatic breast cancer in general (mainly women without mTNBC). This review found that, compared to chemotherapy without platinum, chemotherapy with platinum did not increase survival time by any important degree for women with metastatic breast cancer in general (mainly women without mTNBC). The quality of the evidence for this was considered to be high, meaning that we are confident about the results. For women with mTNBC, however, this review found that chemotherapy containing platinum may increase survival time over chemotherapy without platinum, but the quality of the evidence for this is low at this point in time (largely due to the small number of studies that have assessed mTNBC). This review also found that chemotherapy including platinum reduced the number of breast cancer recurrences compared to chemotherapy that did not contain platinum in women with mTNBC; however, these findings also currently come from low-quality evidence. There was no difference in the number of breast cancer recurrences for women receiving platinum or non-platinum chemotherapy for metastatic breast cancer in general. Chemotherapy with platinum was more likely to shrink tumours compared to chemotherapy without platinum, but this result needs to be considered cautiously. Compared with women receiving chemotherapy without platinum, women receiving chemotherapy with platinum experienced higher rates of nausea/vomiting, anaemia, leukopenia and hair loss. It is difficult to justify using chemotherapy containing platinum for the treatment of metastatic breast cancer that is not mTNBC, given that similarly effective but less toxic chemotherapy is commonly available. 
Chemotherapy containing platinum may provide a survival benefit to mTNBC participants of sufficient magnitude to justify its use, but the quality of the evidence for this is low at this point in time. Further studies are required before a more definitive conclusion can be made.
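The hazard ratios quoted above can be read against a baseline survival curve: under the proportional-hazards assumption, intervention-arm survival at a given time equals baseline survival raised to the power of the HR. A minimal sketch, assuming an illustrative 40% baseline survival that is ours and not a figure from the review:

```python
# Sketch: interpreting a hazard ratio under the proportional-hazards
# assumption, where S1(t) = S0(t) ** HR. The 40% baseline survival
# below is an assumed illustrative value, not a review result.

def survival_under_hr(baseline_survival, hazard_ratio):
    return baseline_survival ** hazard_ratio

# With the mTNBC subgroup's HR of 0.75 and an assumed 40% baseline
# survival, intervention-arm survival would be about 50%.
print(round(survival_under_hr(0.40, 0.75), 2))
```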
A total of 12 studies with a combined total of 12,168 patients were included in this review. Antiplatelet agents reduced all cause (RR 0.76, 95% CI 0.60 to 0.98) and cardiovascular mortality (RR 0.54, 95% CI 0.32 to 0.93) in patients with IC compared with placebo. A reduction in total cardiovascular events was not statistically significant (RR 0.80, 95% CI 0.63 to 1.01). Data from two trials (which tested clopidogrel and picotamide respectively against aspirin) showed a significantly lower risk of all cause mortality (RR 0.73, 95% CI 0.58 to 0.93) and cardiovascular events (RR 0.81, 95% CI 0.67 to 0.98) with antiplatelets other than aspirin compared with aspirin. Antiplatelet therapy was associated with a higher risk of adverse events, including gastrointestinal symptoms (dyspepsia) (RR 2.11, 95% CI 1.23 to 3.61) and adverse events leading to cessation of therapy (RR 2.05, 95% CI 1.53 to 2.75) compared with placebo; data on major bleeding (RR 1.73, 95% CI 0.51 to 5.83) and on adverse events in trials of aspirin versus alternative antiplatelet were limited. Risk of limb deterioration leading to revascularisation was significantly reduced by antiplatelet treatment compared with placebo (RR 0.65, 95% CI 0.43 to 0.97). Antiplatelet agents have a beneficial effect in reducing all cause mortality and fatal cardiovascular events in patients with IC. Treatment with antiplatelet agents in this patient group however is associated with an increase in adverse effects, including GI symptoms, and healthcare professionals and patients need to be aware of the potential harm as well as the benefit of therapy; more data are required on the effect of antiplatelets on major bleeding. Evidence on the effectiveness of aspirin versus either placebo or an alternative antiplatelet agent is lacking. Evidence for thienopyridine antiplatelet agents was particularly compelling and there is an urgent need for multicentre trials to compare the effects of aspirin against thienopyridines.
Twelve studies with a total of 12,168 patients were included in this review. The analyses show that, in patients with IC, antiplatelet agents reduced the risk of death from all causes, and from heart attack and stroke combined when compared with placebo. When aspirin was compared with other antiplatelet agents, there was some evidence that the alternative antiplatelet had a more beneficial effect in reducing all cause mortality or of suffering a cardiovascular event such as heart attack or stroke. However, this was based on only two trials. Antiplatelet usage, however, does increase the risk of indigestion and may also increase risk of major bleeding events. Despite its widespread use, the evidence for first line use of aspirin in patients with IC is weak and further research is required to determine whether aspirin would be better replaced by a different class of antiplatelet agent which has a greater beneficial effect with fewer side-effects.
This review included 10 RCTs (reported in 12 articles) consisting of 2326 participants. The methodological quality of the studies varied. The type of intervention was separated into three categories: AED versus placebo or standard care, alternative neuroprotective agent versus placebo or standard care and AED versus other AED. Treatment with an AED (phenytoin or carbamazepine) decreased the risk of early seizure compared with placebo or standard care (RR 0.42, 95% CI 0.23 to 0.73; very low quality evidence). There was no evidence of a difference in the risk of late seizure occurrence between AEDs and placebo or standard care (RR 0.91, 95% CI 0.57 to 1.46; very low quality evidence). There was no evidence of a significant difference in all-cause mortality between AEDs and placebo or standard care (RR 1.08, 95% CI 0.79 to 1.46; very low quality evidence). Only one study looked at other potentially neuroprotective agents (magnesium sulfate) compared with placebo. The risk ratios were: late seizure 1.07 (95% CI 0.53 to 2.17) and all-cause mortality 1.20 (95% CI 0.80 to 1.81). The risk ratio for occurrence of early seizure was not estimable. Two studies looked at comparison of two AEDs (levetiracetam, valproate) with phenytoin used as the main comparator in each study. The risk ratio for all-cause mortality was 0.53 (95% CI 0.30 to 0.94). There was no evidence of treatment benefit of phenytoin compared with another AED for early seizures (RR 0.66, 95% CI 0.20 to 2.12) or late seizures (RR 0.77, 95% CI 0.46 to 1.30). Only two studies reported adverse events. The RR of any adverse event with AED compared with placebo was 1.65 (95% CI 0.73 to 3.66; low quality evidence). There were insufficient data on adverse events in the other treatment comparisons. This review found low-quality evidence that early treatment with an AED compared with placebo or standard care reduced the risk of early post-traumatic seizures. 
There was no evidence to support a reduction in the risk of late seizures or mortality. There was insufficient evidence to make any conclusions regarding the effectiveness or safety of other neuroprotective agents compared with placebo or for the comparison of phenytoin, a traditional AED, with another AED.
We searched for studies evaluating the effect of early administration of antiepileptic drugs or other potentially neuroprotective agents (which act by protecting the structure or function of nerves) on post-traumatic epilepsy. The primary outcomes of interest were early post-traumatic seizures (within one week of trauma) and late seizures (later than one week post-trauma). We also looked at death, time to late seizure and side effects. The evidence is current to January 2015. We found 10 clinical trials involving 2326 people reported in 12 published articles. The evidence available indicated that early treatment with a traditional antiepileptic drug (phenytoin or carbamazepine) may reduce the risk of early post-traumatic seizures. Traditional antiepileptic drugs are no more effective than placebo (a pretend pill) or standard care in reducing late seizures or mortality. Limited data were available for the comparison of an AED with another AED and for the comparison of other potentially neuroprotective agents with placebo. Most studies did not report serious side effects and other side effects. The overall quality of the evidence varied and findings should be interpreted with caution.
Four studies enrolled approximately 1700 participants. Trials lasted between nine months and two years. Three studies were randomised controlled trials, two of which used a cluster-randomised design; the fourth trial was probably a controlled trial with researcher-controlled group assignment. In children up to three years of age in Turkey, vitamin D compared to no intervention showed a relative risk of 0.04 (95% confidence interval (CI) 0 to 0.71). Despite marked non-compliance, a Chinese trial in children up to three years of age comparing a combined intervention of vitamin D supplementation, calcium and nutritional counselling with no intervention showed a relative risk of 0.76 (95% CI 0.61 to 0.95). In two studies conducted in older children in China and France, no rickets occurred in either the intervention or the control group. There are only a few studies on the prevention of nutritional rickets in term-born children. Until new data become available, it appears sound to offer preventive measures (vitamin D or calcium) to high-risk groups such as infants and toddlers, children living in Africa, Asia or the Middle East, and children who have migrated from these regions to areas where rickets is infrequent. Due to marked clinical heterogeneity and the scarcity of data, the main and adverse effects of preventive measures against nutritional rickets should be investigated in different countries, in different age groups and in children of different ethnic origin.
Four trials enrolled approximately 1700 participants and lasted between nine months and two years. Study participants were aged from one month to 15 years. There were different results on the occurrence of nutritional rickets in different settings. Adverse effects were investigated in one study only. Considering the high frequency of nutritional rickets in some settings, the plausible mechanism of action of vitamin D or calcium supplementation and the favourable risk-benefit ratio, preventive measures are reasonable in high-risk groups such as infants and toddlers. New studies investigating the main and side effects of preventive measures against nutritional rickets in different age groups and in different countries are indicated.
Twenty studies comprising 667 participants were included in the 2006 review. In that review, there was insufficient evidence of treatment effects on major clinical outcomes to draw clinically meaningful conclusions. Searching to February 2015 identified 40 eligible studies comprising 3483 participants overall. In total, 35 studies (4039 participants) compared HF, HDF or AFB with HD, three studies (54 participants) compared AFB with HDF, and three studies (129 participants) compared HDF with HF. Risks of bias in all studies were generally high resulting in low confidence in estimated treatment effects. Convective dialysis had no significant effect on all-cause mortality (11 studies, 3396 participants: RR 0.87, 95% CI 0.72 to 1.05; I2 = 34%), but significantly reduced cardiovascular mortality (6 studies, 2889 participants: RR 0.75, 95% CI 0.61 to 0.92; I2 = 0%). One study reported no significant effect on rates of nonfatal cardiovascular events (714 participants: RR 1.14, 95% CI 0.86 to 1.50) and two studies showed no significant difference in hospitalisation (2 studies, 1688 participants: RR 1.23, 95% CI 0.93 to 1.63; I2 = 0%). One study reported rates of hypotension during dialysis were significantly reduced with convective therapy (906 participants: RR 0.72, 95% CI 0.66 to 0.80). Adverse events were not systematically evaluated in most studies and data for health-related quality of life were sparse. Convective therapies significantly reduced predialysis levels of β2-microglobulin (12 studies, 1813 participants: MD -5.55 mg/dL, 95% CI -9.11 to -1.98; I2 = 94%) and increased dialysis dose (Kt/V urea) (14 studies, 2022 participants: MD 0.07, 95% CI -0.00 to 0.14; I2 = 90%) compared to diffusive therapy, but results across studies were very heterogeneous. Sensitivity analyses limited to studies comparing HDF with HD showed very similar results. Directly comparative data for differing types of convective dialysis were insufficient to draw conclusions. 
Studies had important risks of bias leading to low confidence in the summary estimates and were generally limited to patients who had adequate dialysis vascular access. Convective dialysis may reduce cardiovascular but not all-cause mortality and effects on nonfatal cardiovascular events and hospitalisation are inconclusive. However, any treatment benefits of convective dialysis on all patient outcomes including cardiovascular death are unreliable due to limitations in study methods and reporting. Future studies which assess treatment effects of convection dose on patient outcomes including mortality and cardiovascular events would be informative.
We identified 40 studies enrolling 4137 adult participants. Of these, 35 studies in 4039 adults compared convective dialysis with standard haemodialysis. Overall the evidence in the studies was low or very low quality due to limitations in the methods used in the research leading to low confidence in the results. Overall, there was no evidence convective dialysis lowered risk of death from any cause but may reduce death due to heart or vascular disease. Overall treating 1000 men and women who have end-stage kidney disease with convective dialysis rather than standard haemodialysis may prevent 25 dying from heart disease. Convective therapy may reduce blood pressure falls during dialysis but there was no evidence that convective dialysis influenced chances of hospital admission or other side-effects, or improved quality of life.
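The "25 fewer deaths per 1000 treated" figure above is an absolute effect derived from the pooled risk ratio and a baseline risk. A minimal sketch of that arithmetic, assuming a 10% baseline cardiovascular mortality purely for illustration (the assumed baseline is ours, not a figure from the review):

```python
# Sketch: deriving an absolute effect per 1000 patients from a
# risk ratio (RR 0.75 for cardiovascular mortality) and an assumed
# baseline risk. The 10% baseline is illustrative only.

def fewer_events_per_1000(baseline_risk, risk_ratio):
    absolute_reduction = baseline_risk * (1 - risk_ratio)
    return round(absolute_reduction * 1000)

print(fewer_events_per_1000(0.10, 0.75))  # 25 fewer deaths per 1000
```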
We included eight studies (seven RCTs, one quasi-RCT, 315 adults (299 women), aged 18 to 75 years): six used a parallel-group design and two used a cross-over design. Sample sizes of intervention arms were five to 43 participants. Two studies, one of which was a cross-over design, compared TENS with placebo TENS (82 participants), one study compared TENS with no treatment (43 participants), and four studies compared TENS with other treatments (medication (two studies, 74 participants), electroacupuncture (one study, 44 participants), superficial warmth (one cross-over study, 32 participants), and hydrotherapy (one study, 10 participants)). Two studies compared TENS plus exercise with exercise alone (98 participants, 49 per treatment arm). None of the studies measured participant-reported pain relief of 50% or greater or PGIC. Overall, the studies were at unclear or high risk of bias, and in particular all were at high risk of bias for small sample size. Only one study (14 participants) measured the primary outcome participant-reported pain relief of 30% or greater. Thirty percent achieved 30% or greater reduction in pain with TENS and exercise compared with 13% with exercise alone. One study found 10/28 participants reported pain relief of 25% or greater with TENS compared with 10/24 participants using superficial warmth (42 °C). We judged that statistical pooling was not possible because there were insufficient data and outcomes were not homogeneous. There were no data for the primary outcomes participant-reported pain relief from baseline of 50% or greater and PGIC. There was a paucity of data for secondary outcomes. One pilot cross-over study of 43 participants found that the mean (95% confidence intervals (CI)) decrease in pain intensity on movement (100-mm visual analogue scale (VAS)) during one 30-minute treatment was 11.1 mm (95% CI 5.9 to 16.3) for TENS and 2.3 mm (95% CI -2.4 to 7.7) for placebo TENS. 
There were no significant differences between TENS and placebo for pain at rest. One parallel group study of 39 participants found that mean ± standard deviation (SD) pain intensity (100-mm VAS) decreased from 85 ± 20 mm at baseline to 43 ± 20 mm after one week of dual-site TENS; decreased from 85 ± 10 mm at baseline to 60 ± 10 mm after single-site TENS; and decreased from 82 ± 20 mm at baseline to 80 ± 20 mm after one week of placebo TENS. The authors of seven studies concluded that TENS relieved pain but the findings of single small studies are unlikely to be correct. One study found clinically important improvements in Fibromyalgia Impact Questionnaire (FIQ) subscales for work performance, fatigue, stiffness, anxiety, and depression for TENS with exercise compared with exercise alone. One study found no additional improvements in FIQ scores when TENS was added to the first three weeks of a 12-week supervised exercise programme. No serious adverse events were reported in any of the studies although there were reports of TENS causing minor discomfort in a total of 3 participants. The quality of evidence was very low. We downgraded the GRADE rating mostly due to a lack of data; therefore, we have little confidence in the effect estimates where available. There was insufficient high-quality evidence to support or refute the use of TENS for fibromyalgia. We found a small number of inadequately powered studies with incomplete reporting of methodologies and treatment interventions.
In January 2017, we found eight clinical studies that examined 315 people. We included TENS administered to produce a non-painful 'tingling' sensation at the site of pain, either as a treatment alone or combined with exercise treatment. All studies used TENS in comparison with 'fake' (called placebo or sham) TENS, no treatment, or other treatments such as medicine or hydrotherapy (treatment in water). We did not find enough high-quality studies to allow us to come to any conclusions about the effectiveness of TENS for fibromyalgia pain. Even though seven studies concluded that TENS relieved pain associated with fibromyalgia, the studies were low quality and the findings for measures of pain were inconsistently reported. Studies did not measure most of our outcomes, and it was not always clear what aspects of pain were being reported (e.g. present pain, remembered pain, pain severity, etc.). Only one small pilot study found that one 30-minute treatment of TENS reduced pain on movement during and immediately after treatment; however, there were too few participants observed and it is unknown whether this effect would be maintained over a longer course of TENS treatments. Overall, it is not possible to judge whether TENS reduces pain associated with fibromyalgia. There were no serious side effects reported in any of the studies. We rated the quality of the evidence from studies using four levels: very low, low, moderate, or high. Very low-quality evidence means that we are very uncertain about the results. High-quality evidence means that we are very confident in the results. The quality of the evidence was very low overall because of a lack of data.
Our systematic review included seven studies with a total of 492 participants. We included 422 participants in our analysis. Thirteen studies are awaiting classification. For the comparison dexmedetomidine versus placebo (six studies, 402 participants), most studies found a reduction in 'rescue' opioid consumption in the first 24 hours after surgery, together with, in general, no clinically important differences in postoperative pain (visual analogue scale (VAS) 0 to 100 mm, where 0 = no pain and 100 = worst imaginable pain) in the first 24 hours after surgery - except for one study (80 participants) with a reduction in VAS pain at two hours after surgery in favour of dexmedetomidine, with a mean difference of -30.00 mm (95% confidence interval (CI) -38.25 to -21.75). As a result of substantial heterogeneity, pooling of data in statistical meta-analyses was not appropriate. The quality of evidence was very low for our primary outcomes because of imprecision of results and risk of bias. Regarding our secondary aims, the evidence was in general too scant to allow robust conclusions, or the estimates were too imprecise or of poor methodological quality. Regarding adverse effects, low-quality data (one study, 80 participants) suggest that the proportion of participants with hypotension requiring intervention was slightly higher in the high-dose dexmedetomidine group, with a risk ratio of 2.50 (95% CI 0.94 to 6.66), but lower doses of dexmedetomidine led to no differences compared with control. Evidence for the comparison dexmedetomidine versus fentanyl was insufficient to permit robust conclusions (one study, 20 participants). Dexmedetomidine, when administered perioperatively for acute pain after abdominal surgery in adults, seemed to have some opioid-sparing effect, together with, in general, no important differences in postoperative pain when compared with placebo.
However, the quality of the evidence was very low as a result of imprecision, methodological limitations and substantial heterogeneity among the seven included studies. The clinical importance for patients is uncertain, inasmuch as the influence of dexmedetomidine on patient-important outcomes such as gastrointestinal function, mobilization and adverse effects could not be satisfactorily determined. All included studies were relatively small, and publication bias could not be ruled out. Applicability of evidence was limited to middle-aged participants who were relatively free of co-morbidity and were undergoing elective abdominal surgery. A potential source of bias was the considerable quantity of unobtainable data from studies that mixed abdominal surgery with other types of surgery. To detect and investigate patient-important outcomes, larger studies with longer periods of follow-up are needed.
Evidence is current to May 2014. We included seven studies with 492 participants from five different countries and included 422 participants in our analysis. Most participants were middle-aged. Participants had almost no diseases other than their reason for having surgery. The type of surgery was planned abdominal surgery. Three of the seven studies looked only at obesity surgery. Participants received dexmedetomidine right before or during their abdominal surgery. Six studies compared dexmedetomidine with no treatment, and one small study compared dexmedetomidine with fentanyl (a strong opioid). We reran the search in May 2015 and found nine studies of interest, which we will discuss when we update the review. In total, 13 studies are awaiting classification. Most of the studies that compared dexmedetomidine with no treatment found that dexmedetomidine reduced the need for opioids for treating pain for 24 hours after surgery. During the same period, no important differences in pain were noted, except one study (80 participants) showed a reduction in intensity of pain at two hours after surgery with dexmedetomidine. The quality of the evidence was very low because the results were not similar across studies, and because some studies were poorly conducted. The influence of dexmedetomidine on postoperative nausea and vomiting could not be determined because results were not similar across studies. No conclusion could be made for bowel function and mobilization and side effects such as postoperative sedation, as data were insufficient. One study with 80 participants reported a higher rate of low blood pressure ('low' meaning that medication was required) for participants receiving a high dose of dexmedetomidine compared with no treatment, but for lower doses of dexmedetomidine, they noted no differences compared with no treatment. For the comparison dexmedetomidine versus fentanyl, data were insufficient to allow conclusions (only one small study). 
Dexmedetomidine - compared with no treatment - seemed to reduce the need for opioids without worsening the experience of postoperative pain after abdominal surgery in adults. However, the quality of evidence was very low because studies were poorly conducted and because results were not similar across studies. The importance of these findings for patients was also uncertain because the influence of dexmedetomidine on bowel function, mobilization and adverse effects could not be properly determined. The seven included studies were small, so side effects associated with use of dexmedetomidine may be greater than this review reported. In addition, we could not obtain relevant data from several studies because investigators mixed abdominal surgery with other types of surgery.
The review included eight randomised controlled trials. Approximately 4300 women were recruited to detect the effect of prophylactic antibiotic administration on pregnancy outcomes. Primary outcomes Antibiotic prophylaxis did not reduce the risk of preterm prelabour rupture of membranes (risk ratio (RR) 0.31; 95% confidence interval (CI) 0.06 to 1.49 (one trial, 229 women), low quality evidence) or preterm delivery (RR 0.88; 95% CI 0.72 to 1.09 (six trials, 3663 women), high quality evidence). However, preterm delivery was reduced in the subgroup of pregnant women with a previous preterm birth who had bacterial vaginosis (BV) during the current pregnancy (RR 0.64; 95% CI 0.47 to 0.88 (one trial, 258 women)), but there was no reduction in the subgroup of pregnant women with previous preterm birth without BV during the pregnancy (RR 1.08; 95% CI 0.66 to 1.77 (two trials, 500 women)). A reduction in the risk of postpartum endometritis (RR 0.55; 95% CI 0.33 to 0.92 (one trial, 196 women)) was observed in high-risk pregnant women (women with a history of preterm birth, low birthweight, stillbirth or early perinatal death) and in all women (RR 0.53; 95% CI 0.35 to 0.82 (three trials, 627 women), moderate quality evidence). There was no difference in low birthweight (RR 0.86; 95% CI 0.53 to 1.39 (four trials, 978 women)) or neonatal sepsis (RR 11.31; 95% CI 0.64 to 200.79 (one trial, 142 women)); and blood culture confirming sepsis was not reported in any of the studies. Secondary outcomes Antibiotic prophylaxis reduced the risk of prelabour rupture of membranes (RR 0.34; 95% CI 0.15 to 0.78 (one trial, 229 women), low quality evidence) and gonococcal infection (RR 0.35; 95% CI 0.13 to 0.94 (one trial, 204 women)). There were no differences observed in other secondary outcomes (congenital abnormality; small-for-gestational age; perinatal mortality), whilst many other secondary outcomes (e.g.
intrapartum fever needing treatment with antibiotics) were not reported in included trials. Regarding the route of antibiotic administration, vaginal antibiotic prophylaxis during pregnancy did not prevent infectious pregnancy outcomes. The overall risk of bias was low, except that incomplete outcome data produced high risk of bias in some studies. The quality of the evidence using GRADE was assessed as low for preterm prelabour rupture of membranes, high for preterm delivery, moderate for postpartum endometritis, low for prelabour rupture of membranes, and very low for chorioamnionitis. Intrapartum fever needing treatment with antibiotics was not reported in any of the included studies. Antibiotic prophylaxis did not reduce the risk of preterm prelabour rupture of membranes or preterm delivery (apart from in the subgroup of women with a previous preterm birth who had bacterial vaginosis). Antibiotic prophylaxis given during the second or third trimester of pregnancy reduced the risk of postpartum endometritis, term pregnancy with pre-labour rupture of membranes and gonococcal infection when given routinely to all pregnant women. Substantial bias possibly exists in the review's results because of a high rate of loss to follow-up and the small numbers of studies included in each of our analyses. There is also insufficient evidence on possible harmful effects on the baby. Therefore, we conclude that there is not enough evidence to support the use of routine antibiotics during pregnancy to prevent infectious adverse effects on pregnancy outcomes.
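The significance statements above follow a standard reading of the reported intervals: an effect counts as a difference only when the 95% confidence interval of the risk ratio lies entirely on one side of 1. A minimal illustrative sketch of that check, using values copied from the results above:

```python
# Illustrative sketch (not part of the review's analysis): a risk ratio (RR)
# shows a clear difference only when its 95% confidence interval (CI) lies
# entirely on one side of 1, the value meaning "no difference".

def ci_excludes_no_effect(lower, upper, null_value=1.0):
    """True when the whole CI lies on one side of the null value."""
    return upper < null_value or lower > null_value

# Postpartum endometritis, all women: RR 0.53 (95% CI 0.35 to 0.82)
print(ci_excludes_no_effect(0.35, 0.82))    # True -> a real reduction
# Neonatal sepsis: RR 11.31 (95% CI 0.64 to 200.79)
print(ci_excludes_no_effect(0.64, 200.79))  # False -> no clear difference
```

This is why the very wide neonatal sepsis interval, despite its large point estimate, is reported as "no difference".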
Antibiotics are administered to pregnant women during the second and third trimester of pregnancy (before labour) to prevent bacteria in the vagina and cervix affecting the pregnancy. Infection by some infectious organisms in a woman’s genital tract can cause health problems for the mother and her baby, and has been associated with preterm births. This review of eight randomised trials involved approximately 4300 women in their second or third trimester. We found that antibiotics did not reduce the risk of preterm prelabour rupture of the membranes (one trial, low quality evidence), or the risk of preterm birth (six trials, high quality evidence). Preterm delivery was reduced in pregnant women who had a previous preterm birth and an imbalance of bacteria in the vagina (bacterial vaginosis) during the current pregnancy. There was no reduction in preterm delivery in pregnant women with previous preterm birth without a bacterial imbalance during the current pregnancy (two trials). Postpartum endometritis, or infection of the uterus following birth, was reduced overall (three trials, moderate quality evidence), as well as in a trial of high-risk women who had a previous preterm birth (one trial, moderate quality evidence). No reduction in neonatal illness was observed. Outcomes of interest were available in trials with high losses to follow-up. We could not estimate the side effects of antibiotics since side effects were rare; however, antibiotics may still have serious side effects on women and their babies. There is, therefore, no justification to give antibiotics to all pregnant women during the second or third trimester to prevent adverse infectious effects on pregnancy outcomes.
Six RCTs compared the effect of oral ascorbic acid (1 to 4 grams) and placebo treatment in CMT1A. In five trials involving adults with CMT1A, a total of 622 participants received ascorbic acid or placebo. Trials were largely at low risk of bias. There is high-quality evidence that ascorbic acid does not improve the course of CMT1A in adults as measured by the CMT neuropathy score (0 to 36 scale) at 12 months (mean difference (MD) -0.37; 95% confidence interval (CI) -0.83 to 0.09; five studies; N = 533), or at 24 months (MD -0.21; 95% CI -0.81 to 0.39; three studies; N = 388). Ascorbic acid treatment showed a positive effect on the nine-hole peg test versus placebo (MD -1.16 seconds; 95% CI -1.96 to -0.37), but the clinical significance of this result is probably small. Meta-analyses of other secondary outcome parameters showed no relevant benefit of ascorbic acid. In one trial, 80 children with CMT1A received ascorbic acid or placebo. The trial showed no clinical benefit of ascorbic acid treatment. Adverse effects did not differ in their nature or abundance between ascorbic acid and placebo. High-quality evidence indicates that ascorbic acid does not improve the course of CMT1A in adults in terms of the outcome parameters used. According to low-quality evidence, ascorbic acid does not improve the course of CMT1A in children. However, CMT1A is slowly progressive and the outcome parameters show only small change over time. Longer study durations should be considered, and outcome parameters more sensitive to change over time should be designed and validated for future studies.
We searched the medical literature for trials of vitamin C in CMT disease and found six trials - five in adults and one in children - on the treatment of CMT type 1A (CMT1A) with vitamin C. All compared vitamin C doses of 1 to 4 grams per day with a placebo (a dummy or sugar pill disguised as vitamin C), and lasted for 12 or 24 months. The trials in adults included a total of 622 people. The other trial included 80 children. The main measure of the effects of vitamin C in this review was change in impairment. We also collected information on disability, nerve conduction studies, sensation, muscle strength, quality of life and harmful effects of vitamin C. We found that ascorbic acid treatment did not improve impairment from CMT1A in adults as measured by the CMT neuropathy score (CMTNS). In children, the CMTNS was not reported, as it is a measure developed for adults with CMT. The measures used for children in this study did not show benefit from vitamin C. The studies were largely at low risk of bias, meaning they were well designed and the results were not easily influenced by chance. Adverse events were similar in nature and number in vitamin C and placebo groups. There is high-quality evidence for adults and low-quality evidence for children that vitamin C does not improve the course of CMT1A. However, CMT progresses slowly, so the study durations of 12 or 24 months may not have been long enough to detect effects of treatment. Further research with longer study duration and more sensitive outcome parameters should be done, although any large effect in adults or children is unlikely.
Eleven studies with 1053 patients (550 on dipyrone) met the inclusion criteria. Unfortunately, few data were available for analysis; most analyses were based on the results of single, small trials and statistical pooling of the results was inappropriate. Efficacy estimates were calculated as the weighted mean percent of patients achieving at least 50% pain relief with the range of values from trials contributing to the analysis. However, these estimates were not robust. Commonly reported adverse effects with intravenous dipyrone were dry mouth and somnolence, and one study reported pain at the injection site. Insufficient information was available for safety analyses. Limited available data indicated that single dose dipyrone was of similar efficacy to other analgesics used in renal colic pain, although intramuscular dipyrone was less effective than diclofenac 75 mg. Combining dipyrone with antispasmolytic agents did not appear to improve its efficacy. Intravenous dipyrone was more effective than intramuscular dipyrone. Dry mouth and somnolence were commonly reported with intravenous dipyrone. None of the studies reported agranulocytosis.
This review aimed to assess the effectiveness and safety of single dose dipyrone in adults with moderate/severe renal colic pain but there were too few data to obtain clear results. The data available indicated that intravenous dipyrone was more effective than intramuscular dipyrone, and combining dipyrone with antispasmolytic agents did not improve its efficacy. Commonly reported side effects included dry mouth and drowsiness, and some patients experienced pain at the injection site. Agranulocytosis was not reported.
We found three trials (involving 1915 babies) for inclusion in the review, but have included only two trials (involving 1302 healthy full-term breastfeeding infants) in the analysis. Meta-analysis of the two combined studies showed that pacifier use in healthy breastfeeding infants had no significant effect on the proportion of infants exclusively breastfed at three months (risk ratio (RR) 1.01; 95% confidence interval (CI) 0.96 to 1.07, two studies, 1228 infants), and at four months of age (RR 1.01; 95% CI 0.94 to 1.09, one study, 970 infants, moderate-quality evidence), and also had no effect on the proportion of infants partially breastfed at three months (RR 1.00; 95% CI 0.98 to 1.02, two studies, 1228 infants), and at four months of age (RR 0.99; 95% CI 0.97 to 1.02, one study, 970 infants). None of the included trials reported data on the other primary outcomes, i.e. duration of partial or exclusive breastfeeding, or secondary outcomes: breastfeeding difficulties (mastitis, cracked nipples, breast engorgement); infant's health (dental malocclusion, otitis media, oral candidiasis; sudden infant death syndrome (SIDS)); maternal satisfaction and level of confidence in parenting. One study reported that avoidance of pacifiers had no effect on cry/fuss behaviour at ages four, six, or nine weeks and also reported no effect on the risk of weaning before age three months; however, the data were incomplete and so could not be included for analysis. Pacifier use in healthy term breastfeeding infants, started from birth or after lactation is established, did not significantly affect the prevalence or duration of exclusive and partial breastfeeding up to four months of age. Evidence to assess the short-term breastfeeding difficulties faced by mothers and long-term effect of pacifiers on infants' health is lacking.
We updated the search on 30 June 2016. We identified three studies, with a total of 1915 babies. One study could not be included in the analysis and so findings are based on two studies involving 1302 infants. The mothers in the studies were motivated to breastfeed and were recruited immediately after birth or at two weeks of life, respectively. We found that unrestricted use of a pacifier did not affect the proportion of infants exclusively or partially breastfed at three and four months. The studies were remarkably consistent. We judged this to be moderate-quality evidence. There was no information on the effect of pacifier use on any breastfeeding difficulties experienced by the mothers, maternal satisfaction, infant crying and fussing, and infant problems such as otitis media and dental malocclusion. In motivated mothers, there is moderate-quality evidence that pacifier use in healthy term breastfeeding infants, started from birth or after lactation is established, does not reduce the duration of breastfeeding up to four months of age. However, there is insufficient information on the potential harms of pacifiers on infants and mothers. Until further information becomes available on the effects of pacifiers on the infant, mothers who are well-motivated to breastfeed should be encouraged to make a decision on the use of a pacifier based on personal preference.
Five studies met the inclusion criteria. Three included studies compared extended-field RT versus pelvic RT, one included study compared extended-field RT with pelvic CRT, and one study compared extended-field CRT versus pelvic CRT. Extended-field radiotherapy versus pelvic radiotherapy alone Compared to pelvic RT, extended-field RT probably reduces the risk of death (hazard ratio (HR) 0.67, 95% confidence interval (CI) 0.48 to 0.94; 1 study; 337 participants; moderate-certainty evidence) and para-aortic lymph node recurrence (risk ratio (RR) 0.36, 95% CI 0.18 to 0.70; 2 studies; 477 participants; moderate-certainty evidence), although there may or may not have been an improvement in the risk of disease progression (HR 0.92, 95% CI 0.69 to 1.22; 1 study; 337 participants; moderate-certainty evidence) and severe adverse events (RR 1.05, 95% CI 0.79 to 1.41; 2 studies; 776 participants; moderate-certainty evidence). Extended-field radiotherapy versus pelvic chemoradiotherapy In a comparison of extended-field RT versus pelvic CRT, women given pelvic CRT probably had a lower risk of death (HR 0.50, 95% CI 0.39 to 0.64; 1 study; 389 participants; moderate-certainty evidence) and disease progression (HR 0.52, 95% CI 0.37 to 0.72; 1 study; 389 participants; moderate-certainty evidence). Participants given extended-field RT may or may not have had a lower risk of para-aortic lymph node recurrence (HR 0.44, 95% CI 0.20 to 0.99; 1 study; 389 participants; low-certainty evidence) and acute severe adverse events (RR 0.05, 95% CI 0.02 to 0.11; 1 study; 388 participants; moderate-certainty evidence). There were no clear differences in terms of late severe adverse events among the comparison groups (RR 1.06, 95% CI 0.69 to 1.62; 1 study; 386 participants; moderate-certainty evidence).
Extended-field chemoradiotherapy versus pelvic chemoradiotherapy Very low-certainty evidence obtained from one small study (74 participants) showed that, compared to pelvic CRT, extended-field CRT may or may not have reduced the risk of death (HR 0.37, 95% CI 0.14 to 0.96) and disease progression (HR 0.25, 95% CI 0.07 to 0.87). There were no clear differences between the groups in the risks of para-aortic lymph node recurrence (RR 0.19, 95% CI 0.02 to 1.54; very low-certainty evidence) and severe adverse events (acute: RR 0.95, 95% CI 0.20 to 4.39; late: RR 0.95, 95% CI 0.06 to 14.59; very low-certainty evidence). Moderate-certainty evidence shows that, compared with pelvic RT alone, extended-field RT probably improves overall survival and reduces the risk of para-aortic lymph node recurrence. However, pelvic RT alone would now be considered substandard treatment, so this result cannot be extrapolated to modern standards of care. Low- to moderate-certainty evidence suggests that pelvic CRT may increase overall and progression-free survival compared to extended-field RT, although there may or may not be a higher rate of para-aortic recurrence and acute adverse events. Compared with pelvic CRT, extended-field CRT may improve overall or progression-free survival, but these findings should be interpreted with caution due to very low-certainty evidence. High-quality RCTs, comparing modern treatment techniques in CRT, are needed to more fully inform treatment for locally advanced cervical cancer without obvious para-aortic node involvement.
We searched databases from their inception to August 2018 and found five studies that met the inclusion criteria. Three studies compared extended-field RT versus pelvic RT. None of these three studies compared against the current gold standard of pelvic CRT. One study compared extended-field RT versus pelvic CRT and one study compared extended-field CRT versus pelvic CRT. Compared with pelvic RT alone, women given extended-field RT may have been less likely to die and probably were less likely to have their cervical cancer come back (recurrence) in the para-aortic lymph nodes. However, extended-field RT may have made little or no difference to how often their cancer recurred elsewhere and how often they experienced severe side effects. Pelvic CRT is the modern standard of treatment for locally advanced cervical cancer. In a comparison of extended-field RT alone versus pelvic CRT, women given pelvic CRT were probably less likely to die or have recurrence of their cancer. Women given extended-field RT alone may have been less likely to experience a recurrence within the para-aortic lymph nodes and to have adverse events during or shortly after treatment. There were no clear differences regarding the late adverse events between the two groups. Women given extended-field CRT may or may not have been less likely to die or have cancer progression than those given pelvic CRT. There were no clear differences in the chances of experiencing a cancer recurrence in the para-aortic lymph nodes and severe side effects between the groups. The evidence for outcomes in the comparison of extended-field RT alone versus pelvic RT alone was of moderate certainty. In the comparison of extended-field RT versus pelvic CRT, the evidence regarding survival and side effects was of moderate certainty. The evidence for para-aortic recurrence was of low certainty.
The evidence for all outcomes in the comparison of extended-field CRT versus pelvic CRT was of very low certainty because of concerns regarding the high risk of bias and results coming from a single trial of very few women. We are moderately certain that, compared with pelvic RT alone, extended-field RT probably improves overall survival and reduces the risk of para-aortic lymph node recurrence. However, pelvic RT alone would now not be considered the standard of care in women well enough to receive CRT, so these results should be viewed with caution and cannot be extrapolated to modern treatment techniques. Low- to moderate-certainty evidence supports the use of pelvic CRT rather than extended-field RT alone, as it appears to reduce the risk of death and cancer progression. The likelihood of experiencing unwanted side effects during treatment was higher among women receiving pelvic CRT than extended-field RT. Evidence comparing extended-field CRT with pelvic CRT was of very low certainty for all outcomes, so extended-field CRT may or may not improve survival.
Eleven trials met the inclusion criteria. Ten trials (1460 infants) compared vitamin A supplementation with a control and one (120 infants) compared different regimens of vitamin A supplementation. Compared to the control group, vitamin A appeared to have a small benefit in reducing the risk of death or oxygen requirement at one month of age (typical RR 0.93, 95% CI 0.88 to 0.99; typical RD −0.05, 95% CI −0.10 to −0.01; NNTB 20, 95% CI 10 to 100; 6 studies, 1165 infants) and the risk of chronic lung disease (oxygen requirement) at 36 weeks' postmenstrual age (typical RR 0.87, 95% CI 0.77 to 0.99; typical RD −0.07, 95% CI −0.13 to −0.01; NNTB 11, 95% CI 6 to 100; 5 studies, 986 infants) (moderate-quality evidence). There was a marginal reduction of the combined outcome of death or chronic lung disease (typical RR 0.92, 95% CI 0.84 to 1.01; typical RD −0.05, 95% CI −0.11 to 0.01; 4 studies, 1089 infants). Neurodevelopmental assessment of 88% of the surviving infants in the largest trial showed no difference between the groups at 18 to 22 months of age, corrected for prematurity (low-quality evidence). There is no evidence to support different vitamin A dosing regimens. No adverse effects of vitamin A supplementation were reported, but it was noted that intramuscular injections of vitamin A were painful. Whether clinicians decide to utilise repeat intramuscular doses of vitamin A to prevent chronic lung disease may depend upon the local incidence of this outcome and the value attached to achieving a modest reduction in the outcome balanced against the lack of other proven benefits and the acceptability of the treatment. Information on long-term neurodevelopmental status suggests no evidence of either benefit or harm from the intervention.
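The numbers needed to treat for benefit (NNTB) quoted above follow directly from the typical risk differences (RD): NNTB is the reciprocal of the absolute risk difference. A minimal illustrative sketch of that arithmetic, using the pooled values reported for death or oxygen requirement at one month (illustration only; the review's estimates come from meta-analysis of the trials):

```python
# Illustrative sketch (not part of the review's analysis): NNTB is the
# reciprocal of the absolute risk difference (RD). The inputs below are the
# pooled estimates reported above for death or oxygen requirement at one
# month of age (typical RD -0.05, 95% CI -0.10 to -0.01).

def nntb(rd):
    """Number needed to treat for benefit, from an absolute risk difference."""
    return round(1 / abs(rd))

print(nntb(-0.05))  # 20  -> matches the reported NNTB of 20
print(nntb(-0.10))  # 10  -> lower CI bound of the RD gives the NNTB of 10
print(nntb(-0.01))  # 100 -> upper CI bound of the RD gives the NNTB of 100
```

The same reciprocal relationship underlies the NNTB reported for chronic lung disease at 36 weeks, where the review used the unrounded pooled RD.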
Eleven trials were included in this review, ten comparing vitamin A with a control (placebo or no supplementation) and one comparing different vitamin A regimens. The search for eligible trials was updated in May 2016. Compared to the control group, supplementing very low birth weight infants with vitamin A appears to have a small benefit in reducing the risk of death or oxygen requirement at one month of age and the risk of chronic lung disease (oxygen requirement) at 36 weeks' postmenstrual age (moderate-quality evidence). There was a marginal reduction of the combined outcome of death or chronic lung disease (moderate-quality evidence). Although there is a statistical reduction in chronic lung disease, these findings are consistent with either a meaningful impact on chronic lung disease or a negligible impact. The one trial that investigated neurodevelopmental status at 18 to 22 months of age correcting for prematurity found no evidence of benefit or harm associated with vitamin A supplementation compared to control (low-quality evidence). No adverse effects of vitamin A supplementation were reported, but it was noted that intramuscular injections of vitamin A were painful. Whether clinicians decide to utilise repeat intramuscular doses of vitamin A to prevent chronic lung disease may depend upon the local incidence of this outcome and the value attached to achieving a modest reduction in the outcome balanced against the lack of other proven benefits and the acceptability of the treatment. Information on long-term neurodevelopmental status suggests no evidence of either benefit or harm from the intervention.
We identified 23 studies (16 RCTs, 6 of high quality), including 37,561 participants and 4042 family caregivers, largely with advanced cancer but also congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), HIV/AIDS and multiple sclerosis (MS), among other conditions. Meta-analysis showed increased odds of dying at home (odds ratio (OR) 2.21, 95% CI 1.31 to 3.71; Z = 2.98, P value = 0.003; Chi2 = 20.57, degrees of freedom (df) = 6, P value = 0.002; I2 = 71%; NNTB 5, 95% CI 3 to 14 (seven trials with 1222 participants, three of high quality)). In addition, narrative synthesis showed evidence of small but statistically significant beneficial effects of home palliative care services compared to usual care on reducing symptom burden for patients (three trials, two of high quality, and one CBA with 2107 participants) and of no effect on caregiver grief (three RCTs, two of high quality, and one CBA with 2113 caregivers). Evidence on cost-effectiveness (six studies) is inconclusive. The results provide clear and reliable evidence that home palliative care increases the chance of dying at home and reduces symptom burden in particular for patients with cancer, without impacting on caregiver grief. This justifies providing home palliative care for patients who wish to die at home. More work is needed to study cost-effectiveness especially for people with non-malignant conditions, assessing place of death and appropriate outcomes that are sensitive to change and valid in these populations, and to compare different models of home palliative care, in powered studies.
We reviewed all known studies that evaluated home palliative care services, i.e. experienced home care teams of health professionals specialised in the control of a wide range of problems associated with advanced illness – physical, psychological, social, spiritual. We wanted to see how much of a difference these services make to people's chances of dying at home, but also to other important aspects for patients towards the end of life, such as symptoms (e.g. pain) and family distress. We also examined the impact of these services on the costs of care. On the basis of 23 studies including 37,561 patients and 4042 family caregivers, we found that when someone with an advanced illness gets home palliative care, their chances of dying at home more than double. Home palliative care services also help reduce the symptom burden people may experience as a result of advanced illness, without increasing grief for family caregivers after the patient dies. In these circumstances, patients who wish to die at home should be offered home palliative care. There is still scope to improve home palliative care services and increase the benefits for patients and families without raising costs.
We identified two eligible studies with small numbers of infants enrolled (64 infants). Prophylaxis with cromolyn sodium did not result in a statistically significant effect on the combined outcome of mortality and CLD at 28 days (typical RR 1.05, 95% CI 0.73 to 1.52; typical RD 0.03, 95% CI -0.20 to 0.27; 2 trials, 64 infants; I2 = 0% for both RR and RD); mortality at 28 days (typical RR 1.31, 95% CI 0.52 to 3.29; I2 = 73%; typical RD 0.06, 95% CI -0.13 to 0.26; I2 = 87%; 2 trials, 64 infants) (very low quality evidence); CLD at 28 days (typical RR 0.93, 95% CI 0.53 to 1.64; I2 = 40%; typical RD -0.03, 95% CI -0.27 to 0.20; I2 = 38%; 2 trials, 64 infants) or at 36 weeks' PMA (RR 1.25, 95% CI 0.43 to 3.63; RD 0.08, 95% CI -0.29 to 0.44; 1 trial, 26 infants). There was no significant difference in CLD in survivors at 28 days (typical RR 0.97, 95% CI 0.58 to 1.63; typical RD -0.02, 95% CI -0.29 to 0.26; I2 = 0% for both RR and RD; 2 trials, 50 infants) or at 36 weeks' PMA (RR 1.04, 95% CI 0.38 to 2.87; RD 0.02, 95% CI -0.40 to 0.43; 1 trial, 22 infants). Prophylaxis with cromolyn sodium did not show a statistically significant difference in overall neonatal mortality, incidence of air leaks, necrotising enterocolitis, intraventricular haemorrhage, sepsis, and days of mechanical ventilation. There were no adverse effects noted. The quality of evidence according to GRADE was very low for one outcome (mortality at 28 days) and low for all other outcomes. The reasons for downgrading the evidence were study design (risk of bias in one study), inconsistency between the two studies (high I2 values for mortality at 28 days for both RR and RD), and lack of precision of estimates (small sample sizes). Further research does not seem to be justified. There is currently no evidence from randomised trials that cromolyn sodium has a role in the prevention of CLD. Cromolyn sodium cannot be recommended for the prevention of CLD in preterm infants.
We found only two studies enrolling 64 infants. In one of the two studies, there was a low risk of bias whereas in the second study there were concerns about how the infants had been put into treatment groups, and whether parents and doctors were aware of which treatment was given (random sequence generation, allocation concealment and blinding of outcomes assessment). We found no studies that received funding from the industry. Prophylaxis with cromolyn sodium did not result in an important effect on the combined outcome of mortality or chronic lung disease at 28 days of age; mortality at 28 days; chronic lung disease at 28 days or at 36 weeks' PMA; or chronic lung disease in survivors at 28 days or at 36 weeks' PMA. This review of trials found no strong evidence that cromolyn sodium can prevent or reduce chronic lung disease and further research does not seem to be justified. The quality of evidence was low for most measures.
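The "typical RR" and "typical RD" figures quoted in this review are pooled versions of per-trial risk ratios and risk differences. A minimal sketch of the per-trial building blocks, with invented counts (not the review's data):

```python
# Hedged sketch: per-trial risk ratio (RR) and risk difference (RD),
# the building blocks of the pooled "typical RR/RD" estimates above.
# The counts are invented for illustration only.

def risk_ratio(events_t, n_t, events_c, n_c):
    """RR = risk in treated arm / risk in control arm."""
    return (events_t / n_t) / (events_c / n_c)

def risk_difference(events_t, n_t, events_c, n_c):
    """RD = risk in treated arm - risk in control arm."""
    return events_t / n_t - events_c / n_c

# Hypothetical trial: 10/30 outcomes with cromolyn vs 9/30 with control.
rr = risk_ratio(10, 30, 9, 30)
rd = risk_difference(10, 30, 9, 30)
print(round(rr, 2), round(rd, 3))  # 1.11 0.033
```

A meta-analysis then combines the per-trial RRs (or RDs) using weights based on each trial's precision; the I2 statistics reported above describe how consistent the trials are with one another.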
We obtained data from 18 trials and 2074 patients for the first comparison. Considering these trials together, there was a high level of statistical heterogeneity, a substantial amount of which was explained by analyses of trial groups. Trials using chemotherapy cycle lengths shorter than 14 days (HR = 0.83, 95% CI = 0.69 to 1.00, p = 0.046) or cisplatin dose intensities greater than 25 mg/m2 per week (HR = 0.91, 95% CI = 0.78 to 1.05, p = 0.20) tended to show an advantage of neoadjuvant chemotherapy on survival. In contrast, trials using cycle lengths longer than 14 days (HR = 1.25, 95% CI = 1.07 to 1.46, p = 0.005) or cisplatin dose intensities lower than 25 mg/m2 per week (HR = 1.35, 95% CI = 1.11 to 1.64, p = 0.002) showed a detrimental effect of neoadjuvant chemotherapy on survival. In the second comparison, data from 5 trials and 872 patients were obtained. The combined results (HR = 0.65, 95% CI = 0.53 to 0.80, p = 0.0004) indicated a highly significant reduction in the risk of death with neoadjuvant chemotherapy, but with heterogeneity in both the design and results. The timing and dose intensity of cisplatin-based neoadjuvant chemotherapy appears to have an important impact on whether or not it benefits women with locally advanced cervical cancer and warrants further exploration. Obtaining additional IPD may improve the strength of these conclusions.
Our first comparison was based on 18 trials and 2074 women. The women who were given chemotherapy either more than a fortnight apart or with a less intense dose of cisplatin before their radiotherapy, did not live as long as those who were only given radiotherapy. However, women given chemotherapy either less than a fortnight apart or with a more intense dose of cisplatin before their radiotherapy, seemed to live longer than those who were only given radiotherapy. These latter results are based on less data and are less convincing. There were very few serious side effects that continued long after treatment and they seemed to be similar whether chemotherapy was given or not. Our second comparison was based on 5 trials and 872 women. The women who were given chemotherapy before surgery seemed to live longer than those who were only given radiotherapy. However, there was a small amount of data, there were differences between results of trials and other treatments were used. Therefore, it is not clear if the benefit might be for reasons other than the chemotherapy. Further assessment of neoadjuvant chemotherapy in randomised trials is required. It may be valuable to compare it to a combined chemotherapy and radiotherapy approach or even to use neoadjuvant chemotherapy together with combined chemotherapy and radiotherapy.
Our review included 12,545 AF participants with CKD from five studies. All participants were randomised to either DOAC (apixaban, dabigatran, edoxaban, and rivaroxaban) or dose-adjusted warfarin. Four studies used a central, interactive, automated response system for allocation concealment while the other did not specify concealment methods. Four studies were blinded while the other was partially open-label. However, given that all studies involved blinded evaluation of outcome events, we considered the risk of bias to be low. We were unable to create funnel plots due to the small number of studies, preventing assessment of publication bias. Study duration ranged from 1.8 to 2.8 years. The large majority of participants included in this review were CKD stage G3 (12,155), and a small number were stage G4 (390). Of 12,545 participants from five studies, a total of 321 cases (2.56%) of the primary efficacy outcome occurred per year. Further, of 12,521 participants from five studies, a total of 617 cases (4.93%) of the primary safety outcome occurred per year. In comparison with warfarin, DOAC probably reduce the incidence of stroke and systemic embolism events (5 studies, 12,545 participants: RR 0.81, 95% CI 0.65 to 1.00; moderate certainty evidence) and may slightly reduce the incidence of major bleeding events (5 studies, 12,521 participants: RR 0.79, 95% CI 0.59 to 1.04; low certainty evidence). Our findings indicate that DOAC are as likely as warfarin to prevent all strokes and systemic embolic events without increasing risk of major bleeding events among AF patients with kidney impairment. These findings should encourage physicians to prescribe DOAC in AF patients with CKD without fear of bleeding. The major limitation is that the results of this study chiefly reflect CKD stage G3. Application of the results to CKD stage G4 patients requires additional investigation. Furthermore, we could not assess CKD stage G5 patients. 
Future reviews should assess participants at more advanced CKD stages. Additionally, we could not conduct detailed analyses of subgroups and sensitivity analyses due to lack of data.
DOAC probably reduced the incidence of stroke and systemic embolic events as a primary efficacy outcome, compared to warfarin. Further, DOAC might slightly reduce the incidence of major bleeding events as a primary safety outcome, compared to warfarin. This review demonstrated that DOAC are as likely as warfarin to prevent all strokes and systemic embolic events without increasing major bleeding events among AF patients with CKD. According to GRADE, the quality of the evidence was moderate for the primary efficacy outcome because of concerns with imprecision and low for the primary safety outcome because of concerns with inconsistency and imprecision. The results of this study chiefly apply to CKD stage G3 patients, since we could not assess those with CKD stage G4 or G5.
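The annual event rates quoted for this review follow directly from dividing events by participants. A small arithmetic check:

```python
# Check of the annual event rates reported in the review:
# 321 primary efficacy events among 12,545 participants, and
# 617 primary safety events among 12,521 participants.
efficacy_rate = 321 / 12545 * 100   # stroke/systemic embolism, % per year
safety_rate = 617 / 12521 * 100     # major bleeding, % per year
print(round(efficacy_rate, 2))  # 2.56
print(round(safety_rate, 2))    # 4.93
```

Both values match the percentages stated in the abstract (2.56% and 4.93% per year).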
Eight studies were identified, but data were available from only seven trials enrolling 1410 preterm infants. There was no significant difference detected in neonatal mortality or neurodevelopmental outcome at two years between infants treated with ethamsylate and controls. Infants treated with ethamsylate had significantly less intraventricular haemorrhage than controls at < 31 weeks (typical RR 0.63, 95% CI 0.47 to 0.86) and < 35 weeks gestation (typical RR 0.77, 95% CI 0.65 to 0.92). There was also a significant reduction in grade 3 and 4 intraventricular haemorrhage when all infants < 35 weeks gestation (typical RR 0.67, 95% CI 0.49 to 0.94) were analysed as a single group, but not for the group of infants < 32 weeks alone. There was a reduction in symptomatic patent ductus arteriosus at < 31 weeks gestation (typical RR 0.32, 95% CI 0.12 to 0.87). There were no adverse effects of ethamsylate identified from this systematic review. Preterm infants treated with ethamsylate showed no reductions in mortality or neurodevelopmental impairment despite the reduction in any grade of intraventricular haemorrhage seen in infants < 35 weeks gestation.
A total of seven studies with 1410 preterm infants were included in this review. Most of these studies were conducted between 1980 and 1990. Preterm infants treated with ethamsylate had similar outcomes with respect to death and disability at the age of two years when compared to infants who were treated with a placebo. Infants born less than 35 weeks gestation appeared to have less intraventricular haemorrhage when treated with ethamsylate compared to controls; however, this did not lead to improved developmental outcome in later childhood. There were no adverse effects noted with ethamsylate treatment. Based on these results, routine use of ethamsylate for prematurely born infants to prevent intraventricular haemorrhage cannot be recommended. It is highly unlikely that any further trials will be conducted to explore this clinical question.
We included 22 RCTs (2902 women). Participants were from different ethnic backgrounds with the majority of Chinese origin. When CHM was compared with placebo (eight RCTs), there was little or no evidence of a difference between the groups for the following pooled outcomes: hot flushes per day (MD 0.00, 95% CI -0.88 to 0.89; 2 trials, 199 women; moderate quality evidence); hot flushes per day assessed by an overall hot flush score in which a difference of one point equates to one mild hot flush per day (MD -0.81 points, 95% CI -2.08 to 0.45; 3 RCTs, 263 women; low quality evidence); and overall vasomotor symptoms per month measured by the Menopause-Specific Quality of Life questionnaire (MENQOL, scale 0 to 6) (MD -0.42 points; 95% CI -1.52 to 0.68; 3 RCTs, 256 women; low quality evidence). In addition, results from individual studies suggested there was no evidence of a difference between the groups for daily hot flushes assessed by severity (MD -0.70 points, 95% CI -1.00 to -0.40; 1 RCT, 108 women; moderate quality evidence); or overall monthly hot flushes scores (MD -2.80 points, 95% CI -8.93 to 3.33; 1 RCT, 84 women; very low quality evidence); or overall daily night sweats scores (MD 0.07 points, 95% CI -0.19 to 0.33; 1 RCT, 64 women; low quality evidence); or overall monthly night sweats scores (MD 1.30 points, 95% CI -1.76 to 4.36; 1 RCT, 84 women; very low quality evidence). However one study using the Kupperman Index reported that overall monthly vasomotor symptom scores were lower in the CHM group (MD -4.79 points, 95% CI -5.52 to -4.06; 1 RCT, 69 women; low quality evidence). When CHM was compared with hormone therapy (HT) (10 RCTs), only two RCTs reported monthly vasomotor symptoms using MENQOL. It was uncertain whether CHM reduces vasomotor symptoms (MD 0.47 points, 95% CI -0.50 to 1.44; 2 RCTs, 127 women; very low quality evidence). Adverse effects were not fully reported in the included studies. 
Adverse events reported by women taking CHM included mild diarrhoea, breast tenderness, gastric discomfort and an unpleasant taste. Effects were inconclusive because of imprecise estimates: CHM versus placebo (RR 1.51; 95% CI 0.69 to 3.33; 7 trials, 705 women; I² = 40%); CHM versus HT (RR 0.96; 95% CI 0.66 to 1.39; 2 RCTs, 864 women; I² = 0%); and CHM versus specific conventional medications (such as Fluoxetine and Estazolam) (RR 0.20; 95% CI 0.03 to 1.17; 2 RCTs, 139 women; I² = 61%). We found insufficient evidence that Chinese herbal medicines were any more or less effective than placebo or HT for the relief of vasomotor symptoms. Effects on safety were inconclusive. The quality of the evidence ranged from very low to moderate; there is a need for well-designed randomised controlled studies.
This review examined 22 randomised clinical trials where 2902 women took part in the studies; 1499 in the CHM group and 1403 in the control group which might include a placebo (non-active compound made to look, taste and smell the same as the study compound) or a drug or HT or another CHM formula (different from the one being tested). Most of the studies had a trial period of 12 weeks. The data are current to March 2015. We found insufficient evidence that CHM were any more or less effective than placebo or HT for the relief of vasomotor symptoms. Adverse effects were not well reported; some women taking CHM reported mild diarrhoea, breast tenderness, gastric discomfort and an unpleasant taste. Effects on safety were inconclusive. The quality of the evidence ranged from very low to moderate. The studies did not produce good quality evidence to allow the authors to draw a conclusive statement regarding the effectiveness or safety of CHM.
This updated review includes 30 studies (one RCT with two arms and 29 observational studies) with a total of 99,224 participants. We included 19 studies in the original review (n = 3459), all of which were observational, with 13 studies included in the meta-analysis for mortality. We included 12 new studies in this update (one RCT and 11 observational studies), and excluded one study in the original review as it has been superseded by a more recent analysis. Twenty-one studies were included in the meta-analysis (9536 individuals), of which 15 studied people infected with 2009 influenza A H1N1 virus (H1N1pdm09). Data specific to mortality were of very low quality, based predominantly on observational studies, with inconsistent reporting of variables potentially associated with the outcomes of interest, differences between studies in the way in which they were conducted, and with the likelihood of potential confounding by indication. Reported doses of corticosteroids were high, and indications for their use were not well reported. On meta-analysis, corticosteroid therapy was associated with increased mortality (odds ratio (OR) 3.90, 95% confidence interval (CI) 2.31 to 6.60; I2 = 68%; 15 studies). A similar increase in risk of mortality was seen in a stratified analysis of studies reporting adjusted estimates (OR 2.23, 95% CI 1.54 to 3.24; I2 = 0%; 5 studies). An association between corticosteroid therapy and increased mortality was also seen on pooled analysis of six studies which reported adjusted hazard ratios (HRs) (HR 1.49, 95% CI 1.09 to 2.02; I2 = 69%). Increased odds of hospital-acquired infection related to corticosteroid therapy were found on pooled analysis of seven studies (pooled OR 2.74, 95% CI 1.51 to 4.95; I2 = 90%); all were unadjusted estimates, and we graded the data as of very low certainty. 
We found one RCT of adjunctive corticosteroid therapy for treating people with community-acquired pneumonia, but the number of people with laboratory-confirmed influenza in the treatment and placebo arms was too small to draw conclusions regarding the effect of corticosteroids in this group, and we did not include it in our meta-analyses of observational studies. The certainty of the available evidence from observational studies was very low, with confounding by indication a major potential concern. Although we found that adjunctive corticosteroid therapy is associated with increased mortality, this result should be interpreted with caution. In the context of clinical trials of adjunctive corticosteroid therapy in sepsis and pneumonia that report improved outcomes, including decreased mortality, more high-quality research is needed (both RCTs and observational studies that adjust for confounding by indication). The currently available evidence is insufficient to determine the effectiveness of corticosteroids for people with influenza.
We searched for studies comparing additional steroid treatment with no additional steroid treatment in individuals with influenza. The evidence is current to 3 October 2018. We identified a total of 30 studies with 99,224 individuals; one of these studies was a clinical trial. The majority of studies investigated adults admitted to hospital with pandemic influenza in 2009 and 2010. We found one relevant clinical trial, but there were very few participants (n = 24) with laboratory-confirmed influenza. The certainty of the evidence available from existing observational studies was very low. We found that people with influenza who received additional steroid treatment may have a greater risk of death compared to those who did not receive steroid treatment. Hospital-acquired infection was the main 'side effect' related to steroid treatment reported in the included studies; most studies reported a greater risk of hospital-acquired infection in the group treated with steroids. However, it was unclear whether patients with more severe influenza had been selected to receive steroid treatment. Consequently, we were unable to determine whether additional steroid treatment in people with influenza is truly harmful or not. Further clinical trials of additional steroids in the treatment of individuals with influenza are therefore warranted. In the meantime, the use of steroids in influenza remains a clinical judgement call. In the one controlled trial there were only 24 participants with confirmed influenza infection, and there was under-representation of the sickest patients in the intensive care unit and with sepsis. The rest of the evidence was from observational studies, and we classified the certainty of this evidence as very low. 
A major limitation was that the indications for corticosteroid therapy were not fully specified in many of the studies; corticosteroids may have been used as a final attempt in people with the most severe disease, or conversely they may have been used to treat less severe illnesses that occurred simultaneously such as asthma exacerbations. It was noted in some studies that there was a high degree of association between the use of corticosteroids and the presence of potentially confounding factors such as disease severity and underlying illnesses, suggesting that confounding by the indication for corticosteroids was likely if not adjusted for when determining effect estimates. We noted inconsistent reporting of other important variables that may be related to influenza-related death across studies, including time to hospitalisation, the use and timing of antiviral drugs and antibiotics, and the type, dose, timing, and duration of corticosteroid therapy. Additionally, for studies in which this information was reported, there were differences between studies in the way that disease severity was measured, the time point at which death was assessed, and the proportions of cases and controls treated with antivirals and/or antibiotics and in the type, dose, timing, and duration of corticosteroid therapy.
Thirty-one studies (contributing 33 data sets), randomising 534 participants, met the inclusion criteria of the review. Oxygen improved all pooled outcomes relating to endurance exercise capacity (distance, time, number of steps) and maximal exercise capacity (exercise time and work rate). Data relating to VO2 max could not be pooled and results from the original studies were not consistent. For the secondary outcomes of breathlessness, SaO2 and VE, comparisons were made at isotime. In all studies except two, the isotime was defined as the time at which the placebo test ended. Oxygen improved breathlessness, SaO2/PaO2 and VE at isotime with endurance exercise testing. There were no data on breathlessness at isotime with maximal exercise testing. With maximal exercise testing, oxygen improved SaO2/PaO2 and reduced VE at isotime. This review provides some evidence from small, single assessment studies that ambulatory oxygen improves exercise performance in people with moderate to severe COPD. The results of the review may be affected by publication bias, and the small sample sizes in the studies. Although positive, the findings of the review require replication in larger trials with more distinct subgroups of participants. Maximal or endurance tests can be used in ambulatory oxygen assessment. Consideration should be given to the measurement of SaO2 and breathlessness at isotime as these provide important additional information. We recommend that these outcomes are included in the assessment for ambulatory oxygen. Future research needs to establish the level of benefit of ambulatory oxygen in specific subgroups of people with COPD.
Short-term studies indicate that people with chronic obstructive pulmonary disease respond to the administration of oxygen when they do exercise tests. Ambulatory oxygen is the use of supplemental oxygen during exercise and activities of daily living. One way to assess if ambulatory oxygen is beneficial for a patient with COPD is to compare the effects of breathing oxygen and breathing air on exercise capacity. Some people with COPD may benefit more than others, and trials should take account of whether people who do not already meet criteria for domiciliary oxygen also respond. This review shows that there is strong evidence that ambulatory oxygen (short-term) improves exercise capacity. Further research needs to focus on which COPD patients benefit from ambulatory oxygen, how much oxygen should be provided and the long-term effect of ambulatory oxygen.
We found two studies that met our inclusion criteria but they were of unclear quality. One study, involving 307 women, compared vaginal examinations with rectal examinations, and the other study, involving 150 women, compared two-hourly with four-hourly vaginal examinations. Both studies were of unclear quality in terms of risk of selection bias, and the study comparing the timing of the vaginal examinations excluded 27% (two hourly) to 28% (four hourly) of women after randomisation because they no longer met the inclusion criteria. When comparing routine vaginal examinations with routine rectal examinations to assess the progress of labour, we identified no difference in neonatal infections requiring antibiotics (risk ratio (RR) 0.33, 95% confidence interval (CI) 0.01 to 8.07, one study, 307 infants). There were no data on the other primary outcomes of length of labour, maternal infections requiring antibiotics and women's overall views of labour. The study did show that significantly fewer women reported that vaginal examination was very uncomfortable compared with rectal examinations (RR 0.42, 95% CI 0.25 to 0.70, one study, 303 women). We identified no difference in the secondary outcomes of augmentation, caesarean section, spontaneous vaginal birth, operative vaginal birth, perinatal mortality and admission to neonatal intensive care. Comparing two-hourly vaginal examinations with four-hourly vaginal examinations in labour, we found no difference in length of labour (mean difference in minutes (MD) -6.00, 95% CI -88.70 to 76.70, one study, 109 women). There were no data on the other primary outcomes of maternal or neonatal infections requiring antibiotics, and women's overall views of labour. We identified no difference in the secondary outcomes of augmentation, epidural for pain relief, caesarean section, spontaneous vaginal birth and operative vaginal birth. On the basis of women's preferences, vaginal examination seems to be preferred to rectal examination. 
For all other outcomes, we found no evidence to support or reject the use of routine vaginal examinations in labour to improve outcomes for women and babies. The two studies included in the review were both small, and carried out in high-income countries in the 1990s. It is surprising that there is such a widespread use of this intervention without good evidence of effectiveness, particularly considering the sensitivity of the procedure for the women receiving it, and the potential for adverse consequences in some settings. The effectiveness of the use and timing of routine vaginal examinations in labour, and other ways of assessing progress in labour, including maternal behavioural cues, should be the focus of new research as a matter of urgency. Women's views of ways of assessing labour progress should be given high priority in any future research in this area.
We found two studies, undertaken in the 1990s in high-income countries, but their quality was unclear. One study, involving 307 women, compared routine vaginal and rectal examinations in labour. Here, fewer women reported that vaginal examinations were very uncomfortable compared with rectal examinations. The other study, involving 150 women, compared two-hourly and four-hourly vaginal examinations, but no difference in outcomes was seen. We identified no convincing evidence to support, or reject, the use of routine vaginal examinations in labour, yet this is common practice throughout the world. More research is needed to find out if vaginal examinations are a useful measure of both normal and abnormal labour progress. If vaginal examination is not a good measure of progress, there is an urgent need to identify and evaluate an alternative measure to ensure the best outcome for mothers and babies.
Eight trials comprising 468 participants were included. For the primary outcome of subjective tinnitus loudness we found no evidence of a difference between CBT and no treatment or another intervention (yoga, education and 'minimal contact - education'). In the secondary outcomes we found evidence that quality of life scores were improved in participants who had tinnitus when comparing CBT to no treatment or another intervention (education and 'minimal contact education'). We also found evidence that depression scores improved when comparing CBT to no treatment. We found no evidence of benefit in depression scores when comparing CBT to other treatments (yoga, education and 'minimal contact - education'). There were no adverse/side effects reported in any trial. In six studies we found no evidence of a significant difference in the subjective loudness of tinnitus. However, we found a significant improvement in depression score (in six studies) and quality of life (decrease of global tinnitus severity) in another five studies, suggesting that CBT has a positive effect on the management of tinnitus.
Eight trials (468 participants) are included in this review. Data analysis did not demonstrate any significant effect in the subjective loudness of tinnitus. We found, however, a significant improvement in the depression associated with tinnitus and quality of life (decrease of global tinnitus severity), suggesting that cognitive behavioural therapy has a positive effect on the way in which people cope with tinnitus. Further research should use a limited number of validated questionnaires in a more consistent way and with a longer follow up to assess the long-term effect of cognitive behavioural therapy in patients with tinnitus.
We included four cross-over RCTs with a total of 209 participants, ranging in age from 23 to 85 and with a preponderance of men. All the studies allowed the use of hearing aids for a total period of at least eight weeks before questions on preference were asked. All studies recruited patients with bilateral hearing loss but there was considerable variation in the types and degree of sensorineural hearing loss that the participants were experiencing. Three of the studies were published before the mid-1990s whereas the fourth study was published in 2011. Therefore, only the most recent study used hearing aids incorporating technology comparable to that currently readily available in high-income settings. Of the four studies, two were conducted in the UK in National Health Service (NHS – public sector) patients: one recruited patients from primary care with hearing loss detected by a screening programme whereas the other recruited patients who had been referred by their primary care practitioner to an otolaryngology department for hearing aids. The other two studies were conducted in the United States: one study recruited only military personnel or veterans with noise-induced hearing loss whereas about half of the participants in the other study were veterans. Only one primary outcome (patient preference) was reported in all studies. The percentage of patients who preferred bilateral hearing aids varied between studies: this was 54% (51 out of 94 participants), 39% (22 out of 56), 55% (16 out of 29) and 77% (23 out of 30), respectively. We have not combined the data from these four studies. The evidence for this outcome is of very low quality. The other outcomes of interest were not reported in the included studies. This review identified only four studies comparing the use of one hearing aid with two. The studies were small and included participants of widely varying ages. 
There was also considerable variation in the types and degree of sensorineural hearing loss that the participants were experiencing. For the most part, the types of hearing aid evaluated would now be regarded, in high-income settings, as 'old technology', with only one study looking at 'modern' digital aids. However, the relevance of this is uncertain, as this review did not evaluate the differences in outcomes between the different types of technology. We were unable to pool data from the four studies and the very low quality of the evidence leads us to conclude that we do not know if people with hearing loss have a preference for one aid or two. Similarly, we do not know if hearing-specific health-related quality of life, or any of our other outcomes, are better with bilateral or unilateral aids.
We included four studies with a total of 209 patients, ranging in age from 23 to 85 and with more men than women. All the studies allowed the use of hearing aids for a total period of at least eight weeks before questions were asked about their preference for one or two aids. In all the studies the patients had bilateral hearing loss but there was considerable variation in what type of hearing loss they suffered from and how bad their hearing was. Three of the studies were published before the mid-1990s and the fourth study was published in 2011. Therefore, only the most recent study used 'modern' hearing aids similar to those that are widely available in high-income countries. Of the four studies, two were conducted in the UK in National Health Service (NHS – public sector) patients. One of these looked at patients from primary care whose hearing loss had been picked up by a screening programme. The other looked at patients whose primary care practitioner thought they might benefit from hearing aids so had referred them to the local ENT department to get them. The other two studies were conducted in the United States: one study recruited only people on active military duty, or who had served in the military and had hearing loss due to being exposed to loud noises. About half of the people in the other study were ex-military. Only one of the outcomes we thought was most important - patient preference - was reported in all studies. The percentage of patients who preferred two hearing aids to one varied between studies: this was 54% (51 out of 94), 39% (22 out of 56), 55% (16 out of 29) and 77% (23 out of 30), respectively. We did not combine the numbers from these four studies because it would not have been right to do so. We graded the quality of evidence for this outcome as very low on a scale that goes high – medium – low – very low. There was no information in the four studies on the other outcomes we were interested in. 
This review identified only four studies comparing the use of one hearing aid with two. The studies were small and included people of widely varying ages. There was also considerable variation in the types of hearing loss the participants had and in how severe it was. For the most part, the types of hearing aid evaluated would now be regarded, in high-income countries, as 'old technology', with only one study looking at 'modern' digital aids. However, we do not know if this is relevant or not. This review did not look at the differences between other 'old' and 'new' types of hearing aid. We could not combine the numbers from the four studies. Overall, this fact and the very low quality of the evidence lead us to conclude that we do not know if patients have a preference for one aid or two. Similarly, we do not know if a patient's quality of life is better with one or two aids.
We included 17 relevant trials (with 556 review-relevant participants) which we categorised into three types of interventions: (1) various exposures to bright light (n = 10); (2) various opportunities for napping (n = 4); and (3) other interventions, such as physical exercise or sleep education (n = 3). In most instances, the studies were too heterogeneous to pool. Most of the comparisons yielded low to very low quality evidence. Only one comparison provided moderate quality evidence. Overall, the included studies' results were inconclusive. We present the results regarding sleepiness below. Bright light Combining two comparable studies (with 184 participants altogether) that investigated the effect of bright light during the night on sleepiness during a shift revealed a mean reduction of 0.83 score points of sleepiness, measured via the Stanford Sleepiness Scale (SSS) (95% confidence interval (CI) -1.30 to -0.36, very low quality evidence). Another trial did not find a significant difference in overall sleepiness on another sleepiness scale (16 participants, low quality evidence). Bright light during the night plus sunglasses at dawn did not significantly influence sleepiness compared to normal light (1 study, 17 participants, assessment via reaction time, very low quality evidence). Bright light during the day shift did not significantly reduce sleepiness during the day compared to normal light (1 trial, 61 participants, subjective assessment, low quality evidence) or compared to normal light plus placebo capsule (1 trial, 12 participants, assessment via reaction time, very low quality evidence). Napping during the night shift A meta-analysis of a single nap opportunity and its effect on mean reaction time as a surrogate for sleepiness showed an 11.87 ms reduction (95% CI -8.20 to 31.94, very low quality evidence).
Two other studies also reported statistically non-significant decreases in reaction time (1 study, seven participants; 1 study, 49 participants, very low quality evidence). A two-nap opportunity resulted in a statistically non-significant increase of sleepiness (subjective assessment) in one study (mean difference (MD) 2.32, 95% CI -24.74 to 29.38, 1 study, 15 participants, low quality evidence). Other interventions Physical exercise and sleep education interventions showed promise, but sufficient data to draw conclusions are lacking. Given the methodological diversity of the included studies, in terms of interventions, settings and assessment tools, their limited reporting and the very low to low quality of the evidence they present, it is not possible to determine whether shift workers' sleepiness can be reduced or whether their sleep length or quality can be improved with these interventions. We need better and adequately powered RCTs of the effects of bright light and naps, either alone or in combination, and of other non-pharmacological interventions, that also take shift workers' chronobiology into account when investigating these sleep parameters.
We found 17 randomised controlled trials (with 556 participants) to include in this review. We rated the quality of evidence provided by most of the included studies to be between low and very low. The studies could be divided into three different types of interventions: (1) exposure to bright light; (2) a napping opportunity during the night shift; or (3) others, like physical activity or sleep education. Bright light Almost all of the bright light studies we looked at had some problem with the way they were designed. This problem made it difficult to know whether any differences in sleepiness and sleep between those receiving bright light and those not receiving it were truly because of the bright light intervention. The studies were also too different in the types of bright light they used, and in the types of light the control groups received, to compare them to one another. Napping The studies in the napping group did not report enough information for us to be certain whether napping helps shift workers feel more awake. The studies were very short, with each study lasting only a single night. Others This group of studies, which included, for example, physical exercise and sleep education, also reported too little information for us to say whether these interventions can make shift workers less sleepy on-shift or help them sleep longer and better after their shift. We conclude that there is too much uncertainty to determine whether any person-directed, non-drug intervention can really help shift workers with sleepiness and sleep problems. We need studies that are better designed, report their designs and results more clearly, include more participants and last for a longer time before we can be certain. Studies also need to find out whether their participants are 'morning-types' or 'evening-types', to be sure that the right type of shift worker gets the right type of intervention. We searched for studies that had been published up to August 2015.