dataset (stringclasses, 1 value) | id (stringlengths, 1–5) | messages (listlengths, 2) |
---|---|---|
query-laysum | 22037 | [
{
"role": "user",
"content": "Abstract: Background\nHuman milk alone may provide inadequate amounts of protein to meet the growth requirements of preterm infants because of restrictions in the amount of fluid they can tolerate. It has become common practice to feed preterm infants with breast milk fortified with protein and other nutrients but there is debate about the optimal concentration of protein in commercially available fortifiers.\nObjectives\nTo compare the effects of different protein concentrations in human milk fortifier, fed to preterm infants, on growth and neurodevelopment.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search CENTRAL (2019, Issue 8), Ovid MEDLINE and CINAHL on 15 August 2019. We also searched clinical trials databases and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nWe included all published and unpublished randomised, quasi-randomised and cluster-randomised trials comparing two different concentrations of protein in human milk fortifier.\nWe included preterm infants (less than 37 weeks' gestational age). Participants may have been exclusively fed human milk or have been supplemented with formula.\nThe concentration of protein was classified as low (< 1g protein/100 mL expressed breast milk (EBM)), moderate (≥ 1g to < 1.4g protein/100 mL EBM) or high (≥ 1.4g protein/100 mL EBM). We excluded trials that compared two protein concentrations that fell within the same category.\nData collection and analysis\nWe undertook data collection and analyses using the standard methods of Cochrane Neonatal. Two review authors independently evaluated trials. Primary outcomes included growth, neurodevelopmental outcome and mortality. Data were synthesised using risk ratios (RR), risk differences and mean differences (MD), with 95% confidence intervals (CI). We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe identified nine trials involving 861 infants. There is one trial awaiting classification, and nine ongoing trials. The trials were mostly conducted in infants born < 32 weeks' gestational age or < 1500 g birthweight, or both. All used a fortifier derived from bovine milk. Two trials fed infants exclusively with mother's own milk, three trials gave supplementary feeds with donor human milk and four trials supplemented with preterm infant formula. Overall, trials were small but generally at low or unclear risk of bias.\nHigh versus moderate protein concentration of human milk fortifier\nThere was moderate certainty evidence that a high protein concentration likely increased in-hospital weight gain compared to moderate concentration of human milk fortifier (MD 0.66 g/kg/day, 95% CI 0.51 to 0.82; trials = 6, participants = 606). 
The evidence was very uncertain about the effect of high versus moderate protein concentration on length gain (MD 0.01 cm/week, 95% CI –0.01 to 0.03; trials = 5, participants = 547; very low certainty evidence) and head circumference gain (MD 0.00 cm/week, 95% CI –0.01 to 0.02; trials = 5, participants = 549; very low certainty evidence).\nOnly one trial reported neonatal mortality, with no deaths in either group (participants = 45).\nModerate versus low protein concentration of human milk fortifier\nA moderate versus low protein concentration fortifier may increase weight gain (MD 2.08 g/kg/day, 95% CI 0.38 to 3.77; trials = 2, participants = 176; very low certainty evidence) with little to no effect on head circumference gain (MD 0.13 cm/week, 95% CI 0.00 to 0.26; I² = 85%; trials = 3, participants = 217; very low certainty evidence), but the evidence is very uncertain. There was low certainty evidence that a moderate protein concentration may increase length gain (MD 0.09 cm/week, 95% CI 0.05 to 0.14; trials = 3, participants = 217).\nOnly one trial reported mortality and found no difference between groups (RR 0.48, 95% CI 0.05 to 5.17; participants = 112).\nNo trials reported long term growth or neurodevelopmental outcomes including cerebral palsy and developmental delay.\nAuthors' conclusions\nFeeding preterm infants with a human milk fortifier containing high amounts of protein (≥ 1.4g/100 mL EBM) compared with a fortifier containing moderate protein concentration (≥ 1 g to < 1.4 g/100 mL EBM) results in small increases in weight gain during the neonatal admission. There may also be small increases in weight and length gain when infants are fed a fortifier containing moderate versus low protein concentration (< 1 g protein/100 mL EBM). The certainty of this evidence is very low to moderate; therefore, results may change when the findings of ongoing studies are available. There is insufficient evidence to assess the impact of protein concentration on adverse effects or long term outcomes such as neurodevelopment. Further trials are needed to determine whether modest increases in weight gain observed with higher protein concentration fortifiers are associated with benefits or harms to long term growth and neurodevelopment.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Feeding preterm infants with a human milk fortifier that contains a high protein concentration versus a moderate protein concentration resulted in small increases in weight gain but not length gain or head growth during hospital admission after birth. There were small increases in weight gain and length gain in infants fed a human milk fortifier that contained moderate concentrations of protein compared with low concentrations. There was no clear effect of protein concentration on infant death during the initial hospitalisation. There were only limited data on other health outcomes, and this evidence suggests the amount of protein in human milk fortifier does not affect the risk of infections or feeding or bowel problems. There were no available trial data about infant growth after hospital discharge, or long term development."
}
] |
query-laysum | 22038 | [
{
"role": "user",
"content": "Abstract: Background\nSuicide is a leading cause of death among post-secondary students worldwide. Suicidal thoughts and planning are common among post-secondary students. Previous reviews have examined the effectiveness of interventions for symptomatic individuals; however, many students at high risk of suicide are undiagnosed and untreated.\nObjectives\nWe evaluated the effect on suicide and suicide-related outcomes of primary suicide prevention interventions that targeted students within the post-secondary setting.\nSearch methods\nWe searched the following sources up to June 2011: Specialised Registers of two Cochrane Groups, Cochrane Central Register of Controlled Trials, and nine other databases, trial registers, conference proceedings, and websites of national and international organizations. We screened reference lists and contacted authors of included studies to identify additional studies. We updated the search in November 2013; we will include these results in the review's next update.\nSelection criteria\nWe included studies that tested an intervention for the primary prevention of suicide using a randomized controlled trial (RCT), controlled before-and-after (CBA), controlled interrupted time series (CITS), or interrupted time series (ITS) study design. Interventions targeted students within the post-secondary setting (i.e. college, university, academy, vocational, or any other post-secondary educational institution) without known mental illness, previous suicide attempt or self-harm, or suicidal ideation. Outcomes included suicides, suicide attempts, suicidal ideation, changes in suicide-related knowledge, attitudes and behavior, and availability of means of suicide.\nData collection and analysis\nWe used standardized electronic forms for data extraction, risk of bias and quality of evidence determination, and analysis. We estimated standardised mean differences (SMD) with 95% confidence intervals (CIs). We analysed studies by intervention type and study design. We summarized RCT effect sizes using random-effects models meta-analyses; and analysed statistical heterogeneity using the Chi2 test and I2 statistic. We described narratively the results from studies that used other study designs.\nMain results\nEight studies met inclusion criteria. They were heterogeneous in terms of participants, study designs, and interventions. Five of eight studies had high risk of bias. In 3 RCTs (312 participants), classroom-based didactic and experiential programs increased short-term knowledge of suicide (SMD = 1.51, 95% CI 0.57 to 2.45; moderate quality evidence) and knowledge of suicide prevention (SMD = 0.72, 95% CI 0.36 to 1.07; moderate quality evidence). The effect on suicide prevention self-efficacy in one RCT (152 participants) was uncertain (SMD = 0.20, 95% CI -0.13 to 0.54; low quality evidence). One CBA analysed the effects of an institutional policy that restricted student access to laboratory cyanide and mandated professional assessment for suicidal students. The incidence of student suicide decreased significantly at one university with the policy relative to 11 control universities, 2.00 vs. 8.68 per 100,000 (Z = 5.90; P < 0.05). Four CBAs explored effects of training 'gatekeepers' to recognize and respond to warning signs of emotional crises and suicide risk in students they encountered. The magnitude of effect sizes varied between studies. 
Gatekeeper training enhanced short-term suicide knowledge in students, peer advisors residing in student accommodation, and faculty and staff, and suicide prevention self-efficacy among peer advisors. There was no evidence of an effect on participants' suicide-related attitudes or behaviors. One CBA found no evidence of effects of gatekeeper training of peer advisors on suicide-related knowledge, self-efficacy, or gatekeeper behaviors measured four to six months after intervention.\nAuthors' conclusions\nWe found insufficient evidence to support widespread implementation of any programs or policies for primary suicide prevention in post-secondary educational settings. As all evaluated interventions combined primary and secondary prevention components, we were unable to determine the independent effects of primary preventive interventions. Classroom instruction and gatekeeper training increased short-term suicide-related knowledge. We found no studies that tested the effects of classroom instruction on suicidal behavior or long-term outcomes. Limited evidence suggested minimal longer-term effects of gatekeeper training on suicide-related knowledge, while no evidence was found evaluating its effect on suicidal behavior. A policy-based suicide intervention reduced student suicide, but findings have not been replicated. Our findings are limited by the overall low quality of the evidence and the lack of studies from middle- and low-income countries. Rigorously designed studies should test the effects of preventive interventions on important health outcomes, including suicidal ideation and behavior, in varying post-secondary settings.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified eight studies that were eligible for this Cochrane Review. All studies had both primary and secondary prevention components. That is, they targeted students known to be suicidal as well as those not known to be suicidal. We separately analysed the effects of classroom instruction, institutional policies, and gatekeeper training programs. Gatekeeper training programs train people to recognize and respond to warning signs of emotional crises or suicide risk in students they encounter. The evidence is current to June 2011."
}
] |
query-laysum | 22039 | [
{
"role": "user",
"content": "Abstract: Background\nTreating vaso-occlusive painful crises in people with sickle cell disease is complex and requires multiple interventions. Extra fluids are routinely given as adjunct treatment, regardless of the individual's state of hydration with the aim of slowing or stopping the sickling process and thereby alleviating pain. This is an update of a previously published Cochrane Review.\nObjectives\nTo determine the optimal route, quantity and type of fluid replacement for people with sickle cell disease with acute painful crises.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register which comprises of references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings.\nWe also conducted searches of Embase (November 2007), LILACS, www.ClinicalTrials.gov (05 January 2010), and the WHO ICTRP (30 June 2017).\nDate of most recent search of the Group's Haemoglobinopathies Trials Register: 16 February 2017.\nSelection criteria\nRandomised and quasi-randomised controlled trials that compared the administration of supplemental fluids adjunctive to analgesics by any route in people with any type of sickle cell disease during an acute painful episode, under medical supervision (inpatient, day care or community).\nData collection and analysis\nNo relevant trials have yet been identified.\nMain results\nSixteen trials were identified by the searches, all of which were not eligible for inclusion in the review.\nAuthors' conclusions\nTreating vaso-occlusive crises is complex and requires multiple interventions. Extra fluids, generally oral or intravenous, are routinely administered during acute painful episodes to people with sickle cell disease regardless of the individual's state of hydration. Reports of their use during these acute painful episodes do not state the efficacy of any single route, type or quantity of fluid compared to another. However, there are no randomised controlled trials that have assessed the safety and efficacy of different routes, types or quantities of fluid. This systematic review identifies the need for a multicentre randomised controlled trial assessing the efficacy and possible adverse effects of different routes, types and quantities of fluid administered to people with sickle cell disease during acute painful episodes.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We conclude that there is a need for large multi-centre trials to assess fluid replacement therapy for people with sickle cell disease with acute painful crises."
}
] |
query-laysum | 22040 | [
{
"role": "user",
"content": "Abstract: Background\nGeneral anaesthesia causes atelectasis, which can lead to impaired respiratory function. Positive end-expiratory pressure (PEEP) is a mechanical manoeuvre that increases functional residual capacity (FRC) and prevents collapse of the airways, thereby reducing atelectasis. It is not known whether intraoperative PEEP alters the risks of postoperative mortality and pulmonary complications. This review was originally published in 2010 and was updated in 2013.\nObjectives\nTo assess the benefits and harms of intraoperative PEEP in terms of postoperative mortality and pulmonary outcomes in all adult surgical patients.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2013, Issue 10, part of The Cochrane Library, as well as MEDLINE (via Ovid) (1966 to October 2013), EMBASE (via Ovid) (1980 to October 2013), CINAHL (via EBSCOhost) (1982 to October 2013), ISI Web of Science (1945 to October 2013) and LILACS (via BIREME interface) (1982 to October 2010). The original search was performed in January 2010.\nSelection criteria\nWe included randomized clinical trials assessing the effects of PEEP versus no PEEP during general anaesthesia on postoperative mortality and postoperative respiratory complications in adults, 16 years of age and older.\nData collection and analysis\nTwo review authors independently selected papers, assessed trial quality and extracted data. We contacted study authors to ask for additional information, when necessary. We calculated the number of additional participants needed (information size) to make reliable conclusions.\nMain results\nThis updated review includes two new randomized trials. In total, 10 randomized trials with 432 participants and four comparisons are included in this review. One trial had a low risk of bias. No differences were demonstrated in mortality, with risk ratio (RR) of 0.97 (95% confidence interval (CI) 0.20 to 4.59; P value 0.97; 268 participants, six trials, very low quality of evidence (grading of recommendations assessment, development and evaluation (GRADE)), and in pneumonia, with RR of 0.40 (95% CI 0.11 to 1.39; P value 0.15; 120 participants, three trials, very low quality of evidence (GRADE)). Statistically significant results included the following: The PEEP group had higher arterial oxygen pressure (PaO2)/fraction of inspired oxygen (FiO2) on day one postoperatively, with a mean difference of 22.98 (95% CI 4.40 to 41.55; P value 0.02; 80 participants, two trials, very low quality of evidence (GRADE)), and postoperative atelectasis (defined as an area of collapsed lung, quantified by computerized tomography scan) was less in the PEEP group (standard mean difference -1.2, 95% CI -1.78 to -0.79; P value 0.00001; 88 participants, two trials, very low quality of evidence (GRADE)). No adverse events were reported in the three trials that adequately measured these outcomes (barotrauma and cardiac complications). Using information size calculations, we estimated that a further 21,200 participants would have to be randomly assigned to allow a reliable conclusion about PEEP and mortality.\nAuthors' conclusions\nEvidence is currently insufficient to permit conclusions about whether intraoperative PEEP alters risks of postoperative mortality and respiratory complications among undifferentiated surgical patients.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Six trials reported mortality. We pooled these data and found no differences between the group of patients who received PEEP and those who did not, but because of the small number of patients, and the fact that this outcome may be rare, these results did not allow us to make a conclusion about the effect of PEEP on mortality. Two results suggested some benefit of PEEP. First, oxygenation was better on the day after surgery in the PEEP group. Second, radiological imaging showed less atelectasis after surgery in the PEEP group. The studies that we found did not suggest that intraoperative PEEP causes harm.\nBecause of the small number of studies, this finding is inconclusive. We performed calculations to predict how many more participants would be needed before reliable conclusions can be made about the effect on mortality of the application of PEEP. This number was 21,200.\nEvidence is currently insufficient to allow conclusions about how intraoperative PEEP affects postoperative mortality and respiratory complications."
}
] |
query-laysum | 22041 | [
{
"role": "user",
"content": "Abstract: Background\nOpen-angle glaucoma (OAG) is an important cause of blindness worldwide. Laser trabeculoplasty, a treatment modality, still does not have a clear position in the treatment sequence.\nObjectives\nTo assess the effects of laser trabeculoplasty for treating OAG and ocular hypertension (OHT) when compared to medication, glaucoma surgery or no intervention. We also wished to compare the effectiveness of different laser trabeculoplasty technologies for treating OAG and OHT.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2021, Issue 10); Ovid MEDLINE; Ovid Embase; the ISRCTN registry; LILACS, ClinicalTrials.gov and the WHO ICTRP. The date of the search was 28 October 2021. We also contacted researchers in the field.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing laser trabeculoplasty with no intervention, with medical treatment, or with surgery in people with OAG or OHT. We also included trials comparing different types of laser trabeculoplasty technologies.\nData collection and analysis\nWe used standard methods expected by Cochrane. Two authors screened search results and extracted data independently. We considered the following outcomes at 24 months: failure to control intraocular pressure (IOP), failure to stabilise visual field progression, failure to stabilise optic neuropathy progression, adverse effects, quality of life, and costs. We graded the 'certainty' of the evidence using GRADE.\nMain results\nWe included 40 studies (5613 eyes of 4028 people) in this review. The majority of the studies were conducted in Europe and in the USA. Most of the studies were at risk of performance and/or detection bias as they were unmasked. None of the studies were judged as having low risk of bias for all domains. We did not identify any studies of laser trabeculoplasty alone versus no intervention.\nLaser trabeculoplasty versus medication\nFourteen studies compared laser trabeculoplasty with medication in either people with primary OAG (7 studies) or primary or secondary OAG (7 studies); five of the 14 studies also included participants with OHT. Six studies used argon laser trabeculoplasty and eight studies used selective laser trabeculoplasty. There was considerable clinical and methodological diversity in these studies leading to statistical heterogeneity in results for the primary outcome \"failure to control IOP\" at 24 months. Risk ratios (RRs) ranged from 0.43 in favour of laser trabeculoplasty to 1.87 in favour of medication (5 studies, I2 = 89%). Studies of argon laser compared with medication were more likely to show a beneficial effect compared with studies of selective laser (test for interaction P = 0.0001) but the argon laser studies were older and the medication comparator group in those studies may have been less effective. We considered this to be low-certainty evidence because the trials were at risk of bias (they were not masked) and there was unexplained heterogeneity. There was evidence from two studies (624 eyes) that argon laser treatment was associated with less failure to stabilise visual field progression compared with medication (7% versus 11%, RR 0.70, 95% CI 0.42 to 1.16) at 24 months and one further large recent study of selective laser also reported a reduced risk of failure at 48 months (17% versus 26%) RR 0.65, 95% CI 0.52 to 0.81, 1178 eyes). 
We judged this outcome as moderate-certainty evidence, downgrading for risk of bias. There was only very low-certainty evidence on optic neuropathy progression. Adverse effects were more commonly seen in the laser trabeculoplasty group including peripheral anterior synechiae (PAS) associated with argon laser (32% versus 26%, RR 11.74, 95% CI 5.94 to 23.22; 624 eyes; 2 RCTs; low-certainty evidence); 5% of participants treated with laser in three studies of selective laser group had early IOP spikes (moderate-certainty evidence). One UK-based study provided moderate-certainty evidence that laser trabeculoplasty was more cost-effective.\nLaser trabeculoplasty versus trabeculectomy\nThree studies compared laser trabeculoplasty with trabeculectomy. All three studies enrolled participants with OAG (primary or secondary) and used argon laser. People receiving laser trabeculoplasty may have a higher risk of uncontrolled IOP at 24 months compared with people receiving trabeculectomy (16% versus 8%, RR 2.12, 95% CI 1.44 to 3.11; 901 eyes; 2 RCTs). We judged this to be low-certainty evidence because of risk of bias (trials were not masked) and there was inconsistency between the two trials (I2 = 68%). There was limited evidence on visual field progression suggesting a higher risk of failure with laser trabeculoplasty. There was no information on optic neuropathy progression, quality of life or costs. PAS formation and IOP spikes were not reported but in one study trabeculectomy was associated with an increased risk of cataract (RR 1.78, 95% CI 1.46 to 2.16) (very low-certainty evidence).\nAuthors' conclusions\nLaser trabeculoplasty may work better than topical medication in slowing down the progression of open-angle glaucoma (rate of visual field loss) and may be similar to modern eye drops in controlling eye pressure at a lower cost. It is not associated with serious unwanted effects, particularly for the newer types of trabeculoplasty, such as selective laser trabeculoplasty.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cochrane Review authors searched for studies that had been published up to 28 October 2021."
}
] |
query-laysum | 22042 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is an update of a previously published review in the Cochrane Database of Systematic Reviews (Issue 6, 2015).\nFailure to respond to antiepileptic drugs in patients with uncontrolled seizure activity such as refractory status epilepticus (RSE) has led to the use of anaesthetic drugs. Coma is induced with anaesthetic drugs to achieve complete control of seizure activity. Thiopental sodium and propofol are popularly used for this purpose. Both agents have been found to be effective. However, there is a substantial lack of evidence as to which of the two drugs is better in terms of clinical outcomes.\nObjectives\nTo compare the efficacy, adverse effects, and short- and long-term outcomes of refractory status epilepticus (RSE) treated with one of the two anaesthetic agents, thiopental sodium or propofol.\nSearch methods\nWe searched the Cochrane Epilepsy Group Specialized Register (16 August 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online (CRSO, 16 August 2016), MEDLINE (Ovid, 1946 to 16 August 2016), ClinicalTrials.gov (16 August 2016), and the South Asian Database of Controlled Clinical Trials (16 August 2016). Previously we searched IndMED, but this was not accessible at the time of the latest update.\nSelection criteria\nAll randomised controlled trials (RCTs) or quasi-RCTs (regardless of blinding) assessing the control of RSE using either thiopental sodium or propofol in patients of any age and gender.\nData collection and analysis\nTwo review authors screened the search results and reviewed the abstracts of relevant and eligible trials before retrieving the full-text publications.\nMain results\nOne study with a total of 24 participants was available for review. This study was a small, single-blind, multicentre trial studying adults with RSE receiving either propofol or thiopental sodium for the control of seizure activity. This study was terminated early due to recruitment problems. For our primary outcome of total control of seizures after the first course of study drug, there were 6/14 patients versus 2/7 patients in the propofol and thiopental sodium groups, respectively (risk ratio (RR) 1.50, 95% confidence interval (CI) 0.40 to 5.61, low quality evidence). Mortality was seen in 3/14 patients versus 1/7 patients in the propofol and thiopental sodium groups, respectively (RR 1.50, 95% CI 0.19 to 11.93, low quality evidence). Our third primary outcome of length of ICU stay was not reported. For our secondary outcomes of adverse events, infection was seen in 7/14 patients versus 5/7 patients in the propofol and thiopental sodium groups, respectively (RR 0.70; 95% CI 0.35 to 1.41). Hypotension during administration of study drugs and requiring use of vasopressors was seen in 7/14 patients versus 4/7 patients in the propofol and thiopental sodium groups, respectively (RR 0.87; 95% CI 0.38 to 2.00). The other severe complication noted was non-fatal propofol infusion syndrome in one patient. Patients receiving thiopental sodium required more days of mechanical ventilation when compared with patients receiving propofol: (median (range) 17 days (5 to 70 days) with thiopental sodium versus four days (2 to 28 days) with propofol). 
At three months there was no evidence of a difference between the drugs with respect to outcome measures such as control of seizure activity and functional outcome.\nAuthors' conclusions\nSince the last version of this review we have found no new studies.\nThere is a lack of robust, randomised, controlled evidence to clarify the efficacy of propofol and thiopental sodium compared to each other in the treatment of RSE. There is a need for large RCTs for this serious condition.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was no difference between the two drugs in their ability to control seizure activity. The only difference noted was the requirement for prolonged mechanical ventilation for patients in the thiopental sodium group. This could be due to the prolonged presence of the drug in the body due to its slow removal."
}
] |
query-laysum | 22044 | [
{
"role": "user",
"content": "Abstract: Background\nEarly discharge hospital at home is a service that provides active treatment by healthcare professionals in the patient's home for a condition that otherwise would require acute hospital inpatient care. This is an update of a Cochrane review.\nObjectives\nTo determine the effectiveness and cost of managing patients with early discharge hospital at home compared with inpatient hospital care.\nSearch methods\nWe searched the following databases to 9 January 2017: the Cochrane Effective Practice and Organisation of Care Group (EPOC) register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, and EconLit. We searched clinical trials registries.\nSelection criteria\nRandomised trials comparing early discharge hospital at home with acute hospital inpatient care for adults. We excluded obstetric, paediatric and mental health hospital at home schemes.\nData collection and analysis\nWe followed the standard methodological procedures expected by Cochrane and EPOC. We used the GRADE approach to assess the certainty of the body of evidence for the most important outcomes.\nMain results\nWe included 32 trials (N = 4746), six of them new for this update, mainly conducted in high-income countries. We judged most of the studies to have a low or unclear risk of bias. The intervention was delivered by hospital outreach services (17 trials), community-based services (11 trials), and was co-ordinated by a hospital-based stroke team or physician in conjunction with community-based services in four trials.\nStudies recruiting people recovering from stroke\nEarly discharge hospital at home probably makes little or no difference to mortality at three to six months (risk ratio (RR) 0.92, 95% confidence interval (CI) 0.57 to 1.48, N = 1114, 11 trials, moderate-certainty evidence) and may make little or no difference to the risk of hospital readmission (RR 1.09, 95% CI 0.71 to 1.66, N = 345, 5 trials, low-certainty evidence). Hospital at home may lower the risk of living in institutional setting at six months (RR 0.63, 96% CI 0.40 to 0.98; N = 574, 4 trials, low-certainty evidence) and might slightly improve patient satisfaction (N = 795, low-certainty evidence). Hospital at home probably reduces hospital length of stay, as moderate-certainty evidence found that people assigned to hospital at home are discharged from the intervention about seven days earlier than people receiving inpatient care (95% CI 10.19 to 3.17 days earlier, N = 528, 4 trials). It is uncertain whether hospital at home has an effect on cost (very low-certainty evidence).\nStudies recruiting people with a mix of medical conditions\nEarly discharge hospital at home probably makes little or no difference to mortality (RR 1.07, 95% CI 0.76 to 1.49; N = 1247, 8 trials, moderate-certainty evidence). In people with chronic obstructive pulmonary disease (COPD) there was insufficient information to determine the effect of these two approaches on mortality (RR 0.53, 95% CI 0.25 to 1.12, N = 496, 5 trials, low-certainty evidence). The intervention probably increases the risk of hospital readmission in a mix of medical conditions, although the results are also compatible with no difference and a relatively large increase in the risk of readmission (RR 1.25, 95% CI 0.98 to 1.58, N = 1276, 9 trials, moderate-certainty evidence). Early discharge hospital at home may decrease the risk of readmission for people with COPD (RR 0.86, 95% CI 0.66 to 1.13, N = 496, 5 trials low-certainty evidence). 
Hospital at home may lower the risk of living in an institutional setting (RR 0.69, 0.48 to 0.99; N = 484, 3 trials, low-certainty evidence). The intervention might slightly improve patient satisfaction (N = 900, low-certainty evidence). The effect of early discharge hospital at home on hospital length of stay for older patients with a mix of conditions ranged from a reduction of 20 days to a reduction of less than half a day (moderate-certainty evidence, N = 767). It is uncertain whether hospital at home has an effect on cost (very low-certainty evidence).\nStudies recruiting people undergoing elective surgery\nThree studies did not report higher rates of mortality with hospital at home compared with inpatient care (data not pooled, N = 856, low-certainty evidence; mainly orthopaedic surgery). Hospital at home may lead to little or no difference in readmission to hospital for people who were mainly recovering from orthopaedic surgery (N = 1229, low-certainty evidence). We could not establish the effects of hospital at home on the risk of living in institutional care, due to a lack of data. The intervention might slightly improve patient satisfaction (N = 1229, low-certainty evidence). People recovering from orthopaedic surgery allocated to early discharge hospital at home were discharged from the intervention on average four days earlier than people allocated to usual inpatient care (4.44 days earlier, 95% CI 6.37 to 2.51 days earlier, , N = 411, 4 trials, moderate-certainty evidence). It is uncertain whether hospital at home has an effect on cost (very low-certainty evidence).\nAuthors' conclusions\nDespite increasing interest in the potential of early discharge hospital at home services as a less expensive alternative to inpatient care, this review provides insufficient evidence of economic benefit (through a reduction in hospital length of stay) or improved health outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Compared with in-hospital care, early discharge hospital at home probably makes little or no difference to patient health outcomes or being readmitted to hospital, and probably reduces hospital length of stay and the chance of being admitted to an institution such as a care home. Patients who receive care at home might be more satisfied with the care received. The effect on health service costs is uncertain."
}
] |
query-laysum | 22045 | [
{
"role": "user",
"content": "Abstract: Background\nThe liver is affected by two of the most common groups of malignant tumours: primary liver tumours and liver metastases from colorectal carcinoma or other extrahepatic primary cancers. Liver metastases are significantly more common than primary liver cancer, and the reported long-term survival rate after radical surgical treatment is approximately 50%. However, R0 resection (resection for cure) is not feasible in the majority of patients; therefore, other treatments have to be considered. One of these is percutaneous ethanol injection (PEI), which causes dehydration and necrosis of tumour cells, accompanied by small-vessel thrombosis, leading to tumour ischaemia and destruction of the tumour.\nObjectives\nTo assess the beneficial and harmful effects of percutaneous ethanol injection (PEI) compared with no intervention, other ablation methods, or systemic treatments in people with liver metastases.\nSearch methods\nWe searched the following databases up to 10 September 2019: the Cochrane Hepato-Biliary Group Controlled Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL), in the Cochrane Library; MEDLINE Ovid; Embase Ovid; Science Citation Index Expanded; Conference Proceedings Citation Index – Science; Latin American Caribbean Health Sciences Literature (LILACS); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL). We also searched clinical trials registers such as ClinicalTrials.gov, the International Clinical Trials Registry Platform (ICTRP), and the US Food and Drug Administration (FDA) (17 September 2019).\nSelection criteria\nRandomised clinical trials assessing beneficial and harmful effects of percutaneous ethanol injection and its comparators (no intervention, other ablation methods, systemic treatments) for liver metastases.\nData collection and analysis\nWe followed standard methodological procedures as outlined by Cochrane. We extracted information on participant characteristics, interventions, study outcomes, study design, and trial methods. Two review authors performed data extraction and assessed risk of bias independently. We assessed the certainty of evidence by using GRADE. We resolved disagreements by discussion.\nMain results\nWe identified only one randomised clinical trial comparing percutaneous intratumour ethanol injection (PEI) in addition to transcatheter arterial chemoembolisation (TACE) versus TACE alone. The trial was conducted in China and included 48 trial participants with liver metastases: 25 received PEI plus TACE, and 23 received TACE alone. The trial included 37 male and 11 female participants. Mean participant age was 49.3 years. Sites of primary tumours included colon (27 cases), stomach (12 cases), pancreas (3 cases), lung (3 cases), breast (2 cases), and ovary (1 case). Seven participants had a single tumour, 15 had two tumours, and 26 had three or more tumours in the liver. The bulk diameter of the tumour on average was 3.9 cm, ranging from 1.2 cm to 7.6 cm.\nParticipants were followed for 10 months to 43 months. The trial reported survival data after one, two, and three years. In the PEI + TACE group, 92%, 80%, and 64% of participants survived after one year, two years, and three years; in the TACE alone group, these percentages were 78.3%, 65.2%, and 47.8%, respectively. 
Upon conversion of these data to mortality rates, the calculated risk ratio (RR) for mortality at last follow-up when PEI plus TACE was compared with TACE alone was 0.69 (95% confidence interval (CI) 0.36 to 1.33; very low-certainty evidence) after three years of follow-up. Local recurrence was 16% in the PEI plus TACE group and 39.1% in the TACE group, resulting in an RR of 0.41 (95% CI 0.15 to 1.15; very low-certainty evidence). Forty-five out of a total of 68 tumours (66.2%) shrunk by at least 25% in the PEI plus TACE group versus 31 out of a total of 64 tumours (48.4%) in the TACE group. Trial authors reported some adverse events but provided very few details. We did not find data on time to mortality, failure to clear liver metastases, recurrence of liver metastases, health-related quality of life, or time to progression of liver metastases.\nThe single included trial did not provide information on funding nor on conflict of interest.\nAuthors' conclusions\nEvidence for the effectiveness of PEI plus TACE versus TACE in people with liver metastases is of very low certainty and is based on one small randomised clinical trial at high risk of bias. Currently, it cannot be determined whether adding PEI to TACE makes a difference in comparison to using TACE alone. Evidence for benefits or harms of PEI compared with no intervention, other ablation methods, or systemic treatments is lacking.\n\nGiven the provided abstract, please respond to the following query: \"Is local destruction of the cancer that spread to the liver with percutaneous ethanol injection beneficial?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
query-laysum | 22046 | [
{
"role": "user",
"content": "Abstract: Background\nBronchiolitis is an acute, viral lower respiratory tract infection affecting infants and is sometimes treated with bronchodilators.\nObjectives\nTo assess the effects of bronchodilators on clinical outcomes in infants (0 to 12 months) with acute bronchiolitis.\nSearch methods\nWe searched CENTRAL 2013, Issue 12, MEDLINE (1966 to January Week 2, 2014) and EMBASE (1998 to January 2014).\nSelection criteria\nRandomized controlled trials (RCTs) comparing bronchodilators (other than epinephrine) with placebo for bronchiolitis.\nData collection and analysis\nTwo authors assessed trial quality and extracted data. We obtained unpublished data from trial authors.\nMain results\nWe included 30 trials (35 data sets) representing 1992 infants with bronchiolitis. In 11 inpatient and 10 outpatient studies, oxygen saturation did not improve with bronchodilators (mean difference (MD) -0.43, 95% confidence interval (CI) -0.92 to 0.06, n = 1242). Outpatient bronchodilator treatment did not reduce the rate of hospitalization (11.9% in bronchodilator group versus 15.9% in placebo group, odds ratio (OR) 0.75, 95% CI 0.46 to 1.21, n = 710). Inpatient bronchodilator treatment did not reduce the duration of hospitalization (MD 0.06, 95% CI -0.27 to 0.39, n = 349).\nEffect estimates for inpatients (MD -0.62, 95% CI -1.40 to 0.16) were slightly larger than for outpatients (MD -0.25, 95% CI -0.61 to 0.11) for oximetry. Oximetry outcomes showed significant heterogeneity (I2 statistic = 81%). Including only studies with low risk of bias had little impact on the overall effect size of oximetry (MD -0.38, 95% CI -0.75 to 0.00) but results were close to statistical significance.\nIn eight inpatient studies, there was no change in average clinical score (standardized MD (SMD) -0.14, 95% CI -0.41 to 0.12) with bronchodilators. In nine outpatient studies, the average clinical score decreased slightly with bronchodilators (SMD -0.42, 95% CI -0.79 to -0.06), a statistically significant finding of questionable clinical importance. The clinical score outcome showed significant heterogeneity (I2 statistic = 73%). Including only studies with low risk of bias reduced the heterogeneity but had little impact on the overall effect size of average clinical score (SMD -0.22, 95% CI -0.41 to -0.03).\nSub-analyses limited to nebulized albuterol or salbutamol among outpatients (nine studies) showed no effect on oxygen saturation (MD -0.19, 95% CI -0.59 to 0.21, n = 572), average clinical score (SMD -0.36, 95% CI -0.83 to 0.11, n = 532) or hospital admission after treatment (OR 0.77, 95% CI 0.44 to 1.33, n = 404).\nAdverse effects included tachycardia, oxygen desaturation and tremors.\nAuthors' conclusions\nBronchodilators such as albuterol or salbutamol do not improve oxygen saturation, do not reduce hospital admission after outpatient treatment, do not shorten the duration of hospitalization and do not reduce the time to resolution of illness at home. Given the adverse side effects and the expense associated with these treatments, bronchodilators are not effective in the routine management of bronchiolitis. This meta-analysis continues to be limited by the small sample sizes and the lack of standardized study design and validated outcomes across the studies. 
Future trials with large sample sizes, standardized methodology across clinical sites and consistent assessment methods are needed to answer completely the question of efficacy.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no effect of bronchodilators on oxygen saturation. Infants hospitalized for bronchiolitis showed no significant benefit of bronchodilator treatment. This review also found that bronchodilators do not reduce the need for hospitalization, do not shorten the length of stay in hospital and do not shorten the length of the illness at home. Reviewing the subgroup of studies using albuterol (salbutamol), we found no effect of this bronchodilator on oxygen saturation or clinical scores. Side effects of bronchodilators include rapid heart beat, decrease in oxygen and shakiness. Given these side effects, little evidence that they are effective and the expense associated with these treatments, bronchodilators are not helpful in the management of bronchiolitis."
}
] |
query-laysum | 22047 | [
{
"role": "user",
"content": "Abstract: Background\nIn the recent years, a variety of bronchoscopic lung volume reduction (BLVR) procedures have emerged that may provide a treatment option to participants suffering from moderate to severe chronic obstructive pulmonary disease (COPD).\nObjectives\nTo assess the effects of BLVR on the short- and long-term health outcomes in participants with moderate to severe COPD and determine the effectiveness and cost-effectiveness of each individual technique.\nSearch methods\nStudies were identified from the Cochrane Airways Group Specialised Register (CAGR) and by handsearching of respiratory journals and meeting abstracts. All searches are current until 07 December 2016.\nSelection criteria\nWe included randomized controlled trials (RCTs). We included studies reported as full text, those published as abstract only and unpublished data, if available.\nData collection and analysis\nTwo independent review authors assessed studies for inclusion and extracted data. Where possible, data from more than one study were combined in a meta-analysis using RevMan 5 software.\nMain results\nAeriSeal\nOne RCT of 95 participants found that AeriSeal compared to control led to a significant median improvement in forced expiratory volume in one second (FEV1) (18.9%, interquartile range (IQR) -0.7% to 41.9% versus 1.3%, IQR -8.2% to 12.9%), and higher quality of life, as measured by the St Georges Respiratory Questionnaire (SGRQ) (-12 units, IQR -22 units to -5 units, versus -3 units, IQR -5 units to 1 units), P = 0.043 and P = 0.0072 respectively. Although there was no significant difference in mortality (Odds Ratio (OR) 2.90, 95% CI 0.14 to 62.15), adverse events were more common for participants treated with AeriSeal (OR 3.71, 95% CI 1.34 to 10.24). The quality of evidence found in this prematurely terminated study was rated low to moderate.\nAirway bypass stents\nTreatment with airway bypass stents compared to control did not lead to significant between-group changes in FEV1 (0.95%, 95% CI -0.16% to 2.06%) or SGRQ scores (-2.00 units, 95% CI -5.58 units to 1.58 units), as found by one study comprising 315 participants. There was no significant difference in mortality (OR 0.76, 95% CI 0.21 to 2.77), nor were there significant differences in adverse events (OR 1.33, 95% CI 0.65 to 2.73) between the two groups. The quality of evidence was rated moderate to high.\nEndobronchial coils\nThree studies comprising 461 participants showed that treatment with endobronchial coils compared to control led to a significant between-group mean difference in FEV1 (10.88%, 95% CI 5.20% to 16.55%) and SGRQ (-9.14 units, 95% CI -11.59 units to -6.70 units). There were no significant differences in mortality (OR 1.49, 95% CI 0.67 to 3.29), but adverse events were significantly more common for participants treated with coils (OR 2.14, 95% CI 1.41 to 3.23). The quality of evidence ranged from low to high.\nEndobronchial valves\nFive studies comprising 703 participants found that endobronchial valves versus control led to significant improvements in FEV1 (standardized mean difference (SMD) 0.48, 95% CI 0.32 to 0.64) and scores on the SGRQ (-7.29 units, 95% CI -11.12 units to -3.45 units). There were no significant differences in mortality between the two groups (OR 1.07, 95% CI 0.47 to 2.43) but adverse events were more common in the endobronchial valve group (OR 5.85, 95% CI 2.16 to 15.84). 
Participant selection plays an important role as absence of collateral ventilation was associated with superior clinically significant improvements in health outcomes. The quality of evidence ranged from low to high.\nIntrabronchial valves\nIn the comparison of partial bilateral placement of intrabronchial valves to control, one trial favoured control in FEV1 (-2.11% versus 0.04%, P = 0.001) and one trial found no difference between the groups (0.9 L versus 0.87 L, P = 0.065). There were no significant differences in SGRQ scores (MD 2.64 units, 95% CI -0.28 units to 5.56 units) or mortality rates (OR 4.95, 95% CI 0.85 to 28.94), but adverse events were more frequent (OR 3.41, 95% CI 1.48 to 7.84) in participants treated with intrabronchial valves. The lack of functional benefits may be explained by the procedural strategy used, as another study (22 participants) compared unilateral versus partial bilateral placement, finding significant improvements in FEV1 and SGRQ when using the unilateral approach. The quality of evidence ranged between moderate to high.\nVapour ablation\nOne study of 69 participants found significant mean between-group differences in FEV1 (14.70%, 95% CI 7.98% to 21.42%) and SGRQ (-9.70 units, 95% CI -15.62 units to -3.78 units), favouring vapour ablation over control. There was no significant between-group difference in mortality (OR 2.82, 95% CI 0.13 to 61.06), but vapour ablation led to significantly more adverse events (OR 3.86, 95% CI 1.00 to 14.97). The quality of evidence ranged from low to moderate.\nAuthors' conclusions\nResults for selected BLVR procedures indicate they can provide significant and clinically meaningful short-term (up to one year) improvements in health outcomes, but this was at the expense of increased adverse events. The currently available evidence is not sufficient to assess the effect of BLVR procedures on mortality. These findings are limited by the lack of long-term follow-up data, limited availability of cost-effectiveness data, significant heterogeneity in results, presence of skew and high CIs, and the open-label character of a number of the studies.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "BLVR procedures are a collection of innovative non-surgical procedures that aim to improve the disease status and lung function of participants suffering from moderate to severe COPD, specifically those participants who remain limited despite conventional treatment."
}
] |
query-laysum | 22048 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with cancer experience a variety of symptoms as a result of their disease and the therapies involved in its management. Inadequate symptom management has implications for patient outcomes including functioning, psychological well-being, and quality of life (QoL). Attempts to reduce the incidence and severity of cancer symptoms have involved the development and testing of psycho-educational interventions to enhance patients' symptom self-management. With the trend for care to be provided nearer patients' homes, telephone-delivered psycho-educational interventions have evolved to provide support for the management of a range of cancer symptoms. Early indications suggest that these can reduce symptom severity and distress through enhanced symptom self-management.\nObjectives\nTo assess the effectiveness of telephone-delivered interventions for reducing symptoms associated with cancer and its treatment. To determine which symptoms are most responsive to telephone interventions. To determine whether certain configurations (e.g. with/without additional support such as face-to-face, printed or electronic resources) and duration/frequency of intervention calls mediate observed cancer symptom outcome effects.\nSearch methods\nWe searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 1); MEDLINE via OVID (1946 to January 2019); Embase via OVID (1980 to January 2019); (CINAHL) via Athens (1982 to January 2019); British Nursing Index (1984 to January 2019); and PsycINFO (1989 to January 2019). We searched conference proceedings to identify published abstracts, as well as SIGLE and trial registers for unpublished studies. We searched the reference lists of all included articles for additional relevant studies. Finally, we handsearched the following journals: Cancer, Journal of Clinical Oncology, Psycho-oncology, Cancer Practice, Cancer Nursing, Oncology Nursing Forum, Journal of Pain and Symptom Management, and Palliative Medicine. We restricted our search to publications published in English.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs that compared one or more telephone interventions with one other, or with other types of interventions (e.g. a face-to-face intervention) and/or usual care, with the stated aim of addressing any physical or psychological symptoms of cancer and its treatment, which recruited adults (over 18 years) with a clinical diagnosis of cancer, regardless of tumour type, stage of cancer, type of treatment, and time of recruitment (e.g. before, during, or after treatment).\nData collection and analysis\nWe used Cochrane methods for trial selection, data extraction and analysis. When possible, anxiety, depressive symptoms, fatigue, emotional distress, pain, uncertainty, sexually-related and lung cancer symptoms as well as secondary outcomes are reported as standardised mean differences (SMDs) with 95% confidence intervals (CIs), and we presented a descriptive synthesis of study findings. We reported on findings according to symptoms addressed and intervention types (e.g. telephone only, telephone combined with other elements). As many studies included small samples, and because baseline scores for study outcomes often varied for intervention and control groups, we used change scores and associated standard deviations. 
The certainty of the evidence for each outcome was interpreted using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nThirty-two studies were eligible for inclusion; most had moderate risk of bias,often related to blinding. Collectively, researchers recruited 6250 people and studied interventions in people with a variety of cancer types and across the disease trajectory, although many participants had breast cancer or early-stage cancer and/or were starting treatment. Studies measured symptoms of anxiety, depression, emotional distress, uncertainty, fatigue, and pain, as well as sexually-related symptoms and general symptom intensity and/or distress.\nInterventions were primarily delivered by nurses (n = 24), most of whom (n = 16) had a background in oncology, research, or psychiatry. Ten interventions were delivered solely by telephone; the rest combined telephone with additional elements (i.e. face-to-face consultations and digital/online/printed resources). The number of calls delivered ranged from 1 to 18; most interventions provided three or four calls.\nTwenty-one studies provided evidence on effectiveness of telephone-delivered interventions and the majority appeared to reduce symptoms of depression compared to control. Nine studies contributed quantitative change scores (CSs) and associated standard deviation results (or these could be calculated). Likewise, many telephone interventions appeared effective when compared to control in reducing anxiety (16 studies; 5 contributed quantitative CS results); fatigue (9 studies; 6 contributed to quantitative CS results); and emotional distress (7 studies; 5 contributed quantitative CS results). Due to significant clinical heterogeneity with regards to interventions introduced, study participants recruited, and outcomes measured, meta-analysis was not conducted.\nFor other symptoms (uncertainty, pain, sexually-related symptoms, dyspnoea, and general symptom experience), evidence was limited; similarly meta-analysis was not possible, and results from individual studies were largely conflicting, making conclusions about their management through telephone-delivered interventions difficult to draw. Heterogeneity was considerable across all trials for all outcomes.\nOverall, the certainty of evidence was very low for all outcomes in the review. Outcomes were all downgraded due to concerns about overall risk of bias profiles being frequently unclear, uncertainty in effect estimates and due to some inconsistencies in results and general heterogeneity.\nUnsubstantiated evidence suggests that telephone interventions in some capacity may have a place in symptom management for adults with cancer. However, in the absence of reliable and homogeneous evidence, caution is needed in interpreting the narrative synthesis. Further, there were no clear patterns across studies regarding which forms of interventions (telephone alone versus augmented with other elements) are most effective. It is impossible to conclude with any certainty which forms of telephone intervention are most effective in managing the range of cancer-related symptoms that people with cancer experience.\nAuthors' conclusions\nTelephone interventions provide a convenient way of supporting self-management of cancer-related symptoms for adults with cancer. 
These interventions are becoming more important with the shift of care closer to patients' homes, the need for resource/cost containment, and the potential for voluntary sector providers to deliver healthcare interventions. Some evidence supports the use of telephone-delivered interventions for symptom management for adults with cancer; most evidence relates to four commonly experienced symptoms - depression, anxiety, emotional distress, and fatigue. Some telephone-delivered interventions were augmented by combining them with face-to-face meetings and provision of printed or digital materials. Review authors were unable to determine whether telephone alone or in combination with other elements provides optimal reduction in symptoms; it appears most likely that this will vary by symptom. It is noteworthy that, despite the potential for telephone interventions to deliver cost savings, none of the studies reviewed included any form of health economic evaluation.\nFurther robust and adequately reported trials are needed across all cancer-related symptoms, as the certainty of evidence generated in studies within this review was very low, and reporting was of variable quality. Researchers must strive to reduce variability between studies in the future. Studies in this review are characterised by clinical and methodological diversity; the level of this diversity hindered comparison across studies. At the very least, efforts should be made to standardise outcome measures. Finally, studies were compromised by inclusion of small samples, inadequate concealment of group allocation, lack of observer blinding, and short length of follow-up. Consequently, conclusions related to symptoms most amenable to management by telephone-delivered interventions are tentative.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 32 studies with a total of 6250 participants. Most studies (21) were from the USA. Nine studies recruited women with breast cancer, 11 included people with breast, colorectal, lung, or prostate cancer. Fourteen studies included people with early-stage cancer. Nurses provided interventions in 24 studies. Only 10 studies delivered interventions solely by telephone, and 16 studies combined telephone calls with other materials (printed or digital). Studies measured symptoms of depression, anxiety, emotional distress, uncertainty, fatigue, pain, sexual symptoms, and breathlessness. They also measured the effect of all the symptoms together (the general symptom experience).\nMost studies compared a telephone intervention with usual care alone or usual care with additional support. Eight studies compared two telephone interventions against each other; some also compared these with usual care.\nBecause the studies were so different from each other, we could not combine the results into one analysis for each symptom. However, some studies measured changes in symptoms using standardised or similar scales. They recorded participants’ scale scores at the beginning of the intervention, during the intervention, and at the end, resulting in a ‘change score’. We analysed the results from studies that recorded change scores."
}
] |
query-laysum
|
22049
|
[
{
"role": "user",
"content": "Abstract: Background\nCritically ill newborn infants undergo a variety of painful procedures or experience a variety of painful conditions during their early life in the neonatal unit. In the critically ill paediatric and neonatal population, clonidine is prescribed as an adjunct to opioids or benzodiazepines aiming to reduce the doses of these drugs that are required for analgesia or sedation, or to facilitate weaning from mechanical ventilation. It has been shown that clonidine premedication might have a positive effect on postoperative pain in children.\nObjectives\nTo assess the benefit and harms of clonidine for the prevention or treatment of procedural pain; postoperative pain; or pain associated with clinical conditions in non-ventilated neonates.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the CENTRAL, MEDLINE via PubMed, Embase, and CINAHL to December 2018. We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials. We ran an updated search from 1 January 2018 to 11 March 2020 in CENTRAL via CRS Web, MEDLINE via Ovid, and CINAHL via EBSCOhost.\nSelection criteria\nRandomised controlled trials, quasi-randomised controlled trials, and cluster trials comparing clonidine to placebo or no treatment, opioids, paracetamol, dexmedetomidine, or non-pharmacological pain-reducing interventions for the management of procedural pain, postoperative pain, and pain associated with clinical conditions in preterm and term newborns.\nData collection and analysis\nTwo review authors independently planned to extract data (e.g. number of participants, birth weight, gestational age, modality of administration, and dose of clonidine) and assess the risk of bias (e.g. adequacy of randomisation, blinding, completeness of follow-up). The primary outcome considered was pain: for procedural pain, the mean values of each analgesia scale assessed during the procedure and at one to two hours after the procedure; for postoperative pain and for pain associated with clinical conditions, the mean values of each analgesia scale assessed at 30 minutes, three hours, and 12 hours after the administration of the intervention. We planned to use the GRADE approach to assess the quality of evidence.\nMain results\nOur search strategy yielded 3383 references. Two review authors independently assessed all references for inclusion. We did not find any completed studies for inclusion. We excluded three trials where clonidine was administered for spinal anaesthesia.\nAuthors' conclusions\nWe did not find any studies that met our inclusion criteria and hence there is no evidence to recommend or refute the use of clonidine for the prevention or treatment of procedural or postoperative pain, or pain associated with clinical conditions in neonates.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "does clonidine administration reduce pain in newborns exposed to painful procedures or conditions?"
}
] |
query-laysum
|
22050
|
[
{
"role": "user",
"content": "Abstract: Background\nAcne is an inflammatory disorder with a high global burden. It is common in adolescents and primarily affects sebaceous gland-rich areas. The clinical benefit of the topical acne treatments azelaic acid, salicylic acid, nicotinamide, sulphur, zinc, and alpha-hydroxy acid is unclear.\nObjectives\nTo assess the effects of topical treatments (azelaic acid, salicylic acid, nicotinamide, zinc, alpha-hydroxy acid, and sulphur) for acne.\nSearch methods\nWe searched the following databases up to May 2019: the Cochrane Skin Group Specialised Register, CENTRAL, MEDLINE, Embase, and LILACS. We also searched five trials registers.\nSelection criteria\nClinical randomised controlled trials of the six topical treatments compared with other topical treatments, placebo, or no treatment in people with acne.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Key outcomes included participants' global self-assessment of acne improvement (PGA), withdrawal for any reason, minor adverse events (assessed as total number of participants who experienced at least one minor adverse event), and quality of life.\nMain results\nWe included 49 trials (3880 reported participants) set in clinics, hospitals, research centres, and university settings in Europe, Asia, and the USA.\nThe vast majority of participants had mild to moderate acne, were aged between 12 to 30 years (range: 10 to 45 years), and were female. Treatment lasted over eight weeks in 59% of the studies. Study duration ranged from three months to three years.\nWe assessed 26 studies as being at high risk of bias in at least one domain, but most domains were at low or unclear risk of bias.\nWe grouped outcome assessment into short-term (less than or equal to 4 weeks), medium-term (from 5 to 8 weeks), and long-term treatment (more than 8 weeks). The following results were measured at the end of treatment, which was mainly long-term for the PGA outcome and mixed length (medium-term mainly) for minor adverse events.\nAzelaic acid\nIn terms of treatment response (PGA), azelaic acid is probably less effective than benzoyl peroxide (risk ratio (RR) 0.82, 95% confidence interval (CI) 0.72 to 0.95; 1 study, 351 participants), but there is probably little or no difference when comparing azelaic acid to tretinoin (RR 0.94, 95% CI 0.78 to 1.14; 1 study, 289 participants) (both moderate-quality evidence). There may be little or no difference in PGA when comparing azelaic acid to clindamycin (RR 1.13, 95% CI 0.92 to 1.38; 1 study, 229 participants; low-quality evidence), but we are uncertain whether there is a difference between azelaic acid and adapalene (1 study, 55 participants; very low-quality evidence).\nLow-quality evidence indicates there may be no differences in rates of withdrawal for any reason when comparing azelaic acid with benzoyl peroxide (RR 0.88, 95% CI 0.60 to 1.29; 1 study, 351 participants), clindamycin (RR 1.30, 95% CI 0.48 to 3.56; 2 studies, 329 participants), or tretinoin (RR 0.66, 95% CI 0.29 to 1.47; 2 studies, 309 participants), but we are uncertain whether there is a difference between azelaic acid and adapalene (1 study, 55 participants; very low-quality evidence).\nIn terms of total minor adverse events, we are uncertain if there is a difference between azelaic acid compared to adapalene (1 study; 55 participants) or benzoyl peroxide (1 study, 30 participants) (both very low-quality evidence). 
There may be no difference when comparing azelaic acid to clindamycin (RR 1.50, 95% CI 0.67 to 3.35; 1 study, 100 participants; low-quality evidence). Total minor adverse events were not reported in the comparison of azelaic acid versus tretinoin, but individual application site reactions were reported, such as scaling.\nSalicylic acid\nFor PGA, there may be little or no difference between salicylic acid and tretinoin (RR 1.00, 95% CI 0.92 to 1.09; 1 study, 46 participants; low-quality evidence); we are not certain whether there is a difference between salicylic acid and pyruvic acid (1 study, 86 participants; very low-quality evidence); and PGA was not measured in the comparison of salicylic acid versus benzoyl peroxide.\nThere may be no difference between groups in withdrawals when comparing salicylic acid and pyruvic acid (RR 0.89, 95% CI 0.53 to 1.50; 1 study, 86 participants); when salicylic acid was compared to tretinoin, neither group had withdrawals (both based on low-quality evidence (2 studies, 74 participants)). We are uncertain whether there is a difference in withdrawals between salicylic acid and benzoyl peroxide (1 study, 41 participants; very low-quality evidence).\nFor total minor adverse events, we are uncertain if there is any difference between salicylic acid and benzoyl peroxide (1 study, 41 participants) or tretinoin (2 studies, 74 participants) (both very low-quality evidence). This outcome was not reported for salicylic acid versus pyruvic acid, but individual application site reactions were reported, such as scaling and redness.\nNicotinamide\nFour studies evaluated nicotinamide against clindamycin or erythromycin, but none measured PGA. Low-quality evidence showed there may be no difference in withdrawals between nicotinamide and clindamycin (RR 1.12, 95% CI 0.49 to 2.60; 3 studies, 216 participants) or erythromycin (RR 1.40, 95% CI 0.46 to 4.22; 1 study, 158 participants), or in total minor adverse events between nicotinamide and clindamycin (RR 1.20, 95% CI 0.73 to 1.99; 3 studies, 216 participants; low-quality evidence). Total minor adverse events were not reported in the nicotinamide versus erythromycin comparison.\nAlpha-hydroxy (fruit) acid\nThere may be no difference in PGA when comparing glycolic acid peel to salicylic-mandelic acid peel (RR 1.06, 95% CI 0.88 to 1.26; 1 study, 40 participants; low-quality evidence), and we are uncertain if there is a difference in total minor adverse events due to very low-quality evidence (1 study, 44 participants). Neither group had withdrawals (2 studies, 84 participants; low-quality evidence).\nAuthors' conclusions\nCompared to benzoyl peroxide, azelaic acid probably leads to a worse treatment response, measured using PGA. When compared to tretinoin, azelaic acid probably makes little or no difference to treatment response. For other comparisons and outcomes the quality of evidence was low or very low.\nRisk of bias and imprecision limit our confidence in the evidence. We encourage the comparison of more methodologically robust head-to-head trials against commonly used active drugs.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The following results were measured at the end of treatment, which was mainly long term (more than 8 weeks) for the outcome 'Participants' global self-assessment of acne improvement' (PGA) and mixed in length, but mainly medium term (from 5 to 8 weeks), for 'Total number of participants who experienced at least one minor side effect'.\nAzelaic acid probably leads to worse PGA when compared to benzoyl peroxide, but when compared to tretinoin, there is probably little or no difference (both moderate-quality evidence). When comparing azelaic acid to clindamycin, there may be little or no difference in PGA (low-quality evidence), but we are uncertain whether azelaic acid reduces PGA compared to adapalene (very low-quality evidence).\nIn terms of participant withdrawal (for any reason), there may be no difference when azelaic acid is compared with benzoyl peroxide, clindamycin, and tretinoin (all low-quality evidence). We are uncertain whether azelaic acid reduces withdrawals when compared to adapalene (very low-quality evidence).\nWe are uncertain whether azelaic acid has fewer total minor adverse events when compared to adapalene or benzoyl peroxide (very-low quality evidence). When comparing azelaic acid to clindamycin, there may be no difference in total adverse events (low-quality evidence). The studies that compared azelaic acid with tretinoin only reported individual side effects (e.g. scaling).\nWe are uncertain if there is a difference between salicylic acid and pyruvic acid on PGA score (very low-quality evidence). There may be little or no difference between salicylic acid and tretinoin in PGA (low-quality evidence). No study comparing salicylic acid with benzoyl peroxide assessed PGA. There may be no difference in withdrawals when comparing salicylic acid and pyruvic acid; there were no withdrawals when salicylic acid was compared to tretinoin (both low-quality evidence). We are uncertain if there is a difference in withdrawals between salicylic acid and benzoyl peroxide (very low-quality evidence).\nWe are uncertain whether salicylic acid reduces total minor adverse events when compared to benzoyl peroxide or tretinoin (very low-quality evidence). For salicylic acid compared with pyruvic acid only individual application site reactions were reported (e.g. scaling and redness).\nNone of the four studies assessing nicotinamide (compared to clindamycin or erythromycin) assessed PGA. Nicotinamide may make no difference to withdrawals when compared to clindamycin or erythromycin, and may make no difference to total minor adverse events when compared to clindamycin (both low-quality evidence); however, no studies comparing nicotinamide with erythromycin looked at total minor adverse events.\nGlycolic acid peels may make no difference to PGA when compared to salicylic-mandelic acid peels (low-quality evidence), we are uncertain of the effect on total minor adverse events (very low-quality evidence), and there were no withdrawals (low-quality evidence)."
}
] |
query-laysum
|
22051
|
[
{
"role": "user",
"content": "Abstract: Background\nThe surgical stress response plays an important role on the pathogenesis of perioperative cardiac complications. Alpha-2 adrenergic agonists attenuate this response and may help prevent postoperative cardiac complications.\nObjectives\nTo determine the efficacy and safety of α-2 adrenergic agonists for reducing mortality and cardiac complications in adults undergoing cardiac surgery and non-cardiac surgery.\nSearch methods\nWe searched CENTRAL (2017, Issue 4), MEDLINE (1950 to April Week 4, 2017), Embase (1980 to May 2017), the Science Citation Index, clinical trial registries, and reference lists of included articles.\nSelection criteria\nWe included randomized controlled trials that compared α-2 adrenergic agonists (i.e. clonidine, dexmedetomidine or mivazerol) against placebo or non-α-2 adrenergic agonists. Included trials had to evaluate the efficacy and safety of α-2 adrenergic agonists for preventing perioperative mortality or cardiac complications (or both), or measure one or more relevant outcomes (i.e. death, myocardial infarction, heart failure, acute stroke, supraventricular tachyarrhythmia and myocardial ischaemia).\nData collection and analysis\nTwo authors independently assessed trial quality, extracted data and independently performed computer entry of abstracted data. We contacted study authors for additional information. Adverse event data were gathered from the trials. We evaluated included studies using the Cochrane 'Risk of bias' tool, and the quality of the evidence underlying pooled treatment effects using GRADE methodology. Given the clinical heterogeneity between cardiac and non-cardiac surgery, we analysed these subgroups separately. We expressed treatment effects as pooled risk ratios (RR) with 95% confidence intervals (CI).\nMain results\nWe included 47 trials with 17,039 participants. Of these studies, 24 trials only included participants undergoing cardiac surgery, 23 only included participants undergoing non-cardiac surgery and eight only included participants undergoing vascular surgery. The α-2 adrenergic agonist studied was clonidine in 21 trials, dexmedetomidine in 24 trials and mivazerol in two trials.\nIn non-cardiac surgery, there was high quality evidence that α-2 adrenergic agonists led to a similar risk of all-cause mortality compared with control groups (1.3% with α-2 adrenergic agonists versus 1.7% with control; RR 0.80, 95% CI 0.61 to 1.04; participants = 14,081; studies = 16). Additionally, the risk of cardiac mortality was similar between treatment groups (0.8% with α-2 adrenergic agonists versus 1.0% with control; RR 0.86, 95% CI 0.60 to 1.23; participants = 12,525; studies = 5, high quality evidence). The risk of myocardial infarction was probably similar between treatment groups (RR 0.94, 95% CI 0.69 to 1.27; participants = 13,907; studies = 12, moderate quality evidence). There was no associated effect on the risk of stroke (RR 0.93, 95% CI 0.55 to 1.56; participants = 11,542; studies = 7; high quality evidence). 
Conversely, α-2 adrenergic agonists probably increase the risks of clinically significant bradycardia (RR 1.59, 95% CI 1.18 to 2.13; participants = 14,035; studies = 16) and hypotension (RR 1.24, 95% CI 1.03 to 1.48; participants = 13,738; studies = 15), based on moderate quality evidence.\nThere was insufficient evidence to determine the effect of α-2 adrenergic agonists on all-cause mortality in cardiac surgery (RR 0.52, 95% CI 0.26 to 1.04; participants = 1947; studies = 16) and myocardial infarction (RR 1.01, 95% CI 0.43 to 2.40; participants = 782; studies = 8), based on moderate quality evidence. There was one cardiac death in the clonidine arm of a study of 22 participants. Based on very limited data, α-2 adrenergic agonists may have reduced the risk of stroke (RR 0.37, 95% CI 0.15 to 0.93; participants = 1175; studies = 7; outcome events = 18; low quality evidence). Conversely, α-2 adrenergic agonists increased the risk of bradycardia from 6.4% to 12.0% (RR 1.88, 95% CI 1.35 to 2.62; participants = 1477; studies = 10; moderate quality evidence), but their effect on hypotension was uncertain (RR 1.19, 95% CI 0.87 to 1.64; participants = 1413; studies = 9; low quality evidence).\nThese results were qualitatively unchanged in subgroup analyses and sensitivity analyses.\nAuthors' conclusions\nOur review concludes that prophylactic α-2 adrenergic agonists generally do not prevent perioperative death or major cardiac complications. For non-cardiac surgery, there is moderate-to-high quality evidence that these agents do not prevent death, myocardial infarction or stroke. Conversely, there is moderate quality evidence that these agents have important adverse effects, namely increased risks of hypotension and bradycardia. For cardiac surgery, there is moderate quality evidence that α-2 adrenergic agonists have no effect on the risk of mortality or myocardial infarction, and that they increase the risk of bradycardia. The quality of evidence was inadequate to draw conclusions regarding the effects of alpha-2 agonists on stroke or hypotension during cardiac surgery.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Do alpha-2 adrenergic agonists (clonidine, dexmedetomidine and mivazerol) reduce the number of deaths and heart complications when given around the time of surgery?"
}
] |
query-laysum
|
22052
|
[
{
"role": "user",
"content": "Abstract: Background\nHaemophilia is a bleeding disorder associated with haemorrhaging into joints and muscles. Exercise is often used to aid recovery after bleeds, and to improve joint function in the presence of arthropathy.\nObjectives\nOur objective was to systematically review the available evidence on the safety and effectiveness of exercise for people with haemophilia.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Coagulopathies Trials Register and electronic databases PubMed, OVID-Embase, and CINAHL. We hand searched abstracts from congresses of the World Federation of Hemophilia and the European Hematology Association, trial registries and the reference lists of relevant articles.\nDate of the last search of the Cochrane Cystic Fibrosis and Genetic Disorders Group's Coagulopathies Trials Register: 14 December 2016.\nSelection criteria\nRandomized or quasi-randomized controlled studies comparing any exercise intervention considered relevant in haemophilia management including supervised, unsupervised, aquatic, strengthening, aerobic or cardiovascular, stretching, proprioceptive and balance training exercise programs in males of any age with haemophilia A or B of any severity (those with co-morbidities were not excluded).\nData collection and analysis\nTwo authors reviewed the identified abstracts to determine their eligibility. For studies meeting the inclusion criteria, full articles were obtained. The two authors extracted data and assessed the risk of bias. Any disagreements were resolved by discussion. The authors contacted study investigators to obtain any missing data.\nMain results\nEight studies were included, which represented 233 males with all severities of haemophilia A and B, ranging in age from eight years to 49 years. Study duration ranged from four to 12 weeks. Exercise interventions varied greatly and included resistance exercises, isometric exercises, bicycle ergometry, treadmill walking and hydrotherapy; therefore, comparison between studies was difficult.\nNone of the studies measured or reported adverse effects from the interventions. None of the studies reported outcomes regarding bleed frequency, quality of life or aerobic activity. Overall risk of bias across all studies was assessed as unclear.\nVery few studies provided sufficient information for comparison. None of the studies reported data that favoured the control group. One study reported that six weeks of resistance training improved joint health status (Colorado score) compared to controls. The addition of pulsed electromagnetic fields also improved ankle scores compared to exercises alone, but this was not seen in the elbows or knees.\nTwo studies reported statistically significant improvements in pain intensity after exercise interventions compared to controls. Hydrotherapy exercises produced significant decreases in pain compared to controls and land-based exercise groups.\nTwo studies found improvement in joint motion in the exercise group compared to controls. One study compared land- and water-based exercises; there was no difference in the range of motion between the two groups; however, the water-based exercise group did show improvement over the control group.\nOne study, comparing joint traction and proprioceptive neuromuscular facilitation for the elbow to a control group, showed no differences in biceps girth or strength after 12 weeks of intervention.\nSome studies reported comparisons between interventions. 
In one study, treadmill training significantly improved balance in children compared to bicycle ergometry. Another study added partial weight bearing exercises to quadriceps exercises and showed improved walking tolerance.\nFour studies evaluated quadriceps or hamstring strength (or both). The addition of bicycle ergometry and exercises with weights was more effective than static exercises and treadmill walking for strengthening knee flexors and extensors. Partial weight-bearing exercises through range were more effective than static and short arc exercises for improving knee extensor strength. The addition of treadmill walking to ultrasound, stretching and strengthening exercises showed increased peak torque of knee flexors and extensors and decrease in knee effusion.\nThe results should be interpreted with caution due to the quality of evidence (GRADE) as outlined in the summary of findings tables, which demonstrates that all but one of the outcomes assessed were rated as low or very low due to the small sample sizes and potential bias.\nAuthors' conclusions\nThese results must be considered with caution. There is a lack of confidence in the results due to the small number of included studies and the inability to pool the results due to the heterogeneity of outcome measures. Most exercise interventions produced improvement in one or more of the measured outcomes including pain, range of motion, strength and walking tolerance. Hydrotherapy may be more effective than land exercises for pain relief in adults. Functional exercises such as treadmill walking and partial weight bearing exercises seem to be more effective than static or short arc exercises for improving muscle strength. These findings are consistent with the many non-controlled intervention reports in the haemophilia literature. No adverse effects were reported as a result of any of the interventions. However, some groups used prophylactic factor prior to exercise and other groups studied only subjects with moderate haemophilia. Therefore, the safety of these techniques for persons with severe haemophilia remains unclear.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included eight studies with 233 male participants with haemophilia A or B (of any severity), aged eight to 49 years. Length of study ranged from four to 12 weeks."
}
] |
query-laysum
|
22053
|
[
{
"role": "user",
"content": "Abstract: Background\nAround 16% of adults have symptoms of overactive bladder (OAB; urgency with frequency and/or urge incontinence), with prevalence increasing with age. Anticholinergic drugs are commonly used to treat this condition.\nThis is an update of a Cochrane Review first published in 2002 and last updated in 2006.\nObjectives\nTo assess the effects of anticholinergic drugs compared with placebo or no treatment for treating overactive bladder syndrome in adults.\nSearch methods\nWe searched the Cochrane Incontinence Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, ClinicalTrials.gov, WHO ICTRP and handsearching of journals and conference proceedings (searched 14 January 2020), and the reference lists of relevant articles. We updated this search on 3 May 2022, but these results have not yet been fully incorporated.\nSelection criteria\nWe included randomised or quasi-randomised trials in adults with overactive bladder syndrome that compared an anticholinergic drug alone with placebo treatment.\nData collection and analysis\nTwo review authors independently assessed eligibility and extracted data from the included studies, including an assessment of the risk of bias. We assessed the certainty of the body of evidence using the GRADE approach. We processed data as described in the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nWe included 104 studies, 71 of which were new or updated for this version of the review. Although 12 studies did not report the number of participants, there were 47,106 people in the remainder of the included studies. The majority of the studies had insufficient information to allow judgement of risk of bias and we judged them to be unclear for all domains. Nine anticholinergic drugs were included in these studies: darifenacin; fesoterodine; imidafenacin; oxybutynin; propantheline; propiverine; solifenacin; tolterodine and trospium. No studies were found that compared anticholinergic drugs to no treatment.\nAt the end of the treatment period, anticholinergics may slightly increase condition-specific quality of life (mean difference (MD) 4.41 lower, 95% confidence interval (CI) 5.28 lower to 3.54 lower (scale range -100 to 0); 12 studies, 6804 participants; low-certainty evidence). Anticholinergics are probably better than placebo in terms of patient perception of cure or improvement (risk ratio (RR) 1.38, 95% CI 1.15 to 1.66; 9 studies, 8457 participants; moderate-certainty evidence), and the mean number of urgency episodes per 24-hour period (MD 0.85 lower, 95% CI 1.03 lower to 0.67 lower; 23 studies, 16,875 participants; moderate-certainty evidence).\nCompared to placebo, anticholinergics may result in an increase in dry mouth adverse events (RR 3.50, 95% CI 3.26 to 3.75; 66 studies, 38,368 participants; low-certainty evidence), and may result in an increased risk of urinary retention (RR 3.52, 95% CI 2.04 to 6.08; 17 studies, 7862 participants; low-certainty evidence). Taking anticholinergics may be more likely to lead to participants withdrawing from the studies due to adverse events (RR 1.37, 95% CI 1.21 to 1.56; 61 studies, 36,943 participants; low-certainty evidence). 
However, taking anticholinergics probably reduces the mean number of micturitions per 24-hour period compared to placebo (MD 0.85 lower, 95% CI 0.98 lower to 0.73 lower; 30 studies, 19,395 participants; moderate-certainty evidence).\nAuthors' conclusions\nThe use of anticholinergic drugs by people with overactive bladder syndrome results in important but modest improvements in symptoms compared with placebo treatment. In addition, recent studies suggest that this is generally associated with only modest improvement in quality of life. Adverse effects were higher with all anticholinergics compared with placebo. Withdrawals due to adverse effects were also higher for all anticholinergics except tolterodine. It is not known whether any benefits of anticholinergics are sustained during long-term treatment or after treatment stops.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We studied evidence that was available up until 14 January 2020. We updated this search on 3 May 2022, but these results have not yet been fully incorporated into the review."
}
] |
query-laysum
|
22054
|
[
{
"role": "user",
"content": "Abstract: Background\nThis review is an update of a previously published review in the Cochrane Database of Systematic Reviews (2009, Issue 3).Tea is one of the most commonly consumed beverages worldwide. Teas from the plant Camellia sinensis can be grouped into green, black and oolong tea, and drinking habits vary cross-culturally. C sinensis contains polyphenols, one subgroup being catechins. Catechins are powerful antioxidants, and laboratory studies have suggested that these compounds may inhibit cancer cell proliferation. Some experimental and nonexperimental epidemiological studies have suggested that green tea may have cancer-preventative effects.\nObjectives\nTo assess possible associations between green tea consumption and the risk of cancer incidence and mortality as primary outcomes, and safety data and quality of life as secondary outcomes.\nSearch methods\nWe searched eligible studies up to January 2019 in CENTRAL, MEDLINE, Embase, ClinicalTrials.gov, and reference lists of previous reviews and included studies.\nSelection criteria\nWe included all epidemiological studies, experimental (i.e. randomised controlled trials (RCTs)) and nonexperimental (non-randomised studies, i.e. observational studies with both cohort and case-control design) that investigated the association of green tea consumption with cancer risk or quality of life, or both.\nData collection and analysis\nTwo or more review authors independently applied the study criteria, extracted data and assessed methodological quality of studies. We summarised the results according to diagnosis of cancer type.\nMain results\nIn this review update, we included in total 142 completed studies (11 experimental and 131 nonexperimental) and two ongoing studies. This is an additional 10 experimental and 85 nonexperimental studies from those included in the previous version of the review.\nEleven experimental studies allocated a total of 1795 participants to either green tea extract or placebo, all demonstrating an overall high methodological quality based on 'Risk of bias' assessment. For incident prostate cancer, the summary risk ratio (RR) in the green tea-supplemented participants was 0.50 (95% confidence interval (CI) 0.18 to 1.36), based on three studies and involving 201 participants (low-certainty evidence). The summary RR for gynaecological cancer was 1.50 (95% CI 0.41 to 5.48; 2 studies, 1157 participants; low-certainty evidence). No evidence of effect of non-melanoma skin cancer emerged (summary RR 1.00, 95% CI 0.06 to 15.92; 1 study, 1075 participants; low-certainty evidence). In addition, adverse effects of green tea extract intake were reported, including gastrointestinal disorders, elevation of liver enzymes, and, more rarely, insomnia, raised blood pressure and skin/subcutaneous reactions. Consumption of green tea extracts induced a slight improvement in quality of life, compared with placebo, based on three experimental studies.\nIn nonexperimental studies, we included over 1,100,000 participants from 46 cohort studies and 85 case-control studies, which were on average of intermediate to high methodological quality based on Newcastle-Ottawa Scale 'Risk of bias' assessment. When comparing the highest intake of green tea with the lowest, we found a lower overall cancer incidence (summary RR 0.83, 95% CI 0.65 to 1.07), based on three studies, involving 52,479 participants (low-certainty evidence). 
Conversely, we found no association between green tea consumption and cancer-related mortality (summary RR 0.99, 95% CI 0.91 to 1.07), based on eight studies and 504,366 participants (low-certainty evidence).\nFor most of the site-specific cancers we observed a decreased RR in the highest category of green tea consumption compared with the lowest one. After stratifying the analysis according to study design, we found strongly conflicting results for some cancer sites: oesophageal, prostate and urinary tract cancer, and leukaemia showed an increased RR in cohort studies and a decreased RR or no difference in case-control studies.\nAuthors' conclusions\nOverall, findings from experimental and nonexperimental epidemiological studies yielded inconsistent results, thus providing limited evidence for the beneficial effect of green tea consumption on the overall risk of cancer or on specific cancer sites.\nSome evidence of a beneficial effect of green tea at some cancer sites emerged from the RCTs and from case-control studies, but their methodological limitations, such as the low number and size of the studies, and the inconsistencies with the results of cohort studies, limit the interpretability of the RR estimates. The studies also indicated the occurrence of several side effects associated with high intakes of green tea. In addition, the majority of included studies were carried out in Asian populations characterised by a high intake of green tea, thus limiting the generalisability of the findings to other populations. Well conducted and adequately powered RCTs would be needed to draw conclusions on the possible beneficial effects of green tea consumption on cancer risk.\n\nGiven the provided abstract, please respond to the following query: \"Main findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this review we included 142 studies with more than 1.1 million participants looking for an association between green tea consumption and cancers of the digestive tract and the female reproductive system, breast, prostate, kidney and urinary tract, nasopharynx, lung, blood, skin, thyroid and brain. The majority of the studies were of medium to high quality in terms of how they were conducted. Overall, the evidence from the studies showed that the consumption of green tea to reduce the risk of cancer was inconsistent.\nSome studies suggested a beneficial effect on cancer risk, while others indicated no effect, and even suggested a slightly increased cancer risk. In particular, results from experimental studies suggested that green tea extract supplementation yielded a decreased risk for prostate cancer, but increased risk for gynaecological cancers. For non-melanoma skin cancer no difference in cancer cases emerged. Green tea supplementation seemed to slightly improve quality of life compared with placebo, although it was associated with some adverse effects including gastrointestinal disorders, higher levels of liver enzymes, and, more rarely, insomnia, raised blood pressure and skin reactions.\nIn nonexperimental studies, comparing people consuming the highest amount of green tea to those in the lowest category of consumption, we found an indication of a lower occurrence of new cases of overall types of cancer, while no difference emerged for lethal cases. However, results according to the type of cancer and study design were inconsistent."
}
] |
query-laysum
|
22055
|
[
{
"role": "user",
"content": "Abstract: Background\nA detailed summary and meta-analysis of the dose-related effect of pravastatin on lipids is not available.\nObjectives\nPrimary objective\nTo assess the pharmacology of pravastatin by characterizing the dose-related effect and variability of the effect of pravastatin on the surrogate marker: low-density lipoprotein (LDL cholesterol). The effect of pravastatin on morbidity and mortality is not the objective of this systematic review.\nSecondary objectives\n• To assess the dose-related effect and variability of effect of pravastatin on the following surrogate markers: total cholesterol; high-density lipoprotein (HDL cholesterol); and triglycerides. \n• To assess the effect of pravastatin on withdrawals due to adverse effects.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomized controlled trials (RCTs) up to September 2021: CENTRAL (2021, Issue 8), Ovid MEDLINE, Ovid Embase, Bireme LILACS, the WHO International Clinical Trials Registry Platform, and ClinicalTrials.gov. We also contacted authors of relevant papers regarding further published and unpublished work. The searches had no language restrictions.\nSelection criteria\nRandomized placebo-controlled trials evaluating the dose response of different fixed doses of pravastatin on blood lipids over a duration of three to 12 weeks in participants of any age with and without evidence of cardiovascular disease.\nData collection and analysis\nTwo review authors independently assessed eligibility criteria for studies to be included, and extracted data. We entered lipid data from placebo-controlled trials into Review Manager 5 as continuous data and withdrawal due to adverse effects (WDAEs) data as dichotomous data. We searched for WDAEs information from all trials. We assessed all trials using Cochrane's risk of bias tool under the categories of sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting, and other potential biases.\nMain results\nSixty-four RCTs evaluated the dose-related efficacy of pravastatin in 9771 participants. The participants were of any age, with and without evidence of cardiovascular disease, and pravastatin effects were studied within a treatment period of three to 12 weeks. Log dose-response data over the doses of 5 mg to 160 mg revealed strong linear dose-related effects on blood total cholesterol and LDL cholesterol, and a weak linear dose-related effect on blood triglycerides. There was no dose-related effect of pravastatin on blood HDL cholesterol. Pravastatin 10 mg/day to 80 mg/day reduced LDL cholesterol by 21.7% to 31.9%, total cholesterol by 16.1% to 23.3%,and triglycerides by 5.8% to 20.0%. The certainty of evidence for these effects was judged to be moderate to high. For every two-fold dose increase there was a 3.4% (95% confidence interval (CI) 2.2 to 4.6) decrease in blood LDL cholesterol. This represented a dose-response slope that was less than the other studied statins: atorvastatin, rosuvastatin, fluvastatin, pitavastatin and cerivastatin. From other systematic reviews we conducted on statins for its effect to reduce LDL cholesterol, pravastatin is similar to fluvastatin, but has a decreased effect compared to atorvastatin, rosuvastatin, pitavastatin and cerivastatin. The effect of pravastatin compared to placebo on WADES has a risk ratio (RR) of 0.81 (95% CI 0.63 to 1.03). 
The certainty of evidence was judged to be very low.\nAuthors' conclusions\nPravastatin lowers blood total cholesterol, LDL cholesterol and triglyceride in a dose-dependent linear fashion. This review did not provide a good estimate of the incidence of harms associated with pravastatin because of the lack of reporting of adverse effects in 48.4% of the randomized placebo-controlled trials.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared pravastatin with a non-active form of treatment (placebo) over a three to 12-week duration and measured cholesterol and other fats in the blood. We assessed unwanted effects by looking at how many people left the studies because of unwanted effects. We considered people of any age, with and without evidence of cardiovascular disease."
}
] |
query-laysum
|
22056
|
[
{
"role": "user",
"content": "Abstract: Background\nPlacental transfusion (by means of delayed cord clamping (DCC), cord milking, or cord stripping) confers benefits for preterm infants. It is not known if providing respiratory support to preterm infants before cord clamping improves outcomes.\nObjectives\nTo assess the efficacy and safety of respiratory support provided during DCC compared with no respiratory support during placental transfusion (in the form of DCC, milking, or stripping) in preterm infants immediately after delivery.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL, 2017, Issue 5), MEDLINE via PubMed (1966 to 19 June 2017), Embase (1980 to 19 June 2017), and CINAHL (1982 to 19 June 2017). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials.\nSelection criteria\nRandomized, cluster randomized, or quasi-randomized controlled trials enrolling preterm infants undergoing DCC, where one of the groups received respiratory support before cord clamping and the control group received no respiratory support before cord clamping.\nData collection and analysis\nAll review authors assisted with data collection, assessment, and extraction. Two review authors assessed the quality of evidence using the GRADE approach. We contacted study authors to request missing information.\nMain results\nOne study fulfilled the review criteria. In this study, 150 preterm infants of less than 32 weeks' gestation undergoing 60 second DCC were randomized to a group who received respiratory support in the form of continuous positive airway pressure (CPAP) or positive pressure ventilation during DCC and a group that did not receive respiratory support during the procedure. Mortality during hospital admission was not significantly different between groups with wide confidence intervals (CI) for magnitude of effect (risk ratio (RR) 1.67, 95% CI 0.41 to 6.73). The study did not report neurodevelopmental disability and death or disability at two to three years of age. There were no significant differences between groups in condition at birth (Apgar scores or intubation in the delivery room), use of inotropic agents (RR 1.25, CI 0.63 to 2.49), and receipt of blood transfusion (RR 1.03, 95% CI 0.70 to 1.54). In addition, there were no significant differences in the incidences of any intraventricular haemorrhage (RR 1.50, 95% CI 0.65 to 3.46) and severe intraventricular haemorrhage (RR 1.33, 95% CI 0.31 to 5.75). Several continuous variables were reported in subgroups depending on method of delivery. Unpublished data for each group as a whole was made available and showed peak haematocrit in the first 24 hours and duration of phototherapy did not differ significantly. Overall, the quality of evidence for several key neonatal outcomes (e.g. mortality and intraventricular haemorrhage) was low because of lack of precision with wide CIs.\nAuthors' conclusions\nThe results from one study with wide CIs for magnitude of effect do not provide evidence either for or against the use of respiratory support before clamping the umbilical cord. A greater body of evidence is required as many of the outcomes of interest to the review occurred infrequently. 
Similarly, the one included study cannot answer the question of whether the intervention is or is not harmful.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "we searched medical databases and found one study for inclusion in this review. Preterm infants born before 32 weeks' gestation (32 weeks from the first day of the woman's last period (menstruation) to the current date) who had clamping of the umbilical cord delayed for 60 seconds after birth were selected at random to enter a group of babies who received breathing support and a group of babies who did not receive breathing support. The breathing support was given after birth of the baby and before the cord was clamped. Breathing support was the use of CPAP for infants breathing on their own or applying intermittent airwaypressureto expand the lungs in babies not breathing well on their own. Most of the study infants (83%) were delivered by caesarean section."
}
] |
query-laysum
|
22057
|
[
{
"role": "user",
"content": "Abstract: Background\nPortal hypertension commonly accompanies advanced liver disease and often gives rise to life-threatening complications, including bleeding (haemorrhage) from oesophageal and gastrointestinal varices. Variceal bleeding commonly occurs in children and adolescents with chronic liver disease or portal vein thrombosis. Prevention is, therefore, important. Randomised clinical trials have shown that non-selective beta-blockers and endoscopic variceal band ligation decrease the incidence of variceal bleeding in adults. In children and adolescents, band ligation, beta-blockers, and sclerotherapy have been proposed as primary prophylaxis alternatives for oesophageal variceal bleeding. However, it is unknown whether these interventions are of benefit or harm when used for primary prophylaxis in children and adolescents.\nObjectives\nTo assess the benefits and harms of band ligation compared with sham or no intervention for primary prophylaxis of oesophageal variceal bleeding in children and adolescents with chronic liver disease or portal vein thrombosis.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, PubMed, Embase, and two other databases (April 2020). We scrutinised the reference lists of the retrieved publications, and we also handsearched abstract books of the two main paediatric gastroenterology and hepatology conferences from January 2008 to December 2019. We also searched clinicaltrials.gov, the United States Food and Drug Administration (FDA), the European Medicines Agency (EMA), and the World Health Organization (WHO) for ongoing clinical trials. We imposed no language or document type restrictions on our search.\nSelection criteria\nWe aimed to include randomised clinical trials irrespective of blinding, language, or publication status, to assess the benefits and harms of band ligation versus sham or no intervention for primary prophylaxis of oesophageal variceal bleeding in children with chronic liver disease or portal vein thrombosis. If the search for randomised clinical trials retrieved quasi-randomised and other observational studies, then we read them through to extract information on harm.\nData collection and analysis\nWe used standard Cochrane methodology to perform this systematic review. We used GRADE to assess the certainty of evidence for each outcome. Our primary outcomes were all-cause mortality, serious adverse events and liver-related morbidity, and quality of life. Our secondary outcomes were oesophageal variceal bleeding and adverse events not considered serious. We used the intention-to-treat principle. We analysed data using Review Manager 5.\nMain results\nOne conference abstract, describing a feasibility multi-centre randomised clinical trial, fulfilled our review inclusion criteria. We judged the trial at overall high risk of bias. This trial was conducted in three hospital centres in the United Kingdom. The aim of the trial was to determine the feasibility and safety of further larger randomised clinical trials of prophylactic band ligation versus no active treatment in children with portal hypertension and large oesophageal varices. Twelve children received prophylactic band ligation and 10 children received no active treatment. There was no information on the age of the children included, or about the diagnosis of any child included. All children were followed up for at least six months. 
Mortality was 8% (1/12) in the band ligation group versus 0% (0/10) in the no active intervention group (risk ratio (RR) 2.54, 95% confidence interval (CI) 0.11 to 56.25; very low certainty of evidence). The abstract did not report when the death occurred, but we assume it happened between the six-month follow-up and one year. No child (0%) in the band ligation group developed adverse events (RR 0.28, 95% CI 0.01 to 6.25; very low certainty of evidence) but one child out of 10 (10%) in the no active intervention group developed idiopathic thrombocytopaenic purpura. One child out of 12 (8%) in the band ligation group underwent liver transplantation versus none in the no active intervention group (0%) (RR 2.54, 95% CI 0.11 to 56.25; very low certainty of evidence). The trial reported no other serious adverse events or liver-related morbidity. Quality of life was not reported. Oesophageal variceal bleeding occurred in 8% (1/12) of the children in the band ligation group versus 30% (3/10) of the children in the no active intervention group (RR 0.28, 95% CI 0.03 to 2.27; very low certainty of evidence). No adverse events considered non-serious were reported. Two children were lost to follow-up by one year. Ten children in total completed the trial at two-year follow-up. There was no information on funding.\nWe found two observational studies on endoscopic variceal ligation when searching for randomised trials. One found no harm, and the other reported Enterobacter cloacae septicaemia in one child and mild, transient, upper oesophageal sphincter stenosis in another. We did not assess these studies for risk of bias.\nWe did not find any ongoing randomised clinical trials of interest to our review.\nAuthors' conclusions\nThe evidence, obtained from only one feasibility randomised clinical trial at high risk of bias, is very scanty. It is very uncertain whether prophylactic band ligation versus sham or no (active) intervention may affect mortality, serious adverse events and liver-related morbidity, or oesophageal variceal bleeding in children and adolescents with portal hypertension and large oesophageal varices. We have no data on quality of life. No adverse events considered non-serious were reported. The results presented in the trial need to be interpreted with caution. In addition, the highly limited data cover only part of our research question; namely, children with portal hypertension and large oesophageal varices. Data on children with portal vein thrombosis are lacking.\nLarger randomised clinical trials assessing the benefits and harms of band ligation compared with sham treatment for primary prophylaxis of oesophageal variceal bleeding in children and adolescents with chronic liver disease or portal vein thrombosis are needed. The trials should include important clinical outcomes such as death, quality of life, failure to control bleeding, and adverse events.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The study randomised 22 children with portal hypertension and enlarged veins into two groups:\n- one group had an elastic ring placed around the enlarged vein (12 children); and\n- one group had no active treatment (10 children).\nThere was no information about the diagnosis, or age, of any of the children. The children were followed for at least six months. Two children were lost to follow-up by one year. Only 10 children in total completed the trial study at two-year follow-up.\nIn the group who had an elastic ring placed:\n- one child died;\n- no children had any other serious unwanted effects;\n- one child needed a liver transplant; and \n- one child had oesophageal bleeding.\nIn the group who received no active treatment:\n- no children died; \n- one child had a serious unwanted effect: an immune disorder in which blood does not clot properly (idiopathic thrombocytopaenic purpura); \n- no children needed a liver transplant; and\n- three children had oesophageal bleeding.\nNo unwanted effects that were less serious were reported in either group. The study did not report any information about the children's well-being (quality of life)."
}
] |
query-laysum
|
22058
|
[
{
"role": "user",
"content": "Abstract: Background\nImproving mobility outcomes after hip fracture is key to recovery. Possible strategies include gait training, exercise and muscle stimulation. This is an update of a Cochrane Review last published in 2011.\nObjectives\nTo evaluate the effects (benefits and harms) of interventions aimed at improving mobility and physical functioning after hip fracture surgery in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, trial registers and reference lists, to March 2021.\nSelection criteria\nAll randomised or quasi-randomised trials assessing mobility strategies after hip fracture surgery. Eligible strategies aimed to improve mobility and included care programmes, exercise (gait, balance and functional training, resistance/strength training, endurance, flexibility, three-dimensional (3D) exercise and general physical activity) or muscle stimulation. Intervention was compared with usual care (in-hospital) or with usual care, no intervention, sham exercise or social visit (post-hospital).\nData collection and analysis\nMembers of the review author team independently selected trials for inclusion, assessed risk of bias and extracted data. We used standard methodological procedures expected by Cochrane. We used the assessment time point closest to four months for in-hospital studies, and the time point closest to the end of the intervention for post-hospital studies. Critical outcomes were mobility, walking speed, functioning, health-related quality of life, mortality, adverse effects and return to living at pre-fracture residence.\nMain results\nWe included 40 randomised controlled trials (RCTs) with 4059 participants from 17 countries. On average, participants were 80 years old and 80% were women. The median number of study participants was 81 and all trials had unclear or high risk of bias for one or more domains. Most trials excluded people with cognitive impairment (70%), immobility and/or medical conditions affecting mobility (72%).\nIn-hospital setting, mobility strategy versus control\nEighteen trials (1433 participants) compared mobility strategies with control (usual care) in hospitals. Overall, such strategies may lead to a moderate, clinically-meaningful increase in mobility (standardised mean difference (SMD) 0.53, 95% confidence interval (CI) 0.10 to 0.96; 7 studies, 507 participants; low-certainty evidence) and a small, clinically meaningful improvement in walking speed (CI crosses zero so does not rule out a lack of effect (SMD 0.16, 95% CI -0.05 to 0.37; 6 studies, 360 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to short-term (risk ratio (RR) 1.06, 95% CI 0.48 to 2.30; 6 studies, 489 participants; low-certainty evidence) or long-term mortality (RR 1.22, 95% CI 0.48 to 3.12; 2 studies, 133 participants; low-certainty evidence), adverse events measured by hospital re-admission (RR 0.70, 95% CI 0.44 to 1.11; 4 studies, 322 participants; low-certainty evidence), or return to pre-fracture residence (RR 1.07, 95% CI 0.73 to 1.56; 2 studies, 240 participants; low-certainty evidence). 
We are uncertain whether mobility strategies improve functioning or health-related quality of life as the certainty of evidence was very low.\nGait, balance and functional training probably causes a moderate improvement in mobility (SMD 0.57, 95% CI 0.07 to 1.06; 6 studies, 463 participants; moderate-certainty evidence). There was little or no difference in effects on mobility for resistance training. No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nPost-hospital setting, mobility strategy versus control\nTwenty-two trials (2626 participants) compared mobility strategies with control (usual care, no intervention, sham exercise or social visit) in the post-hospital setting. Mobility strategies lead to a small, clinically meaningful increase in mobility (SMD 0.32, 95% CI 0.11 to 0.54; 7 studies, 761 participants; high-certainty evidence) and a small, clinically meaningful improvement in walking speed compared to control (SMD 0.16, 95% CI 0.04 to 0.29; 14 studies, 1067 participants; high-certainty evidence). Mobility strategies lead to a small, non-clinically meaningful increase in functioning (SMD 0.23, 95% CI 0.10 to 0.36; 9 studies, 936 participants; high-certainty evidence), and probably lead to a slight increase in quality of life that may not be clinically meaningful (SMD 0.14, 95% CI -0.00 to 0.29; 10 studies, 785 participants; moderate-certainty evidence). Mobility strategies probably make little or no difference to short-term mortality (RR 1.01, 95% CI 0.49 to 2.06; 8 studies, 737 participants; moderate-certainty evidence). Mobility strategies may make little or no difference to long-term mortality (RR 0.73, 95% CI 0.39 to 1.37; 4 studies, 588 participants; low-certainty evidence) or adverse events measured by hospital re-admission (95% CI includes a large reduction and large increase, RR 0.86, 95% CI 0.52 to 1.42; 2 studies, 206 participants; low-certainty evidence).\nTraining involving gait, balance and functional exercise leads to a small, clinically meaningful increase in mobility (SMD 0.20, 95% CI 0.05 to 0.36; 5 studies, 621 participants; high-certainty evidence), while training classified as being primarily resistance or strength exercise may lead to a clinically meaningful increase in mobility measured using distance walked in six minutes (mean difference (MD) 55.65, 95% CI 28.58 to 82.72; 3 studies, 198 participants; low-certainty evidence). Training involving multiple intervention components probably leads to a substantial, clinically meaningful increase in mobility (SMD 0.94, 95% CI 0.53 to 1.34; 2 studies, 104 participants; moderate-certainty evidence). We are uncertain of the effect of aerobic training on mobility (very low-certainty evidence). No studies of other types of exercise or electrical stimulation reported mobility outcomes.\nAuthors' conclusions\nInterventions targeting improvement in mobility after hip fracture may cause clinically meaningful improvement in mobility and walking speed in hospital and post-hospital settings, compared with conventional care. Interventions that include training of gait, balance and functional tasks are particularly effective. There was little or no between-group difference in the number of adverse events reported. 
Future trials should include long-term follow-up and economic outcomes, determine the relative impact of different types of exercise and establish effectiveness in emerging economies.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out:\n- whether mobility treatments delivered in the hospital or after discharge from hospital help people move around better;\n- what type of treatments help people move well after hip fracture.\nWe also wanted to know if mobility treatments can cause unwanted effects."
}
] |
query-laysum
|
22059
|
[
{
"role": "user",
"content": "Abstract: Background\nStroke is caused by the interruption of blood flow to the brain (ischemic stroke) or the rupture of blood vessels within the brain (hemorrhagic stroke) and may lead to changes in perception, cognition, mood, speech, health-related quality of life, and function, such as difficulty walking and using the arm. Activity limitations (decreased function) of the upper extremity are a common finding for individuals living with stroke. Mental practice (MP) is a training method that uses cognitive rehearsal of activities to improve performance of those activities.\nObjectives\nTo determine whether MP improves outcomes of upper extremity rehabilitation for individuals living with the effects of stroke.\nIn particular, we sought to (1) determine the effects of MP on upper extremity activity, upper extremity impairment, activities of daily living, health-related quality of life, economic costs, and adverse effects; and (2) explore whether effects differed according to (a) the time post stroke at which MP was delivered, (b) the dose of MP provided, or (c) the type of comparison performed.\nSearch methods\nWe last searched the Cochrane Stroke Group Trials Register on September 17, 2019. On September 3, 2019, we searched the Cochrane Central Register of Controlled Trials (the Cochrane Library), MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Scopus, Web of Science, the Physiotherapy Evidence Database (PEDro), and REHABDATA. On October 2, 2019, we searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. We reviewed the reference lists of included studies.\nSelection criteria\nWe included randomized controlled trials (RCTs) of adult participants with stroke who had deficits in upper extremity function (called upper extremity activity).\nData collection and analysis\nTwo review authors screened titles and abstracts of the citations produced by the literature search and excluded obviously irrelevant studies. We obtained the full text of all remaining studies, and both review authors then independently selected trials for inclusion. We combined studies when the review produced a minimum of two trials employing a particular intervention strategy and a common outcome. We considered the primary outcome to be the ability of the arm to be used for appropriate tasks, called upper extremity activity. Secondary outcomes included upper extremity impairment (such as quality of movement, range of motion, tone, presence of synergistic movement), activities of daily living (ADLs), health-related quality of life (HRQL), economic costs, and adverse events. We assessed risk of bias in the included studies and applied GRADE to assess the certainty of the evidence. We completed subgroup analyses for time since stroke, dosage of MP, type of comparison, and type of arm activity outcome measure.\nMain results\nWe included 25 studies involving 676 participants from nine countries. For the comparison of MP in addition to other treatment versus the other treatment, MP in combination with other treatment appears more effective in improving upper extremity activity than the other treatment without MP (standardized mean difference [SMD] 0.66, 95% confidence interval [CI] 0.39 to 0.94; I² = 39%; 15 studies; 397 participants); the GRADE certainty of evidence score was moderate based on risk of bias for the upper extremity activity outcome. 
For upper extremity impairment, results were as follows: SMD 0.59, 95% CI 0.30 to 0.87; I² = 43%; 15 studies; 397 participants, with a GRADE score of moderate, based on risk of bias. For ADLs, results were as follows: SMD 0.08, 95% CI -0.24 to 0.39; I² = 0%; 4 studies; 157 participants; the GRADE score was low due to risk of bias and small sample size. For the comparison of MP versus conventional treatment, the only outcome with available data to combine (3 studies; 50 participants) was upper extremity impairment (SMD 0.34, 95% CI -0.33 to 1.00; I² = 21%); GRADE for the impairment outcome in this comparison was low due to risk of bias and small sample size. Subgroup analyses of time post stroke, dosage of MP, or comparison type for the MP in combination with other rehabilitation treatment versus the other treatment comparison showed no differences. The secondary outcome of health-related quality of life was reported in only one study, and no study noted the outcomes of economic costs and adverse events.\nAuthors' conclusions\nModerate-certainty evidence shows that MP in addition to other treatment versus the other treatment appears to be beneficial in improving upper extremity activity. Moderate-certainty evidence also shows that MP in addition to other treatment versus the other treatment appears to be beneficial in improving upper extremity impairment after stroke. Low-certainty evidence suggests that ADLs may not be improved with MP in addition to other treatment versus the other treatment. Low-certainty evidence also suggests that MP versus conventional treatment may not improve upper extremity impairment. Further study is required to evaluate effects of MP on time post stroke, the volume of MP required to affect outcomes, and whether improvement is maintained over the long term.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Mental practice is a process through which an individual repeatedly mentally rehearses an action or task without actually physically performing the action or task. The goal of mental practice is to improve performance of those actions or tasks. Mental practice has been proposed as a potential adjunct to physical practice that is commonly performed by survivors of stroke undergoing rehabilitation."
}
] |
query-laysum
|
22060
|
[
{
"role": "user",
"content": "Abstract: Background\nMeasles, mumps, rubella, and varicella (chickenpox) are serious diseases that can lead to serious complications, disability, and death. However, public debate over the safety of the trivalent MMR vaccine and the resultant drop in vaccination coverage in several countries persists, despite its almost universal use and accepted effectiveness. This is an update of a review published in 2005 and updated in 2012.\nObjectives\nTo assess the effectiveness, safety, and long- and short-term adverse effects associated with the trivalent vaccine, containing measles, rubella, mumps strains (MMR), or concurrent administration of MMR vaccine and varicella vaccine (MMR+V), or tetravalent vaccine containing measles, rubella, mumps, and varicella strains (MMRV), given to children aged up to 15 years.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2019, Issue 5), which includes the Cochrane Acute Respiratory Infections Group's Specialised Register, MEDLINE (1966 to 2 May 2019), Embase (1974 to 2 May 2019), the WHO International Clinical Trials Registry Platform (2 May 2019), and ClinicalTrials.gov (2 May 2019).\nSelection criteria\nWe included randomised controlled trials (RCTs), controlled clinical trials (CCTs), prospective and retrospective cohort studies (PCS/RCS), case-control studies (CCS), interrupted time-series (ITS) studies, case cross-over (CCO) studies, case-only ecological method (COEM) studies, self-controlled case series (SCCS) studies, person-time cohort (PTC) studies, and case-coverage design/screening methods (CCD/SM) studies, assessing any combined MMR or MMRV / MMR+V vaccine given in any dose, preparation or time schedule compared with no intervention or placebo, on healthy children up to 15 years of age.\nData collection and analysis\nTwo review authors independently extracted data and assessed the methodological quality of the included studies. We grouped studies for quantitative analysis according to study design, vaccine type (MMR, MMRV, MMR+V), virus strain, and study settings. Outcomes of interest were cases of measles, mumps, rubella, and varicella, and harms. Certainty of evidence of was rated using GRADE.\nMain results\nWe included 138 studies (23,480,668 participants). Fifty-one studies (10,248,159 children) assessed vaccine effectiveness and 87 studies (13,232,509 children) assessed the association between vaccines and a variety of harms. We included 74 new studies to this 2019 version of the review.\nEffectiveness\nVaccine effectiveness in preventing measles was 95% after one dose (relative risk (RR) 0.05, 95% CI 0.02 to 0.13; 7 cohort studies; 12,039 children; moderate certainty evidence) and 96% after two doses (RR 0.04, 95% CI 0.01 to 0.28; 5 cohort studies; 21,604 children; moderate certainty evidence). The effectiveness in preventing cases among household contacts or preventing transmission to others the children were in contact with after one dose was 81% (RR 0.19, 95% CI 0.04 to 0.89; 3 cohort studies; 151 children; low certainty evidence), after two doses 85% (RR 0.15, 95% CI 0.03 to 0.75; 3 cohort studies; 378 children; low certainty evidence), and after three doses was 96% (RR 0.04, 95% CI 0.01 to 0.23; 2 cohort studies; 151 children; low certainty evidence). 
The effectiveness (at least one dose) in preventing measles after exposure (post-exposure prophylaxis) was 74% (RR 0.26, 95% CI 0.14 to 0.50; 2 cohort studies; 283 children; low certainty evidence).\nThe effectiveness of Jeryl Lynn containing MMR vaccine in preventing mumps was 72% after one dose (RR 0.24, 95% CI 0.08 to 0.76; 6 cohort studies; 9915 children; moderate certainty evidence), 86% after two doses (RR 0.12, 95% CI 0.04 to 0.35; 5 cohort studies; 7792 children; moderate certainty evidence). Effectiveness in preventing cases among household contacts was 74% (RR 0.26, 95% CI 0.13 to 0.49; 3 cohort studies; 1036 children; moderate certainty evidence). \nVaccine effectiveness against rubella, using a vaccine with the BRD2 strain which is only used in China, is 89% (RR 0.11, 95% CI 0.03 to 0.42; 1 cohort study; 1621 children; moderate certainty evidence). \nVaccine effectiveness against varicella (any severity) after two doses in children aged 11 to 22 months is 95% in a 10 years follow-up (rate ratio (rr) 0.05, 95% CI 0.03 to 0.08; 1 RCT; 2279 children; high certainty evidence).\nSafety\nThere is evidence supporting an association between aseptic meningitis and MMR vaccines containing Urabe and Leningrad-Zagreb mumps strains, but no evidence supporting this association for MMR vaccines containing Jeryl Lynn mumps strains (rr 1.30, 95% CI 0.66 to 2.56; low certainty evidence). The analyses provide evidence supporting an association between MMR/MMR+V/MMRV vaccines (Jeryl Lynn strain) and febrile seizures. Febrile seizures normally occur in 2% to 4% of healthy children at least once before the age of 5. The attributable risk febrile seizures vaccine-induced is estimated to be from 1 per 1700 to 1 per 1150 administered doses.\nThe analyses provide evidence supporting an association between MMR vaccination and idiopathic thrombocytopaenic purpura (ITP). However, the risk of ITP after vaccination is smaller than after natural infection with these viruses. Natural infection of ITP occur in 5 cases per 100,000 (1 case per 20,000) per year. The attributable risk is estimated about 1 case of ITP per 40,000 administered MMR doses.\nThere is no evidence of an association between MMR immunisation and encephalitis or encephalopathy (rate ratio 0.90, 95% CI 0.50 to 1.61; 2 observational studies; 1,071,088 children; low certainty evidence), and autistic spectrum disorders (rate ratio 0.93, 95% CI 0.85 to 1.01; 2 observational studies; 1,194,764 children; moderate certainty). There is insufficient evidence to determine the association between MMR immunisation and inflammatory bowel disease (odds ratio 1.42, 95% CI 0.93 to 2.16; 3 observational studies; 409 cases and 1416 controls; moderate certainty evidence).\nAdditionally, there is no evidence supporting an association between MMR immunisation and cognitive delay, type 1 diabetes, asthma, dermatitis/eczema, hay fever, leukaemia, multiple sclerosis, gait disturbance, and bacterial or viral infections.\nAuthors' conclusions\nExisting evidence on the safety and effectiveness of MMR/MMRV vaccines support their use for mass immunisation. Campaigns aimed at global eradication should assess epidemiological and socioeconomic situations of the countries as well as the capacity to achieve high vaccination coverage. More evidence is needed to assess whether the protective effect of MMR/MMRV could wane with time since immunisation.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our certainty (confidence) in the evidence is slightly limited by the design of most of the studies. Nonetheless, we judged the certainty of the evidence for the effectiveness of the MMR vaccine to be moderate, and that for the varicella vaccine to be high. Our certainty in the evidence for autism and febrile seizures was also moderate."
}
] |
query-laysum
|
22061
|
[
{
"role": "user",
"content": "Abstract: Background\nThe efficacy of chlorpromazine, a benchmark antipsychotic, has not been fully assessed in direct comparison with different individual antipsychotics. Penfluridol is another old antipsychotic with a long half-life so one oral dose may last up to one week. This could confer advantage.\nObjectives\nTo assess the clinical effects of chlorpromazine compared with penfluridol for adults with schizophrenia.\nSearch methods\nOn 31 March 2017, we searched the Cochrane Schizophrenia Group’s Study-Based Register of Trials which is based on regular searches of CINAHL, BIOSIS, AMED, Embase, PubMed, MEDLINE, PsycINFO, and registries of clinical trials. There are no language, date, document type, or publication status limitations for inclusion of records in the register.\nSelection criteria\nWe included all randomised clinical trials focusing on chlorpromazine versus penfluridol for adults with schizophrenia or related disorders. Outcomes of interest were death, service utilisation, global state, mental state, adverse effects and leaving the study early. We included trials meeting our selection criteria and reporting useable data.\nData collection and analysis\nWe extracted data independently. For binary outcomes, we calculated risk ratio (RR) and its 95% confidence interval (CI), on an intention-to-treat basis. For continuous data, we planned to estimate the mean difference (MD) between groups and its 95% CI. We employed a fixed-effect model for analyses. We assessed risk of bias for included studies and created a 'Summary of findings' table using GRADE.\nMain results\nThe review includes three studies with a total of 130 participants. Short-term results for hospital admissions showed no clear difference between chlorpromazine and penfluridol (1 RCT, n = 29, RR 0.19, 95% CI 0.01 to 3.60, low-quality evidence). No clear difference in the incidence of akathisia was found at medium term (2 RCTs, n = 85, RR 0.19, 95% CI 0.04 to 1.06, low-quality evidence), and similar numbers of participants - nearly half - from each treatment group left the study early (3 RCTs, n = 130, RR 1.21, 95% CI 0.83 to 1.77, low-quality evidence). The risk of needing additional antiparkinsonian medication was less in the chlorpromazine group (2 RCTs, n = 74, RR 0.70, 95% CI 0.51 to 0.95). No useable data reported clinically important change in global or mental state. No data were reported for relapse. No deaths were reported by the trials.\nAuthors' conclusions\nOnly three small studies provided data and the quality of reporting and evidence is low. Limited data indicate the efficacy and adverse effects profiles of chlorpromazine and penfluridol are generally similar. Penfluridol, however, may confer advantage by needing to be given only once per week. Firm conclusions are not possible without good-quality trials, and where these treatments are used, such trials are justified.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How effective is the antipsychotic chlorpromazine as a treatment for schizophrenia compared to penfluridol?"
}
] |
query-laysum
|
22062
|
[
{
"role": "user",
"content": "Abstract: Background\nPeripheral nerve blocks can be performed using ultrasound guidance. It is not yet clear whether this method of nerve location has benefits over other existing methods. This review was originally published in 2009 and was updated in 2014.\nObjectives\nThe objective of this review was to assess whether the use of ultrasound to guide peripheral nerve blockade has any advantages over other methods of peripheral nerve location. Specifically, we have asked whether the use of ultrasound guidance:\n1. improves success rates and effectiveness of regional anaesthetic blocks, by increasing the number of blocks that are assessed as adequate\n2. reduces the complications, such as cardiorespiratory arrest, pneumothorax or vascular puncture, associated with the performance of regional anaesthetic blocks\nSearch methods\nIn the 2014 update we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 8); MEDLINE (July 2008 to August 2014); EMBASE (July 2008 to August 2014); ISI Web of Science (2008 to April 2013); CINAHL (July 2014); and LILACS (July 2008 to August 2014). We completed forward and backward citation and clinical trials register searches.The original search was to July 2008. We reran the search in May 2015. We have added 11 potential new studies of interest to the list of 'Studies awaiting classification' and will incorporate them into the formal review findings during future review updates.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing ultrasound-guided peripheral nerve block of the upper and lower limbs, alone or combined, with at least one other method of nerve location. In the 2014 update, we excluded studies that had given general anaesthetic, spinal, epidural or other nerve blocks to all participants, as well as those measuring the minimum effective dose of anaesthetic drug. This resulted in the exclusion of five studies from the original review.\nData collection and analysis\nTwo authors independently assessed trial quality and extracted data. We used standard Cochrane methodological procedures, including an assessment of risk of bias and degree of practitioner experience for all studies.\nMain results\nWe included 32 RCTs with 2844 adult participants. Twenty-six assessed upper-limb and six assessed lower-limb blocks. Seventeen compared ultrasound with peripheral nerve stimulation (PNS), and nine compared ultrasound combined with nerve stimulation (US + NS) against PNS alone. Two studies compared ultrasound with anatomical landmark technique, one with a transarterial approach, and three were three-arm designs that included US, US + PNS and PNS.\nThere were variations in the quality of evidence, with a lack of detail in many of the studies to judge whether randomization, allocation concealment and blinding of outcome assessors was sufficient. It was not possible to blind practitioners and there was therefore a high risk of performance bias across all studies, leading us to downgrade the evidence for study limitations using GRADE. There was insufficient detail on the experience and expertise of practitioners and whether experience was equivalent between intervention and control.\nWe performed meta-analysis for our main outcomes. 
We found that ultrasound guidance produces superior peripheral nerve block success rates, with more blocks being assessed as sufficient for surgery following sensory or motor testing (Mantel-Haenszel (M-H) odds ratio (OR), fixed-effect 2.94 (95% confidence interval (CI) 2.14 to 4.04); 1346 participants), and fewer blocks requiring supplementation or conversion to general anaesthetic (M-H OR, fixed-effect 0.28 (95% CI 0.20 to 0.39); 1807 participants) compared with the use of PNS, anatomical landmark techniques or a transarterial approach. We were not concerned by risks of indirectness, imprecision or inconsistency for these outcomes and used GRADE to assess these outcomes as being of moderate quality. Results were similarly advantageous for studies comparing US + PNS with NS alone for the above outcomes (M-H OR, fixed-effect 3.33 (95% CI 2.13 to 5.20); 719 participants, and M-H OR, fixed-effect 0.34 (95% CI 0.21 to 0.56); 712 participants respectively). There were lower incidences of paraesthesia in both the ultrasound comparison groups (M-H OR, fixed-effect 0.42 (95% CI 0.23 to 0.76); 471 participants, and M-H OR, fixed-effect 0.97 (95% CI 0.30 to 3.12); 178 participants respectively) and lower incidences of vascular puncture in both groups (M-H OR, fixed-effect 0.19 (95% CI 0.07 to 0.57); 387 participants, and M-H OR, fixed-effect 0.22 (95% CI 0.05 to 0.90); 143 participants). There were fewer studies for these outcomes and we therefore downgraded both for imprecision and paraesthesia for potential publication bias. This gave an overall GRADE assessment of very low and low for these two outcomes respectively. Our analysis showed that it took less time to perform nerve blocks in the ultrasound group (mean difference (MD), IV, fixed-effect -1.06 (95% CI -1.41 to -0.72); 690 participants) but more time to perform the block when ultrasound was combined with a PNS technique (MD, IV, fixed-effect 0.76 (95% CI 0.55 to 0.98); 587 participants). With high levels of unexplained statistical heterogeneity, we graded this outcome as very low quality. We did not combine data for other outcomes as study results had been reported using differing scales or with a combination of mean and median data, but our interpretation of individual study data favoured ultrasound for a reduction in other minor complications and reduction in onset time of block and number of attempts to perform block.\nAuthors' conclusions\nThere is evidence that peripheral nerve blocks performed by ultrasound guidance alone, or in combination with PNS, are superior in terms of improved sensory and motor block, reduced need for supplementation and fewer minor complications reported. Using ultrasound alone shortens performance time when compared with nerve stimulation, but when used in combination with PNS it increases performance time.\nWe were unable to determine whether these findings reflect the use of ultrasound in experienced hands and it was beyond the scope of this review to consider the learning curve associated with peripheral nerve blocks by ultrasound technique compared with other methods.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was variation in the quality of the studies and authors had not always made sufficient attempts to ensure that the outcome assessors were unaware of what technique had been used for the nerve block. Studies had also often not clearly explained how experienced the people giving the nerve block were. This is particularly important, as ultrasound is still a relatively new technique and some anaesthetists may have limited experience. We rated our evidence for whether the nerve blocks were sufficient and adequate for surgery as of moderate quality, but evidence for our other outcomes was either low or very low."
}
] |
query-laysum
|
22063
|
[
{
"role": "user",
"content": "Abstract: Background\nBladder dysfunction is a common complication following radical hysterectomy, caused by the damage to pelvic autonomic nerves that innervate the muscles of the bladder, urethral sphincter, and pelvic floor fasciae. Bladder dysfunction increases the rates of urinary tract infection, hospital visits or admission, and patient dissatisfaction. In addition, bladder dysfunction can also negatively impact patient quality of life (QoL). Several postoperative interventions have been proposed to prevent bladder dysfunction following radical hysterectomy. To our knowledge, there has been no systematic review evaluating the effectiveness and safety of these interventions for preventing bladder dysfunction following radical hysterectomy in women with cervical cancer.\nObjectives\nTo evaluate the effectiveness and safety of postoperative interventions for preventing bladder dysfunction following radical hysterectomy in women with early-stage cervical cancer (stage IA2 to IIA2).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 4) in the Cochrane Library, MEDLINE via Ovid (1946 to April week 2, 2020), and Embase via Ovid (1980 to 2020, week 16). We also checked registers of clinical trials, grey literature, conference reports, and citation lists of included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) evaluating the effectiveness and safety of any type of postoperative interventions for preventing bladder dysfunction following a radical hysterectomy in women with stage IA2 to IIA2 cervical cancer.\nData collection and analysis\nTwo review authors independently selected potentially relevant RCTs, extracted data, assessed risk of bias, compared results, and made judgments on the quality and certainty of the evidence. We resolved any disagreements through discussion or consultation with a third review author. Outcomes of interest consisted of spontaneous voiding recovery one week after the operation, quality of life (QoL), adverse events, post-void residual urine volume one month after the operation, urinary tract infection over the one month following the operation, and subjective urinary symptoms.\nMain results\nWe identified 1464 records as a result of the search (excluding duplicates). Of the 20 records that potentially met the review criteria, we included five reports of four studies. Most of the studies had unclear risks of selection and reporting biases. Of the four studies, one compared bethanechol versus placebo and three studies compared suprapubic catheterisation with intermittent self-catheterisation. We identified two ongoing studies.\nBethanechol versus placebo\nThe study reported no information on the rate of spontaneous voiding recovery at one week following the operation, QoL, adverse events, urinary tract infection in the first month after surgery, and subjective urinary symptoms for this comparison. The volume of post-void residual urine, assessed at one month after surgery, among women receiving bethanechol was lower than those in the placebo group (mean difference (MD) -37.4 mL, 95% confidence interval (CI) -60.35 to -14.45; one study, 39 participants; very-low certainty evidence).\nSuprapubic catheterisation versus intermittent self-catheterisation\nThe studies reported no information on the rate of spontaneous voiding recovery at one week and post-void residual urine volume at one month following the operation for this comparison. 
There was no difference in the risk of acute complications (risk ratio (RR) 0.77, 95% CI 0.24 to 2.49; one study, 71 participants; very low certainty evidence) or urinary tract infections during the first month after surgery (RR 0.77, 95% CI 0.53 to 1.13; two studies, 95 participants; very low certainty evidence) between participants who underwent suprapubic catheterisation and those who underwent intermittent self-catheterisation. Available data were insufficient to calculate the relative measures of the effect of interventions on QoL and subjective urinary symptoms.\nAuthors' conclusions\nNone of the included studies reported the rate of spontaneous voiding recovery one week after surgery, time to a post-void residual volume of urine of 50 mL or less, or post-void residual urine volume at 6 and 12 months after surgery, all of which are important outcomes for assessing postoperative bladder dysfunction.\nLimited evidence suggested that bethanechol may minimise the risk of bladder dysfunction after radical hysterectomy by lowering post-void residual urine volume. The certainty of this evidence, however, was very low. The effectiveness of different types of postoperative urinary catheterisation (suprapubic and intermittent self-catheterisation) remains unproven.\n\nGiven the provided abstract, please respond to the following query: \"Main findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies that met the inclusion criteria. One study compared a medication called bethanechol to placebo (a substance that has no therapeutic effect, used as a control in testing drugs). Three studies compared suprapubic catheterisation (insertion of a flexible tube (catheter) into the bladder through a cut in the lower abdomen to drain urine) with intermittent self-catheterisation (insertion of a catheter via the urethra, into the bladder at intervals throughout the day)."
}
] |
query-laysum
|
22064
|
[
{
"role": "user",
"content": "Abstract: Background\nLaparoscopic appendectomy is amongst the most common general surgical procedures performed in the developed world. Arguably, the most critical part of this procedure is effective closure of the appendix stump to prevent catastrophic intra-abdominal complications from a faecal leak into the abdominal cavity.\n\nA variety of methods to close the appendix stump are used worldwide; these can be broadly divided into traditional ligatures (such as intracorporeal or extracorporeal ligatures or Roeder loops) and mechanical devices (such as stapling devices, clips, or electrothermal devices). However, the optimal method remains unclear.\nObjectives\nTo compare all surgical techniques now used for appendix stump closure during laparoscopic appendectomy.\nSearch methods\nIn June 2017, we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 6) in the Cochrane Library, MEDLINE Ovid (1946 to 14 June 2017), Embase Ovid (1974 to 14 June 2017), Science Citation Index - Expanded (14 June 2017), China Biological Medicine Database (CBM), the World Health Organization International Trials Registry Platform search portal, ClinicalTrials.gov, Current Controlled Trials, the Chinese Clinical Trials Register, and the EU Clinical Trials Register (all in June 2017). We searched the reference lists of relevant publications as well as meeting abstracts and Conference Proceedings Citation Index to look for additional relevant clinical trials.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared mechanical appendix stump closure (stapler, clips, or electrothermal devices) versus ligation (Endoloop, Roeder loop, or intracorporeal knot techniques) for uncomplicated appendicitis.\nData collection and analysis\nTwo review authors identified trials for inclusion, collected data, and assessed risk of bias independently. We performed the meta-analysis using Review Manager 5. We calculated the odds ratio (OR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes, with 95% confidence intervals (CIs).\nMain results\nWe included eight randomised studies encompassing 850 participants. Five studies compared titanium clips versus ligature, two studies compared an endoscopic stapler device versus ligature, and one study compared an endoscopic stapler device, titanium clips, and ligature. In our analyses of primary outcomes, we found no differences in total complications (OR 0.97, 95% CI 0.27 to 3.50, 8 RCTs, very low-quality evidence), intraoperative complications (OR 0.93, 95% CI 0.34 to 2.55, 8 RCTs, very low-quality evidence), or postoperative complications (OR 0.80, 95% CI 0.21 to 3.13, 8 RCTs, very low-quality evidence) between ligature and all types of mechanical devices. However, our analyses of secondary outcomes revealed that use of mechanical devices saved approximately nine minutes of total operating time when compared with use of a ligature (mean difference (MD) -9.04 minutes, 95% CI -12.97 to -5.11 minutes, 8 RCTs, very low-quality evidence). However, this finding did not translate into a clinically or statistically significant reduction in inpatient hospital stay (MD 0.02 days, 95% CI -0.12 to 0.17 days, 8 RCTs, very low-quality evidence). Available information was insufficient for reliable comparison of total hospital costs and postoperative pain/quality of life between the two approaches. Overall, evidence across all analyses was of very low quality, with substantial potential for confounding factors. 
Given the limitations of all studies in terms of bias and the low quality of available evidence, a clear conclusion regarding superiority of any one particular type of mechanical device over another is not possible.\nAuthors' conclusions\nEvidence is insufficient at present to advocate omission of conventional ligature-based appendix stump closure in favour of any single type of mechanical device over another in uncomplicated appendicitis.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Appendicitis is an Inflammation of the appendix. The conventional treatment for this condition involves an operation to remove the appendix, called an appendectomy. In recent years, this operation has been increasingly performed as keyhole surgery - laparoscopic appendectomy. For removal of the appendix during laparoscopic appendectomy, the best method of closing the remaining appendix stump to avoid leakage of bowel contents is unclear. Traditional approaches have involved ligatures and knots. However, in recent years, some surgeons have elected to use automated mechanical devices rather than ligatures, and it is unclear whether these devices reduce complications during laparoscopic appendectomy when compared with ligatures."
}
] |
query-laysum
|
22065
|
[
{
"role": "user",
"content": "Abstract: Background\nGlobal prevalence of overweight and obesity are alarming. For tackling this public health problem, preventive public health and policy actions are urgently needed. Some countries implemented food taxes in the past and some were subsequently abolished. Some countries, such as Norway, Hungary, Denmark, Bermuda, Dominica, St. Vincent and the Grenadines, and the Navajo Nation (USA), specifically implemented taxes on unprocessed sugar and sugar-added foods. These taxes on unprocessed sugar and sugar-added foods are fiscal policy interventions, implemented to decrease their consumption and in turn reduce adverse health-related, economic and social effects associated with these food products.\nObjectives\nTo assess the effects of taxation of unprocessed sugar or sugar-added foods in the general population on the consumption of unprocessed sugar or sugar-added foods, the prevalence and incidence of overweight and obesity, and the prevalence and incidence of other diet-related health outcomes.\nSearch methods\nWe searched CENTRAL, Cochrane Database of Systematic Reviews, MEDLINE, Embase and 15 other databases and trials registers on 12 September 2019. We handsearched the reference list of all records of included studies, searched websites of international organisations and institutions, and contacted review advisory group members to identify planned, ongoing or unpublished studies.\nSelection criteria\nWe included studies with the following populations: children (0 to 17 years) and adults (18 years or older) from any country and setting. Exclusion applied to studies with specific subgroups, such as people with any disease who were overweight or obese as a side-effect of the disease. The review included studies with taxes on or artificial increases of selling prices for unprocessed sugar or food products that contain added sugar (e.g. sweets, ice cream, confectionery, and bakery products), or both, as intervention, regardless of the taxation level or price increase. In line with Cochrane Effective Practice and Organisation of Care (EPOC) criteria, we included randomised controlled trials (RCTs), cluster-randomised controlled trials (cRCTs), non-randomised controlled trials (nRCTs), controlled before-after (CBA) studies, and interrupted time series (ITS) studies. We included controlled studies with more than one intervention or control site and ITS studies with a clearly defined intervention time and at least three data points before and three after the intervention. Our primary outcomes were consumption of unprocessed sugar or sugar-added foods, energy intake, overweight, and obesity. Our secondary outcomes were substitution and diet, expenditure, demand, and other health outcomes.\nData collection and analysis\nTwo review authors independently screened all eligible records for inclusion, assessed the risk of bias, and performed data extraction.Two review authors independently assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe retrieved a total of 24,454 records. After deduplicating records, 18,767 records remained for title and abstract screening. Of 11 potentially relevant studies, we included one ITS study with 40,210 household-level observations from the Hungarian Household Budget and Living Conditions Survey. The baseline ranged from January 2008 to August 2011, the intervention was implemented on September 2011, and follow-up was until December 2012 (16 months). 
The intervention was a tax - the so-called 'Hungarian public health product tax' - on sugar-added foods, including selected foods exceeding a specific sugar threshold value. The intervention includes co-interventions: the taxation of sugar-sweetened beverages (SSBs) and of foods high in salt or caffeine.\nThe study provides evidence on the effect of taxing foods exceeding a specific sugar threshold value on the consumption of sugar-added foods. After implementation of the Hungarian public health product tax, the mean consumption of taxed sugar-added foods (measured in units of kg) decreased by 4.0% (standardised mean difference (SMD) −0.040, 95% confidence interval (CI) −0.07 to −0.01; very low-certainty evidence).\nThe study was at low risk of bias in terms of performance bias, detection bias and reporting bias, with the shape of effect pre-specified and the intervention unlikely to have any effect on data collection. The study was at unclear risk of attrition bias and at high risk in terms of other bias and the independence of the intervention. We rated the certainty of the evidence as very low for the primary and secondary outcomes.\nThe Hungarian public health product tax included a tax on sugar-added foods but did not include a tax on unprocessed sugar. We did not find eligible studies reporting on the taxation of unprocessed sugar. No studies reported on the primary outcomes of consumption of unprocessed sugar, energy intake, overweight, and obesity. No studies reported on the secondary outcomes of substitution and diet, demand, and other health outcomes. No studies reported on differential effects across population subgroups.\nWe could not perform meta-analyses or pool study results.\nAuthors' conclusions\nThere was very limited evidence and the certainty of the evidence was very low. Despite the reported reduction in consumption of taxed sugar-added foods, we are uncertain whether taxing unprocessed sugar or sugar-added foods has an effect on reducing their consumption and preventing obesity or other adverse health outcomes. Further robustly conducted studies are required to draw concrete conclusions on the effectiveness of taxing unprocessed sugar or sugar-added foods for reducing their consumption and preventing obesity or other adverse health outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for ongoing or published studies up to October 2019. Of a total of 24,454 records retrieved, we identified one 'interrupted time series' (ITS) study meeting our eligibility criteria to assess the impact of a tax on sugar-added foods (but not unprocessed sugar). The study used data from the Hungarian Household Budget and Living Conditions Survey, with observations from 40,210 households. Evidence from the study included a 'baseline' (the situation before taxation), ranging from January 2008 to August 2011. The Hungarian public health product tax was implemented on September 2011. The duration of the follow-up period (measuring the effects of taxation) was 16 months. The study was funded by the Scottish Institute for Research in Economics (SIRE) Early Career Engagement Grant."
}
] |
query-laysum
|
22066
|
[
{
"role": "user",
"content": "Abstract: Background\nVitamin A deficiency is a significant public health problem in many low- and middle-income countries, especially affecting young children, women of reproductive age, and pregnant women. Fortification of staple foods with vitamin A has been used to increase vitamin A consumption among these groups.\nObjectives\nTo assess the effects of fortifying staple foods with vitamin A for reducing vitamin A deficiency and improving health-related outcomes in the general population older than two years of age.\nSearch methods\nWe searched the following international databases with no language or date restrictions: Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 6) in the Cochrane Library; MEDLINE and MEDLINE In Process OVID; Embase OVID; CINAHL Ebsco; Web of Science (ISI) SCI, SSCI, CPCI-exp and CPCI-SSH; BIOSIS (ISI); POPLINE; Bibliomap; TRoPHI; ASSIA (Proquest); IBECS; SCIELO; Global Index Medicus - AFRO and EMRO; LILACS; PAHO; WHOLIS; WPRO; IMSEAR; IndMED; and Native Health Research Database. We also searched clinicaltrials.gov and the International Clinical Trials Registry Platform to identify ongoing and unpublished studies. The date of the last search was 19 July 2018.\nSelection criteria\nWe included individually or cluster-randomised controlled trials (RCTs) in this review. The intervention included fortification of staple foods (sugar, edible oils, edible fats, maize flour or corn meal, wheat flour, milk and dairy products, and condiments and seasonings) with vitamin A alone or in combination with other vitamins and minerals. We included the general population older than two years of age (including pregnant and lactating women) from any country.\nData collection and analysis\nTwo authors independently screened and assessed eligibility of studies for inclusion, extracted data from included studies and assessed their risk of bias. We used standard Cochrane methodology to carry out the review.\nMain results\nWe included 10 randomised controlled trials involving 4455 participants. All the studies were conducted in low- and upper-middle income countries where vitamin A deficiency was a public health issue. One of the included trials did not contribute data to the outcomes of interest.\nThree trials compared provision of staple foods fortified with vitamin A versus unfortified staple food, five trials compared provision of staple foods fortified with vitamin A plus other micronutrients versus unfortified staple foods, and two trials compared provision of staple foods fortified with vitamin A plus other micronutrients versus no intervention. No studies compared staple foods fortified with vitamin A alone versus no intervention.\nThe duration of interventions ranged from three to nine months. We assessed six studies at high risk of bias overall. Government organisations, non-governmental organisations, the private sector, and academic institutions funded the included studies; funding source does not appear to have distorted the results.\nStaple food fortified with vitamin A versus unfortified staple food\nWe are uncertain whether fortifying staple foods with vitamin A alone makes little or no difference for serum retinol concentration (mean difference (MD) 0.03 μmol/L, 95% CI −0.06 to 0.12; 3 studies, 1829 participants; I² = 90%, very low-certainty evidence). 
It is uncertain whether vitamin A alone reduces the risk of subclinical vitamin A deficiency (risk ratio (RR) 0.45, 95% CI 0.19 to 1.05; 2 studies; 993 participants; I² = 33%, very low-certainty evidence). The certainty of the evidence was mainly affected by risk of bias, imprecision and inconsistency.\nIt is uncertain whether vitamin A fortification reduces clinical vitamin A deficiency, defined as night blindness (RR 0.11, 95% CI 0.01 to 1.98; 1 study, 581 participants, very low-certainty evidence). The certainty of the evidence was mainly affected by imprecision, inconsistency, and risk of bias.\nStaple foods fortified with vitamin A versus no intervention\nNo studies provided data for this comparison.\nStaple foods fortified with vitamin A plus other micronutrients versus same unfortified staple foods\nFortifying staple foods with vitamin A plus other micronutrients may not increase the serum retinol concentration (MD 0.08 μmol/L, 95% CI -0.06 to 0.22; 4 studies; 1009 participants; I² = 95%, low-certainty evidence). The certainty of the evidence was mainly affected by serious inconsistency and risk of bias.\nIn comparison to unfortified staple foods, fortification with vitamin A plus other micronutrients probably reduces the risk of subclinical vitamin A deficiency (RR 0.27, 95% CI 0.16 to 0.49; 3 studies; 923 participants; I² = 0%; moderate-certainty evidence). The certainty of the evidence was mainly affected by serious risk of bias.\nStaple foods fortified with vitamin A plus other micronutrients versus no intervention\nFortification of staple foods with vitamin A plus other micronutrients may increase serum retinol concentration (MD 0.22 μmol/L, 95% CI 0.15 to 0.30; 2 studies; 318 participants; I² = 0%; low-certainty evidence). When compared to no intervention, it is uncertain whether the intervention reduces the risk of subclinical vitamin A deficiency (RR 0.71, 95% CI 0.52 to 0.98; 2 studies; 318 participants; I² = 0%; very low-certainty evidence) . The certainty of the evidence was affected mainly by serious imprecision and risk of bias.\nNo trials reported on the outcomes of all-cause morbidity, all-cause mortality, adverse effects, food intake, congenital anomalies (for pregnant women), or breast milk concentration (for lactating women).\nAuthors' conclusions\nFortifying staple foods with vitamin A alone may make little or no difference to serum retinol concentrations or the risk of subclinical vitamin A deficiency. In comparison with provision of unfortified foods, provision of staple foods fortified with vitamin A plus other micronutrients may not increase serum retinol concentration but probably reduces the risk of subclinical vitamin A deficiency.\nCompared to no intervention, staple foods fortified with vitamin A plus other micronutrients may increase serum retinol concentration, although it is uncertain whether the intervention reduces the risk of subclinical vitamin A deficiency as the certainty of the evidence has been assessed as very low.\nIt was not possible to estimate the effect of staple food fortification on outcomes such as mortality, morbidity, adverse effects, congenital anomalies, or breast milk vitamin A, as no trials included these outcomes.\nThe type of funding source for the studies did not appear to distort the results from the analysis.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to investigate the effects of fortifying common staple foods with vitamin A, with or without other micronutrients, in populations two years of age and older. Staple foods used in public health fortification programmes have included refined sugar, edible vegetable oils and fats, rice, wheat flour, maize flours and corn meals, condiments and seasonings, and powdered or liquid milk. We searched for all the possible information on this question and found 10 eligible studies."
}
] |
query-laysum
|
22067
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic neck pain is a highly prevalent condition, affecting 10% to 24% of the general population. Transcutaneous electrical nerve stimulation (TENS) is the noninvasive, transcutaneous use of electrical stimulation to produce analgesia. It is a simple, low-cost and safe intervention used in clinical practice as an adjunct treatment for painful musculoskeletal conditions that have a considerable impact on daily activities, such as chronic neck pain. This review is a split from a Cochrane Review on electrotherapy for neck pain, published in 2013, and focuses specifically on TENS for chronic neck pain.\nObjectives\nTo evaluate the effectiveness of transcutaneous electrical nerve stimulation (TENS) (alone or in association with other interventions) compared with sham and other clinical interventions for the treatment of chronic neck pain.\nSearch methods\nWe searched Cochrane Back and Neck Trials Register, CENTRAL, MEDLINE, Embase, five other databases and two trials registers to 9 November 2018. We also screened the reference lists of relevant studies to identify additional trials. There were no language, source, or publication date restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) involving adults (≥ 18 years of age) with chronic neck pain (lasting > 12 weeks) that compared TENS alone or in combination with other treatments versus active or inactive treatments. The primary outcomes were pain, disability and adverse events.\nData collection and analysis\nTwo independent review authors selected the trials, extracted data and assessed the risk of bias of included studies. A third review author was consulted in case of disagreements. We used the Cochrane 'Risk of bias' tool (adapted by Cochrane Back and Neck), to assess the risk of bias of individual trials and GRADE to assess the certainty of evidence. We used risk ratios (RRs) to measure treatment effects for dichotomous outcomes, and mean differences (MDs) for continuous outcomes, with their respective 95% confidence intervals (CIs).\nMain results\nWe included seven RCTs with a total of 651 participants, mean age 31.7 to 55.5 years, conducted in three different countries (Turkey, Jordan and China). The length of follow-up ranged from one week to six months. Most RCTs used continuous TENS, with a frequency of 60 Hz to 100 Hz, pulse width of 40 μs to 250 μs and tolerable intensity, described as a tingling sensation without contraction, in daily sessions lasting 20 to 60 minutes. Due to heterogeneity in interventions and outcomes, we did not pool individual study data into meta-analyses. Overall, we judged most studies as being at low risk for selection bias and high risk for performance and detection bias.\nBased on the GRADE approach, there was very low-certainty evidence from two trials about the effects of conventional TENS when compared to sham TENS at short-term (up to 3 months after treatment) follow-up, on pain (assessed by the Visual Analogue Scale (VAS)) (MD -0.10, 95% CI -0.97 to 0.77) and the percentage of participants presenting improvement of pain (RR 1.57, 95% CI 0.84 to 2.92). None of the included studies reported on disability or adverse events.\nAuthors' conclusions\nThis review found very low-certainty evidence of a difference between TENS compared to sham TENS on reducing neck pain; therefore, we are unsure about the effect estimate. At present, there is insufficient evidence regarding the use of TENS in patients with chronic neck pain. 
Additional well-designed, -conducted and -reported RCTs are needed to reach robust conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Because of the differences between each of the included studies, we decided that it would not be appropriate to combine their results. Out of the seven studies included, two reported that TENS was no better than inactive treatment (placebo) in reducing the participants' neck pain. None of the included studies assessed disability or adverse events."
}
] |
query-laysum
|
22068
|
[
{
"role": "user",
"content": "Abstract: Background\nNasopharyngeal cancer is endemic in a few well-defined populations. The prognosis for advanced nasopharyngeal cancer is poor, but early-stage disease is curable and a high survival rate can be achieved. Screening for early-stage disease could lead to improved outcomes. Epstein-Barr virus (EBV) serology and nasopharyngoscopy are most commonly used for screening. The efficacy and true benefit of screening remain uncertain due to potential selection, lead-time and length-time biases.\nObjectives\nTo determine the effectiveness of screening of asymptomatic individuals by EBV serology and/or nasopharyngoscopy in reducing the mortality of nasopharyngeal cancer compared to no screening. To assess the impact of screening for nasopharyngeal cancer on incidence, survival, adverse effects, cost-effectiveness and quality of life.\nSearch methods\nThe Cochrane Ear, Nose and Throat Disorders Group (CENTDG) Trials Search Co-ordinator searched the CENTDG Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 6); PubMed; EMBASE; CINAHL; Web of Science; Clinicaltrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 6 July 2015.\nSelection criteria\nRandomised controlled trials (RCT) and controlled clinical trials (CCT) evaluating screening for nasopharyngeal cancer versus no screening. Randomisation either by clusters or individuals was acceptable.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration. Our primary outcome measure was nasopharyngeal cancer-specific mortality. Secondary outcomes were incidence of nasopharyngeal cancer by stage and histopathological classification at diagnosis, survival (two-year, three-year, five-year and 10-year), harms of screening (physical and psychosocial), quality of life (via validated tools such as the SF-36 and patient satisfaction), cost-effectiveness and all-cause mortality.\nMain results\nWe identified no trials that met the review inclusion criteria. We retrieved 31 full-text studies for further investigation following the search. However, none met the eligibility criteria for a RCT or CCT investigation on the efficacy of screening for nasopharyngeal cancer.\nAuthors' conclusions\nNo data from RCTs or CCTs are available to allow us to determine the efficacy of screening for nasopharyngeal cancer, or the cost-effectiveness and cost-benefit of a screening strategy. High-quality studies with long-term follow-up of mortality and cost-effectiveness are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is a need for high-quality studies to determine the effectiveness of a screening programme for nasopharyngeal cancer, especially studies that assess long-term outcomes, such as mortality, and cost-effectiveness."
}
] |
query-laysum
|
22069
|
[
{
"role": "user",
"content": "Abstract: Background\nOtitis media with effusion (OME) is an accumulation of fluid in the middle ear cavity, common amongst young children. The fluid may cause hearing loss. When persistent, it may lead to developmental delay, social difficulty and poor quality of life. Management of OME includes watchful waiting, autoinflation, medical and surgical treatment. Antibiotics are sometimes used to treat any bacteria present in the effusion, or associated biofilms.\nObjectives\nTo assess the effects (benefits and harms) of oral antibiotics for otitis media with effusion (OME) in children.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register, CENTRAL, Ovid MEDLINE, Ovid Embase, Web of Science, ClinicalTrials.gov, ICTRP and additional sources for published and unpublished studies to 20 January 2023.\nSelection criteria\nWe included randomised controlled trials and quasi-randomised trials in children aged 6 months to 12 years with unilateral or bilateral OME. We included studies that compared oral antibiotics with either placebo or no treatment.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were determined following a multi-stakeholder prioritisation exercise and were: 1) hearing, 2) otitis media-specific quality of life and 3) anaphylaxis. Secondary outcomes were: 1) persistence of OME, 2) adverse effects, 3) receptive language skills, 4) speech development, 5) cognitive development, 6) psychosocial skills, 7) listening skills, 8) generic health-related quality of life, 9) parental stress, 10) vestibular function and 11) episodes of acute otitis media. We used GRADE to assess the certainty of evidence for each outcome.\nAlthough we included all measures of hearing assessment, the proportion of children who returned to normal hearing was our preferred method to assess hearing, due to challenges in interpreting the results of mean hearing thresholds.\nMain results\nWe identified 19 completed studies that met our inclusion criteria (2581 participants). They assessed a variety of oral antibiotics (including penicillins, cephalosporins, macrolides and trimethoprim), with most studies using a 10- to 14-day treatment course. We had some concerns about the risk of bias in all studies included in this review. 
Here we report our primary outcomes and main secondary outcome, at the longest reported follow-up time.\nAntibiotics versus placebo\nWe included 11 studies for this comparison, but none reported all of our outcomes of interest and limited meta-analysis was possible.\nHearing\nOne study found that more children may return to normal hearing by two months (resolution of the air-bone gap) after receiving antibiotics as compared with placebo, but the evidence is very uncertain (Peto odds ratio (OR) 9.59, 95% confidence interval (CI) 3.51 to 26.18; 20/49 children who received antibiotics returned to normal hearing versus 0/37 who received placebo; 1 study, 86 participants; very low-certainty evidence).\nDisease-specific quality of life\nNo studies assessed this outcome.\nPresence/persistence of OME\nAt 6 to 12 months of follow-up, the use of antibiotics compared with placebo may slightly reduce the number of children with persistent OME, but the confidence intervals were wide, and the evidence is very uncertain (risk ratio (RR) 0.89, 95% CI 0.68 to 1.17; 48% versus 54%; number needed to treat (NNT) 17; 2 studies, 324 participants; very low-certainty evidence).\nAdverse event: anaphylaxis\nNo studies provided specific data on anaphylaxis. Three of the included studies (448 children) did report adverse events in sufficient detail to assume that no anaphylactic reactions occurred, but the evidence is very uncertain (very low-certainty evidence).\nAntibiotics versus no treatment\nWe included eight studies for this comparison, but very limited meta-analysis was possible.\nHearing\nOne study found that the use of antibiotics compared to no treatment may result in little to no difference in final hearing threshold at three months (mean difference (MD) -5.38 dB HL, 95% CI -9.12 to -1.64; 1 study, 73 participants; low-certainty evidence). The only data identified on the return to normal hearing were reported at 10 days of follow-up, which we considered to be too short to accurately reflect the efficacy of antibiotics.\nDisease-specific quality of life\nNo studies assessed this outcome.\nPresence/persistence of OME\nAntibiotics may reduce the proportion of children who have persistent OME at up to three months of follow-up, when compared with no treatment (RR 0.64, 95% CI 0.50 to 0.80; 6 studies, 542 participants; low-certainty evidence).\nAdverse event: anaphylaxis\nNo studies provided specific data on anaphylaxis. Two of the included studies (180 children) did report adverse events in sufficient detail to assume that no anaphylactic reactions occurred, but the evidence is very uncertain (very low-certainty evidence).\nAuthors' conclusions\nThe evidence for the use of antibiotics for OME is of low to very low certainty. Although the use of antibiotics compared to no treatment may have a slight beneficial effect on the resolution of OME at up to three months, the overall impact on hearing is very uncertain. The long-term effects of antibiotics are unclear and few of the studies included in this review reported on potential harms. These important endpoints should be considered when weighing up the potential short- and long-term benefits and harms of antibiotic treatment in a condition with a high spontaneous resolution rate.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up-to-date to January 2023."
}
] |
query-laysum
|
22070
|
[
{
"role": "user",
"content": "Abstract: Background\nDespite the widely documented risks of not breastfeeding, initiation rates remain relatively low in many high-income countries, particularly among women in lower-income groups. In low- and middle-income countries, many women do not follow World Health Organization (WHO) recommendations to initiate breastfeeding within the first hour after birth. This is an update of a Cochrane Review, first published in 2005.\nObjectives\nTo identify and describe health promotion activities intended to increase the initiation rate of breastfeeding.\nTo evaluate the effectiveness of different types of breastfeeding promotion activities, in terms of changing the number of women who initiate breastfeeding.\nTo evaluate the effectiveness of different types of breastfeeding promotion activities, in terms of changing the number of women who initiate breastfeeding early (within one hour after birth).\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register (29 February 2016) and scanned reference lists of all articles obtained.\nSelection criteria\nRandomised controlled trials (RCTs), with or without blinding, of any breastfeeding promotion intervention in any population group, except women and infants with a specific health problem.\nData collection and analysis\nTwo review authors independently assessed trial reports for inclusion, extracted data and assessed trial quality. Discrepancies were resolved through discussion and a third review author was involved when necessary. We contacted investigators to obtain missing information.\nMain results\nTwenty-eight trials involving 107,362 women in seven countries are included in this updated review. Five studies involving 3,124 women did not contribute outcome data and we excluded them from the analyses. The methodological quality of the included trials was mixed, with significant numbers of studies at high or unclear risk of bias due to: inadequate allocation concealment (N = 20); lack of blinding of outcome assessment (N = 20); incomplete outcome data (N = 19); selective reporting (N = 22) and bias from other potential sources (N = 17).\nHealthcare professional-led breastfeeding education and support versus standard care\nThe studies pooled here compare professional health workers delivering breastfeeding education and support during the prenatal and postpartum periods with standard care. Interventions included promotion campaigns and counselling, and all took place in a formal setting. There was evidence from five trials involving 564 women for improved rates ofbreastfeeding initiation among women who received healthcare professional-led breastfeeding education and support (average risk ratio (RR) 1.43, 95% confidence interval (CI) 1.07 to 1.92; Tau² = 0.07, I² = 62%, low-quality evidence) compared to those women who received standard care. We downgraded evidence due to design limitations and heterogeneity. The outcome of early initiation of breastfeeding was not reported in the studies under this comparison.\nNon-healthcare professional-led breastfeeding education and support versus standard care\nThere was evidence from eight trials of 5712 women for improved rates of breastfeeding initiation among women who received interventions from non-healthcare professional counsellors and support groups (average RR 1.22, 95% CI 1.06 to 1.40; Tau² = 0.02, I² = 86%, low-quality evidence) compared to women who received standard care. 
In three trials of 76,373 women, there was no clear difference between groups in terms of the number of women practicing early initiation of breastfeeding (average RR 1.70, 95% CI 0.98 to 2.95; Tau² = 0.18, I² = 78%, very low-quality evidence). We downgraded the evidence for a combination of design limitations, heterogeneity and imprecision (wide confidence intervals crossing the line of no effect).\nOther comparisons\nOther comparisons in this review also looked at the rates of initiation of breastfeeding and there were no clear differences between groups for the following comparisons of combined healthcare professional-led education with peer support or community educator versus standard care (2 studies, 1371 women) or attention control (1 study, 237 women), breastfeeding education using multimedia (a self-help manual or a video) versus routine care (2 studies, 497 women); early mother-infant contact versus standard care (2 studies, 309 women); and community-based breastfeeding groups versus no breastfeeding groups (1 study, 18,603 women). None of these comparisons reported data on early initiation of breastfeeding.\nAuthors' conclusions\nThis review found low-quality evidence that healthcare professional-led breastfeeding education and non-healthcare professional-led counselling and peer support interventions can result in some improvements in the number of women beginning to breastfeed. The majority of the trials were conducted in the USA, among women on low incomes and who varied in ethnicity and feeding intention, thus limiting the generalisability of these results to other settings.\nFuture studies would ideally be conducted in a range of low- and high-income settings, with data on breastfeeding rates over various timeframes, and explore the effectiveness of interventions that are initiated prior to conception or during pregnancy. These might include well-described interventions, including health education, early and continuing mother-infant contact, and initiatives to help mothers overcome societal barriers to breastfeeding, all with clearly defined outcome measures.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence on 29 February 2016. This updated review now includes 28 randomised controlled studies involving 107,362 women. Twenty studies involving 27,865 women looked at interventions to increase the number of women who started breastfeeding, in three high-income countries (Australia, 1 study; UK, 4 studies; and USA, 14 studies) and one lower middle-income country (Nicaragua, 1 study). Three studies investigated the effect of an intervention to increase the number of women who started breastfeeding early, within one hour after birth. These involved 76,373 women from Malawi, Nigeria and Ghana. The study from Malawi was large, with 55,931 participants.\nHealth education delivered by doctors and nurses and counselling and peer support by trained volunteers improved the number of women who began breastfeeding their babies. Five studies involving 564 women reported that women who received breastfeeding education and support from doctors or nurses were more likely to start breastfeeding compared to women who received standard care. Four of these studies were conducted in low-income or amongst minority ethnic women in the USA, where baseline breastfeeding rates are typically low. Eight studies involving 5712 women showed improved rates of starting breastfeeding with trained volunteer-delivered interventions and support groups compared to the women who received standard care.\nBreastfeeding education provided by trained volunteers could also improve the rates of early initiation of breastfeeding, within one hour of giving birth, in low-income countries.\nWe assessed all the evidence in this review to be low-quality because of limitations in study design and variations in the interventions, to whom, when, where, and how an intervention was delivered. Standard care also differed and could include some breastfeeding support, for example, in the UK.\nWe found too little evidence to say whether strategies with multimedia, early mother-infant contact, or community-based breastfeeding groups were able to improve breastfeeding initiation."
}
] |
query-laysum
|
22071
|
[
{
"role": "user",
"content": "Abstract: Background\nThe rising prevalence of autism spectrum disorders (ASD) increases the need for evidence-based behavioral treatments to lessen the impact of symptoms on children's functioning. At present, there are no curative or psychopharmacological therapies to effectively treat all symptoms of the disorders. Early intensive behavioral intervention (EIBI) is a treatment based on the principles of applied behavior analysis. Delivered for multiple years at an intensity of 20 to 40 hours per week, it is one of the more well-established treatments for ASD. This is an update of a Cochrane review last published in 2012.\nObjectives\nTo systematically review the evidence for the effectiveness of EIBI in increasing functional behaviors and skills, decreasing autism severity, and improving intelligence and communication skills for young children with ASD.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, 12 additional electronic databases and two trials registers in August 2017. We also checked references and contacted study authors to identify additional studies.\nSelection criteria\nRandomized control trials (RCTs), quasi-RCTs, and controlled clinical trials (CCTs) in which EIBI was compared to a no-treatment or treatment-as-usual control condition. Participants must have been less than six years of age at treatment onset and assigned to their study condition prior to commencing treatment.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nWe synthesized the results of the five studies using a random-effects model of meta-analysis, with a mean difference (MD) effect size for outcomes assessed on identical scales, and a standardized mean difference (SMD) effect size (Hedges' g) with small sample correction for outcomes measured on different scales. We rated the quality of the evidence using the GRADE approach.\nMain results\nWe included five studies (one RCT and four CCTs) with a total of 219 children: 116 children in the EIBI groups and 103 children in the generic, special education services groups. The age of the children ranged between 30.2 months and 42.5 months. Three of the five studies were conducted in the USA and two in the UK, with a treatment duration of 24 months to 36 months. All studies used a treatment-as-usual comparison group.\nPrimary outcomes\nWe found evidence at post-treatment that EIBI improves adaptive behaviour (MD 9.58 (assessed using Vineland Adaptive Behavior Scale (VABS) Composite; normative mean = 100, normative SD = 15), 95% confidence interval (CI) 5.57 to 13.60, P < 0.001; 5 studies, 202 participants; low-quality evidence; lower values indicate positive effects). We found no evidence at post-treatment that EIBI improves autism symptom severity (SMD −0.34, 95% CI −0.79 to 0.11, P = 0.14; 2 studies, 81 participants; very low-quality evidence).\nNo adverse effects were reported across studies.\nSecondary outcomes\nWe found evidence at post-treatment that EIBI improves IQ (MD 15.44 (assessed using standardized IQ tests; scale 0 to 100, normative SD = 15), 95% CI 9.29 to 21.59, P < 0.001; 5 studies, 202 participants; low-quality evidence) and expressive (SMD 0.51, 95% CI 0.12 to 0.90, P = 0.01; 4 studies, 165 participants; low-quality evidence) and receptive (SMD 0.55, 95% CI 0.23 to 0.87, P = 0.001; 4 studies, 164 participants; low-quality evidence) language skills. 
We found no evidence at post-treatment that EIBI improves problem behaviour (SMD −0.58, 95% CI −1.24 to 0.07, P = 0.08; 2 studies, 67 participants; very low-quality evidence).\nAuthors' conclusions\nThere is weak evidence that EIBI may be an effective behavioral treatment for some children with ASD; the strength of the evidence in this review is limited because it mostly comes from small studies that are not of the optimum design. Due to the inclusion of non-randomized studies, there is a high risk of bias and we rated the overall quality of evidence as 'low' or 'very low' using the GRADE system, meaning further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.\nIt is important that providers of EIBI are aware of the current evidence and use clinical decision-making guidelines, such as seeking the family’s input and drawing upon prior clinical experience, when making recommendations to clients on the use of EIBI. Additional studies using rigorous research designs are needed to make stronger conclusions about the effects of EIBI for children with ASD.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence supports the use of EIBI for some children with ASD. However, the results should be interpreted with caution, as the quality of the evidence is weak; only a small number of children were involved in the studies, and only one study had an optimum design in which children were randomly assigned to treatment groups."
}
] |
query-laysum
|
22072
|
[
{
"role": "user",
"content": "Abstract: Background\nThe World Health Organization's (WHO’s) Health Promoting Schools (HPS) framework is an holistic, settings-based approach to promoting health and educational attainment in school. The effectiveness of this approach has not been previously rigorously reviewed.\nObjectives\nTo assess the effectiveness of the Health Promoting Schools (HPS) framework in improving the health and well-being of students and their academic achievement.\nSearch methods\nWe searched the following electronic databases in January 2011 and again in March and April 2013: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, CINAHL, Campbell Library, ASSIA, BiblioMap, CAB Abstracts, IBSS, Social Science Citation Index, Sociological Abstracts, TRoPHI, Global Health Database, SIGLE, Australian Education Index, British Education Index, Education Resources Information Centre, Database of Education Research, Dissertation Express, Index to Theses in Great Britain and Ireland, ClinicalTrials.gov, Current controlled trials, and WHO International Clinical Trials Registry Platform. We also searched relevant websites, handsearched reference lists, and used citation tracking to identify other relevant articles.\nSelection criteria\nWe included cluster-randomised controlled trials where randomisation took place at the level of school, district or other geographical area. Participants were children and young people aged four to 18 years, attending schools or colleges. In this review, we define HPS interventions as comprising the following three elements: input to the curriculum; changes to the school’s ethos or environment or both; and engagement with families or communities, or both. We compared this intervention against schools that implemented either no intervention or continued with their usual practice, or any programme that included just one or two of the above mentioned HPS elements.\nData collection and analysis\nAt least two review authors identified relevant trials, extracted data, and assessed risk of bias in the trials. We grouped different types of interventions according to the health topic targeted or the approach used, or both. Where data permitted, we performed random-effects meta-analyses to provide a summary of results across studies.\nMain results\nWe included 67 eligible cluster trials, randomising 1443 schools or districts. This is made up of 1345 schools and 98 districts. The studies tackled a range of health issues: physical activity (4), nutrition (12), physical activity and nutrition combined (18), bullying (7), tobacco (5), alcohol (2), sexual health (2), violence (2), mental health (2), hand-washing (2), multiple risk behaviours (7), cycle-helmet use (1), eating disorders (1), sun protection (1), and oral health (1). The quality of evidence overall was low to moderate as determined by the GRADE approach. 'Risk of bias' assessments identified methodological limitations, including heavy reliance on self-reported data and high attrition rates for some studies. In addition, there was a lack of long-term follow-up data for most studies.\nWe found positive effects for some interventions for: body mass index (BMI), physical activity, physical fitness, fruit and vegetable intake, tobacco use, and being bullied. Intervention effects were generally small but have the potential to produce public health benefits at the population level. 
We found little evidence of effectiveness for standardised body mass index (zBMI) and no evidence of effectiveness for fat intake, alcohol use, drug use, mental health, violence and bullying others; however, only a small number of studies focused on these latter outcomes. It was not possible to meta-analyse data on other health outcomes due to lack of data. Few studies provided details on adverse events or outcomes related to the interventions. In addition, few studies included any academic, attendance or school-related outcomes. We therefore cannot draw any clear conclusions as to the effectiveness of this approach for improving academic achievement.\nAuthors' conclusions\nThe results of this review provide evidence for the effectiveness of some interventions based on the HPS framework for improving certain health outcomes but not others. More well-designed research is required to establish the effectiveness of this approach for other health topics and academic achievement.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, we found some evidence to suggest the HPS approach can produce improvements in certain areas of health, but there are not enough data to draw conclusions about its effectiveness for others. We need more studies to find out if this approach can improve other aspects of health and how students perform at school."
}
] |
query-laysum
|
22073
|
[
{
"role": "user",
"content": "Abstract: Background\nThe distal tibial metaphysis is located in the lower (distal) part of the tibia (shin bone). Fractures of this part of the tibia are most commonly due to a high energy injury in young men and to osteoporosis in older women. The optimal methods of surgical intervention for a distal tibial metaphyseal fracture remain uncertain.\nObjectives\nTo assess the effects (benefits and harms) of surgical interventions for distal tibial metaphyseal fractures in adults. We planned to compare surgical versus non-surgical (conservative) treatment, and different methods of surgical intervention.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (9 December 2014), the Cochrane Central Register of Controlled Trials (2014, Issue 12), MEDLINE (1946 to November Week 3 2014), EMBASE (1980 to 2014 Week 48), the Airiti Library (1967 to 2014 Week 8), China Knowledge Resource Integrated Database (1915 to 2014 Week 8), ClinicalTrials.gov (February 2014) and reference lists of included studies.\nSelection criteria\nWe included randomised and quasi-randomised controlled clinical studies comparing surgical versus non-surgical (conservative) treatment or different surgical interventions for treating distal tibial metaphyseal fractures in adults. Our primary outcomes were patient-reported function and the need for secondary or revision surgery or substantive physiotherapy because of adverse outcomes.\nData collection and analysis\nTwo authors independently selected studies, assessed the risk of bias in each study and extracted data. We resolved disagreement by discussion and, where necessary, in consultation with a third author. Where appropriate we pooled data using the fixed-effect model.\nMain results\nWe included three randomised trials that evaluated intramedullary nailing versus plating in 213 participants, with useable data from 173 participants of whom 112 were male. The mean age of participants in individual studies ranged from 41 to 44 years. There were no trials comparing surgery with non-surgical treatment. The three included trials were at high risk of performance bias, with one trial also being at high risk of selection, detection and attrition bias. Overall, the quality of available evidence was rated as very low for all outcomes, meaning that we are very unsure about the estimates for all outcomes.\nAlthough the pooled results of three different measures of foot and ankle function indicated a small difference in favour of nailing (standard mean difference 0.28, 95% CI -0.02 to 0.59; 172 participants, 3 trials), the results of individual trials indicated that this was very unlikely to be a clinically important difference. Pooled data (173 participants, 3 trials) for the need for reoperation or substantive physiotherapy for adverse events favoured nailing (4/90 versus 10/83; RR 0.37, 95% CI 0.12 to 1.12), but included the possibility of a better outcome after plating. Based on an illustrative risk of 100 re-operations for adverse outcomes within one year of plate fixation in 1000 people with these fractures, 63 fewer (95% CI 88 fewer to 12 more) people per 1000 would have re-operations after nailing. Similarly pooled data (173 participants, 3 trials) for the symptomatic nonunion or malunion, wound complications and fracture union favoured nailing but the 95% confidence intervals crossed the line of no effect and thus included the possibility of a better outcome after plating. 
Evidence from one trial (85 participants) showed no clinically important difference in pain between the two groups.\nAuthors' conclusions\nOverall, there is either no or insufficient evidence to draw definitive conclusions on the use of surgery or the best surgical intervention for distal tibial metaphyseal fractures in adults. The available evidence, which is of very low quality, found no clinically important differences in function or pain, and did not confirm a difference in the need for re-operation or risk of complications between nailing and plating.\nThe addition of evidence from two ongoing trials of nailing versus plating should inform this question in future updates. Further randomised trials are warranted on other issues, but should be preceded by research to identify priority questions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Breaks in the lower (distal) end of the shin bone (or tibia) are mostly caused by high-energy trauma, such as motor vehicle accidents. We set out to compare surgical treatment (such as putting the broken parts back into position and fixing these either by inserting a metal nail into the central cavity of the bone (nailing) or with a metal plate and securing it to the bone using screws (plating)) with non-surgical treatment (plaster cast immobilisation). We also set out to compare different methods of surgery such as nailing versus plating."
}
] |
query-laysum
|
22074
|
[
{
"role": "user",
"content": "Abstract: Background\nIdiopathic hypersomnia is a disorder of excessive daytime sleepiness, often accompanied by long sleep times or pronounced difficulty in awakening, in the absence of a known cause. The optimal treatment strategy for idiopathic hypersomnia is currently unknown.\nObjectives\nTo assess the effects of medications for daytime sleepiness and related symptoms in individuals with idiopathic hypersomnia and, in particular, whether medications may: 1. reduce subjective measures of sleepiness; 2. reduce objective measures of sleepiness; 3. reduce symptoms of cognitive dysfunction; 4. improve quality of life; and 5. be associated with adverse events.\nSearch methods\nWe searched the following databases on 4 February 2021: Cochrane Register of Studies (CRS Web), MEDLINE (Ovid, 1946 to 1 February 2021), and reference lists of articles. CRS Web includes randomized or quasi-randomized controlled trials from PubMed, Embase, ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), the Cochrane Central Register of Controlled Trials (CENTRAL), and the specialized registers of Cochrane Review Groups, including the Cochrane Epilepsy Group. We previously searched the WHO ICTRP separately when loading of ICTRP records into CRS Web was temporarily suspended.\nSelection criteria\nRandomized studies comparing any medication to placebo, another medication, or a behavioral intervention.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality. We contacted study authors for additional data. We collected data on adverse events from the included trials.\nMain results\nWe included three trials, with a total of 112 participants. Risk of bias was low for the included studies. Two pharmaceutical company-sponsored trials compared modafinil with placebo, involving 102 participants, nearly all of whom had idiopathic hypersomnia without long sleep time. Modafinil significantly improved self-reported sleepiness on the Epworth Sleepiness Scale by 5.08 points more than placebo (95% confidence interval (CI) 3.01 to 7.16; 2 studies, 101 participants; high-certainty evidence). Modafinil also significantly improved disease severity on the Clinical Global Impression of Severity scale by 1.02 points (95% CI 0.11 to 1.93; 1 study, 30 participants; moderate-certainty evidence) and resulted in a greater proportion of participants who were \"much improved\" or \"very much improved\" on the Clinical Global Impression of Change (odds ratio (OR) for improvement 5.14, 95% CI 1.76 to 15.00; 1 study, 70 participants; moderate-certainty evidence). Ability to remain awake on the Maintenance of Wakefulness Test was significantly improved with modafinil, by 4.74 minutes more than with placebo (95% CI 2.46 to 7.01; 2 studies, 99 participants; high-certainty evidence). Ratings of exhaustion and effectiveness/performance were improved with modafinil compared to placebo in one study. Number of naps per week was no different between modafinil and placebo across two studies. Participants receiving modafinil experienced more side effects, although the difference did not reach statistical significance (OR 1.68, 95% CI 0.28 to 9.94; 2 studies, 102 participants; low-certainty evidence).\nOne trial studying 20 participants with different disorders of sleepiness included 10 participants with idiopathic hypersomnia, with or without long sleep time, and compared clarithromycin to placebo. 
We only included the subset of trial data for those participants with idiopathic hypersomnia, per our protocol. There were no significant differences between clarithromycin and placebo for the Epworth Sleepiness Scale, psychomotor vigilance testing, sleep inertia, other subjective ratings, or side effects.\nAuthors' conclusions\nModafinil is effective for the treatment of several aspects of idiopathic hypersomnia symptomatology, based on studies predominantly including participants with idiopathic hypersomnia without long sleep times, with low risk of bias, and evidence certainty ranging from high to low. There is insufficient evidence to conclude whether clarithromycin is effective for the treatment of idiopathic hypersomnia. There is a clear need for additional studies testing interventions for the treatment of idiopathic hypersomnia.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Modafinil improves sleepiness and the ability to stay awake during testing in a sleep laboratory. It probably improves the overall severity of idiopathic hypersomnia, feelings of exhaustion, and daytime performance. Modafinil may cause headaches and stomach symptoms such as nausea.\nIt is uncertain whether or not clarithromycin helps with daytime sleepiness or other symptoms of idiopathic hypersomnia. It is uncertain whether there is a difference in side effects between clarithromycin and placebo, based on the information included in this review."
}
] |
query-laysum
|
22075
|
[
{
"role": "user",
"content": "Abstract: Background\nWithdrawal is a necessary step prior to drug-free treatment or as the endpoint of long-term substitution treatment.\nObjectives\nTo assess the effectiveness of interventions involving the use of alpha2-adrenergic agonists compared with placebo, reducing doses of methadone, symptomatic medications, or an alpha2-adrenergic agonist regimen different to the experimental intervention, for the management of the acute phase of opioid withdrawal. Outcomes included the withdrawal syndrome experienced, duration of treatment, occurrence of adverse effects, and completion of treatment.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (1946 to November week 2, 2015), EMBASE (January 1985 to November week 2, 2015), PsycINFO (1806 to November week 2, 2015), Web of Science, and reference lists of articles.\nSelection criteria\nRandomised controlled trials comparing alpha2-adrenergic agonists (clonidine, lofexidine, guanfacine, tizanidine) with reducing doses of methadone, symptomatic medications or placebo, or comparing different alpha2-adrenergic agonists to modify the signs and symptoms of withdrawal in participants who were opioid dependent.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included 26 randomised controlled trials involving 1728 participants. Six studies compared an alpha2-adrenergic agonist with placebo, 12 with reducing doses of methadone, four with symptomatic medications, and five compared different alpha2-adrenergic agonists. We assessed 10 studies as having a high risk of bias in at least one of the methodological domains that were considered.\nWe found moderate-quality evidence that alpha2-adrenergic agonists were more effective than placebo in ameliorating withdrawal in terms of the likelihood of severe withdrawal (risk ratio (RR) 0.32, 95% confidence interval (CI) 0.18 to 0.57; 3 studies; 148 participants). We found moderate-quality evidence that completion of treatment was significantly more likely with alpha2-adrenergic agonists compared with placebo (RR 1.95, 95% CI 1.34 to 2.84; 3 studies; 148 participants).\nPeak withdrawal severity may be greater with alpha2-adrenergic agonists than with reducing doses of methadone, as measured by the likelihood of severe withdrawal (RR 1.18, 95% CI 0.81 to 1.73; 5 studies; 340 participants; low quality), and peak withdrawal score (standardised mean difference (SMD) 0.22, 95% CI -0.02 to 0.46; 2 studies; 263 participants; moderate quality), but these differences were not significant and there is no significant difference in severity when considered over the entire duration of the withdrawal episode (SMD 0.13, 95% CI -0.24 to 0.49; 3 studies; 119 participants; moderate quality). The signs and symptoms of withdrawal occurred and resolved earlier with alpha2-adrenergic agonists. The duration of treatment was significantly longer with reducing doses of methadone (SMD -1.07, 95% CI -1.31 to -0.83; 3 studies; 310 participants; low quality). Hypotensive or other adverse effects were significantly more likely with alpha2-adrenergic agonists (RR 1.92, 95% CI 1.19 to 3.10; 6 studies; 464 participants; low quality), but there was no significant difference in rates of completion of withdrawal treatment (RR 0.85, 95% CI 0.69 to 1.05; 9 studies; 659 participants; low quality).\nThere were insufficient data for quantitative comparison of different alpha2-adrenergic agonists. 
Available data suggest that lofexidine does not reduce blood pressure to the same extent as clonidine, but is otherwise similar to clonidine.\nAuthors' conclusions\nClonidine and lofexidine are more effective than placebo for the management of withdrawal from heroin or methadone. We detected no significant difference in efficacy between treatment regimens based on clonidine or lofexidine and those based on reducing doses of methadone over a period of around 10 days, but methadone was associated with fewer adverse effects than clonidine, and lofexidine has a better safety profile than clonidine.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 26 randomised controlled trials (clinical studies where people are randomly put into one of two or more treatment groups), involving 1728 opioid-dependent participants. The studies were undertaken in 12 different countries and involved treatment with an alpha2-adrenergic agonist (clonidine, lofexidine, guanfacine, and in one study, tizanidine) compared with reducing doses of methadone (12 studies), placebo (six studies), or symptomatic medications (four studies). Five studies compared different alpha2-adrenergic agonists. Treatment was scheduled to last for one to two weeks in most studies; the shortest duration was three days, and the longest was 30 days.\nSix studies received some financial support from a pharmaceutical company."
}
] |
query-laysum
|
22076
|
[
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease (SCD) is one of the commonest severe monogenic disorders in the world, due to the inheritance of two abnormal haemoglobin (beta globin) genes. SCD can cause severe pain, significant end-organ damage, pulmonary complications, and premature death. Silent cerebral infarcts are the commonest neurological complication in children and probably adults with SCD. Silent cerebral infarcts also affect academic performance, increase cognitive deficits and may lower intelligence quotient.\nObjectives\nTo assess the effectiveness of interventions to reduce or prevent silent cerebral infarcts in people with SCD.\nSearch methods\nWe searched for relevant trials in the Cochrane Library, MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1980), and ongoing trial databases; all searches current to 14 November 2019. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register: 07 October 2019.\nSelection criteria\nRandomised controlled trials comparing interventions to prevent silent cerebral infarcts in people with SCD. There were no restrictions by outcomes examined, language or publication status.\nData collection and analysis\nWe used standard Cochrane methodological procedures.\nMain results\nWe included five trials (660 children or adolescents) published between 1998 and 2016. Four of the five trials were terminated early. The vast majority of participants had the haemoglobin (Hb)SS form of SCD. One trial focused on preventing silent cerebral infarcts or stroke; three trials were for primary stroke prevention and one trial dealt with secondary stroke prevention.\nThree trials compared the use of regular long-term red blood cell transfusions to standard care. Two of these trials included children with no previous long-term transfusions: one in children with normal transcranial doppler (TCD) velocities; and one in children with abnormal TCD velocities. The third trial included children and adolescents on long-term transfusion.\nTwo trials compared the drug hydroxyurea and phlebotomy to long-term transfusions and iron chelation therapy: one in primary prevention (children), and one in secondary prevention (children and adolescents).\nThe quality of the evidence was moderate to very low across different outcomes according to GRADE methodology. 
This was due to trials being at high risk of bias because they were unblinded; indirectness (available evidence was only for children with HbSS); and imprecise outcome estimates.\nLong-term red blood cell transfusions versus standard care\nChildren with no previous long-term transfusions and higher risk of stroke (abnormal TCD velocities or previous history of silent cerebral infarcts)\nLong-term red blood cell transfusions may reduce the incidence of silent cerebral infarcts in children with abnormal TCD velocities, risk ratio (RR) 0.11 (95% confidence interval (CI) 0.02 to 0.86) (one trial, 124 participants, low-quality evidence); but make little or no difference to the incidence of silent cerebral infarcts in children with previous silent cerebral infarcts on magnetic resonance imaging and normal or conditional TCDs, RR 0.70 (95% CI 0.23 to 2.13) (one trial, 196 participants, low-quality evidence).\nNo deaths were reported in either trial.\nLong-term red blood cell transfusions may reduce the incidence of: acute chest syndrome, RR 0.24 (95% CI 0.12 to 0.49) (two trials, 326 participants, low-quality evidence); and painful crisis, RR 0.63 (95% CI 0.42 to 0.95) (two trials, 326 participants, low-quality evidence); and probably reduces the incidence of clinical stroke, RR 0.12 (95% CI 0.03 to 0.49) (two trials, 326 participants, moderate-quality evidence).\nLong-term red blood cell transfusions may improve quality of life in children with previous silent cerebral infarcts (difference estimate -0.54; 95% confidence interval -0.92 to -0.17; one trial; 166 participants), but may have no effect on cognitive function (least squares means: 1.7, 95% CI -1.1 to 4.4) (one trial, 166 participants, low-quality evidence).\nTransfusions continued versus transfusions halted: children and adolescents with normalised TCD velocities (79 participants; one trial)\nContinuing red blood cell transfusions may reduce the incidence of silent cerebral infarcts, RR 0.29 (95% CI 0.09 to 0.97) (low-quality evidence).\nWe are very uncertain whether continuing red blood cell transfusions has any effect on all-cause mortality, Peto odds ratio (OR) 8.00 (95% CI 0.16 to 404.12); or clinical stroke, RR 0.22 (95% CI 0.01 to 4.35) (very low-quality evidence).\nThe trial did not report: comparative numbers for SCD-related adverse events; quality of life; or cognitive function.\nHydroxyurea and phlebotomy versus transfusions and chelation\nPrimary prevention, children (121 participants; one trial)\nWe are very uncertain whether switching to hydroxyurea and phlebotomy has any effect on: silent cerebral infarcts (no infarcts); all-cause mortality (no deaths); risk of stroke (no strokes); or SCD-related complications, RR 1.52 (95% CI 0.58 to 4.02) (very low-quality evidence).\nSecondary prevention, children and adolescents with a history of stroke (133 participants; one trial)\nWe are very uncertain whether switching to hydroxyurea and phlebotomy has any effect on: silent cerebral infarcts, Peto OR 7.28 (95% CI 0.14 to 366.91); all-cause mortality, Peto OR 1.02 (95% CI 0.06 to 16.41); or clinical stroke, RR 14.78 (95% CI 0.86 to 253.66) (very low-quality evidence).\nSwitching to hydroxyurea and phlebotomy may increase the risk of SCD-related complications, RR 3.10 (95% CI 1.42 to 6.75) (low-quality evidence).\nNeither trial reported on quality of life or cognitive function.\nAuthors' conclusions\nWe identified no trials for preventing silent cerebral infarcts in adults, or in children who do not have HbSS SCD.\nLong-term red blood 
cell transfusions may reduce the incidence of silent cerebral infarcts in children with abnormal TCD velocities, but may have little or no effect on children with normal TCD velocities. In children who are at higher risk of stroke and have not had previous long-term transfusions, long-term red blood cell transfusions probably reduce the risk of stroke, and other SCD-related complications (acute chest syndrome and painful crises).\nIn children and adolescents at high risk of stroke whose TCD velocities have normalised, continuing red blood cell transfusions may reduce the risk of silent cerebral infarcts. No treatment duration threshold has been established for stopping transfusions.\nSwitching to hydroxyurea with phlebotomy may increase the risk of silent cerebral infarcts and SCD-related serious adverse events in secondary stroke prevention.\nAll other evidence in this review is of very low-quality.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to determine if there were any safe and effective interventions that prevent silent strokes (also known as silent cerebral infarcts) in people with sickle cell disease (SCD)."
}
] |
query-laysum
|
22077
|
[
{
"role": "user",
"content": "Abstract: Background\nHealth workers move between public and private organizations in both urban and rural areas during the course of their career. Depending on the proportion of the population served by public or private organizations in a particular setting, this movement may result in imbalances in the number of healthcare providers available relative to the population receiving care from that sector. However, both public and private organizations are needed as each sector has unique contributions to make to the effective delivery of health services.\nObjectives\nTo assess the effects of financial incentives and movement restriction interventions to manage the movement of health workers between public and private organizations in low- and middle-income countries.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (10 November 2012); EMBASE (7 June 2011); LILACS (9 June 2011); MEDLINE (10 November 2012); CINAHL (13 August 2012); and the British Nursing Index (13 August 2012).\nSelection criteria\nRandomized controlled trials and non-randomized controlled trials; controlled before-and-after studies if pre- and post-intervention periods for study and control groups were the same and there were at least two units included in both the intervention and control groups; uncontrolled and controlled interrupted time series studies if the point in time when the intervention occurred was clearly defined and there were at least three or more data points before and after the intervention. Interventions included payment of special allowances, increasing salaries, bonding health workers, offering bursary schemes, scholarships or lucrative terminal benefits, and hiring people on contract basis.\nData collection and analysis\nTwo review authors independently applied the criteria for inclusion and exclusion of studies to the titles and abstracts of all articles obtained from the search. The same two review authors independently screened the full reports of the selected citations. At each stage, we compared the results and resolved discrepancies through discussion with a third review author.\nMain results\nWe found no studies that were eligible for inclusion in this review.\nAuthors' conclusions\nWe identified no rigorous studies on the effects of interventions to manage the movement of health workers between public and private organizations in low- and middle-income countries. Health worker availability is a key obstacle in delivery of health services. Interventions to make the health sector more responsive to the expectations of populations by having more health workers in the sector that serves most people would contribute to the more efficient use of the health workforce. More research is needed to assess the effect of increase in salaries, offering scholarships or bonding on movement of health workers in one sector compared with another.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Although these types of approaches are not uncommon, the review could not find any relevant studies that gave a reliable assessment of their impact. There is still a lot of work to be done to understand how governments can ensure that health workers serve in those health facilities that care for the majority of the population."
}
] |
query-laysum
|
22078
|
[
{
"role": "user",
"content": "Abstract: Background\nSome hospital patients may be at risk of or may present with major bleeding. Abnormalities of clotting (coagulation) are often recorded in these people, and the traditional management has been with transfusions of blood components, either to prevent bleeding (prophylactic) or to treat bleeding (therapeutic). There is growing interest in the use of targeted therapies with specific pro-coagulant haemostatic (causing bleeding to stop and to keep blood within a damaged blood vessel) factor concentrates in place of plasma.\nObjectives\nTo assess the effects and safety of pro-coagulant haemostatic factors and factor concentrates in the prevention and treatment of bleeding in people without haemophilia.\nSearch methods\nWe searched for randomised controlled trials (RCTs) in the Cochrane Central Register of Controlled Trials (2018, issue 3), MEDLINE (from 1948), Embase (from 1974), CINAHL (from 1938), PubMed (publications in process to 18 April 2018), PROSPERO, Transfusion Evidence Library (from 1950), LILACS (from 1980), IndMED (from 1985), KoreaMed (from 1934), Web of Science Conference Proceedings Citation Index (from 1990) and ongoing trial databases to 18 April 2018.\nSelection criteria\nWe included RCTs that compared intravenous administration of a pro-coagulant haemostatic factor concentrate, either with placebo, current best or standard treatment, or another pro-coagulant haemostatic factor concentrate for prevention or treatment of bleeding. There was no restriction on the types of participants. We excluded studies of desmopressin, tranexamic acid and aminocaproic acid and use of pro-coagulant haemostatic factors for vitamin K over-anticoagulation.\nData collection and analysis\nWe followed standard Cochrane methodological procedures.\nMain results\nWe identified 31 RCTs with 2392 participants and 22 ongoing trials. There were 13 therapeutic RCTs that randomised 1057 participants (range from 20 to 249 participants) and 18 prophylactic trials that randomised 1335 participants (range 20 to 479 participants). The pro-coagulant haemostatic factor concentrate was fibrinogen in 23 trials, Factor XIII in seven trials and pro-thrombin complex concentrates (PCC) in one trial.\nSeventeen trials had industrial funding or support, eight studies either did not declare their funding or were unclear about their source of funding and six studies declared non-industrial funding sources.\nCertainty in the evidence and included study bias\nOur certainty in the evidence, using GRADE criteria, ranged from very low to high across all outcomes. We assessed most outcomes as being of low certainty. Risks of bias were a concern in many of the RCTs; randomisation methodology was unclear in 15 RCTs, with allocation concealment unclear in 14 RCTs and at high risk of bias in five RCTs. The blinding status of outcome assessors was unclear in 13 RCTs and at high risk of bias in five RCTs, although most outcomes in these trials were objective and not prone to observer bias. Study personnel were often unblinded or insufficient information was available to assess their level of blinding (five RCTs were at unclear risk and seven at high risk of bias).\nPrimary outcomes\nAll-cause mortality was reported by 21 RCTs, arterial thromboembolic events by 22 RCTs, and venous thromboembolic events by 21 RCTs.\nFibrinogen concentrate: prophylactic trials with inactive comparator (nine RCTs)\nThe trials had heterogeneous clinical settings and outcome time points, so we did not pool the data. 
Compared to placebo, there was no evidence that prophylactic fibrinogen concentrate reduced all-cause mortality (4 RCTs; 248 participants). Compared to inactive comparators there was low- to moderate-quality evidence that prophylactic fibrinogen concentrate did not increase the risk of arterial or venous thromboembolic complications (7 RCTs; 398 participants).\nFibrinogen concentrate: prophylactic trials with active comparator (two RCTs)\nThere was no mortality or incidence of thromboembolic events in these two RCTs (with 57 participants).\nFibrinogen concentrate: therapeutic trials with inactive comparator (eight RCTs)\nThe trials had heterogeneous surgical settings and outcome time points, so we pooled data for subgroups only. Compared to an inactive comparator, there was no evidence (quality ranging from low to high) that fibrinogen concentrate reduced all-cause mortality in actively bleeding participants (7 RCTs; 724 participants). Compared to inactive comparators there was no evidence that the use of fibrinogen concentrate in active bleeding increased arterial (7 RCTs; 607 participants) or venous (6 RCTs; 562 participants) thromboembolic events.\nFibrinogen concentrate: therapeutic trials with active comparator (four RCTs)\nWe did not pool the outcome data, as they were not measured at comparable time points. Compared to other active pro-coagulant agents, there was no evidence (very low to moderate quality) that fibrinogen concentrate reduced all-cause mortality in actively bleeding participants (4 RCTs; 220 participants). There was no evidence that fibrinogen concentrate increased the risk of arterial (3 RCTs; 126 participants) or venous (4 RCTs; 220 participants) thromboembolic events.\nFactor XIII: prophylactic trials with inactive comparator (six trials)\nThe trials were heterogeneous in their surgical settings and time points for outcome analysis, so we pooled data for subgroups only. Compared to an inactive comparator, there was no evidence that prophylactic Factor XIII reduced all-cause mortality (5 RCTs; 414 participants). There was no evidence (very low to low quality) of a difference in the arterial or venous event rate between Factor XIII and inactive comparators (4 trials; 354 participants).\nFactor XIII: therapeutic trials with inactive comparator (one trial)\nThere was no mortality or incidence of thromboembolic events in this trial.\nProthrombin complex concentrate (PCC): prophylactic trials with inactive comparator (one trial)\nThere was no evidence (moderate quality) that PCC reduced all-cause mortality (1 trial; 78 participants). No thromboembolic complications were reported in this trial.\nAuthors' conclusions\nThe paucity of good-quality comparable evidence precludes the drawing of conclusions for clinical practice. Further research is required to determine the risk-to-benefit ratio of these interventions. The sample sizes of future RCTs would need to be greatly increased to detect a reduction in mortality or thromboembolic events between treatment arms. To improve consistency in outcome reporting, the development of core outcome sets is essential and may help address a number of the limitations identified in this review.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our certainty in the evidence is low, but future research may change the findings of this review."
}
] |
query-laysum
|
22079
|
[
{
"role": "user",
"content": "Abstract: Background\nAutism spectrum disorder (ASD) is a behaviourally diagnosed condition. It is defined by impairments in social communication or the presence of restricted or repetitive behaviours, or both. Diagnosis is made according to existing classification systems. In recent years, especially following publication of the Diagnostic and Statistical Manual of Mental Disorders - Fifth Edition (DSM-5; APA 2013), children are given the diagnosis of ASD, rather than subclassifications of the spectrum such as autistic disorder, Asperger syndrome, or pervasive developmental disorder - not otherwise specified. Tests to diagnose ASD have been developed using parent or carer interview, child observation, or a combination of both.\nObjectives\nPrimary objectives\n1. To identify which diagnostic tools, including updated versions, most accurately diagnose ASD in preschool children when compared with multi-disciplinary team clinical judgement.\n2. To identify how the best of the interview tools compare with CARS, then how CARS compares with ADOS.\na. Which ASD diagnostic tool - among ADOS, ADI-R, CARS, DISCO, GARS, and 3di - has the best diagnostic test accuracy?\nb. Is the diagnostic test accuracy of any one test sufficient for that test to be suitable as a sole assessment tool for preschool children?\nc. Is there any combination of tests that, if offered in sequence, would provide suitable diagnostic test accuracy and enhance test efficiency?\nd. If data are available, does the combination of an interview tool with a structured observation test have better diagnostic test accuracy (i.e. fewer false-positives and fewer false-negatives) than either test alone?\nAs only one interview tool was identified, we modified the first three aims to a single aim (Differences between protocol and review): This Review evaluated diagnostic tests in terms of sensitivity and specificity. Specificity is the most important factor for diagnosis; however, both sensitivity and specificity are of interest in this Review because there is an inherent trade-off between these two factors.\nSecondary objectives\n1. To determine whether any diagnostic test has greater diagnostic test accuracy for age-specific subgroups within the preschool age range.\nSearch methods\nIn July 2016, we searched CENTRAL, MEDLINE, Embase, PsycINFO, 10 other databases, and the reference lists of all included publications.\nSelection criteria\nPublications had to: \n1. report diagnostic test accuracy for any of the following six included diagnostic tools: Autism Diagnostic Interview - Revised (ADI-R), Gilliam Autism Rating Scale (GARS), Diagnostic Interview for Social and Communication Disorder (DISCO), Developmental, Dimensional, and Diagnostic Interview (3di), Autism Diagnostic Observation Schedule - Generic (ADOS), and Childhood Autism Rating Scale (CARS); \n2. include children of preschool age (under six years of age) suspected of having an ASD; and \n3. have a multi-disciplinary assessment, or similar, as the reference standard.\nEligible studies included cohort, cross-sectional, randomised test accuracy, and case-control studies. The target condition was ASD.\nData collection and analysis\nTwo review authors independently assessed all studies for inclusion and extracted data using standardised forms. A third review author settled disagreements. We assessed methodological quality using the QUADAS-2 instrument (Quality Assessment of Studies of Diagnostic Accuracy - Revised). 
We conducted separate univariate random-effects logistic regressions for sensitivity and specificity for CARS and ADI-R. We conducted meta-analyses of pairs of sensitivity and specificity using bivariate random-effects methods for ADOS.\nMain results\nIn this Review, we included 21 sets of analyses reporting different tools or cohorts of children from 13 publications, many with high risk of bias or potential conflicts of interest or a combination of both. Overall, the prevalence of ASD for children in the included analyses was 74%.\nFor versions and modules of ADOS, there were 12 analyses with 1625 children. Sensitivity of ADOS ranged from 0.76 to 0.98, and specificity ranged from 0.20 to 1.00. The summary sensitivity was 0.94 (95% confidence interval (CI) 0.89 to 0.97), and the summary specificity was 0.80 (95% CI 0.68 to 0.88).\nFor CARS, there were four analyses with 641 children. Sensitivity of CARS ranged from 0.66 to 0.89, and specificity ranged from 0.21 to 1.00. The summary sensitivity for CARS was 0.80 (95% CI 0.61 to 0.91), and the summary specificity was 0.88 (95% CI 0.64 to 0.96).\nFor ADI-R, there were five analyses with 634 children. Sensitivity for ADI-R ranged from 0.19 to 0.75, and specificity ranged from 0.63 to 1.00. The summary sensitivity for the ADI-R was 0.52 (95% CI 0.32 to 0.71), and the summary specificity was 0.84 (95% CI 0.61 to 0.95).\nStudies that compared tests were few and too small to allow clear conclusions.\nIn two studies that included analyses for both ADI-R and ADOS, tests scored similarly for sensitivity, but ADOS scored higher for specificity. In two studies that included analyses for ADI-R, ADOS, and CARS, ADOS had the highest sensitivity and CARS the highest specificity.\nIn one study that explored individual and additive sensitivity and specificity of ADOS and ADI-R, combining the two tests did not increase the sensitivity or the specificity of ADOS used alone.\nPerformance for all tests was lower when we excluded studies at high risk of bias.\nAuthors' conclusions\nWe observed substantial variation in sensitivity and specificity of all tests, which was likely attributable to methodological differences and variations in the clinical characteristics of populations recruited.\nWhen we compared summary statistics for ADOS, CARS, and ADI-R, we found that ADOS was most sensitive. All tools performed similarly for specificity. In lower prevalence populations, the risk of falsely identifying children who do not have ASD would be higher.\nNow available are new versions of tools that require diagnostic test accuracy assessment, ideally in clinically relevant situations, with methods at low risk of bias and in children of varying abilities.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the Review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Six tests were reviewed: Four gathered information about children’s behaviours from interviews with parents or carers (Autism Diagnostic Interview-Revised (ADI-R), Gilliam Autism Rating Scale (GARS), Diagnostic Interview for Social and Communication Disorder (DISCO), and Developmental, Dimensional, and Diagnostic Interview (3di)); one required that a trained professional observe a child’s behaviour on specific tasks (Autism Diagnostic Observation Schedule (ADOS)); and one combined observation of the child with interview of parents or carers (Childhood Autism Rating Scale (CARS))."
}
] |
query-laysum
|
22080
|
[
{
"role": "user",
"content": "Abstract: Background\nApproximately one-fifth of all subfertile couples seeking fertility treatment show clinically relevant levels of anxiety, depression, or distress. Psychological and educational interventions are frequently offered to subfertile couples, but their effectiveness, both in improving mental health and pregnancy rates, is unclear.\nObjectives\nTo assess the effectiveness of psychological and educational interventions for subfertile couples on psychological and fertility treatment outcomes.\nSearch methods\nWe searched (from inception to 2 April 2015) the Cochrane Gynaecology and Fertility Group Specialised Register of Controlled Trials, the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 2, 2015), MEDLINE, EMBASE, PsycINFO, EBSCO CINAHL, DARE, Web of Science, OpenGrey, LILACS, PubMed, and ongoing trials registers. We handsearched reference lists and contacted experts in the field.\nSelection criteria\nWe included published and unpublished randomised controlled trials (RCTs), cluster randomised trials, and cross-over trials (first phase) evaluating the effectiveness of psychological and educational interventions on psychological and fertility treatment outcomes in subfertile couples.\nData collection and analysis\nTwo review authors independently assessed trial risk of bias and extracted data. We contacted study authors for additional information. Our primary outcomes were psychological measures (anxiety and depression) and fertility rates (live birth or ongoing pregnancy). We assessed the overall quality of the evidence using GRADE criteria.\nAs we did not consider the included studies to be sufficiently similar to permit meaningful pooling, we summarised the results of the individual studies by presenting the median and interquartile range (IQR) of effects as well as the minimum and maximum values. We calculated standardised mean differences (SMDs) for continuous variables and odds ratios (ORs) for dichotomous outcomes.\nMain results\nWe included 39 studies involving 4925 participants undergoing assisted reproductive technology. Studies were heterogeneous with respect to a number of factors, including nature and duration of interventions, participants, and comparator groups. As a result, we judged that pooling results would not result in a clinically meaningful estimate of a treatment effect. There were substantial methodological weaknesses in the studies, all of which were judged to be at high risk of bias for one or more quality assessment domains. There was concern about attrition bias (24 studies), performance bias for psychological outcomes (27 studies) and fertility outcomes (18 studies), and detection bias for psychological outcomes (26 studies). We therefore considered study-specific estimates of intervention effects to be unreliable. Thirty-three studies reported the outcome mental health. Only two studies reported the outcome live birth, and both of these had substantial attrition. One study reported ongoing pregnancy, again with substantial attrition. We have combined live birth and ongoing pregnancy in one outcome.\nPsychological outcomes\nStudies utilised a variety of measures of anxiety and depression. 
In all cases a low score denoted benefit from the intervention.\nSMDs for anxiety were as follows: psychological interventions versus attentional control or usual care: median (IQR) = -0.30 (-0.84 to 0.00), minimum value -5.13; maximum value 0.84, 17 RCTs, 2042 participants; educational interventions versus attentional control or usual care: median = 0.03, minimum value -0.38; maximum value 0.23, 4 RCTs, 330 participants.\nSMDs for depression were as follows: psychological interventions versus attentional control or usual care: median (IQR) = -0.45 (-0.68 to -0.08), minimum value -3.01; maximum value 1.23, 12 RCTs, 1160 participants; educational interventions versus attentional control or usual care: median = -0.33, minimum value -0.46; maximum value 0.17, 3 RCTs, 304 participants.\nFertility outcomes\nWhen psychological interventions were compared with attentional control or usual care, ORs for live birth or ongoing pregnancy ranged from minimum value 1.13 to maximum value 10.05. No studies of educational interventions reported this outcome.\nAuthors' conclusions\nThe effects of psychological and educational interventions on mental health including distress, and live birth or ongoing pregnancy rates is uncertain due to the very low quality of the evidence. Existing trials of psychological and educational interventions for subfertility were generally poorly designed and executed, resulting in very serious risk of bias and serious inconsistency in study findings. There is a need for studies employing appropriate methodological techniques to investigate the benefits of these treatments for this population. In particular, attentional control groups should be employed, that is groups receiving a treatment that mimics the amount of time and attention received by the treatment group but is not thought to have a specific effect upon the participants, in order to distinguish between therapeutic and non-specific effects of interventions. Where attrition cannot be minimised, appropriate statistical techniques for handling drop-out must be applied. Failure to address these issues in study design has resulted in studies that do not provide a valid basis for answering questions about the effectiveness of these interventions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
    {
      "role": "assistant",
      "content": "Approximately one-fifth of all subfertile couples seeking fertility treatment show clinically relevant levels of anxiety, depression, or distress. Psychological and educational interventions are frequently offered to subfertile couples, but their effectiveness, both in improving mental health and pregnancy rates, is unclear.\nObjective: To assess the effectiveness of psychological and educational interventions for subfertile couples on psychological and fertility treatment outcomes.\nStudy characteristics: We included 39 studies involving 4925 participants undergoing assisted reproductive technology. Studies varied widely with respect to a number of factors, including nature and duration of interventions, participants, and comparator groups. The evidence is current to April 2015.\nKey results: There were substantial methodological weaknesses in the studies, all of which were judged to be at high risk of bias for one or more quality assessment domains. We therefore determined that pooling results would not result in a clinically meaningful estimate of a treatment effect and that we could not present a pooled analysis in the 'Summary of findings' table. There was concern about bias because of differences in care and amount of attention given to participants for psychological outcomes (27 studies) and fertility outcomes (18 studies), the amount of withdrawal (24 studies), and the manner in which outcome measurements were taken (26 studies). We therefore considered the results from each study to be unreliable. Thirty-three studies reported the outcome mental health. Only two studies reported the outcome live birth, and one study reported ongoing pregnancy; all of these studies had substantial attrition. It was not possible to answer the review question for any of the primary outcomes.\nQuality of the evidence: We judged the overall quality of the evidence to be very low, the main reasons being very serious risk of bias and serious inconsistency in study findings."
}
] |
query-laysum
|
22081
|
[
{
"role": "user",
"content": "Abstract: Background\nThe most frequent indications for tooth extractions, generally performed by general dental practitioners, are dental caries and periodontal infections. Systemic antibiotics may be prescribed to patients undergoing extractions to prevent complications due to infection. This is an update of a review first published in 2012.\nObjectives\nTo determine the effect of systemic antibiotic prophylaxis on the prevention of infectious complications following tooth extractions.\nSearch methods\nCochrane Oral Health’s Information Specialist searched the following databases: Cochrane Oral Health Trials Register (to 16 April 2020), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library, 2020, Issue 3), MEDLINE Ovid (1946 to 16 April 2020), Embase Ovid (1980 to 16 April 2020), and LILACS (1982 to 16 April 2020). The US National Institutes of Health Trials Registry (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised, double-blind, placebo-controlled trials of systemic antibiotic prophylaxis in patients undergoing tooth extraction(s) for any indication.\nData collection and analysis\nAt least two review authors independently performed data extraction and 'Risk of bias' assessment for the included studies. We contacted trial authors for further details where these were unclear. For dichotomous outcomes, we calculated risk ratios (RR) and 95% confidence intervals (CI) using random-effects models. For continuous outcomes, we used mean differences (MD) with 95% CI using random-effects models. We examined potential sources of heterogeneity. We assessed the certainty of the body of evidence for key outcomes as high, moderate, low, or very low, using the GRADE approach.\nMain results\nWe included 23 trials that randomised approximately 3206 participants (2583 analysed) to prophylactic antibiotics or placebo. Although general dentists perform dental extractions because of severe dental caries or periodontal infection, only one of the trials evaluated the role of antibiotic prophylaxis in groups of patients affected by those clinical conditions.\nWe assessed 16 trials as being at high risk of bias, three at low risk, and four as unclear.\nCompared to placebo, antibiotics may reduce the risk of postsurgical infectious complications in patients undergoing third molar extractions by approximately 66% (RR 0.34, 95% CI 0.19 to 0.64; 1728 participants; 12 studies; low-certainty evidence), which means that 19 people (95% CI 15 to 34) need to be treated with antibiotics to prevent one infection following extraction of impacted wisdom teeth. 
Antibiotics may also reduce the risk of dry socket by 34% (RR 0.66, 95% CI 0.45 to 0.97; 1882 participants; 13 studies; low-certainty evidence), which means that 46 people (95% CI 29 to 62) need to take antibiotics to prevent one case of dry socket following extraction of impacted wisdom teeth.\nThe evidence for our other outcomes is uncertain: pain, whether measured dichotomously as presence or absence (RR 0.59, 95% CI 0.31 to 1.12; 675 participants; 3 studies) or continuously using a visual analogue scale (0-to-10-centimetre scale, where 0 is no pain) (MD −0.26, 95% CI −0.59 to 0.07; 422 participants; 4 studies); fever (RR 0.66, 95% CI 0.24 to 1.79; 475 participants; 4 studies); and adverse effects, which were mild and transient (RR 1.46, 95% CI 0.81 to 2.64; 1277 participants; 8 studies) (very low-certainty evidence).\nWe found no clear evidence that the timing of antibiotic administration (preoperative, postoperative, or both) was important.\nThe included studies enrolled a subset of patients undergoing dental extractions, that is healthy people who had surgical extraction of third molars. Consequently, the results of this review may not be generalisable to all people undergoing tooth extractions.\nAuthors' conclusions\nThe vast majority (21 out of 23) of the trials included in this review included only healthy patients undergoing extraction of impacted third molars, often performed by oral surgeons. None of the studies evaluated tooth extraction in immunocompromised patients. We found low-certainty evidence that prophylactic antibiotics may reduce the risk of infection and dry socket following third molar extraction when compared to placebo, and very low-certainty evidence of no increase in the risk of adverse effects. On average, treating 19 healthy patients with prophylactic antibiotics may stop one person from getting an infection. It is unclear whether the evidence in this review is generalisable to patients with concomitant illnesses or patients at a higher risk of infection. Due to the increasing prevalence of bacteria that are resistant to antibiotic treatment, clinicians should evaluate if and when to prescribe prophylactic antibiotic therapy before a dental extraction for each patient on the basis of the patient's clinical conditions (healthy or affected by systemic pathology) and level of risk from infective complications. Immunocompromised patients, in particular, need an individualised approach in consultation with their treating medical specialist.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We did not find studies in patients with depressed immune systems, other illnesses, or in young children or older patients, therefore the results of our review probably do not apply to people who may be at high risk of infection. Also, extractions were mainly carried out by oral surgeons, so the review may not apply to dentists working in general practice.\nAnother concern, which cannot be assessed by clinical studies (i.e. studies testing new medical approaches in people), is that widespread use of antibiotics by people who do not have an infection is likely to contribute to the development of antimicrobial resistance.\nWe concluded that antibiotics given to healthy people when they are having teeth extracted may help prevent infection, but the decision to use an antibiotic should be judged on an individual patient basis based on their state of health and possible complications of getting an infection."
}
] |
query-laysum
|
22082
|
[
{
"role": "user",
"content": "Abstract: Background\nEndometriosis is a condition characterised by the presence of ectopic deposits of endometrial-like tissue outside the uterus, usually in the pelvis. The impact of laparoscopic treatment on overall pain is uncertain and a significant proportion of women will require further surgery. Therefore, adjuvant medical therapies following surgery, such as the levonorgestrel-releasing intrauterine device (LNG-IUD), have been considered to reduce recurrence of symptoms.\nObjectives\nTo determine the effectiveness and safety of post-operative LNG-IUD in women with symptomatic endometriosis.\nSearch methods\nWe searched the following databases from inception to January 2021: The Specialised Register of the Cochrane Gynaecology and Fertility Group, CENTRAL (which now includes records from two trial registries), MEDLINE, Embase, PsycINFO, LILACS and Epistemonikos. We handsearched citation lists of relevant publications, review articles, abstracts of scientific meetings and included studies. We contacted experts in the field for information about any additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing women undergoing surgical treatment of endometriosis with uterine preservation who were assigned to LNG-IUD insertion, versus control conditions including expectant management, post-operative insertion of placebo (inert intrauterine device), or other medical treatment such as gonadotrophin-releasing hormone agonist (GnRH-a) drugs.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, and extracted data to allow for an intention-to-treat analysis. For dichotomous data, we calculated the risk ratio (RR) and 95% confidence interval (CI) using the Mantel-Haenszel fixed-effect method. For continuous data, we calculated the mean difference (MD) and 95% CI using the inverse variance fixed-effect method.\nMain results\nFour RCTs were included, with a total of 157 women. Two studies are ongoing. The GRADE certainty of evidence was very low to low. The certainty of evidence was graded down primarily for serious risk of bias and imprecision.\nLNG-IUD versus expectant management\nOverall pain: No studies reported on the primary outcome of overall pain.\nDysmenorrhoea: We are uncertain whether LNG-IUD improves dysmenorrhoea at 12 months. Data on this outcome were reported on by two RCTs; meta-analysis was not possible (RCT 1: delta of median visual analogue scale (VAS) 81 versus 50, P = 0.006, n = 55; RCT 2: fall in VAS by 50 (35 to 65) versus 30 (25 to 40), P = 0.021, n = 40; low-certainty evidence).\nQuality of life: We are uncertain whether LNG-IUD improves quality of life at 12 months. One trial demonstrated a change in total quality of life score with postoperative LNG-IUD from baseline (mean 61.2 (standard deviation (SD) 14.8) to 12 months (mean 70.3 (SD 16.2) compared to expectant management (baseline 55.1 (SD 17.0) to 57.0 (SD 33.2) at 12 months) (n = 55, P = 0.014, very low-certainty evidence).\nPatient satisfaction: Two studies found higher rates of satisfaction with LNG-IUD compared to expectant management; however, combining the studies in meta-analysis was not possible (n = 95, very low-certainty evidence). One study found 75% (15/20) of those given post-operative LNG-IUD were \"satisfied\" or \"very satisfied\", compared to 50% (10/20) of those in the expectant management group (RR 1.5, 95% CI 0.90-2.49, 1 RCT, n=40, very low-certainty evidence). 
The second study found that fewer were \"very satisfied\" in the expectant management group when compared to LNG, but there were no data to include in a meta-analysis.\nAdverse events: One study found a significantly higher proportion of women reporting melasma (n = 55, P = 0.015, very low-certainty evidence) and bloating (n = 55, P = 0.021, very low-certainty evidence) following post-operative LNG-IUD. There were no differences in other reported adverse events, such as weight gain, acne, and headaches.\nLNG-IUD versus GnRH-a\nOverall pain: No studies reported on the primary outcome of overall pain.\nChronic pelvic pain: We are uncertain whether LNG-IUD improves chronic pelvic pain at 12 months when compared to GnRH-a (VAS pain scale) (MD -2.0, 95% CI -20.2 to 16.2, 1 RCT, n = 40, very low-certainty evidence).\nDysmenorrhoea: We are uncertain whether LNG-IUD improves dysmenorrhoea at six months when compared to GnRH-a (measured as a reduction in VAS pain score) (MD 1.70, 95% CI -0.14 to 3.54, 1 RCT, n = 18, very low-certainty evidence).\nAdverse events: One study suggested that vasomotor symptoms were the most common adverse events reported with patients receiving GnRH-a, and irregular bleeding in those receiving LNG-IUD (n = 40, very low-certainty evidence).\nAuthors' conclusions\nPost-operative LNG-IUD is widely used to reduce endometriosis-related pain and to improve operative outcomes. This review demonstrates that there is no high-quality evidence to support this practice. This review highlights the need for further studies with large sample sizes to assess the effectiveness of post-operative adjuvant hormonal IUD on the core endometriosis outcomes (overall pain, most troublesome symptom, and quality of life).\n\nGiven the provided abstract, please respond to the following query: \"How is endometriosis treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Endometriosis is usually managed with hormonal medications, surgery, or a combination of both. The progestogen levonorgestrel is one such hormonal medication that is believed to stop the growth of endometrial tissue outside the womb."
}
] |
query-laysum
|
22083
|
[
{
"role": "user",
"content": "Abstract: Background\nNutritional labelling is advocated as a means to promote healthier food purchasing and consumption, including lower energy intake. Internationally, many different nutritional labelling schemes have been introduced. There is no consensus on whether such labelling is effective in promoting healthier behaviour.\nObjectives\nTo assess the impact of nutritional labelling for food and non-alcoholic drinks on purchasing and consumption of healthier items. Our secondary objective was to explore possible effect moderators of nutritional labelling on purchasing and consumption.\nSearch methods\nWe searched 13 electronic databases including CENTRAL, MEDLINE and Embase to 26 April 2017. We also handsearched references and citations and sought unpublished studies through websites and trials registries.\nSelection criteria\nEligible studies: were randomised or quasi-randomised controlled trials (RCTs/Q-RCTs), controlled before-and-after studies, or interrupted time series (ITS) studies; compared a labelled product (with information on nutrients or energy) with the same product without a nutritional label; assessed objectively measured purchasing or consumption of foods or non-alcoholic drinks in real-world or laboratory settings.\nData collection and analysis\nTwo authors independently selected studies for inclusion and extracted study data. We applied the Cochrane 'Risk of bias' tool and GRADE to assess the quality of evidence. We pooled studies that evaluated similar interventions and outcomes using a random-effects meta-analysis, and we synthesised data from other studies in a narrative summary.\nMain results\nWe included 28 studies, comprising 17 RCTs, 5 Q-RCTs and 6 ITS studies. Most (21/28) took place in the USA, and 19 took place in university settings, 14 of which mainly involved university students or staff. Most (20/28) studies assessed the impact of labelling on menus or menu boards, or nutritional labelling placed on, or adjacent to, a range of foods or drinks from which participants could choose. Eight studies provided participants with only one labelled food or drink option (in which labelling was present on a container or packaging, adjacent to the food or on a display board) and measured the amount consumed. The most frequently assessed labelling type was energy (i.e. calorie) information (12/28).\nEleven studies assessed the impact of nutritional labelling on purchasing food or drink options in real-world settings, including purchases from vending machines (one cluster-RCT), grocery stores (one ITS), or restaurants, cafeterias or coffee shops (three RCTs, one Q-RCT and five ITS). Findings on vending machines and grocery stores were not interpretable, and were rated as very low quality. A meta-analysis of the three RCTs, all of which assessed energy labelling on menus in restaurants, demonstrated a statistically significant reduction of 47 kcal in energy purchased (MD −46.72 kcal, 95% CI −78.35, −15.10, N = 1877). Assuming an average meal of 600 kcal, energy labelling on menus would reduce energy purchased per meal by 7.8% (95% CI 2.5% to 13.1%). The quality of the evidence for these three studies was rated as low, so our confidence in the effect estimate is limited and may change with further studies. Of the remaining six studies, only two (both ITS studies involving energy labels on menus or menus boards in a coffee shop or cafeteria) were at low risk of bias, and their results support the meta-analysis. 
The results of the other four studies, which were conducted in a restaurant, cafeterias (2 studies) or a coffee shop, were not clearly reported and were at high risk of bias.\nSeventeen studies assessed the impact of nutritional labels on consumption in artificial settings or scenarios (henceforth referred to as laboratory studies or settings). Of these, eight (all RCTs) assessed the effect of labels on menus or placed on a range of food options. A meta-analysis of these studies did not conclusively demonstrate a reduction in energy consumed during a meal (MD −50 kcal, 95% CI −104.41, 3.88, N = 1705). We rated the quality of the evidence as low, so our confidence in the effect estimate is limited and may change with further studies.\nSix laboratory studies (four RCTs and two Q-RCTs) assessed the impact of labelling a single food or drink option (such as chocolate, pasta or soft drinks) on energy consumed during a snack or meal. A meta-analysis of these studies did not demonstrate a statistically significant difference in energy (kcal) consumed (SMD 0.05, 95% CI −0.17 to 0.27, N = 732). However, the confidence intervals were wide, suggesting uncertainty in the true effect size. We rated the quality of the evidence as low, so our confidence in the effect estimate is limited and may change with further studies.\nThere was no evidence that nutritional labelling had the unintended harm of increasing energy purchased or consumed. Indirect evidence came from five laboratory studies that involved mislabelling single nutrient content (i.e. placing low energy or low fat labels on high-energy foods) during a snack or meal. A meta-analysis of these studies did not demonstrate a statistically significant increase in energy (kcal) consumed (SMD 0.19, 95% CI −0.14 to 0.51, N = 718). The effect was small and the confidence intervals wide, suggesting uncertainty in the true effect size. We rated the quality of the evidence from these studies as very low, providing very little confidence in the effect estimate.\nAuthors' conclusions\nFindings from a small body of low-quality evidence suggest that nutritional labelling comprising energy information on menus may reduce energy purchased in restaurants. The evidence assessing the impact on consumption of energy information on menus or on a range of food options in laboratory settings suggests a similar effect to that observed for purchasing, although the evidence is less definite and also of low quality.\nAccordingly, and in the absence of observed harms, we tentatively suggest that nutritional labelling on menus in restaurants could be used as part of a wider set of measures to tackle obesity. Additional high-quality research in real-world settings is needed to enable more certain conclusions.\nFurther high-quality research is also needed to address the dearth of evidence from grocery stores and vending machines and to assess potential moderators of the intervention effect, including socioeconomic status.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Nutritional labelling on restaurant menus reduced the amount of energy (i.e. calories) purchased, but the quality of the three studies that contributed to this finding was low, so our confidence in the effect estimate is limited and may change with further studies. Eight studies assessed this same type of intervention in laboratory settings, but instead of evaluating how much energy participants purchased, these studies evaluated how much energy participants consumed. These studies did not conclusively demonstrate a reduction in energy consumed when menus or foods were labelled, and they were also of low quality.\nIn addition, six laboratory studies assessed how much energy participants consumed when they were given one food or drink option with or without labels, and five laboratory studies assessed how much energy participants consumed when foods were experimentally labelled as low energy or low fat when they were actually high-energy foods (i.e. mislabelling). Results from these two groups of studies were inconclusive and of low, or in the case of mislabelling studies, very low quality. We found some studies that assessed labelling on vending machines and grocery stores, but their results were not easy to interpret, so we could not use them to inform this review."
}
] |
query-laysum
|
22084
|
[
{
"role": "user",
"content": "Abstract: Background\nWith potential antiviral and anti-inflammatory properties, Janus kinase (JAK) inhibitors represent a potential treatment for symptomatic severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. They may modulate the exuberant immune response to SARS-CoV-2 infection. Furthermore, a direct antiviral effect has been described. An understanding of the current evidence regarding the efficacy and safety of JAK inhibitors as a treatment for coronavirus disease 2019 (COVID-19) is required.\nObjectives\nTo assess the effects of systemic JAK inhibitors plus standard of care compared to standard of care alone (plus/minus placebo) on clinical outcomes in individuals (outpatient or in-hospital) with any severity of COVID-19, and to maintain the currency of the evidence using a living systematic review approach.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (comprising MEDLINE, Embase, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform, medRxiv, and Cochrane Central Register of Controlled Trials), Web of Science, WHO COVID-19 Global literature on coronavirus disease, and the US Department of Veterans Affairs Evidence Synthesis Program (VA ESP) Covid-19 Evidence Reviews to identify studies up to February 2022. We monitor newly published randomised controlled trials (RCTs) weekly using the Cochrane COVID-19 Study Register, and have incorporated all new trials from this source until the first week of April 2022.\nSelection criteria\nWe included RCTs that compared systemic JAK inhibitors plus standard of care to standard of care alone (plus/minus placebo) for the treatment of individuals with COVID-19. We used the WHO definitions of illness severity for COVID-19.\nData collection and analysis\nWe assessed risk of bias of primary outcomes using Cochrane's Risk of Bias 2 (RoB 2) tool. We used GRADE to rate the certainty of evidence for the following primary outcomes: all-cause mortality (up to day 28), all-cause mortality (up to day 60), improvement in clinical status: alive and without need for in-hospital medical care (up to day 28), worsening of clinical status: new need for invasive mechanical ventilation or death (up to day 28), adverse events (any grade), serious adverse events, secondary infections.\nMain results\nWe included six RCTs with 11,145 participants investigating systemic JAK inhibitors plus standard of care compared to standard of care alone (plus/minus placebo). Standard of care followed local protocols and included the application of glucocorticoids (five studies reported their use in a range of 70% to 95% of their participants; one study restricted glucocorticoid use to non-COVID-19 specific indications), antibiotic agents, anticoagulants, and antiviral agents, as well as non-pharmaceutical procedures. At study entry, about 65% of participants required low-flow oxygen, about 23% required high-flow oxygen or non-invasive ventilation, about 8% did not need any respiratory support, and only about 4% were intubated. 
We also identified 13 ongoing studies, and 9 studies that are completed or terminated and where classification is pending.\nIndividuals with moderate to severe disease\nFour studies investigated the single agent baricitinib (10,815 participants), one tofacitinib (289 participants), and one ruxolitinib (41 participants).\nSystemic JAK inhibitors probably decrease all-cause mortality at up to day 28 (95 of 1000 participants in the intervention group versus 131 of 1000 participants in the control group; risk ratio (RR) 0.72, 95% confidence interval (CI) 0.57 to 0.91; 6 studies, 11,145 participants; moderate-certainty evidence), and decrease all-cause mortality at up to day 60 (125 of 1000 participants in the intervention group versus 181 of 1000 participants in the control group; RR 0.69, 95% CI 0.56 to 0.86; 2 studies, 1626 participants; high-certainty evidence).\nSystemic JAK inhibitors probably make little or no difference in improvement in clinical status (discharged alive or hospitalised, but no longer requiring ongoing medical care) (801 of 1000 participants in the intervention group versus 778 of 1000 participants in the control group; RR 1.03, 95% CI 1.00 to 1.06; 4 studies, 10,802 participants; moderate-certainty evidence). They probably decrease the risk of worsening of clinical status (new need for invasive mechanical ventilation or death at day 28) (154 of 1000 participants in the intervention group versus 172 of 1000 participants in the control group; RR 0.90, 95% CI 0.82 to 0.98; 2 studies, 9417 participants; moderate-certainty evidence).\nSystemic JAK inhibitors probably make little or no difference in the rate of adverse events (any grade) (427 of 1000 participants in the intervention group versus 441 of 1000 participants in the control group; RR 0.97, 95% CI 0.88 to 1.08; 3 studies, 1885 participants; moderate-certainty evidence), and probably decrease the occurrence of serious adverse events (160 of 1000 participants in the intervention group versus 202 of 1000 participants in the control group; RR 0.79, 95% CI 0.68 to 0.92; 4 studies, 2901 participants; moderate-certainty evidence). JAK inhibitors may make little or no difference to the rate of secondary infection (111 of 1000 participants in the intervention group versus 113 of 1000 participants in the control group; RR 0.98, 95% CI 0.89 to 1.09; 4 studies, 10,041 participants; low-certainty evidence). Subgroup analysis by severity of COVID-19 disease or type of JAK inhibitor did not identify specific subgroups which benefit more or less from systemic JAK inhibitors.\nIndividuals with asymptomatic or mild disease\nWe did not identify any trial for this population.\nAuthors' conclusions\nIn hospitalised individuals with moderate to severe COVID-19, moderate-certainty evidence shows that systemic JAK inhibitors probably decrease all-cause mortality. Baricitinib was the most often evaluated JAK inhibitor. Moderate-certainty evidence suggests that they probably make little or no difference in improvement in clinical status. Moderate-certainty evidence indicates that systemic JAK inhibitors probably decrease the risk of worsening of clinical status and make little or no difference in the rate of adverse events of any grade, whilst they probably decrease the occurrence of serious adverse events. Based on low-certainty evidence, JAK inhibitors may make little or no difference in the rate of secondary infection. 
Subgroup analysis by severity of COVID-19 or type of agent failed to identify specific subgroups which benefit more or less from systemic JAK inhibitors.\nCurrently, there is no evidence on the efficacy and safety of systemic JAK inhibitors for individuals with asymptomatic or mild disease (non-hospitalised individuals).\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
    {
      "role": "assistant",
      "content": "Systemic JAK inhibitors plus usual care compared to usual care alone probably lead to fewer deaths from all causes up to day 28, and lead to fewer deaths from all causes up to day 60. It was not possible to identify subgroups of participants (according to severity of illness) who might benefit most from treatment with JAK inhibitors. Systemic JAK inhibitors probably make little or no difference in improvement in clinical status. They probably decrease the risk of worsening of clinical status. Systemic JAK inhibitors probably make little or no difference in the rate of unwanted effects, and probably slightly decrease the occurrence of serious unwanted effects. Systemic JAK inhibitors may result in little or no difference in the rate of infections acquired in hospital."
}
] |
query-laysum
|
22085
|
[
{
"role": "user",
"content": "Abstract: Background\nDampness and mould in buildings have been associated with adverse respiratory symptoms, asthma and respiratory infections of inhabitants. Moisture damage is a very common problem in private houses, workplaces and public buildings such as schools.\nObjectives\nTo determine the effectiveness of repairing buildings damaged by dampness and mould in order to reduce or prevent respiratory tract symptoms, infections and symptoms of asthma.\nSearch methods\nWe searched CENTRAL (2014, Issue 10), MEDLINE (1951 to November week 1, 2014), EMBASE (1974 to November 2014), CINAHL (1982 to November 2014), Science Citation Index (1973 to November 2014), Biosis Previews (1989 to June 2011), NIOSHTIC (1930 to March 2014) and CISDOC (1974 to March 2014).\nSelection criteria\nRandomised controlled trials (RCTs), cluster-RCTs (cRCTs), interrupted time series studies and controlled before-after (CBA) studies of the effects of remediating dampness and mould in a building on respiratory symptoms, infections and asthma.\nData collection and analysis\nTwo authors independently extracted data and assessed the risk of bias in the included studies.\nMain results\nWe included 12 studies (8028 participants): two RCTs (294 participants), one cRCT (4407 participants) and nine CBA studies (3327 participants). The interventions varied from thorough renovation to cleaning only.\nRepairing houses decreased asthma-related symptoms in adults (among others, wheezing (odds ratio (OR) 0.64; 95% confidence interval (CI) 0.55 to 0.75) and respiratory infections (among others, rhinitis (OR 0.57; 95% CI 0.49 to 0.66), two studies, moderate-quality evidence). For children, we did not find a difference between repaired houses and receiving information only, in the number of asthma days or emergency department visits because of asthma (one study, moderate-quality evidence).\nOne CBA study showed very low-quality evidence that after repairing a mould-damaged office building, asthma-related and other respiratory symptoms decreased. In another CBA study, there was no difference in symptoms between full or partial repair of houses.\nFor children in schools, the evidence of an effect of mould remediation on respiratory symptoms was inconsistent and out of many symptom measures only respiratory infections might have decreased after the intervention. For staff in schools, there was very low-quality evidence that asthma-related and other respiratory symptoms in mould-damaged schools were similar to those of staff in non-damaged schools, both before and after intervention.\nAuthors' conclusions\nWe found moderate to very low-quality evidence that repairing mould-damaged houses and offices decreases asthma-related symptoms and respiratory infections compared to no intervention in adults. There is very low-quality evidence that although repairing schools did not significantly change respiratory symptoms in staff, pupils' visits to physicians due to a common cold were less frequent after remediation of the school. Better research, preferably with a cRCT design and with more validated outcome measures, is needed.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 12 studies with 8028 participants. Three were randomised controlled trials (RCTs) and nine were non-RCTs with a control group. The repairs aimed to remove mould and dampness from family houses, schools or, in one study, an office building."
}
] |
query-laysum
|
22086
|
[
{
"role": "user",
"content": "Abstract: Background\nHeart failure is the end stage of heart disease, and the prevalence and incidence of the condition is rapidly increasing. Although heart transplantation is one type of surgical treatment for people with end-stage heart failure, donor availability is limited. Implantable ventricular assist devices (VADs) therefore offer an alternative treatment to heart transplantation. Although two studies reported the beneficial effects of exercise-based cardiac rehabilitation (CR) on functional capacity and quality of life (QOL) by performing systematic reviews and meta-analyses, both systematic reviews included studies with limited design (e.g. non-randomised, retrospective studies) or participants with implantable or extracorporeal VADs.\nObjectives\nTo determine the benefits and harms of exercise-based CR for people with implantable VADs.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE, Embase, PsycINFO, Conference Proceedings Citation Index-Science (CPCI-S) on Web of Science, CINAHL, and LILACS on 3 October 2017 with no limitations on date, language, or publication status. We also searched two clinical trials registers on 10 August 2017 and checked the reference lists of primary studies and review articles.\nSelection criteria\nRandomised controlled trials (RCTs) regardless of cluster or individual randomisation, and full-text studies, those published as abstract only, and unpublished data were eligible. However, only individually RCTs and full-text publications were included.\nData collection and analysis\nTwo review authors independently extracted outcome data from the included studies. We double-checked that data were entered correctly by comparing the data presented in the systematic review with the study reports. We had no dichotomous data to analyse and used mean difference or standardised mean difference with 95% confidence intervals (CIs) for continuous data. Furthermore, we assessed the quality of evidence as it relates to those studies that contribute data to the meta-analyses for the prespecified outcomes, using GRADEpro software.\nMain results\nWe included two studies with a total of 40 participants in the review. Exercise-based CR consisted of aerobic or resistance training or both three times per week for six to eight weeks. Exercise intensity was 50% of oxygen consumption (VO2) reserve, or ranged from 60% to 80% of heart rate reserve. Two serious adverse events were observed in one trial, in which participants did not complete the study due to infections. Furthermore, a total of four participants in each group required visits to the emergency department, although these participants did complete the study. Summary scores from the 36-item Short Form Health Survey (SF-36) and the Kansas City Cardiomyopathy Questionnaire (KCCQ) were measured as quality of life. One trial reported that the KCCQ summary score improved by 14.4 points in the exercise group compared with 0.5 points in the usual care group. The other trial reported that the SF-36 total score improved by 29.2 points in the exercise group compared with 16.3 points in the usual care group. A large difference in quality of life was observed between groups at the end of follow-up (standardised mean difference 0.88, 95% CI -0.12 to 1.88; 37 participants; 2 studies; very low-quality of evidence). 
However, there was no evidence for the effectiveness of exercise-based CR due to the young age of the participants, high risk of performance bias, very small sample size, and wide confidence intervals, which resulted in very low-quality evidence. Furthermore, we were not able to determine the effect of exercise-based CR on mortality, rehospitalisation, heart transplantation, and cost, as these outcomes were not reported.\nAuthors' conclusions\nThe evidence is currently inadequate to assess the safety and efficacy of exercise-based CR for people with implantable VADs compared with usual care. The amount of RCT evidence was very limited and of very low quality. In addition, the training duration was very short term, that is from six to eight weeks. Further high-quality and well-reported RCTs of exercise-based CR for people with implantable VADs are needed. Such trials need to collect data on events (mortality and rehospitalisation), patient-related outcomes (including quality of life), and cost-effectiveness.\n\nGiven the provided abstract, please respond to the following query: \"Conclusion\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The current evidence is inadequate to assess the benefits and harms of exercise-based CR for people with implantable VADs compared with usual care. The amount of randomised controlled trial evidence was very limited and of very low quality. In addition, the training duration was very short term. High-quality randomised controlled trials are needed to collect data on events (death and rehospitalisation), patient-related outcomes (including quality of life), and cost."
}
] |
query-laysum
|
22087
|
[
{
"role": "user",
"content": "Abstract: Background\nAnovulation is a common cause of infertility. Drugs used to treat anovulation include selective oestrogen receptor modulators, aromatase inhibitors and gonadotrophins. Ovulation triggers are used with these drugs, as a surrogate for the hormonal surge seen in spontaneous menstrual cycles, to control the timing of ovulation and the timing of sexual intercourse. Ovulation triggers given without reliable evidence of oocyte maturity could be inappropriately timed; they increase costs, and the need to time intercourse precisely after the ovulation trigger is given adds to psychological stress.\nThis is an update of a Cochrane review first published in Issue 3, 2008, of the Cochrane Database of Systematic Reviews.\nObjectives\nTo determine the benefits and harms of administering an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents in comparison with spontaneous ovulation following ovulation induction.\nSearch methods\nWe updated searches of the Menstrual Disorders and Subfertility Group (MDSG) Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and PsycINFO to November 2013. We checked conference proceedings, trial registries and reference lists and contacted researchers.\nSelection criteria\nParallel-group, randomised, controlled trials (RCTs) evaluating the administration of an ovulation trigger to anovulatory women receiving treatment with ovulation-inducing agents.\nData collection and analysis\nWe independently assessed trial eligibility and trial quality and extracted data. We calculated odds ratios (ORs) with 95% confidence intervals (CIs) for dichotomous data and used the random-effects model in meta-analyses when significant heterogeneity was present. We assessed overall quality of the evidence by using the GRADE approach.\nMain results\nNo new trials were identified. This review includes two RCTs with low risk of bias that compared urinary human chorionic gonadotrophin (hCG) versus no treatment in anovulatory women receiving clomiphene citrate. Urinary hCG did not result in an increase in live birth rate over no hCG (OR 0.97, 95% CI 0.52 to 1.83; two trials, 305 participants, I2 = 16%; low-quality evidence), but very serious imprecision around the effect estimate reduces our confidence in the apparent lack of effect of hCG as an ovulation trigger in clomiphene-induced cycles in anovulatory women.\nAmong this review's secondary outcomes, urinary hCG may not increase ovulation rate (OR 0.99, 95% CI 0.36 to 2.77; two trials, 305 participants, I2 = 55%; low-quality evidence), clinical pregnancy rate (OR 1.02, 95% CI 0.56 to 1.89; two trials, 305 participants, I2 = 35%; low-quality evidence) or miscarriage rate in pregnant women (OR 1.19, 95% CI 0.17 to 8.23; two trials, 54 participants, I2 = 0%; low-quality evidence). Multiple pregnancies and preterm deliveries were uncommon, and ovarian hyperstimulation syndrome, adverse events and deaths were not reported as outcomes in either trial.\n\nWe found no trials evaluating other ovulation triggers.\nAuthors' conclusions\nEvidence is inadequate to recommend or refute the use of urinary hCG as an ovulation trigger in anovulatory women treated with clomiphene citrate. We found no trials evaluating the use of ovulation triggers in anovulatory women treated with other ovulation-inducing agents.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Giving women on clomiphene citrate additional urinary hCG may not increase their chances of delivering live babies, ovulating or becoming pregnant. Multiple pregnancies, miscarriages and preterm deliveries were not more common with or without an ovulation trigger. No serious adverse events were reported in either study."
}
] | query-laysum | 22088 | [
{
"role": "user",
"content": "Abstract: Background\nHerpes simplex labialis (HSL), also known as cold sores, is a common disease of the lips caused by the herpes simplex virus, which is found throughout the world. It presents as a painful vesicular eruption, forming unsightly crusts, which cause cosmetic disfigurement and psychosocial distress. There is no cure available, and it recurs periodically.\nObjectives\nTo assess the effects of interventions for the prevention of HSL in people of all ages.\nSearch methods\nWe searched the following databases up to 19 May 2015: the Cochrane Skin Group Specialised Register, the Oral Health Group Specialised Register, CENTRAL in the Cochrane Library (Issue 4, 2015), MEDLINE (from 1946), EMBASE (from 1974), LILACS (from 1982), the China National Knowledge Infrastructure (CNKI) database, Airiti Library, and 5 trial registers. To identify further references to relevant randomised controlled trials, we scanned the bibliographies of included studies and published reviews, and we also contacted the original researchers of our included studies.\nSelection criteria\nRandomised controlled trials (RCTs) of interventions for preventing HSL in immunocompetent people.\nData collection and analysis\nTwo authors independently selected trials, extracted data, and assessed the risk of bias. A third author was available for resolving differences of opinion.\nMain results\nThis review included 32 RCTs, with a total of 2640 immunocompetent participants, covering 19 treatments. The quality of the body of evidence was low to moderate for most outcomes, but was very low for a few outcomes. Our primary outcomes were 'Incidence of HSL' and 'Adverse effects during use of the preventative intervention'.\nThe evidence for short-term (≤ 1 month) use of oral aciclovir in preventing recurrent HSL was inconsistent across the doses used in the studies: 2 RCTs showed low quality evidence for a reduced recurrence of HSL with aciclovir 400 mg twice daily (risk ratio (RR) 0.26, 95% confidence interval (CI) 0.13 to 0.51; n = 177), while 1 RCT testing aciclovir 800 mg twice daily and 2 RCTs testing 200 mg 5 times daily found no similar preventive effects (RR 1.08, 95% CI 0.62 to 1.87; n = 237; moderate quality evidence and RR 0.46, 95% CI 0.20 to 1.07; n = 66; low quality evidence, respectively). The direction of intervention effect was unrelated to the risk of bias. The evidence from 1 RCT for the effect of short-term use of valaciclovir in reducing recurrence of HSL by clinical evaluation was uncertain (RR 0.55, 95% CI 0.23 to 1.28; n = 125; moderate quality evidence), as was the evidence from 1 RCT testing short-term use of famciclovir.\nLong-term (> 1 month) use of oral antiviral agents reduced the recurrence of HSL. There was low quality evidence from 1 RCT that long-term use of oral aciclovir reduced clinical recurrences (1.80 versus 0.85 episodes per participant per a 4-month period, P = 0.009) and virological recurrence (1.40 versus 0.40 episodes per participant per a 4-month period, P = 0.003). One RCT found long-term use of valaciclovir effective in reducing the incidence of HSL (with a decrease of 0.09 episodes per participant per month; n = 95). 
One RCT found that a long-term suppressive regimen of valaciclovir had a lower incidence of HSL than an episodic regimen of valciclovir (difference in means (MD) -0.10 episodes per participant per month, 95% CI -0.16 to -0.05; n = 120).\nThese trials found no increase in adverse events associated with the use of oral antiviral agents (moderate quality evidence).\nThere was no evidence to show that short-term use of topical antiviral agents prevented recurrent HSL. There was moderate quality evidence from 2 RCTs that topical aciclovir 5% cream probably has little effect on preventing recurrence of HSL (pooled RR 0.91, 95% CI 0.48 to 1.72; n = 271). There was moderate quality evidence from a single RCT that topical foscarnet 3% cream has little effect in preventing HSL (RR 1.08, 95% CI 0.82 to 1.40; n = 295).\nThe efficacy of long-term use of topical aciclovir cream was uncertain. One RCT found significantly fewer research-diagnosed recurrences of HSL when on aciclovir cream treatment than on placebo (P < 0.05), but found no significant differences in the mean number of participant-reported recurrences between the 2 groups (P ≥ 0.05). One RCT found no preventive effect of topical application of 1,5-pentanediol gel for 26 weeks (P > 0.05). Another RCT found that the group who used 2-hydroxypropyl-β-cyclo dextrin 20% gel for 6 months had significantly more recurrences than the placebo group (P = 0.003).\nThese studies found no increase in adverse events related to the use of topical antiviral agents.\nTwo RCTs found that the application of sunscreen significantly prevented recurrent HSL induced by experimental ultraviolet light (pooled RR 0.07, 95% CI 0.01 to 0.33; n = 111), but another RCT found that sunscreen did not prevent HSL induced by sunlight (RR 1.13, 95% CI 0.25 to 5.06; n = 51). These RCTs did not report adverse events.\nThere were very few data suggesting that thymopentin, low-level laser therapy, and hypnotherapy are effective in preventing recurrent HSL, with one to two RCTs for each intervention. We failed to find any evidence of efficacy for lysine, LongoVital® supplementation, gamma globulin, herpes simplex virus (HSV) type I subunit vaccine, and yellow fever vaccine in preventing HSL. There were no consistent data supporting the efficacy of levamisole and interferon, which were also associated with an increased risk of adverse effects such as fever.\nAuthors' conclusions\nThe current evidence demonstrates that long-term use of oral antiviral agents can prevent HSL, but the clinical benefit is small. We did not find evidence of an increased risk of adverse events. On the other hand, the evidence on topical antiviral agents and other interventions either showed no efficacy or could not confirm their efficacy in preventing HSL.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was low to moderate for most outcomes, but was very low for some outcomes."
}
] | query-laysum | 22089 | [
{
"role": "user",
"content": "Abstract: Background\nChronic suppurative otitis media (CSOM) is a chronic inflammation and infection of the middle ear and mastoid cavity, characterised by ear discharge (otorrhoea) through a perforated tympanic membrane. The predominant symptoms of CSOM are ear discharge and hearing loss.\nSystemic antibiotics are a commonly used treatment option for CSOM, which act to kill or inhibit the growth of micro-organisms that may be responsible for the infection. Antibiotics can be used alone or in addition to other treatments for CSOM.\nObjectives\nTo assess the effects of systemic antibiotics for people with CSOM.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Register; Central Register of Controlled Trials (CENTRAL via the Cochrane Register of Studies); Ovid MEDLINE; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 16 March 2020.\nSelection criteria\nWe included randomised controlled trials comparing systemic antibiotics (oral, injection) against placebo/no treatment or other systemic antibiotics with at least a one-week follow-up period, involving patients with chronic (at least two weeks) ear discharge of unknown cause or due to CSOM. Other treatments were allowed if both treatment and control arms also received it.\nData collection and analysis\nWe used the standard Cochrane methodological procedures. We used GRADE to assess the certainty of the evidence for each outcome.\nOur primary outcomes were: resolution of ear discharge or 'dry ear' (whether otoscopically confirmed or not, measured at between one week and up to two weeks, two weeks to up to four weeks, and after four weeks); health-related quality of life using a validated instrument; ear pain (otalgia) or discomfort or local irritation. Secondary outcomes included hearing, serious complications and ototoxicity measured in several ways.\nMain results\nWe included 18 studies (2135 participants) with unclear or high risk of bias.\n1. Systemic antibiotics versus no treatment/placebo\nIt is very uncertain if there is a difference between systemic (intravenous) antibiotics and placebo in the resolution of ear discharge at between one and two weeks (risk ratio (RR) 8.47, 95% confidence interval (CI) 1.88 to 38.21; 33 participants; 1 study; very low-certainty evidence). The study did not report results for resolution of ear discharge after two weeks. Health-related quality of life was not reported. The evidence is very uncertain for hearing and serious (intracranial) complications. Ear pain and suspected ototoxicity were not reported.\n2. Systemic antibiotics versus no treatment/placebo (both study arms received topical antibiotics)\nSix studies were included of which five presented useable data. There may be little or no difference in the resolution of ear discharge at between one to two weeks for oral ciprofloxacin compared to placebo or no treatment when ciprofloxacin ear drops were used in both intervention arms (RR 1.02, 95% CI 0.93 to 1.12; 390 participants; low-certainty evidence). No results after two weeks were reported. Health-related quality of life was not reported. The evidence is very uncertain for ear pain, serious complications and suspected ototoxicity.\n3. Systemic antibiotics versus no treatment/placebo (both study arms received other background treatments)\nTwo studies used topical antibiotics plus steroids as background treatment in both arms. 
It is very uncertain if there is a difference in resolution of ear discharge between metronidazole and placebo at four weeks (RR 0.91, 95% CI 0.51 to 1.65; 40 participants; 1 study; very low-certainty evidence). This study did not report other outcomes. It is also very uncertain if resolution of ear discharge at six weeks was improved with co-trimoxazole compared to placebo (RR 1.54, 95% CI 1.09 to 2.16; 98 participants; 1 study; very low-certainty evidence). Resolution of ear discharge was not reported at other time points. From the narrative report there was no evidence of a difference between groups for health-related quality of life, hearing or serious complications (very low-certainty evidence).\nOne study (136 participants) used topical antiseptics as background treatment in both arms and found similar resolution of ear discharge between the amoxicillin and no treatment groups at three to four months (RR 1.03, 95% CI 0.75 to 1.41; 136 participants; 1 study; very low-certainty evidence). The narrative report indicated no evidence of differences in hearing or suspected ototoxicity (both very low-certainty evidence). No other outcomes were reported.\n4. Different types of systemic antibiotics\nThis is a summary of four comparisons, where different antibiotics were compared to each other. Eight studies compared different types of systemic antibiotics against each other: quinolones against beta-lactams (four studies), lincosamides against nitroimidazoles (one study) and comparisons of different types of beta-lactams (three studies). It was not possible to conclude if there was one class or type of systemic antibiotic that was better in terms of resolution of ear discharge. The studies did not report adverse events well.\nAuthors' conclusions\nThere was a limited amount of evidence available to examine whether systemic antibiotics are effective in achieving resolution of ear discharge for people with CSOM. When used alone (with or without aural toileting), we are very uncertain if systemic antibiotics are more effective than placebo or no treatment. When added to an effective intervention such as topical antibiotics, there seems to be little or no difference in resolution of ear discharge (low-certainty evidence). Data were only available for certain classes of antibiotics and it is very uncertain whether one class of systemic antibiotic may be more effective than another. Adverse effects of systemic antibiotics were poorly reported in the studies included. As we found very sparse evidence for their efficacy, the possibility of adverse events may detract from their use for CSOM.\n\nGiven the provided abstract, please respond to the following query: \"How-up-to date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence in this Cochrane Review is current to March 2020."
}
] | query-laysum | 22090 | [
{
"role": "user",
"content": "Abstract: Background\nAcute respiratory infections (ARIs) are by far the most common reason for prescribing an antibiotic in primary care, even though the majority of ARIs are of viral or non-severe bacterial aetiology. It follows that in many cases antibiotic use will not be beneficial to a patient's recovery but may expose them to potential side effects. Furthermore, limiting unnecessary antibiotic use is a key factor in controlling antibiotic resistance. One strategy to reduce antibiotic use in primary care is point-of-care biomarkers. A point-of-care biomarker (test) of inflammation identifies part of the acute phase response to tissue injury regardless of the aetiology (infection, trauma, or inflammation) and may be used as a surrogate marker of infection, potentially assisting the physician in the clinical decision whether to use an antibiotic to treat ARIs. Biomarkers may guide antibiotic prescription by ruling out a serious bacterial infection and help identify patients in whom no benefit from antibiotic treatment can be anticipated. This is an update of a Cochrane Review first published in 2014.\nObjectives\nTo assess the benefits and harms of point-of-care biomarker tests of inflammation to guide antibiotic treatment in people presenting with symptoms of acute respiratory infections in primary care settings regardless of patient age.\nSearch methods\nWe searched CENTRAL (2022, Issue 6), MEDLINE (1946 to 14 June 2022), Embase (1974 to 14 June 2022), CINAHL (1981 to 14 June 2022), Web of Science (1955 to 14 June 2022), and LILACS (1982 to 14 June 2022). We also searched three trial registries (10 December 2021) for completed and ongoing trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) in primary care patients with ARIs that compared the use of point-of-care biomarkers with standard care. We included trials that randomised individual participants, as well as trials that randomised clusters of patients (cluster-RCTs).\nData collection and analysis\nTwo review authors independently extracted data on the following primary outcomes: number of participants given an antibiotic prescription at index consultation and within 28 days follow-up; participant recovery within seven days follow-up; and total mortality within 28 days follow-up. We assessed risk of bias using the Cochrane risk of bias tool and the certainty of the evidence using GRADE. We used random-effects meta-analyses when feasible. We further analysed results with considerable heterogeneity in prespecified subgroups of individual and cluster-RCTs.\nMain results\nWe included seven new trials in this update, for a total of 13 included trials. Twelve trials (10,218 participants in total, 2335 of which were children) evaluated a C-reactive protein point-of-care test, and one trial (317 adult participants) evaluated a procalcitonin point-of-care test. The studies were conducted in Europe, Russia, and Asia. Overall, the included trials had a low or unclear risk of bias. 
However all studies were open-labelled, thereby introducing high risk of bias due to lack of blinding.\nThe use of C-reactive protein point-of-care tests to guide antibiotic prescription likely reduces the number of participants given an antibiotic prescription, from 516 prescriptions of antibiotics per 1000 participants in the control group to 397 prescriptions of antibiotics per 1000 participants in the intervention group (risk ratio (RR) 0.77, 95% confidence interval (CI) 0.69 to 0.86; 12 trials, 10,218 participants; I² = 79%; moderate-certainty evidence).\nOverall, use of C-reactive protein tests also reduce the number of participants given an antibiotic prescription within 28 days follow-up (664 prescriptions of antibiotics per 1000 participants in the control group versus 538 prescriptions of antibiotics per 1000 participants in the intervention group) (RR 0.81, 95% CI 0.76 to 0.86; 7 trials, 5091 participants; I² = 29; high-certainty evidence).\nThe prescription of antibiotics as guided by C-reactive protein tests likely does not reduce the number of participants recovered, within seven or 28 days follow-up (567 participants recovered within seven days follow-up per 1000 participants in the control group versus 584 participants recovered within seven days follow-up per 1000 participants in the intervention group) (recovery within seven days follow-up: RR 1.03, 95% CI 0.96 to 1.12; I² = 0%; moderate-certainty evidence) (recovery within 28 days follow-up: RR 1.02, 95% CI 0.79 to 1.32; I² = 0%; moderate-certainty evidence). The use of C-reactive protein tests may not increase total mortality within 28 days follow-up, from 1 death per 1000 participants in the control group to 0 deaths per 1000 participants in the intervention group (RR 0.53, 95% CI 0.10 to 2.92; I² = 0%; low-certainty evidence).\nWe are uncertain as to whether procalcitonin affects any of the primary or secondary outcomes because there were few participants, thereby limiting the certainty of evidence.\nWe assessed the certainty of the evidence as moderate to high according to GRADE for the primary outcomes for C-reactive protein test, except for mortality, as there were very few deaths, thereby limiting the certainty of the evidence.\nAuthors' conclusions\nThe use of C-reactive protein point-of-care tests as an adjunct to standard care likely reduces the number of participants given an antibiotic prescription in primary care patients who present with symptoms of acute respiratory infection. The use of C-reactive protein point-of-care tests likely does not affect recovery rates. It is unlikely that further research will substantially change our conclusion regarding the reduction in number of participants given an antibiotic prescription, although the size of the estimated effect may change.\nThe use of C-reactive protein point-of-care tests may not increase mortality within 28 days follow-up, but there were very few events. Studies that recorded deaths and hospital admissions were performed in children from low- and middle-income countries and older adults with comorbidities.\nFuture studies should focus on children, immunocompromised individuals, and people aged 80 years and above with comorbidities. 
More studies evaluating procalcitonin and potential new biomarkers as point-of-care tests used in primary care to guide antibiotic prescription are needed.\nFurthermore, studies are needed to validate C-reactive protein decision algorithms, with a specific focus on potential age group differences.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to know whether biomarkers as a point-of-care test used by primary care doctors can help decide whether to use antibiotics in people with airway infections.\nWe were interested in the effect of biomarker guidance on the number of prescriptions of antibiotics, patient recovery, hospital admissions, and risk of death."
}
] | query-laysum | 22091 | [
{
"role": "user",
"content": "Abstract: Background\nEndophthalmitis is a sight-threatening emergency that requires prompt diagnosis and treatment. The condition is characterised by purulent inflammation of the intraocular fluids caused by an infective agent. In exogenous endophthalmitis, the infective agent is foreign and typically introduced into the eye through intraocular surgery or open globe trauma.\nObjectives\nTo assess the potential role of combined pars plana vitrectomy and intravitreal antibiotics in the acute management of exogenous endophthalmitis, versus the standard of care, defined as vitreous tap and intravitreal antibiotics.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; which contains the Cochrane Eyes and Vision Trials Register; 2022, Issue 5); Ovid MEDLINE; Ovid Embase; the International Standard Randomised Controlled Trial Number registry; ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform. There were no restrictions to language or year of publication. The date of the search was 5 May 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared pars plana vitrectomy and intravitreal injection of antibiotics versus intravitreal injection of antibiotics alone, for the immediate management of exogenous endophthalmitis.\nData collection and analysis\nWe used standard methods expected by Cochrane. Two review authors independently screened search results and extracted data. We considered the following outcomes: visual acuity improvement and change in visual acuity at three and six months; additional surgical procedures, including vitrectomy and cataract surgery, at any time during follow-up; quality of life and adverse effects. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe identified a single RCT that met our inclusion criteria. The included RCT enrolled a total of 420 participants with clinical evidence of endophthalmitis, within six weeks of cataract surgery or secondary intraocular lens implantation. Participants were randomly assigned according to a 2 x 2 factorial design to either treatment with vitrectomy (VIT) or vitreous tap biopsy (TAP) and to treatment with or without systemic antibiotics. Twenty-four participants did not have a final follow-up: 12 died, five withdrew consent to be followed up, and seven were not willing to return for the visit.\nThe study did not report visual acuity according to the review's predefined outcomes. At three months, 41% of all participants achieved 20/40 or better visual acuity and 69% had 20/100 or better acuity. The study authors reported that there was no statistically significant difference in visual acuity between treatment groups (very low-certainty evidence). There was low-certainty evidence of a similar requirement for additional surgical procedures (risk ratio RR 0.90, 95% confidence interval 0.66 to 1.21). Adverse effects included: VIT group: dislocated intraocular lens (n = 2), macular infarction (n = 1). TAP group: expulsive haemorrhage (n = 1). 
Quality of life and mean change in visual acuity were not reported.\nAuthors' conclusions\nWe identified a single RCT (published 27 years ago) for the role of early vitrectomy in exogenous endophthalmitis, which suggests that there may be no difference between groups (VIT vs TAP) for visual acuity at three or nine months' follow-up.\nWe are of the opinion that there is a clear need for more randomised studies comparing the role of primary vitrectomy in exogenous endophthalmitis. Moreover, since the original RCT study, there have been incremental changes in the surgical techniques with which vitrectomy is performed. Such advances are likely to influence the outcome of early vitrectomy in exogenous endophthalmitis.\n\nGiven the provided abstract, please respond to the following query: \"How is exogenous endophthalmitis treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Although endophthalmitis is rare, the consequence following an infection maybe be profound. Prompt and appropriate treatment is critical. Currently, the majority of endophthalmitis infections have a diagnostic biopsy of the vitreous gel and are then treated with antibiotics. The biopsy involves a ‘tap’ in which a small sample of the vitreous gel is removed, in order to identify the type of infection. People will then have antibiotics injected into the fluid inside the eyeball to treat the infection. If this does not work, an eye operation is sometimes required later on, where the infected jelly inside the eyeball is removed (known as a “vitrectomy” operation). Vitrectomy involves surgical removal of the vitreous gel from inside the eye. In individuals with endophthalmitis, vitrectomy may aid in removing the infection, help the eye recover more quickly and potentially limit the damage caused by the infection. However, the impact of vitrectomy on the management of endophthalmitis remains unclear."
}
] | query-laysum | 22092 | [
{
"role": "user",
"content": "Abstract: Background\nGovernments and health systems are increasingly using mobile devices to communicate with patients and the public. Targeted digital client communication is when the health system transmits information to particular individuals or groups of people, based on their health or demographic status. Common types of targeted client communication are text messages that remind people to go to appointments or take their medicines. Other types include phone calls, interactive voice response, or multimedia messages that offer healthcare information, advice, monitoring, and support.\nObjectives\nTo explore clients' perceptions and experiences of targeted digital communication via mobile devices on topics related to reproductive, maternal, newborn, child, or adolescent health (RMNCAH).\nSearch methods\nWe searched MEDLINE (OvidSP), MEDLINE In-Process & Other Non-Indexed Citations (OvidSP), Embase (Ovid), World Health Organization Global Health Library, and POPLINE databases for eligible studies from inception to 3-6 July 2017 dependant on the database (See appendix 2).\nSelection criteria\nWe included studies that used qualitative methods for data collection and analysis; that explored clients' perceptions and experiences of targeted digital communication via mobile device in the areas of RMNCAH; and were from any setting globally.\nData collection and analysis\nWe used maximum variation purposive sampling for data synthesis, employing a three-step sampling frame. We conducted a framework thematic analysis using the Supporting the Use of Research Evidence (SURE) framework as our starting point. We assessed our confidence in the findings using the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach. We used a matrix approach to explore whether potential implementation barriers identified in our synthesis had been addressed in the trials included in the related Cochrane Reviews of effectiveness.\nMain results\nWe included 35 studies, from a wide range of countries on six continents. Nineteen studies were conducted in low- and middle-income settings and sixteen in high-income settings. Some of the studies explored the views of people who had experienced the interventions, whereas others were hypothetical in nature, asking what people felt they would like from a digital health intervention. The studies covered a range of digital targeted client communication, for example medication or appointment reminders, prenatal health information, support for smoking cessation while pregnant, or general sexual health information.\nOur synthesis showed that clients' experiences of these types of programmes were mixed. Some felt that these programmes provided them with feelings of support and connectedness, as they felt that someone was taking the time to send them messages (moderate confidence in the evidence). They also described sharing the messages with their friends and family (moderate confidence).\nHowever, clients also pointed to problems when using these programmes. Some clients had poor access to cell networks and to the internet (high confidence). Others had no phone, had lost or broken their phone, could not afford airtime, or had changed their phone number (moderate confidence). Some clients, particularly women and teenagers, had their access to phones controlled by others (moderate confidence). The cost of messages could also be a problem, and many thought that messages should be free of charge (high confidence). 
Language issues as well as skills in reading, writing, and using mobile phones could also be a problem (moderate confidence).\nClients dealing with stigmatised or personal health conditions such as HIV, family planning, or abortion care were also concerned about privacy and confidentiality (high confidence). Some clients suggested strategies to deal with these issues, such as using neutral language and tailoring the content, timing, and frequency of messages (high confidence).\nClients wanted messages at a time and frequency that was convenient for them (moderate confidence). They had preferences for different delivery channels (e.g. short message service (SMS) or interactive voice response) (moderate confidence). They also had preferences about message content, including new knowledge, reminders, solutions, and suggestions about health issues (moderate confidence). Clients' views about who sent the digital health communication could influence their views of the programme (moderate confidence).\nFor an overview of the findings and our confidence in the evidence, please see the 'Summary of qualitative findings' tables.\nOur matrix shows that many of the trials assessing these types of programmes did not try to address the problems we identified, although this may have been a reporting issue.\nAuthors' conclusions\nOur synthesis identified several factors that can influence the successful implementation of targeted client communication programmes using mobile devices. These include barriers to use that have equity implications. Programme planners should take these factors into account when designing and implementing programmes. Future trial authors also need to actively address these factors and to report their efforts in their trial publications.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Many clients like receiving messages from the health services by mobile phone. However, some clients have problems receiving messages due to lack of network access, internet, or phone, or language, reading, or privacy issues. Clients' experiences are also influenced by message timing, frequency, content, and sender."
}
] | query-laysum | 22093 | [
{
"role": "user",
"content": "Abstract: Background\nXpert MTB/RIF and Xpert MTB/RIF Ultra (Xpert Ultra) are World Health Organization (WHO)-recommended rapid tests that simultaneously detect tuberculosis and rifampicin resistance in people with signs and symptoms of tuberculosis. This review builds on our recent extensive Cochrane Review of Xpert MTB/RIF accuracy.\nObjectives\nTo compare the diagnostic accuracy of Xpert Ultra and Xpert MTB/RIF for the detection of pulmonary tuberculosis and detection of rifampicin resistance in adults with presumptive pulmonary tuberculosis. For pulmonary tuberculosis and rifampicin resistance, we also investigated potential sources of heterogeneity.\nWe also summarized the frequency of Xpert Ultra trace-positive results, and estimated the accuracy of Xpert Ultra after repeat testing in those with trace-positive results.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, MEDLINE, Embase, Science Citation Index, Web of Science, LILACS, Scopus, the WHO ICTRP, the ISRCTN registry, and ProQuest to 28 January 2020 with no language restriction.\nSelection criteria\nWe included diagnostic accuracy studies using respiratory specimens in adults with presumptive pulmonary tuberculosis that directly compared the index tests. For pulmonary tuberculosis detection, the reference standards were culture and a composite reference standard. For rifampicin resistance, the reference standards were culture-based drug susceptibility testing and line probe assays.\nData collection and analysis\nTwo review authors independently extracted data using a standardized form, including data by smear and HIV status. We assessed risk of bias using QUADAS-2 and QUADAS-C. We performed meta-analyses comparing pooled sensitivities and specificities, separately for pulmonary tuberculosis detection and rifampicin resistance detection, and separately by reference standard. Most analyses used a bivariate random-effects model. For tuberculosis detection, we estimated accuracy in studies in participants who were not selected based on prior microscopy testing or history of tuberculosis. We performed subgroup analyses by smear status, HIV status, and history of tuberculosis. We summarized Xpert Ultra trace results.\nMain results\nWe identified nine studies (3500 participants): seven had unselected participants (2834 participants). All compared Xpert Ultra and Xpert MTB/RIF for pulmonary tuberculosis detection; seven studies used a paired comparative accuracy design, and two studies used a randomized design. Five studies compared Xpert Ultra and Xpert MTB/RIF for rifampicin resistance detection; four studies used a paired design, and one study used a randomized design. Of the nine included studies, seven (78%) were mainly or exclusively in high tuberculosis burden countries. For pulmonary tuberculosis detection, most studies had low risk of bias in all domains.\nPulmonary tuberculosis detection\nXpert Ultra pooled sensitivity and specificity (95% credible interval) against culture were 90.9% (86.2 to 94.7) and 95.6% (93.0 to 97.4) (7 studies, 2834 participants; high-certainty evidence) versus Xpert MTB/RIF pooled sensitivity and specificity of 84.7% (78.6 to 89.9) and 98.4% (97.0 to 99.3) (7 studies, 2835 participants; high-certainty evidence). The difference in the accuracy of Xpert Ultra minus Xpert MTB/RIF was estimated at 6.3% (0.1 to 12.8) for sensitivity and −2.7% (−5.7 to −0.5) for specificity. 
If the point estimates for Xpert Ultra and Xpert MTB/RIF are applied to a hypothetical cohort of 1000 patients, where 10% of those presenting with symptoms have pulmonary tuberculosis, Xpert Ultra will miss 9 cases, and Xpert MTB/RIF will miss 15 cases. The number of people wrongly diagnosed with pulmonary tuberculosis would be 40 with Xpert Ultra and 14 with Xpert MTB/RIF.\nIn smear-negative, culture-positive participants, pooled sensitivity was 77.5% (67.6 to 85.6) for Xpert Ultra versus 60.6% (48.4 to 71.7) for Xpert MTB/RIF; pooled specificity was 95.8% (92.9 to 97.7) for Xpert Ultra versus 98.8% (97.7 to 99.5) for Xpert MTB/RIF (6 studies).\nIn people living with HIV, pooled sensitivity was 87.6% (75.4 to 94.1) for Xpert Ultra versus 74.9% (58.7 to 86.2) for Xpert MTB/RIF; pooled specificity was 92.8% (82.3 to 97.0) for Xpert Ultra versus 99.7% (98.6 to 100.0) for Xpert MTB/RIF (3 studies).\nIn participants with a history of tuberculosis, pooled sensitivity was 84.2% (72.5 to 91.7) for Xpert Ultra versus 81.8% (68.7 to 90.0) for Xpert MTB/RIF; pooled specificity was 88.2% (70.5 to 96.6) for Xpert Ultra versus 97.4% (91.7 to 99.5) for Xpert MTB/RIF (4 studies).\nThe proportion of Ultra trace-positive results ranged from 3.0% to 30.4%. Data were insufficient to estimate the accuracy of Xpert Ultra repeat testing in individuals with initial trace-positive results.\nRifampicin resistance detection\nPooled sensitivity and specificity were 94.9% (88.9 to 97.9) and 99.1% (97.7 to 99.8) (5 studies, 921 participants; high-certainty evidence) for Xpert Ultra versus 95.3% (90.0 to 98.1) and 98.8% (97.2 to 99.6) (5 studies, 930 participants; high-certainty evidence) for Xpert MTB/RIF. The difference in the accuracy of Xpert Ultra minus Xpert MTB/RIF was estimated at −0.3% (−6.9 to 5.7) for sensitivity and 0.3% (−1.2 to 2.0) for specificity. If the point estimates for Xpert Ultra and Xpert MTB/RIF are applied to a hypothetical cohort of 1000 patients, where 10% of those presenting with symptoms have rifampicin resistance, Xpert Ultra will miss 5 cases, and Xpert MTB/RIF will miss 5 cases. The number of people wrongly diagnosed with rifampicin resistance would be 8 with Xpert Ultra and 11 with Xpert MTB/RIF.\nWe identified a higher number of rifampicin resistance indeterminate results with Xpert Ultra, pooled proportion 7.6% (2.4 to 21.0) compared to Xpert MTB/RIF pooled proportion 0.8% (0.2 to 2.4). The estimated difference in the pooled proportion of indeterminate rifampicin resistance results for Xpert Ultra versus Xpert MTB/RIF was 6.7% (1.4 to 20.1).\nAuthors' conclusions\nXpert Ultra has higher sensitivity and lower specificity than Xpert MTB/RIF for pulmonary tuberculosis, especially in smear-negative participants and people living with HIV. Xpert Ultra specificity was lower than that of Xpert MTB/RIF in participants with a history of tuberculosis. The sensitivity and specificity trade-off would be expected to vary by setting. For detection of rifampicin resistance, Xpert Ultra and Xpert MTB/RIF had similar sensitivity and specificity. Ultra trace-positive results were common.\nXpert Ultra and Xpert MTB/RIF provide accurate results and can allow rapid initiation of treatment for rifampicin-resistant and multidrug-resistant tuberculosis.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We compared the diagnostic accuracy of Xpert Ultra and Xpert MTB/RIF with results primarily measured against culture (detection of pulmonary tuberculosis) and drug susceptibility testing and line probe assays (detection of rifampicin resistance)."
}
] | query-laysum | 22094 | [
{
"role": "user",
"content": "Abstract: Background\nOur March 2021 edition of this review showed thoracic imaging computed tomography (CT) to be sensitive and moderately specific in diagnosing COVID-19 pneumonia. This new edition is an update of the review.\nObjectives\nOur objectives were to evaluate the diagnostic accuracy of thoracic imaging in people with suspected COVID-19; assess the rate of positive imaging in people who had an initial reverse transcriptase polymerase chain reaction (RT-PCR) negative result and a positive RT-PCR result on follow-up; and evaluate the accuracy of thoracic imaging for screening COVID-19 in asymptomatic individuals. The secondary objective was to assess threshold effects of index test positivity on accuracy.\nSearch methods\nWe searched the COVID-19 Living Evidence Database from the University of Bern, the Cochrane COVID-19 Study Register, The Stephen B. Thacker CDC Library, and repositories of COVID-19 publications through to 17 February 2021. We did not apply any language restrictions.\nSelection criteria\nWe included diagnostic accuracy studies of all designs, except for case-control, that recruited participants of any age group suspected to have COVID-19. Studies had to assess chest CT, chest X-ray, or ultrasound of the lungs for the diagnosis of COVID-19, use a reference standard that included RT-PCR, and report estimates of test accuracy or provide data from which we could compute estimates. We excluded studies that used imaging as part of the reference standard and studies that excluded participants with normal index test results.\nData collection and analysis\nThe review authors independently and in duplicate screened articles, extracted data and assessed risk of bias and applicability concerns using QUADAS-2. We presented sensitivity and specificity per study on paired forest plots, and summarized pooled estimates in tables. We used a bivariate meta-analysis model where appropriate.\nMain results\nWe included 98 studies in this review. Of these, 94 were included for evaluating the diagnostic accuracy of thoracic imaging in the evaluation of people with suspected COVID-19. Eight studies were included for assessing the rate of positive imaging in individuals with initial RT-PCR negative results and positive RT-PCR results on follow-up, and 10 studies were included for evaluating the accuracy of thoracic imaging for imagining asymptomatic individuals.\nFor all 98 included studies, risk of bias was high or unclear in 52 (53%) studies with respect to participant selection, in 64 (65%) studies with respect to reference standard, in 46 (47%) studies with respect to index test, and in 48 (49%) studies with respect to flow and timing. Concerns about the applicability of the evidence to: participants were high or unclear in eight (8%) studies; index test were high or unclear in seven (7%) studies; and reference standard were high or unclear in seven (7%) studies.\nImaging in people with suspected COVID-19\nWe included 94 studies. Eighty-seven studies evaluated one imaging modality, and seven studies evaluated two imaging modalities. All studies used RT-PCR alone or in combination with other criteria (for example, clinical signs and symptoms, positive contacts) as the reference standard for the diagnosis of COVID-19.\nFor chest CT (69 studies, 28285 participants, 14,342 (51%) cases), sensitivities ranged from 45% to 100%, and specificities from 10% to 99%. 
The pooled sensitivity of chest CT was 86.9% (95% confidence interval (CI) 83.6 to 89.6), and pooled specificity was 78.3% (95% CI 73.7 to 82.3). Definition for index test positivity was a source of heterogeneity for sensitivity, but not specificity. Reference standard was not a source of heterogeneity.\nFor chest X-ray (17 studies, 8529 participants, 5303 (62%) cases), the sensitivity ranged from 44% to 94% and specificity from 24 to 93%. The pooled sensitivity of chest X-ray was 73.1% (95% CI 64.1 to 80.5), and pooled specificity was 73.3% (95% CI 61.9 to 82.2). Definition for index test positivity was not found to be a source of heterogeneity. Definition for index test positivity and reference standard were not found to be sources of heterogeneity.\nFor ultrasound of the lungs (15 studies, 2410 participants, 1158 (48%) cases), the sensitivity ranged from 73% to 94% and the specificity ranged from 21% to 98%. The pooled sensitivity of ultrasound was 88.9% (95% CI 84.9 to 92.0), and the pooled specificity was 72.2% (95% CI 58.8 to 82.5). Definition for index test positivity and reference standard were not found to be sources of heterogeneity.\nIndirect comparisons of modalities evaluated across all 94 studies indicated that chest CT and ultrasound gave higher sensitivity estimates than X-ray (P = 0.0003 and P = 0.001, respectively). Chest CT and ultrasound gave similar sensitivities (P = 0.42). All modalities had similar specificities (CT versus X-ray P = 0.36; CT versus ultrasound P = 0.32; X-ray versus ultrasound P = 0.89).\nImaging in PCR-negative people who subsequently became positive\nFor rate of positive imaging in individuals with initial RT-PCR negative results, we included 8 studies (7 CT, 1 ultrasound) with a total of 198 participants suspected of having COVID-19, all of whom had a final diagnosis of COVID-19. Most studies (7/8) evaluated CT. Of 177 participants with initially negative RT-PCR who had positive RT-PCR results on follow-up testing, 75.8% (95% CI 45.3 to 92.2) had positive CT findings.\nImaging in asymptomatic PCR-positive people\nFor imaging asymptomatic individuals, we included 10 studies (7 CT, 1 X-ray, 2 ultrasound) with a total of 3548 asymptomatic participants, of whom 364 (10%) had a final diagnosis of COVID-19. For chest CT (7 studies, 3134 participants, 315 (10%) cases), the pooled sensitivity was 55.7% (95% CI 35.4 to 74.3) and the pooled specificity was 91.1% (95% CI 82.6 to 95.7).\nAuthors' conclusions\nChest CT and ultrasound of the lungs are sensitive and moderately specific in diagnosing COVID-19. Chest X-ray is moderately sensitive and moderately specific in diagnosing COVID-19. Thus, chest CT and ultrasound may have more utility for ruling out COVID-19 than for differentiating SARS-CoV-2 infection from other causes of respiratory illness. The uncertainty resulting from high or unclear risk of bias and the heterogeneity of included studies limit our ability to confidently draw conclusions based on our results.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to know whether chest imaging is accurate enough to diagnose COVID-19 in people with suspected infection; we included studies in people with suspected COVID-19 only and excluded studies in people with confirmed COVID-19. We also wanted to assess the accuracy of chest imaging for screening asymptomatic people.\nThe evidence is up to date to 17 February 2021."
}
] | query-laysum | 22095 | [
{
"role": "user",
"content": "Abstract: Background\nFatigue is a common and potentially distressing symptom for people with rheumatoid arthritis with no accepted evidence based management guidelines. Non-pharmacological interventions, such as physical activity and psychosocial interventions, have been shown to help people with a range of other long-term conditions to manage subjective fatigue.\nObjectives\nTo evaluate the benefit and harm of non-pharmacological interventions for the management of fatigue in people with rheumatoid arthritis. This included any intervention that was not classified as pharmacological in accordance with European Union (EU) Directive 2001/83/EEC.\nSearch methods\nThe following electronic databases were searched up to October 2012, Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; EMBASE; AMED; CINAHL; PsycINFO; Social Science Citation Index; Web of Science; Dissertation Abstracts International; Current Controlled Trials Register; The National Research Register Archive; The UKCRN Portfolio Database. In addition, reference lists of articles identified for inclusion were checked for additional studies and key authors were contacted.\nSelection criteria\nRandomised controlled trials were included if they evaluated a non-pharmacological intervention in people with rheumatoid arthritis with self-reported fatigue as an outcome measure.\nData collection and analysis\nTwo review authors selected relevant trials, assessed risk of bias and extracted data. Where appropriate, data were pooled using meta-analysis with a random-effects model.\nMain results\nTwenty-four studies met the inclusion criteria, with a total of 2882 participants with rheumatoid arthritis. Included studies investigated physical activity interventions (n = 6 studies; 388 participants), psychosocial interventions (n = 13 studies; 1579 participants), herbal medicine (n = 1 study; 58 participants), omega-3 fatty acid supplementation (n = 1 study; 81 participants), Mediterranean diet (n = 1 study; 51 participants), reflexology (n = 1 study; 11 participants) and the provision of Health Tracker information (n = 1 study; 714 participants). Physical activity was statistically significantly more effective than the control at the end of the intervention period (standardized mean difference (SMD) -0.36, 95% confidence interval (CI) -0.62 to -0.10; back translated to mean difference of 14.4 points lower, 95% CI -4.0 to -24.8 on a 100 point scale where a lower score means less fatigue; number needed to treat for an additional beneficial outcome (NNTB) 7, 95% CI 4 to 26) demonstrating a small beneficial effect upon fatigue. Psychosocial intervention was statistically significantly more effective than the control at the end of the intervention period (SMD -0.24, 95% CI -0.40 to -0.07; back translated to mean difference of 9.6 points lower, 95% CI -2.8 to -16.0 on a 100 point scale, lower score means less fatigue; NNTB 10, 95% CI 6 to 33) demonstrating a small beneficial effect upon fatigue. For the remaining interventions meta-analysis was not possible and there was either no statistically significant difference between trial arms or findings were not reported. Only three studies reported any adverse events and none of these were serious, however, it is possible that the low incidence was in part due to poor reporting. 
The quality of the evidence ranged from moderate quality for physical activity interventions and Mediterranean diet to low quality for psychosocial interventions and all other interventions.\nAuthors' conclusions\nThis review provides some evidence that physical activity and psychosocial interventions provide benefit in relation to self-reported fatigue in adults with rheumatoid arthritis. There is currently insufficient evidence of the effectiveness of other non-pharmacological interventions.\n\nGiven the provided abstract, please respond to the following query: \"What happens to people with rheumatoid arthritis who use non-pharmacological interventions?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- At the end of the intervention, people receiving a control had a mean score of 63 on a scale of 0 to 100 with a lower score meaning less fatigue.\n- People who used physical activity rated their fatigue as 54 on a scale of 0 to 100 at the end of the intervention, that is 9 points lower than the people who received the control.\n- People who participated in a psychosocial intervention rated their fatigue as 57 on a scale of 0 to 100 at the end of the intervention, that is 6 points lower than the people who received the control."
}
] | query-laysum | 22096 | [
{
"role": "user",
"content": "Abstract: Background\nAvailable evidence has been inconclusive on whether pulmonary artery perfusion during cardiopulmonary bypass (CPB) is associated with decreased or increased mortality, pulmonary events, and serious adverse events (SAEs) after open heart surgery. To our knowledge, no previous systematic reviews have included meta-analyses of these interventions.\nObjectives\nTo assess the benefits and harms of single-shot or continuous pulmonary artery perfusion with blood (oxygenated or deoxygenated) or a preservation solution compared with no perfusion during cardiopulmonary bypass (CPB) in terms of mortality, pulmonary events, serious adverse events (SAEs), and increased inflammatory markers for adult surgical patients.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded, and advanced Google for relevant studies. We handsearched retrieved study reports and scanned citations of included studies and relevant reviews to ensure that no relevant trials were missed. We searched for ongoing trials and unpublished trials in the World Health Organization International Clinical Trials Registry Platform (ICTRP) and at clinicaltrials.gov (4 July 2017). We contacted medicinal firms producing preservation solutions to retrieve additional studies conducted to examine relevant interventions.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared pulmonary artery perfusion versus no perfusion during CPB in adult patients (≧ 18 years).\nData collection and analysis\nTwo independent review authors extracted data, conducted fixed-effect and random-effects meta-analyses, and calculated risk ratios (RRs) or odds ratios (ORs) for dichotomous outcomes. For continuous data, we have presented mean differences (MDs) and 95% confidence intervals (CIs) as estimates of the intervention effect. To minimize the risk of systematic error, we assessed risk of bias of included trials. To reduce the risk of random errors caused by sparse data and repetitive updating of cumulative meta-analyses, we applied Trial Sequential Analyses (TSAs). We used GRADE principles to assess the quality of evidence.\nMain results\nWe included in this review four RCTs (210 participants) reporting relevant outcomes. Investigators randomly assigned participants to pulmonary artery perfusion with blood versus no perfusion during CPB. Only one trial included the pulmonary artery perfusion intervention with a preservation solution; therefore we did not perform meta-analysis. Likewise, only one trial reported patient-specific data for the outcome \"pulmonary events\"; therefore we have provided no results from meta-analysis. Instead, review authors added two explorative secondary outcomes for this version of the review: the ratio of partial pressure of oxygen in arterial blood (PaO2) to fraction of inspired oxygen (FiO2); and intubation time. Last, review authors found no comparable data for the secondary outcome inflammatory markers.\nThe effect of pulmonary artery perfusion on all-cause mortality was uncertain (Peto OR 1.78, 95% CI 0.43 to 7.40; TSA adjusted CI 0.01 to 493; 4 studies, 210 participants; GRADE: very low quality). Sensitivity analysis of one trial with overall low risk of bias (except for blinding of personnel during the surgical procedure) yielded no evidence of a difference for mortality (Peto OR 1.65, 95% CI 0.27 to 10.15; 1 study, 60 participants). 
The TSA calculated required information size was not reached and the futility boundaries did not cross; thus this analysis cannot refute a 100% increase in mortality.\nThe effect of pulmonary artery perfusion with blood on SAEs was likewise uncertain (RR 1.12, 95% CI 0.66 to 1.89; 3 studies, 180 participants; GRADE: very low quality). Data show an association between pulmonary artery perfusion with blood during CPB and a higher postoperative PaO2/FiO2 ratio (MD 27.80, 95% CI 5.67 to 49.93; 3 studies, 119 participants; TSA adjusted CI 5.67 to 49.93; GRADE: very low quality), although TSA could not confirm or refute a 10% increase in the PaO2/FiO2 ratio, as the required information size was not reached.\nAuthors' conclusions\nThe effects of pulmonary artery perfusion with blood during cardiopulmonary bypass (CPB) are uncertain owing to the small numbers of participants included in meta-analyses. Risks of death and serious adverse events may be higher with pulmonary artery perfusion with blood during CPB, and robust evidence for any beneficial effects is lacking. Future randomized controlled trials (RCTs) should provide long-term follow-up and patient stratification by preoperative lung function and other documented risk factors for mortality. One study that is awaiting classification (epub abstract with preliminary results) may change the results of this review when full study details have been published.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "During open heart surgery, the heart-lung machine temporarily takes over the function of the heart and lungs. During extracorporeal circulation (ECC), only the systemic circuit is perfused with oxygenated blood with no blood supply to the lungs. This systematic review assesses the beneficial and harmful effects of additional perfusion of the pulmonary circuit with blood or a preservation solution compared with no blood supply to the lungs during ECC in adults undergoing open heart surgery. We report numbers of deaths, serious adverse events, and pulmonary events (for this version of the review, mechanical ventilation and oxygenation after surgery)."
}
] | query-laysum | 22097 | [
{
"role": "user",
"content": "Abstract: Background\nThis is the first update of a review published in 2010. While calcium channel blockers (CCBs) are often recommended as a first-line drug to treat hypertension, the effect of CCBs on the prevention of cardiovascular events, as compared with other antihypertensive drug classes, is still debated.\nObjectives\nTo determine whether CCBs used as first-line therapy for hypertension are different from other classes of antihypertensive drugs in reducing the incidence of major adverse cardiovascular events.\nSearch methods\nFor this updated review, the Cochrane Hypertension Information Specialist searched the following databases for randomised controlled trials (RCTs) up to 1 September 2020: the Cochrane Hypertension Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL 2020, Issue 1), Ovid MEDLINE, Ovid Embase, the World Health Organization International Clinical Trials Registry Platform, and ClinicalTrials.gov. We also contacted the authors of relevant papers regarding further published and unpublished work and checked the references of published studies to identify additional trials. The searches had no language restrictions.\nSelection criteria\nRandomised controlled trials comparing first-line CCBs with other antihypertensive classes, with at least 100 randomised hypertensive participants and a follow-up of at least two years.\nData collection and analysis\nThree review authors independently selected the included trials, evaluated the risk of bias, and entered the data for analysis. Any disagreements were resolved through discussion. We contacted study authors for additional information.\nMain results\nThis update contains five new trials. We included a total of 23 RCTs (18 dihydropyridines, 4 non-dihydropyridines, 1 not specified) with 153,849 participants with hypertension. All-cause mortality was not different between first-line CCBs and any other antihypertensive classes. As compared to diuretics, CCBs probably increased major cardiovascular events (risk ratio (RR) 1.05, 95% confidence interval (CI) 1.00 to 1.09, P = 0.03) and increased congestive heart failure events (RR 1.37, 95% CI 1.25 to 1.51, moderate-certainty evidence). As compared to beta-blockers, CCBs reduced the following outcomes: major cardiovascular events (RR 0.84, 95% CI 0.77 to 0.92), stroke (RR 0.77, 95% CI 0.67 to 0.88, moderate-certainty evidence), and cardiovascular mortality (RR 0.90, 95% CI 0.81 to 0.99, low-certainty evidence). As compared to angiotensin-converting enzyme (ACE) inhibitors, CCBs reduced stroke (RR 0.90, 95% CI 0.81 to 0.99, low-certainty evidence) and increased congestive heart failure (RR 1.16, 95% CI 1.06 to 1.28, low-certainty evidence). As compared to angiotensin receptor blockers (ARBs), CCBs reduced myocardial infarction (RR 0.82, 95% CI 0.72 to 0.94, moderate-certainty evidence) and increased congestive heart failure (RR 1.20, 95% CI 1.06 to 1.36, low-certainty evidence).\nAuthors' conclusions\nFor the treatment of hypertension, there is moderate certainty evidence that diuretics reduce major cardiovascular events and congestive heart failure more than CCBs. There is low to moderate certainty evidence that CCBs probably reduce major cardiovascular events more than beta-blockers. 
There is low to moderate certainty evidence that CCBs reduced stroke when compared to angiotensin-converting enzyme (ACE) inhibitors and reduced myocardial infarction when compared to angiotensin receptor blockers (ARBs), but increased congestive heart failure when compared to ACE inhibitors and ARBs. Many of the differences found in the current review are not robust, and further trials might change the conclusions. More well-designed RCTs studying the mortality and morbidity of individuals taking CCBs as compared with other antihypertensive drug classes are needed for patients with different stages of hypertension, different ages, and with different comorbidities such as diabetes.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this first update of a review published in 2010, we wanted to find out if calcium channel blockers (CCBs) can prevent harmful cardiovascular events such as stroke, heart attack, and heart failure when compared to other antihypertensive (blood pressure-lowering) medications used for individuals with raised blood pressure (hypertension)."
}
] |
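The hypertension record above reports every comparison as a risk ratio (RR) with a 95% confidence interval. As a minimal, hedged illustration of what those figures represent, the sketch below derives an RR and its log-scale 95% CI from a single 2x2 table. The `risk_ratio` helper and the event counts are hypothetical assumptions for illustration only; they are not taken from any trial in that review, whose pooled estimates come from meta-analysis across many trials rather than this single-table calculation.

```python
# Illustrative sketch only: risk ratio (RR) and 95% CI from a 2x2 table.
# Counts below are hypothetical, not data from the review above.
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Return (RR, lower 95% CI, upper 95% CI) using the usual log-scale method."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of ln(RR) for two independent binomial samples.
    se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

if __name__ == "__main__":
    # Hypothetical: 210 events among 2000 people on drug A,
    # 200 events among 2000 people on drug B.
    print(risk_ratio(210, 2000, 200, 2000))  # RR slightly above 1, CI spans 1
```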
query-laysum
|
22098
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatitis B vaccine has been recommended for use in people living with HIV (PLHIV) mostly because of the similarities in routes of infection and their prevalence in the same geographic areas. PLHIV may not develop sero-protection after receiving standard hepatitis B vaccine due to their compromised immune status.\nObjectives\nTo evaluate the efficacy of hepatitis B virus vaccine in PLHIV compared to placebo or no vaccine.\nSearch methods\nWe searched 6 English language databases in July 2012, and updated the search in June 2013 and August 2014. We searched the grey literature, conference proceedings, specialised web sites, and contacted experts in the field.\nSelection criteria\nRandomised controlled trials of hepatitis B vaccine compared to placebo or no vaccine, evaluating relevant outcomes of efficacy and safety.\nData collection and analysis\nTwo review authors independently sought and extracted data on study design, participants, hepatitis B infection, hepatitis B related morbidity and mortality, anti-HBs immunogenicity and adverse effects related to vaccines from published articles or through correspondence with authors. Data were analysed qualitatively.\nMain results\nOne double-blind randomised controlled trial with 26 participants who were on antiretroviral therapy (ART), comparing hepatitis B vaccine to placebo conducted in Spain met our eligibility criteria and was included in this review. The study ran for three years and participants were followed up on a monthly basis. The study reported adequate humoral response to vaccine at 12 months and no local or systematic side effects in both intervention and control groups. This humoral response was lost when the participants stopped taking ART. The sample size of the study was small and the study was conducted in a high income setting unlike the areas of highest burden of hepatitis B and HIV co-infections.\nAuthors' conclusions\nThe evidence from this study is insufficient to support any recommendations regarding the use of hepatitis B vaccine in PLHIV. Neither does this evidence demonstrate that hepatitis B vaccine is unsafe in PLHIV. Further randomised controlled trials in high prevalence areas are required to generate evidence on the long term efficacy and safety of hepatitis B vaccine in PLHIV with and without ART. Different regimens and routes of administration should also be explored.\n\nGiven the provided abstract, please respond to the following query: \"Study Characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our search for eligible papers was updated in August 2014 and we found one trial with 26 adult participants in Spain. The study sought to test if hepatitis B virus vaccine was better than placebo in preventing PLHIV from getting hepatitis B."
}
] |
query-laysum
|
22099
|
[
{
"role": "user",
"content": "Abstract: Background\nThis review will not be updated. It should be considered out of date. It has been superseded by this review:\nIrlam JH, Siegfried N, Visser ME, Rollins NC. Micronutrient supplementation for children with HIV infection. Cochrane Database of Systematic Reviews 2013, Issue 10. Art. No.: CD010666. DOI: 10.1002/14651858.CD010666.Link to Cochrane Library. [PubMed]\nThe most prevalent haematologic disturbance associated with HIV in children (apart from CD4 lymphocytopenia) is anaemia. Anaemia associated with HIV arises from multiple mechanisms, including the direct inhibitory effect of HIV on red cell precursors, other locally prevalent and/or opportunistic infections, micronutrient deficiency, anaemia of chronic disease, and as a consequence of medicines given for HIV and/or other concurrent illnesses. Iron deficiency is the most common cause of nutritional anaemia globally. There is significant geographical overlap of areas of the world where iron deficiency anaemia (IDA) and paediatric HIV are distributed. Given the high prevalence of IDA, it is likely that many HIV-infected children also are iron deficient. The contribution of iron deficiency to anaemia in HIV-infected children has been described but is incompletely understood. Currently, iron supplementation for anaemic infants and children is routinely practiced without any obvious effect in most developing countries, which bear most of the burden of global paediatric HIV infections.\nBecause iron deficiency and IDA are common in HIV-infected children in high-prevalence areas and because there are concerns about possible deleterious effects of iron, this review aims to assess the evidence for iron supplementation for reducing morbidity and mortality in HIV-infected children.\nObjectives\nTo determine whether iron supplementation improves clinical, immunologic, and virologic outcomes in children infected with HIV\nSearch methods\nWe used the comprehensive search strategy developed specifically by the Cochrane HIV/AIDS Review Group to identify HIV/AIDS randomised controlled trials, and searched the following electronic databases: MEDLINE (searched November 2007); Embase (searched December 2007); and CENTRAL (December 2007). This search was supplemented with a search of AIDSearch (searched December 2007) and NLM Gateway (searched December 2007) to identify relevant conference abstracts, as well as a search of the reference lists of all eligible articles. The search was not limited by language or publication status.\nSelection criteria\nRandomised controlled trials (RCTs) of iron supplementation in any form and dose in HIV-infected children aged 12 years and younger.\nData collection and analysis\nWe independently screened the results of the search to select potentially relevant studies and to retrieve the full articles. We independently applied the inclusion criteria to the potentially relevant studies. \nNo studies were identified that fulfilled the selection criteria.\nMain results\nNo RCTs of iron supplementation in HIV-infected children were found.\nAuthors' conclusions\nImplications for clinical practice:\nThe current clinical practice of iron supplementation in HIV-infected children is based on weak evidence comprising observational studies and expert opinions.\nImplications for research:\nHigh-quality RCTs of iron supplementation are urgently required, especially in areas with significant overlap of high prevalence of HIV, iron deficiency anaemia, and malaria. 
Policy makers should prioritise funding for these trials.\n\nGiven the provided abstract, please respond to the following query: \"This review will not be updated. It should be considered out of date. It has been replaced by this review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
] |
query-laysum
|
22100
|
[
{
"role": "user",
"content": "Abstract: Background\nCaring for someone with dementia can be emotionally and physically demanding. Respite care is any intervention designed to give rest or relief to caregivers. It is not clear what positive and negative effects such care may have on them, or on people with dementia.\nObjectives\nTo assess the benefits and harms of respite care for people with dementia and their caregivers, in particular the effect of respite care on rates of institutionalisation.\nSearch methods\nThe trials were identified from a search of ALOIS, the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group, using the terms respite* OR daycare OR caregiver* relief. ALOIS contains up-to-date records from all major healthcare databases and many ongoing trial databases.\nSelection criteria\nRandomised controlled trials comparing respite care with a control intervention for people with dementia.\nData collection and analysis\nTwo review authors carried out study selection independently and reached a consensus through discussion. Data were extracted by a single review author. The review authors contacted all investigators for methodological details not reported in the text and for additional data for three studies included in the previous version of the review.\nMain results\nFour trials are now included in the review, with 753 participants. They were different in many ways including the intervention, duration, outcomes and control group so pooling of data was not possible. Overall, the quality of the evidence was rated as very low. Re-analysis of outcomes using data from the published studies found no significant effects of respite care compared to no respite care on any caregiver variable. When respite care was compared to polarity therapy a significant effect was found in favour of polarity therapy for caregiver perceived stress (n = 38, MD 5.80, 95% CI 1.43 to 10.17), but not for other measures of psychological health and other caregiver outcomes. No studies reported evaluable data on outcomes related to the people with dementia.\nAuthors' conclusions\nCurrent evidence does not demonstrate any benefits or adverse effects from the use of respite care for people with dementia or their caregivers. These results should be treated with caution, however, as they may reflect the lack of high quality research in this area rather than an actual lack of benefit. Given the frequency with which respite care is advocated and provided, well-designed trials are needed in this area.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Caring for someone with dementia can be emotionally and physically demanding. Respite care is any intervention designed to give rest or relief to caregivers and it is not clear what positive and negative effects such care may have on them, or on people with dementia."
}
] |
query-laysum
|
22101
|
[
{
"role": "user",
"content": "Abstract: Background\nThe use of botulinum toxin as an investigative and treatment modality for strabismus is well reported in the medical literature. However, it is unclear how effective it is in comparison with other treatment options for strabismus.\nObjectives\nThe primary objective was to examine the efficacy of botulinum toxin therapy in the treatment of strabismus compared with alternative conservative or surgical treatment options. This review sought to ascertain those types of strabismus that particularly benefit from the use of botulinum toxin as a treatment option (such as small angle strabismus or strabismus with binocular potential, i.e. the potential to use both eyes together as a pair). The secondary objectives were to investigate the dose effect and complication rates associated with botulinum toxin.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, LILACS and three trials registers on 6 July 2022, together with reference checking to identify additional studies. We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) comparing botulinum toxin with strabismus surgery, botulinum toxin alternatives (i.e. bupivacaine) and conservative therapy such as orthoptic exercises, prisms, or lens therapy for people of any age with strabismus. All relevant RCTs identified in this update compared botulinum toxin with strabismus surgery.\nData collection and analysis\nWe used standard methods expected by Cochrane and assessed the certainty of the body of evidence using GRADE.\nMain results\nWe included four RCTs with 242 participants that enrolled adults with esotropia or exotropia, children with acquired esotropia, and children with infantile esotropia. The follow-up period ranged from six to 36 months. Two studies were conducted in Spain, and one each in Canada and South Africa. We judged the included studies to have a mixture of low, unclear and high risk of bias. We did not consider any of the included studies to be at low risk of bias for all domains.\nAll four studies reported the proportion of participants who improved or corrected strabismus, defined as ≤ 10 prism diopters (PD) at six months (two studies) or ≤ 8 PD at one year (two studies). Low-certainty evidence suggested that participants treated with the surgery may be more likely to improve or correct strabismus compared with those who treated with botulinum toxin (risk ratio (RR) 0.72, 95% confidence interval (CI) 0.53 to 0.99; I² = 50%; 4 studies, 242 participants; low-certainty evidence).\nOne study, which enrolled 110 children with infantile esotropia, suggested that surgery may reduce the incidence of additional surgical intervention required, but the evidence was very uncertain (RR 3.05, 95% CI 1.34 to 6.91; 1 study, 101 participants; very low-certainty evidence).\nTwo studies conducted in Spain compared botulinum toxin with surgery in children who required retreatment for acquired or infantile esotropia. These two studies provided low-certainty evidence that botulinum toxin may have little to no effect on achieving sensory fusion (RR 0.88, 95% CI 0.63 to 1.23; I² = 0%; 2 studies, 102 participants) and stereopsis (RR 0.86, 95% CI 0.59 to 1.25; I² = 0%; 2 studies, 102 participants) compared with surgery.\nThree studies reported non-serious adverse events. 
Partial transient ptosis (range 16.7% to 37.0%) and transient vertical deviation (range 5.6% to 18.5%) were observed among participants treated with botulinum toxin in three studies. In one study, 44.7% participants in the surgery group experienced discomfort. No studies reported serious adverse events or postintervention quality of life.\nAuthors' conclusions\nIt remains unclear whether botulinum toxin may be an alternative to strabismus surgery as an independent treatment modality among certain types of strabismus because we found only low and very low-certainty evidence in this review update.\nLow-certainty evidence suggests that strabismus surgery may be preferable to botulinum toxin injection to improve or correct strabismus when types of strabismus and different age groups are combined. We found low-certainty evidence suggesting botulinum toxin may have little to no effect on achievement of binocular single vision compared with surgery in children with acquired or infantile esotropia. We did not find sufficient evidence to draw any meaningful conclusions with respect to need for additional surgery, quality of life, and serious adverse events.\nWe identified three ongoing trials comparing botulinum toxin with conventional surgeries in the varying types of strabismus, whose results will provide relevant evidence for our stated objectives. Future trials should be rigorously designed, and investigators should analyze outcome data appropriately and report adequate information to provide evidence of high certainty. Quality of life and cost-effectiveness should be examined in addition to clinical and safety outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found four studies with 242 adults and children with strabismus. The biggest study was in 110 children and the smallest study was in 30 people. The studies were conducted in Canada, South Africa, and Spain. Two studies lasted for six months and the longest study lasted 36 months.\nThe review showed the following.\n• Strabismus surgery may have better chance of recovering correct alignment of the eyes compared with botulinum toxin treatment in people requiring primary treatment or retreatment for strabismus.\n• People treated with botulinum toxin may need to re-treat with surgery more frequently than people treated with surgery.\n• Using botulinum toxin may not make any difference in the achievement of binocularity (the ability to focus on an object with both eyes and see a single image); and\n• Ptosis (the upper eyelid drooping over the eye) occurred commonly in people receiving botulinum toxin in these studies. The number of people affected ranged from 2 to 4 in 10 people. Everyone recovered when treatment was stopped."
}
] |
query-laysum
|
22102
|
[
{
"role": "user",
"content": "Abstract: Background\nComplete deletion of both the short arm of chromosome 1 (1p) and the long arm of chromosome 19 (19q), known as 1p/19q codeletion, is a mutation that can occur in gliomas. It occurs in a type of glioma known as oligodendroglioma and its higher grade counterpart known as anaplastic oligodendroglioma. Detection of 1p/19q codeletion in gliomas is important because, together with another mutation in an enzyme known as isocitrate dehydrogenase, it is needed to make the diagnosis of an oligodendroglioma. Presence of 1p/19q codeletion also informs patient prognosis and prediction of the best drug treatment. The main two tests in use are fluorescent in situ hybridisation (FISH) and polymerase chain reaction (PCR)-based loss of heterozygosity (LOH) assays (also known as PCR-based short tandem repeat or microsatellite analysis). Many other tests are available. None of the tests is perfect, although PCR-based LOH is expected to have very high sensitivity.\nObjectives\nTo estimate the sensitivity and specificity and cost-effectiveness of different deoxyribonucleic acid (DNA)-based techniques for determining 1p/19q codeletion status in glioma.\nSearch methods\nWe searched MEDLINE, Embase and BIOSIS up to July 2019. There were no restrictions based on language or date of publication. We sought economic evaluation studies from the results of this search and using the National Health Service Economic Evaluation Database.\nSelection criteria\nWe included cross-sectional studies in adults with glioma or any subtype of glioma, presenting raw data or cross-tabulations of two or more DNA-based tests for 1p/19q codeletion. We also sought economic evaluations of these tests.\nData collection and analysis\nWe followed procedures outlined in the Cochrane Handbook for Diagnostic Test Accuracy Reviews. Two review authors independently screened titles/abstracts/full texts, performed data extraction, and undertook applicability and risk of bias assessments using QUADAS-2. Meta-analyses used the hierarchical summary ROC model to estimate and compare test accuracy. We used FISH and PCR-based LOH as alternate reference standards to examine how tests compared with those in common use, and conducted a latent class analysis comparing FISH and PCR-based LOH. We constructed an economic model to evaluate cost-effectiveness.\nMain results\nWe included 53 studies examining: PCR-based LOH, FISH, single nucleotide polymorphism (SNP) array, next-generation sequencing (NGS), comparative genomic hybridisation (CGH), array comparative genomic hybridisation (aCGH), multiplex-ligation-dependent probe amplification (MLPA), real-time PCR, chromogenic in situ hybridisation (CISH), mass spectrometry (MS), restriction fragment length polymorphism (RFLP) analysis, G-banding, methylation array and NanoString. Risk of bias was low for only one study; most gave us concerns about how patients were selected or about missing data. We had applicability concerns about many of the studies because only patients with specific subtypes of glioma were included. 
1520 participants contributed to analyses using FISH as the reference, 1304 participants to analyses involving PCR-based LOH as the reference and 262 participants to analyses of comparisons between methods from studies not including FISH or PCR-based LOH.\nMost evidence was available for comparison of FISH with PCR-based LOH (15 studies, 915 participants): PCR-based LOH detected 94% of FISH-determined codeletions (95% credible interval (CrI) 83% to 98%) and FISH detected 91% of codeletions determined by PCR-based LOH (CrI 78% to 97%). Of tumours determined not to have a deletion by FISH, 94% (CrI 87% to 98%) also had no deletion detected by PCR-based LOH, and of those determined not to have a deletion by PCR-based LOH, 96% (CrI 90% to 99%) also had no deletion detected by FISH. The latent class analysis suggested that PCR-based LOH may be slightly more accurate than FISH. Most other techniques appeared to have high sensitivity (i.e. produced few false-negative results) for detection of 1p/19q codeletion when either FISH or PCR-based LOH was considered as the reference standard, although there was limited evidence. There was some indication of differences in specificity (false-positive rate) with some techniques. Both NGS and SNP array had high specificity when considered against FISH as the reference standard (NGS: 6 studies, 243 participants; SNP: 6 studies, 111 participants), although we rated certainty in the evidence as low or very low. NGS and SNP array also had high specificity when PCR-based LOH was considered the reference standard, although with much more uncertainty as these results were based on fewer studies (just one study with 49 participants for NGS and two studies with 33 participants for SNP array).\nG-banding had low sensitivity and specificity when PCR-based LOH was the reference standard. Although MS had very high sensitivity and specificity when both FISH and PCR-based LOH were considered the reference standard, these results were based on only one study with a small number of participants. Real-time PCR also showed high specificity with FISH as a reference standard, although there were only two studies including 40 participants.\nWe found no relevant economic evaluations. Our economic model using FISH as the reference standard suggested that the resource-optimising test depends on which measure of diagnostic accuracy is most important. With FISH as the reference standard, MLPA is likely to be cost-effective if society was willing to pay GBP 1000 or less for a true positive detected. However, as the value placed on a true positive increased, CISH was most cost-effective. Findings differed when the outcome measure changed to either true negative detected or correct diagnosis. When PCR-based LOH was used as the reference standard, MLPA was likely to be cost-effective for all measures of diagnostic accuracy at lower threshold values for willingness to pay. However, as the threshold values increased, none of the tests were clearly more likely to be considered cost-effective.\nAuthors' conclusions\nIn our review, most techniques (except G-banding) appeared to have good sensitivity (few false negatives) for detection of 1p/19q codeletions in glioma against both FISH and PCR-based LOH as a reference standard. However, we judged the certainty of the evidence low or very low for all the tests.
There are possible differences in specificity, with both NGS and SNP array having high specificity (fewer false positives) for 1p/19q codeletion when considered against FISH as the reference standard. The economic analysis should be interpreted with caution due to the small number of studies.\n\nGiven the provided abstract, please respond to the following query: \"What are the implications of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The limited evidence suggests that currently used techniques show good sensitivity for detection of 1p/19q codeletion. NGS and SNP arrays may have higher specificity when FISH is the reference standard, but this comes at greater cost per test."
}
] |
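The 1p/19q record above compares index tests against FISH or PCR-based LOH as a reference standard, and the percentages it quotes are, in effect, sensitivities and specificities read off a cross-tabulation of two tests. The sketch below shows that reduction for a single 2x2 table. The `index_vs_reference` helper and its counts are hypothetical; the review itself estimated accuracy with a hierarchical summary ROC model and reported credible intervals, not this naive calculation.

```python
# Illustrative sketch only: sensitivity and specificity of an index test
# against a chosen reference standard, from a hypothetical 2x2 cross-tabulation.

def index_vs_reference(tp, fp, fn, tn):
    """Cross-tabulation of index test vs reference standard.

    tp: reference codeletion present, index test positive
    fp: reference codeletion absent,  index test positive
    fn: reference codeletion present, index test negative
    tn: reference codeletion absent,  index test negative
    """
    sensitivity = tp / (tp + fn)  # share of reference-positive tumours detected
    specificity = tn / (tn + fp)  # share of reference-negative tumours called negative
    return sensitivity, specificity

if __name__ == "__main__":
    sens, spec = index_vs_reference(tp=94, fp=6, fn=6, tn=94)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```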
query-laysum
|
22103
|
[
{
"role": "user",
"content": "Abstract: Background\nMany women would like to avoid pharmacological or invasive methods of pain management in labour and this may contribute to the popularity of complementary methods of pain management. This review examined currently available evidence on the use of relaxation therapies for pain management in labour. This is an update of a review first published in 2011.\nObjectives\nTo examine the effects of mind-body relaxation techniques for pain management in labour on maternal and neonatal well-being during and after labour.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register (9 May 2017), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, Issue 5 2017), MEDLINE (1966 to 24 May 2017), CINAHL (1980 to 24 May 2017), the Australian New Zealand Clinical Trials Registry (18 May 2017), ClinicalTrials.gov (18 May 2017), the ISRCTN Register (18 May 2017), the WHO International Clinical Trials Registry Platform (ICTRP) (18 May 2017), and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials (including quasi randomised and cluster trials) comparing relaxation methods with standard care, no treatment, other non-pharmacological forms of pain management in labour or placebo.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We attempted to contact study authors for additional information. We assessed evidence quality with GRADE methodology.\nMain results\nThis review update includes 19 studies (2519 women), 15 of which (1731 women) contribute data. Interventions examined included relaxation, yoga, music and mindfulness. Approximately half of the studies had a low risk of bias for random sequence generation and attrition bias. The majority of studies had a high risk of bias for performance and detection bias, and unclear risk of bias for, allocation concealment, reporting bias and other bias. We assessed the evidence from these studies as ranging from low to very low quality, and therefore the effects below should be interpreted with caution.\nRelaxation\nWe found that relaxation compared to usual care provided lowered the intensity of pain (measured on a scale of 0 to 10 with low scores indicating less pain) during the latent phase of labour (mean difference (MD) -1.25, 95% confidence interval (CI) -1.97 to -0.53, one trial, 40 women). Four trials reported pain intensity in the active phase; there was high heterogeneity between trials and very low-quality evidence suggested that there was no strong evidence that the effects were any different between groups for this outcome (MD -1.08, 95% CI -2.57 to 0.41, four trials, 271 women, random-effects analysis). Very low-quality evidence showed that women receiving relaxation reported greater satisfaction with pain relief during labour (risk ratio (RR) 8.00, 95% CI 1.10 to 58.19, one trial, 40 women), and showed no clear benefit for satisfaction with childbirth experience (assessed using different scales) (standard mean difference (SMD) -0.03, 95% CI -0.37 to 0.31, three trials, 1176 women). For safety outcomes there was very low-quality evidence of no clear reduction in assisted vaginal birth (average RR 0.61, 95% CI 0.20 to 1.84, four trials, 1122 women) or in caesarean section rates (average RR 0.73, 95% CI 0.26 to 2.01, four trials, 1122 women). 
Sense of control in labour, and breastfeeding were not reported under this comparison.\nYoga\nWhen comparing yoga to control interventions there was low-quality evidence that yoga lowered pain intensity (measured on a scale of 0 to 10) with low scores indicating less pain) (MD -6.12, 95% CI -11.77 to -0.47, one trial, 66 women), greater satisfaction with pain relief (MD 7.88, 95% CI 1.51 to 14.25, one trial, 66 women) and greater satisfaction with childbirth experience (MD 6.34, 95% CI 0.26 to 12.42 one trial, 66 women (assessed using the Maternal Comfort Scale with higher score indicating greater comfort). Sense of control in labour, breastfeeding, assisted vaginal birth, and caesarean section were not reported under this comparison.\nMusic\nWhen comparing music to control interventions there was evidence of lower pain intensity in the latent phase for women receiving music (measured on a scale of 0 to 10 with low scores indicating less pain) (MD -0.73, 95% CI -1.01 to -0.45, random-effects analysis, two trials, 192 women) and very low-quality evidence of no clear benefit in the active phase (MD -0.51, 95% CI -1.10 to 0.07, three trials, 217 women). Very low-quality evidence suggested no clear benefit in terms of reducing assisted vaginal birth (RR 0.41, 95% CI 0.08 to 2.05, one trial, 156 women) or caesarean section rate (RR 0.78, 95% CI 0.36 to 1.70, two trials, 216 women). Satisfaction with pain relief, sense of control in labour, satisfaction with childbirth experience, and breastfeeding were not reported under this comparison.\nAudio analgesia\nOne trial evaluating audio analgesia versus control only reported one outcome and showed no evidence of benefit in satisfaction with pain relief.\nMindfulness\nOne trial evaluating mindfulness versus usual care found an increase in sense of control for the mindfulness group (using the Childbirth Self-Efficacy Inventory) (MD 31.30, 95% CI 1.61 to 60.99, 26 women). There is no strong evidence that the effects were any different between groups for satisfaction in childbirth, or for caesarean section rate, need for assisted vaginal delivery or need for pharmacological pain relief. No other outcomes were reported in this trial.\nAuthors' conclusions\nRelaxation, yoga and music may have a role with reducing pain, and increasing satisfaction with pain relief, although the quality of evidence varies between very low to low. There was insufficient evidence for the role of mindfulness and audio-analgesia. The majority of trials did not report on the safety of the interventions. Further randomised controlled trials of relaxation modalities for pain management in labour are needed. Trials should be adequately powered and include clinically relevant outcomes such as those described in this review.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The use of some relaxation therapies, yoga, or music may possibly be helpful with reducing the intensity of pain, and in helping women feel more in control and satisfied with their labours. However, the wide variations in types of techniques used in these studies make it difficult to say specifically what might help women. Therefore further research studies are needed."
}
] |
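The relaxation record above pools mean differences across trials with a random-effects analysis and reports between-trial heterogeneity. As a rough, assumption-laden sketch of that kind of pooling, the code below implements a basic DerSimonian-Laird estimator with an I² summary. The `dersimonian_laird` function and the example inputs are illustrative only; they do not reproduce the review's actual computations, which were performed in standard meta-analysis software on the included trials' data.

```python
# Illustrative sketch only: DerSimonian-Laird random-effects pooling of
# per-trial estimates (e.g. mean differences) with hypothetical inputs.
import math

def dersimonian_laird(estimates, std_errors):
    """Return (pooled estimate, 95% CI low, 95% CI high, I^2 as a percentage)."""
    k = len(estimates)
    w = [1 / se ** 2 for se in std_errors]                     # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)  # fixed-effect pooled mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, estimates))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0             # between-trial variance
    w_re = [1 / (se ** 2 + tau2) for se in std_errors]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled, i2

if __name__ == "__main__":
    # Hypothetical mean differences in pain score (0-10 scale) and their SEs.
    print(dersimonian_laird([-1.2, -0.3, -1.8, 0.4], [0.4, 0.5, 0.6, 0.5]))
```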
query-laysum
|
22104
|
[
{
"role": "user",
"content": "Abstract: Background\nIdiopathic intracranial hypertension (IIH) has an estimated incidence of one to three people per 100,000 people per year, and occurs most commonly in obese, young women. IIH is associated with severe morbidity, notably due to a significant threat to sight and severe headache. Several different management options have been proposed. Conservative measures centre on weight loss. Pharmacological therapy includes use of diuretics. Refractory and sight-threatening cases demand surgical intervention, most often in the form of cerebrospinal fluid (CSF) diversion or optic nerve sheath fenestration. Other treatments include venous sinus stenting and bariatric surgery.\nObjectives\nTo assess the effects of any intervention for IIH in any patient group.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2015 Issue 6), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to July 2015), EMBASE (January 1980 to July 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 22 July 2015.\nSelection criteria\nWe included only randomised controlled trials (RCTs) in which any intervention was compared to placebo, or to another form of treatment, for people with a clinical diagnosis of IIH.\nData collection and analysis\nTwo review authors independently assessed the search results for trials to be included in the review. We resolved any discrepancies by third party decision.\nMain results\nWe identified two completed RCTs (enrolling a total of 211 participants and conducted in the UK and US) and two ongoing trials that met the inclusion criteria. Both completed trials compared acetazolamide to placebo, in conjunction with a weight loss intervention in both groups. Attrition bias was a problem in both trials with high loss to follow-up, in one study this loss to follow-up occurred particularly in the acetazolamide arm. One trial was unmasked and we judged it to be at risk of performance and detection bias.\nIn these studies, change in visual acuity was similar in the treatment and control groups as measured by logMAR acuity. In one study people in the acetalomazide group had a similar change in logMAR acuity compared to the placebo group between baseline and 12 months in the right eye (MD 0.04 logMAR, 95% CI -0.08 to 0.16) and left eye (MD 0.03 logMAR, 95% CI -0.09 to 0.15). In the other study people in the acetalomazide group had a similar change in vision over six months compared with people in the placebo group (mean difference in change in letters read was 0.01 (95% CI -1.45 to 1.46). One study reported no cases of visual loss in 21 people treated with acetalomazide compared to 2/20 cases in the placebo group (odds ratio 0.17, 95% CI 0.01, 3.82).\nThe prespecified outcome for this review was reduction in CSF pressure to normal levels which was not reported by the two trials. 
One trial reported that, in a subsample of 85 participants who agreed to lumbar puncture at 6 months, people in the acetazolamide group on average had a greater reduction in CSF pressure (MD -59.9 mmH2O, 95% CI -96.4, -23.4).\nIn one study, people in the acetazolamide group on average experienced a greater reduction in papilloedema as assessed by fundus photographs MD -0.70 (95% CI -1.00 to -0.40) and by clinical grading MD -0.91 (95% CI -1.27 to -0.54) between baseline and six months in the study eye.\nHeadache was recorded as present/absent in one study at 12 months (OR 0.42, 95% CI 0.12,1.41, 41 participants). Both studies reported headache on visual analogue scales (different ones) but results were inconclusive (MD for change in headache score measured on 10-point visual analogue scale at 12 months was 1.0 (-1.80, 3.70, 41 participants) and MD for change in headache score on a 6 point scale measured at 6 months was -0.45 (-3.5,2.6, number of participants unclear).\nIn one study, a similar proportion of people in the acetazolamide group were in remission (however, the trial authors did not state their definition of this term) at 12 months compared to the placebo group. However, the 95% CIs were wide and there is considerable uncertainty as to the effect (OR 1.13 (95% CI 0.32 to 3.90, 41 participants).\nIn one study of 185 participants, people in the acetazolamide group had an increased risk of decreased CO2, diarrhoea, dysgeusia, fatigue, nausea, paresthesia, tinnitus and vomiting compared to people in the placebo group. In general, the estimates of effect were uncertain with wide 95% CIs. Adverse effects were not reported in the other study.\nOne study reported that quality of life was better in acetazolamide-treated patients based on the visual quality of life (VFQ-25) (MD 6.35, 95% CI 2.22 to 10.47) and the physical (MD 3.02, 95% CI 0.34 to 5.70) and mental (MD 3.45, 95% CI 0.35 to 6.55) components of the 36-Item Short Form Health Survey tool at six months. Costs were not reported in either study.\nWe judged the evidence to be low certainty (GRADE) downgrading for imprecision and risk of bias.\nAuthors' conclusions\nAlthough the two included RCTs showed modest benefits for acetazolamide for some outcomes, there is insufficient evidence to recommend or reject the efficacy of this intervention, or any other treatments currently available, for treating people with IIH. Further high-quality RCTs are required in order to adequately assess the effect of acetazolamide therapy in people with IIH.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We attempted to find all of the published randomised controlled trials (RCT, a type of rigorous study that compares one treatment option against another) that investigated any treatment for idiopathic intracranial hypertension (IIH) in any patient group. We looked at a number of outcomes including reduction in vision, improvement of headache and quality of life."
}
] |
query-laysum
|
22105
|
[
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers, also known as bedsores, decubitus ulcers and pressure injuries, are localised areas of injury to the skin or the underlying tissue, or both. Dressings are widely used to treat pressure ulcers and there are many options to choose from including alginate dressings. A clear and current overview of current evidence is required to facilitate decision-making regarding dressing use for the treatment of pressure ulcers. This review is part of a suite of Cochrane reviews investigating the use of dressings in the treatment of pressure ulcers. Each review will focus on a particular dressing type.\nObjectives\nTo assess the effects of alginate dressings for treating pressure ulcers in any care setting.\nSearch methods\nFor this review, in April 2015 we searched the following databases the Cochrane Wounds Group Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE; and EBSCO CINAHL. There were no restrictions based on language or date of publication.\nSelection criteria\nPublished or unpublished randomised controlled trials (RCTs) comparing the effects of alginate with alternative wound dressings or no dressing in the treatment of pressure ulcers (stage II or above).\nData collection and analysis\nTwo review authors independently performed study selection, risk of bias assessment and data extraction.\nMain results\nWe included six studies (336 participants) in this review; all studies had two arms. The included studies compared alginate dressings with six other interventions that included: hydrocolloid dressings, silver containing alginate dressings, and radiant heat therapy. Each of the six comparisons included just one study and these had limited participant numbers and short follow-up times. All the evidence was of low or very low quality. Where data were available there was no evidence of a difference between alginate dressings and alternative treatments in terms of complete wound healing or adverse events.\nAuthors' conclusions\nThe relative effects of alginate dressings compared with alternative treatments are unclear. The existing trials are small, of short duration and at risk of bias. Decision makers may wish to consider aspects such as cost of dressings and the wound management properties offered by each dressing type, for example, exudate management.\n\nGiven the provided abstract, please respond to the following query: \"What are pressure ulcers, and who is at risk?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Pressure ulcers, also known as bedsores, decubitus ulcers and pressure injuries, are wounds involving the skin and sometimes the tissue that lies underneath. Pressure ulcers can be painful, may become infected, and so affect people's quality of life. People at risk of developing pressure ulcers include those with spinal cord injuries, and those who are immobile or who have limited mobility - such as elderly people and people who are ill as a result of short-term or long-term medical conditions.\nIn 2004 the total annual cost of treating pressure ulcers in the UK was estimated as being GBP 1.4 to 2.1 billion, which was equivalent to 4% of the total National Health Service expenditure. People with pressure ulcers have longer stays in hospital, and this increases hospital costs. Figures from the USA for 2006 suggest that half a million hospital stays had 'pressure ulcer' noted as a diagnosis; the total hospital costs of these stays was USD 11 billion."
}
] |
query-laysum
|
22106
|
[
{
"role": "user",
"content": "Abstract: Background\nLymphomas are the third most common malignancy in childhood. Cure rates are high but have reached a plateau. Therefore new treatment modalities should be developed. Antibody therapy is a successful new treatment option in adult lymphoma. However, none of the therapeutic antibodies available for adults with cancer have been approved for treatment of paediatric lymphoma.\nObjectives\nTo assess the efficacy of antibody therapy for childhood lymphoma in terms of survival, response and relapse rates, compared with therapy not including antibody treatment. To assess quality of life and the occurrence of adverse effects caused by antibody therapy treatment in children compared with therapy not including antibody treatment.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, 2014, Issue 10), MEDLINE in PubMed (from 1945 to October 2014), EMBASE in EMBASE.com (from 1980 to October 2014) and reference lists of relevant articles. Furthermore, we searched conference proceedings abstracts of SIOP, ASCO and ASH for studies from 2009 to 2013), and the World Health Organization (WHO) ICTRP portal and ClinicalTrials.gov for ongoing trials.\nSelection criteria\nRandomised controlled trials and controlled clinical trials comparing conventional therapy with antibody therapy in children with lymphoma.\nData collection and analysis\nTwo authors independently performed the study selection.\nMain results\nWe found no studies meeting the inclusion criteria of the review.\nAuthors' conclusions\nAt this moment, it is not possible to draw evidence-based conclusions regarding clinical practice. Phase I and II studies show a positive effect of using antibody therapy in childhood lymphoma. Further research is needed to evaluate and implement antibody therapy for paediatric lymphoma.\n\nGiven the provided abstract, please respond to the following query: \"Study funding sources\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included no studies in our analysis."
}
] |
query-laysum
|
22107
|
[
{
"role": "user",
"content": "Abstract: Background\nHigh-flow nasal cannulae (HFNC) deliver high flows of blended humidified air and oxygen via wide-bore nasal cannulae and may be useful in providing respiratory support for adults experiencing acute respiratory failure, or at risk of acute respiratory failure, in the intensive care unit (ICU). This is an update of an earlier version of the review.\nObjectives\nTo assess the effectiveness of HFNC compared to standard oxygen therapy, or non-invasive ventilation (NIV) or non-invasive positive pressure ventilation (NIPPV), for respiratory support in adults in the ICU.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, Web of Science, and the Cochrane COVID-19 Register (17 April 2020), clinical trial registers (6 April 2020) and conducted forward and backward citation searches.\nSelection criteria\nWe included randomized controlled studies (RCTs) with a parallel-group or cross-over design comparing HFNC use versus other types of non-invasive respiratory support (standard oxygen therapy via nasal cannulae or mask; or NIV or NIPPV which included continuous positive airway pressure and bilevel positive airway pressure) in adults admitted to the ICU.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane.\nMain results\nWe included 31 studies (22 parallel-group and nine cross-over designs) with 5136 participants; this update included 20 new studies. Twenty-one studies compared HFNC with standard oxygen therapy, and 13 compared HFNC with NIV or NIPPV; three studies included both comparisons. We found 51 ongoing studies (estimated 12,807 participants), and 19 studies awaiting classification for which we could not ascertain study eligibility information.\nIn 18 studies, treatment was initiated after extubation. In the remaining studies, participants were not previously mechanically ventilated.\nHFNC versus standard oxygen therapy\nHFNC may lead to less treatment failure as indicated by escalation to alternative types of oxygen therapy (risk ratio (RR) 0.62, 95% confidence interval (CI) 0.45 to 0.86; 15 studies, 3044 participants; low-certainty evidence). HFNC probably makes little or no difference in mortality when compared with standard oxygen therapy (RR 0.96, 95% CI 0.82 to 1.11; 11 studies, 2673 participants; moderate-certainty evidence). HFNC probably results in little or no difference to cases of pneumonia (RR 0.72, 95% CI 0.48 to 1.09; 4 studies, 1057 participants; moderate-certainty evidence), and we were uncertain of its effect on nasal mucosa or skin trauma (RR 3.66, 95% CI 0.43 to 31.48; 2 studies, 617 participants; very low-certainty evidence). We found low-certainty evidence that HFNC may make little or no difference to the length of ICU stay according to the type of respiratory support used (MD 0.12 days, 95% CI -0.03 to 0.27; 7 studies, 1014 participants). We are uncertain whether HFNC made any difference to the ratio of partial pressure of arterial oxygen to the fraction of inspired oxygen (PaO2/FiO2) within 24 hours of treatment (MD 10.34 mmHg, 95% CI -17.31 to 38; 5 studies, 600 participants; very low-certainty evidence). 
We are uncertain whether HFNC made any difference to short-term comfort (MD 0.31, 95% CI -0.60 to 1.22; 4 studies, 662 participants, very low-certainty evidence), or to long-term comfort (MD 0.59, 95% CI -2.29 to 3.47; 2 studies, 445 participants, very low-certainty evidence).\nHFNC versus NIV or NIPPV\nWe found no evidence of a difference between groups in treatment failure when HFNC were used post-extubation or without prior use of mechanical ventilation (RR 0.98, 95% CI 0.78 to 1.22; 5 studies, 1758 participants; low-certainty evidence), or in-hospital mortality (RR 0.92, 95% CI 0.64 to 1.31; 5 studies, 1758 participants; low-certainty evidence). We are very uncertain about the effect of using HFNC on incidence of pneumonia (RR 0.51, 95% CI 0.17 to 1.52; 3 studies, 1750 participants; very low-certainty evidence), and HFNC may result in little or no difference to barotrauma (RR 1.15, 95% CI 0.42 to 3.14; 1 study, 830 participants; low-certainty evidence). HFNC may make little or no difference to the length of ICU stay (MD -0.72 days, 95% CI -2.85 to 1.42; 2 studies, 246 participants; low-certainty evidence). The ratio of PaO2/FiO2 may be lower up to 24 hours with HFNC use (MD -58.10 mmHg, 95% CI -71.68 to -44.51; 3 studies, 1086 participants; low-certainty evidence). We are uncertain whether HFNC improved short-term comfort when measured using comfort scores (MD 1.33, 95% CI 0.74 to 1.92; 2 studies, 258 participants) and responses to questionnaires (RR 1.30, 95% CI 1.10 to 1.53; 1 study, 168 participants); evidence for short-term comfort was very low certainty. No studies reported on nasal mucosa or skin trauma.\nAuthors' conclusions\nHFNC may lead to less treatment failure when compared to standard oxygen therapy, but probably makes little or no difference to treatment failure when compared to NIV or NIPPV. For most other review outcomes, we found no evidence of a difference in effect. However, the evidence was often of low or very low certainty. We found a large number of ongoing studies; including these in future updates could increase the certainty or may alter the direction of these effects.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People in the ICU may need support to breathe and HFNC are one option for this. HFNC deliver warm air and oxygen through small plastic tubes that sit inside the nostrils. The airflow is at a higher rate each minute than standard oxygen therapy (which is not always warmed and may be delivered through a plastic face mask or nasal cannulae). Other support options include non-invasive ventilation (NIV) or non-invasive positive pressure ventilation (NIPPV). These approaches use mild pressure to push air into the lungs through tightly-fitting face masks or a helmet covering the entire head. Invasive mechanical ventilation provides the highest level of support, using a ventilator (artificial breathing machine) to push air in and out of the lungs through a plastic tube inserted into the windpipe."
}
] |
query-laysum
|
22108
|
[
{
"role": "user",
"content": "Abstract: Background\nAsthma is an illness that commonly affects adults and children, and it serves as a common reason for children to attend emergency departments. An asthma exacerbation is characterised by acute or subacute worsening of shortness of breath, cough, wheezing, and chest tightness and may be triggered by viral respiratory infection, poor compliance with usual medication, a change in the weather, or exposure to allergens or irritants.\nMost children with asthma have mild or moderate exacerbations and respond well to first-line therapy (inhaled short-acting beta-agonists and systemic corticosteroids). However, the best treatment for the small proportion of seriously ill children who do not respond to first-line therapy is not well understood. Currently, a large number of treatment options are available and there is wide variation in management.\nObjectives\nMain objective\n- To summarise Cochrane Reviews with or without meta-analyses of randomised controlled trials on the efficacy and safety of second-line treatment for children with acute exacerbations of asthma (i.e. after first-line treatments, titrated oxygen delivery, and administration of intermittent inhaled short-acting beta2-agonists and oral corticosteroids have been tried and have failed)\nSecondary objectives\n- To identify gaps in the current evidence base that will inform recommendations for future research and subsequent Cochrane Reviews\n- To categorise information on reported outcome measures used in trials of escalation of treatment for acute exacerbations of asthma in children, and to make recommendations for development and reporting of standard outcomes in future trials and reviews\n- To identify relevant randomised controlled trials that have been published since the date of publication of each included review\nMethods\nWe included Cochrane Reviews assessing interventions for children with acute exacerbations of asthma. We searched the Cochrane Database of Systematic Reviews. The search is current to 28 December 2019. We also identified trials that were potentially eligible for, but were not currently included in, published reviews. We assessed the quality of included reviews using the ROBIS criteria (tool used to assess risk of bias in systematic reviews). We presented an evidence synthesis of data from reviews alongside an evidence map of clinical trials. Primary outcomes were length of stay, hospital admission, intensive care unit admission, and adverse effects. We summarised all findings in the text and reported data for each outcome in 'Additional tables'.\nMain results\nWe identified 17 potentially eligible Cochrane Reviews but extracted data from, and rated the quality of, 13 reviews that reported results for children alone. We excluded four reviews as one did not include any randomised controlled trials (RCTs), one did not provide subgroup data for children, and the last two had been updated and replaced by subsequent reviews.\nThe 13 reviews included 67 trials; the number of trials in each review ranged from a single trial up to 27 trials. The vast majority of comparisons included between one and three trials, involving fewer than 100 participants. The total number of participants included in reviews ranged from 40 to 2630. All studies included children; 16 (24%) included children younger than two years of age. Most of the reviews reported search dates older than four years.\nWe have summarised the published evidence as outlined in Cochrane Reviews. 
Key findings, in terms of our primary outcomes, are that (1) intravenous magnesium sulfate was the only intervention shown to reduce hospital length of stay (high-certainty evidence); (2) no evidence suggested that any intervention reduced the risk of intensive care admission (low- to very low-certainty evidence); (3) the risk of hospital admission was reduced by the addition of inhaled anticholinergic agents to inhaled beta2-agonists (moderate-certainty evidence), the use of intravenous magnesium sulfate (high-certainty evidence), and the use of inhaled heliox (low-certainty evidence); (4) the addition of inhaled magnesium sulfate to usual bronchodilator therapy appears to reduce serious adverse events during hospital admission (moderate-certainty evidence); (5) aminophylline increased vomiting compared to placebo (moderate-certainty evidence) and increased nausea and nausea/vomiting compared to intravenous beta2-agonists (low-certainty evidence); and (6) the addition of anticholinergic therapy to short-acting beta2-agonists appeared to reduce the risk of nausea (high-certainty evidence) and tremor (moderate-certainty evidence) but not vomiting (low-certainty evidence).\nWe considered 4 of the 13 reviews to be at high risk of bias based on the ROBIS framework. In all cases, this was due to concerns regarding identification and selection of studies. The certainty of evidence varied widely (by review and also by outcome) and ranged from very low to high.\nAuthors' conclusions\nThis overview provides the most up-to-date evidence on interventions for escalation of therapy for acute exacerbations of asthma in children from Cochrane Reviews of randomised controlled trials. A vast majority of comparisons involved between one and three trials and fewer than 100 participants, making it difficult to assess the balance between benefits and potential harms. Due to the lack of comparative studies between various treatment options, we are unable to make firm practice recommendations. Intravenous magnesium sulfate appears to reduce both hospital length of stay and the risk of hospital admission. Hospital admission is also reduced with the addition of inhaled anticholinergic agents to inhaled beta2-agonists. However, further research is required to determine which patients are most likely to benefit from these therapies.\nDue to the relatively rare incidence of acute severe paediatric asthma, multi-centre research will be required to generate high-quality evidence. A number of existing Cochrane Reviews should be updated, and we recommend that a new review be conducted on the use of high-flow nasal oxygen therapy. Important priorities include development of an internationally agreed core outcome set for future trials in acute severe asthma exacerbations and determination of clinically important differences in these outcomes, which can then inform adequately powered future trials.\n\nGiven the provided abstract, please respond to the following query: \"Recommendations for future research\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One of the major problems with existing research is that a small number of patients is included in each study, likely because severe acute asthma in children is relatively uncommon. To work out whether or not a treatment is effective and/or to tell the difference between treatments, a research study must include enough patients receiving each treatment. Therefore, high-quality research into severe acute asthma in children is likely to require studies that include a number of hospitals.\nIt is also important to be able to compare results across studies. To do this, researchers across the world should agree on a standard way of measuring results in studies of acute severe asthma in children."
}
] |
query-laysum
|
22109
|
[
{
"role": "user",
"content": "Abstract: Background\nExtracorporeal membrane oxygenation (ECMO) may provide benefit in certain populations of adults, including those with severe cardiac failure, severe respiratory failure, and cardiac arrest. However, it is also associated with serious short- and long-term complications, and there remains a lack of high-quality evidence to guide practice. Recently several large randomized controlled trials (RCTs) have been published, therefore, we undertook an update of our previous systematic review published in 2014.\nObjectives\nTo evaluate whether venovenous (VV), venoarterial (VA), or ECMO cardiopulmonary resuscitation (ECPR) improve mortality compared to conventional cardiopulmonary support in critically ill adults.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was March 2022. The search was limited to English language only.\nSelection criteria\nWe included RCTs, quasi-RCTs, and cluster-RCTs that compared VV ECMO, VA ECMO or ECPR to conventional support in critically ill adults.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcome was 1. all-cause mortality at day 90 to one year. Our secondary outcomes were 2. length of hospital stay, 3. survival to discharge, 4. disability, 5. adverse outcomes/safety events, 6. health-related quality of life, 7. longer-term health status, and 8. cost-effectiveness. We used GRADE to assess certainty of evidence.\nMain results\nFive RCTs met our inclusion criteria, with four new studies being added to the original review (total 757 participants). Two studies were of VV ECMO (429 participants), one VA ECMO (41 participants), and two ECPR (285 participants). Four RCTs had a low risk of bias and one was unclear, and the overall certainty of the results (GRADE score) was moderate, reduced primarily due to indirectness of the study populations and interventions.\nECMO was associated with a reduction in 90-day to one-year mortality compared to conventional treatment (risk ratio [RR] 0.80, 95% confidence interval [CI] 0.70 to 0.92; P = 0.002, I2 = 11%). This finding remained stable after performing a sensitivity analysis by removing the single trial with an uncertain risk of bias. Subgroup analyses did not reveal a significant subgroup effect across VV, VA, or ECPR modes (P = 0.73).\nFour studies reported an increased risk of major hemorrhage with ECMO (RR 3.32, 95% CI 1.90 to 5.82; P < 0.001), while two studies reported no difference in favorable neurologic outcome (RR 2.83, 95% CI 0.36 to 22.42; P = 0.32). Other secondary outcomes were not consistently reported across the studies.\nAuthors' conclusions\nIn this updated systematic review, which included four additional RCTs, we found that ECMO was associated with a reduction in day-90 to one-year all-cause mortality, as well as three times increased risk of bleeding. However, the certainty of this result was only low to moderate, limited by a low number of small trials, clinical heterogeneity, and indirectness across studies.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, ECMO was effective at reducing death compared to conventional management. However, ECMO was associated with an increased risk of major bleeding. There were no differences in outcomes across the different types of ECMO (VV, VA, or ECPR) and other effects of ECMO (such as neurologic injury or quality of life) were poorly reported."
}
] |
query-laysum
|
22110
|
[
{
"role": "user",
"content": "Abstract: Background\nHigh blood pressure constitutes one of the leading causes of mortality and morbidity all over the world. At the same time, heavy drinking increases the risk for developing cardiovascular diseases, including cardiomyopathy, hypertension, atrial arrhythmias, or stroke. Several studies have already assessed specifically the relationship between alcohol intake and hypertension. However, the potential effect on blood pressure of alcohol intake reduction interventions is largely unknown.\nObjectives\nTo assess the effect of any intervention to reduce alcohol intake in terms of blood pressure decrease in hypertensive people with alcohol consumption compared to a control intervention or no intervention at all. To determine additional effects related to mortality, major cardiovascular events, serious adverse events, or quality of life.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomised controlled trials up to June 2020: the Cochrane Hypertension Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 5, 2020), MEDLINE Ovid (from 1946), MEDLINE Ovid Epub Ahead of Print, and MEDLINE Ovid In-Process, Embase Ovid (from 1974), ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. Trial authors were contacted when needed and no language restrictions were applied.\nSelection criteria\nWe included randomised controlled trials with minimum 12 weeks duration and including 50 or more subjects per group with quantitative measurement of alcohol consumption and/or biological measurement of the outcomes of interest.\nParticipants were adults (16 years of age or older) with systolic blood pressure (SBP) greater than 140 mmHg and diastolic blood pressure (DBP) greater than 90 mmHg, and SBP ≥ 130 or DBP ≥ 80 mmHg in participants with diabetes. We included any intervention implemented to reduce their alcohol intake.\nData collection and analysis\nTwo review authors independently assessed search results and extracted data using standard methodological procedures adopted by Cochrane.\nMain results\nA total of 1210 studies were screened. We included one randomised controlled trial involving a total of 269 participants with a two-year follow-up. Individual patient data for all participants were provided and used in this review.\nNo differences were found between the cognitive-behavioural intervention group and the control group for overall mortality (RR 0.72, 95% CI 0.16 to 3.17; low-certainty evidence), cardiovascular mortality (not estimable) and cardiovascular events (RR 0.80, 95% CI 0.36 to 1.79; very low-certainty evidence). There was no statistical difference in systolic blood pressure (SBP) reduction (Mean Difference (MD) -0.92 mmHg, 95% confidence interval (CI) -5.66 to 3.82 mmHg; very low-certainty evidence) or diastolic blood pressure (DBP) decrease (MD 0.98 mmHg, 95% CI -1.69 to 3.65 mmHg; low-certainty evidence) between the cognitive-behavioural intervention group and the control group. We also did not find any differences in the proportion of subjects with SBP < 140 mmHg and DBP < 90 mmHg (Risk Ratio (RR) 1.21, 95% CI 0.88 to 1.65; very low-certainty evidence).\nConcerning secondary outcomes, the alcohol intake was significantly reduced in the cognitive-behavioural intervention compared with the control group (MD 191.33 g, 95% CI 85.36 to 297.30 g). 
We found no differences between the active and control intervention in the proportion of subjects with lower-risk alcohol intake versus higher-risk and extreme drinkers at the end of the study (RR 1.04, 95% CI 0.68 to 1.60). There were no estimable results for the quality of life outcome.\nAuthors' conclusions\nAn intervention for decreasing alcohol intake consumption did not result in differences in systolic and diastolic blood pressure when compared with a control intervention, although there was a reduction in alcohol intake favouring the active intervention. No differences were found either for overall mortality, cardiovascular mortality or cardiovascular events. No data on serious adverse events or quality of life were available to assess. Adequate randomised controlled trials are needed to provide additional evidence on this specific question.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We evaluated whether an intervention to reduce alcohol intake has a greater impact than other kinds of interventions (or no intervention at all) in reducing blood pressure in people with high blood pressure and alcohol consumption."
}
] |
query-laysum
|
22111
|
[
{
"role": "user",
"content": "Abstract: Background\nObstructive sleep apnoea (OSA) is a common sleep disorder characterised by partial or complete upper airway occlusion during sleep, leading to intermittent cessation (apnoea) or reduction (hypopnoea) of airflow and dips in arterial oxygen saturation during sleep. Many patients with recognised and unrecognised OSA receive hypnotics, sedatives and opiates/opioids to treat conditions including pain, anxiety and difficulty sleeping. Concerns have been expressed that administration of these drugs to people with co-existing OSA may worsen OSA.\nObjectives\nTo investigate whether administration of sedative and hypnotic drugs exacerbates the severity of OSA (as measured by the apnoea-hypopnoea index (AHI) or the 4% oxygen desaturation index (ODI)) in people with known OSA.\nSearch methods\nWe searched the Cochrane Airways Group Specialised Register (CAGR) of trials. The search was current as of March 2015.\nSelection criteria\nRandomised, placebo-controlled trials including adult participants with confirmed OSA, where participants were randomly assigned to use opiates or opioids, sedatives, hypnotics or placebo. We included participants already using continuous positive airway pressure (CPAP) or a mandibular advancement device.\nData collection and analysis\nWe used standard methodological procedures as recommended by The Cochrane Collaboration.\nMain results\nFourteen studies examining the effects of 10 drugs and including a total of 293 participants contributed to this review. Trials were small, with only two trials, which used sodium oxybate, recruiting more than 40 participants, and all but three trials were of only one to three nights in duration. Most participants had mild to moderate OSA with a mean AHI of 11 to 25 events/h, and only two trials recruited patients with severe OSA. Two trials investigating the effects of ramelteon, a treatment option for insomnia, recruited adults over 60 years of age with OSA and concomitant insomnia.\nThe drugs studied in this review included remifentanil (infusion) 0.75 mcg/kg/h, eszopiclone 3 mg, zolpidem 10 and 20 mg, brotizolam 0.25 mg, flurazepam 30 mg, nitrazepam 10 mg to 15 mg, temazepam 10 mg, triazolam 0.25 mg, ramelteon 8 mg and 16 mg and sodium oxybate 4.5 g and 9 g. We were unable to pool most of the data, with the exception of data for eszopiclone and ramelteon.\nNone of the drugs in this review produced a significant increase in AHI or ODI. Two trials have shown a beneficial effect on OSA. One study showed that a single administration of eszopiclone 3 mg significantly decreased AHI compared with placebo (24 ± 4 vs 31 ± 5; P value < 0.05), and a second study of sodium oxybate 4.5 g showed a significant decrease in AHI compared with placebo (mean difference (MD) -7.41, 95% confidence interval (CI) -14.17 to -0.65; N = 48).\nOnly four trials reported outcome data on ODI. 
No significant increase, in comparison with placebo, was shown with eszopiclone (21 (22 to 37) vs 28.0 (15 to 36); P value = NS), zolpidem (0.81 ± 0.29 vs 1.46 ± 0.53; P value = NS), flurazepam (18.6 ± 19 vs 19.6 ± 15.9; P value = NS) and temazepam (6.53 ± 9.4 vs 6.56 ± 8.3; P value = 0.98).\nA significant decrease in minimum nocturnal peripheral capillary oxygen saturation (SpO2) was observed with zolpidem 20 mg (76.8 vs 85.2; P value = 0.002), flurazepam 30 mg (81.7 vs 85.2; P value = 0.002), remifentanil infusion (MD -7.00, 95% CI -11.95 to -2.05) and triazolam 0.25 mg in both rapid eye movement (REM) and non-REM (NREM) sleep (MD -14.00, 95% CI -21.84 to -6.16; MD -10.20, 95% CI -16.08 to -4.32, respectively.\nOne study investigated the effect of an opiate (remifentanil) on patients with moderate OSA. Remifentanil infusion did not significantly change AHI (MD 10.00, 95% CI -9.83 to 29.83); however it did significantly decrease the number of obstructive apnoeas (MD -9.00, 95% CI -17.40 to -0.60) and significantly increased the number of central apnoeas (MD 16.00, 95% CI -2.21 to 34.21). Similarly, although without significant effect on obstructive apnoeas, central apnoeas were increased in the sodium oxybate 9 g treatment group (MD 7.3 (18); P value = 0.005) in a cross-over trial.\nDrugs studied in this review were generally well tolerated, apart from adverse events reported in 19 study participants prescribed remifentanil (n = 1), eszopiclone (n = 6), sodium oxybate (n = 9) or ramelteon (n = 3).\nAuthors' conclusions\nThe findings of this review show that currently no evidence suggests that the pharmacological compounds assessed have a deleterious effect on the severity of OSA as measured by change in AHI or ODI. Significant clinical and statistical decreases in minimum overnight SpO2 were observed with remifentanil, zolpidem 20 mg and triazolam 0.25 mg. Eszopiclone 3 mg and sodium oxybate 4.5 g showed a beneficial effect on the severity of OSA with a reduction in AHI and may merit further assessment as a potential therapeutic option for a subgroup of patients with OSA. Only one trial assessed the effect of an opioid (remifentanil); some studies included CPAP treatment, whilst in a significant number of participants, previous treatment with CPAP was not stated and thus a residual treatment effect of CPAP could not be excluded. Most studies were small and of short duration, with indiscernible methodological quality.Caution is therefore required when such agents are prescribed for patients with OSA, especially outside the severity of the OSA cohorts and the corresponding dose of compounds given in the particular studies. Larger, longer trials involving patients across a broader spectrum of OSA severity are needed to clarify these results.\n\nGiven the provided abstract, please respond to the following query: \"Sodium oxybate (a treatment for narcolepsy, a condition causing excessive sleepiness during the day)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sodium oxybate was compared with placebo in two trials. Results showed that sodium oxybate did not worsen OSA as measured by the numbers and duration of pauses in breathing during sleep. In one study, sodium oxybate 4.5 g reduced the severity of OSA and might have proved beneficial in treating this condition, although further studies are needed to assess this effect."
}
] |
query-laysum
|
22112
|
[
{
"role": "user",
"content": "Abstract: Background\nLong-term mechanical ventilation is the most common situation for which tracheostomy is indicated for patients in intensive care units (ICUs). 'Early' and 'late' tracheostomies are two categories of the timing of tracheostomy. Evidence on the advantages attributed to early versus late tracheostomy is somewhat conflicting but includes shorter hospital stays and lower mortality rates.\nObjectives\nTo evaluate the effectiveness and safety of early (≤ 10 days after tracheal intubation) versus late tracheostomy (> 10 days after tracheal intubation) in critically ill adults predicted to be on prolonged mechanical ventilation with different clinical conditions.\nSearch methods\nThis is an update of a review last published in 2012 (Issue 3, The Cochrane Library) with previous searches run in December 2010. In this version, we searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2013, Issue 8); MEDLINE (via PubMed) (1966 to August 2013); EMBASE (via Ovid) (1974 to August 2013); LILACS (1986 to August 2013); PEDro (Physiotherapy Evidence Database) at www.pedro.fhs.usyd.edu.au (1999 to August 2013) and CINAHL (1982 to August 2013). We reran the search in October 2014 and will deal with any studies of interest when we update the review.\nSelection criteria\nWe included all randomized and quasi-randomized controlled trials (RCTs or QRCTs) comparing early tracheostomy (two to 10 days after intubation) against late tracheostomy (> 10 days after intubation) for critically ill adult patients expected to be on prolonged mechanical ventilation.\nData collection and analysis\nTwo review authors extracted data and conducted a quality assessment. Meta-analyses with random-effects models were conducted for mortality, time spent on mechanical ventilation and time spent in the ICU.\nMain results\nWe included eight RCTs (N = 1977 participants). At the longest follow-up time available in these studies, evidence of moderate quality from seven RCTs (n = 1903) showed lower mortality rates in the early as compared with the late tracheostomy group (risk ratio (RR) 0.83, 95% confidence interval (CI) 0.70 to 0.98; P value 0.03; number needed to treat for an additional beneficial outcome (NNTB) ≅ 11). Divergent results were reported on the time spent on mechanical ventilation and no differences were noted for pneumonia, but the probability of discharge from the ICU was higher at day 28 in the early tracheostomy group (RR 1.29, 95% CI 1.08 to 1.55; P value 0.006; NNTB ≅ 8).\nAuthors' conclusions\nThe whole findings of this systematic review are no more than suggestive of the superiority of early over late tracheostomy because no information of high quality is available for specific subgroups with particular characteristics.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence varied according to which outcome was analysed. Evidence was considered of moderate quality for mortality and time spent on mechanical ventilation; of high quality for discharge from the ICU at day 28; and of very low and low quality for pneumonia and sternal wound infection, respectively. Clinical and methodological heterogeneities between studies were the main factors responsible for downgrading the quality of available evidence."
}
] |
query-laysum
|
22113
|
[
{
"role": "user",
"content": "Abstract: Background\nOsteoarthritis (OA) is one of the most prevalent musculoskeletal diseases. There is currently no consensus on what is the best treatment to improve OA symptoms and slow disease progression. Diacerein is an anthraquinone synthesised in 1980 that interferes with interleukin-1, an inflammatory mediator. It has been proposed that diacerein acts as a slow-acting, symptom-modifying and perhaps disease-structure-modifying drug for OA. This is an update of a Cochrane review first published in 2006.\nObjectives\nTo assess the benefits and harms of diacerein for the treatment of adults with OA when compared with placebo and other pharmacologically active interventions (nonsteroidal anti-inflammatory drugs (NSAIDs) and other symptom-modifying, slow-acting drugs) for OA.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) - The Cochrane Library, Issue 10, 2013, MEDLINE (1966 to 2013), EMBASE (1980 to 2013), LILACS (1982 to 2013), and ACP Journal Club, and we handsearched reference lists of published articles. We also searched the World Health Organization International Clinical Trials Platform ( http://www.who.int/trialsearch/Default.aspx) to identify ongoing trials and screened reference lists of retrieved review articles and trials to identify potentially relevant studies. All searches were up to date as of March 2013. Pharmaceutical companies and authors of published articles were contacted. We searched the websites of the regulatory agencies using the keyword ‘diacerein’ in November 2013. No language restrictions were applied.\nSelection criteria\nStudies were included if they were randomised or quasi-randomised controlled trials that compared diacerein with placebo or another active pharmacological intervention in participants with OA.\nData collection and analysis\nData abstraction and quality assessment were performed by two independent investigators, and their results were compared. The Cochrane risk of bias tool was used. The quality of evidence obtained was assessed using the GRADE approach.\nMain results\nWe identified three new trials (141 participants), and this updated review now includes 10 trials, totalling 2,210 participants. The most frequent risk of bias was incomplete outcome data, identified in approximately 80% of the studies. Allocation concealment and random sequence generation were unclear in 90% and 40% of the studies, respectively, because of poor reporting.\nLow-quality evidence from six trials (1,283 participants) indicates that diacerein has a small beneficial effect on overall pain (measured on a 100 mm visual analogue scale) at three to 36 months (mean difference (MD) -8.65, 95% confidence interval (CI) -15.62 to -1.68), which is equivalent to a 9% pain reduction in the diacerein group (95% CI -16% to -2%) compared with the placebo group. This benefit may not be clinically significant.\nNo statistically significant differences in physical function (4 studies, 1006 participants) were noted between the diacerein and placebo groups (Lequesne impairment index, 0 to 24 points) (MD -0.29, 95% CI -0.87 to 0.28).\nLow-quality evidence from two trials (616 participants) on slowing of joint space narrowing (a decrease greater than 0.50 mm) in the knee or hip favoured diacerein over placebo (risk ratio (RR) 0.85, 95% CI 0.72 to 0.99), with an absolute risk difference of -6% (95% CI -15% to 2%) and a number needed to treat for an additional beneficial outcome (NNTB) of 14 (95% CI 8 to 203). 
Analysis of the knee joint alone (1 study, 170 participants) did not reach statistical significance (RR 0.94, 95% CI 0.51 to 1.74).\nNone of the trials of diacerein versus placebo measured quality of life. According to one trial (161 participants), which compared diacerein versus non-steroidal anti-inflammatory drugs (NSAIDs), the quality of life of participants in the two groups (as assessed by the Short Form (SF)-36 health survey questionnaire (0 to 800 sum score)) did not differ significantly (MD -40.70, 95% CI -85.20 to 3.80).\nLow-quality evidence from seven trials showed significantly more adverse events in the diacerein group compared with the placebo group after two to 36 months, mainly diarrhoea (RR 3.52, 95% CI 2.42 to 5.11), with an absolute risk increase of 24% (95% CI 12% to 35%), and a number needed to treat for an additional harmful outcome (NNTH) of 4 (95% CI 3 to 7).\nNo statistically significant differences in participant withdrawal due to adverse events were seen at two to 36 months for diacerein compared with placebo (RR 1.29, 95% CI 0.83 to 2.01).\nA search of regulatory websites found a recommendation from the European Medicines Agency (EMA) Pharmacovigilance Risk Assessment Committee (PRAC) that the marketing authorization of diacerein should be suspended across Europe because of harms (particularly the risk of severe diarrhoea and potentially harmful effects on the liver) outweighing benefits. However, this guidance is not final as the PRAC recommendation will be re-examined.\nAuthors' conclusions\nIn this update, the strength of evidence for effectiveness outcomes was low to moderate. We confirmed that symptomatic benefit provided by diacerein in terms of pain reduction is minimal. The small benefit derived in terms of joint space narrowing is of questionable clinical relevance and was observed only for OA of the hip. With respect to adverse effects of diacerein, diarrhoea was most frequent. Given the recent guidance issued by the EMA recommending suspension of diacerein in Europe, the EMA website should be consulted for further recommendations regarding the use of diacerein.\n\nGiven the provided abstract, please respond to the following query: \"What is osteoarthritis and what is diacerein?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Osteoarthritis (OA) is the most common form of arthritis. In OA, the cartilage that protects the ends of the bones breaks down, causing pain and swelling. OA can affect any joint, but the knees, hips and hands are the joints most often studied in clinical trials. In all, 10% of the world’s population aged 60 or older have pain or disability from OA.\nDiacerein is a slow-acting drug taken as a pill that may slow the breakdown of cartilage and relieve pain and swelling.\nThe review searched for studies up to March 2013 about primary osteoarthritis affecting men and women (18 years and older) of any disease severity."
}
] |
query-laysum
|
22114
|
[
{
"role": "user",
"content": "Abstract: Background\nHeparin is an anticoagulant medication that is usually injected subcutaneously. Subcutaneous administration of heparin may result in complications such as bruising, haematoma, and pain at the injection site. One of the factors that may affect pain, haematoma, and bruising is injection speed. Several studies have been carried out to determine if speed of injection affects the amount of pain and bruising where the injection is given; however, the results of these studies have differed, and study authors have not reached a clear final conclusion. This is the second update of a review first published in 2014.\nObjectives\nTo assess the effects of duration (speed) of subcutaneous heparin injection on pain and bruising at the injection site in people admitted to hospitals or clinics who require treatment with unfractionated heparin (UFH) or low molecular weight heparin (LMWH). We also looked at haematoma at the injection site.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 22 June 2020. We undertook reference checking of included studies to identify additional studies.\nSelection criteria\nWe searched for randomised controlled trials (RCTs) comparing the effects of different durations of subcutaneous injection of heparin on pain, bruising, and haematoma at the injection site.\nData collection and analysis\nFor this update, two review authors independently selected studies and extracted data via Covidence software and assessed methodological quality using Cochrane's risk of bias tool. The primary outcomes of interest were pain intensity at injection site and size and incidence of bruising. The secondary outcomes of interest were size and incidence of haematoma at injection site. We calculated the odds ratio (OR), mean difference (MD), or standardised mean difference (SMD) with corresponding 95% confidence intervals (CIs). We assessed the certainty of the evidence using GRADE criteria.\nMain results\nWe identified one new study for this update, resulting in a total of five included studies with 503 participants who received subcutaneous injections of LMWH into the abdomen. Given the nature of the intervention, it was not possible to blind participants and caregivers (personnel) in any of the included studies. Two studies described blinding of outcome assessors. Overall, the methodological quality of included studies was moderate. The duration of the fast injection was 10 seconds, and the duration of the slow injection was 30 seconds in all included studies.\nFour studies reported site pain intensity after each injection at different time points. Two studies assessed site pain intensity immediately after each injection; meta-analysis showed no evidence of a difference in site pain intensity immediately after slow injection when compared to fast injection (MD −1.52, 95% CI −3.56 to 0.53; 140 participants; low-certainty evidence). Meta-analysis of three studies indicated that site pain intensity may be slightly reduced 48 hours after the slow heparin injection compared to fast injection (MD −1.60, 95% CI −2.69 to −0.51; 103 participants; low-certainty evidence).\nFive studies assessed bruise size at 48 hours, and two studies assessed bruise size at 60 hours. 
Meta-analysis showed there may be a reduction in bruise size 48 hours (SMD −0.54, 95% CI −1.05 to −0.02; 503 participants; 5 studies; very low-certainty evidence) and 60 hours (SMD −0.49, 95% CI −0.93 to −0.06; 84 participants; 2 studies; low-certainty evidence) after slow injection compared to fast injection. There was no evidence of a difference in bruise size 72 hours after slow injection compared to fast injection (SMD −0.27, 95% CI −0.61 to 0.06; 140 participants; 2 studies; low-certainty evidence).\nThree studies evaluated incidence of bruising and showed there may be a reduction in bruise incidence 48 hours (OR 0.39, 95% CI 0.26 to 0.60; 444 participants; low-certainty evidence) and 60 hours (OR 0.25, 95% CI 0.10 to 0.65; 84 participants; 2 studies; low-certainty evidence) after slow injection compared to fast injection. We downgraded the certainty of the evidence due to risk of bias concerns, imprecision, and inconsistency.\nNone of the included studies measured size or incidence of haematoma.\nAuthors' conclusions\nAdministering medication safely and enhancing patient comfort are the main aims of clinical nurses. In this review, we identified five RCTs that evaluated the effect of subcutaneous heparin injection duration on pain intensity, bruise size and incidence. We found that pain may be slightly reduced 48 hours after slow injection. Similarly, there may be a reduction in bruise size and incidence after slow injection compared to fast injection 48 and 60 hours postinjection. We downgraded the certainty of the evidence for all outcomes to low or very low due to risk of bias concerns, imprecision, and inconsistency. Accordingly, new trials with a more robust design, more participants, and a focus on different injection speeds will be useful in strengthening the certainty of the available evidence.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics and key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that investigated the effects of speed of injection on the amount of pain and bruising where the injection is given (current to 22 June 2020). Five studies met our inclusion criteria and were included in the review. The included studies took place in Thailand, Turkey, Italy, and China, and enrolled a total of 503 people (312 female and 191 male participants). All study participants received LMWH, whilst no study used UFH.\nStudy investigators injected heparin into the abdomen of participants. Participants could watch the injection being given and knew whether it was fast (10 seconds long) or slow (30 seconds long).\nWe found that pain may be slightly reduced 48 hours after slow injection compared with fast injection. Similarly, there may be smaller bruise sizes and fewer bruises with the slow injection 48 and 60 hours after the injection. None of the included studies measured the number or size of haematomas."
}
] |
query-laysum
|
22115
|
[
{
"role": "user",
"content": "Abstract: Background\nThe female genital tract is not exposed to seminal plasma during standard assisted reproductive technology (ART) cycles. However, it is thought that the inflammatory reaction triggered by seminal plasma may be beneficial by inducing maternal tolerance to paternal antigens expressed by the products of conception, and may increase the chance of successful implantation and live birth.\nObjectives\nTo assess the effectiveness and safety of application of seminal plasma to the female genital tract prior to embryo transfer in ART cycles.\nSearch methods\nWe searched the following databases from inception to October 2017: Cochrane Gynaecology and Fertility Group Specialised Register of Controlled Trials, Cochrane Central Register of Studies Online (CRSO), MEDLINE, Embase, CINAHL and PsycINFO. We also searched trial registers for ongoing trials, including International Clinical Trials Registry Platform (ICTRP) Search Portal and ClinicalTrials.gov. Other sources searched were; Web of Knowledge, OpenGrey, LILACS, PubMed, Google Scholar, and the reference lists of relevant articles.\nSelection criteria\nWe included randomised controlled trials (RCTs) conducted among women undergoing ART, comparing any procedure that would expose the female genital tract to seminal plasma during the period starting five days before embryo transfer and ending two days after it versus no seminal plasma application.\nData collection and analysis\nTwo review authors independently selected trials, assessed risk of bias, and extracted data. We pooled data to calculate relative risks (RRs) and 95% confidence intervals (CIs). We assessed statistical heterogeneity using the I2 statistic. We assessed the overall quality of the evidence for the main outcomes using GRADE methods. Our primary outcomes were live birth rate and miscarriage rate. Secondary outcomes were live birth/ongoing pregnancy rate, clinical pregnancy rate, multiple pregnancy rate, ectopic pregnancy rate, and the incidence of other adverse events.\nMain results\nWe included 11 RCTs (3215 women). The quality of the evidence ranged from very low to low. The main limitations were risk of bias (associated with poor reporting of allocation concealment and other methods) and imprecision for the primary outcome of live birth rate.\nLive birth rates: Seminal plasma application made little or no difference in live birth rates (RR 1.10, 95% CI 0.86 to 1.43; 948 participants; 3 studies; I2 = 0%). Low-quality evidence suggested that if the live birth rate following standard ART was 19%, it would be between 16% and 27% with seminal plasma application.\nMiscarriage rate: Seminal plasma application made little or no difference in miscarriage rates (RR 1.01, 95% CI 0.57 to 1.79; 1209 participants; 4 studies; I2 = 0%). Low-quality evidence suggested that if the miscarriage rate following standard ART was 3.7%, the miscarriage rate following seminal plasma application would be between 2.1% and 6.6%.\nLive birth or ongoing pregnancy rates: Seminal plasma application made little or no difference in live birth or ongoing pregnancy rates (RR 1.19, 95% CI 0.95 to 1.49; 1178 participants; 4 studies; I2 = 4%, low-quality evidence). 
The evidence suggested that if the live birth or ongoing pregnancy rate following standard ART was 19.5%, it would be between 18.5% and 29% with seminal plasma application.\nClinical pregnancy rates: We are uncertain whether seminal plasma application increases clinical pregnancy rates (RR 1.15, 95% CI 1.01 to 1.31; 2768 participants; 10 studies; I2 = 0%). Very low-quality evidence suggested that if the clinical pregnancy rate following standard ART was 22.0%, it would be between 22.2% and 28.8% with seminal plasma application. This finding should be regarded with caution, as a post hoc sensitivity analysis restricted to studies at overall low risk of bias did not find a significant difference between the groups (RR 1.06, 95% CI 0.81 to 1.39; 547 participants; 3 studies; I2 = 0%).\nMultiple pregnancy rate: Seminal plasma application may make little or no difference to multiple pregnancy rates (RR 1.11, 95% CI 0.76 to 1.64; 1642 participants; 5 studies; I2 = 9%). Low-quality evidence suggested that if the multiple pregnancy rate following standard ART was 7%, the multiple pregnancy rate following seminal plasma application would be between 5% and 11.4%.\nEctopic pregnancy: There was insufficient evidence to determine whether seminal plasma application influenced the risk of ectopic pregnancy (RR 1.59, 95% CI 0.20 to 12.78, 1521 participants; 5 studies; I2 = 0%) .\nInfectious complications or other adverse events: No data were available on these outcomes\nAuthors' conclusions\nIn women undergoing ART, there was insufficient evidence to determine whether there was a difference between the seminal plasma and the standard ART group in rates of live birth (low-quality evidence) or miscarriage (low-quality evidence). There was low-quality evidence suggesting little or no difference between the groups in rates of live birth or ongoing pregnancy (composite outcome). We found low-quality evidence that seminal plasma application may be associated with more clinical pregnancies than standard ART. There was low-quality evidence suggesting little or no difference between the groups in rates of multiple pregnancy. There was insufficient evidence to reach any conclusions about the risk of ectopic pregnancy, and no data were available on infectious complications or other adverse events.\nWe conclude that seminal plasma application is worth further investigation, focusing on live birth and miscarriage rates.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In ART cycles, the egg and sperm are mixed outside the body to develop embryos. One or two of the embryos are replaced into the womb in a very small amount of artificial fluid. During this process, the woman's body does not come into contact with seminal plasma at all, unlike during normal intercourse where the male partner ejaculates in the vagina, exposing the latter to seminal fluid. It has been suggested that seminal plasma contains several molecules which can help the embryos to attach to the womb. The logical question is whether application of some seminal plasma to the vagina/neck of the womb or womb increases the chances of a live birth after ART."
}
] |
query-laysum
|
22116
|
[
{
"role": "user",
"content": "Abstract: Background\nWe performed a systematic review, a standard meta-analysis and network meta-analysis (NMA), which updates the 2009 Cochrane Overview, 'Biologics for rheumatoid arthritis (RA)'. This review is focused on biologic monotherapy in people with RA in whom treatment with traditional disease-modifying anti-rheumatic drugs (DMARDs) including methotrexate (MTX) had failed (MTX/other DMARD-experienced).\nObjectives\nTo assess the benefits and harms of biologic monotherapy (includes anti-tumor necrosis factor (TNF) (adalimumab, certolizumab pegol, etanercept, golimumab, infliximab) or non-TNF (abatacept, anakinra, rituximab, tocilizumab)) or tofacitinib monotherapy (oral small molecule) versus comparator (placebo or MTX/other DMARDs) in adults with RA who were MTX/other DMARD-experienced.\nMethods\nWe searched for randomized controlled trials (RCTs) in the Cochrane Central Register of Controlled Trials (CENTRAL; The Cochrane Library 2015, Issue 6, June), MEDLINE (via OVID 1946 to June 2015), and Embase (via OVID 1947 to June 2015). Article selection, data extraction and risk of bias and GRADE assessments were done in duplicate. We calculated direct estimates with 95% confidence intervals (CI) using standard meta-analysis. We used a Bayesian mixed treatment comparisons (MTC) approach for NMA estimates with 95% credible intervals (CrI). We converted odds ratios (OR) to risk ratios (RR) for ease of understanding. We calculated absolute measures as risk difference (RD) and number needed to treat for an additional beneficial outcome (NNTB).\nMain results\nThis update includes 40 new RCTs for a total of 46 RCTs, of which 41 studies with 14,049 participants provided data. The comparator was placebo in 16 RCTs (4,532 patients), MTX or other DMARD in 13 RCTs (5,602 patients), and another biologic in 12 RCTs (3,915 patients).\nMonotherapy versus placebo\nBased on moderate-quality direct evidence, biologic monotherapy (without concurrent MTX/other DMARDs) was associated with a clinically meaningful and statistically significant improvement in American College of Rheumatology score (ACR50) and physical function, as measured by the Health Assessment Questionnaire (HAQ) versus placebo. RR was 4.68 for ACR50 (95% CI, 2.93 to 7.48); absolute benefit RD 23% (95% CI, 18% to 29%); and NNTB = 5 (95% CI, 3 to 8). The mean difference (MD) was -0.32 for HAQ (95% CI, -0.42 to -0.23; a negative sign represents greater HAQ improvement); absolute benefit of -10.7% (95% CI, -14% to -7.7%); and NNTB = 4 (95% CI, 3 to 5). Direct and NMA estimates for TNF biologic, non-TNF biologic or tofacitinib monotherapy showed similar results for ACR50 , downgraded to moderate-quality evidence. Direct and NMA estimates for TNF biologic, anakinra or tofacitinib monotherapy showed a similar results for HAQ versus placebo with mostly moderate quality evidence.\nBased on moderate-quality direct evidence, biologic monotherapy was associated with a clinically meaningful and statistically significant greater proportion of disease remission versus placebo with RR 1.12 (95% CI 1.03 to 1.22); absolute benefit 10% (95% CI, 3% to 17%; NNTB = 10 (95% CI, 8 to 21)).\nBased on low-quality direct evidence, results for biologic monotherapy for withdrawals due to adverse events and serious adverse events were inconclusive, with wide confidence intervals encompassing the null effect and evidence of an important increase. 
The direct estimate for TNF monotherapy for withdrawals due to adverse events showed a clinically meaningful and statistically significant result with RR 2.02 (95% CI, 1.08 to 3.78), absolute benefit RD 3% (95% CI,1% to 4%), based on moderate-quality evidence. The NMA estimates for TNF biologic, non-TNF biologic, anakinra, or tofacitinib monotherapy for withdrawals due to adverse events and for serious adverse events were all inconclusive and downgraded to low-quality evidence.\nMonotherapy versus active comparator (MTX/other DMARDs)\nBased on direct evidence of moderate quality, biologic monotherapy (without concurrent MTX/other DMARDs) was associated with a clinically meaningful and statistically significant improvement in ACR50 and HAQ scores versus MTX/other DMARDs with a RR of 1.54 (95% CI, 1.14 to 2.08); absolute benefit 13% (95% CI, 2% to 23%), NNTB = 7 (95% CI, 4 to 26) and a mean difference in HAQ of -0.27 (95% CI, -0.40 to -0.14); absolute benefit of -9% (95% CI, -13.3% to -4.7%), NNTB = 2 (95% CI, 2 to 4). Direct and NMA estimates for TNF monotherapy and NMA estimate for non-TNF biologic monotherapy for ACR50 showed similar results, based on moderate-quality evidence. Direct and NMA estimates for non-TNF biologic monotherapy, but not TNF monotherapy, showed similar HAQ improvements , based on mostly moderate-quality evidence.\nThere were no statistically significant or clinically meaningful differences for direct estimates of biologic monotherapy versus active comparator for RA disease remission. NMA estimates showed a statistically significant and clinically meaningful difference versus active comparator for TNF monotherapy (absolute improvement 7% (95% CI, 2% to 14%)) and non-TNF monotherapy (absolute improvement 19% (95% CrI, 7% to 36%)), both downgraded to moderate quality.\nBased on moderate-quality direct evidence from a single study, radiographic progression (scale 0 to 448) was statistically significantly reduced in those on biologic monotherapy versus active comparator, MD -4.34 (95% CI, -7.56 to -1.12), though the absolute reduction was small, -0.97% (95% CI, -1.69% to -0.25%). We are not sure of the clinical relevance of this reduction.\nDirect and NMA evidence (downgraded to low quality), showed inconclusive results for withdrawals due to adverse events, serious adverse events and cancer, with wide confidence intervals encompassing the null effect and evidence of an important increase.\nAuthors' conclusions\nBased mostly on RCTs of six to 12-month duration in people with RA who had previously experienced and failed treatment with MTX/other DMARDs, biologic monotherapy improved ACR50, function and RA remission rates compared to placebo or MTX/other DMARDs.\nRadiographic progression was reduced versus active comparator, although the clinical significance was unclear.\nResults were inconclusive for whether biologic monotherapy was associated with an increased risk of withdrawals due to adverse events, serious adverse events or cancer, versus placebo (no data on cancer) or MTX/other DMARDs.\n\nGiven the provided abstract, please respond to the following query: \"Withdrawal from study due to adverse events\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "52 people out of 1000 who took biologic monotherapy withdrew from the study compared to 32 people out of 1000 who were on placebo, that is 20 more people out of 1000 (2% more withdrawals)."
}
] |
query-laysum
|
22117
|
[
{
"role": "user",
"content": "Abstract: Background\nAnaemia is a condition where the number of red blood cells (and consequently their oxygen-carrying capacity) is insufficient to meet the body's physiological needs. Fortification of wheat flour is deemed a useful strategy to reduce anaemia in populations.\nObjectives\nTo determine the benefits and harms of wheat flour fortification with iron alone or with other vitamins and minerals on anaemia, iron status and health-related outcomes in populations over two years of age.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, 21 other databases and two trials registers up to 21 July 2020, together with contacting key organisations to identify additional studies.\nSelection criteria\nWe included cluster- or individually-randomised controlled trials (RCTs) carried out among the general population from any country, aged two years and above. The interventions were fortification of wheat flour with iron alone or in combination with other micronutrients. We included trials comparing any type of food item prepared from flour fortified with iron of any variety of wheat\nData collection and analysis\nTwo review authors independently screened the search results and assessed the eligibility of studies for inclusion, extracted data from included studies and assessed risks of bias. We followed Cochrane methods in this review.\nMain results\nOur search identified 3538 records, after removing duplicates. We included 10 trials, involving 3319 participants, carried out in Bangladesh, Brazil, India, Kuwait, Philippines, South Africa and Sri Lanka. We identified two ongoing studies and one study is awaiting classification. The duration of interventions varied from 3 to 24 months. One study was carried out among adult women and one trial among both children and nonpregnant women. Most of the included trials were assessed as low or unclear risk of bias for key elements of selection, performance or reporting bias.\nThree trials used 41 mg to 60 mg iron/kg flour, three trials used less than 40 mg iron/kg and three trials used more than 60 mg iron/kg flour. One trial used various iron levels based on type of iron used: 80 mg/kg for electrolytic and reduced iron and 40 mg/kg for ferrous fumarate.\nAll included studies contributed data for the meta-analyses.\nIron-fortified wheat flour with or without other micronutrients added versus wheat flour (no added iron) with the same other micronutrients added\nIron-fortified wheat flour with or without other micronutrients added versus wheat flour (no added iron) with the same other micronutrients added may reduce by 27% the risk of anaemia in populations (risk ratio (RR) 0.73, 95% confidence interval (CI) 0.55 to 0.97; 5 studies, 2315 participants; low-certainty evidence).\nIt is uncertain whether iron-fortified wheat flour with or without other micronutrients reduces iron deficiency (RR 0.46, 95% CI 0.20 to 1.04; 3 studies, 748 participants; very low-certainty evidence) or increases haemoglobin concentrations (in g/L) (mean difference MD 2.75, 95% CI 0.71 to 4.80; 8 studies, 2831 participants; very low-certainty evidence).\nNo trials reported data on adverse effects in children (including constipation, nausea, vomiting, heartburn or diarrhoea), except for risk of infection or inflammation at the individual level. 
The intervention probably makes little or no difference to the risk of Infection or inflammation at individual level as measured by C-reactive protein (CRP) (mean difference (MD) 0.04, 95% CI −0.02 to 0.11; 2 studies, 558 participants; moderate-certainty evidence).\nIron-fortified wheat flour with other micronutrients added versus unfortified wheat flour (nil micronutrients added)\nIt is unclear whether wheat flour fortified with iron, in combination with other micronutrients decreases anaemia (RR 0.77, 95% CI 0.41 to 1.46; 2 studies, 317 participants; very low-certainty evidence). The intervention probably reduces the risk of iron deficiency (RR 0.73, 95% CI 0.54 to 0.99; 3 studies, 382 participants; moderate-certainty evidence) and it is unclear whether it increases average haemoglobin concentrations (MD 2.53, 95% CI −0.39 to 5.45; 4 studies, 532 participants; very low-certainty evidence).\nNo trials reported data on adverse effects in children.\nNine out of 10 trials reported sources of funding, with most having multiple sources. Funding source does not appear to have distorted the results in any of the assessed trials.\nAuthors' conclusions\nFortification of wheat flour with iron (in comparison to unfortified flour, or where both groups received the same other micronutrients) may reduce anaemia in the general population above two years of age, but its effects on other outcomes are uncertain.\nIron-fortified wheat flour in combination with other micronutrients, in comparison with unfortified flour, probably reduces iron deficiency, but its effects on other outcomes are uncertain.\nNone of the included trials reported data on adverse side effects except for risk of infection or inflammation at the individual level. The effects of this intervention on other health outcomes are unclear. Future studies at low risk of bias should aim to measure all important outcomes, and to further investigate which variants of fortification, including the role of other micronutrients as well as types of iron fortification, are more effective, and for whom.\n\nGiven the provided abstract, please respond to the following query: \"Fortified foods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Adding micronutrients (vitamins and minerals) to foods, whether those micronutrients were originally present or not, is called fortification. Fortifying foods is one way to improve nutrition in a population.\nPeople living in low-income countries may not have enough iron in their diet, and may be at risk of anaemia. Adding iron and other nutrients to foods routinely eaten in large quantities, such as flour, is thought to help prevent iron-deficiency anaemia."
}
] |
query-laysum
|
22118
|
[
{
"role": "user",
"content": "Abstract: Background\nSocial networking sites (SNSs) have great potential as a platform for public health interventions to address the unmet need for contraception.\nObjectives\nTo evaluate the effectiveness of interventions using SNSs to promote the uptake of and adherence to contraception in reproductive-age women.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and six other databases on January 2018. We also searched Google Scholar, key conference proceedings, checked the reference lists of included studies, and contacted study authors to identify additional studies.\nSelection criteria\nWe considered randomised controlled trials (RCTs) and non-randomised interventional studies (NRS) in women of reproductive age. SNSs requiring a social profile within a bounded or restricted-access system of shared connections were included. We also included trials that utilised SNSs only or as an adjunct to an intervention. Studies had to have a follow-up outcome assessment of at least three months.\nData collection and analysis\nTwo authors independently screened titles, abstracts and full-text studies, and extracted data from included studies. A third author was assigned to arbitrate areas of disagreement. Authors assessed risk of bias according to the Cochrane Handbook for Systematic Reviews of Interventions. We were unable to conduct a meta-analysis due to the heterogeneity of the study designs and outcome measures.\nMain results\nOf the 461 unique records found, only two studies met our inclusion criteria. Both studies were conducted in the USA and were at high risk of bias. One RCT included 2284 women exposed to a web-based SNS or nothing. The groups were no different post intervention in their self-reported consistency of contraceptive use or knowledge of the relative effectiveness of different methods. There was a small but significant increase in the use of more effective methods (long-acting reversible methods) at 12 months' follow-up.\nThe second study, a cluster RCT with 1578 women, used a closed Facebook page showing sexual health content compared to a modified Facebook news page that avoided sexual health content. They found no differences in the use of condoms at last act of sexual intercourse at six months or the intention to use condoms between the intervention and control groups.\nAuthors' conclusions\nDespite the prevalence of SNSs, we found little scientific evidence to support the use of SNSs to improve contraceptive use or adherence among women.\n\nGiven the provided abstract, please respond to the following query: \"Authors' conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found little scientific evidence available to support whether SNSs improve contraceptive use or continuation among women."
}
] |
query-laysum
|
22119
|
[
{
"role": "user",
"content": "Abstract: Background\nPrevention of chronic chagasic cardiomyopathy (CCC) by treating infected populations with trypanocidal therapy (TT) remains a challenge. Despite a renewed enthusiasm for TT, uncertainty regarding its efficacy, concerns about its safety and limited availability remain barriers for a wider use of conventional drugs. We have updated a previous version of this review.\nObjectives\nTo systematically search, appraise, identify and extract data from eligible studies comparing the outcome of cohorts of seropositive individuals to Trypanosoma cruzi exposed to TT versus placebo or no treatment.\nSearch methods\nWe sought eligible studies in electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL), Issue 1, 2014); MEDLINE (Ovid, 1946 to January week 5 2014); EMBASE (Ovid, 1980 to 2014 week 6) and LILACS (up to 6 May 2010)) by combining terms related with the disease and the treatment. The search also included a Google search, handsearch for references in review or selected articles, and search of expert files. We applied no language restrictions.\nSelection criteria\nReview authors screened the retrieved references for eligibility (those dealing with human participants treated with TT) and then assessed the pre-selected studies in full for inclusion. We included randomised controlled trials (RCTs) and observational studies that provided data on either mortality or clinical progression of CCC after at least four years of follow-up.\nData collection and analysis\nTeams of two review authors independently carried out the study selection, data extraction and risk of bias assessment, with a referee resolving disagreement within the pairs. Data collection included study design, characteristics of the population and interventions or exposures and outcome measures. We defined categories of outcome data as parasite-related (positive serology, xenodiagnosis or polymerase chain reaction (PCR) after TT) and participant-related (including efficacy outcomes such as progression towards CCC, all-cause mortality and side effects of TT). We reported pooled outcome data as Mantel-Haenszel odds ratios (OR) or standardised mean differences (SMD) along with 95% confidence intervals (CI), using a random-effects model. I2 statistics provided an estimate of heterogeneity across studies. We conducted an exploratory meta-regression analysis of the relationship between positive-serology and progression of CCC or mortality.\nMain results\nWe included 13 studies involving 4229 participants (six RCTs, n = 1096, five RCTs of intermediate risk of bias, one RCT of high risk of bias; four non-randomised experiments, n = 1639 and three observational studies, n = 1494). Ten studies tested nitroderivative drugs nifurtimox or benznidazole (three exposed participants to allopurinol, one to itraconazole). Five studies were conducted in Brazil, five in Argentina, one in Bolivia, one in Chile and one in Venezuela.\nTT was associated with substantial, but heterogeneous reductions on parasite-related outcomes such as positive serology (9 studies, OR 0.21, 95% CI 0.10 to 0.44, I2 = 76%), positive PCR (2 studies, OR 0.50, 95% CI 0.27 to 0.92, I2 = 0%), positive xenodiagnosis after treatment (6 studies, OR 0.35, 95% CI 0.14 to 0.86, I2 = 79%), or reduction on antibody titres (3 studies, SMD -0.56, 95% CI -0.89 to -0.23, I2 = 28%). Efficacy data on patient-related outcomes was largely from non-RCTs. 
TT with nitroderivatives was associated with potentially important, but imprecise and inconsistent reductions in progression of CCC (4 studies, 106 events, OR 0.74, 95% CI 0.32 to 1.73, I2 = 66%) and mortality after TT (6 studies, 99 events, OR 0.55, 95% CI 0.26 to 1.14, I2 = 48%). The overall median incidence of any severe side effects among 1475 individuals from five studies exposed to TT was 2.7%, and the overall discontinuation of this two-month therapy in RCTs (5 studies, 134 events) was 20.5% (versus 4.3% among controls) and 10.4% in the other five studies (125 events).\nAuthors' conclusions\nDespite the evidence that TT reduced parasite-related outcomes, the low quality and inconsistency of the data for patient-important outcomes must be treated with caution. More geographically diverse RCTs testing newer forms of TT are warranted in order to 1. estimate efficacy more precisely, 2. explore factors potentially responsible for the heterogeneity of results and 3. increase knowledge on the efficacy/tolerance balance of conventional TT.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Only 25% of these data came from good-quality studies. Although most studies were published since 2000, all studies tested drugs developed in the 1960s."
}
] |
query-laysum | 22120 | [
{
"role": "user",
"content": "Abstract: Background\nWork-related upper limb disorder (WRULD), repetitive strain injury (RSI), occupational overuse syndrome (OOS) and work-related complaints of the arm, neck or shoulder (CANS) are the most frequently used umbrella terms for disorders that develop as a result of repetitive movements, awkward postures and impact of external forces such as those associated with operating vibrating tools. Work-related CANS, which is the term we use in this review, severely hampers the working population.\nObjectives\nTo assess the effects of conservative interventions for work-related complaints of the arm, neck or shoulder (CANS) in adults on pain, function and work-related outcomes.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (The Cochrane Library, 31 May 2013), MEDLINE (1950 to 31 May 2013), EMBASE (1988 to 31 May 2013), CINAHL (1982 to 31 May 2013), AMED (1985 to 31 May 2013), PsycINFO (1806 to 31 May 2013), the Physiotherapy Evidence Database (PEDro; inception to 31 May 2013) and the Occupational Therapy Systematic Evaluation of Evidence Database (OTseeker; inception to 31 May 2013). We did not apply any language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-randomised controlled trials evaluating conservative interventions for work-related complaints of the arm, neck or shoulder in adults. We excluded trials undertaken to test injections and surgery. We included studies that evaluated effects on pain, functional status or work ability.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, extracted data and assessed risk of bias of the included studies. When studies were sufficiently similar, we performed statistical pooling of reported results.\nMain results\nWe included 44 studies (62 publications) with 6,580 participants that evaluated 25 different interventions. We categorised these interventions according to their working mechanisms into exercises, ergonomics, behavioural and other interventions.\nOverall, we judged 35 studies as having a high risk of bias mainly because of an unknown randomisation procedure, lack of a concealed allocation procedure, unblinded trial participants or lack of an intention-to-treat analysis.\nWe found very low-quality evidence showing that exercises did not improve pain in comparison with no treatment (five studies, standardised mean difference (SMD) -0.52, 95% confidence interval (CI) -1.08 to 0.03), or minor intervention controls (three studies, SMD -0.25, 95% CI -0.87 to 0.37) or when provided as additional treatment (two studies, inconsistent results) at short-term follow-up or at long-term follow-up. Results were similar for recovery, disability and sick leave. Specific exercises led to increased pain at short-term follow-up when compared with general exercises (four studies, SMD 0.45, 95% CI 0.14 to 0.75)\nWe found very low-quality evidence indicating that ergonomic interventions did not lead to a decrease in pain when compared with no intervention at short-term follow-up (three studies, SMD -0.07, 95% CI -0.36 to 0.22) but did decrease pain at long-term follow-up (four studies, SMD -0.76, 95% CI -1.35 to -0.16). There was no effect on disability but sick leave decreased in two studies (risk ratio (RR) 0.48, 95% CI 0.32 to 0.76). 
None of the ergonomic interventions was more beneficial for any outcome measures when compared with another treatment or with no treatment or with placebo.\nBehavioural interventions had inconsistent effects on pain and disability, with some subgroups showing benefit and others showing no significant improvement when compared with no treatment, minor intervention controls or other behavioural interventions.\nIn the eight studies that evaluated various other interventions, there was no evidence of a clear beneficial effect of any of the interventions provided.\nAuthors' conclusions\nWe found very low-quality evidence indicating that pain, recovery, disability and sick leave are similar after exercises when compared with no treatment, with minor intervention controls or with exercises provided as additional treatment to people with work-related complaints of the arm, neck or shoulder. Low-quality evidence also showed that ergonomic interventions did not decrease pain at short-term follow-up but did decrease pain at long-term follow-up. There was no evidence of an effect on other outcomes. For behavioural and other interventions, there was no evidence of a consistent effect on any of the outcomes.\nStudies are needed that include more participants, that are clear about the diagnosis of work-relatedness and that report findings according to current guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 44 studies that included 6,580 persons. Twenty-one studies evaluated exercises, 13 evaluated ergonomic workplace adjustments and nine behavioural interventions. We combined the results of these studies per category. Eight other studies evaluated various other treatments.\nWe did not find a consistent effect of any treatment on pain, recovery, disability or sick leave. Ergonomic interventions reduced pain in the long term but not in the short term in several studies. We judged nine studies to be of high quality, but the results were very inconsistent. We found no reason for the variation in study results. Better studies are needed that are bigger, have a clearer diagnosis of work-relatedness and comply with reporting guidelines."
}
] |
query-laysum | 22121 | [
{
"role": "user",
"content": "Abstract: Background\nSevoflurane induction for general anaesthesia has been reported to be safe, reliable and well accepted by patients. Sevoflurane induction uses either low or high initial concentrations. The low initial concentration technique involves initially administering a low concentration of sevoflurane and gradually increasing the concentration of the dose until the patient is anaesthetized. The high initial concentration technique involves administering high concentrations from the beginning, then continuing with those high doses until the patient is anaesthetized. This review was originally published in 2013 and has been updated in 2016.\nObjectives\nWe aimed to compare induction times and complication rates between high and low initial concentration sevoflurane anaesthetic induction techniques in adults and children who received inhalational induction for general anaesthesia. We defined 'high' as greater than or equal to and 'low' as less than a 4% initial concentration.\nSearch methods\nFor the updated review, we searched the following databases: Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 2), MEDLINE (1950 to February 2016), EMBASE (1980 to February 2016), Latin American Caribbean Health Sciences Literature (LILACS) (1982 to February 2016) and the Institute for Scientific Information (ISI) Web of Science (1946 to February 2016). We also searched the reference lists of relevant articles and conference proceedings and contacted the authors of included trials. The original search was run in September 2011.\nSelection criteria\nWe sought all published and unpublished, randomized controlled trials comparing high versus low initial sevoflurane concentration inhalational induction. Our primary outcomes included two measures of anaesthesia (time to loss of the eyelash reflex (LOER) and time until a weighted object held in the patient's hand was dropped), time to successful insertion of a laryngeal mask airway (LMA) and time to endotracheal intubation. Other outcomes were complications of the technique.\nData collection and analysis\nWe used standardized methods for conducting a systematic review as described in the Cochrane Handbook for Systematic Reviews of Interventions. Two review authors independently extracted details of trial methods and outcome data from reports of all trials considered eligible for inclusion. We conducted all analyses on an intention-to-treat basis, when possible. We estimated overall treatment effects by using a fixed-effect model when we found no substantial heterogeneity, whereas we applied the random-effects model in the presence of considerable heterogeneity.\nMain results\nWe reran the searches and included one new study (100 participants) in this updated review. In total, we included 11 studies with 829 participants, although most analyses were based on data from fewer participants and evidence of low quality. We noted substantial heterogeneity in the included trials. Thus, our results should be read with caution. It was not possible to combine trials for the primary outcome (LOER), but individual trials reported faster induction times (typically 24 to 82 seconds faster, 41 seconds (31.37 to 50.62)) with high initial concentration sevoflurane (six studies, 443 participants, low-quality evidence). Apnoea appeared to be more common in the high initial concentration sevoflurane group (risk ratio (RR) 3.14, 95% confidence interval (CI) 1.72 to 5.7, two studies, 160 participants, low-quality evidence). 
We found no evidence of differences between the two groups in the incidence of cough (odds ratio (OR) 1.23, 95% CI 0.53 to 2.81, eight studies, 589 participants, low-quality evidence), laryngospasm (OR 1.59, 95% CI 0.16 to 15.9, seven studies, 588 participants, low-quality evidence), breath holding (OR 1.16, 95% CI 0.47 to 2.83, five studies, 389 participants, low-quality evidence), patient movement (RR 1.14, 95% CI 0.69 to 1.89, five studies, 445 participants, low-quality evidence) or bradycardia (OR 0.8, 95% CI 0.22 to 2.88, three studies, 199 participants, low-quality evidence), and the overall incidence of complications was low.\nAuthors' conclusions\nA high initial concentration sevoflurane technique probably offers more rapid induction of anaesthesia and a similar rate of complications, except for apnoea, which may be more common with a high initial concentration. However, this conclusion is not definitive because the included studies provided evidence of low quality.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence from randomized controlled trials comparing high initial concentrations of sevoflurane with low initial concentrations to see whether the evidence supports use of high initial concentrations to reduce induction times and complications for inhalational induction of anaesthesia. This update of a review first published in 2013 is current to February 2016."
}
] |
query-laysum | 22122 | [
{
"role": "user",
"content": "Abstract: Background\nFracture of the distal radius is a common clinical problem, particularly in older people with osteoporosis. There is considerable variation in the management, including rehabilitation, of these fractures. This is an update of a Cochrane review first published in 2002 and last updated in 2006.\nObjectives\nTo examine the effects of rehabilitation interventions in adults with conservatively or surgically treated distal radial fractures.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL 2014; Issue 12), MEDLINE, EMBASE, CINAHL, AMED, PEDro, OTseeker and other databases, trial registers, conference proceedings and reference lists of articles. We did not apply any language restrictions. The date of the last search was 12 January 2015.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs evaluating rehabilitation as part of the management of fractures of the distal radius sustained by adults. Rehabilitation interventions such as active and passive mobilisation exercises, and training for activities of daily living, could be used on their own or in combination, and be applied in various ways by various clinicians.\nData collection and analysis\nThe review authors independently screened and selected trials, and reviewed eligible trials. We contacted study authors for additional information. We did not pool data.\nMain results\nWe included 26 trials, involving 1269 mainly female and older patients. With few exceptions, these studies did not include people with serious fracture or treatment-related complications, or older people with comorbidities and poor overall function that would have precluded trial participation or required more intensive treatment. Only four of the 23 comparisons covered by these 26 trials were evaluated by more than one trial. Participants of 15 trials were initially treated conservatively, involving plaster cast immobilisation. Initial treatment was surgery (external fixation or internal fixation) for all participants in five trials. Initial treatment was either surgery or plaster cast alone in six trials. Rehabilitation started during immobilisation in seven trials and after post-immobilisation in the other 19 trials. As well as being small, the majority of the included trials had methodological shortcomings and were at high risk of bias, usually related to lack of blinding, that could affect the validity of their findings. Based on GRADE criteria for assessment quality, we rated the evidence for each of the 23 comparisons as either low or very low quality; both ratings indicate considerable uncertainty in the findings.\nFor interventions started during immobilisation, there was very low quality evidence of improved hand function for hand therapy compared with instructions only at four days after plaster cast removal, with some beneficial effects continuing one month later (one trial, 17 participants). 
There was very low quality evidence of improved hand function in the short-term, but not in the longer-term (three months), for early occupational therapy (one trial, 40 participants), and of a lack of differences in outcome between supervised and unsupervised exercises (one trial, 96 participants).\nFour trials separately provided very low quality evidence of clinically marginal benefits of specific interventions applied in addition to standard care (therapist-applied programme of digit mobilisation during external fixation (22 participants); pulsed electromagnetic field (PEMF) during cast immobilisation (60 participants); cyclic pneumatic soft tissue compression using an inflatable cuff placed under the plaster cast (19 participants); and cross-education involving strength training of the non-fractured hand during cast immobilisation with or without surgical repair (39 participants)).\nFor interventions started post-immobilisation, there was very low quality evidence from one study (47 participants) of improved function for a single session of physiotherapy, primarily advice and instructions for a home exercise programme, compared with 'no intervention' after cast removal. There was low quality evidence from four heterogeneous trials (30, 33, 66 and 75 participants) of a lack of clinically important differences in outcome in patients receiving routine physiotherapy or occupational therapy in addition to instructions for home exercises versus instructions for home exercises from a therapist. There was very low quality evidence of better short-term hand function in participants given physiotherapy than in those given either instructions for home exercises by a surgeon (16 participants, one trial) or a progressive home exercise programme (20 participants, one trial). Both trials (46 and 76 participants) comparing physiotherapy or occupational therapy versus a progressive home exercise programme after volar plate fixation provided low quality evidence in favour of a structured programme of home exercises preceded by instructions or coaching. One trial (63 participants) provided very low quality evidence of a short-term, but not persisting, benefit of accelerated compared with usual rehabilitation after volar plate fixation.\nFor trials testing single interventions applied post-immobilisation, there was very low quality evidence of no clinically significant differences in outcome in patients receiving passive mobilisation (69 participants, two trials), ice (83 participants, one trial), PEMF (83 participants, one trial), PEMF plus ice (39 participants, one trial), whirlpool immersion (24 participants, one trial), and dynamic extension splint for patients with wrist contracture (40 participants, one trial), compared with no intervention. This finding applied also to the trial (44 participants) comparing PEMF versus ice, and the trial (29 participants) comparing manual oedema mobilisation versus traditional oedema treatment. There was very low quality evidence from single trials of a short-term benefit of continuous passive motion post-external fixation (seven participants), intermittent pneumatic compression (31 participants) and ultrasound (38 participants).\nAuthors' conclusions\nThe available evidence from RCTs is insufficient to establish the relative effectiveness of the various interventions used in the rehabilitation of adults with fractures of the distal radius. Further randomised trials are warranted. 
However, in order to optimise research effort and engender the large multicentre randomised trials that are required to inform practice, these should be preceded by research that aims to identify priority questions.\n\nGiven the provided abstract, please respond to the following query: \"Background and aim\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Particularly in older women, a broken wrist (comprising a fracture at the lower end of the radius, one of the two forearm bones) can result from a fall onto an outstretched hand. Treatment usually includes putting the bone fragments back in place, if badly displaced, and immobilising the wrist in a plaster cast. Surgery may be considered for more seriously displaced fractures. Rehabilitation with interventions such as exercises and other physical interventions is used to help prevent complications, such as stiffness and aching, restore function and speed up recovery.\nThis review set out to evaluate the effects, primarily on wrist function, of different rehabilitation interventions for treating these injuries."
}
] |
query-laysum | 22123 | [
{
"role": "user",
"content": "Abstract: Background\nNeuroblastoma is an embryonic tumour of childhood that originates in the neural crest. It is the second most common extracranial malignant solid tumour of childhood.\nNeuroblastoma cells have the unique capacity to accumulate Iodine-123-metaiodobenzylguanidine (¹²³I-MIBG), which can be used for imaging the tumour. Moreover, ¹²³I-MIBG scintigraphy is not only important for the diagnosis of neuroblastoma, but also for staging and localization of skeletal lesions. If these are present, MIBG follow-up scans are used to assess the patient's response to therapy. However, the sensitivity and specificity of ¹²³I-MIBG scintigraphy to detect neuroblastoma varies according to the literature.\nPrognosis, treatment and response to therapy of patients with neuroblastoma are currently based on extension scoring of ¹²³I-MIBG scans. Due to its clinical use and importance, it is necessary to determine the exact diagnostic accuracy of ¹²³I-MIBG scintigraphy. In case the tumour is not MIBG avid, fluorine-18-fluorodeoxy-glucose (18F-FDG) positron emission tomography (PET) is often used and the diagnostic accuracy of this test should also be assessed.\nObjectives\nPrimary objectives:\n1.1 To determine the diagnostic accuracy of ¹²³I-MIBG (single photon emission computed tomography (SPECT), with or without computed tomography (CT)) scintigraphy for detecting a neuroblastoma and its metastases at first diagnosis or at recurrence in children from 0 to 18 years old.\n1.2 To determine the diagnostic accuracy of negative ¹²³I-MIBG scintigraphy in combination with 18F-FDG-PET(-CT) imaging for detecting a neuroblastoma and its metastases at first diagnosis or at recurrence in children from 0 to 18 years old, i.e. an add-on test.\nSecondary objectives:\n2.1 To determine the diagnostic accuracy of 18F-FDG-PET(-CT) imaging for detecting a neuroblastoma and its metastases at first diagnosis or at recurrence in children from 0 to 18 years old.\n2.2 To compare the diagnostic accuracy of ¹²³I-MIBG (SPECT-CT) and 18F-FDG-PET(-CT) imaging for detecting a neuroblastoma and its metastases at first diagnosis or at recurrence in children from 0 to 18 years old. This was performed within and between included studies. ¹²³I-MIBG (SPECT-CT) scintigraphy was the comparator test in this case.\nSearch methods\nWe searched the databases of MEDLINE/PubMed (1945 to 11 September 2012) and EMBASE/Ovid (1980 to 11 September 2012) for potentially relevant articles. Also we checked the reference lists of relevant articles and review articles, scanned conference proceedings and searched for unpublished studies by contacting researchers involved in this area.\nSelection criteria\nWe included studies of a cross-sectional design or cases series of proven neuroblastoma, either retrospective or prospective, if they compared the results of ¹²³I-MIBG (SPECT-CT) scintigraphy or 18F-FDG-PET(-CT) imaging, or both, with the reference standards or with each other. Studies had to be primary diagnostic and report on children aged between 0 to 18 years old with a neuroblastoma of any stage at first diagnosis or at recurrence.\nData collection and analysis\nOne review author performed the initial screening of identified references. 
Two review authors independently performed the study selection, extracted data and assessed the methodological quality.\nWe used data from two-by-two tables, describing at least the number of patients with a true positive test and the number of patients with a false negative test, to calculate the sensitivity, and if possible, the specificity for each included study.\nIf possible, we generated forest plots showing estimates of sensitivity and specificity together with 95% confidence intervals.\nMain results\nEleven studies met the inclusion criteria. Ten studies reported data on patient level: the scan was positive or negative. One study reported on all single lesions (lesion level). The sensitivity of ¹²³I-MIBG (SPECT-CT) scintigraphy (objective 1.1), determined in 608 of 621 eligible patients included in the 11 studies, varied from 67% to 100%. One study, that reported on a lesion level, provided data to calculate the specificity: 68% in 115 lesions in 22 patients. The sensitivity of ¹²³I-MIBG scintigraphy for detecting metastases separately from the primary tumour in patients with all neuroblastoma stages ranged from 79% to 100% in three studies and the specificity ranged from 33% to 89% for two of these studies.\nOne study reported on the diagnostic accuracy of 18F-FDG-PET(-CT) imaging (add-on test) in patients with negative ¹²³I-MIBG scintigraphy (objective 1.2). Two of the 24 eligible patients with proven neuroblastoma had a negative ¹²³I-MIBG scan and a positive 18F-FDG-PET(-CT) scan.\nThe sensitivity of 18F-FDG-PET(-CT) imaging as a single diagnostic test (objective 2.1) and compared to ¹²³I-MIBG (SPECT-CT) (objective 2.2) was only reported in one study. The sensitivity of 18F-FDG-PET(-CT) imaging was 100% versus 92% of ¹²³I-MIBG (SPECT-CT) scintigraphy. We could not calculate the specificity for both modalities.\nAuthors' conclusions\nThe reported sensitivities of ¹²³-I MIBG scintigraphy for the detection of neuroblastoma and its metastases ranged from 67 to 100% in patients with histologically proven neuroblastoma.\nOnly one study in this review reported on false positive findings. It is important to keep in mind that false positive findings can occur. For example, physiological uptake should be ruled out, by using SPECT-CT scans, although more research is needed before definitive conclusions can be made.\nAs described both in the literature and in this review, in about 10% of the patients with histologically proven neuroblastoma the tumour does not accumulate ¹²³I-MIBG (false negative results). For these patients, it is advisable to perform an additional test for staging and assess response to therapy. Additional tests might for example be 18F-FDG-PET(-CT), but to be certain of its clinical value, more evidence is needed.\nThe diagnostic accuracy of 18F-FDG-PET(-CT) imaging in case of a negative ¹²³I-MIBG scintigraphy could not be calculated, because only very limited data were available. Also the detection of the diagnostic accuracy of index test 18F-FDG-PET(-CT) imaging for detecting a neuroblastoma tumour and its metastases, and to compare this to comparator test ¹²³I-MIBG (SPECT-CT) scintigraphy, could not be calculated because of the limited available data at time of this search.\nAt the start of this project, we did not expect to find only very limited data on specificity. 
We now consider it would have been more appropriate to use the term \"the sensitivity to assess the presence of neuroblastoma\" instead of \"diagnostic accuracy\" for the objectives.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for clinical studies comparing ¹²³I-MIBG or 18F-FDG-PET imaging, or both, with microscopic examination of tissue suspected of neuroblastoma (histopathology). The evidence is current up to 11 September 2012.\nWe identified 11 eligible studies including 621 children that fulfilled our inclusion criteria: children < 18 years old with a neuroblastoma and ¹²³I-MIBG or 18F-FDG-PET imaging or both.\nAll studies included proven neuroblastoma."
}
] |
query-laysum | 22125 | [
{
"role": "user",
"content": "Abstract: Background\nPatients and their relatives often expect to be actively involved in decisions of treatment. Even during resuscitation and acute medical care, patients may want to have their relatives nearby, and relatives may want to be present if offered the possibility. The principle of family presence during resuscitation (FPDR) is a triangular relationship where the intervention of family presence affects the healthcare professionals, the relatives present, and the care of the patient involved. All needs and well-being must be balanced in the context of FPDR as the actions involving all three groups can impact the others.\nObjectives\nThe primary aim of this review was to investigate how offering relatives the option to be present during resuscitation of patients affects the occurrence of post-traumatic stress disorder (PTSD)-related symptoms in the relatives.\nThe secondary aim was to investigate how offering relatives the option to be present during resuscitation of patients affects the occurrence of other psychological outcomes in the relatives and what effect family presence compared to no family presence during resuscitation of patients has on patient morbidity and mortality.\nWe also wanted to investigate the effect of FPDR on medical treatment and care during resuscitation. Furthermore, we wanted to investigate and report the personal stress seen in healthcare professionals and if possible describe their attitudes toward the FPDR initiative.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, PsycINFO, and CINAHL from inception to 22 March 2022 without any language limits. We also checked references and citations of eligible studies using Scopus, and searched for relevant systematic reviews using Epistomonikos. Furthermore, we searched ClinicalTrials.gov, WHO ICTRP, and ISRCTN registry for ongoing trials; OpenGrey for grey literature; and Google Scholar for additional trials (all on 22 March 2022).\nSelection criteria\nWe included randomized controlled trials of adults who have witnessed a resuscitation attempt of a patient (who was their relative) at the emergency department or in the pre-hospital emergency medical service.\nThe participants of this review included relatives, patients, and healthcare professionals during resuscitation.\nWe included relatives aged 18 years or older who have witnessed a resuscitation attempt of a patient (who is their relative) in the emergency department or pre-hospital. We defined relatives as siblings, parents, spouses, children, or close friends of the patient, or any other descriptions used by the study authors. There were no limitations on adult age or gender.\nWe defined patient as a patient with cardiac arrest in need of cardiopulmonary resuscitation (CPR), a patient with a critical medical or traumatic life-threatening condition, an unconscious patient, or a patient in any other way at risk of sudden death. We included all types of healthcare professionals as described in the included studies. There were no limitations on age or gender.\nData collection and analysis\nWe checked titles and abstracts of studies identified by the search, and obtained the full reports of those studies deemed potentially relevant. Two review authors independently extracted data. As it was not possible to conduct meta-analyses, we synthesized data narratively.\nMain results\nThe electronic searches yielded a total of 7292 records after deduplication. 
We included 2 trials (3 papers) involving a total of 595 participants: a cluster-randomized trial from 2013 involving pre-hospital emergency medical services units in France, comparing systematic offer for a relative to witness CPR with the traditional practice, and its 1-year assessment; and a small pilot study from 1998 of FPDR in an emergency department in the UK.\nParticipants were 19 to 78 years old, and between 56% and 64% were women. PTSD was measured with the Impact of Event Scale, and the median score ranged from 0 to 21 (range 0 to 75; higher scores correspond to more severe disease). In the trial that accounted for most of the included participants (570/595), the frequency of PTSD-related symptoms was significantly higher in the control group after 3 and 12 months, and in the per-protocol analyses a significant statistical difference was found in favor of FPDR when looking at PTSD, anxiety and depression, and complicated grief after 1 year. One of the included studies also measured duration of patient resuscitation and personal stress in healthcare professionals during FPDR and found no difference between groups. Both studies had high risk of bias, and the evidence for all outcomes except one was assessed as very low certainty.\nAuthors' conclusions\nThere was insufficient evidence to draw any firm conclusions on the effects of FPDR on relatives' psychological outcomes.\nSufficiently powered and well-designed randomized controlled trials may change the conclusions of the review in future.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases on 22 March 2022 without any language limits. We checked references and contacted study authors to identify additional studies. We included randomized controlled trials (a type of study where participants are randomly assigned to one of two or more treatment groups) of adults who were present during resuscitation of their relative."
}
] |
query-laysum | 22126 | [
{
"role": "user",
"content": "Abstract: Background\nAlthough smoking cessation is currently the only guaranteed way to reduce the harm caused by tobacco smoking, a reasonable secondary tobacco control approach may be to try and reduce the harm from continued tobacco use amongst smokers unable or unwilling to quit. Possible approaches to reduce the exposure to toxins from smoking include reducing the amount of tobacco used, and using less toxic products, such as pharmaceutical, nicotine and potential reduced-exposure tobacco products (PREPs), as an alternative to cigarettes.\nObjectives\nTo assess the effects of interventions intended to reduce the harm to health of continued tobacco use, we considered the following specific questions: do interventions intended to reduce harm have an effect on long-term health status?; do they lead to a reduction in the number of cigarettes smoked?; do they have an effect on smoking abstinence?; do they have an effect on biomarkers of tobacco exposure?; and do they have an effect on biomarkers of damage caused by tobacco?\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Trials Register (CRS) on the 21st October 2015, using free-text and MeSH terms for harm reduction, smoking reduction and cigarette reduction.\nSelection criteria\nRandomized or quasi-randomized controlled trials of interventions to reduce the amount smoked, or to reduce harm from smoking by means other than cessation. We include studies carried out in smokers with no immediate desire to quit all tobacco use. Primary outcomes were change in cigarette consumption, smoking cessation and any markers of damage or benefit to health, measured at least six months from the start of the intervention.\nData collection and analysis\nWe assessed study eligibility for inclusion using standard Cochrane methods. We pooled trials with similar interventions and outcomes (> 50% reduction in cigarettes a day (CPD) and long-term smoking abstinence), using fixed-effect models. Where it was not possible to meta-analyse data, we summarized findings narratively.\nMain results\nTwenty-four trials evaluated interventions to help those who smoke to cut down the amount smoked or to replace their regular cigarettes with PREPs, compared to placebo, brief intervention, or a comparison intervention. None of these trials directly tested whether harm reduction strategies reduced the harms to health caused by smoking. Most trials (14/24) tested nicotine replacement therapy (NRT) as an intervention to assist reduction. In a pooled analysis of eight trials, NRT significantly increased the likelihood of reducing CPD by at least 50% for people using nicotine gum or inhaler or a choice of product compared to placebo (risk ratio (RR) 1.75, 95% confidence interval (CI) 1.44 to 2.13; 3081 participants). Where average changes from baseline were compared for different measures, carbon monoxide (CO) and cotinine generally showed smaller reductions than CPD. Use of NRT versus placebo also significantly increased the likelihood of ultimately quitting smoking (RR 1.87, 95% CI 1.43 to 2.44; 8 trials, 3081 participants; quality of the evidence: low). Two trials comparing NRT and behavioural support to brief advice found a significant effect on reduction, but no significant effect on cessation. We found one trial investigating each of the following harm reduction intervention aids: bupropion, varenicline, electronic cigarettes, snus, plus another of nicotine patches to facilitate temporary abstinence. 
The evidence for all five intervention types was therefore imprecise, and it is unclear whether or not these aids increase the likelihood of smoking reduction or cessation. Two trials investigating two different types of behavioural advice and instructions on reducing CPD also provided imprecise evidence. Therefore, the evidence base for this comparison is inadequate to support the use of these types of behavioural advice to reduce smoking. Four studies of PREPs (cigarettes with reduced levels of tar, carbon and nicotine, and in one case delivered using an electronically-heated cigarette smoking system) showed some reduction in exposure to some toxicants, but it is unclear whether this would substantially alter the risk of harm. We judged the included studies to be generally at a low or unclear risk of bias; however, there were some ratings of high risk, due to a lack of blinding and the potential for detection bias. Using the GRADE system, we rated the overall quality of the evidence for our cessation outcomes as ‘low’ or ‘very low’, due to imprecision and indirectness. A ‘low’ grade means that further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate. A ‘very low’ grade means we are very uncertain about the estimate.\nAuthors' conclusions\nPeople who do not wish to quit can be helped to cut down the number of cigarettes they smoke and to quit smoking in the long term, using NRT, despite original intentions not to do so. However, we rated the evidence contributing to the cessation outcome for NRT as 'low' by GRADE standards. There is a lack of evidence to support the use of other harm reduction aids to reduce the harm caused by continued tobacco smoking. This could simply be due to the lack of high-quality studies (our confidence in cessation outcomes for these aids is rated 'low' or 'very low' due to imprecision by GRADE standards), meaning that we may have missed a worthwhile effect, or due to a lack of effect on reduction or quit rates. It is therefore important that more high-quality RCTs are conducted, and that these also measure the long-term health effects of treatments.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 20 randomised controlled trials that tested ways to help people to cut down the number of cigarettes they smoked. Some of these just advised smokers to smoke less, but most also provided them with a product to help them cut down: nicotine replacement therapy (NRT), varenicline, bupropion, electronic cigarettes (ecigs), or snus (a form of smokeless, oral tobacco). We also found four randomized controlled trials that tested the effects of using cigarettes designed to reduce the damage caused by smoking: reduced tar, carbon or nicotine cigarettes. Most of the studies used NRT to help people to reduce their smoking. All of the studies included people who were not planning to quit smoking soon. The research is current to October 2015."
}
] |
query-laysum | 22127 | [
{
"role": "user",
"content": "Abstract: Background\nPre-cancerous lesions of cervix (cervical intraepithelial neoplasia (CIN)) are usually treated with excisional or ablative procedures. In the UK, the National Health Service (NHS) cervical screening guidelines suggest that over 80% of treatments should be performed in an outpatient setting (colposcopy clinics). Furthermore, these guidelines suggest that analgesia should always be given prior to laser or excisional treatments. Currently various pain relief strategies are employed that may reduce pain during these procedures.\nObjectives\nTo assess whether the administration of pain relief (analgesia) reduces pain during colposcopy treatment and in the postoperative period.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 2), MEDLINE (1950 to March week 3, 2016) and Embase (1980 to week 12, 2016) for studies of any design relating to analgesia for colposcopic management. We also searched registers of clinical trials, abstracts of scientific meetings, reference lists of included studies and contacted experts in the field.\nSelection criteria\nRandomised controlled trials (RCTs) that compared all types of pain relief before, during or after outpatient treatment to the cervix, in women with CIN undergoing loop excision, laser ablation, laser excision or cryosurgery in an outpatient colposcopy clinic setting.\nData collection and analysis\nWe independently assessed study eligibility, extracted data and assessed risk of bias. We entered data into Review Manager 5 and double checked it for accuracy. Where possible, we expressed results as mean pain score and standard error of the mean with 95% confidence intervals (CI) and synthesised data in a meta-analysis.\nMain results\nWe included 19 RCTs (1720 women) of varying methodological quality in the review. These trials compared a variety of interventions aimed at reducing pain in women who underwent treatment for CIN, including cervical injection with lignocaine alone, lignocaine with adrenaline, buffered lignocaine with adrenaline, prilocaine with felypressin, oral analgesics (non-steroidal anti-inflammatory drugs (NSAIDs)), inhalation analgesia (gas mixture of isoflurane and desflurane), lignocaine spray, cocaine spray, local application of benzocaine gel, lignocaine-prilocaine cream (EMLA cream) and transcutaneous electrical nerve stimulation (TENS).\nMost comparisons were restricted to single trial analyses and were under-powered to detect differences in pain scores between treatments that may or may not have been present. There was no difference in pain relief between women who received local anaesthetic infiltration (lignocaine 2%; administered as a paracervical or direct cervical injection) and a saline placebo (mean difference (MD) -13.74; 95% CI -34.32 to 6.83; 2 trials; 130 women; low quality evidence). However, when local anaesthetic was combined with a vasoconstrictor agent (one trial used lignocaine plus adrenaline while the second trial used prilocaine plus felypressin), there was less pain (on visual analogue scale (VAS)) compared with no treatment (MD -23.73; 95% CI -37.53 to -9.93; 2 trials; 95 women; low quality evidence). Comparing two preparations of local anaesthetic combined with vasoconstrictor, prilocaine plus felypressin did not differ from lignocaine plus adrenaline for its effect on pain control (MD -0.05; 95% CI -0.26 to 0.16; 1 trial; 200 women). 
Although the mean (± standard deviation (SD)) observed blood loss score was less with lignocaine plus adrenaline (1.33 ± 1.05) compared with prilocaine plus felypressin (1.74 ± 0.98), the difference was not clinically significant as the overall scores in both groups were low (MD 0.41; 95% CI 0.13 to 0.69; 1 trial; 200 women). Inhalation of gas mixture (isoflurane and desflurane) in addition to standard cervical injection with prilocaine plus felypressin resulted in less pain during the LLETZ (loop excision of the transformation zone) procedure (MD -7.20; 95% CI -12.45 to -1.95; 1 trial; 389 women). Lignocaine plus ornipressin resulted in less measured blood loss (MD -8.75 ml; 95% CI -10.43 to -7.07; 1 trial; 100 women) and a shorter duration of treatment (MD -7.72 minutes; 95% CI -8.49 to -6.95; 1 trial; 100 women) than cervical infiltration with lignocaine alone. Buffered solution (sodium bicarbonate buffer mixed with lignocaine plus adrenaline) was not superior to non-buffered solution of lignocaine plus adrenaline in relieving pain during the procedure (MD -8.00; 95% CI -17.57 to 1.57; 1 trial; 52 women).\nOne meta-analysis found no difference in pain using VAS between women who received oral analgesic and women who received placebo (MD -3.51; 95% CI -10.03 to 3.01; 2 trials; 129 women; low quality evidence).\nCocaine spray was associated with less pain (MD -28.00; 95% CI -37.86 to -18.14; 1 trial; 50 women) and blood loss (MD 0.04; 95% CI 0 to 0.70; 1 trial; 50 women) than placebo.\nNone of the trials reported serious adverse events and the majority of trials were at moderate or high risk of bias (13 trials).\nAuthors' conclusions\nBased on two small trials, there was no difference in pain relief in women receiving oral analgesics compared with placebo or no treatment (MD -3.51; 95% CI -10.03 to 3.01; 129 women). We consider this evidence to be of a low to moderate quality. In routine clinical practice, intracervical injection of local anaesthetic with a vasoconstrictor (lignocaine plus adrenaline or prilocaine plus felypressin) appears to be the optimum analgesia for treatment. However, further high quality, adequately powered trials should be undertaken in order to provide the data necessary to estimate the efficacy of oral analgesics, the optimal route of administration and dose of local anaesthetics.\n\nGiven the provided abstract, please respond to the following query: \"The aim of the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The purpose of this review is to determine which, if any, pain relief should be used during cervical colposcopy treatment."
}
] |
query-laysum | 22128 | [
{
"role": "user",
"content": "Abstract: Background\nAcquired adult-onset hearing loss is a common long-term condition for which the most common intervention is hearing aid fitting. However, up to 40% of people fitted with a hearing aid either fail to use it or may not gain optimal benefit from it. This is an update of a review first published in The Cochrane Library in 2014.\nObjectives\nTo assess the long-term effectiveness of interventions to promote the use of hearing aids in adults with acquired hearing loss fitted with at least one hearing aid.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; Central Register of Controlled Trials (CENTRAL 2016, Issue 5); PubMed; EMBASE; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 13 June 2016.\nSelection criteria\nWe included randomised controlled trials (RCTs) of interventions designed to improve or promote hearing aid use in adults with acquired hearing loss compared with usual care or another intervention. We excluded interventions that compared hearing aid technology. We classified interventions according to the 'chronic care model' (CCM). The primary outcomes were hearing aid use (measured as adherence or daily hours of use) and adverse effects (inappropriate advice or clinical practice, or patient complaints). Secondary patient-reported outcomes included quality of life, hearing handicap, hearing aid benefit and communication. Outcomes were measured over the short (</= 12 weeks), medium (> 12 to < 52 weeks) and long term (one year plus).\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane.\nMain results\nWe included 37 studies involving a total of 4129 participants. Risk of bias across the included studies was variable. We judged the GRADE quality of evidence to be very low or low for the primary outcomes where data were available.\nThe majority of participants were over 65 years of age with mild to moderate adult-onset hearing loss. There was a mix of new and experienced hearing aid users. Six of the studies (287 participants) assessed long-term outcomes.\nAll 37 studies tested interventions that could be classified using the CCM as self-management support (ways to help someone to manage their hearing loss and hearing aid(s) better by giving information, practice and experience at listening/communicating or by asking people to practise tasks at home) and/or delivery system design interventions (just changing how the service was delivered).\nSelf-management support interventions\nWe found no studies that investigated the effect of these interventions on adherence, adverse effects or hearing aid benefit. Two studies reported daily hours of hearing aid use but we were unable to combine these in a meta-analysis. There was no evidence of a statistically significant effect on quality of life over the medium term. Self-management support reduced short- to medium-term hearing handicap (two studies, 87 participants; mean difference (MD) -12.80, 95% confidence interval (CI) -23.11 to -2.48 (0 to 100 scale)) and increased the use of verbal communication strategies in the short to medium term (one study, 52 participants; MD 0.72, 95% CI 0.21 to 1.23 (0 to 5 scale)). The clinical significance of these statistical findings is uncertain. It is likely that the outcomes were clinically significant for some, but not all, participants. 
Our confidence in the quality of this evidence was very low. No self-management support studies reported long-term outcomes.\nDelivery system design interventions\nThese interventions did not significantly affect adherence or daily hours of hearing aid use in the short to medium term, or adverse effects in the long term. We found no studies that investigated the effect of these interventions on quality of life. There was no evidence of a statistically or clinically significant effect on hearing handicap, hearing aid benefit or the use of verbal communication strategies in the short to medium term. Our confidence in the quality of this evidence was low or very low. Long-term outcome measurement was rare.\nCombined self-management support/delivery system design interventions\nOne combined intervention showed evidence of a statistically significant effect on adherence in the short term (one study, 167 participants, risk ratio (RR) 1.06, 95% CI 1.00 to 1.12). However, there was no evidence of a statistically or clinically significant effect on daily hours of hearing aid use over the long term, or the short to medium term. No studies of this type investigated adverse effects. There was no evidence of an effect on quality of life over the long term, or short to medium term. These combined interventions reduced hearing handicap in the short to medium term (14 studies, 681 participants; standardised mean difference (SMD) -0.26, 95% CI -0.50 to -0.02). This represents a small-moderate effect size but there is no evidence of a statistically significant effect over the long term. There was evidence of a statistically, but not clinically, significant effect on long-term hearing aid benefit (two studies, 69 participants, MD 0.30, 95% CI 0.02 to 0.58 (1 to 5 scale)), but no evidence of an effect over the short to medium term. There was evidence of a statistically, but not clinically, significant effect on the use of verbal communication strategies in the short term (four studies, 223 participants, MD 0.45, 95% CI 0.15 to 0.74 (0 to 5 scale)), but not the long term. Our confidence in the quality of this evidence was low or very low.\nWe found no studies that assessed the effect of other CCM interventions (decision support, the clinical information system, community resources or health system changes).\nAuthors' conclusions\nThere is some low to very low quality evidence to support the use of self-management support and complex interventions combining self-management support and delivery system design in adult auditory rehabilitation. However, effect sizes are small. The range of interventions that have been tested is relatively limited. Future research should prioritise: long-term outcome assessment; development of a core outcome set for adult auditory rehabilitation; and study designs and outcome measures that are powered to detect incremental effects of rehabilitative healthcare system changes.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Thirty-three of the 37 studies looked at ways to help someone to manage their hearing loss and hearing aid(s) better by giving information, practice and experience at listening/communicating or by asking people to practise tasks at home. These are forms of self-management support. Most of these studies also changed how the self-management support was provided, for example by changing the number of appointment sessions or using telephone or email follow-up.\nSix studies looked at the effect of just changing how the service was delivered. No studies looked at the effect of using guidelines or standards, computerised medical record systems, community resources or changing the health system.\nWe found no evidence that the interventions helped people to wear their hearing aids for more hours per day over the short, medium or long term. One study that used interactive videos to give information after hearing aid fitting encouraged more people to wear their hearing aids.\nWe found no evidence of adverse effects of any of the interventions, but it was rare for studies to look for adverse effects.\nGiving self-management support meant that people reported less hearing handicap and improved verbal communication over the short term. When this was combined with changing how the support was delivered people also reported slightly more hearing aid benefit over the long term.\nOnly six studies (287 people) looked at how people were doing after a year or more."
}
] |
query-laysum | 22129 | [
{
"role": "user",
"content": "Abstract: Background\nIt has been suggested that low serum zinc levels may be associated with suboptimal outcomes of pregnancy, such as prolonged labour, atonic postpartum haemorrhage, pregnancy-induced hypertension, preterm labour and post-term pregnancies, although these associations have not yet been established. This is an update of a review first published in 1997 and subsequently updated in 2007, 2012 and 2015.\nObjectives\n1. To compare the effects on maternal, fetal, neonatal and infant outcomes in healthy pregnant women receiving zinc supplementation versus no zinc supplementation, or placebo.\n2. To assess the above outcomes in a subgroup analysis reviewing studies performed in women who are, or are likely to be, zinc-deficient.\nSearch methods\nFor this update, we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (3 July 2020), and reference lists of retrieved studies.\nSelection criteria\nRandomised trials of zinc supplementation versus no zinc supplementation or placebo administration during pregnancy, earlier than 27 weeks' gestation. We excluded quasi-randomised controlled trials. We intended to include studies presented only as abstracts, if they provided enough information or, if necessary, by contacting authors to analyse them against our criteria; we did not find any such studies.\nData collection and analysis\nThree review authors applied the study selection criteria, assessed trial quality and extracted data. When necessary, we contacted study authors for additional information. We assessed the certainty of the evidence using GRADE.\nMain results\nFor this update, we included 25 randomised controlled trials (RCTs) involving over 18,000 women and their babies. The overall risk of bias was low in half of the studies. The evidence suggests that zinc supplementation may result in little or no difference in reducing preterm births (risk ratio (RR) 0.87, 95% confidence interval (CI) 0.74 to 1.03; 21 studies, 9851 participants; low-certainty evidence). Further, zinc supplementation may make little or no difference in reducing the risk of stillbirth (RR 1.22, 95% CI 0.80 to 1.88; 7 studies, 3295 participants; low-certainty evidence), or perinatal deaths (RR 1.10, 95% CI 0.81 to 1.51; 2 studies, 2489 participants; low-certainty evidence). It is unclear whether zinc supplementation reduces neonatal death, because the certainty of the evidence is very low. Finally, for other birth outcomes, zinc supplementation may make little or no difference to mean birthweight (MD 13.83, 95% CI -15.81 to 43.46; 22 studies, 7977 participants; low-certainty evidence), and probably makes little or no difference in reducing the risk of low birthweight (RR 0.94, 95% CI 0.79 to 1.13; 17 studies, 7399 participants; moderate-certainty evidence) and small-for-gestational age babies when compared to placebo or no zinc supplementation (RR 1.02, 95% CI 0.92 to 1.12; 9 studies, 5330 participants; moderate-certainty evidence). We did not conduct subgroup analyses, as very few studies used normal zinc populations.\nAuthors' conclusions\nThere is not enough evidence that zinc supplementation during pregnancy results in improvements in maternal or neonatal outcomes. 
Future research to address ways of improving the overall nutritional status of pregnant women, particularly in low-income regions, and not looking at zinc in isolation, should be an urgent priority.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies in July 2020. This updated review now includes 25 randomised controlled trials, involving over 18,000 women and their babies. We found that zinc supplementation in pregnancy may make little to no difference in reducing the risk of preterm births, stillbirths, or deaths around the time of birth, when compared to no zinc supplementation or placebo. Zinc supplementation may make little or no difference to the birthweight of babies, and probably makes little or no difference to the number of babies born either with a low birthweight or small for their gestational age, when compared with no zinc supplementation, or with giving a placebo. We cannot be sure whether zinc supplementation reduces death in newborns, because the certainty of the evidence is very low."
}
] | query-laysum | 22130 | [
{
"role": "user",
"content": "Abstract: Background\nIntermittent urethral self-dilatation is sometimes recommended to reduce the risk of recurrent urethral stricture. There is no consensus as to whether it is a clinically effective or cost-effective intervention in the management of this disease.\nObjectives\nThe purpose of this review is to evaluate the clinical effectiveness and cost-effectiveness of intermittent self-dilatation after urethral stricture surgery in males compared to no intervention. We also compared different programmes of, and devices for, intermittent self-dilatation. .\nSearch methods\nWe searched the Cochrane Incontinence Group Specialised Register (searched 7 May 2014), CENTRAL (2014, Issue 4), MEDLINE (1 January 1946 to Week 3April 2014), PREMEDLINE (covering 29 April 2014), EMBASE (1 January 1947 to Week 17 2014), CINAHL (31 December 1981 to 30 April 2014) OpenGrey (searched 6 May 2014), ClinicalTrials.gov (6 May 2014), WHO International Clinical Trials Registry Platform (6 May 2014), Current Controlled Trials (6 May 2014) and the reference lists of relevant articles.\nSelection criteria\nRandomised and quasi-randomised trials where one arm was a programme of intermittent self-dilatation for urethral stricture were identified. Studies were excluded if they were not randomised or quasi-randomised trials, or if they pertained to clean intermittent self-catheterisation for bladder emptying.\nData collection and analysis\nTwo authors screened the records for relevance and methodological quality. Data extraction was performed according to predetermined criteria using data extraction forms. Analyses were carried out in Cochrane Review Manager (RevMan 5). The primary outcomes were patient-reported symptoms and health-related quality of life, and risk of recurrence; secondary outcomes were adverse events, acceptability of the intervention to patients and cost-effectiveness. Quality of evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nEleven trials were selected for inclusion in the review, including a total of 776 men. They were generally small; all were of poor quality and all were deemed to have high risk of bias.\nPerforming intermittent self-dilatationversus not performing intermittent self-dilatation\nThe data from six trials were heterogeneous, imprecise and had a high risk of bias, but indicated that recurrent urethral stricture was less likely in men who performed intermittent self-dilatation than men who did not perform intermittent self-dilatation (RR 0.70, 95% CI 0.48 to 1.00; very low quality evidence). Adverse events were generally poorly reported: two trials did not report adverse events and two trials reported adverse events only for the intervention group. Meta-analysis of the remaining two trials found no evidence of a difference between performing intermittent self-dilatation and not performing it (RR 0.60, 95% CI 0.11 to 3.26). No trials formally assessed acceptability, and no trials reported on patient-reported lower urinary tract symptoms, patient-reported health-related quality of life, or cost-effectiveness.\nOne programme of intermittent self-dilatationversus another\nWe identified two trials that compared different durations of intermittent self-dilatation, but data were not combined. One study could not draw robust conclusions owing to cross-over, protocol deviation, administrative error, post-hoc analysis and incomplete outcome reporting. 
The other study found no evidence of a difference between intermittent self-dilatation for six months versus for 12 months after optical urethrotomy (RR 0.67, 95% CI 0.12 to 3.64), although again the evidence is limited by the small sample size and risk of bias in the included study. Adverse events were reported narratively and were not stratified by group. No trials formally assessed acceptability, and no trials reported on patient-reported lower urinary tract symptoms, patient-reported health-related quality of life, or cost-effectiveness.\nOne device for performing intermittent self-dilatationversus another\nThree trials compared one device for performing intermittent self-dilatation with another. Results from one trial at a high risk of bias were too uncertain to determine the effects of a low friction hydrophilic catheter and a standard polyvinyl chloride catheter on the risk of recurrent urethral stricture (RR 0.32, 95% CI 0.07 to 1.40). Similarly one study did not find evidence of a difference between one percent triamcinolone gel for lubricating the intermittent self-dilatation catheter versus water-based gel on risk of recurrent urethral stricture (RR 0.68, 95% CI 0.35 to 1.32). Two trials reported adverse events, but one did not provide sufficient detail for analysis. The other small study reported fewer instances of prostatitis, urethral bleeding or bacteriuria with a low friction hydrophilic catheter compared with a standard polyvinyl chloride catheter (RR 0.13, 95% CI 0.02 to 0.98). ‘Happiness with the intervention’ was assessed using a non-validated scale in one study, but no trials formally assessed patient-reported health-related quality of life or acceptability. No trials reported on patient-reported lower urinary tract symptoms or cost-effectiveness.\nGRADE quality assessment\nThe evidence that intermittent self-dilatation reduces the risk of recurrent urethral stricture after surgical intervention was downgraded to 'very low' on the basis that the studies comprising the meta-analysis were deemed to have high risk of bias, and the data was imprecise and inconsistent.\nInsufficient evidence\nNo trials provided cost-effectiveness data or used a validated patient-reported outcome measure, and adverse events were not reported rigorously. Acceptability of the intervention to patients has not been assessed quantitatively or qualitatively.\nAuthors' conclusions\nPerforming intermittent self-dilatation may confer a reduced risk of recurrent urethral stricture after endoscopic treatment. We have very little confidence in the estimate of the effect owing to the very low quality of the evidence. Evidence for other comparisons and outcomes is limited. Further research is required to determine whether the apparent benefit is sufficient to make the intervention worthwhile, and in whom.\n\nGiven the provided abstract, please respond to the following query: \"STUDY CHARACTERISTICS\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 11 trials involving a total of 776 men across eight countries for this review."
}
] | query-laysum | 22131 | [
{
"role": "user",
"content": "Abstract: Background\nExtravasation injury, a complication commonly seen in the neonatal intensive care unit, can result in scarring with cosmetic and functional sequelae. A wide variety of treatments are available, including subcutaneous irrigation with saline (with or without hyaluronidase), liposuction, use of specific antidotes, topical applications, and normal wound care with dry or wet dressings. All such treatments aim to prevent or reduce the severity of complications.\nObjectives\nPrimary objective\nTo compare the efficacy and safety of saline irrigation or saline irrigation with prior hyaluronidase infiltration versus no intervention or normal wound care for tissue healing in neonates with extravasation injury.\nSecondary objectives\nTo evaluate by subgroup analysis of controlled trials the influence of type of extravasate, timing of irrigation following extravasation, and postmenstrual age (PMA) of the neonate at the time of injury on outcomes and adverse effects.\nSpecifically, we planned to perform subgroup analysis for the primary outcome, if appropriate, by examining:\n1. time to irrigation from identified extravasation injury (< 1 hour or ≥ 1 hour);\n2. type of extravasate (parenteral nutrition fluid or other fluids or medications);\n3. amount of saline used (< 500 mL or ≥ 500 mL); and\n4. PMA at injury (< 37 completed weeks or ≥ 37 completed weeks).\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 1), MEDLINE via PubMed (1966 to 2 February 2017), Embase (1980 to 2 February 2017), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 2 February 2017). We also searched clinical trial databases, conference proceedings, and reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials. We used the Google Scholar search tool for reverse citations of relevant articles.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-randomised controlled trials comparing saline irrigation with or without hyaluronidase infiltration versus no intervention or normal wound care for the management of extravasation injury in neonates.\nData collection and analysis\nThree review authors independently reviewed and identified articles for possible inclusion in this review. We used the GRADE approach to assess the quality of evidence.\nMain results\nWe found no eligible studies. Our search revealed 10 case reports or case series describing successful outcomes with different interventions for this condition.\nAuthors' conclusions\nTo date, no RCTs have examined the effects of saline irrigation with or without prior hyaluronidase infiltration for management of extravasation injury in neonates. Saline irrigation is frequently reported in the literature as an intervention for management of extravasation injury in neonates. Research should focus first on evaluating the efficacy and safety of this intervention through RCTs. It will also be important for investigators to determine effect size by examining the timing of the intervention, the nature of the infusate, and severity of injury at the time of intervention.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Preterm and sick term infants requiring intravenous fluids and medications are vulnerable to tissue injury secondary to extravasation, that is, leakage of fluid into surrounding tissue. Such injury can result in scarring with consequent cosmetic issues and, in some infants, functional limitations. Remedial surgical intervention may be required for some babies. Saline flush, with or without prior infiltration of hyaluronidase (a protein that promotes the breakdown of barriers that hold tissue planes together), is widely used for the management of severe extravasation injury in neonates, and is intended to prevent or reduce complications following extravasation. Conservative treatment with normal wound care and various topical dressings is commonly used at different stages of extravasation injury with variable results."
}
] | query-laysum | 22132 | [
{
"role": "user",
"content": "Abstract: Background\nAmbulatory or outpatient anaesthesia is performed in patients who are discharged on the same day as their surgery. Perioperative complications such as postoperative nausea and vomiting (PONV), postoperative behavioural disturbances and cardiorespiratory complications should be minimized in ambulatory anaesthesia. The choice of anaesthetic agents and techniques can influence the occurrence of these complications and thus delay in discharge.\nObjectives\nThe objective of this review was to evaluate the risk of complications (the risk of postoperative nausea and vomiting (PONV), admission or readmission to hospital, postoperative behavioural disturbances and perioperative respiratory and cardiovascular complications) and recovery times (time to discharge from recovery ward and time to discharge from hospital) comparing the use of intravenous to inhalational anaesthesia for paediatric outpatient surgery.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library (2013, Issue 8); MEDLINE (1948 to 1 October 2013); EMBASE (1974 to 1 October 2013); Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) (1982 to 1 October 2013). We also handsearched relevant journals and searched the reference lists of the articles identified.\nSelection criteria\nWe included randomized controlled trials comparing paediatric outpatient surgery using intravenous versus inhalational anaesthesia.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted the data. When necessary, we requested additional information and clarification of published data from the authors of individual trials.\nMain results\nWe included 16 trials that involved 900 children in this review. Half of all the studies did not describe the generation of randomized sequence and most studies did not describe adequate allocation sequence concealment. The included studies showed variability in the types and combinations of drugs and the duration of anaesthesia, limiting the meta-analysis and interpretation of the results.\nFor the induction and maintenance of anaesthesia there was a significant difference favouring intravenous anaesthesia with propofol; the incidence of PONV was 32.6% for sevoflurane and 16.1% for propofol (odds ratio (OR) 2.96; 95% confidence interval (CI) 1.35 to 6.49, four studies, 176 children, low quality evidence). The risk of postoperative behavioural disturbances also favoured intravenous anaesthesiaas the incidence was 24.7% for sevoflurane and 11.5% for propofol (OR 2.67; 95% CI 1.14 to 6.23, four studies, 176 children, very low quality evidence). There were no differences between groups in the risk of intraoperative and postoperative respiratory and cardiovascular complications (OR 0.75; 95% CI 0.27 to 2.13, three studies,130 children, very low quality evidence) and there was no difference in the time to recovery from anaesthesia and discharge from hospital. These results should be interpreted with caution due to heterogeneity between studies in the type and duration of operations, types of reported complications and the high risk of bias in almost all studies. Two studies (105 participants) compared halothane to propofol and showed heterogeneity in duration of anaesthesia and in the type of ambulatory procedure. 
For the risk of PONV the results of the studies were conflicting, and for the risks of intraoperative and postoperative complications there were no significant differences between the groups.\nFor the maintenance of anaesthesia there was a significant difference favouring anaesthesia with propofol, with or without nitrous oxide (N2O), when compared to thiopentone and halothane + N2O (OR 3.23; 95% CI 1.49 to 7.02, four studies, 176 children, low quality evidence; and OR 7.44; 95% CI 2.60 to 21.26, two studies, 87 children, low quality evidence), respectively. For the time to discharge from the recovery room, there were no significant differences between groups. The studies were performed with different ambulatory surgeries and a high risk of bias.\nFour studies (250 participants) compared the induction of anaesthesia by the inhalational or intravenous route, with inhalational anaesthesia for maintenance, and found no significant differences between groups in all outcomes (the risk of PONV, behavioural disturbances, respiratory and cardiovascular complications and time to discharge from recovery room). Meta-analysis was not done in this comparison because of significant clinical heterogeneity.\nReadmission to hospital was not reported in any of the included studies. No other adverse effects were reported.\nAuthors' conclusions\nThere is insufficient evidence to determine whether intravenous anaesthesia with propofol for induction and maintenance of anaesthesia in paediatric outpatients undergoing surgery reduces the risk of postoperative nausea and vomiting and the risk of behavioural disturbances compared with inhaled anaesthesia. This evidence is of poor quality. More high-quality studies are needed to compare the different types of anaesthesia in different subsets of children undergoing ambulatory surgery.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Study characteristics\nWe included 16 studies (900 participants) in this review. All participants were otherwise healthy children (aged under 15 years) who were scheduled for day surgery. The studies compared different types of intravenous and inhaled anaesthetic medications."
}
] | query-laysum | 22133 | [
{
"role": "user",
"content": "Abstract: Background\nLoss of muscle mass and muscle weakness are common complications to cirrhosis and are associated with increased morbidity and mortality. Therefore, physical exercise may benefit people with cirrhosis.\nObjectives\nTo assess the beneficial and harmful effects of physical exercise versus sham exercise or no exercise for people with cirrhosis.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE Ovid, Embase Ovid, and three other databases, including manual searches through reference lists, abstracts, and presentations at conferences and meetings, Google Scholar, and online trial registers in February 2018.\nSelection criteria\nWe included randomised clinical trials regardless of publication status or language. Inclusion criteria were cirrhosis irrespective of the aetiology or stage. Interventions were physical exercise compared with sham exercise or no intervention.\nData collection and analysis\nThree review authors independently extracted data. We undertook meta-analyses and presented results using risk ratios (RR) for dichotomous outcomes and mean differences (MD) for continuous outcomes, both with 95% confidence intervals (CI) and I2 values as markers of imprecision and heterogeneity. We assessed bias control using the Cochrane Hepato-Biliary Group domains and determined the credibility of the evidence using GRADE.\nMain results\nWe included six randomised clinical trials with 173 participants. All participants had Child-Pugh stage A or B cirrhosis. The intervention groups participated in eight to 14 weeks of physical exercise (aerobic: three trials; resistance: one trial; or aerobic plus resistance training: two trials). Control groups underwent sham exercise (supervised relaxation: one trial) or no intervention (five trials). None of the 89 participants allocated to exercise versus two of 84 participants in the control group died (RR 0.19, 95% CI 0.01 to 3.73; moderate-quality evidence). The cause of death was acute-on-chronic liver disease for both participants. Nine participants in the exercise group and 13 in the control group experienced serious adverse events (RR 0.61, 95% CI 0.19 to 1.94; low-quality evidence).\nPhysical exercise showed no beneficial or detrimental effect on health-related quality of life assessed by the Chronic Liver Disease Questionnaire (MD 0.11, 95% CI –0.44 to 0.67; low-quality evidence). Likewise, physical exercise had no clear effect on physical fitness measured by peak exercise oxygen uptake (MD 0.3 mL/kg/minute, 95 % CI –2.74 to 3.35; low-quality evidence) and Six-Minute Walk Test (MD 56.06 min, 95% CI –9.14 to 121.26; very low-quality evidence). Physical exercise showed no clear effect on mid-thigh circumference (MD 1.76 cm, 95% CI –0.26 to 3.77; low-quality evidence), but showed an increase in mid-arm circumference (MD 2.61 cm, 95% CI 0.36 to 4.85; low-quality evidence).\nAuthors' conclusions\nWe found no clear beneficial or harmful effect of physical exercise on mortality, morbidity, or health-related quality of life. Further evidence is needed to evaluate the beneficial and harmful effects of physical exercise on clinical outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the evidence for the effect of physical exercise was of low or very low quality. Factors that downgraded the quality of the evidence included lack of trials with a low risk of bias, small trials, and not similar results across trials."
}
] | query-laysum | 22134 | [
{
"role": "user",
"content": "Abstract: Background\nFibromyalgia is characterized by chronic widespread pain that leads to reduced physical function. Exercise training is commonly recommended as a treatment for management of symptoms. We examined the literature on resistance training for individuals with fibromyalgia. Resistance training is exercise performed against a progressive resistance with the intention of improving muscle strength, muscle endurance, muscle power, or a combination of these.\nObjectives\nTo evaluate the benefits and harms of resistance exercise training in adults with fibromyalgia. We compared resistance training versus control and versus other types of exercise training.\nSearch methods\nWe searched nine electronic databases (The Cochrane Library, MEDLINE, EMBASE, CINAHL, PEDro, Dissertation Abstracts, Current Controlled Trials, World Health Organization (WHO) International Clinical Trials Registry Platform, AMED) and other sources for published full-text articles. The date of the last search was 5 March 2013. Two review authors independently screened 1856 citations, 766 abstracts and 156 full-text articles. We included five studies that met our inclusion criteria.\nSelection criteria\nSelection criteria included: a) randomized clinical trial, b) diagnosis of fibromyalgia based on published criteria, c) adult sample, d) full-text publication, and e) inclusion of between-group data comparing resistance training versus a control or other physical activity intervention.\nData collection and analysis\nPairs of review authors independently assessed risk of bias and extracted intervention and outcome data. We resolved disagreements between the two review authors and questions regarding interpretation of study methods by discussion within the pairs or when necessary the issue was taken to the full team of 11 members. We extracted 21 outcomes of which seven were designated as major outcomes: multidimensional function, self reported physical function, pain, tenderness, muscle strength, attrition rates, and adverse effects. We evaluated benefits and harms of the interventions using standardized mean differences (SMD) or mean differences (MD) or risk ratios or Peto odds ratios and 95% confidence intervals (CI). Where two or more studies provided data for an outcome, we carried out a meta-analysis.\nMain results\nThe literature search yielded 1865 citations with five studies meeting the selection criteria. One of the studies that had three arms contributed data for two comparisons. In the included studies, there were 219 women participants with fibromyalgia, 95 of whom were assigned to resistance training programs. Three randomized trials compared 16 to 21 weeks of moderate- to high-intensity resistance training versus a control group. 
Two studies compared eight weeks of progressive resistance training (intensity as tolerated) using free weights or body weight resistance exercise versus aerobic training (ie, progressive treadmill walking, indoor and outdoor walking), and one study compared 12 weeks of low-intensity resistance training using hand weights (1 to 3 lbs (0.45 to 1.36 kg)) and elastic tubing versus flexibility exercise (static stretches to major muscle groups).\nStatistically significant differences (MD; 95% CI) favoring the resistance training interventions over control group(s) were found in multidimensional function (Fibromyalgia Impact Questionnaire (FIQ) total decreased 16.75 units on a 100-point scale; 95% CI -23.31 to -10.19), self reported physical function (-6.29 units on a 100-point scale; 95% CI -10.45 to -2.13), pain (-3.3 cm on a 10-cm scale; 95% CI -6.35 to -0.26), tenderness (-1.84 out of 18 tender points; 95% CI -2.6 to -1.08), and muscle strength (27.32 kg force on bilateral concentric leg extension; 95% CI 18.28 to 36.36).\nDifferences between the resistance training group(s) and the aerobic training groups were not statistically significant for multidimensional function (5.48 on a 100-point scale; 95% CI -0.92 to 11.88), self reported physical function (-1.48 units on a 100-point scale; 95% CI -6.69 to 3.74) or tenderness (SMD -0.13; 95% CI -0.55 to 0.30). There was a statistically significant reduction in pain (0.99 cm on a 10-cm scale; 95% CI 0.31 to 1.67) favoring the aerobic groups.\nStatistically significant differences were found between a resistance training group and a flexibility group favoring the resistance training group for multidimensional function (-6.49 FIQ units on a 100-point scale; 95% CI -12.57 to -0.41) and pain (-0.88 cm on a 10-cm scale; 95% CI -1.57 to -0.19), but not for tenderness (-0.46 out of 18 tender points; 95% CI -1.56 to 0.64) or strength (4.77 foot pounds torque on concentric knee extension; 95% CI -2.40 to 11.94). This evidence was classified low quality due to the low number of studies and risk of bias assessment. There were no statistically significant differences in attrition rates between the interventions. In general, adverse effects were poorly recorded, but no serious adverse effects were reported. Assessment of risk of bias was hampered by poor written descriptions (eg, allocation concealment, blinding of outcome assessors). The lack of a priori protocols and lack of care provider blinding were also identified as methodologic concerns.\nAuthors' conclusions\nThe evidence (rated as low quality) suggested that moderate- and moderate- to high-intensity resistance training improves multidimensional function, pain, tenderness, and muscle strength in women with fibromyalgia. The evidence (rated as low quality) also suggested that eight weeks of aerobic exercise was superior to moderate-intensity resistance training for improving pain in women with fibromyalgia. There was low-quality evidence that 12 weeks of low-intensity resistance training was superior to flexibility exercise training in women with fibromyalgia for improvements in pain and multidimensional function. There was low-quality evidence that women with fibromyalgia can safely perform moderate- to high-resistance training.\n\nGiven the provided abstract, please respond to the following query: \"Dropping out (number of women out of 100 who dropped out of the studies)\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Thirteen women who did resistance training dropped out while 4 women who did not do resistance training dropped out.\nTherefore, 9 more women out of 100 who did resistance training dropped out of the studies than women who did not do resistance training."
}
] | query-laysum | 22135 | [
{
"role": "user",
"content": "Abstract: Background\nGenotype® MTBDRsl (MTBDRsl) is a rapid DNA-based test for detecting specific mutations associated with resistance to fluoroquinolones and second-line injectable drugs (SLIDs) in Mycobacterium tuberculosis complex. MTBDRsl version 2.0 (released in 2015) identifies the mutations detected by version 1.0, as well as additional mutations. The test may be performed on a culture isolate or a patient specimen, which eliminates delays associated with culture. Version 1.0 requires a smear-positive specimen, while version 2.0 may use a smear-positive or -negative specimen. We performed this updated review as part of a World Health Organization process to develop updated guidelines for using MTBDRsl.\nObjectives\nTo assess and compare the diagnostic accuracy of MTBDRsl for: 1. fluoroquinolone resistance, 2. SLID resistance, and 3. extensively drug-resistant tuberculosis, indirectly on a M. tuberculosis isolate grown from culture or directly on a patient specimen. Participants were people with rifampicin-resistant or multidrug-resistant tuberculosis. The role of MTBDRsl would be as the initial test, replacing culture-based drug susceptibility testing (DST), for detecting second-line drug resistance.\nSearch methods\nWe searched the following databases without language restrictions up to 21 September 2015: the Cochrane Infectious Diseases Group Specialized Register; MEDLINE; Embase OVID; Science Citation Index Expanded, Conference Proceedings Citation Index-Science, and BIOSIS Previews (all three from Web of Science); LILACS; and SCOPUS; registers for ongoing trials; and ProQuest Dissertations & Theses A&I. We reviewed references from included studies and contacted specialists in the field.\nSelection criteria\nWe included cross-sectional and case-control studies that determined MTBDRsl accuracy against a defined reference standard (culture-based DST, genetic sequencing, or both).\nData collection and analysis\nTwo review authors independently extracted data and assessed quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. We synthesized data for versions 1.0 and 2.0 separately. We estimated MTBDRsl sensitivity and specificity for fluoroquinolone resistance, SLID resistance, and extensively drug-resistant tuberculosis when the test was performed indirectly or directly (smear-positive specimen for version 1.0, smear-positive or -negative specimen for version 2.0). We explored the influence on accuracy estimates of individual drugs within a drug class and of different reference standards. We performed most analyses using a bivariate random-effects model with culture-based DST as reference standard.\nMain results\nWe included 27 studies. Twenty-six studies evaluated version 1.0, and one study version 2.0. Of 26 studies stating specimen country origin, 15 studies (58%) evaluated patients from low- or middle-income countries. Overall, we considered the studies to be of high methodological quality. 
However, only three studies (11%) had low risk of bias for the reference standard; these studies used World Health Organization (WHO)-recommended critical concentrations for all drugs in the culture-based DST reference standard.\nMTBDRsl version 1.0\nFluoroquinolone resistance: indirect testing, MTBDRsl pooled sensitivity and specificity (95% confidence interval (CI)) were 85.6% (79.2% to 90.4%) and 98.5% (95.7% to 99.5%), (19 studies, 2223 participants); direct testing (smear-positive specimen), pooled sensitivity and specificity were 86.2% (74.6% to 93.0%) and 98.6% (96.9% to 99.4%), (nine studies, 1771 participants, moderate quality evidence).\nSLID resistance: indirect testing, MTBDRsl pooled sensitivity and specificity were 76.5% (63.3% to 86.0%) and 99.1% (97.3% to 99.7%), (16 studies, 1921 participants); direct testing (smear-positive specimen), pooled sensitivity and specificity were 87.0% (38.1% to 98.6%) and 99.5% (93.6% to 100.0%), (eight studies, 1639 participants, low quality evidence).\nExtensively drug-resistant tuberculosis: indirect testing, MTBDRsl pooled sensitivity and specificity were 70.9% (42.9% to 88.8%) and 98.8% (96.1% to 99.6%), (eight studies, 880 participants); direct testing (smear-positive specimen), pooled sensitivity and specificity were 69.4% (38.8% to 89.0%) and 99.4% (95.0% to 99.3%), (six studies, 1420 participants, low quality evidence).\nSimilar to the original Cochrane review, we found no evidence of a significant difference in MTBDRsl version 1.0 accuracy between indirect and direct testing for fluoroquinolone resistance, SLID resistance, and extensively drug-resistant tuberculosis.\nMTBDRsl version 2.0\nFluoroquinolone resistance: direct testing, MTBDRsl sensitivity and specificity were 97% (83% to 100%) and 98% (93% to 100%), smear-positive specimen; 80% (28% to 99%) and 100% (40% to 100%), smear-negative specimen.\nSLID resistance: direct testing, MTBDRsl sensitivity and specificity were 89% (72% to 98%) and 90% (84% to 95%), smear-positive specimen; 80% (28% to 99%) and 100% (40% to 100%), smear-negative specimen.\nExtensively drug-resistant tuberculosis: direct testing, MTBDRsl sensitivity and specificity were 79% (49% to 95%) and 97% (93% to 99%), smear-positive specimen; 50% (1% to 99%) and 100% (59% to 100%), smear-negative specimen.\nWe had insufficient data to estimate summary sensitivity and specificity of version 2.0 (smear-positive and -negative specimens) or to compare accuracy of the two versions.\nA limitation was that most included studies did not consistently use the World Health Organization (WHO)-recommended concentrations for drugs in the culture-based DST reference standard.\nAuthors' conclusions\nIn people with rifampicin-resistant or multidrug-resistant tuberculosis, MTBDRsl performed on a culture isolate or smear-positive specimen may be useful in detecting second-line drug resistance. MTBDRsl (smear-positive specimen) correctly classified around six in seven people as having fluoroquinolone or SLID resistance, although the sensitivity estimates for SLID resistance varied. The test rarely gave a positive result for people without drug resistance. 
However, when second-line drug resistance is not detected (MTBDRsl result is negative), conventional DST can still be used to evaluate patients for resistance to the fluoroquinolones or SLIDs.\nWe recommend that future work evaluate MTBDRsl version 2.0, in particular on smear-negative specimens and in different settings to account for different resistance-causing mutations that may vary by strain. Researchers should also consider incorporating WHO-recommended critical concentrations into their culture-based reference standards.\n\nGiven the provided abstract, please respond to the following query: \"What test is evaluated by this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "GenoType® MTBDRsl (MTBDRsl) is a rapid test for detecting resistance to second-line TB drugs. In people with MDR-TB, MTBDRsl is used to detect additional drug resistance. The test may be performed on TB bacteria grown in culture from a patient specimen (indirect testing) or on a patient specimen (direct testing), which eliminates delays associated with culture. MTBDRsl version 1.0 requires a specimen to be smear-positive by microscopy, while version 2.0 (released in 2015) may use a smear-positive or -negative specimen."
}
] | query-laysum | 22136 | [
{
"role": "user",
"content": "Abstract: Background\nBisphosphonate drugs can be used to prevent and treat osteoporosis and to reduce symptoms and complications of metastatic bone disease; however, they are associated with a rare but serious adverse event: osteonecrosis of the maxillary and mandibular bones. This condition is called bisphosphonate-related osteonecrosis of the jaw or BRONJ. BRONJ is diagnosed when people who are taking, or have previously taken, bisphosphonates have exposed bone in the jaw area for more than eight weeks in the absence of radiation treatment. There is currently no \"gold standard\" of treatment for BRONJ. The three broad categories of intervention are conservative approaches (e.g. mouth rinse, antibiotics), surgical interventions and adjuvant non-surgical strategies (e.g. hyperbaric oxygen therapy, platelet-rich plasma), which can be used in combination.\nObjectives\nTo determine the efficacy and safety of any intervention aimed at treating BRONJ.\nSearch methods\nWe searched the following databases to 15 December 2015: the Cochrane Oral Health Group Trials Register, the Cochrane Breast Cancer Group Trials Register (20 September 2011), the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE via Ovid, EMBASE via Ovid, CancerLit via PubMed, CINAHL via EBSCO and AMED via Ovid. We scanned the references cited in retrieved articles and contacted experts in the field, the first authors of included papers, study sponsors, other bisphosphonates investigators and pharmaceutical companies. We searched for ongoing trials through contact with trialists and by searching the US National Institutes of Health Trials Register (clinicaltrials.gov) and the World Health Organization Clinical Trials Registry Platform. We also conducted a grey literature search to September 2015.\nSelection criteria\nRandomised controlled trials (RCTs) comparing the effects of any treatment for BRONJ with another treatment or placebo.\nData collection and analysis\nTwo review authors independently screened the search results, assessed the risk of bias in the included trials and extracted data. When in dispute, we consulted a third review author.\nMain results\nOne small trial at high risk of bias met the inclusion criteria. The trial randomised 49 participants, most of whom had cancer. It compared standard care (defined as surgery, antibiotics and oral rinses at the discretion of the oral-maxillofacial surgeon) to standard care plus hyperbaric oxygen therapy (2 atmospheres twice a day for 40 treatments). The trial measured the percentage of participants who improved or healed at three, six, 12 and 18 months and last contact. It also measured mean weekly pain scores.\nAt three months, the study found that the participants in intervention group were more likely to have an improvement in their osteonecrosis than the standard care group participants (risk ratio (RR) 1.94, 95% confidence interval (CI) 1.01 to 3.74). There was no clear difference between the groups for the outcome 'healed' at three months (RR 3.60, 95% CI 0.87 to 14.82). 
There was no clear difference between the groups for improvement or healing when they were evaluated at six, 12 and 18 months and last contact.\nThe study did not give any information on adverse events.\nAlthough the findings suggest adjunctive hyperbaric oxygen improved BRONJ, the quality of the evidence is very low since the only study was underpowered and was at high risk of bias due to lack of blinding, cross-over of participants between groups and very high attrition (50% at 12 months and 80% at 18 months in this study, which was designed for an intended follow-up of 24 months).\nAuthors' conclusions\nThere is a lack of evidence from randomised controlled trials to guide treatment of bisphosphonate-related osteonecrosis of the jaw (BRONJ). One small trial at high risk of bias evaluated hyperbaric oxygen therapy (HBO) as an adjunct to \"standard\" care and could not confirm or refute the effectiveness of HBO. There are two ongoing trials of teriparatide treatment for BRONJ. We found no randomised controlled trials of any other BRONJ treatments. High quality randomised controlled trials are needed. We provide recommendations for their focus and design.\n\nGiven the provided abstract, please respond to the following query: \"Authors' conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is insufficient evidence to conclude whether hyperbaric oxygen therapy is a useful add-on to standard care in the treatment of BRONJ. There are two ongoing trials of teriparatide, a hormonal treatment for BRONJ. We found no randomised controlled trials of any other treatments for BRONJ. As there is a lack of good quality scientific evidence to decide how best to treat BRONJ, high quality trials are needed."
}
] | query-laysum | 22137 | [
{
"role": "user",
"content": "Abstract: Background\nFractures of the mandible (lower jaw) are a common occurrence and usually related to interpersonal violence or road traffic accidents. Mandibular fractures may be treated using open (surgical) and closed (non-surgical) techniques. Fracture sites are immobilized with intermaxillary fixation (IMF) or other external or internal devices (i.e. plates and screws) to allow bone healing. Various techniques have been used, however uncertainty exists with respect to the specific indications for each approach.\nObjectives\nThe objective of this review is to provide reliable evidence of the effects of any interventions either open (surgical) or closed (non-surgical) that can be used in the management of mandibular fractures, excluding the condyles, in adult patients.\nSearch methods\nWe searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 28 February 2013), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 1), MEDLINE via OVID (1950 to 28 February 2013), EMBASE via OVID (1980 to 28 February 2013), metaRegister of Controlled Trials (to 7 April 2013), ClinicalTrials.gov (to 7 April 2013) and the WHO International Clinical Trials Registry Platform (to 7 April 2013). The reference lists of all trials identified were checked for further studies. There were no restrictions regarding language or date of publication.\nSelection criteria\nRandomised controlled trials evaluating the management of mandibular fractures without condylar involvement. Any studies that compared different treatment approaches were included.\nData collection and analysis\nAt least two review authors independently assessed trial quality and extracted data. Results were to be expressed as random-effects models using mean differences for continuous outcomes and risk ratios for dichotomous outcomes with 95% confidence intervals. Heterogeneity was to be investigated to include both clinical and methodological factors.\nMain results\nTwelve studies, assessed as high (six) and unclear (six) risk of bias, comprising 689 participants (830 fractures), were included. Interventions examined different plate materials and morphology; use of one or two lag screws; microplate versus miniplate; early and delayed mobilization; eyelet wires versus Rapid IMF™ and the management of angle fractures with intraoral access alone or combined with a transbuccal approach. Patient-oriented outcomes were largely ignored and post-operative pain scores were inadequately reported. Unfortunately, only one or two trials with small sample sizes were conducted for each comparison and outcome. Our results and conclusions should therefore be interpreted with caution. We were able to pool the results for two comparisons assessing one outcome. Pooled data from two studies comparing two miniplates versus one miniplate revealed no significant difference in the risk of post-operative infection of surgical site (risk ratio (RR) 1.32, 95% CI 0.41 to 4.22, P = 0.64, I2 = 0%). Similarly, no difference in post-operative infection between the use of two 3-dimensional (3D) and standard (2D) miniplates was determined (RR 1.26, 95% CI 0.19 to 8.13, P = 0.81, I2 = 27%). The included studies involved a small number of participants with a low number of events.\nAuthors' conclusions\nThis review illustrates that there is currently inadequate evidence to support the effectiveness of a single approach in the management of mandibular fractures without condylar involvement. 
The lack of high quality evidence may be explained by clinical diversity, variability in assessment tools used and difficulty in grading outcomes with existing measurement tools. Until high level evidence is available, treatment decisions should continue to be based on the clinician's prior experience and the individual circumstances.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence found is poor. Recommendations are made for further well-conducted research studies in this area to be undertaken."
}
] | query-laysum | 22138 | [
{
"role": "user",
"content": "Abstract: Background\nAnterior cruciate ligament (ACL) tears are serious knee injuries that are frequently treated surgically in the form of arthroscopically assisted reconstruction with grafts from the patella or hamstrings tendons. We reviewed the evidence for the choice of arthroscopically assisted ACL reconstruction technique in terms of whether it should involve one incision (femoral tunnel drilled from inside the knee joint under arthroscopic visualisation) or two incisions (femoral tunnel drilled from outside to inside the knee joint).\nObjectives\nTo assess the effects (benefits and harms) of one-incision versus two-incision techniques for arthroscopically assisted ACL reconstruction in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Latin American and Caribbean Health Sciences (LILACS), the World Health Organization International Clinical Trials Registry Platform, ClinicalTrials.gov, reference lists, and conference abstracts. The date of the search was 16 August 2017.\nSelection criteria\nRandomised and quasi-randomised controlled clinical trials evaluating one-incision versus two-incision techniques for arthroscopically assisted ACL reconstruction in adults.\nData collection and analysis\nTwo review authors independently searched and selected studies, and extracted data and assessed the risk of bias of the eligible studies. We undertook limited pooling of data using the fixed-effect model.\nMain results\nWe included five trials (four randomised and one quasi-randomised) evaluating a total of 320 participants who were mainly in their 20s. All participants underwent ACL reconstruction with patella tendon grafts. All five included trials were at a high risk of bias, particularly performance bias. Based on these limitations and the insufficiency of the available data resulting in imprecision of effect estimates, we judged the quality of the evidence as very low for all outcomes. This means that we are uncertain of the findings of the review.\nWe found very low-quality evidence of no clinically important differences between the two techniques in self reported knee function, measured using the Lysholm knee score (scale 0 to 100: best outcome), at short-term (3 months) (mean difference (MD) 2.73 favours one-incision technique, 95% confidence interval (CI) -2.70 to 8.15; 79 participants, 2 studies), intermediate-term (12 months) (MD -3.68 favours two-incision technique, 95% CI -6.61 to -0.75; 79 participants, 2 studies), and long-term follow-up. The data available for long-term follow-up (2 to 5 years) was expressed in terms of the numbers of participants with excellent Lysholm scores (90 points or more); we found no difference between the two groups (42/45 versus 36/40; risk ratio (RR) 1.04, 95% CI 0.91 to 1.18; 1 study). There were no data for quality of life measures or for overall numbers of participants incurring an adverse event. We found very low-quality evidence of little between-group differences in individual adverse events such as infection, knee stiffness, reoperation, and graft failure.\nWe found very low-quality evidence from one study (59 participants) of little difference between the two groups in activity levels measured using Tegner scores (scale 0 to 10: highest sport activity) at two years (MD -0.80 favours two-incision technique, 95% CI -1.90 to 0.30). 
There was very low-quality evidence from four studies of minimal between-group difference in the number of participants with normal or nearly normal objectively measured knee function (International Knee Documentation Committee objective assessment grading) at intermediate follow-up (means 12 to 28 months): 56/78 versus 63/89; RR 1.01, 95% CI 0.85 to 1.21; 167 participants).\nAuthors' conclusions\nThe very low-quality and often absent evidence means that we are uncertain whether one-incision arthroscopically assisted ACL reconstruction techniques yield better, worse, or equivalent results compared with two-incision techniques in terms of short-, intermediate-, or long-term subjective function, quality of life, adverse outcomes, activity levels, and objectively rated knee function. The evidence was available only for single-bundle ACL reconstruction using patella tendon grafts.\nWhen considering priorities for high-quality randomised trials on techniques for ACL reconstruction, it is important to note the insufficiency of the evidence available to inform this key comparison.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All five studies had weaknesses that could seriously affect the reliability of their results. We considered the evidence to be of very low quality, meaning that we are unsure of the results."
}
] |