dataset (stringclasses 1) | id (stringlengths 1-5) | messages (listlengths 2-2) |
---|---|---|
query-laysum | 3740 | [
{
"role": "user",
"content": "Abstract: Background\nObstructive sleep apnoea (OSA) is a syndrome characterised by episodes of apnoea (complete cessation of breathing) or hypopnoea (insufficient breathing) during sleep. Classical symptoms of the disease — such as snoring, unsatisfactory rest and daytime sleepiness — are experienced mainly by men; women report more unspecific symptoms such as low energy or fatigue, tiredness, initial insomnia and morning headaches. OSA is associated with an increased risk of occupational injuries, metabolic diseases, cardiovascular diseases, mortality, and being involved in traffic accidents.\nContinuous positive airway pressure (CPAP) - delivered by a machine which uses a hose and mask or nosepiece to deliver constant and steady air pressure- is considered the first treatment option for most people with OSA. However, adherence to treatment is often suboptimal. Myofunctional therapy could be an alternative for many patients. Myofunctional therapy consists of combinations of oropharyngeal exercises - i.e. mouth and throat exercises. These combinations typically include both isotonic and isometric exercises involving several muscles and areas of the mouth, pharynx and upper respiratory tract, to work on functions such as speaking, breathing, blowing, sucking, chewing and swallowing.\nObjectives\nTo evaluate the benefits and harms of myofunctional therapy (oropharyngeal exercises) for the treatment of obstructive sleep apnoea.\nSearch methods\nWe identified randomised controlled trials (RCTs) from the Cochrane Airways Trials Register (date of last search 1 May 2020). We found other trials at web-based clinical trials registers.\nSelection criteria\nWe included RCTs that recruited adults and children with a diagnosis of OSA.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We assessed our confidence in the evidence by using GRADE recommendations. Primary outcomes were daytime sleepiness, morbidity and mortality.\nMain results\nWe found nine studies eligible for inclusion in this review and nine ongoing studies. The nine included RCTs analysed a total of 347 participants, 69 of them women and 13 children. The adults' mean ages ranged from 46 to 51, daytime sleepiness scores from eight to 14, and severity of the condition from mild to severe OSA. The studies' duration ranged from two to four months.\nNone of the studies assessed accidents, cardiovascular diseases or mortality outcomes. 
We sought data about adverse events, but none of the included studies reported these.\nIn adults, compared to sham therapy, myofunctional therapy: probably reduces daytime sleepiness (Epworth Sleepiness Scale (ESS), MD (mean difference) -4.52 points, 95% Confidence Interval (CI) -6.67 to -2.36; two studies, 82 participants; moderate-certainty evidence); may increase sleep quality (MD -3.90 points, 95% CI -6.31 to -1.49; one study, 31 participants; low-certainty evidence); may result in a large reduction in Apnoea-Hypopnoea Index (AHI, MD -13.20 points, 95% CI -18.48 to -7.93; two studies, 82 participants; low-certainty evidence); may have little to no effect in reduction of snoring frequency but the evidence is very uncertain (Standardised Mean Difference (SMD) -0.53 points, 95% CI -1.03 to -0.03; two studies, 67 participants; very low-certainty evidence); and probably reduces subjective snoring intensity slightly (MD -1.9 points, 95% CI -3.69 to -0.11 one study, 51 participants; moderate-certainty evidence).\nCompared to waiting list, myofunctional therapy may: reduce daytime sleepiness (ESS, change from baseline MD -3.00 points, 95% CI -5.47 to -0.53; one study, 25 participants; low-certainty evidence); result in little to no difference in sleep quality (MD -0.70 points, 95% CI -2.01 to 0.61; one study, 25 participants; low-certainty evidence); and reduce AHI (MD -6.20 points, 95% CI -11.94 to -0.46; one study, 25 participants; low-certainty evidence).\nCompared to CPAP, myofunctional therapy may result in little to no difference in daytime sleepiness (MD 0.30 points, 95% CI -1.65 to 2.25; one study, 54 participants; low-certainty evidence); and may increase AHI (MD 9.60 points, 95% CI 2.46 to 16.74; one study, 54 participants; low-certainty evidence).\nCompared to CPAP plus myofunctional therapy, myofunctional therapy alone may result in little to no difference in daytime sleepiness (MD 0.20 points, 95% CI -2.56 to 2.96; one study, 49 participants; low-certainty evidence) and may increase AHI (MD 10.50 points, 95% CI 3.43 to 17.57; one study, 49 participants; low-certainty evidence).\nCompared to respiratory exercises plus nasal dilator strip, myofunctional therapy may result in little to no difference in daytime sleepiness (MD 0.20 points, 95% CI -2.46 to 2.86; one study, 58 participants; low-certainty evidence); probably increases sleep quality slightly (-1.94 points, 95% CI -3.17 to -0.72; two studies, 97 participants; moderate-certainty evidence); and may result in little to no difference in AHI (MD -3.80 points, 95% CI -9.05 to 1.45; one study, 58 participants; low-certainty evidence).\nCompared to standard medical treatment, myofunctional therapy may reduce daytime sleepiness (MD -6.40 points, 95% CI -9.82 to -2.98; one study, 26 participants; low-certainty evidence) and may increase sleep quality (MD -3.10 points, 95% CI -5.12 to -1.08; one study, 26 participants; low-certainty evidence).\nIn children, compared to nasal washing alone, myofunctional therapy and nasal washing may result in little to no difference in AHI (MD 3.00, 95% CI -0.26 to 6.26; one study, 13 participants; low-certainty evidence).\nAuthors' conclusions\nCompared to sham therapy, myofunctional therapy probably reduces daytime sleepiness and may increase sleep quality in the short term. The certainty of the evidence for all comparisons ranges from moderate to very low, mainly due to lack of blinding of the assessors of subjective outcomes, incomplete outcome data and imprecision. More studies are needed. 
In future studies, outcome assessors should be blinded. New trials should recruit more participants, including more women and children, and have longer treatment and follow-up periods.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Obstructive sleep apnoea (OSA) is a sleeping disorder. People with OSA have periods where their breathing stops during the night. OSA can cause snoring, unsatisfactory rest, daytime sleepiness, low energy or fatigue, tiredness, initial insomnia and morning headaches.\nContinuous positive airway pressure (CPAP) is considered the first treatment option for most people with OSA. However, adherence to CPAP is often poor. A CPAP machine uses a hose and mask or nosepiece to deliver constant and steady air pressure. People who use CPAP often say that using the machine is uncomfortable, causes nasal congestion and abdominal bloating. They can feel claustrophobic and the machine is noisy. The noise can disturb bed partners.\nMyofunctional therapy teaches people to do daily exercises to strengthen their tongue and throat muscles. Myofunctional therapy may reduce the intensity of the OSA symptoms and reduce daytime sleepiness on its own, or combined with CPAP."
}
]
|
query-laysum | 3741 | [
{
"role": "user",
"content": "Abstract: Background\nInhalation of hypertonic saline improves sputum rheology, accelerates mucociliary clearance and improves clinical outcomes of people with cystic fibrosis. This is an update of a previously published Cochrane Review.\nObjectives\nTo determine whether the timing of hypertonic saline inhalation (in relation to airway clearance techniques or in relation to time of day) has an impact on its clinical efficacy in people with cystic fibrosis.\nSearch methods\nWe identified relevant randomised and quasi-randomised controlled trials from the Cochrane Cystic Fibrosis Trials Register, the Physiotherapy Evidence Database (PEDro), and international cystic fibrosis conference proceedings.\nDate of the last search of the Cochrane Cystic Fibrosis and Genetic Disorders Group's Cystic Fibrosis Trials Register: 28 February 2019.\nSelection criteria\nAny trial of hypertonic saline in people with cystic fibrosis where timing of inhalation was the randomised element in the study protocol with either: inhalation up to six hours before airway clearance techniques compared to inhalation during airway clearance techniques compared to inhalation up to six hours after airway clearance techniques; or morning compared to evening inhalation with any definition provided by the author.\nData collection and analysis\nBoth authors independently assessed the trials identified by the search for potential inclusion in the review. The certainty of the evidence was assessed using GRADE.\nMain results\nThe searches identified 104 trial reports which represented 51 trials, of which three cross-over trials (providing data on 77 participants) met our inclusion criteria. We present three comparisons: inhalation before versus during airway clearance techniques; inhalation before versus after airway clearance techniques; and inhalation during versus after airway clearance techniques. One trial (50 participants), given its three-arm design, was eligible for all three comparisons. No trials compared morning versus evening inhalation of hypertonic saline.\nThe evidence from the three trials was judged to be of low quality downgraded for limitations (high risk of bias due to blinding) and indirectness (all participants are adults, and therefore not applicable to children). Intervention periods ranged from one treatment to three treatments in one day. There were no clinically important differences between the timing regimens of inhaling hypertonic saline before, during or after airway clearance techniques in the mean amount of improvement in lung function or symptom scores (77 participants), with the between-group comparisons being non-significant (low-certainty evidence). While there may be little or no difference in the rating of satisfaction when hypertonic saline was inhaled before versus during the airway clearance techniques (64 participants) (with the 95% confidence interval including the possibility of both a higher and lower rating of satisfaction), satisfaction may be lower on a 100-mm scale when inhaled after the airway clearance techniques compared to before: mean difference (MD) 20.38 mm (95% confidence interval (CI) 12.10 to 28.66) and when compared to during the techniques, MD 14.80 mm (95% CI 5.70 to 23.90). 
Perceived effectiveness showed similar results: little or no difference for inhalation before versus during airway clearance techniques (64 participants); may be lower when inhaled after the airway clearance techniques compared to before, MD 10.62 (95% CI 2.54 to 18.70); and also when compared to during the techniques, MD 15.60 (95% CI 7.55 to 23.65). There were no quality of life or adverse events reported in any of the trials.\nAuthors' conclusions\nTiming of hypertonic saline inhalation makes little or no difference to lung function (low-certainty evidence). However, inhaling hypertonic saline before or during airway clearance techniques may maximise perceived efficacy and satisfaction. The long-term efficacy of hypertonic saline has only been established for twice-daily inhalations; however, if only one dose per day is tolerated, the time of day at which it is inhaled could be based on convenience or tolerability until evidence comparing these regimens is available.\nThe identified trials were all of very short intervention periods, so longer-term research could be conducted to establish the effects arising from regular use, which would incorporate the influence of changes in adherence with long-term use, as well as generating data on any adverse effects that occur with long-term use.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Regular inhalation of hypertonic saline improves the clinical outcomes of people with cystic fibrosis. It is not certain whether it is better to inhale hypertonic saline before, during or after clearing the airways with physical techniques, nor whether it is better to inhale it in the morning or in the evening. We looked for trials that compared these different timing regimens."
}
]
|
query-laysum | 3742 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is the third update of a previously published Cochrane Review. The original review, looking at all possible cardioprotective agents, was split and this part now focuses on dexrazoxane only.\nAnthracyclines are effective chemotherapeutic agents in the treatment of numerous malignancies. Unfortunately, their use is limited by a dose-dependent cardiotoxicity. In an effort to prevent or reduce this cardiotoxicity, different cardioprotective agents have been studied, including dexrazoxane.\nObjectives\nTo assess the efficacy of dexrazoxane to prevent or reduce cardiotoxicity and determine possible effects of dexrazoxane on antitumour efficacy, quality of life and toxicities other than cardiac damage in adults and children with cancer receiving anthracyclines when compared to placebo or no additional treatment.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase to May 2021. We also handsearched reference lists, the proceedings of relevant conferences and ongoing trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) in which dexrazoxane was compared to no additional therapy or placebo in adults and children with cancer receiving anthracyclines.\nData collection and analysis\nTwo review authors independently performed study selection, data extraction, risk of bias and GRADE assessment of included studies. We analysed results in adults and children separately. We performed analyses according to the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nFor this update, we identified 548 unique records. We included three additional RCTs: two paediatric and one adult. Therefore, we included a total of 13 eligible RCTs (five paediatric and eight adult). The studies enrolled 1252 children with leukaemia, lymphoma or a solid tumour and 1269 participants, who were mostly diagnosed with breast cancer.\nIn adults, moderate-quality evidence showed that there was less clinical heart failure with the use of dexrazoxane (risk ratio (RR) 0.22, 95% confidence interval (CI) 0.11 to 0.43; 7 studies, 1221 adults). In children, we identified no difference in clinical heart failure risk between treatment groups (RR 0.20, 95% CI 0.01 to 4.19; 3 studies, 885 children; low-quality evidence). In three paediatric studies assessing cardiomyopathy/heart failure as the primary cause of death, none of the children had this outcome (1008 children, low-quality evidence). In the adult studies, different definitions for subclinical myocardial dysfunction and clinical heart failure combined were used, but pooled analyses were possible: there was a benefit in favour of the use of dexrazoxane (RR 0.37, 95% CI 0.24 to 0.56; 3 studies, 417 adults and RR 0.46, 95% CI 0.33 to 0.66; 2 studies, 534 adults, respectively, moderate-quality evidence). In the paediatric studies, definitions of subclinical myocardial dysfunction and clinical heart failure combined were incomparable, making pooling impossible. One paediatric study showed a benefit in favour of dexrazoxane (RR 0.33, 95% CI 0.13 to 0.85; 33 children; low-quality evidence), whereas another study showed no difference between treatment groups (Fischer exact P = 0.12; 537 children; very low-quality evidence).\nOverall survival (OS) was reported in adults and overall mortality in children. 
The meta-analyses of both outcomes showed no difference between treatment groups (hazard ratio (HR) 1.04, 95% CI 0.88 to 1.23; 4 studies; moderate-quality evidence; and HR 1.01, 95% CI 0.72 to 1.42; 3 studies, 1008 children; low-quality evidence, respectively). Progression-free survival (PFS) was only reported in adults. We subdivided PFS into three analyses based on the comparability of definitions, and identified a longer PFS in favour of dexrazoxane in one study (HR 0.62, 95% CI 0.43 to 0.90; 164 adults; low-quality evidence). There was no difference between treatment groups in the other two analyses (HR 0.95, 95% CI 0.64 to 1.40; 1 study; low-quality evidence; and HR 1.18, 95% CI 0.97 to 1.43; 2 studies; moderate-quality evidence, respectively). In adults, there was no difference in tumour response rate between treatment groups (RR 0.91, 95% CI 0.79 to 1.04; 6 studies, 956 adults; moderate-quality evidence). We subdivided tumour response rate in children into two analyses based on the comparability of definitions, and identified no difference between treatment groups (RR 1.01, 95% CI 0.95 to 1.07; 1 study, 206 children; very low-quality evidence; and RR 0.92, 95% CI 0.84 to 1.01; 1 study, 200 children; low-quality evidence, respectively). The occurrence of secondary malignant neoplasms (SMN) was only assessed in children. The available and worst-case analyses were identical and showed a difference in favour of the control group (RR 3.08, 95% CI 1.13 to 8.38; 3 studies, 1015 children; low-quality evidence). In the best-case analysis, the direction of effect was the same, but there was no difference between treatment groups (RR 2.51, 95% CI 0.96 to 6.53; 4 studies, 1220 children; low-quality evidence). For other adverse effects, results also varied. None of the studies evaluated quality of life.\nIf not reported, the number of participants for an analysis was unclear.\nAuthors' conclusions\nOur meta-analyses showed the efficacy of dexrazoxane in preventing or reducing cardiotoxicity in adults treated with anthracyclines. In children, there was a difference between treatment groups for one cardiac outcome (i.e. for one of the definitions used for clinical heart failure and subclinical myocardial dysfunction combined) in favour of dexrazoxane. In adults, no evidence of a negative effect on tumour response rate, OS and PFS was identified; and in children, no evidence of a negative effect on tumour response rate and overall mortality was identified. The results for adverse effects varied. In children, dexrazoxane may be associated with a higher risk of SMN; in adults this was not addressed. In adults, the quality of the evidence ranged between moderate and low; in children, it ranged between low and very low. Before definitive conclusions on the use of dexrazoxane can be made, especially in children, more high-quality research is needed.\nWe conclude that if the risk of cardiac damage is expected to be high, it might be justified to use dexrazoxane in children and adults with cancer who are treated with anthracyclines. However, clinicians and patients should weigh the cardioprotective effect of dexrazoxane against the possible risk of adverse effects, including SMN, for each individual.\nFor children, the International Late Effects of Childhood Cancer Guideline Harmonization Group has developed a clinical practice guideline.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to May 2021.\nWe found 13 randomised studies (clinical studies where people are randomly put into one of two or more treatment groups) looking at dexrazoxane: 5 studies in children (1252 children with leukaemia, lymphoma or a solid tumour) and 8 studies in adults (1269 adults who were mostly diagnosed with breast cancer)."
}
]
|
query-laysum | 3744 | [
{
"role": "user",
"content": "Abstract: Background\nPneumonia is a common and potentially serious illness. Corticosteroids have been suggested for the treatment of different types of infection, however their role in the treatment of pneumonia remains unclear. This is an update of a review published in 2011.\nObjectives\nTo assess the efficacy and safety of corticosteroids in the treatment of pneumonia.\nSearch methods\nWe searched the Cochrane Acute Respiratory Infections Group's Specialised Register, CENTRAL, MEDLINE, Embase, and LILACS on 3 March 2017, together with relevant conference proceedings and references of identified trials. We also searched three trials registers for ongoing and unpublished trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) that assessed systemic corticosteroid therapy, given as adjunct to antibiotic treatment, versus placebo or no corticosteroids for adults and children with pneumonia.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently assessed risk of bias and extracted data. We contacted study authors for additional information. We estimated risk ratios (RR) with 95% confidence intervals (CI) and pooled data using the Mantel–Haenszel fixed-effect model when possible.\nMain results\nWe included 17 RCTs comprising a total of 2264 participants; 13 RCTs included 1954 adult participants, and four RCTs included 310 children. This update included 12 new studies, excluded one previously included study, and excluded five new trials. One trial awaits classification.\nAll trials limited inclusion to inpatients with community-acquired pneumonia (CAP), with or without healthcare-associated pneumonia (HCAP). We assessed the risk of selection bias and attrition bias as low or unclear overall. We assessed performance bias risk as low for nine trials, unclear for one trial, and high for seven trials. We assessed reporting bias risk as low for three trials and high for the remaining 14 trials.\nCorticosteroids significantly reduced mortality in adults with severe pneumonia (RR 0.58, 95% CI 0.40 to 0.84; moderate-quality evidence), but not in adults with non-severe pneumonia (RR 0.95, 95% CI 0.45 to 2.00). Early clinical failure rates (defined as death from any cause, radiographic progression, or clinical instability at day 5 to 8) were significantly reduced with corticosteroids in people with severe and non-severe pneumonia (RR 0.32, 95% CI 0.15 to 0.7; and RR 0.68, 95% CI 0.56 to 0.83, respectively; high-quality evidence). Corstocosteroids reduced time to clinical cure, length of hospital and intensive care unit stays, development of respiratory failure or shock not present at pneumonia onset, and rates of pneumonia complications.\nAmong children with bacterial pneumonia, corticosteroids reduced early clinical failure rates (defined as for adults, RR 0.41, 95% CI 0.24 to 0.70; high-quality evidence) based on two small, clinically heterogeneous trials, and reduced time to clinical cure.\nHyperglycaemia was significantly more common in adults treated with corticosteroids (RR 1.72, 95% CI 1.38 to 2.14). There were no significant differences between corticosteroid-treated people and controls for other adverse events or secondary infections (RR 1.19, 95% CI 0.73 to 1.93).\nAuthors' conclusions\nCorticosteroid therapy reduced mortality and morbidity in adults with severe CAP; the number needed to treat for an additional beneficial outcome was 18 patients (95% CI 12 to 49) to prevent one death. 
Corticosteroid therapy reduced morbidity, but not mortality, for adults and children with non-severe CAP. Corticosteroid therapy was associated with more adverse events, especially hyperglycaemia, but the harms did not seem to outweigh the benefits.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 3 March 2017."
}
]
|
query-laysum | 3745 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is a group of genetic diseases which is especially prevalent in tropical and subtropical regions; however, forced migration and ongoing population movement have spread it throughout the world, with estimated birth rates reaching 0.49 per 1000 in the Americas, 0.07 per 1000 in Europe, 0.68 per 1000 in South and Southeast Asia, and 10.68 per 1000 in Africa. Life for individuals with sickle cell disease can be affected by repeated acute complications and compounded by progressive organ damage. Studies reveal that when people with chronic illness learn self-management, their clinical outcomes and quality of life improves; and they show lower dependence on healthcare services. There are, however, no reviews identifying which interventions improve knowledge and little is known about the impact of patient or care-giver knowledge on clinical and psychosocial outcomes in people with sickle cell disease.\nObjectives\n1. To determine the effectiveness of patient- and caregiver-centred educational interventions for changing knowledge and understanding of sickle cell disease among patients as well as caregivers of people with the disease.\n2. To assess the effectiveness and safety of patient- and caregiver-centred educational interventions and programs for the recognition of signs and symptoms of disease-related morbidity, adherence to treatment and healthcare utilization in patients with sickle cell disease.\nSearch methods\nThe authors searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books. Additional trials were sought from the reference lists of the trials and reviews identified by the search strategy.\nDate of last search: 11 April 2016.\nSelection criteria\nRandomized and quasi-randomized controlled trials which evaluate the effectiveness of individual- and group-based interventions for either the patient with sickle cell disease or their caregivers, or both. Eligible interventions will aim to change knowledge, attitudes or skills, improve psychosocial aspects of the disease as well as treatment adherence and healthcare utilization. Trials evaluating the intervention versus no program, comparing two interventions and those which are part of a multi-faceted intervention to improve a range of sickle cell-related health outcomes are all eligible for inclusion.\nData collection and analysis\nTwo review authors independently selected trials based on stated inclusion criteria and thereafter examined each selected report to extract data using a prepared, piloted, data collection form. A third author assisted in reaching consensus if there were any discrepancies. Similarly, risk of bias was assessed by two authors and verified by a third author.\nMain results\nA total of 12 trials (11 randomized controlled trials and one quasi-randomized trial) of 563 people with HbSS, HbSC or HbSβthal, aged six to 35 years old, were included in the review; the majority of participants were African-American. Interventions ranged from a total of one hour to weekly sessions for eight weeks and the post-intervention assessments ranged from the end of the intervention period to 12 months after completion. 
The heterogeneity of the included trials, which encompasses setting, inclusion and exclusion criteria, interventional method and time of assessment, ranged from 'not important' to 'moderate to substantial' for different review outcomes. The overall risk of bias was low for selective reporting, unclear for random sequence generation, allocation concealment, blinding of participants and blinding of outcome assessment. Incomplete outcome reporting and blinding of personnel showed mixed bias representations.\nPatient knowledge was assessed by four trials (160 participants) with moderate to substantial heterogeneity. There was evidence that educational programs improved patient knowledge, standardised mean difference 0.87 points (95% confidence interval 0.28 to 1.45, moderate quality evidence), which improved further when a trial with high bias was removed in a sensitivity analysis. Caregiver knowledge, reported in a single trial of 20 families, also showed an improvement, standardised mean difference 0.52 points (95% confidence interval 0.03 to 1.00, moderate quality evidence). The effect on patient knowledge was sustained at longer follow-up periods, whereas the effect on caregiver knowledge was not sustained.\nThere were two primary outcomes related to the effectiveness of educational programs on the recognition of signs and symptoms of disease-related morbidity. No comparative data were reported for patients or caregivers (or both) recognising signs and symptoms leading to self-management. Data from two trials were analysed for the utilization of health services and showed no evidence of an effect, mean difference 0.33 (95% confidence interval -0.57 to 1.23, moderate quality evidence).\nWith regard to the review's secondary outcomes, depression showed a statistically significant decline in intervention groups, standardised mean difference -0.66 points (95% confidence interval -1.18, to -0.14, moderate quality evidence). Adherence to treatment was not assessed in any of the identified trials. No effects of interventions were seen on coping, family relationships or health-related quality of life of patients.\nThe quality of evidence was low for positive coping and moderate for child knowledge, healthcare utilization and depression. This suggests that further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimates.\nAuthors' conclusions\nThis review identifies important positive effects of educational interventions on improving patient knowledge of sickle cell disease and depression. Effects on patients' knowledge were maintained for longer than for caregivers. The effect on knowledge was significant but small and whether it offers any clinical benefit is uncertain. Significant factors limiting these effects could be trials being under powered as well as attrition rates. Effects were not statistically significant in assessments of secondary outcomes, possibly due to the paucity of the number of trials and patients and caregivers. Trials showed moderate to high heterogeneity which might impact the results. To better study effects on outcomes, further controlled trials are needed with rigorous attention given to improve recruitment and retention and to decrease bias. Predetermined protocols using similar measurements should be used across multiple sites.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sickle cell disease is a lifelong, inherited disorder which can cause a number of complications throughout an individual's life. It may cause a huge burden on both the patient and their family, including frequent visits to healthcare facilities. The illness causes not just physical complications such as painful crises and strokes, but may have many other effects such as depression, poor quality of life, coping issues and poor family relationships. When people with a chronic illness have better understanding about their illness, they manage their illness better and improve their quality of life. We wish to compare effects of different interventions as well as individual interventions to no intervention."
}
]
|
query-laysum | 3746 | [
{
"role": "user",
"content": "Abstract: Background\nBotulinum toxin type A (BontA) is the most frequent treatment for facial wrinkles, but its effectiveness and safety have not previously been assessed in a Cochrane Review.\nObjectives\nTo assess the effects of all commercially available botulinum toxin type A products for the treatment of any type of facial wrinkles.\nSearch methods\nWe searched the following databases up to May 2020: the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase, and LILACS. We also searched five trials registers, and checked the reference lists of included studies for further references to relevant randomised controlled trials (RCTs).\nSelection criteria\nWe included RCTs with over 50 participants, comparing BontA versus placebo, other types of BontA, or fillers (hyaluronic acid), for treating facial wrinkles in adults.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Primary outcomes were participant assessment of success and major adverse events (AEs) (eyelid ptosis, eyelid sensory disorder, strabismus). Secondary outcomes included physician assessment of success; proportion of participants with at least one AE and duration of treatment effect. We used GRADE to assess the certainty of the evidence for each outcome.\nMain results\nWe included 65 RCTs, involving 14,919 randomised participants. Most participants were female, aged 18 to 65 years. All participants were outpatients (private office or day clinic). Study duration was between one week and one year. No studies were assessed as low risk of bias in all domains; the overall risk of bias was unclear for most studies.\nThe most common comparator was placebo (36 studies). An active control was used in 19 studies. There were eight dose-ranging studies of onabotulinumtoxinA, and a small number of studies compared against fillers. Treatment was given in one cycle (54 studies), two cycles (three studies), or three or more cycles (eight studies).\nThe treated regions were glabella (43 studies), crow's feet (seven studies), forehead (two studies), perioral (two studies), full face (one study), or more than two regions (nine studies). Most studies analysed moderate to severe wrinkles; mean duration of treatment was 20 weeks.\nThe following results summarise the main comparisons, based on studies of one treatment cycle for the glabella. AEs were collected over the duration of these studies (over four to 24 weeks).\nCompared to placebo, onabotulinumtoxinA-20 U probably has a higher success rate when assessed by participants (risk ratio (RR) 19.45, 95% confidence interval (CI) 8.60 to 43.99; 575 participants; 4 studies; moderate-certainty evidence) or physicians (RR 17.10, 95% CI 10.07 to 29.05; 1339 participants; 7 studies; moderate-certainty evidence) at week four. Major AEs are probably higher with onabotulinumtoxinA-20 U (Peto OR 3.62, 95% CI 1.50 to 8.74; 1390 participants; 8 studies; moderate-certainty evidence), but there may be no difference in any AEs (RR 1.14, 95% CI 0.89 to 1.45; 1388 participants; 8 studies; low-certainty evidence).\nCompared to placebo, abobotulinumtoxinA-50 U has a higher participant-assessed success rate at week four (RR 21.22, 95% CI 7.40 to 60.56; 915 participants; 6 studies; high-certainty evidence); and probably has a higher physician-assessed success rate (RR 14.93, 95% CI 8.09 to 27.55; 1059 participants; 7 studies; moderate-certainty evidence). 
There are probably more major AEs with abobotulinumtoxinA-50 U (Peto OR 3.36, 95% CI 0.88 to 12.87; 1294 participants; 7 studies; moderate-certainty evidence). Any AE may be more common with abobotulinumtoxinA-50 U (RR 1.25, 95% CI 1.05 to 1.49; 1471 participants; 8 studies; low-certainty evidence).\nCompared to placebo, incobotulinumtoxinA-20 U probably has a higher participant-assessed success rate at week four (RR 66.57, 95% CI 13.50 to 328.28; 547 participants; 2 studies; moderate-certainty evidence), and physician-assessed success rate (RR 134.62, 95% CI 19.05 to 951.45; 547 participants; 2 studies; moderate-certainty evidence). Major AEs were not observed (547 participants; 2 studies; moderate-certainty evidence). There may be no difference between groups in any AEs (RR 1.17, 95% CI 0.90 to 1.53; 547 participants; 2 studies; low-certainty evidence).\nAbobotulinumtoxinA-50 U is no different to onabotulinumtoxinA-20 U in participant-assessed success rate (RR 1.00, 95% CI 0.92 to 1.08, 388 participants, 1 study, high-certainty evidence) and physician-assessed success rate (RR 1.01, 95% CI 0.95 to 1.06; 388 participants; 1 study; high-certainty evidence) at week four. Major AEs are probably more likely in the abobotulinumtoxinA-50 U group than the onabotulinumtoxinA-20 U group (Peto OR 2.65, 95% CI 0.77 to 9.09; 433 participants; 1 study; moderate-certainty evidence). There is probably no difference in any AE (RR 1.02, 95% CI 0.67 to 1.54; 492 participants; 2 studies; moderate-certainty evidence).\nIncobotulinumtoxinA-24 U may be no different to onabotulinumtoxinA-24 U in physician-assessed success rate at week four (RR 1.01, 95% CI 0.96 to 1.05; 381 participants; 1 study; low-certainty evidence) (participant assessment was not measured). One participant reported ptosis with onabotulinumtoxinA, but we are uncertain of the risk of AEs (Peto OR 0.02, 95% CI 0.00 to 1.77; 381 participants; 1 study; very low-certainty evidence).\nCompared to placebo, daxibotulinumtoxinA-40 U probably has a higher participant-assessed success rate (RR 21.10, 95% CI 11.31 to 39.34; 683 participants; 2 studies; moderate-certainty evidence) and physician-assessed success rate (RR 23.40, 95% CI 12.56 to 43.61; 683 participants; 2 studies; moderate-certainty evidence) at week four. Major AEs were not observed (716 participants; 2 studies; moderate-certainty evidence). There may be an increase in any AE with daxibotulinumtoxinA compared to placebo (RR 2.23, 95% CI 1.46 to 3.40; 716 participants; 2 studies; moderate-certainty evidence).\nMajor AEs reported were mainly ptosis; BontA is also known to carry a risk of strabismus or eyelid sensory disorders.\nAuthors' conclusions\nBontA treatment reduces wrinkles within four weeks of treatment, but probably increases risk of ptosis. We found several heterogeneous studies (different types or doses of BontA, number of cycles, and different facial regions) hindering meta-analyses. The certainty of the evidence for effectiveness outcomes was high, low or moderate; for AEs, very low to moderate. Future RCTs should compare the most common BontA (onabotulinumtoxinA, abobotulinumtoxinA, incobotulinumtoxinA, daxibotulinumtoxinA, prabotulinumtoxinA) and evaluate long-term outcomes. There is a lack of evidence about the effects of multiple cycles of BontA, frequency of major AEs, duration of effect, efficacy of recently-approved BontA and comparisons with other treatments.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Injecting botulinum toxin type A (a Botox-like treatment) reduces wrinkles between the eyebrows, and is relatively safe to use. The effects on wrinkles were seen when measured at four weeks after the injection. Injecting botulinum toxin type A probably increases the risk of eyelid drooping. More studies are needed to assess the longer-term benefits and harms of repeated treatment with botulinum toxin."
}
]
|
query-laysum | 3747 | [
{
"role": "user",
"content": "Abstract: Background\nShared decision making is an important component of patient-centred care. It is a set of communication and evidence-based practice skills that elicits patients' expectations, clarifies any misperceptions and discusses the best available evidence for benefits and harms of treatment. Acute respiratory infections (ARIs) are one of the most common reasons for consulting in primary care and obtaining prescriptions for antibiotics. However, antibiotics offer few benefits for ARIs, and their excessive use contributes to antibiotic resistance – an evolving public health crisis. Greater explicit consideration of the benefit-harm trade-off within shared decision making may reduce antibiotic prescribing for ARIs in primary care.\nObjectives\nTo assess whether interventions that aim to facilitate shared decision making increase or reduce antibiotic prescribing for ARIs in primary care.\nSearch methods\nWe searched CENTRAL (2014, Issue 11), MEDLINE (1946 to November week 3, 2014), EMBASE (2010 to December 2014) and Web of Science (1985 to December 2014). We searched for other published, unpublished or ongoing trials by searching bibliographies of published articles, personal communication with key trial authors and content experts, and by searching trial registries at the National Institutes of Health and the World Health Organization.\nSelection criteria\nRandomised controlled trials (RCTs) (individual level or cluster-randomised), which evaluated the effectiveness of interventions that promote shared decision making (as the focus or a component of the intervention) about antibiotic prescribing for ARIs in primary care.\nData collection and analysis\nTwo review authors independently extracted and collected data. Antibiotic prescribing was the primary outcome, and secondary outcomes included clinically important adverse endpoints (e.g. re-consultations, hospital admissions, mortality) and process measures (e.g. patient satisfaction). We assessed the risk of bias of all included trials and the quality of evidence. We contacted trial authors to obtain missing information where available.\nMain results\nWe identified 10 published reports of nine original RCTs (one report was a long-term follow-up of the original trial) in over 1100 primary care doctors and around 492,000 patients.\nThe main risk of bias came from participants in most studies knowing whether they had received the intervention or not, and we downgraded the rating of the quality of evidence because of this.\nWe meta-analysed data using a random-effects model on the primary and key secondary outcomes and formally assessed heterogeneity. Remaining outcomes are presented narratively.\nThere is moderate quality evidence that interventions that aim to facilitate shared decision making reduce antibiotic use for ARIs in primary care (immediately after or within six weeks of the consultation), compared with usual care, from 47% to 29%: risk ratio (RR) 0.61, 95% confidence interval (CI) 0.55 to 0.68. Reduction in antibiotic prescribing occurred without an increase in patient-initiated re-consultations (RR 0.87, 95% CI 0.74 to 1.03, moderate quality evidence) or a decrease in patient satisfaction with the consultation (OR 0.86, 95% CI 0.57 to 1.30, low quality evidence). 
There were insufficient data to assess the effects of the intervention on sustained reduction in antibiotic prescribing, adverse clinical outcomes (such as hospital admission, incidence of pneumonia and mortality), or measures of patient and caregiver involvement in shared decision making (such as satisfaction with the consultation; regret or conflict with the decision made; or treatment compliance following the decision). No studies assessed antibiotic resistance in colonising or infective organisms.\nAuthors' conclusions\nInterventions that aim to facilitate shared decision making reduce antibiotic prescribing in primary care in the short term. Effects on longer-term rates of prescribing are uncertain and more evidence is needed to determine how any sustained reduction in antibiotic prescribing affects hospital admission, pneumonia and death.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 10 studies (nine trials and one follow-up study) up to December 2014. In total, the studies involved over 1100 primary care doctors and around 492,000 patients. The intervention was different in each study. Six of the studies involved training clinicians (mostly primary care doctors) in communication skills that are needed to facilitate shared decision making. In three studies, as well as training doctors in these skills, patients were also given written information about antibiotics for acute respiratory infections. All included trials received funding from government sources. No studies declared a conflict of interest."
}
]
|
query-laysum | 3748 | [
{
"role": "user",
"content": "Abstract: Background\nCaesarean section increases the risk of postpartum infection for women and prophylactic antibiotics have been shown to reduce the incidence; however, there are adverse effects. It is important to identify the most effective class of antibiotics to use and those with the least adverse effects.\nObjectives\nTo determine, from the best available evidence, the balance of benefits and harms between different classes of antibiotic given prophylactically to women undergoing caesarean section, considering their effectiveness in reducing infectious complications for women and adverse effects on both mother and infant.\nSearch methods\nFor this 2020 update, we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (2 December 2019), and reference lists of retrieved studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing different classes of prophylactic antibiotics given to women undergoing caesarean section. RCTs published in abstract form were also included. We excluded trials that compared drugs with placebo or drugs within a specific class; these are assessed in other Cochrane Reviews. We excluded quasi-RCTs and cross-over trials. Cluster-RCTs were eligible for inclusion but none were identified.\nData collection and analysis\nTwo review authors independently assessed the studies for inclusion, assessed risk of bias and carried out data extraction. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included 39 studies, with 33 providing data (8073 women). Thirty-two studies (7690 women) contributing data administered antibiotics systemically, while one study (383 women) used lavage and was analysed separately.\nWe identified three main comparisons that addressed clinically important questions on antibiotics at caesarean section (all systemic administration), but we only found studies for one comparison, 'antistaphylococcal cephalosporins (1st and 2nd generation) versus broad spectrum penicillins plus betalactamase inhibitors'. We found no studies for the following comparisons: 'antistaphylococcal cephalosporins (1st and 2nd generation) versus lincosamides' and 'antistaphylococcal cephalosporins (1st and 2nd generation) versus lincosamides plus aminoglycosides'.\nTwenty-seven studies (22 provided data) included comparisons of cephalosporins (only) versus penicillins (only). However for this update, we only pooled data relating to different sub-classes of penicillins and cephalosporins where they are known to have similar spectra of action against agents likely to cause infection at caesarean section.\nEight trials, providing data on 1540 women, reported on our main comparison, 'antistaphylococcal cephalosporins (1st and 2nd generation) versus broad spectrum penicillins plus betalactamase inhibitors'. 
We found data on four other comparisons of cephalosporins (only) versus penicillins (only) using systemic administration: antistaphylococcal cephalosporins (1st and 2nd generation) versus non-antistaphylococcal penicillins (natural and broad spectrum) (9 studies, 3093 women); minimally antistaphylococcal cephalosporins (3rd generation) versus non-antistaphylococcal penicillins (natural and broad spectrum) (4 studies, 854 women); minimally antistaphylococcal cephalosporins (3rd generation) versus broad spectrum penicillins plus betalactamase inhibitors (2 studies, 865 women); and minimally antistaphylococcal cephalosporins (3rd generation) versus broad spectrum and antistaphylococcal penicillins (1 study, 200 women). For other comparisons of different classes of antibiotics, only a small number of trials provided data for each comparison, and in all but one case data were not pooled.\nFor all comparisons, there was a lack of good quality data and important outcomes often included few women. Three of the studies that contributed data were undertaken with drug company funding, one was funded by the hospital, and for all other studies the funding source was not reported.\nMost of the studies were at unclear risk of selection bias, reporting bias and other biases, partly due to the inclusion of many older trials where trial reports did not provide sufficient methodological information. We undertook GRADE assessment on the only main comparison reported by the included studies, antistaphylococcal cephalosporins (1st and 2nd generation) versus broad spectrum penicillins plus betalactamase inhibitors, and the certainty ranged from low to very low, mostly due to concerns about risk of bias, wide confidence intervals (CI), and few events.\nIn terms of the primary outcomes for our main comparison of 'antistaphylococcal cephalosporins (1st and 2nd generation) versus broad spectrum penicillins plus betalactamase inhibitors': only one small study reported sepsis, and there were too few events to identify clear differences between the drugs (risk ratio (RR) 2.37, 95% CI 0.10 to 56.41, 1 study, 75 women, very low-certainty evidence). There may be little or no difference between these antibiotics in preventing endometritis (RR 1.10; 95% CI 0.76 to 1.60, 7 studies, 1161 women; low-certainty evidence). None of the included studies reported on infant sepsis or infant oral thrush. For our secondary outcomes, we found there may be little or no difference between interventions for maternal fever (RR 1.07, 95% CI 0.65 to 1.75, 3 studies, 678 women; low-certainty evidence). We are uncertain of the effects on maternal: wound infection (RR 0.78, 95% CI 0.32 to 1.90, 4 studies, 543 women), urinary tract infection (average RR 0.64, 95% CI 0.11 to 3.73, 4 studies, 496 women), composite adverse effects (RR 0.96, 95% CI 0.09 to 10.50, 2 studies, 468 women), and skin rash (RR 1.08, 95% CI 0.28 to 4.1, 3 studies, 591 women) (all very low certainty evidence). Although maternal allergic reactions were reported by two studies, there were no events. There were no infant outcomes reported in the included studies.\nFor the other comparisons, the results for most outcomes had wide CIs, few studies and few women included. 
None of the included trials reported on longer-term maternal outcomes, or on any infant outcomes.\nAuthors' conclusions\nBased on the best currently available evidence, 'antistaphylococcal cephalosporins' and 'broad spectrum penicillins plus betalactamase inhibitors' may have similar efficacy at caesarean section when considering immediate postoperative infection, although we did not have clear evidence for several important outcomes. Most trials administered antibiotics at or after cord clamping, or post-operatively, so results may have limited applicability to current practice which generally favours administration prior to skin incision. We have no data on any infant outcomes, nor on late infections (up to 30 days) in the mother; these are important gaps in the evidence that warrant further research. Antimicrobial resistance is very important but more appropriately investigated by other trial designs.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 39 studies, of which 33 studies involving 8073 women and their babies provided data. The quality of the individual studies was generally unclear, which led to overall low or very low certainty of the evidence. Three of the 33 studies were undertaken with drug company funding. Most of the studies administered antibiotics at or after cord clamping, although practice now often gives antibiotics before skin incision.\nEight studies with data on 1540 women reported on antistaphylococcal cephalosporins (first and second generation) versus broad spectrum penicillins plus betalactamase inhibitors. We found that these antibiotics may be as effective as each other in reducing endometritis and maternal fever. We were uncertain which antibiotic performed better for wound infection, urinary tract infection, and maternal adverse effects such as nausea, vomiting, diarrhoea and skin rash. We did not find any evidence on longer-term outcomes for mothers once they left hospital, or on any outcomes for babies. Only one small study (75 women) reported on blood infection (sepsis) in mothers, with too few events to identify any clear differences between the antibiotics.\nWe identified no studies with evidence on antistaphylococcal cephalosporins versus lincosamides, nor antistaphylococcal cephalosporins versus lincosamides plus aminoglycosides. The other studies looked at a very large number of different comparisons with insufficient data to come to any firm conclusions about specific comparisons."
}
]
|
query-laysum | 3749 | [
{
"role": "user",
"content": "Abstract: Background\nStroke is the main cause of disability in high-income countries and ranks second as a cause of death worldwide. Infections occur frequently after stroke and may adversely affect outcome. Preventive antibiotic therapy in the acute phase of stroke may reduce the incidence of infections and improve outcome. In the previous version of this Cochrane Review, published in 2012, we found that antibiotics did reduce the risk of infection but did not reduce the number of dependent or deceased patients. However, included studies were small and heterogeneous. In 2015, two large clinical trials were published, warranting an update of this Review.\nObjectives\nTo assess the effectiveness and safety of preventive antibiotic therapy in people with ischaemic or haemorrhagic stroke. We wished to determine whether preventive antibiotic therapy in people with acute stroke:\n• reduces the risk of a poor functional outcome (dependency and/or death) at follow-up;\n• reduces the occurrence of infections in the acute phase of stroke;\n• reduces the occurrence of elevated body temperature (temperature ≥ 38° C) in the acute phase of stroke;\n• reduces length of hospital stay; or\n• leads to an increased rate of serious adverse events, such as anaphylactic shock, skin rash, or colonisation with antibiotic-resistant micro-organisms.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register (25 June 2017); the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 5; 25 June 2017) in the Cochrane Library; MEDLINE Ovid (1950 to 11 May 2017), and Embase Ovid (1980 to 11 May 2017). In an effort to identify further published, unpublished, and ongoing trials, we searched trials and research registers, scanned reference lists, and contacted trial authors, colleagues, and researchers in the field.\nSelection criteria\nRandomised controlled trials (RCTs) of preventive antibiotic therapy versus control (placebo or open control) in people with acute ischaemic or haemorrhagic stroke.\nData collection and analysis\nTwo review authors independently selected articles and extracted data; we discussed and resolved discrepancies at a consensus meeting with a third review author. We contacted study authors to obtain missing data when required. An independent review author assessed risk of bias using the Cochrane 'Risk of bias' tool. We calculated risk ratios (RRs) for dichotomous outcomes, assessed heterogeneity amongst included studies, and performed subgroup analyses on study quality.\nMain results\nWe included eight studies involving 4488 participants. Regarding quality of evidence, trials showed differences in study population, study design, type of antibiotic, and definition of infection; however, primary outcomes among the included studies were consistent. Mortality rate in the preventive antibiotic group was not significantly different from that in the control group (373/2208 (17%) vs 360/2214 (16%); RR 1.03, 95% confidence interval (CI) 0.87 to 1.21; high-quality evidence). The number of participants with a poor functional outcome (death or dependency) in the preventive antibiotic therapy group was also not significantly different from that in the control group (1158/2168 (53%) vs 1182/2164 (55%); RR 0.99, 95% CI 0.89 to 1.10; moderate-quality evidence). 
However, preventive antibiotic therapy did significantly reduce the incidence of 'overall' infections in participants with acute stroke from 26% to 19% (408/2161 (19%) vs 558/2156 (26%); RR 0.71, 95% CI 0.58 to 0.88; high-quality evidence). This finding was highly significant for urinary tract infections (81/2131 (4%) vs 204/2126 (10%); RR 0.40, 95% CI 0.32 to 0.51; high-quality evidence), whereas no preventive effect for pneumonia was found (222/2131 (10%) vs 235/2126 (11%); RR 0.95, 95% CI 0.80 to 1.13; high-quality evidence). No major side effects of preventive antibiotic therapy were reported. Only two studies qualitatively assessed the occurrence of elevated body temperature; therefore, these results could not be pooled. Only one study reported length of hospital stay.\nAuthors' conclusions\nPreventive antibiotics had no effect on functional outcome or mortality, but significantly reduced the risk of 'overall' infections. This reduction was driven mainly by prevention of urinary tract infection; no effect for pneumonia was found.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included eight studies on preventive antibiotic therapy, with a total of 4488 people with stroke: 2230 participants were randomised to preventive antibiotic therapy, and 2258 to control. The mean age of participants in the preventive antibiotics group was 74.2 years, and in the control group 74.8 years. In both groups, the percentage of men was 52%. Study interventions differed in all eight studies; in two studies, trialists selected the (type of) antibiotic according to local antibiotic policy, with the aim of treating pneumonia."
}
]
|
query-laysum | 3750 | [
{
"role": "user",
"content": "Abstract: Background\nIntensive cytotoxic chemotherapy for people with cancer can cause severe and prolonged cytopenia, especially neutropenia, a critical condition that is potentially life-threatening. When manifested by fever and neutropenia, it is called febrile neutropenia (FN). Invasive fungal disease (IFD) is one of the serious aetiologies of chemotherapy-induced FN. In pre-emptive therapy, physicians only initiate antifungal therapy when an invasive fungal infection is detected by a diagnostic test. Compared to empirical antifungal therapy, pre-emptive therapy may reduce the use of antifungal agents and associated adverse effects, but may increase mortality. The benefits and harms associated with the two treatment strategies have yet to be determined.\nObjectives\nTo assess the relative efficacy, safety, and impact on antifungal agent use of pre-emptive versus empirical antifungal therapy in people with cancer who have febrile neutropenia.\nSearch methods\nWe searched CENTRAL, MEDLINE Ovid, Embase Ovid, and ClinicalTrials.gov to October 2021.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared pre-emptive antifungal therapy with empirical antifungal therapy for people with cancer.\nData collection and analysis\nWe identified 2257 records from the databases and handsearching. After removing duplicates, screening titles and abstracts, and reviewing full-text reports, we included seven studies in the review. We evaluated the effects on all-cause mortality, mortality ascribed to fungal infection, proportion of antifungal agent use (other than prophylactic use), duration of antifungal use (days), invasive fungal infection detection, and adverse effects for the comparison of pre-emptive versus empirical antifungal therapy. We presented the overall certainty of the evidence for each outcome according to the GRADE approach.\nMain results\nThis review includes 1480 participants from seven randomised controlled trials. Included studies only enroled participants at high risk of FN (e.g. people with haematological malignancy); none of them included participants at low risk (e.g. people with solid tumours).\nLow-certainty evidence suggests there may be little to no difference between pre-emptive and empirical antifungal treatment for all-cause mortality (risk ratio (RR) 0.97, 95% confidence interval (CI) 0.72 to 1.30; absolute effect, reduced by 3/1000); and for mortality ascribed to fungal infection (RR 0.92, 95% CI 0.45 to 1.89; absolute effect, reduced by 2/1000). Pre-emptive therapy may decrease the proportion of antifungal agent used more than empirical therapy (other than prophylactic use; RR 0.71, 95% CI 0.47 to 1.05; absolute effect, reduced by 125/1000; very low-certainty evidence). Pre-emptive therapy may reduce the duration of antifungal use more than empirical treatment (mean difference (MD) -3.52 days, 95% CI -6.99 to -0.06, very low-certainty evidence). Pre-emptive therapy may increase invasive fungal infection detection compared to empirical treatment (RR 1.70, 95% CI 0.71 to 4.05; absolute effect, increased by 43/1000; very low-certainty evidence). Although we were unable to pool adverse events in a meta-analysis, there seemed to be no apparent difference in the frequency or severity of adverse events between groups.\nDue to the nature of the intervention, none of the seven RCTs could blind participants and personnel related to performance bias. 
We identified considerable clinical and statistical heterogeneity, which reduced the certainty of the evidence for each outcome. However, the two mortality outcomes had less statistical heterogeneity than other outcomes.\nAuthors' conclusions\nFor people with cancer who are at high-risk of febrile neutropenia, pre-emptive antifungal therapy may reduce the duration and rate of use of antifungal agents compared to empirical therapy, without increasing over-all and IFD-related mortality; but the evidence regarding invasive fungal infection detection and adverse events was inconsistent and uncertain.\n\nGiven the provided abstract, please respond to the following query: \"What question does this review aim to answer?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This systematic review aimed to find and evaluate the evidence for the relative efficacy (how well they work); safety (the number and severity of side effects); and the impact of pre-emptive versus empirical antifungal therapy on the use of antifungal medicines in people with cancer who have FN."
}
]
|
query-laysum | 3751 | [
{
"role": "user",
"content": "Abstract: Background\nPancreatic cancer (PC) is a highly lethal disease with few effective treatment options. Over the past few decades, many anti-cancer therapies have been tested in the locally advanced and metastatic setting, with mixed results. This review attempts to synthesise all the randomised data available to help better inform patient and clinician decision-making when dealing with this difficult disease.\nObjectives\nTo assess the effect of chemotherapy, radiotherapy or both for first-line treatment of advanced pancreatic cancer. Our primary outcome was overall survival, while secondary outcomes include progression-free survival, grade 3/4 adverse events, therapy response and quality of life.\nSearch methods\nWe searched for published and unpublished studies in CENTRAL (searched 14 June 2017), Embase (1980 to 14 June 2017), MEDLINE (1946 to 14 June 2017) and CANCERLIT (1999 to 2002) databases. We also handsearched all relevant conference abstracts published up until 14 June 2017.\nSelection criteria\nAll randomised studies assessing overall survival outcomes in patients with advanced pancreatic ductal adenocarcinoma. Chemotherapy and radiotherapy, alone or in combination, were the eligible treatments.\nData collection and analysis\nTwo review authors independently analysed studies, and a third settled any disputes. We extracted data on overall survival (OS), progression-free survival (PFS), response rates, adverse events (AEs) and quality of life (QoL), and we assessed risk of bias for each study.\nMain results\nWe included 42 studies addressing chemotherapy in 9463 patients with advanced pancreatic cancer. We did not identify any eligible studies on radiotherapy.\nWe did not find any benefit for chemotherapy over best supportive care. However, two identified studies did not have sufficient data to be included in the analysis, and many of the chemotherapy regimens studied were outdated.\nCompared to gemcitabine alone, participants receiving 5FU had worse OS (HR 1.69, 95% CI 1.26 to 2.27, moderate-quality evidence), PFS (HR 1.47, 95% CI 1.12 to 1.92) and QoL. On the other hand, two studies showed FOLFIRINOX was better than gemcitabine for OS (HR 0.51 95% CI 0.43 to 0.60, moderate-quality evidence), PFS (HR 0.46, 95% CI 0.38 to 0.57) and response rates (RR 3.38, 95% CI 2.01 to 5.65), but it increased the rate of side effects. The studies evaluating CO-101, ZD9331 and exatecan did not show benefit or harm when compared with gemcitabine alone.\nGiving gemcitabine at a fixed dose rate improved OS (HR 0.79, 95% CI 0.66 to 0.94, high-quality evidence) but increased the rate of side effects when compared with bolus dosing.\nWhen comparing gemcitabine combinations to gemcitabine alone, gemcitabine plus platinum improved PFS (HR 0.80, 95% CI 0.68 to 0.95) and response rates (RR 1.48, 95% CI 1.11 to 1.98) but not OS (HR 0.94, 95% CI 0.81 to 1.08, low-quality evidence). The rate of side effects increased. Gemcitabine plus fluoropyrimidine improved OS (HR 0.88, 95% CI 0.81 to 0.95), PFS (HR 0.79, 95% CI 0.72 to 0.87) and response rates (RR 1.78, 95% CI 1.29 to 2.47, high-quality evidence), but it also increased side effects. Gemcitabine plus topoisomerase inhibitor did not improve survival outcomes but did increase toxicity. One study demonstrated that gemcitabine plus nab-paclitaxel improved OS (HR 0.72, 95% CI 0.62 to 0.84, high-quality evidence), PFS (HR 0.69, 95% CI 0.58 to 0.82) and response rates (RR 3.29, 95% CI 2.24 to 4.84) but increased side effects. 
Gemcitabine-containing multi-drug combinations (GEMOXEL or cisplatin/epirubicin/5FU/gemcitabine) improved OS (HR 0.55, 95% CI 0.39 to 0.79, low-quality evidence), PFS (HR 0.43, 95% CI 0.30 to 0.62) and QOL.\nWe did not find any survival advantages when comparing 5FU combinations to 5FU alone.\nAuthors' conclusions\nCombination chemotherapy has recently overtaken the long-standing gemcitabine as the standard of care. FOLFIRINOX and gemcitabine plus nab-paclitaxel are highly efficacious, but our analysis shows that other combination regimens also offer a benefit. Selection of the most appropriate chemotherapy for individual patients still remains difficult, with clinicopathological stratification remaining elusive. Biomarker development is essential to help rationalise treatment selection for patients.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review aimed to answer the question, which therapies are the most effective for advanced pancreatic cancer?"
}
]
|
query-laysum | 3752 | [
{
"role": "user",
"content": "Abstract: Background\nHealth outcomes are improved when newborn babies with critical congenital heart defects (CCHDs) are detected before acute cardiovascular collapse. The main screening tests used to identify these babies include prenatal ultrasonography and postnatal clinical examination; however, even though both of these methods are available, a significant proportion of babies are still missed. Routine pulse oximetry has been reported as an additional screening test that can potentially improve detection of CCHD.\nObjectives\n• To determine the diagnostic accuracy of pulse oximetry as a screening method for detection of CCHD in asymptomatic newborn infants\n• To assess potential sources of heterogeneity, including:\n○ characteristics of the population: inclusion or exclusion of antenatally detected congenital heart defects;\n○ timing of testing: < 24 hours versus ≥ 24 hours after birth;\n○ site of testing: right hand and foot (pre-ductal and post-ductal) versus foot only (post-ductal);\n○ oxygen saturation: functional versus fractional;\n○ study design: retrospective versus prospective design, consecutive versus non-consecutive series; and\n○ risk of bias for the \"flow and timing\" domain of QUADAS-2.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 2) in the Cochrane Library and the following databases: MEDLINE, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Health Services Research Projects in Progress (HSRProj), up to March 2017. We searched the reference lists of all included articles and relevant systematic reviews to identify additional studies not found through the electronic search. We applied no language restrictions.\nSelection criteria\nWe selected studies that met predefined criteria for design, population, tests, and outcomes. We included cross-sectional and cohort studies assessing the diagnostic accuracy of pulse oximetry screening for diagnosis of CCHD in term and late preterm asymptomatic newborn infants. We considered all protocols of pulse oximetry screening (eg, different saturation thresholds to define abnormality, post-ductal only or pre-ductal and post-ductal measurements, test timing less than or greater than 24 hours). Reference standards were diagnostic echocardiography (echocardiogram) and clinical follow-up, including postmortem findings, mortality, and congenital anomaly databases.\nData collection and analysis\nWe extracted accuracy data for the threshold used in primary studies. We explored between-study variability and correlation between indices visually through use of forest and receiver operating characteristic (ROC) plots. We assessed risk of bias in included studies using the QUADAS-2 tool. We used the bivariate model to calculate random-effects pooled sensitivity and specificity values. We investigated sources of heterogeneity using subgroup analyses and meta-regression.\nMain results\nTwenty-one studies met our inclusion criteria (N = 457,202 participants). Nineteen studies provided data for the primary analysis (oxygen saturation threshold < 95% or ≤ 95%; N = 436,758 participants). The overall sensitivity of pulse oximetry for detection of CCHD was 76.3% (95% confidence interval [CI] 69.5 to 82.0) (low certainty of the evidence). Specificity was 99.9% (95% CI 99.7 to 99.9), with a false-positive rate of 0.14% (95% CI 0.07 to 0.22) (high certainty of the evidence). 
Summary positive and negative likelihood ratios were 535.6 (95% CI 280.3 to 1023.4) and 0.24 (95% CI 0.18 to 0.31), respectively. These results showed that out of 10,000 apparently healthy late preterm or full-term newborn infants, six will have CCHD (median prevalence in our review). Screening by pulse oximetry will detect five of these infants as having CCHD and will miss one case. In addition, screening by pulse oximetry will falsely identify another 14 infants out of the 10,000 as having suspected CCHD when they do not have it.\nThe false-positive rate for detection of CCHD was lower when newborn pulse oximetry was performed longer than 24 hours after birth than when it was performed within 24 hours (0.06%, 95% CI 0.03 to 0.13, vs 0.42%, 95% CI 0.20 to 0.89; P = 0.027).\nForest and ROC plots showed greater variability in estimated sensitivity than specificity across studies. We explored heterogeneity by conducting subgroup analyses and meta-regression of inclusion or exclusion of antenatally detected congenital heart defects, timing of testing, and risk of bias for the \"flow and timing\" domain of QUADAS-2, and we did not find an explanation for the heterogeneity in sensitivity.\nAuthors' conclusions\nPulse oximetry is a highly specific and moderately sensitive test for detection of CCHD with very low false-positive rates. Current evidence supports the introduction of routine screening for CCHD in asymptomatic newborns before discharge from the well-baby nursery.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched until March 2017 for evidence on use of pulse oximetry to detect CCHD in newborn infants and found 21 studies. These studies used different thresholds to define a pulse oximetry test as positive. We combined all studies using a threshold around 95% (19 studies with 436,758 newborn infants)."
}
]
|
query-laysum | 3753 | [
{
"role": "user",
"content": "Abstract: Background\nDementia is a common and serious neuropsychiatric syndrome, characterised by progressive cognitive and functional decline. The majority of people with dementia develop behavioural disturbances, also known as behavioural and psychological symptoms of dementia (BPSD). Several non-pharmacological interventions have been evaluated to treat BPSD in people with dementia. Simulated presence therapy (SPT), an intervention that uses video or audiotape recordings of family members played to the person with dementia, is a possible approach to treat BPSD.\nObjectives\nTo assess the effects of SPT on behavioural and psychological symptoms and quality of life in people with dementia.\nSearch methods\nWe searched ALOIS (the Specialised Register of the Cochrane Dementia and Cognitive Improvement Group), CENTRAL (The Cochrane Library) (9 April 2020), MEDLINE Ovid SP (1946 to 9 April 2020), Embase Ovid SP (1972 to 9 April 2020), PsycINFO Ovid SP (1806 to 9 April 2020), CINAHL via EBSCOhost (1980 to 9 April 2020), LILACS via BIREME (all dates to 9 April 2020), ClinicalTrials.gov (ClinicalTrials.gov) (all dates to 9 April 2020), and the World Health Organization (WHO) Portal (apps.who.int/trialsearch) (all dates to 9 April 2020). We also checked the reference lists of relevant articles to identify any additional studies.\nSelection criteria\nRandomised and quasi-randomised controlled trials, including cross-over studies, that evaluated the efficacy of SPT, consisting of personalised audio or videotape recordings of family members, in people with any form of dementia.\nData collection and analysis\nTwo authors independently selected studies, assessed risk of bias and extracted data. No meta-analyses were conducted because of substantial heterogeneity among the included studies.\nMain results\nThree trials with 144 participants met the inclusion criteria. Two of the trials had a randomised cross-over design, one was a cross-over trial which we classified as quasi-randomised.\nParticipants in the included studies were people with dementia living in nursing homes. They were predominantly women and had a mean age of over 80 years. SPT was performed using an audio or video recording prepared by family members or surrogates. It varied in its content, frequency of administration and duration. All the studies compared multiple treatments. In one study, SPT was compared with two other interventions; in the other two studies, it was compared with three other interventions. Specifically, SPT was compared to usual care, personalised music (two studies), a 'placebo' audiotape containing the voice of a person (two studies), and one-to-one social interaction performed by trained research assistants (one study). 
In terms of outcomes evaluated, one study considered agitation and withdrawn behaviour (both assessed with three methods); the second study evaluated verbal disruptive behaviour (assessed with three methods); and the third study evaluated physically agitated behaviour and verbally agitated behaviour (the method used was not clearly described).\nAccording to the GRADE criteria, the overall quality of the evidence was very low due to very small numbers of participants and risk of bias in the included studies; (none of the trials was at low risk of selection bias; all the trials were at high risk of performance bias; one trial was at high risk of attrition bias; and all had unclear selective reporting).\nBecause of variation in the participants, the format of SPT, the comparison interventions, and the measures used to assess outcomes, we judged the results unsuitable for a meta-analysis.\nWithin each trial, the effect of SPT on behaviour, compared to usual care, was mixed and depended on the measure used. Two trials which included a personalised music intervention reported no significant differences between simulated presence and music on behavioural outcomes. Because the overall quality of the evidence was very low, we were very uncertain regarding all the results\nNone of the studies evaluated quality of life or any of our secondary outcome measures (performance of activities of daily living, dropout and carer burden).\nAuthors' conclusions\nWe were unable to draw any conclusions about the efficacy of SPT for treating behavioural and psychological symptoms and improving quality of life of people with dementia. New high-quality studies are needed to investigate the effect of SPT.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Because the trials were so different from each other, we were not able to pool the results. Individually, each trial reported different methods to assess the effect of SPT on behavioural problems and the results varied depending on the method used to measure the outcome.\nNone of the studies assessed quality of life, effect on daily activities, effects on caregivers, or how likely participants were to drop out of the study."
}
]
|
query-laysum | 3754 | [
{
"role": "user",
"content": "Abstract: Background\nPelvic organ prolapse (POP) affects as many as 50% of parous women, with 14% to 19% of women undergoing a surgical correction. Although surgery for the treatment of POP is common, limited supportive data can be found in the literature regarding the preoperative and postoperative interventions related to these procedures. The main goal of perioperative interventions is to reduce the rate of adverse events while improving women's outcomes following surgical intervention for prolapse. A broad spectrum of perioperative interventions are available, and although the benefits of interventions such as prophylactic antibiotics before abdominal surgery are well established, others are unique to women undergoing POP surgeries and as such need to be investigated separately.\nObjectives\nThe aim of this review is to compare the safety and effectiveness of a range of perioperative interventions versus other interventions or no intervention (control group) at the time of surgery for pelvic organ prolapse.\nSearch methods\nWe searched the Cochrane Incontinence Group Specialised Register, which contains trials identified from the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE In Process, ClinicalTrials.gov, WHO ICTRP, handsearching of journals and conference proceedings (searched 30 November 2017), and reference lists of relevant articles. We also contacted researchers in the field.\nSelection criteria\nWe included randomised controlled trials (RCTs) of women undergoing surgical treatment for symptomatic pelvic organ prolapse that compared a perioperative intervention related to pelvic organ prolapse surgery versus no treatment or another intervention.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. Our primary outcomes were objective failure at any site and subjective postoperative prolapse symptoms. We also measured adverse effects, focusing on intraoperative blood loss and blood transfusion, intraoperative ureteral injury, and postoperative urinary tract infection.\nMain results\nWe included 15 RCTs that compared eight different interventions versus no treatment for 1992 women in five countries. Most interventions were assessed by only one RCT with evidence quality ranging from very low to moderate. The main limitation was imprecision, associated with small sample sizes and low event rates.\nPelvic floor muscle training (PFMT) compared with no treatment (three RCTs) - peri-operative intervention\nThe simplest of the PFMT programmes required women to attend six perioperative consultations in the three months surrounding prolapse surgery. Trial results provided no clear evidence of a difference between groups in objective failure at any site at 12 to 24 months (odds ratio (OR) 0.93, 95% confidence interval (CI) 0.56 to 1.54; two RCTs, 327 women; moderate-quality evidence). With respect to awareness of prolapse, findings were inconsistent. One RCT found no evidence of a difference between groups at 24 months (OR 1.07, 95% CI 0.61 to 1.87; one RCT, 305 women; low-quality evidence), and a second small RCT reported symptom reduction from the Pelvic Organ Prolapse Symptom Questionnaire completed by the intervention group at 12 months (mean difference (MD) -3.90, 95% CI -6.11 to -1.69; one RCT, 27 women; low-quality evidence). 
Researchers found no clear differences between groups at 24-month follow-up in rates of repeat surgery (or pessary) for prolapse (OR 1.92, 95% CI 0.74 to 5.02; one RCT, 316 women; low-quality evidence).\nOther interventions\nSingle RCTs evaluated the following interventions: preoperative guided imagery (N = 44); injection of vasoconstrictor agent at commencement of vaginal prolapse surgery (N = 76); ureteral stent placement during uterosacral ligament suspension (N = 91); vaginal pack (N = 116); prophylactic antibiotics for women requiring postoperative urinary catheterisation (N = 159); and postoperative vaginal dilators (N = 60).\nTwo RCTs evaluated bowel preparation (N = 298), and four RCTs assessed the method and timing of postoperative catheterisation (N = 514) - all in different comparisons.\nNone of these studies reported our primary review outcomes. One study reported intraoperative blood loss and suggested that vaginal injection of vasoconstrictors at commencement of surgery may reduce blood loss by a mean of about 30 mL. Another study reported intraoperative ureteral injury and found no clear evidence that ureteral stent placement reduces ureteral injury. Three RCTs reported postoperative urinary tract infection and found no conclusive evidence that rates of urinary tract infection were influenced by use of a vaginal pack, prophylactic antibiotics, or vaginal dilators. Other studies did not report these outcomes.\nAuthors' conclusions\nThere was a paucity of data about perioperative interventions in pelvic organ prolapse surgery. A structured programme of pelvic floor muscle training before and after prolapse surgery did not consistently demonstrate any benefit for the intervention; however, this finding is based on the results of two small studies. With regard to other interventions (preoperative bowel preparation and injection of vasoconstrictor agent, ureteral stent placement during uterosacral ligament suspension, postoperative vaginal pack insertion, use of vaginal dilators, prophylactic antibiotics for postoperative catheter care), we found no evidence regarding rates of recurrent prolapse and no clear evidence that these interventions were associated with clinically meaningful reductions in adverse effects, such as intraoperative or postoperative blood transfusion, intraoperative ureteral injury, or postoperative urinary tract infection.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Reviewers found very little evidence on perioperative interventions in pelvic organ prolapse surgery. Few trials reported primary outcomes. A structured programme of pelvic floor muscle training before and after surgery did not consistently demonstrate any benefit for the intervention. With regard to other preoperative interventions, neither bowel preparation nor detailed preoperative mapping demonstrated significant benefit when compared to usual care.\nIntraoperative interventions such as injection of a vasoconstrictor agent, ureteral stent placement during uterosacral ligament suspension, or placement of a vaginal pack did not demonstrate benefit for reduced blood loss or rate of urinary tract infection or injury to the ureter.\nVaginal packing postoperatively did not reduce the rate of haematoma (collection of blood) when compared to prolapse surgery without packing in a single trial. The rate of postoperative urinary tract infection was not reduced by use of a vaginal pack, prophylactic antibiotics, or vaginal dilators."
}
]
|
query-laysum | 3755 | [
{
"role": "user",
"content": "Abstract: Background\nRenal vasculitis presents as rapidly progressive glomerulonephritis and comprises of a group of conditions characterised by acute kidney injury (AKI), haematuria and proteinuria. Treatment of these conditions involve the use of steroid and non-steroid agents in combination with plasma exchange. Although immunosuppression overall has been very successful in treatment of these conditions, many questions remain unanswered in terms of dose and duration of therapy, the use of plasma exchange and the role of new therapies. This 2019 publication is an update of a review first published in 2008 and updated in 2015.\nObjectives\nTo evaluate the benefits and harms of any intervention used for the treatment of renal vasculitis in adults.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 21 November 2019 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nRandomised controlled trials investigating any intervention for the treatment of renal vasculitis in adults.\nData collection and analysis\nTwo authors independently assessed study quality and extracted data. Statistical analyses were performed using a random effects model and results expressed as risk ratio (RR) with 95% confidence intervals (CI) for dichotomous outcomes or mean difference (MD) for continuous outcomes.\nMain results\nForty studies (3764 patients) were included. Studies conducted earlier tended to have a higher risk of bias due to poor (or poorly reported) study design, broad inclusion criteria, less well developed disease definitions and low patient numbers. Later studies tend to have improved in all areas of quality, aided by the development of large international study groups.\nInduction therapy: Plasma exchange as adjunctive therapy may reduce the need for dialysis at three (2 studies: RR 0.43, 95% CI 0.23 to 0.78; I2 = 0%) and 12 months (6 studies: RR 0.45, 95% CI 0.29 to 0.72; I2 = 0%) (low certainty evidence). Plasma exchange may make little or no difference to death, serum creatinine (SCr), sustained remission or to serious or the total number of adverse events. Plasma exchange may increase the number of serious infections (5 studies: RR 1.26, 95% CI 1.03 to 1.54; I2 = 0%; low certainty evidence). Remission rates for pulse versus continuous cyclophosphamide (CPA) were equivalent but pulse treatment may increase the risk of relapse (4 studies: RR 1.79, 95% CI 1.11 to 2.87; I2 = 0%) (low certainty evidence) compared with continuous cyclophosphamide. Pulse CPA may make little or no difference to death at final follow-up, or SCr at any time point. More patients required dialysis in the pulse CPA group. Leukopenia was less common with pulse treatment; however, nausea was more common. Rituximab compared to CPA probably makes little or no difference to death, remission, relapse, severe adverse events, serious infections, or severe adverse events. Kidney function and dialysis were not reported. A single study reported no difference in the number of deaths, need for dialysis, or adverse events between mycophenolate mofetil (MMF) and CPA. Remission was reported to improve with MMF however more patients relapsed. 
A lower dose of steroids was probably as effective as high dose and may be safer, causing fewer infections; kidney function and relapse were not reported. There was little of no difference in death or remission between six and 12 pulses of CPA. There is low certainty evidence that there were less relapses with 12 pulses (2 studies: RR 1.57, 95% CI 0.96 to 2.56; I2 = 0%), but more infections (2 studies: RR 0.79, 95% CI 0.36 to 1.72; I2 = 45%). One study reported severe adverse events were less in patients receiving six compared to 12 pulses of CPA. Kidney function and dialysis were not reported. There is limited evidence from single studies about the effectiveness of intravenous immunoglobulin, avacopan, methotrexate, immunoadsorption, lymphocytapheresis, or etanercept.\nMaintenance therapy: Azathioprine (AZA) has equivalent efficacy as a maintenance agent to CPA with fewer episodes of leucopenia. MMF resulted in a higher relapse rate when tested against azathioprine in remission maintenance. Rituximab is an effective remission induction and maintenance agent. Oral co-trimoxazole did not reduce relapses in granulomatosis with polyangiitis. There were fewer relapses but more serious adverse events with leflunomide compared to methotrexate. There is limited evidence from single studies about the effectiveness of methotrexate versus CPA or AZA, cyclosporin versus CPA, extended versus standard AZA, and belimumab.\nAuthors' conclusions\nPlasma exchange was effective in patients with severe AKI secondary to vasculitis. Pulse cyclophosphamide may result in an increased risk of relapse when compared to continuous oral use but a reduced total dose. Whilst CPA is standard induction treatment, rituximab and MMF were also effective. AZA, methotrexate and leflunomide were effective as maintenance therapy. Further studies are required to more clearly delineate the appropriate place of newer agents within an evidence-based therapeutic strategy.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Forty studies (3764 patients) were identified. Plasma exchange reduces the risk of end-stage kidney disease in patients presenting with severe acute kidney failure (AKI). The use of pulse cyclophosphamide results in good remission rates but there was an increased risk of relapse. Other appropriate induction agents include rituximab and mycophenolate. Azathioprine is effective as maintenance therapy once remission has been achieved. A lower dose of steroids is just as effective as high dose and may be safer, causing fewer infections. One study shows that a new complement inhibitor can be used to replace steroids in the initial treatment of vasculitis. These are early data. The drug is likely to be very expensive so its place in treatment is not yet clearly defined. Mycophenolate mofetil has also been tested in maintenance treatment and was found to result in a higher rate of disease relapse, when compared to Azathioprine. Methotrexate and leflunomide are useful in maintenance therapy but their relative effectiveness are not clearly defined. Patients on immunosuppression for up to four years after diagnosis have a lowered relapse rate to those in whom treatment is ceased by three years."
}
]
|
query-laysum | 3756 | [
{
"role": "user",
"content": "Abstract: Background\nMedications used to treat inflammatory bowel disease (IBD) have significantly improved patient outcomes and delayed time to surgery. However, some of these therapies are recognized to increase the general risk of infection and have an unclear impact on postoperative infection risk.\nObjectives\nTo assess the impact of IBD medications on postoperative infection risk within 30 days of surgery.\nSearch methods\nWe searched the Cochrane IBD Groups Specialized Register (29 October 2019), MEDLINE (January 1966 to October 2019), EMBASE (January 1985 to October 2019), the Cochrane Library, Clinicaltrials.gov and the WHO International Clinical Trials Registry Platform from inception up to October 2019 and reference lists of articles.\nSelection criteria\nRandomized controlled trials, quasi-randomized controlled trials, non-randomized controlled trials, prospective cohort studies, retrospective cohort studies, case-control studies and cross-sectional studies comparing patients treated with an IBD medication preoperatively or within 30 days postoperatively to patients who were not taking that medication. Manuscripts and abstracts were included.\nData collection and analysis\nTwo authors independently screened titles and abstracts and extracted data. The primary outcome was postoperative infection within 30 days of surgery. Secondary outcomes included incisional infections and wound dehiscence, intra-abdominal infectious complications and extra-abdominal infections. Three authors assessed risk of bias using the Newcastle-Ottawa scale. We contacted authors for additional information when data were missing. For the primary and secondary outcomes, we calculated odds ratios (ORs) and corresponding 95% confidence intervals (95% CI) using the generic inverse variance method. When applicable, we analyzed adjusted and unadjusted data separately. The certainty of the evidence was evaluated using GRADE.\nMain results\nSixty-eight non-randomized studies were included. Twenty-four studies had low risk of bias while the remaining had very high risk. Based on pooling of adjusted data, overall infectious complications were increased in patients who received anti-TNF agents (OR 1.60; 95% CI 1.20 to 2.13; very low certainty evidence) and corticosteroids (OR 1.70; 95% CI 1.38 to 2.09; low certainty evidence). Use of 5-ASA (OR 0.76; 95% CI 0.51 to 1.14; very low certainty evidence), immunomodulators (OR 1.29; 95% CI 0.95 to 1.76; low certainty evidence) and anti-integrin agents (OR 1.04; 95% CI 0.79 to 1.36; low certainty evidence) had no impact on overall infectious complications. No difference in the odds of wound-related complications was seen in patients using corticosteroids, 5-ASA, immunomodulators, anti-TNF or anti-integrin agents when compared to controls. Both corticosteroids and anti-TNF agents increased odds of intra-abdominal infection (OR 1.53; 95% CI 1.28 to 1.84; very low certainty evidence and OR 1.38; 95% CI 1.04 to 1.82; very low certainty evidence, respectively) whereas no impact was observed with 5-ASA, immunomodulators or anti-integrin agents. The rate of extra-abdominal infections was not affected by corticosteroids, immunomodulators, anti-TNF or anti-integrin agents.\nAuthors' conclusions\nThe evidence regarding corticosteroids, 5ASA, immunomodulators, anti-TNF mediations and anti-integrin medications was low or very low in certainty. 
Thus, the impact of these medications on postoperative infectious complications is uncertain and no firm conclusions can be drawn regarding their safety in the perioperative period. Decisions regarding preoperative IBD medications should be tailored to each patient’s unique circumstances. Future studies should focus on controlling for potential confounding factors to generate higher quality evidence.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "More than 1.2 million individuals in North America are affected by inflammatory bowel disease (IBD). It is a condition that involves inflammation in the large and/or small intestine(s), resulting in symptoms such as diarrhea and abdominal pain. Many of the medications used to treat IBD suppress the immune system. As a result, use of these medications increases the risk of infection. This increased risk of infection is particularly concerning in patients undergoing surgery."
}
]
|
query-laysum | 3757 | [
{
"role": "user",
"content": "Abstract: Background\nPostnatal depression is a common disorder that can have adverse short- and long-term effects on maternal morbidity, the new infant and the family as a whole. Treatment is often largely by social support and psychological interventions. It is not known whether antidepressants are an effective and safe choice for treatment of this disorder. This review was undertaken to evaluate the effectiveness of different antidepressants and to compare their effectiveness with other forms of treatment, placebo or treatment as usual. It is an update of a review first published in 2001.\nObjectives\nTo assess the effectiveness of antidepressant drugs in comparison with any other treatment (psychological, psychosocial or pharmacological), placebo or treatment as usual for postnatal depression.\nSearch methods\nWe searched the Cochrane Depression, Anxiety and Neurosis Group's Specialized Register (CCDANCTR) to 11 July 2014. This register contains reports of relevant randomised controlled trials (RCTs) from the following bibliographic databases: The Cochrane Library (all years), MEDLINE (1950 to date), EMBASE, (1974 to date) and PsycINFO (1967 to date). We also searched international trial registries and contacted pharmaceutical companies and experts in the field.\nSelection criteria\nWe included RCTs of women with depression with onset up to six months postpartum that compared antidepressant treatment (alone or in combination with another treatment) with any other treatment, placebo or treatment as usual.\nData collection and analysis\nTwo review authors independently extracted data from the trial reports. We requested missing information from investigators wherever possible. We sought data to allow an intention-to-treat analysis. Random effects meta-analyses were conducted to pool data where sufficient comparable studies were identified.\nMain results\nWe included six trials with 596 participants in this review. All studies had a randomised controlled parallel group design, with two conducted in the UK, three in the US and one in Israel. Meta-analyses were performed to pool data on response and remission from studies comparing antidepressants with placebo. No meta-analyses could be conducted for other comparisons due to the small number of trials identified.\nFour studies compared selective serotonin reuptake inhibitors (SSRIs) with placebo (two using sertraline, one using paroxetine and one using fluoxetine; 233 participants in total). In two of these studies both the experimental and placebo groups also received psychological therapy. Pooled risk ratios based on data from three of these studies (146 participants) showed that women randomised to SSRIs had higher rates of response and remission than those randomised to placebo (response: RR 1.43, 95% CI 1.01 to 2.03; remission: RR 1.79, 95% CI 1.08 to 2.98); the fourth study did not report data on response or remission.\nOne study (254 participants) compared antidepressant treatment with treatment as usual (for the first four weeks) followed by listening visits. The study found significantly higher rates of improvement in the antidepressant group than treatment-as-usual group after the first four weeks, but no difference between antidepressants and listening visits at the later follow-up. 
In addition, one study comparing sertraline with nortriptyline (a tricyclic antidepressant) found no difference in effectiveness (109 participants).\nSide effects were experienced by a substantial proportion of women, but there was no evidence of a meaningful difference in the number of adverse effects between treatment arms in any study. There were very limited data on adverse effects experienced by breastfed infants, with no long-term follow-up. All but one of the studies were assessed as being at high or uncertain risk of attrition bias and selective outcome reporting. In particular, one of the placebo-controlled studies had over 50% drop-out.\nAuthors' conclusions\nThe evidence base for this review was very limited, with a small number of studies and little information on a number of important outcomes, particularly regarding potential effects on the child. Risk of bias, for example from high attrition rates, as well as low representativeness of participants (e.g. exclusion of women with severe or chronic depression in several trials) also limit the conclusions that can be drawn.\nPooled estimates for response and remission found that SSRIs were significantly more effective than placebo for women with postnatal depression. However the quality of evidence contributing to this comparison was assessed as very low owing to the small sample size for this comparison (146 participants from three studies), the risk of bias in included studes and the inclusion of one study where all participants in both study arms additionally received psychological therapy. There was insufficient evidence to conclude whether, and for whom, antidepressant or psychological/psychosocial treatments are more effective, or whether some antidepressants are more effective or better tolerated than others. There is also inadequate evidence on whether the benefits of antidepressants persist beyond eight weeks or whether they have short- or long-term adverse effects on breastfeeding infants.\nProfessionals treating women with severe depression in the postnatal period will need to draw on other evidence, including trials among general adult populations and observational studies of antidepressant safety when breastfeeding (although the potential for confounding in non-randomised studies must be considered). More RCTs are needed with larger sample sizes and longer follow-up, including assessment of the impact on the child and safety of breastfeeding. Further larger-scale trials comparing antidepressants with alternative treatment modalities are also required.\n\nGiven the provided abstract, please respond to the following query: \"Which studies were included in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched clinical trials registries; the Cochrane Depression, Anxiety and Neurosis Group; and the Cochrane Pregnancy and Childbirth Group databases to find all high-quality studies comparing antidepressants with any other form of treatment from the upper date limit of the most recent previous searches to July 2014. We contacted drug companies and experts in the field.\nTo be included in the review, studies had to be randomised controlled trials (clinical studies where people were randomly put into one of two or more treatment groups) and had to include women with postnatal depression (onset of depression up to six months after giving birth) who were not taking any antidepressant medication at the start of the trial.\nWe included six trials of 596 women in the review. Although many of the studies were well conducted and reported, there are some areas with substantial risk of bias; for example, through incomplete follow-up (e.g. in one study over 50% of the participants dropped out prior to the primary outcome measurement)."
}
]
|
query-laysum | 3758 | [
{
"role": "user",
"content": "Abstract: Background\nCatatonia is a debilitating disorder of movement and volition associated with schizophrenia and some other mental illnesses. People with catatonia are more likely to require hospitalisation and highly supervised care than those without the disorder. They also have an increased risk of secondary complications such as pneumonia, malnutrition and dehydration. The mainstay of treatment has been drug therapies and electroconvulsive therapy.\nObjectives\nTo compare the effects of benzodiazepines with other drugs, placebo or electroconvulsive therapy for catatonia in people with schizophrenia or other similar serious mental illnesses (SMIs).\nSearch methods\nWe updated our previous search (28 February 2007) by searching the Cochrane Schizophrenia Group's Study-Based Register of Trials (9 November 2016; 6 February 2019). This register is compiled by systematic searches of major resources (including CENTRAL, MEDLINE, Embase, AMED, BIOSIS, CINAHL, PsycINFO, PubMed, and registries of clinical trials) and their monthly updates, handsearches, grey literature, and conference proceedings, with no language, date, document type, or publication status limitations for inclusion of records into the register. We also manually searched reference lists from studies selected by the search.\nSelection criteria\nAll controlled clinical trials that randomised people who have schizophrenia or other similar SMI and experiencing catatonia to receive benzodiazepines or another relevant treatment. We included studies that met our inclusion criteria and reported usable data. We excluded those not meeting our inclusion criteria or those not reporting usable data. We contacted authors when we required further information; and if we received no response, we put those studies aside as 'awaiting assessment'.\nData collection and analysis\nReview authors extracted data independently. For dichotomous data we calculated relative risks (RR) and their 95% confidence intervals (CI) on an intention-to-treat basis using a fixed-effect model. We completed a 'Risk of bias' assessment for the included study and generated a 'Summary of findings' table using GRADE.\nMain results\nThe searches found 130 citations, from which we could identify 22 possibly relevant studies. From these, we could only include one study. This study had a relatively small sample size of 17 participants who received lorazepam or oxazepam and were drug free for one week before the trial started. The only usable data reported by this study were clinically important change in symptoms of catatonia measured as 50% improvement on the Visual Analogue Scale (VAS). There was no difference in the numbers of participants showing a clinically important change in their catatonic symptoms (RR 0.95, 95% CI 0.42 to 2.16; participants = 17; studies = 1; very low quality evidence).\nNo data were reported for other important outcomes of hospital stay, clinically important change in satisfaction with care, global state, adverse effects or general functioning\nWe did find a few studies meeting our inclusion criteria but they reported no usable data. We had to exclude these. Although poorly reported, these studies do illustrate that relevant studies have been undertaken — they are not impossible to design and conduct.\nAuthors' conclusions\nAnalysis of the results from this review, which was a head-to-head comparison of two benzodiazepine monotherapies, does not show a clear difference in effect. 
No data were available for benzodiazepines compared to placebo or standard care. The lack of usable data and very low quality of data available makes it impossible to draw firm conclusions and further studies with a high-quality methodology and reporting are required in order to determine more definitively the outcomes associated with benzodiazepine use in the clinical management of catatonia in persons with schizophrenia and other SMI.\n\nGiven the provided abstract, please respond to the following query: \"Evidence found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "One trial was found in the search which met the review requirements and provided limited, very low quality usable data for one outcome only. This trial compared two benzodiazepines (lorazepam vs oxazepam) and found no clear difference between these two treatments for improvement in the symptoms of catatonia for people who have catatonia and schizophrenia or similar serious mental illness."
}
]
|
query-laysum | 3759 | [
{
"role": "user",
"content": "Abstract: Background\nPeople with inflammatory bowel disease (IBD) require intensive follow-up with frequent consultations after diagnosis. IBD telehealth management includes consulting by phone, instant messenger, video, text message, or web-based services. Telehealth can be beneficial for people with IBD, but may have its own set of challenges. It is important to systematically review the evidence on the types of remote or telehealth approaches that can be deployed in IBD. This is particularly relevant following the coronavirus disease 2019 (COVID-19) pandemic, which led to increased self- and remote-management.\nObjectives\nTo identify the communication technologies used to achieve remote healthcare for people with inflammatory bowel disease and to assess their effectiveness.\nSearch methods\nOn 13 January 2022, we searched CENTRAL, Embase, MEDLINE, three other databases, and three trials registries with no limitations on language, date, document type, or publication status.\nSelection criteria\nAll published, unpublished, and ongoing randomised controlled trials (RCTs) that evaluated telehealth interventions targeted at people with IBD versus any other type of intervention or no intervention.\nWe did not include studies based on digital patient information resources or education resources, unless they formed part of a wider package including an element of telehealth. We excluded studies where remote monitoring of blood or faecal tests was the only form of monitoring.\nData collection and analysis\nTwo review authors independently extracted data from the included studies and assessed their risk of bias. We analysed studies on adult and paediatric populations separately. We expressed the effects of dichotomous outcomes as risk ratios (RRs) and the effects of continuous outcomes as mean differences (MDs) or standardised mean differences (SMDs), each with their 95% confidence intervals (CIs). We assessed the certainty of the evidence using GRADE methodology.\nMain results\nWe included 19 RCTs with a total of 3489 randomised participants, aged eight to 95 years. Three studies examined only people with ulcerative colitis (UC), two studies examined only people with Crohn's disease (CD), and the remaining studies examined a mix of IBD patients. Studies considered a range of disease activity states. The length of the interventions ranged from six months to two years. The telehealth interventions were web-based and telephone-based.\nWeb-based monitoring versus usual care\nTwelve studies compared web-based disease monitoring to usual care.\nThree studies, all in adults, provided data on disease activity. Web-based disease monitoring (n = 254) is probably equivalent to usual care (n = 174) in reducing disease activity in people with IBD (SMD 0.09, 95% CI −0.11 to 0.29). The certainty of the evidence is moderate.\nFive studies on adults provided dichotomous data that we could use for a meta-analysis on flare-ups. Web-based disease monitoring (n = 207/496) is probably equivalent to usual care (n = 150/372) for the occurrence of flare-ups or relapses in adults with IBD (RR 1.09, 95% CI 0.93 to 1.27). The certainty of the evidence is moderate. One study provided continuous data. Web-based disease monitoring (n = 465) is probably equivalent to usual care (n = 444) for the occurrence of flare-ups or relapses in adults with CD (MD 0.00 events, 95% CI −0.06 to 0.06). The certainty of the evidence is moderate. One study provided dichotomous data on flare-ups in a paediatric population. 
Web-based disease monitoring (n = 28/84) may be equivalent to usual care (n = 29/86) for the occurrence of flare-ups or relapses in children with IBD (RR 0.99, 95% CI 0.65 to 1.51). The certainty of the evidence is low.\nFour studies, all in adults, provided data on quality of life. Web-based disease monitoring (n = 594) is probably equivalent to usual care (n = 505) for quality of life in adults with IBD (SMD 0.08, 95% CI −0.04 to 0.20). The certainty of the evidence is moderate.\nBased on continuous data from one study in adults, we found that web-based disease monitoring probably leads to slightly higher medication adherence compared to usual care (MD 0.24 points, 95% CI 0.01 to 0.47). The results are of moderate certainty. Based on continuous data from one paediatric study, we found no difference between web-based disease monitoring and usual care in terms of their effect on medication adherence (MD 0.00, 95% CI −0.63 to 0.63), although the evidence is very uncertain. When we meta-analysed dichotomous data from two studies on adults, we found no difference between web-based disease monitoring and usual care in terms of their effect on medication adherence (RR 0.87, 95% CI 0.62 to 1.21), although the evidence is very uncertain.\nWe were unable to draw any conclusions on the effects of web-based disease monitoring compared to usual care on healthcare access, participant engagement, attendance rate, interactions with healthcare professionals, and cost- or time-effectiveness. The certainty of the evidence is very low.\nAuthors' conclusions\nThe evidence in this review suggests that web-based disease monitoring is probably no different to standard care in adults when considering disease activity, occurrence of flare-ups or relapse, and quality of life. There may be no difference in these outcomes in children, but the evidence is limited. Web-based monitoring probably increases medication adherence slightly compared to usual care.\nWe are uncertain about the effects of web-based monitoring versus usual care on our other secondary outcomes, and about the effects of the other telehealth interventions included in our review, because the evidence is limited.\nFurther studies comparing web-based disease monitoring to standard care for the clinical outcomes reported in adults are unlikely to change our conclusions, unless they have longer follow-up or investigate under-reported outcomes or populations. Studies with a clearer definition of web-based monitoring would enhance applicability, enable practical dissemination and replication, and enable alignment with areas identified as important by stakeholders and people affected by IBD.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review is up-to-date as of January 2022."
}
]
|
query-laysum | 3761 | [
{
"role": "user",
"content": "Abstract: Background\nEthnic minority groups in upper-middle-income and high-income countries tend to be socioeconomically disadvantaged and to have a higher prevalence of type 2 diabetes than is seen in the majority population.\nObjectives\nTo assess the effectiveness of culturally appropriate health education for people in ethnic minority groups with type 2 diabetes mellitus.\nSearch methods\nA systematic literature search was performed of the following databases: The Cochrane Library, MEDLINE, EMBASE, PsycINFO, the Education Resources Information Center (ERIC) and Google Scholar, as well as reference lists of identified articles. The date of the last search was July 2013 for The Cochrane Library and September 2013 for all other databases. We contacted authors in the field and handsearched commonly encountered journals as well.\nSelection criteria\nWe selected randomised controlled trials (RCTs) of culturally appropriate health education for people over 16 years of age with type 2 diabetes mellitus from named ethnic minority groups residing in upper-middle-income or high-income countries.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. When disagreements arose regarding selection of papers for inclusion, two additional review authors were consulted for discussion. We contacted study authors to ask for additional information when data appeared to be missing or needed clarification.\nMain results\nA total of 33 trials (including 11 from the original 2008 review) involving 7453 participants were included in this review, with 28 trials providing suitable data for entry into meta-analysis. Although the interventions provided in these studies were very different from one study to another (participant numbers, duration of intervention, group versus individual intervention, setting), most of the studies were based on recognisable theoretical models, and we tried to be inclusive in considering the wide variety of available culturally appropriate health education.\nGlycaemic control (as measured by glycosylated haemoglobin A1c (HbA1c)) showed improvement following culturally appropriate health education at three months (mean difference (MD) -0.4% (95% confidence interval (CI) -0.5 to -0.2); 14 trials; 1442 participants; high-quality evidence) and at six months (MD -0.5% (95% CI -0.7 to -0.4); 14 trials; 1972 participants; high-quality evidence) post intervention compared with control groups who received 'usual care'. This control was sustained to a lesser extent at 12 months (MD -0.2% (95% CI -0.3 to -0.04); 9 trials; 1936 participants) and at 24 months (MD -0.3% (95% CI -0.6 to -0.1); 4 trials; 2268 participants; moderate-quality evidence) post intervention. Neutral effects on health-related quality of life measures were noted and there was a general lack of reporting of adverse events in most studies — the other two primary outcomes for this review. Knowledge scores showed improvement in the intervention group at three (standardised mean difference (SMD) 0.4 (95% CI 0.1 to 0.6), six (SMD 0.5 (95% CI 0.3 to 0.7)) and 12 months (SMD 0.4 (95% CI 0.1 to 0.6)) post intervention. A reduction in triglycerides of 24 mg/dL (95% CI -40 to -8) was observed at three months, but this was not sustained at six or 12 months. Neutral effects on total cholesterol, low-density lipoprotein (LDL) cholesterol or high-density lipoprotein (HDL) cholesterol were reported at any follow-up point. 
Other outcome measures (blood pressure, body mass index, self-efficacy and empowerment) also showed neutral effects compared with control groups. Data on the secondary outcomes of diabetic complications, mortality and health economics were lacking or were insufficient.\nBecause of the nature of the intervention, participants and personnel delivering the intervention were rarely blinded, so the risk of performance bias was high. Also, subjective measures were assessed by participants who self-reported via questionnaires, leading to high bias in subjective outcome assessment.\nAuthors' conclusions\nCulturally appropriate health education has short- to medium-term effects on glycaemic control and on knowledge of diabetes and healthy lifestyles. With this update (six years after the first publication of this review), a greater number of RCTs were reported to be of sufficient quality for inclusion in the review. None of these studies were long-term trials, and so clinically important long-term outcomes could not be studied. No studies included an economic analysis. The heterogeneity of the studies made subgroup comparisons difficult to interpret with confidence. Long-term, standardised, multi-centre RCTs are needed to compare different types and intensities of culturally appropriate health education within defined ethnic minority groups, as the medium-term effects could lead to clinically important health outcomes, if sustained.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Culturally appropriate health education improved blood sugar control among participants, compared with those receiving 'usual' care, at three, six, 12 and 24 months after the intervention was provided. Knowledge about diabetes improved, and participants attained healthier lifestyles. No information was available regarding complications of diabetes and death from any cause, and there was a general lack of reporting of adverse effects in most studies. Neutral effects were observed for health-related quality of life, blood lipids like cholesterol, blood pressure and weight. The costs of educational programmes were rarely analysed. Compared with the first review, performed in 2008 (11 studies), many more published studies were identified in this review (altogether 33 studies), strengthening the original findings that blood sugar control and knowledge of diabetes are improved when culturally appropriate health education is provided to people in ethnic minority groups diagnosed with diabetes. The effects of this improvement are shown in this update as lasting longer — up to 24 months after health education was provided in some trials. However, additional high-quality standardised RCTs of longer duration are needed, along with full evaluation of costs."
}
]
|
query-laysum | 3762 | [
{
"role": "user",
"content": "Abstract: Background\nSubarachnoid haemorrhage has an incidence of up to nine per 100,000 person-years. It carries a mortality of 30% to 45% and leaves 20% dependent in activities of daily living. The major causes of death or disability after the haemorrhage are delayed cerebral ischaemia and rebleeding. Interventions aimed at lowering blood pressure may reduce the risk of rebleeding, while the induction of hypertension may reduce the risk of delayed cerebral ischaemia. Despite the fact that medical alteration of blood pressure has been clinical practice for more than three decades, no previous systematic reviews have assessed the beneficial and harmful effects of altering blood pressure (induced hypertension or lowered blood pressure) in people with acute subarachnoid haemorrhage.\nObjectives\nTo assess the beneficial and harmful effects of altering arterial blood pressure (induced hypertension or lowered blood pressure) in people with acute subarachnoid haemorrhage.\nSearch methods\nWe searched the following from inception to 8 September 2020 (Chinese databases to 27 January 2019): Cochrane Stroke Group Trials register; CENTRAL; MEDLINE; Embase; five other databases, and five trial registries. We screened reference lists of review articles and relevant randomised clinical trials.\nSelection criteria\nRandomised clinical trials assessing the effects of inducing hypertension or lowering blood pressure in people with acute subarachnoid haemorrhage. We included trials irrespective of publication type, status, date, and language.\nData collection and analysis\nTwo review authors independently extracted data. We assessed the risk of bias of all included trials to control for the risk of systematic errors. We performed trial sequential analysis to control for the risks of random errors. We also applied GRADE. Our primary outcomes were death from all causes and death or dependency. Our secondary outcomes were serious adverse events, quality of life, rebleeding, delayed cerebral ischaemia, and hydrocephalus. We assessed all outcomes closest to three months' follow-up (primary point of interest) and maximum follow-up.\nMain results\nWe included three trials: two trials randomising 61 participants to induced hypertension versus no intervention, and one trial randomising 224 participants to lowered blood pressure versus placebo. All trials were at high risk of bias. The certainty of the evidence was very low for all outcomes.\nInduced hypertension versus control\nTwo trials randomised participants to induced hypertension versus no intervention. Meta-analysis showed no evidence of a difference between induced hypertension versus no intervention on death from all causes (risk ratio (RR) 1.60, 95% confidence interval (CI) 0.57 to 4.42; P = 0.38; I2 = 0%; 2 trials, 61 participants; very low-certainty evidence). Trial sequential analyses showed that we had insufficient information to confirm or reject our predefined relative risk reduction of 20% or more.\nMeta-analysis showed no evidence of a difference between induced hypertension versus no intervention on death or dependency (RR 1.29, 95% CI 0.78 to 2.13; P = 0.33; I2 = 0%; 2 trials, 61 participants; very low-certainty evidence). 
Trial sequential analyses showed that we had insufficient information to confirm or reject our predefined relative risk reduction of 20% or more.\nMeta-analysis showed no evidence of a difference between induced hypertension and control on serious adverse events (RR 2.24, 95% CI 1.01 to 4.99; P = 0.05; I2 = 0%; 2 trials, 61 participants; very low-certainty evidence). Trial sequential analysis showed that we had insufficient information to confirm or reject our predefined relative risk reduction of 20% or more.\nOne trial (41 participants) reported quality of life using the Stroke Specific Quality of Life Scale. The induced hypertension group had a median of 47 points (interquartile range 35 to 55) and the no-intervention group had a median of 49 points (interquartile range 35 to 55). The certainty of evidence was very low.\nOne trial (41 participants) reported rebleeding. Fisher's exact test (P = 1.0) showed no evidence of a difference between induced hypertension and no intervention on rebleeding. The certainty of evidence was very low. Trial sequential analysis showed that we had insufficient information to confirm or reject our predefined relative risk reduction of 20% or more.\nOne trial (20 participants) reported delayed cerebral ischaemia. Fisher's exact test (P = 1.0) showed no evidence of a difference between induced hypertension and no intervention on delayed cerebral ischaemia. The certainty of the evidence was very low. Trial sequential analysis showed that we had insufficient information to confirm or reject our predefined relative risk reduction of 20% or more.\nNone of the trials randomising participants to induced hypertension versus no intervention reported on hydrocephalus. No subgroup analyses could be conducted for trials randomising participants to induced hypertension versus no intervention.\nLowered blood pressure versus control\nOne trial randomised 224 participants to lowered blood pressure versus placebo. The trial only reported on death from all causes. Fisher's exact test (P = 0.058) showed no evidence of a difference between lowered blood pressure versus placebo on death from all causes. The certainty of evidence was very low.\nAuthors' conclusions\nBased on the current evidence, there is a lack of information needed to confirm or reject minimally important intervention effects on patient-important outcomes for both induced hypertension and lowered blood pressure. There is an urgent need for trials assessing the effects of altering blood pressure in people with acute subarachnoid haemorrhage. Such trials should use the SPIRIT statement for their design and the CONSORT statement for their reporting. Moreover, such trials should use methods allowing for blinded altering of blood pressure and report on patient-important outcomes such as mortality, rebleeding, delayed cerebral ischaemia, quality of life, hydrocephalus, and serious adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases from their inception to September 2020."
}
]
|
query-laysum | 3763 | [
{
"role": "user",
"content": "Abstract: Background\nHigh intake of added sugar have been suggested to impact the risk for cardiovascular disease (CVD). Knowledge on the subject can contribute to preventing CVD.\nObjectives\nTo assess the effects of a high versus low-added sugar consumption for primary prevention of CVD in the general population.\nSearch methods\nWe searched Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE, Embase, Conference Proceedings Citation Index-Science (CPCI-S) on 2 July 2021. We also conducted a search of ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) Search Portal for ongoing or unpublished trials. The search was performed together with reference checking, citation searching and contact with study authors to identify additional studies. We imposed no restriction on language of publication or publication status.\nSelection criteria\nWe included randomised controlled trials (RCTs), including cross-over trials, that compared different levels of added sugar intake. Exclusion criteria were: participants aged below 18 years; diabetes mellitus (type 1 and 2); and previous CVD. Primary outcomes were incident cardiovascular events (coronary, carotid, cerebral and peripheral arterial disease) and all-cause mortality. Secondary outcomes were changes in systolic and diastolic blood pressure, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglycerides, fasting plasma glucose and adverse events (gastrointestinal symptoms and impaired dental health).\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane.\nMain results\nWe included 21 RCTs (1110 participants completing the interventions) examining the effects of different levels of added sugar intake with a mean duration of 14 weeks. The study participants were generally described as healthy and the mean age ranged from 22 to 57 years.\nNo studies reported on cardiovascular events or all-cause mortality. There was minimal effect of low intake of added sugar on total cholesterol levels (MD 0.11, 95% CI 0.01 to 0.21; I² = 0%; 16 studies; 763 participants; low certainty of evidence) and triglycerides (MD 0.10, 95% CI 0.03 to 0.17; I² = 3%; 14 studies; 725 participants) but no evidence of effect on LDL-cholesterol and HDL-cholesterol. There was minimal effect on diastolic blood pressure (MD 1.52, 95% CI 0.67 to 2.37; I² = 0%; 13 studies; 873 participants) and on systolic blood pressure (MD 1.44, 95% 0.08 to 2.80; I² = 27%, 14 studies; 873 participants; low certainty of evidence), but no evidence of effect on fasting plasma glucose.\nOnly one study reported on dental health, with no events. No other trials reported adverse events (impaired dental health or gastrointestinal symptoms).\nAll results were judged as low-quality evidence according to GRADE. The risk of bias was generally unclear, five studies were classified at an overall low risk of bias (low risk in at least four domains, not including other bias).\nAuthors' conclusions\nNo trials investigating the effect of added sugar on cardiovascular events or all-cause mortality were identified in our searches. Evidence is uncertain whether low intake of added sugar has an effect on risk factors for CVD; the effect was small and the clinical relevance is, therefore, uncertain. 
Practical ways to achieve reductions in dietary added sugar includes following current dietary recommendations.\nFuture trials should have longer follow-up time and report on all-cause mortality and cardiovascular events in order to clarify the effect of added sugar on these outcomes. Future trials should also aim for more direct interventions and preferably be more independent of industry funding.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cardiovascular disease (CVD) is a group of disorders affecting the heart and blood vessels and the number one cause of death worldwide. It is important to detect modifiable risk factors and find strategies to prevent CVD. There are several established modifiable risk factors for developing CVD, one of them being eating an unhealthy diet rich in sugar. Sugar can be divided in two categories; sugars naturally occurring in food and sugars that are added to food. A high level of added sugar intake is suggested to cause weight gain and affect blood lipids, increasing the risk of CVD. This review assessed different levels of added sugars in the diet and the effect on cardiovascular events (e.g. heart attack or stroke), death, and CVD risk factors in healthy adults."
}
]
|
query-laysum | 3764 | [
{
"role": "user",
"content": "Abstract: Background\nHip fracture is a major injury that causes significant problems for affected individuals and their family and carers. Over 40% of people with hip fracture have dementia or cognitive impairment. The outcomes of these individuals after surgery are poorer than for those without dementia. It is unclear which care and rehabilitation interventions achieve the best outcomes for these people. This is an update of a Cochrane Review first published in 2013.\nObjectives\n(a) To assess the effectiveness of models of care including enhanced rehabilitation strategies designed specifically for people with dementia following hip fracture surgery compared to usual care.\n(b) To assess for people with dementia the effectiveness of models of care including enhanced rehabilitation strategies that are designed for all older people, regardless of cognitive status, following hip fracture surgery, compared to usual care.\nSearch methods\nWe searched ALOIS (www.medicine.ox.ac.uk/alois), the Cochrane Dementia and Cognitive Improvement Group Specialised Register, MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP), CINAHL (EBSCOhost), Web of Science Core Collection (ISI Web of Science), LILACS (BIREME), ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform on 16 October 2019.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials evaluating the effectiveness of any model of enhanced care and rehabilitation for people with dementia after hip fracture surgery compared to usual care.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted data. We assessed risk of bias of the included trials. We synthesised data only if we considered the trials to be sufficiently homogeneous in terms of participants, interventions, and outcomes. We used the GRADE approach to rate the overall certainty of evidence for each outcome.\nMain results\nWe included seven trials with a total of 555 participants. Three trials compared models of enhanced care in the inpatient setting with conventional care. Two trials compared an enhanced care model provided in inpatient settings and at home after discharge with conventional care. Two trials compared geriatrician-led care in-hospital to conventional care led by the orthopaedic team. None of the interventions were designed specifically for people with dementia, therefore the data included in the review were from subgroups of people with dementia or cognitive impairment participating in randomised controlled trials investigating models of care for all older people following hip fracture. The end of follow-up in the trials ranged from the point of acute hospital discharge to 24 months after discharge.\nWe considered all trials to be at high risk of bias in more than one domain. As subgroups of larger trials, the analyses lacked power to detect differences between the intervention groups. Furthermore, there were some important differences in baseline characteristics of participants between the experimental and control groups. Using the GRADE approach, we downgraded the certainty of the evidence for all outcomes to low or very low.\nThe effect estimates for almost all comparisons were very imprecise, and the overall certainty for most results was very low. There were no data from any study for our primary outcome of health-related quality of life. 
There was only very low certainty for our other primary outcome, activities of daily living and functional performance, therefore we were unable to draw any conclusions with confidence. There was low-certainty that enhanced care and rehabilitation in-hospital may reduce rates of postoperative delirium (odds ratio 0.04, 95% confidence interval (CI) 0.01 to 0.22, 2 trials, n = 141) and very low-certainty associating it with lower rates of some other complications. There was also low-certainty that, compared to orthopaedic-led management, geriatrician-led management may lead to shorter hospital stays (mean difference 4.00 days, 95% CI 3.61 to 4.39, 1 trial, n = 162).\nAuthors' conclusions\nWe found limited evidence that some of the models of enhanced rehabilitation and care used in the included trials may show benefits over usual care for preventing delirium and reducing length of stay for people with dementia who have been treated for hip fracture. However, the certainty of these results is low. Data were available from only a small number of trials, and the certainty for all other results is very low. Determining the optimal strategies to improve outcomes for this growing population of patients should be a research priority.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with dementia who receive enhanced care and rehabilitation in hospital after a hip fracture may be less likely to develop delirium. When care is led by a geriatrician, they may have stays in hospital that are three to four days shorter than if care is led by an orthopaedic surgeon. There was no information on the effect of any of the care models on quality of life, and we could not be certain about their effects on other important outcomes such as an individual's ability to manage their daily activities, regaining mobility, cognitive function, pain, death rates, or the likelihood of the person returning to the same place they had been living before the fracture."
}
]
|
query-laysum | 3765 | [
{
"role": "user",
"content": "Abstract: Background\nThe hypertensive disorders of pregnancy include pre-eclampsia, gestational hypertension, chronic hypertension, and undefined hypertension. Pre-eclampsia is considerably more prevalent in low-income than in high-income countries. One possible explanation for this discrepancy is dietary differences, particularly calcium deficiency. Calcium supplementation in the second half of pregnancy reduces the serious consequences of pre-eclampsia, but has limited effect on the overall risk of pre-eclampsia. It is important to establish whether calcium supplementation before, and in early pregnancy (before 20 weeks' gestation) has added benefit. Such evidence could count towards justification of population-level interventions to improve dietary calcium intake, including fortification of staple foods with calcium, especially in contexts where dietary calcium intake is known to be inadequate. This is an update of a review first published in 2017.\nObjectives\nTo determine the effect of calcium supplementation, given before or early in pregnancy and for at least the first half of pregnancy, on pre-eclampsia and other hypertensive disorders, maternal morbidity and mortality, and fetal and neonatal outcomes.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Trials Register (31 July 2018), PubMed (13 July 2018), ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP; 31 July 2018), and reference lists of retrieved studies.\nSelection criteria\nEligible studies were randomised controlled trials (RCT) of calcium supplementation, including women not yet pregnant, or women in early pregnancy. Cluster-RCTs, quasi-RCTs, and trials published as abstracts were eligible, but we did not identify any.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data, and checked them for accuracy. They assessed the quality of the evidence for key outcomes using the GRADE approach.\nMain results\nCalcium versus placebo\nWe included one study (1355 women), which took place across multiple hospital sites in Argentina, South Africa, and Zimbabwe. Most analyses were conducted only on 633 women from this group who were known to have conceived, or on 579 who reached 20 weeks' gestation; the trial was at moderate risk of bias due to high attrition rates pre-conception. Non-pregnant women with previous pre-eclampsia received either calcium 500 mg daily or placebo, from enrolment until 20 weeks' gestation. All participants received calcium 1.5 g daily from 20 weeks until birth.\nPrimary outcomes: calcium supplementation commencing before conception may make little or no difference to the risk of pre-eclampsia (69/296 versus 82/283, risk ratio (RR) 0.80, 95% confidence interval (CI) 0.61 to 1.06; low-quality evidence). For pre-eclampsia or pregnancy loss or stillbirth (or both) at any gestational age, calcium may slightly reduce the risk of this composite outcome, however the 95% CI met the line of no effect (RR 0.82, 95% CI 0.66 to 1.00; low-quality evidence). 
Supplementation may make little or no difference to the severe maternal morbidity and mortality index (RR 0.93, 95% CI 0.68 to 1.26; low-quality evidence), pregnancy loss or stillbirth at any gestational age (RR 0.83, 95% CI 0.61 to 1,14; low-quality evidence), or caesarean section (RR 1.11, 95% CI 0.96 to 1,28; low-quality evidence).\nCalcium supplementation may make little or no difference to the following secondary outcomes: birthweight < 2500 g (RR 1.00, 95% CI 0.76 to 1.30; low-quality evidence), preterm birth < 37 weeks (RR 0.90, 95% CI 0.74 to 1.10), early preterm birth < 32 weeks (RR 0.79, 95% CI 0.56 to 1.12), and pregnancy loss, stillbirth or neonatal death before discharge (RR 0.82, 95% CI 0.61 to 1.10; low-quality evidence), no conception, gestational hypertension, gestational proteinuria, severe gestational hypertension, severe pre-eclampsia, severe pre-eclamptic complications index. There was no clear evidence on whether or not calcium might make a difference to perinatal death, or neonatal intensive care unit admission for > 24h, or both (RR 1.11, 95% CI 0.77 to 1.60; low-quality evidence).\nIt is unclear what impact calcium supplementation has on Apgar score < 7 at five minutes (RR 0.43, 95% CI 0.15 to 1.21; very low-quality evidence), stillbirth, early onset pre-eclampsia, eclampsia, placental abruption, intensive care unit admission > 24 hours, maternal death, hospital stay > 7 days from birth, and pregnancy loss before 20 weeks' gestation.\nAuthors' conclusions\nThe single included study suggested that calcium supplementation before and early in pregnancy may reduce the risk of women experiencing the composite outcome pre-eclampsia or pregnancy loss at any gestational age, but the results are inconclusive for all other outcomes for women and babies. Therefore, current evidence neither supports nor refutes the routine use of calcium supplementation before conception and in early pregnancy.\nTo determine the overall benefit of calcium supplementation commenced before or in early pregnancy, the effects found in the study of calcium supplementation limited to the first half of pregnancy need to be added to the known benefits of calcium supplementation in the second half of pregnancy.\nFurther research is needed to confirm whether initiating calcium supplementation pre- or in early pregnancy is associated with a reduction in adverse pregnancy outcomes for mother and baby. Research could also address the acceptability of the intervention to women, which was not covered by this review update.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Women can develop high blood pressure and have protein in their urine after the twentieth week of pregnancy; this condition is known as pre-eclampsia. Many women, particularly those in low-income countries, do not have enough calcium in their diets. Giving these women extra calcium during the second half of pregnancy has been shown to reduce their risk of having high blood pressure and protein in the urine, and other related problems, such as convulsions, stroke, blood-clotting problems, fluid in the lungs, kidney failure, or even death. It is important to know if taking extra calcium before pregnancy and in early pregnancy can reduce the number of women who develop blood pressure problems during pregnancy, and related complications.\nWe searched for randomised controlled studies that looked at the effect of taking extra calcium before or early in pregnancy on the number of women who developed pre-eclampsia."
}
]
|
query-laysum | 3766 | [
{
"role": "user",
"content": "Abstract: Background\nThe setting in which induction of labour takes place (home or inpatient) is likely to have implications for safety, women's experiences and costs.\nHome induction may be started at home with the subsequent active phase of labour happening either at home or in a healthcare facility (hospital, birth centre, midwifery-led unit). More commonly, home induction starts in a healthcare facility, then the woman goes home to await the start of labour. Inpatient induction takes place in a healthcare facility where the woman stays while awaiting the start of labour.\nObjectives\nTo assess the effects on neonatal and maternal outcomes of third trimester home induction of labour compared with inpatient induction using the same method of induction.\nSearch methods\nFor this update, we searched the Cochrane Pregnancy and Childbirth Group's Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (31 January 2020)), and reference lists of retrieved studies.\nSelection criteria\nPublished and unpublished randomised controlled trials (RCTs) in which home and inpatient settings for induction have been compared. We included conference abstracts but excluded quasi-randomised trials and cross-over studies.\nData collection and analysis\nTwo review authors independently assessed study reports for inclusion. Two review authors carried out data extraction and assessment of risk of bias independently. GRADE assessments were checked by a third review author.\nMain results\nWe included seven RCTs, six of which provided data on 1610 women and their babies. Studies were undertaken between 1998 and 2015, and all were in high- or upper-middle income countries. Most women were induced for post dates. Three studies reported government funding, one reported no funding and three did not report on their funding source. Most GRADE assessments gave very low-certainty evidence, downgrading mostly for high risk of bias and serious imprecision.\n1. Home compared to inpatient induction with vaginal prostaglandin E (PGE) (two RCTs, 1028 women and babies; 1022 providing data).\nAlthough women's satisfaction may be slightly better in home settings, the evidence is very uncertain (mean difference (MD) 0.16, 95% confidence interval (CI) -0.02 to 0.34, 1 study, 399 women), very low-certainty evidence.\nThere may be little or no difference between home and inpatient induction for other primary outcomes, with all evidence being very low certainty:\n- spontaneous vaginal birth (average risk ratio (RR) [aRR] 0.91, 95% CI 0.69 to 1.21, 2 studies, 1022 women, random-effects method);\n- uterine hyperstimulation (RR 1.19, 95% CI 0.40 to 3.50, 1 study, 821 women);\n- caesarean birth (RR 1.01, 95% CI 0.81 to 1.28, 2 studies, 1022 women);\n- neonatal infection (RR 1.29, 95% CI 0.59 to 2.82, 1 study, 821 babies);\n- admission to neonatal intensive care unit (NICU) (RR 1.20, 95% CI 0.50 to 2.90, 2 studies, 1022 babies).\nStudies did not report serious neonatal morbidity or mortality.\n2. 
Home compared to inpatient induction with controlled release PGE (one RCT, 299 women and babies providing data).\nThere was no information on whether the questionnaire on women's satisfaction with care used a validated instrument, but the findings presented showed no overall difference in scores.\nWe found little or no difference between the groups for other primary outcomes, all also being very low-certainty evidence:\n- spontaneous vaginal birth (RR 0.94, 95% CI 0.77 to 1.14, 1 study, 299 women);\n- uterine hyperstimulation (RR 1.01, 95% CI 0.51 to 1.98, 1 study, 299 women);\n- caesarean births (RR 0.95, 95% CI 0.64 to 1.42, 1 study, 299 women);\n- admission to NICU (RR 1.38, 0.57 to 3.34, 1 study, 299 babies).\nThe study did not report on neonatal infection nor serious neonatal morbidity or mortality.\n3. Home compared to inpatient induction with balloon or Foley catheter (four RCTs; three studies, 289 women and babies providing data).\nIt was again unclear whether questionnaires reporting women's experiences/satisfaction with care were validated instruments, with one study (48 women, 69% response rate) finding women were similarly satisfied.\nHome inductions may reduce the number of caesarean births, but the data are also compatible with a slight increase and are of very low-certainty (RR 0.64, 95% CI 0.41 to 1.01, 2 studies, 159 women).\nThere was little or no difference between the groups for other primary outcomes with all being very low-certainty evidence:\n- spontaneous vaginal birth (RR 1.04, 95% CI 0.54 to 1.98, 1 study, 48 women):\n- uterine hyperstimulation (RR 0.45, 95% CI 0.03 to 6.79, 1 study, 48 women);\n- admission to NICU (RR 0.37, 95% CI 0.07 to 1.86, 2 studies, 159 babies).\nThere were no serious neonatal infections nor serious neonatal morbidity or mortality in the one study (involving 48 babies) assessing these outcomes.\nAuthors' conclusions\nData on the effectiveness, safety and women's experiences of home versus inpatient induction of labour are limited and of very low-certainty. Given that serious adverse events are likely to be extremely rare, the safety data are more likely to come from very large observational cohort studies rather than relatively small RCTs.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Induction of labour towards the end of pregnancy involves artificially bringing on contractions to start labour. There are risks for mother and baby from induction, but sometimes these are outweighed by the risks of continuing the pregnancy.\nHowever, induction can be a challenging experience for women as they may feel uncomfortable, unsupported and a lack of control. The use of home induction of labour may improve women’s experiences, reduce the length of stay in hospital and lower overall costs. The safety of both the mother and baby are critical factors for consideration. Only certain forms of induction are considered suitable for home induction, for example, vaginal prostaglandins or balloon/Foley catheters."
}
]
|
query-laysum | 3767 | [
{
"role": "user",
"content": "Abstract: Background\nGlioblastoma is an aggressive form of brain cancer. Approximately five in 100 people with glioblastoma survive for five years past diagnosis. Glioblastomas that have a particular modification to their DNA (called methylation) in a particular region (the O6-methylguanine–DNA methyltransferase (MGMT) promoter) respond better to treatment with chemotherapy using a drug called temozolomide.\nObjectives\nTo determine which method for assessing MGMT methylation status best predicts overall survival in people diagnosed with glioblastoma who are treated with temozolomide.\nSearch methods\nWe searched MEDLINE, Embase, BIOSIS, Web of Science Conference Proceedings Citation Index to December 2018, and examined reference lists. For economic evaluation studies, we additionally searched NHS Economic Evaluation Database (EED) up to December 2014.\nSelection criteria\nEligible studies were longitudinal (cohort) studies of adults with diagnosed glioblastoma treated with temozolomide with/without radiotherapy/surgery. Studies had to have related MGMT status in tumour tissue (assessed by one or more method) with overall survival and presented results as hazard ratios or with sufficient information (e.g. Kaplan-Meier curves) for us to estimate hazard ratios. We focused mainly on studies comparing two or more methods, and listed brief details of articles that examined a single method of measuring MGMT promoter methylation. We also sought economic evaluations conducted alongside trials, modelling studies and cost analysis.\nData collection and analysis\nTwo review authors independently undertook all steps of the identification and data extraction process for multiple-method studies. We assessed risk of bias and applicability using our own modified and extended version of the QUality In Prognosis Studies (QUIPS) tool. We compared different techniques, exact promoter regions (5'-cytosine-phosphate-guanine-3' (CpG) sites) and thresholds for interpretation within studies by examining hazard ratios. We performed meta-analyses for comparisons of the three most commonly examined methods (immunohistochemistry (IHC), methylation-specific polymerase chain reaction (MSP) and pyrosequencing (PSQ)), with ratios of hazard ratios (RHR), using an imputed value of the correlation between results based on the same individuals.\nMain results\nWe included 32 independent cohorts involving 3474 people that compared two or more methods. We found evidence that MSP (CpG sites 76 to 80 and 84 to 87) is more prognostic than IHC for MGMT protein at varying thresholds (RHR 1.31, 95% confidence interval (CI) 1.01 to 1.71). We also found evidence that PSQ is more prognostic than IHC for MGMT protein at various thresholds (RHR 1.36, 95% CI 1.01 to 1.84). The data suggest that PSQ (mainly at CpG sites 74 to 78, using various thresholds) is slightly more prognostic than MSP at sites 76 to 80 and 84 to 87 (RHR 1.14, 95% CI 0.87 to 1.48). Many variants of PSQ have been compared, although we did not see any strong and consistent messages from the results. Targeting multiple CpG sites is likely to be more prognostic than targeting just one. In addition, we identified and summarised 190 articles describing a single method for measuring MGMT promoter methylation status.\nAuthors' conclusions\nPSQ and MSP appear more prognostic for overall survival than IHC. Strong evidence is not available to draw conclusions with confidence about the best CpG sites or thresholds for quantitative methods. 
MSP has been studied mainly for CpG sites 76 to 80 and 84 to 87 and PSQ at CpG sites ranging from 72 to 95. A threshold of 9% for CpG sites 74 to 78 performed better than higher thresholds of 28% or 29% in two of three good-quality studies making such comparisons.\n\nGiven the provided abstract, please respond to the following query: \"What are the implications of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our review indicates both methylation-specific PCR and pyrosequencing provide better predictions of survival than immunohistochemistry. There is some evidence that pyrosequencing may be better than methylation-specific PCR at predicting overall survival, depending on the DNA targets and cut-off thresholds used. We documented the most frequent DNA targets used in methylation-specific PCR and pyrosequencing. We described cut-off thresholds used in pyrosequencing, although it is unclear which of these is best."
}
]
|
query-laysum | 3768 | [
{
"role": "user",
"content": "Abstract: Background\nHeavy menstrual bleeding without an organic lesion is mainly due to an imbalance of the various hormones which have a regulatory effect on the menstrual cycle. Another cause of heavy menstrual bleeding with no pelvic pathology, is the presence of an acquired or inherited bleeding disorder. The haemostatic system has a central role in controlling the amount and the duration of menstrual bleeding, thus abnormally prolonged or profuse bleeding does occur in most women affected by bleeding disorders. Whereas irregular, pre-menarchal or post-menopausal uterine bleeding is unusual in inherited or acquired haemorrhagic disorders, severe acute bleeding and heavy menstrual bleeding at menarche and chronic heavy menstrual bleeding during the entire reproductive life are common. This is an update of a previously published Cochrane Review.\nObjectives\nTo determine the efficacy and safety of non-surgical interventions versus each other, placebo or no treatment for reducing menstrual blood loss in women with bleeding disorders.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Haemoglobinopathies Trials Register (25 August 2016), Embase (May 2013), LILACS (February 2013) and the WHO International Clinical Trial registry (February 2013).\nSelection criteria\nRandomised controlled studies of non-surgical interventions for treating heavy menstrual bleeding (menorrhagia) in women of reproductive age suffering from a congenital or acquired bleeding disorder.\nData collection and analysis\nTwo authors independently assessed studies for inclusion, extracted data and assessed the risk of bias.\nMain results\nThree cross-over studies, with 175 women were included in the review. All three studies had an unclear risk of bias with regards to trial design and overall, the quality of evidence generated was judged to be poor.\nTwo of the studies (n = 59) compared desmopressin (1-deamino-8-D-arginine vasopressin) with placebo. Menstrual blood loss was the primary outcome for both of these studies. Neither study found clear evidence of a difference between groups. The first of these reported a mean difference in menstrual blood loss in the desmopressin versus placebo group of 21.20 mL (95% confidence interval -19.00 to 61.50)\nThe second study reported that even though there was an improvement of pictorial bleeding assessment chart scores with desmopressin and placebo when compared to pretreatment assessment, there was no clear evidence of difference in these scores when the two were compared to each other (results presented graphically, P = 0.51). The data from these studies could not be combined.\nThe third study (n = 116) compared desmopressin with tranexamic acid (n = 116). This study found a decrease in pictorial bleeding assessment chart scores after both treatments as compared to baseline. The decrease in these scores was greater for tranexamic acid than for desmopressin, with a mean difference of 41.6 mL (95% confidence interval 19.6 to 63) (P < 0.0002).\nIn relation to adverse events, across two studies, there was no clear evidence of a difference when placebo was compared to desmopressin, risk ratio 1.17 (95% confidence interval 0.41 to 3.34) . The same was also true when desmopressin was compared to tranexamic acid, risk ratio 1.17 (95% confidence interval 0.41 to 3.34).\nOnly the study that compared desmopressin to tranexamic acid assessed quality of life. 
However, we are unable to present any data from this study, since no differences in this outcome between the two intervention groups were reported.\nAuthors' conclusions\nEvidence from randomised controlled studies on the effect of desmopressin when compared to placebo in reducing menstrual blood loss is very limited and inconclusive. Two studies, each with a very limited number of participants, have shown uncertain effects in menstrual blood loss and adverse effects. A non-randomised comparison in one of the studies points to the value of combining desmopressin and tranexamic acid, which needs to be tested in a formal randomised controlled study comparison.\nWhen tranexamic acid was compared to desmopressin, a single study showed a reduction in menstrual blood loss with tranexamic acid use compared to desmopressin.\nThere is a need to evaluate non-surgical methods for treating of menorrhagia in women with bleeding disorders through randomised controlled studies. Such methods would be more acceptable than surgery for women wishing to retain their fertility. Given that women may need to use these treatments throughout their entire reproductive life, long-term side-effects should be evaluated.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included three studies on non-surgical treatments in 175 women with a bleeding disorder who were experiencing heavy menstrual bleeding. Twostudies compared desmopressin to placebo and one study compared desmopressin to tranexamic acid. The women included in the studies were selected for one treatment or the other randomly. The studies lasted from two to four months."
}
]
|
query-laysum | 3769 | [
{
"role": "user",
"content": "Abstract: Background\nShock wave therapy has seen widespread use since the 1990s to treat various musculoskeletal disorders including rotator cuff disease, but evidence of its efficacy remains equivocal.\nObjectives\nTo determine the benefits and harms of shock wave therapy for rotator cuff disease, with or without calcification, and to establish its usefulness in the context of other available treatment options.\nSearch methods\nWe searched Ovid MEDLINE, Ovid Embase, CENTRAL, ClinicalTrials.gov and the WHO ICTRP up to November 2019, with no restrictions on language. We reviewed the reference lists of retrieved trials to identify potentially relevant trials.\nSelection criteria\nWe included randomised controlled trials (RCTs) and controlled clinical trials (CCTs) that used quasi-randomised methods to allocate participants, investigating participants with rotator cuff disease with or without calcific deposits. We included trials of comparisons of extracorporeal or radial shock wave therapy versus any other intervention. Major outcomes were pain relief greater than 30%, mean pain score, function, patient-reported global assessment of treatment success, quality of life, number of participants experiencing adverse events and number of withdrawals due to adverse events.\nData collection and analysis\nTwo review authors independently selected studies for inclusion, extracted data and assessed the certainty of evidence using GRADE. The primary comparison was shock wave therapy compared to placebo.\nMain results\nThirty-two trials (2281 participants) met our inclusion criteria. Most trials (25) included participants with rotator cuff disease and calcific deposits, five trials included participants with rotator cuff disease and no calcific deposits, and two trials included a mixed population of participants with and without calcific deposits.\nTwelve trials compared shock wave therapy to placebo, 11 trials compared high-dose shock wave therapy (0.2 mJ/mm² to 0.4 mJ/mm² and above) to low-dose shock wave therapy. Single trials compared shock wave therapy to ultrasound-guided glucocorticoid needling, ultrasound-guided hyaluronic acid injection, transcutaneous electric nerve stimulation (TENS), no treatment or exercise; dual session shock wave therapy to single session therapy; and different delivery methods of shock wave therapy. Our main comparison was shock wave therapy versus placebo and results are reported for the 3 month follow up.\nAll trials were susceptible to bias; including selection (74%), performance (62%), detection (62%), and selective reporting (45%) biases.\nNo trial measured participant-reported pain relief of 30%. However, in one trial (74 participants), at 3 months follow up, 14/34 participants reported pain relief of 50% or greater with shock wave therapy compared with 15/40 with placebo (risk ratio (RR) 1.10, 95% confidence interval (CI) 0.62 to 1.94); low-quality evidence (downgraded for bias and imprecision). Mean pain (0 to 10 scale, higher scores indicate more pain) was 3.02 points in the placebo group and 0.78 points better (0.17 better to 1.4 better; clinically important change was 1.5 points) with shock wave therapy (9 trials, 608 participants), moderate-quality evidence (downgraded for bias). 
Mean function (scale 0 to 100, higher scores indicate better function) was 66 points with placebo and 7.9 points better (1.6 better to 14 better, clinically important difference 10 points) with shock wave therapy (9 trials, 612 participants), moderate-quality evidence (downgraded for bias). Participant-reported success was reported by 58/150 people in shock wave therapy group compared with 35/137 people in placebo group (RR 1.59, 95% CI 0.87 to 2.91; 6 trials, 287 participants), low-quality evidence (downgraded for bias and imprecision). None of the trials measured quality of life.\nWithdrawal rate or adverse event rates may not differ between extracorporeal shock wave therapy and placebo, but we are uncertain due to the small number of events. There were 11/34 withdrawals in the extracorporeal shock wave therapy group compared with 13/40 withdrawals in the placebo group (RR 0.75, 95% CI 0.43 to 1.31; 7 trials, 581 participants) low-quality evidence (downgraded for bias and imprecision); and 41/156 adverse events with extracorporeal shock wave therapy compared with 10/139 adverse events in the placebo group (RR 3.61, 95% CI 2.00 to 6.52; 5 trials, 295 participants) low-quality evidence (downgraded for bias and imprecision).\nSubgroup analyses indicated that there were no between-group differences in pain and function outcomes in participants who did or did not have calcific deposits in the rotator cuff.\nAuthors' conclusions\nBased upon the currently available low- to moderate-certainty evidence, there were very few clinically important benefits of shock wave therapy, and uncertainty regarding its safety. Wide clinical diversity and varying treatment protocols means that we do not know whether or not some trials tested subtherapeutic doses, possibly underestimating any potential benefits.\nFurther trials of extracorporeal shock wave therapy for rotator cuff disease should be based upon a strong rationale and consideration of whether or not they would alter the conclusions of this review. A standard dose and treatment protocol should be decided upon before further research is conducted. Development of a core set of outcomes for trials of rotator cuff disease and other shoulder disorders would also facilitate our ability to synthesise the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Key results for the primary comparison, shock wave therapy versus placebo\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Participant-reported pain relief of 50% or greater (one trial):\n• four more people out of 100 reported pain relief of 50% or more (ranging from 19 fewer to 26 more).\n42 out of 100 people reported pain relief of 50% or greater with shock wave therapy compared with 38 out of 100 with placebo.\nPain (higher scores mean more pain) (nine trials):\n• Improved pain by 8% (ranging from 2% better to 14% better) or 0.78 points better (ranging from 0.17 better to 1.4 better) on a 0- to 10-point scale.\nPeople who had shock wave therapy rated their pain as 2.2 points and people who had placebo rated their pain as 3 points.\nFunction (ability to use the shoulder; higher scores meanbetter function) (nine trials):\n• Improved by 8% (ranging from 1.6% to 14%) or 8 points better (ranging from 1.6 better to 14 better) on a 0- to 100-point scale.\nPeople who had shock wave therapy rated their function as 74 points and people who had placebo rated their function as 66 points.\nParticipant-reported success (six trials):\n• 15% (ranging from 3% fewer to 49% more) more people reported their treatment a success.\n41 out of 100 people reported treatment success with shock wave therapy and 26 out of 100 people reported treatment success with placebo.\nWithdrawals due to side effects (seven trials):\n• 3% fewer (ranging from 6% fewer to 3% more) people withdrew from treatment due to side effects.\n8 out of 100 people withdrew from treatment with shock wave therapy and 10 out of 100 people withdrew from the placebo group.\nSide effects (five trials):\n• 19% more people reported side effects (ranging from 7% more to 40% more):\n26 out of 100 people had a side effect with shock wave therapy and seven out of 100 people had a side effect with placebo."
}
]
|
query-laysum | 3770 | [
{
"role": "user",
"content": "Abstract: Background\nClinical guidelines differ regarding their recommended blood glucose targets for patients with type 1 diabetes and recent studies on patients with type 2 diabetes suggest that aiming at very low targets can increase the risk of mortality.\nObjectives\nTo assess the effects of intensive versus conventional glycaemic targets in patients with type 1 diabetes in terms of long-term complications and determine whether very low, near normoglycaemic values are of additional benefit.\nSearch methods\nA systematic literature search was performed in the databases The Cochrane Library, MEDLINE and EMBASE. The date of the last search was December 2012 for all databases.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that had defined different glycaemic targets in the treatment arms, studied patients with type 1 diabetes, and had a follow-up duration of at least one year.\nData collection and analysis\nTwo review authors independently extracted data, assessed studies for risk of bias, with differences resolved by consensus. Overall study quality was evaluated by the 'Grading of Recommendations Assessment, Development, and Evaluation' (GRADE) system. Random-effects models were used for the main analyses and the results are presented as risk ratios (RR) with 95% confidence intervals (CI) for dichotomous outcomes.\nMain results\nWe identified 12 trials that fulfilled the inclusion criteria, including a total of 2230 patients. The patient populations varied widely across studies with one study only including children, one study only including patients after a kidney transplant, one study with newly diagnosed adult patients, and several studies where patients had retinopathy or microalbuminuria at baseline. The mean follow-up duration across studies varied between one and 6.5 years. The majority of the studies were carried out in the 1980s and all trials took place in Europe or North America. Due to the nature of the intervention, none of the studies could be carried out in a blinded fashion so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the studies. Fifty per cent of the studies were judged to have a high risk of bias in at least one other category.\nUnder intensive glucose control, the risk of developing microvascular complications was reduced compared to conventional treatment for a) retinopathy: 23/371 (6.2%) versus 92/397 (23.2%); RR 0.27 (95% CI 0.18 to 0.42); P < 0.00001; 768 participants; 2 trials; high quality evidence; b) nephropathy: 119/732 (16.3%) versus 211/743 (28.4%); RR 0.56 (95% CI 0.46 to 0.68); P < 0.00001; 1475 participants; 3 trials; moderate quality evidence; c) neuropathy: 29/586 (4.9%) versus 86/617 (13.9%); RR 0.35 (95% CI 0.23 to 0.53); P < 0.00001; 1203 participants; 3 trials; high quality evidence. Regarding the progression of these complications after manifestation, the effect was weaker (retinopathy) or possibly not existent (nephropathy: RR 0.79 (95% CI 0.37 to 1.70); P = 0.55; 179 participants with microalbuminuria; 3 trials; very low quality evidence); no adequate data were available regarding the progression of neuropathy. 
For retinopathy, intensive glucose control reduced the risk of progression in studies with a follow-up duration of at least two years (85/366 (23.2%) versus 154/398 (38.7%); RR 0.61 (95% CI 0.49 to 0.76); P < 0.0001; 764 participants; 2 trials; moderate quality evidence), while we found evidence for an initial worsening of retinopathy after only one year of intensive glucose control (17/49 (34.7%) versus 7/47 (14.9%); RR 2.32 (95% CI 1.16 to 4.63); P = 0.02; 96 participants; 2 trials; low quality evidence).\nMajor macrovascular outcomes (stroke and myocardial infarction) occurred very rarely, and no firm evidence could be established regarding these outcome measures (low quality evidence).\nWe found that intensive glucose control increased the risk for severe hypoglycaemia; however, the results were heterogeneous and only the 'Diabetes Control and Complications Trial' (DCCT) showed a clear increase in severe hypoglycaemic episodes under intensive treatment. A subgroup analysis according to the baseline haemoglobin A1c (HbA1c) of participants in the trials (low quality evidence) suggests that the risk of hypoglycaemia is possibly only increased for patients who started with relatively low HbA1c values (< 9.0%). Several of the included studies also showed a greater weight gain under intensive glucose control, and the risk of ketoacidosis was only increased in studies using insulin pumps in the intensive treatment group (very low quality evidence).\nOverall, all-cause mortality was very low in all studies (moderate quality evidence) except in one study investigating renal allograft as treatment for end-stage diabetic nephropathy. Health-related quality of life was only reported in the DCCT trial, showing no statistically significant differences between the intervention and comparator groups (moderate quality evidence). In addition, only the DCCT published data on costs, indicating that intensive glucose control was highly cost-effective considering the reduction of potential diabetes complications (moderate quality evidence).\nAuthors' conclusions\nTight blood sugar control reduces the risk of developing microvascular diabetes complications. The evidence of benefit is mainly from studies in younger patients at early stages of the disease. Benefits need to be weighed against risks including severe hypoglycaemia, and patient training is an important aspect in practice. The effects of tight blood sugar control seem to become weaker once complications have been manifested. However, further research is needed on this issue. Furthermore, there is a lack of evidence from RCTs on the effects of tight blood sugar control in older patient populations or patients with macrovascular disease. There is no firm evidence for specific blood glucose targets and treatment goals need to be individualised taking into account age, disease progression, macrovascular risk, as well as the patient's lifestyle and disease management capabilities.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The primary objective of this review was to assess the positive and negative outcomes of tighter blood glucose control ('intensive' glucose control) compared to less intense treatment targets ('conventional' glucose control) in individuals with type 1 diabetes."
}
]
|
query-laysum | 3771 | [
{
"role": "user",
"content": "Abstract: Background\nMethylxanthines, including caffeine, theophylline, and aminophylline, work as stimulants of the respiratory drive, and decrease apnea of prematurity, a developmental disorder common in preterm infants. In particular, caffeine has been reported to improve important clinical outcomes, including bronchopulmonary dysplasia (BPD) and neurodevelopmental disability. However, there is uncertainty regarding the efficacy of caffeine compared to other methylxanthines.\nObjectives\nTo assess the effects of caffeine compared to aminophylline or theophylline in preterm infants at risk of apnea, with apnea, or in the peri-extubation phase.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, Epistemonikos, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and clinicaltrials.gov in February 2023. We also checked the reference lists of relevant articles to identify additional studies.\nSelection criteria\nStudies: randomized controlled trials (RCTs) and quasi-RCTs\nParticipants: infants born before 34 weeks of gestation for prevention and extubation trials, and infants born before 37 weeks of gestation for treatment trials\nIntervention and comparison: caffeine versus theophylline or caffeine versus aminophylline. We included all doses and duration of treatment.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We evaluated treatment effects using a fixed-effect model with risk ratio (RR), risk difference (RD), and 95% confidence intervals (CI) for categorical data, and mean, standard deviation, and mean difference for continuous data. We used the GRADE approach to evaluate the certainty of evidence.\nMain results\nWe included 22 trials enrolling 1776 preterm infants. The indication for treatment was prevention of apnea in three studies, treatment of apnea in 13 studies, and extubation management in three studies. In three studies, there were multiple indications for treatment, and in one study, the indication for treatment was unclear. In 19 included studies, the infants had a mean gestational age between 28 and 32 weeks and a mean birth weight between 1000 g and 1500 g. One study's participants had a mean gestational age of more than 32 weeks, and two studies had participants with a mean birth weight of 1500 g or more.\nCaffeine administrated for any indication may result in little to no difference in all-cause mortality prior to hospital discharge compared to other methylxanthines (RR 1.12, 95% CI 0.68 to 1.84; RD 0.02, 95% CI -0.05 to 0.08; 2 studies, 396 infants; low-certainty evidence). Only one study enrolling 79 infants reported components of the outcome moderate to severe neurodevelopmental disability at 18 to 26 months. The evidence is very uncertain about the effect of caffeine on cognitive developmental delay compared to other methylxanthines (RR 0.17, 95% CI 0.02 to 1.37; RD -0.12, 95% CI -0.24 to 0.01; 1 study, 79 infants; very low-certainty evidence). The evidence is very uncertain about the effect of caffeine on language developmental delay compared to other methylxanthines (RR 0.76, 95% CI 0.37 to 1.58; RD -0.07, 95% CI -0.27 to 0.12; 1 study, 79 infants; very low-certainty evidence). The evidence is very uncertain about the effect of caffeine on motor developmental delay compared to other methylxanthines (RR 0.50, 95% CI 0.13 to 1.96; RD -0.07, 95% CI -0.21 to 0.07; 1 study, 79 infants; very low-certainty evidence). 
The evidence is very uncertain about the effect of caffeine on visual and hearing impairment compared to other methylxanthines. At 24 months of age, visual impairment was seen in 8 out of 11 infants and 10 out of 11 infants in the caffeine and other methylxanthines groups, respectively. Hearing impairment was seen in 2 out of 5 infants and 1 out of 1 infant in the caffeine and other methylxanthines groups, respectively. No studies reported the outcomes cerebral palsy, gross motor disability, and mental development. Compared to other methylxanthines, caffeine may result in little to no difference in BPD/chronic lung disease, defined as 28 days of oxygen exposure at 36 weeks' postmenstrual age (RR 1.40, 95% CI 0.92 to 2.11; RD 0.04, 95% CI -0.01 to 0.09; 3 studies, 481 infants; low-certainty evidence). The evidence is very uncertain about the effect of caffeine on side effects (tachycardia, agitation, or feed intolerance) leading to a reduction in dose or withholding of methylxanthines compared to other methylxanthines (RR 0.17, 95% CI 0.02 to 1.32; RD -0.29, 95% CI -0.57 to -0.02; 1 study, 30 infants; very low-certainty evidence). Caffeine may result in little to no difference in duration of hospital stay compared to other methylxanthines (median (interquartile range): caffeine 43 days (27.5 to 61.5); other methylxanthines 39 days (28 to 55)). No studies reported the outcome seizures.\nAuthors' conclusions\nAlthough caffeine has been shown to improve important clinical outcomes, in the few studies that compared caffeine to other methylxanthines, there might be little to no difference in mortality, bronchopulmonary dysplasia, and duration of hospital stay. The evidence is very uncertain about the effect of caffeine compared to other methylxanthines on long-term development and side effects.\nAlthough caffeine or other methylxanthines are widely used in preterm infants, there is little direct evidence to support the choice of which methylxanthine to use. More research is needed, especially on extremely preterm infants born before 28 weeks of gestation. Data from four ongoing studies might provide more evidence on the effects of caffeine or other methylxanthines.\n\nGiven the provided abstract, please respond to the following query: \"What is apnea of prematurity?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Apnea of prematurity is whenbabies born too early (preterm babies) stop breathing for 20 seconds or longer during sleep. More than half of all preterm babies have apnea.Preterm babies, especially those born before 28 weeks in the womb (gestation), have a higher risk of death, lung disease, and brain impairment than those born at or near their due date. Some of these babies develop intellectual disabilities, blindness, or deafness."
}
]
|
query-laysum | 3772 | [
{
"role": "user",
"content": "Abstract: Background\nSoft tissue sarcomas (STS) are a highly heterogeneous group of rare malignant solid tumors. Nonrhabdomyosarcoma soft tissue sarcomas (NRSTS) comprise all STS except rhabdomyosarcoma. In people with advanced local or metastatic disease, autologous hematopoietic stem cell transplantation (HSCT) applied after high-dose chemotherapy (HDCT) is a planned rescue therapy for HDCT-related severe hematologic toxicity. The rationale for this update is to determine whether any randomized controlled trials (RCTs) have been conducted and to clarify whether HDCT followed by autologous HSCT has a survival advantage.\nObjectives\nTo assess the efficacy and safety of high-dose chemotherapy (HDCT) followed by autologous hematopoietic stem cell transplantation (HSCT) for all stages of nonrhabdomyosarcoma soft tissue sarcomas (NRSTS) in children and adults.\nSearch methods\nFor this update, we revised the search strategy to improve the precision and reduce the number of irrelevant hits. We searched the following electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8), PubMed from 2012 to 6 September 2016, and Embase from 2012 to 26 September 2016. We searched online trial registries and congress proceedings from 2012 to 26 September 2016.\nSelection criteria\nTerms representing STS and autologous HSCT were required in the title or abstract. We restricted the study design to RCTs. We included studies if at least 80% of participants had a diagnosis listed in any version of the World Health Organization (WHO) classification and classified as malignant. The search included children and adults with no age limits.\nData collection and analysis\nWe used standard methodologic procedures expected by Cochrane. The primary outcomes were overall survival and treatment-related mortality.\nMain results\nWe identified 1549 records; 85 items from electronic databases, 45 from study registries, and 1419 from congress proceedings. The revised search strategy did not identify any additional RCTs. In the previous version of the review, we identified one RCT comparing HDCT followed by autologous HSCT versus standard-dose chemotherapy (SDCT). The trial randomized 87 participants who were considerably heterogeneous with respect to 19 different tumor entities. The data from 83 participants were available for analysis.\nIn the single included trial, overall survival at three years was 32.7% in the HDCT arm versus 49.4% in the SDCT arm and there was no difference between the treatment groups (hazard ratio (HR) 1.26, 95% confidence interval (CI) 0.70 to 2.29, P = 0.44; 1 study, 83 participants; high quality evidence). In a subgroup of participants who had a complete response before HDCT, overall survival was higher in both treatment groups and overall survival at three years was 42.8% in the HDCT arm versus 83.9% in the SDCT arm and favored the SDCT group (HR 2.92, 95% CI 1.1 to 7.6, P = 0.028; 1 study, 39 participants).\nIn the single included trial, the authors reported one treatment-related leukemia death two years after HDCT. They also evaluated severe adverse events WHO grade 3 to 4 in 22 participants in the HDCT arm and in 51 participants in the SDCT arm. The authors reported 11 events concerning digestive-, infection-, pain-, or asthenia-related toxicity in the HDCT arm and one event in the SDCT arm (moderate quality evidence). The development of secondary neoplasia was not addressed. 
We judged the study to have an overall unclear risk of bias as three of seven items had unclear and four items had low risk of bias. For GRADE, we judged three items as high quality and three items were not reported.\nAuthors' conclusions\nThe limited data of a single RCT with an unclear risk of bias and moderate to high quality evidence showed no survival advantage for HDCT. If this treatment is offered it should only be given after careful consideration on an individual person basis and possibly only as part of a well-designed RCT.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to 6 September 2016. We found one RCT that compared 38 people in the high-dose chemotherapy and transplantation group versus 45 people in the chemotherapy-only group and was judged to have mainly a low risk of bias (as it was well designed). The participants were 18 to 65 years old, had various types of nonrhabdomyosarcoma soft tissue sarcomas and were monitored for about 55 months. The treatment period ranged from 2000 to 2008. The single RCT was funded by a nonprofit organization (the funder did not benefit if the trial found good results)."
}
]
|
query-laysum | 3773 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is one of six looking at the primary medical management options for patients with chronic rhinosinusitis.\nChronic rhinosinusitis is common and is characterised by inflammation of the lining of the nose and paranasal sinuses leading to nasal blockage, nasal discharge, facial pressure/pain and loss of sense of smell. The condition can occur with or without nasal polyps. Nasal saline irrigation is commonly used to improve patient symptoms.\nObjectives\nTo evaluate the effects of saline irrigation in patients with chronic rhinosinusitis.\nSearch methods\nThe Cochrane ENT Information Specialist searched the ENT Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 9); MEDLINE; EMBASE; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 30 October 2015.\nSelection criteria\nRandomised controlled trials (RCTs) with a follow-up period of at least three months comparing saline delivered to the nose by any means (douche, irrigation, drops, spray or nebuliser) with (a) placebo, (b) no treatment or (c) other pharmacological interventions.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were disease-specific health-related quality of life (HRQL), patient-reported disease severity and the commonest adverse event - epistaxis. Secondary outcomes included general HRQL, endoscopic nasal polyp score, computerised tomography (CT) scan score and the adverse events of local irritation and discomfort. We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nWe included two RCTs (116 adult participants). One compared large-volume (150 ml) hypertonic (2%) saline irrigation with usual treatment over a six-month period; the other compared 5 ml nebulised saline twice a day with intranasal corticosteroids, treating participants for three months and evaluating them on completion of treatment and three months later.\nLarge-volume, hypertonic nasal saline versus usual care\nOne trial included 76 adult participants (52 intervention, 24 control) with or without polyps.Disease-specific HRQL was reported using the Rhinosinusitis Disability Index (RSDI; 0 to 100, 100 = best quality of life). At the end of three months of treatment, patients in the saline group were better than those in the placebo group (mean difference (MD) 6.3 points, 95% confidence interval (CI) 0.89 to 11.71) and at six months there was a greater effect (MD 13.5 points, 95% CI 9.63 to 17.37). We assessed the evidence to be of low quality for the three months follow-up and very low quality for the six months follow-up.\nPatient-reported disease severity was evaluated using a \"single-item sinus symptom severity assessment\" but the range of scores is not stated, making it impossible for us to determine the meaning of the data presented.\nNo adverse effects data were collected in the control group but 23% of participants in the saline group experienced side effects including epistaxis.\nGeneral HRQL was measured using SF-12 (0 to 100, 100 = best quality of life). 
No difference was found after three months of treatment (low quality evidence) but at six months there was a small difference favouring the saline group, which may not be of clinical significance and has high uncertainty (MD 10.5 points, 95% CI 0.66 to 20.34) (very low quality evidence).\nLow-volume, nebulised saline versus intranasal corticosteroids\nOne trial included 40 adult participants with polyps. Our primary outcome of disease-specific HRQL was not reported. At the end of treatment (three months) the patients who had intranasal corticosteroids had less severe symptoms (MD -13.50, 95% CI -14.44 to -12.56); this corresponds to a large effect size. We assessed the evidence to be of very low quality.\nAuthors' conclusions\nThe two studies were very different in terms of included populations, interventions and comparisons and so it is therefore difficult to draw conclusions for practice. The evidence suggests that there is no benefit of a low-volume (5 ml) nebulised saline spray over intranasal steroids. There is some benefit of daily, large-volume (150 ml) saline irrigation with a hypertonic solution when compared with placebo, but the quality of the evidence is low for three months and very low for six months of treatment.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic rhinosinusitis is a common condition that is defined as inflammation of the nose and paranasal sinuses (a group of air-filled spaces behind the nose, eyes and cheeks). Patients with chronic rhinosinusitis experience at least two or more of the following symptoms for at least 12 weeks: blocked nose, discharge from their nose or runny nose, pain or pressure in their face and/or a reduced sense of smell (hyposmia). Some people will also have nasal polyps, which are grape-like swellings of the normal nasal lining inside the nasal passage and sinuses.\nNasal irrigation (also know as nasal douche, wash or lavage) is a procedure that rinses the nasal cavity with isotonic or hypertonic saline (salt water) solutions. The patient instils saline into one nostril and allows it to drain out of the other nostril, bathing the nasal cavity. Saline nasal irrigation can be performed with low positive pressure from a spray, pump or squirt bottle, with a nebuliser or with gravity-based pressure using a vessel with a nasal spout, such as a 'neti pot'. This therapy is available over the counter and is used as a standalone or add-on treatment by many patients with chronic rhinosinusitis."
}
]
|
query-laysum | 3774 | [
{
"role": "user",
"content": "Abstract: Background\nIn patients of various ages undergoing mechanical ventilation (MV), it has been observed that positions other than the standard supine position, such as the prone position, may improve respiratory parameters. The benefits of these positions have not been clearly defined for critically ill newborns receiving MV.\nThis is an update of a review first published in 2005 and last updated in 2013.\nObjectives\nPrimary objective\nTo assess the effects of different positioning of newborn infants receiving MV (supine vs prone, lateral decubitus or quarter turn from prone) in improving short-term respiratory outcomes.\nSecondary objective\nTo assess the effects of different positioning of newborn infants receiving MV on mortality and neuromotor and developmental outcomes over the long term, and on other complications of prematurity.\nSearch methods\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8), MEDLINE via PubMed (1966 to 22 August 2016), Embase (1980 to 22 August 2016) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 22 August 2016). We also searched clinical trials databases, conference proceedings and reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nRandomised and quasi-randomised clinical trials comparing different positions in newborns receiving mechanical ventilation.\nData collection and analysis\nThree unblinded review authors independently assessed trials for inclusion in the review and extracted study data. We used standard methodological procedures as expected by The Cochrane Collaboration and assessed the quality of the evidence using the GRADE approach. If the meta-analysis was not appropriate owing to substantial clinical heterogeneity between trials, we presented review findings in narrative format.\nMain results\nWe included in this review 19 trials involving 516 participants. Seven of the included studies (N = 222) had not been evaluated in the previous review. Investigators compared several positions: prone versus supine, prone alternant versus supine, prone versus lateral right, lateral right versus supine, lateral left versus supine, lateral alternant versus supine, lateral right versus lateral left, quarter turn from prone versus supine, quarter turn from prone versus prone and good lung dependent versus good lung uppermost.\nApart from two studies that compared lateral alternant versus supine, one comparing lateral right versus supine and two comparing prone or prone alternant versus the supine position, all included studies had a cross-over design. In five studies, infants were ventilated with continuous positive airway pressure (CPAP); in the other studies, infants were treated with conventional ventilation (CV).\nRisks of bias did not differ substantially for different comparisons and outcomes. This update detects a moderate to high grade of inconsistency, similar to previous versions. However, for the analysed outcomes, the direction of effect was the same in all studies. Therefore, we consider that this inconsistency had little effect on the conclusions of the meta-analysis. When comparing prone versus supine position, we observed an increase in arterial oxygen tension (PO2) in the prone position (mean difference (MD) 5.49 mmHg, 95% confidence interval (CI) 2.92 to 8.05 mmHg; three trials; 116 participants; I2= 0). 
When percent haemoglobin oxygen saturation was measured with pulse oximetry (SpO2), improvement in the prone position was between 1.13% and 3.24% (typical effect based on nine trials with 154 participants; I2= 89%). The subgroup ventilated with CPAP (three trials; 59 participants) showed a trend towards improving SpO2 in the prone position compared with the supine position, although the mean difference (1.91%) was not significant (95% CI -1.14 to 4.97) and heterogeneity was extreme (I2= 95%).\nSensitivity analyses restricted to studies with low risk of selection bias showed homogeneous results and verified a small but significant effect (MD 0.64, 95% CI 0.26 to 1.02; four trials; 92 participants; I2= 0).\nWe also noted a slight improvement in the number of episodes of desaturation; it was not possible to establish whether this effect continued once the intervention was stopped. Investigators studied few adverse effects from the interventions in sufficient detail. Two studies analysed tracheal cultures of neonates after five days on MV, reporting lower bacterial colonisation in the alternating lateral position than in the supine posture. Other effects - positive or negative - cannot be excluded in light of the relatively small numbers of neonates studied.\nAuthors' conclusions\nThis update of our last review in 2013 supports previous conclusions. Evidence of low to moderate quality favours the prone position for slightly improved oxygenation in neonates undergoing mechanical ventilation. However, we found no evidence to suggest that particular body positions during mechanical ventilation of the neonate are effective in producing sustained and clinically relevant improvement.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date as of August 2016."
}
]
|
query-laysum | 3775 | [
{
"role": "user",
"content": "Abstract: Background\nAcute respiratory distress syndrome (ARDS) is a significant cause of hospitalisation and death in young children. Positioning and mechanical ventilation have been regularly used to reduce respiratory distress and improve oxygenation in hospitalised patients. Due to the association of prone positioning (lying on the abdomen) with sudden infant death syndrome (SIDS) within the first six months, it is recommended that young infants be placed on their back (supine). However, prone positioning may be a non-invasive way of increasing oxygenation in individuals with acute respiratory distress, and offers a more significant survival advantage in those who are mechanically ventilated. There are substantial differences in respiratory mechanics between adults and infants. While the respiratory tract undergoes significant development within the first two years of life, differences in airway physiology between adults and children become less prominent by six to eight years old. However, there is a reduced risk of SIDS during artificial ventilation in hospitalised infants. Thus, an updated review focusing on positioning for infants and young children with ARDS is warranted. This is an update of a review published in 2005, 2009, and 2012.\nObjectives\nTo compare the effects of different body positions in hospitalised infants and children with acute respiratory distress syndrome aged between four weeks and 16 years.\nSearch methods\nWe searched CENTRAL, which contains the Acute Respiratory Infections Group's Specialised Register, MEDLINE, Embase, and CINAHL from January 2004 to July 2021.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs comparing two or more positions for the management of infants and children hospitalised with ARDS.\nData collection and analysis\nTwo review authors independently extracted data from each study. We resolved differences by consensus, or referred to a third contributor to arbitrate. We analysed bivariate outcomes using an odds ratio (OR) and 95% confidence interval (CI). We analysed continuous outcomes using a mean difference (MD) and 95% CI. We used a fixed-effect model, unless heterogeneity was significant (I2 statistic > 50%), when we used a random-effects model.\nMain results\nWe included six trials: four cross-over trials, and two parallel randomised trials, with 198 participants aged between 4 weeks and 16 years, all but 15 of whom were mechanically ventilated. Four trials compared prone to supine positions. One trial compared the prone position to good-lung dependent (where the person lies on the side of the healthy lung, e.g. if the right lung was healthy, they were made to lie on the right side), and independent (or non-good-lung independent, where the person lies on the opposite side to the healthy lung, e.g. if the right lung was healthy, they were made to lie on the left side) position. One trial compared good-lung independent to good-lung dependent positions.\nWhen the prone (with ventilators) and supine positions were compared, there was no information on episodes of apnoea or mortality due to respiratory events. 
There was no conclusive result in oxygen saturation (SaO2; MD 0.40 mmHg, 95% CI -1.22 to 2.66; 1 trial, 30 participants; very low certainty evidence); blood gases, PCO2 (MD 3.0 mmHg, 95% CI -1.93 to 7.93; 1 trial, 99 participants; low certainty evidence), or PO2 (MD 2 mmHg, 95% CI -5.29 to 9.29; 1 trial, 99 participants; low certainty evidence); or lung function (PaO2/FiO2 ratio; MD 28.16 mmHg, 95% CI -9.92 to 66.24; 2 trials, 121 participants; very low certainty evidence). However, there was an improvement in oxygenation index (FiO2% X MPAW/ PaO2) with prone positioning in both the parallel trials (MD -2.42, 95% CI -3.60 to -1.25; 2 trials, 121 participants; very low certainty evidence), and the cross-over study (MD -8.13, 95% CI -15.01 to -1.25; 1 study, 20 participants).\nDerived indices of respiratory mechanics, such as tidal volume, respiratory rate, and positive end-expiratory pressure (PEEP) were reported. There was an apparent decrease in tidal volume between prone and supine groups in a parallel study (MD -0.60, 95% CI -1.05 to -0.15; 1 study, 84 participants; very low certainty evidence). When prone and supine positions were compared in a cross-over study, there were no conclusive results in respiratory compliance (MD 0.07, 95% CI -0.10 to 0.24; 1 study, 10 participants); changes in PEEP (MD -0.70 cm H2O, 95% CI -2.72 to 1.32; 1 study, 10 participants); or resistance (MD -0.00, 95% CI -0.05 to 0.04; 1 study, 10 participants).\nOne study reported adverse events. There were no conclusive results for potential harm between groups in extubation (OR 0.57, 95% CI 0.13 to 2.54; 1 trial, 102 participants; very low certainty evidence); obstructions of the endotracheal tube (OR 5.20, 95% CI 0.24 to 111.09; 1 trial, 102 participants; very low certainty evidence); pressure ulcers (OR 1.00, 95% CI 0.41 to 2.44; 1 trial, 102 participants; very low certainty evidence); and hypercapnia (high levels of arterial carbon dioxide; OR 3.06, 95% CI 0.12 to 76.88; 1 trial, 102 participants; very low certainty evidence).\nOne study (50 participants) compared supine positions to good-lung dependent and independent positions. There was no conclusive evidence that PaO2 was different between supine and good-lung dependent positioning (MD 3.44 mm Hg, 95% CI -23.12 to 30.00; 1 trial, 25 participants; very low certainty evidence). There was also no conclusive evidence for supine position and good-lung independent positioning (MD -2.78 mmHg, 95% CI -28.84, 23.28; 25 participants; very low certainty evidence); or between good-lung dependent and independent positioning (MD 6.22, 95% CI -21.25 to 33.69; 1 trial, 25 participants; very low certainty evidence).\nAs most trials did not describe how possible biases were addressed, the potential for bias in these findings is unclear.\nAuthors' conclusions\nAlthough included studies suggest that prone positioning may offer some advantage, there was little evidence to make definitive recommendations. There appears to be low certainty evidence that positioning improves oxygenation in mechanically ventilated children with ARDS. Due to the increased risk of SIDS with prone positioning and lung injury with artificial ventilation, it is recommended that hospitalised infants and children should only be placed in this position while under continuous cardiorespiratory monitoring.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated whether there was a difference in the outcomes for infants and young children with acute respiratory distress syndrome (ARDS) on artificial ventilation who were positioned lying on their abdomen (the prone position), compared to lying on their back (the supine position), or on their side."
}
]
|
query-laysum | 3776 | [
{
"role": "user",
"content": "Abstract: Background\nAcetylcholinesterase inhibitors, such as neostigmine, have traditionally been used for reversal of non-depolarizing neuromuscular blocking agents. However, these drugs have significant limitations, such as indirect mechanisms of reversal, limited and unpredictable efficacy, and undesirable autonomic responses. Sugammadex is a selective relaxant-binding agent specifically developed for rapid reversal of non-depolarizing neuromuscular blockade induced by rocuronium. Its potential clinical benefits include fast and predictable reversal of any degree of block, increased patient safety, reduced incidence of residual block on recovery, and more efficient use of healthcare resources.\nObjectives\nThe main objective of this review was to compare the efficacy and safety of sugammadex versus neostigmine in reversing neuromuscular blockade caused by non-depolarizing neuromuscular agents in adults.\nSearch methods\nWe searched the following databases on 2 May 2016: Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (WebSPIRS Ovid SP), Embase (WebSPIRS Ovid SP), and the clinical trials registries www.controlled-trials.com, clinicaltrials.gov, and www.centerwatch.com. We re-ran the search on 10 May 2017.\nSelection criteria\nWe included randomized controlled trials (RCTs) irrespective of publication status, date of publication, blinding status, outcomes published, or language. We included adults, classified as American Society of Anesthesiologists (ASA) I to IV, who received non-depolarizing neuromuscular blocking agents for an elective in-patient or day-case surgical procedure. We included all trials comparing sugammadex versus neostigmine that reported recovery times or adverse events. We included any dose of sugammadex and neostigmine and any time point of study drug administration.\nData collection and analysis\nTwo review authors independently screened titles and abstracts to identify trials for eligibility, examined articles for eligibility, abstracted data, assessed the articles, and excluded obviously irrelevant reports. We resolved disagreements by discussion between review authors and further disagreements through consultation with the last review author. We assessed risk of bias in 10 methodological domains using the Cochrane risk of bias tool and examined risk of random error through trial sequential analysis. We used the principles of the GRADE approach to prepare an overall assessment of the quality of evidence. For our primary outcomes (recovery times to train-of-four ratio (TOFR) > 0.9), we presented data as mean differences (MDs) with 95 % confidence intervals (CIs), and for our secondary outcomes (risk of adverse events and risk of serious adverse events), we calculated risk ratios (RRs) with CIs.\nMain results\nWe included 41 studies (4206 participants) in this updated review, 38 of which were new studies. Twelve trials were eligible for meta-analysis of primary outcomes (n = 949), 28 trials were eligible for meta-analysis of secondary outcomes (n = 2298), and 10 trials (n = 1647) were ineligible for meta-analysis.\nWe compared sugammadex 2 mg/kg and neostigmine 0.05 mg/kg for reversal of rocuronium-induced moderate neuromuscular blockade (NMB). 
Sugammadex 2 mg/kg was 10.22 minutes (6.6 times) faster than neostigmine 0.05 mg/kg (1.96 vs 12.87 minutes) in reversing NMB from the second twitch (T2) to TOFR > 0.9 (MD 10.22 minutes, 95% CI 8.48 to 11.96; I2 = 84%; 10 studies, n = 835; GRADE: moderate quality).\nWe compared sugammadex 4 mg/kg and neostigmine 0.07 mg/kg for reversal of rocuronium-induced deep NMB. Sugammadex 4 mg/kg was 45.78 minutes (16.8 times) faster than neostigmine 0.07 mg/kg (2.9 vs 48.8 minutes) in reversing NMB from post-tetanic count (PTC) 1 to 5 to TOFR > 0.9 (MD 45.78 minutes, 95% CI 39.41 to 52.15; I2 = 0%; two studies, n = 114; GRADE: low quality).\nFor our secondary outcomes, we compared sugammadex, any dose, and neostigmine, any dose, looking at risk of adverse and serious adverse events. We found significantly fewer composite adverse events in the sugammadex group compared with the neostigmine group (RR 0.60, 95% CI 0.49 to 0.74; I2 = 40%; 28 studies, n = 2298; GRADE: moderate quality). Risk of adverse events was 28% in the neostigmine group and 16% in the sugammadex group, resulting in a number needed to treat for an additional beneficial outcome (NNTB) of 8. When looking at specific adverse events, we noted significantly less risk of bradycardia (RR 0.16, 95% CI 0.07 to 0.34; I2 = 0%; 11 studies, n = 1218; NNTB 14; GRADE: moderate quality), postoperative nausea and vomiting (PONV) (RR 0.52, 95% CI 0.28 to 0.97; I2 = 0%; six studies, n = 389; NNTB 16; GRADE: low quality) and overall signs of postoperative residual paralysis (RR 0.40, 95% CI 0.28 to 0.57; I2 = 0%; 15 studies, n = 1474; NNTB 13; GRADE: moderate quality) in the sugammadex group when compared with the neostigmine group. Finally, we found no significant differences between sugammadex and neostigmine regarding risk of serious adverse events (RR 0.54, 95% CI 0.13 to 2.25; I2 = 0%; 10 studies, n = 959; GRADE: low quality).\nApplication of trial sequential analysis (TSA) indicates superiority of sugammadex for outcomes such as recovery time from T2 to TOFR > 0.9, adverse events, and overall signs of postoperative residual paralysis.\nAuthors' conclusions\nReview results suggest that in comparison with neostigmine, sugammadex can more rapidly reverse rocuronium-induced neuromuscular block regardless of the depth of the block. Sugammadex 2 mg/kg is 10.22 minutes (~ 6.6 times) faster in reversing moderate neuromuscular blockade (T2) than neostigmine 0.05 mg/kg (GRADE: moderate quality), and sugammadex 4 mg/kg is 45.78 minutes (~ 16.8 times) faster in reversing deep neuromuscular blockade (PTC 1 to 5) than neostigmine 0.07 mg/kg (GRADE: low quality). With an NNTB of 8 to avoid an adverse event, sugammadex appears to have a better safety profile than neostigmine. Patients receiving sugammadex had 40% fewer adverse events compared with those given neostigmine. Specifically, risks of bradycardia (RR 0.16, NNTB 14; GRADE: moderate quality), PONV (RR 0.52, NNTB 16; GRADE: low quality), and overall signs of postoperative residual paralysis (RR 0.40, NNTB 13; GRADE: moderate quality) were reduced. Both sugammadex and neostigmine were associated with serious adverse events in less than 1% of patients, and data showed no differences in risk of serious adverse events between groups (RR 0.54; GRADE: low quality).\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Data indicate that sugammadex was 10.22 minutes (6.6 times) faster than neostigmine (1.96 vs 12.87 minutes) in reversing moderate induced paralysis. Sugammadex was 45.78 minutes (16.8 times) faster than neostigmine (2.9 vs 48.8 minutes) in reversing deep induced paralysis. Participants receiving sugammadex appeared to have a 40% reduced risk of experiencing harmful events than those given neostigmine. Statistically, eight persons can be treated with sugammadex as opposed to neostigmine to avoid one person experiencing a single random harmful event. The occurrence of serious harmful events was nearly non-existent and data show no differences between compared groups."
}
]
|
query-laysum | 3777 | [
{
"role": "user",
"content": "Abstract: Background\nDiffuse large B-cell lymphoma (DLBCL) is an aggressive cancer of the lymphatic system. About 30% to 40% of people with DLBCL experience relapse and 10% are refractory to first-line treatment usually consisting of R-CHOP chemotherapy. Of those eligible for second-line treatment, commonly consisting of salvage chemotherapy followed by autologous stem-cell transplantation (ASCT), around 50% experience relapse. With a median overall survival of less than six to 12 months, the prognosis of individuals who relapse or are refractory (r/r) to advanced lines of treatment or of those who are ineligible for ASCT, is very poor. With the introduction of chimeric antigen receptor (CAR) T-cell therapy, a novel treatment option for these people is available.\nObjectives\nTo assess the benefits and harms of chimeric antigen receptor (CAR) T-cell therapy for people with relapsed or refractory (r/r) DLBCL.\nSearch methods\nAn experienced information specialist performed a systematic database search for relevant articles on CENTRAL, MEDLINE and Embase until September 11th, 2020. We also searched trial registries and reference lists of identified studies up to this date. All search results were screened by two authors independently and a third author was involved in case of discrepancies.\nSelection criteria\nWe included prospectively planned trials evaluating CAR T-cell therapy for people with r/r DLBCL. We had planned to include randomised controlled trials (RCTs) and we flexibly adapted eligibility criteria to the most reliable study designs available. We excluded studies involving fewer than 10 participants with r/r DLBCL and studies with a proportion of participants with r/r DLBCL below 70%, unless data were reported separately for this subgroup.\nData collection and analysis\nTwo review authors extracted data and performed risk of bias ratings independently. A third author was involved in case of disagreements. As our search did not yield any completed RCTs, prospective controlled non-randomised studies of interventions (NRSIs) or prospective observational studies with a control group, we did not meta-analyse data and reported all results narratively. We adopted the GRADE approach to assess the certainty of the evidence for prioritised outcomes.\nMain results\nWe identified 13 eligible uncontrolled studies evaluating a single or multiple arms of CAR T-cell therapies. We also identified 38 ongoing studies, including three RCTs. Ten studies are awaiting classification due to completion with no retrievable results data or insufficient data to justify inclusion. The mean number of participants enrolled, treated with CAR T-cell therapy and evaluated in the included studies were 79 (range 12 to 344; data unavailable for two studies), 61 (range 12 to 294; data unavailable for one study) and 52 (range 11 to 256), respectively.\nMost studies included people with r/r DLBCL among people with other haematological B-cell malignancies. Participants had received at least a median of three prior treatment lines (data unavailable for four studies), 5% to 50% had undergone ASCT (data unavailable for five studies) and, except for two studies, 3% to 18% had undergone allogenic stem-cell transplantation (data unavailable for eight studies).\nThe overall risk of bias was high for all studies, in particular, due to incomplete follow-up and the absence of blinding. None of the included studies had a control group so that no adequate comparative effect measures could be calculated. 
The duration of follow-up varied substantially between studies, in particular, for harms. Our certainty in the evidence is very low for all outcomes.\nOverall survival was reported by eight studies (567 participants). Four studies reported survival rates at 12 months which ranged between 48% and 59%, and one study reported an overall survival rate of 50.5% at 24 months. The evidence is very uncertain about the effect of CAR T-cell therapy on overall survival.\nTwo studies including 294 participants at baseline and 59 participants at the longest follow-up (12 months or 18 months) described improvements of quality of life measured with the EuroQol 5-Dimension 5-Level visual analogue scale (EQ-5D-5L VAS) or Function Assessment of Cancer Therapy-Lymphoma (FACT-Lym). The evidence is very uncertain about the effect of CAR T-cell therapy on quality of life.\nNone of the studies reported treatment-related mortality.\nFive studies (550 participants) reported the occurrence of adverse events among participants, ranging between 99% and 100% for any grade adverse events and 68% to 98% for adverse events grade ≥ 3. In three studies (253 participants), 56% to 68% of participants experienced serious adverse events, while in one study (28 participants), no serious adverse events occurred. CAR T-cell therapy may increase the risk of adverse events and serious adverse events but the evidence is very uncertain about the exact risk.\nThe occurrence of cytokine release syndrome (CRS) was reported in 11 studies (675 participants) under use of various grading criteria. Five studies reported between 42% and 100% of participants experiencing CRS according to criteria described in Lee 2014. CAR T-cell therapy may increase the risk of CRS but the evidence is very uncertain about the exact risk.\nNine studies (575 participants) reported results on progression-free survival, disease-free survival or relapse-free survival. Twelve-month progression-free survival rates were reported by four studies and ranged between 44% and 75%. In one study, relapse-free survival remained at a rate of 64% at both 12 and 18 months. The evidence is very uncertain about the effect of CAR T-cell therapy on progression-free survival.\nThirteen studies (620 participants) provided data on complete response rates. At six months, three studies reported complete response rates between 40% and 45%. The evidence is very uncertain about the effect of CAR T-cell therapy on complete response rates.\nAuthors' conclusions\nThe available evidence on the benefits and harms of CAR T-cell therapy for people with r/r DLBCL is limited, mainly because of the absence of comparative clinical trials. The results we present should be regarded in light of this limitation and conclusions should be drawn very carefully. Due to the uncertainty in the current evidence, a large number of ongoing investigations and a risk of substantial and potentially life-threatening complications requiring supplementary treatment, it is critical to continue evaluating the evidence on this new therapy.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 13 studies with information on 679 participants (who had received several prior lines of therapy) that received CAR T-cell therapy. None of the studies had a control group. This means CAR T-cells were not compared with another treatment. We decided not to meta-analyse the study results and reported all results narratively. We also found 38 ongoing studies."
}
]
|
query-laysum | 3778 | [
{
"role": "user",
"content": "Abstract: Background\nPreterm birth is the leading cause of death in newborns and children. Tocolytic drugs aim to delay preterm birth by suppressing uterine contractions to allow time for administration of corticosteroids for fetal lung maturation, magnesium sulphate for neuroprotection, and transport to a facility with appropriate neonatal care facilities. However, there is still uncertainty about their effectiveness and safety.\nObjectives\nTo estimate relative effectiveness and safety profiles for different classes of tocolytic drugs for delaying preterm birth, and provide rankings of the available drugs.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov (21 April 2021) and reference lists of retrieved studies.\nSelection criteria\nWe included all randomised controlled trials assessing effectiveness or adverse effects of tocolytic drugs for delaying preterm birth. We excluded quasi- and non-randomised trials. We evaluated all studies against predefined criteria to judge their trustworthiness.\nData collection and analysis\nAt least two review authors independently assessed the trials for inclusion and risk of bias, and extracted data. We performed pairwise and network meta-analyses, to determine the relative effects and rankings of all available tocolytics. We used GRADE to rate the certainty of the network meta-analysis effect estimates for each tocolytic versus placebo or no treatment.\nMain results\nThis network meta-analysis includes 122 trials (13,697 women) involving six tocolytic classes, combinations of tocolytics, and placebo or no treatment. Most trials included women with threatened preterm birth, singleton pregnancy, from 24 to 34 weeks of gestation. We judged 25 (20%) studies to be at low risk of bias. Overall, certainty in the evidence varied.\nRelative effects from network meta-analysis suggested that all tocolytics are probably effective in delaying preterm birth compared with placebo or no tocolytic treatment. Betamimetics are possibly effective in delaying preterm birth by 48 hours (risk ratio (RR) 1.12, 95% confidence interval (CI) 1.05 to 1.20; low-certainty evidence), and 7 days (RR 1.14, 95% CI 1.03 to 1.25; low-certainty evidence). COX inhibitors are possibly effective in delaying preterm birth by 48 hours (RR 1.11, 95% CI 1.01 to 1.23; low-certainty evidence). Calcium channel blockers are possibly effective in delaying preterm birth by 48 hours (RR 1.16, 95% CI 1.07 to 1.24; low-certainty evidence), probably effective in delaying preterm birth by 7 days (RR 1.15, 95% CI 1.04 to 1.27; moderate-certainty evidence), and prolong pregnancy by 5 days (0.1 more to 9.2 more; high-certainty evidence). Magnesium sulphate is probably effective in delaying preterm birth by 48 hours (RR 1.12, 95% CI 1.02 to 1.23; moderate-certainty evidence). Oxytocin receptor antagonists are probably effective in delaying preterm birth by 48 hours (RR 1.13, 95% CI 1.05 to 1.22; moderate-certainty evidence), are effective in delaying preterm birth by 7 days (RR 1.18, 95% CI 1.07 to 1.30; high-certainty evidence), and possibly prolong pregnancy by 10 days (95% CI 2.3 more to 16.7 more). Nitric oxide donors are probably effective in delaying preterm birth by 48 hours (RR 1.17, 95% CI 1.05 to 1.31; moderate-certainty evidence), and 7 days (RR 1.18, 95% CI 1.02 to 1.37; moderate-certainty evidence). 
Combinations of tocolytics are probably effective in delaying preterm birth by 48 hours (RR 1.17, 95% CI 1.07 to 1.27; moderate-certainty evidence), and 7 days (RR 1.19, 95% CI 1.05 to 1.34; moderate-certainty evidence).\nNitric oxide donors ranked highest for delaying preterm birth by 48 hours and 7 days, and delay in birth (continuous outcome), followed by calcium channel blockers, oxytocin receptor antagonists and combinations of tocolytics.\nBetamimetics (RR 14.4, 95% CI 6.11 to 34.1; moderate-certainty evidence), calcium channel blockers (RR 2.96, 95% CI 1.23 to 7.11; moderate-certainty evidence), magnesium sulphate (RR 3.90, 95% CI 1.09 to 13.93; moderate-certainty evidence) and combinations of tocolytics (RR 6.87, 95% CI 2.08 to 22.7; low-certainty evidence) are probably more likely to result in cessation of treatment.\nCalcium channel blockers possibly reduce the risk of neurodevelopmental morbidity (RR 0.51, 95% CI 0.30 to 0.85; low-certainty evidence), and respiratory morbidity (RR 0.68, 95% CI 0.53 to 0.88; low-certainty evidence), and result in fewer neonates with birthweight less than 2000 g (RR 0.49, 95% CI 0.28 to 0.87; low-certainty evidence). Nitric oxide donors possibly result in neonates with higher birthweight (mean difference (MD) 425.53 g more, 95% CI 224.32 more to 626.74 more; low-certainty evidence), fewer neonates with birthweight less than 2500 g (RR 0.40, 95% CI 0.24 to 0.69; low-certainty evidence), and more advanced gestational age (MD 1.35 weeks more, 95% CI 0.37 more to 2.32 more; low-certainty evidence). Combinations of tocolytics possibly result in fewer neonates with birthweight less than 2500 g (RR 0.74, 95% CI 0.59 to 0.93; low-certainty evidence).\nIn terms of maternal adverse effects, betamimetics probably cause dyspnoea (RR 12.09, 95% CI 4.66 to 31.39; moderate-certainty evidence), palpitations (RR 7.39, 95% CI 3.83 to 14.24; moderate-certainty evidence), vomiting (RR 1.91, 95% CI 1.25 to 2.91; moderate-certainty evidence), possibly headache (RR 1.91, 95% CI 1.07 to 3.42; low-certainty evidence) and tachycardia (RR 3.01, 95% CI 1.17 to 7.71; low-certainty evidence) compared with placebo or no treatment. COX inhibitors possibly cause vomiting (RR 2.54, 95% CI 1.18 to 5.48; low-certainty evidence). Calcium channel blockers (RR 2.59, 95% CI 1.39 to 4.83; low-certainty evidence), and nitric oxide donors probably cause headache (RR 4.20, 95% CI 2.13 to 8.25; moderate-certainty evidence).\nAuthors' conclusions\nCompared with placebo or no tocolytic treatment, all tocolytic drug classes that we assessed (betamimetics, calcium channel blockers, magnesium sulphate, oxytocin receptor antagonists, nitric oxide donors) and their combinations were probably or possibly effective in delaying preterm birth for 48 hours, and 7 days. Tocolytic drugs were associated with a range of adverse effects (from minor to potentially severe) compared with placebo or no tocolytic treatment, although betamimetics and combination tocolytics were more likely to result in cessation of treatment. The effects of tocolytic use on neonatal outcomes such as neonatal and perinatal mortality, and on safety outcomes such as maternal and neonatal infection were uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tocolytics aim to delay preterm birth and allow time for women to receive medicines that can help with baby's breathing and feeding if born preterm, and medicines that lower the chance of the infant having cerebral palsy. Crucially, a short delay in preterm birth can enable women to reach specialist care. The aim of this Cochrane Review was to find out which tocolytic is most effective in delaying preterm birth, and has the fewest unwanted effects. We collected and analysed all studies to answer this question (date of search: 21 April 2021)"
}
]
|
query-laysum | 3779 | [
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers (PUs) are injuries to the skin and underlying tissues that occur most commonly over bony prominences, such as the hips and heels as a result of pressure and shear forces. PUs cause pain, discomfort, longer hospital stays, and decreased quality of life. They are also very costly to treat and consume substantial parts of healthcare budgets. PUs are largely preventable, and education targeted at patients and their carers is considered important.\nObjectives\nTo assess the effects of patient and/or lay carer education on preventing pressure ulceration in at-risk people, in any care setting.\nSearch methods\nIn June 2019 we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase; Ovid PsycINFO and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included randomised controlled trials (RCTs) that recruited people of any age at risk of pressure ulceration, and RCTs that recruited people who informally care for someone at risk of pressure ulceration.\nData collection and analysis\nTwo review authors independently performed study selection, data extraction, 'Risk of bias' assessment, and GRADE assessment of the certainty of the evidence.\nMain results\nWe included 10 studies with 11 publications (2261 participants analysed). Seven targeted their intervention at people at risk of ulceration and measured outcomes on these at risk people; two targeted those at risk and their family carers and measured outcomes on the at risk people cared for by their families; and one targeted lay carers only and measured outcomes on the at risk people they cared for. There were two main types of interventions: the provision of information on prevention of pressure ulcers, and the use of different types of education programmes.\nProvision of information on the prevention of pressure ulcers\nThree studies (237 participants) reported data for this comparison: two provided information directly to those at risk and their carers, and the third provided information to lay carers. As data could not be pooled we present individual study data. The evidence for primary outcomes is of very low certainty (downgraded twice for study limitations and twice for imprecision).\nWe are uncertain whether the combined use of a self-instruction manual and one-to-one patient training and counselling versus a self-instruction manual alone reduces the proportion of at risk people developing a new PU (risk ratio (RR) 0.40, 95% confidence interval (CI) 0.14 to 1.18), or whether carer self-instruction and one-to-one counselling versus self-instruction alone reduces the proportion of at risk people developing a new PU (RR 2.05, 95% CI 0.19 to 21.70).\nWe are uncertain whether the use of home-based training, compared with routine ward-based training, reduces the proportion of at risk people developing a new PU (RR 0.53, 95% CI 0.27 to 1.02).\nOne study explored the secondary outcome patient knowledge of pressure ulcer prevention; however, as usable data were not provided, we were unable to carry out further analysis, and no effect estimate could be calculated.\nEducational programmes on the prevention of pressure ulcers\nSeven studies (2024 participants analysed) provided data for this comparison. 
In all studies the intervention was aimed at people at risk of ulceration.\nRisk of pressure ulceration\nOne secondary report of an included study reported the primary outcome as time to PU development or occurrence and three studies and one secondary report of an included study reported this as the proportion of at risk people developing a new PU. One study reported the secondary outcome grade of PU and five studies and one secondary report of an included study reported on patient knowledge.\nThere is low certainty evidence of there being no clear difference in the proportion of participants developing a new PU between use of a pressure ulcer prevention care bundle (PUPCB) and standard care: HR 0.58, 95% CI 0.25 to 1.33 (downgraded twice for imprecision).\nOne secondary report of an included study explored whether individualised PU education and monthly structured telephone follow-up changes the mean time to PU occurrence. Not all participants in this study developed a pressure ulcer, therefore the mean time to pressure ulcer occurrence could not be calculated from the data.\nWe are uncertain whether the following three interventions reduce the proportion of at risk people developing a new PU as we assessed the certainty of evidence as very low: individualised PU education and monthly structured telephone follow-up (RR 0.55, 95% CI 0.23 to 1.30), education delivery (RR 3.57, 95% CI 0.78 to 16.38), (downgraded twice for risk of bias and twice for imprecision); and computerised feedback and one-to-one consultations (no clear data provided), (downgraded twice for risk of bias and once for indirectness).\nGrade of pressure ulcer\nThere is low certainty evidence that use of a PU prevention care bundle may make no difference to the severity of new PU development when compared with standard care.\nPatient knowledge\nWe are uncertain whether the following interventions improve patient knowledge: enhanced educational intervention and structured follow-up (mean difference (MD) 9.86, 95% CI 1.55 to 18.17); multi component motivational interviewing/self-management with a multi component education intervention (no clear data provided); Spinal Cord Injury Navigator programme (no clear data provided); individualised PU education and monthly structured telephone follow-up (no clear data provided); computerised feedback (no clear data provided), structured, patient-centric PU prevention education event (MD 30.15, 95% CI 23.56 to 36.74). We assessed the certainty of the evidence for this outcome as low or very low (downgraded for risk of bias, imprecision, or indirectness).\nAuthors' conclusions\nWe are uncertain whether educational interventions make any difference to the number of new PUs that develop, or to patient knowledge based on evidence from the 10 included studies, which we assessed as of low or very low certainty due to problems with risk of bias, serious imprecision and indirectness. The low certainty of evidence means that additional research is required to confirm these results.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Pressure ulcers are wounds that occur on the skin or underlying tissues. People who cannot move and change position (such as those using wheelchairs, or in long-term nursing and hospital care) are at most risk of developing pressure ulcers. These wounds can cause pain, discomfort, and distress and have a negative effect on quality of life. Preventing pressure ulcers is therefore very important. Educating people about the risks of pressure ulcers and how to prevent them is considered to be an important part of preventative care."
}
]
|
query-laysum | 3780 | [
{
"role": "user",
"content": "Abstract: Background\nTraditional Chinese herbal medicine (TCHM) is widely used for treating vascular dementia (VaD) in China. Recent studies of a number of TCHMs have demonstrated in vitro biological activity and therapeutic effects in animals, but the published clinical evidence has not been systematically appraised.\nObjectives\nTo evaluate the efficacy and safety of TCHMs listed in either the Chinese Pharmacopoeia (CP) or the Chinese National Essential Drug List (NEDL) that are used to treat VaD. A secondary aim was to identify promising TCHMs for further clinical research.\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group’s Specialised Register (on 14 March 2018) and also several Chinese biomedical databases: the Chinese Biological Medicine Database (January 1979 to May 2015), Wanfang database (January 1998 to May 2015), Chongqing VIP Information Co. Ltd or Weipu (January 1998 to May 2015) and the Chinese National Knowledge Infrastructure (January 1979 to May 2015).\nSelection criteria\nWe included randomised controlled trials (RCTs) of TCHMs compared to placebo, to Western medicine (WM) or to routine therapy for VaD risk factors. Eligible participants were men and women aged 18 years and above, diagnosed with VaD by any of the following four criteria: (1) Diagnostic and Statistical Manual of Mental Disorders (DSM) versions III, III-R, IV, IV-TR; (2) National Institute of Neurological Disorders and Stroke (NINDS-AIREN); (3) International Classification of Diseases 9 or 10; (4) the Hachinski or the Modified Hachinski Ischaemic Score. We required the use of an imaging technique to differentiate VaD from other dementias. We excluded (1) trials with participants diagnosed with mixed dementia or those that did not use an imaging technique to ascertain VaD; (2) trials of NEDL-listed Gingko biloba or Huperzine A as experimental interventions, to avoid duplication of existing Cochrane Reviews; (3) trials using acupuncture alone as the experimental intervention; (4) trials using another CP- or NEDL-listed TCHM (except for Huperzine A and Gingko which are popular in Western practice) as the control intervention; and (5) trials using purely non-pharmacological interventions as the control intervention unless explicitly described as 'routine therapy for VaD risk factors'.\nData collection and analysis\nWe assessed the risks of bias using the Cochrane 'Risk of bias' tool and adapted the Outcome Reporting Bias in Trials (ORBIT) classification system for outcome reporting bias. We assessed TCHM effects on five clinically important outcomes: cognition, global performance, safety, activities of daily living and behaviour and summarised the effects using mean differences for continuous outcomes and risk ratios or risk differences for binary outcomes. We stratified the studies into those that estimated the TCHM versus 'no treatment' effect and those that estimated the TCHM versus the WM effect, with further stratification by the specific TCHM tested or by one of the four modes of action. We pooled using a random-effects model. Due to substantial clinical and design heterogeneity, we did not estimate an 'overall TCHM effect'.\nMain results\nWe only found studies (47 studies, 3581 participants) for 18 of the 29 eligible TCHMs as defined by our inclusion criteria. 
All were superiority trials conducted in China between 1997 and 2013, with most employing a two-arm parallel design with sample sizes ranging from 26 to 240 and a median treatment duration of 12 weeks (range: 2 to 24 weeks).\nWe found that reporting and trial methodology were generally poor; in particular, there was a lack of information on randomisation, an absence of blinding of participants and outcome assessors and incomplete reporting of adverse events (AEs). None of the 30 trials published from 2007 onwards adopted the CONSORT recommendations for reporting RCTs of herbal interventions.\nWe found seven TCHMs which each had potentially large benefits in studies estimating the TCHM versus 'no treatment' effect and in studies estimating the TCHM versus the WM effect. Two TCHMs (NaoXinTong and TongXinLuo) were common to both groups. Three of these TCHMs – NaoXinTong, NaoMaiTai and TongXinLuo – had the strongest evidence to justify further research. Two TCHMs (NaoMaiTai and TongXinLuo) had a 5% or more increased risk of AEs compared to the 'no treatment' control, but the quality of this evidence was poor.\nAuthors' conclusions\nWe found moderate- to very low-quality evidence of benefit and harm of TCHMs for VaD. Methodological inadequacies need to be addressed by better conducted and reported trials. We identified NaoMaiTai, NaoXinTong and TongXinLuo as warranting special research priority.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 47 trials with 3581 participants of 18 TCHMs in this review. All were conducted in mainland China between 1997 and 2013 with participant numbers ranging from 26 to 240 and an average study duration of 12 weeks. There were significant problems with the methods in many of the trials, particularly with how participants were allocated to treatments, how outcomes were measured and how thoroughly harmful effects were monitored. For these and other reasons, we rated the overall quality of the evidence as variable, ranging from moderate to very low. This means that we are uncertain, and often very uncertain, about the accuracy of the results.\nDespite these reservations, we found seven TCHMs which each had potentially large benefits in studies comparing TCHMs to no treatment or Western Medicine. Three of these – Nao XinTong, NaoMaiTai and TongXinLuo – had the strongest evidence to justify further research. We found that the risk of harmful effects was at least 5% higher than the risk for participants in the control group for NaoMaiTai and TongXinLuo, but the quality of this evidence was poor."
}
]
|
query-laysum | 3781 | [
{
"role": "user",
"content": "Abstract: Background\nApproximately 30% of hospitalised older adults experience hospital-associated functional decline. Exercise interventions that promote in-hospital activity may prevent deconditioning and thereby maintain physical function during hospitalisation. This is an update of a Cochrane Review first published in 2007.\nObjectives\nTo evaluate the benefits and harms of exercise interventions for acutely hospitalised older medical inpatients on functional ability, quality of life (QoL), participant global assessment of success and adverse events compared to usual care or a sham-control intervention.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was May 2021.\nSelection criteria\nWe included randomised or quasi-randomised controlled trials evaluating an in-hospital exercise intervention in people aged 65 years or older admitted to hospital with a general medical condition. We excluded people admitted for elective reasons or surgery.\nData collection and analysis\nWe used standard Cochrane methods. Our major outcomes were 1. independence with activities of daily living; 2. functional mobility; 3. new incidence of delirium during hospitalisation; 4. QoL; 5. number of falls during hospitalisation; 6. medical deterioration during hospitalisation and 7. participant global assessment of success. Our minor outcomes were 8. death during hospitalisation; 9. musculoskeletal injuries during hospitalisation; 10. hospital length of stay; 11. new institutionalisation at hospital discharge; 12. hospital readmission and 13. walking performance. We used GRADE to assess certainty of evidence for each major outcome.\nWe categorised exercise interventions as: rehabilitation-related activities (interventions designed to increase physical activity or functional recovery, but did not follow a specified exercise protocol); structured exercise (interventions that included an exercise intervention protocol but did not include progressive resistance training); and progressive resistance exercise (interventions that included an element of progressive resistance training).\nMain results\nWe included 24 studies (nine rehabilitation-related activity interventions, six structured exercise interventions and nine progressive resistance exercise interventions) with 7511 participants. All studies compared exercise interventions to usual care; two studies, in addition to usual care, used sham interventions. Mean ages ranged from 73 to 88 years, and 58% of participants were women.\nSeveral studies were at high risk of bias. The most common domain assessed at high risk of bias was measurement of the outcome, and five studies (21%) were at high risk of bias arising from the randomisation process.\nExercise may have no clinically important effect on independence in activities of daily living at discharge from hospital compared to controls (16 studies, 5174 participants; low-certainty evidence). Five studies used the Barthel Index (scale: 0 to 100, higher scores representing greater independence). Mean scores at discharge in the control groups ranged from 42 to 96 points, and independence in activities of daily living was 1.8 points better (0.43 worse to 4.12 better) with exercise compared to controls. The minimally clinical important difference (MCID) is estimated to be 11 points.\nWe are uncertain regarding the effect of exercise on functional mobility at discharge from the hospital compared to controls (8 studies, 2369 participants; very low-certainty evidence). 
Three studies used the Short Physical Performance Battery (SPPB) (scale: 0 to 12, higher scores representing better function) to measure functional mobility. Mean scores at discharge in the control groups ranged from 3.7 to 4.9 points on the SPPB, and the estimated effect of the exercise interventions was 0.78 points better (0.02 worse to 1.57 better). A change of 1 point on the SPPB represents an MCID.\nWe are uncertain regarding the effect of exercise on the incidence of delirium during hospitalisation compared to controls (7 trials, 2088 participants; very low-certainty evidence). The incidence of delirium during hospitalisation was 88/1091 (81 per 1000) in the control group compared with 70/997 (73 per 1000; range 47 to 114) in the exercise group (RR 0.90, 95% CI 0.58 to 1.41).\nExercise interventions may result in a small clinically unimportant improvement in QoL at discharge from the hospital compared to controls (4 studies, 875 participants; low-certainty evidence). Mean QoL on the EuroQol 5 Dimensions (EQ-5D) visual analogue scale (VAS) (scale: 0 to 100, higher scores representing better QoL) ranged between 48.9 and 64.7 in the control group at discharge from the hospital, and QoL was 6.04 points better (0.9 better to 11.18 better) with exercise. A change of 10 points on the EQ-5D VAS represents an MCID.\nNo studies measured participant global assessment of success.\nExercise interventions did not affect the risk of falls during hospitalisation (moderate-certainty evidence). The incidence of falls was 31/899 (34 per 1000) in the control group compared with 31/888 (34 per 1000; range 20 to 57) in the exercise group (RR 0.99, 95% CI 0.59 to 1.65).\nWe are uncertain regarding the effect of exercise on the incidence of medical deterioration during hospitalisation (very low-certainty evidence). The incidence of medical deterioration in the control group was 101/1417 (71 per 1000) compared with 96/1313 (73 per 1000; range 44 to 120) in the exercise group (RR 1.02, 95% CI 0.62 to 1.68).\nSubgroup analyses by different intervention categories and by the use of a sham intervention were not meaningfully different from the main analyses.\nAuthors' conclusions\nExercise may make little difference to independence in activities of daily living or QoL, but probably does not result in more falls in older medical inpatients. We are uncertain about the effect of exercise on functional mobility, incidence of delirium and medical deterioration. Certainty of evidence was limited by risk of bias and inconsistency. Future primary research on the effect of exercise on acute hospitalisation could focus on more consistent and uniform reporting of participant's characteristics including their baseline level of functional ability, as well as exercise dose, intensity and adherence that may provide an insight into the reasons for the observed inconsistencies in findings.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There may be a benefit in some exercise treatments for older adults during an unplanned hospital stay, but we cannot be certain. Exercise interventions probably do not cause harm; we found no increase in the risk of falling for older adults when they were in hospital."
}
]
|
query-laysum | 3782 | [
{
"role": "user",
"content": "Abstract: Background\nChest physiotherapy is widely prescribed to assist the clearance of airway secretions in people with cystic fibrosis (CF). Positive expiratory pressure (PEP) devices provide back pressure to the airways during expiration. This may improve clearance by building up gas behind mucus via collateral ventilation and by temporarily increasing functional residual capacity. The developers of the PEP technique recommend using PEP with a mask in order to avoid air leaks via the upper airways and mouth. In addition, increasing forced residual capacity (FRC) has not been demonstrated using mouthpiece PEP. Given the widespread use of PEP devices, there is a need to determine the evidence for their effect. This is an update of a previously published review.\nObjectives\nTo determine the effectiveness and acceptability of PEP devices compared to other forms of physiotherapy as a means of improving mucus clearance and other outcomes in people with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising of references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. The electronic database CINAHL was also searched from 1982 to 2017.\nMost recent search of the Group's CF Trials Register: 20 February 2019.\nSelection criteria\nRandomised controlled studies in which PEP was compared with any other form of physiotherapy in people with CF. This included, postural drainage and percussion (PDPV), active cycle of breathing techniques (ACBT), oscillating PEP devices, thoracic oscillating devices, bilevel positive airway pressure (BiPaP) and exercise.\nData collection and analysis\nThree authors independently applied the inclusion and exclusion criteria to publications, assessed the risk of bias of the included studies and assessed the quality of the evidence using the GRADE recommendations.\nMain results\nA total of 28 studies (involving 788 children and adults) were included in the review; 18 studies involving 296 participants were cross-over in design. Data were not published in sufficient detail in most of these studies to perform any meta-analysis. In 22 of the 28 studies the PEP technique was performed using a mask, in three of the studies a mouthpiece was used with nose clips and in three studies it was unclear whether a mask or mouthpiece was used. These studies compared PEP to ACBT, autogenic drainage (AD), oral oscillating PEP devices, high-frequency chest wall oscillation (HFCWO) and BiPaP and exercise.\nForced expiratory volume in one second was the review's primary outcome and the most frequently reported outcome in the studies (24 studies, 716 participants). Single interventions or series of treatments that continued for up to three months demonstrated little or no difference in effect between PEP and other methods of airway clearance on this outcome (low- to moderate-quality evidence). However, long-term studies had equivocal or conflicting results regarding the effect on this outcome (low- to moderate-quality evidence).\nA second primary outcome was the number of respiratory exacerbations. There was a lower exacerbation rate in participants using PEP compared to other techniques when used with a mask for at least one year (five studies, 232 participants; moderate- to high-quality evidence). 
In one of the included studies which used PEP with a mouthpiece, it was reported (personal communication) that there was no difference in the number of respiratory exacerbations (66 participants, low-quality evidence).\nParticipant preference was reported in 10 studies, and in all studies with an intervention period of at least one month, this was in favour of PEP. The results for the remaining outcome measures (including our third primary outcome of mucus clearance) were not examined or reported in sufficient detail to provide any high-quality evidence; only very low- to moderate-quality evidence was available for other outcomes. There was limited evidence reported on adverse events; these were measured in five studies, two of which found no events. In a study where infants performing either PEP or PDPV experienced some gastro-oesophageal reflux, this was more severe in the PDPV group (26 infants, low-quality evidence). In PEP versus oscillating PEP, adverse events were only reported in the flutter group (five participants complained of dizziness, which improved after further instructions on device use were provided) (22 participants, low-quality evidence). In PEP versus HFCWO, from one long-term high-quality study (107 participants) there was little or no difference in terms of number of adverse events; however, those in the PEP group had fewer adverse events related to the lower airways when compared to HFCWO (high-certainty evidence).\nMany studies had a risk of bias as they did not report how the randomisation sequence was either generated or concealed. Most studies reported the number of dropouts and also reported on all planned outcome measures.\nAuthors' conclusions\nThe evidence provided by this review is of variable quality, but suggests that all techniques and devices described may have a place in the clinical treatment of people with CF.\nFollowing meta-analyses of the effects of PEP versus other airway clearance techniques on lung function and patient preference, this Cochrane Review demonstrated that there was high-quality evidence that showed a significant reduction in pulmonary exacerbations when PEP using a mask was compared with HFCWO. It is important to note that airway clearance techniques should be individualised throughout life according to developmental stages, patient preferences, pulmonary symptoms and lung function. This also applies as conditions vary between baseline function and pulmonary exacerbations.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review includes 28 studies with 788 people (from infants to adults) with CF with mild to severe lung disease. The studies compared PEP to other methods of chest physiotherapy; the length of treatment ranged from a single session to two years of treatment."
}
]
|
query-laysum | 3783 | [
{
"role": "user",
"content": "Abstract: Background\nAcetylcholinesterase inhibitors, such as neostigmine, have traditionally been used for reversal of non-depolarizing neuromuscular blocking agents. However, these drugs have significant limitations, such as indirect mechanisms of reversal, limited and unpredictable efficacy, and undesirable autonomic responses. Sugammadex is a selective relaxant-binding agent specifically developed for rapid reversal of non-depolarizing neuromuscular blockade induced by rocuronium. Its potential clinical benefits include fast and predictable reversal of any degree of block, increased patient safety, reduced incidence of residual block on recovery, and more efficient use of healthcare resources.\nObjectives\nThe main objective of this review was to compare the efficacy and safety of sugammadex versus neostigmine in reversing neuromuscular blockade caused by non-depolarizing neuromuscular agents in adults.\nSearch methods\nWe searched the following databases on 2 May 2016: Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (WebSPIRS Ovid SP), Embase (WebSPIRS Ovid SP), and the clinical trials registries www.controlled-trials.com, clinicaltrials.gov, and www.centerwatch.com. We re-ran the search on 10 May 2017.\nSelection criteria\nWe included randomized controlled trials (RCTs) irrespective of publication status, date of publication, blinding status, outcomes published, or language. We included adults, classified as American Society of Anesthesiologists (ASA) I to IV, who received non-depolarizing neuromuscular blocking agents for an elective in-patient or day-case surgical procedure. We included all trials comparing sugammadex versus neostigmine that reported recovery times or adverse events. We included any dose of sugammadex and neostigmine and any time point of study drug administration.\nData collection and analysis\nTwo review authors independently screened titles and abstracts to identify trials for eligibility, examined articles for eligibility, abstracted data, assessed the articles, and excluded obviously irrelevant reports. We resolved disagreements by discussion between review authors and further disagreements through consultation with the last review author. We assessed risk of bias in 10 methodological domains using the Cochrane risk of bias tool and examined risk of random error through trial sequential analysis. We used the principles of the GRADE approach to prepare an overall assessment of the quality of evidence. For our primary outcomes (recovery times to train-of-four ratio (TOFR) > 0.9), we presented data as mean differences (MDs) with 95 % confidence intervals (CIs), and for our secondary outcomes (risk of adverse events and risk of serious adverse events), we calculated risk ratios (RRs) with CIs.\nMain results\nWe included 41 studies (4206 participants) in this updated review, 38 of which were new studies. Twelve trials were eligible for meta-analysis of primary outcomes (n = 949), 28 trials were eligible for meta-analysis of secondary outcomes (n = 2298), and 10 trials (n = 1647) were ineligible for meta-analysis.\nWe compared sugammadex 2 mg/kg and neostigmine 0.05 mg/kg for reversal of rocuronium-induced moderate neuromuscular blockade (NMB). 
Sugammadex 2 mg/kg was 10.22 minutes (6.6 times) faster than neostigmine 0.05 mg/kg (1.96 vs 12.87 minutes) in reversing NMB from the second twitch (T2) to TOFR > 0.9 (MD 10.22 minutes, 95% CI 8.48 to 11.96; I2 = 84%; 10 studies, n = 835; GRADE: moderate quality).\nWe compared sugammadex 4 mg/kg and neostigmine 0.07 mg/kg for reversal of rocuronium-induced deep NMB. Sugammadex 4 mg/kg was 45.78 minutes (16.8 times) faster than neostigmine 0.07 mg/kg (2.9 vs 48.8 minutes) in reversing NMB from post-tetanic count (PTC) 1 to 5 to TOFR > 0.9 (MD 45.78 minutes, 95% CI 39.41 to 52.15; I2 = 0%; two studies, n = 114; GRADE: low quality).\nFor our secondary outcomes, we compared sugammadex, any dose, and neostigmine, any dose, looking at risk of adverse and serious adverse events. We found significantly fewer composite adverse events in the sugammadex group compared with the neostigmine group (RR 0.60, 95% CI 0.49 to 0.74; I2 = 40%; 28 studies, n = 2298; GRADE: moderate quality). Risk of adverse events was 28% in the neostigmine group and 16% in the sugammadex group, resulting in a number needed to treat for an additional beneficial outcome (NNTB) of 8. When looking at specific adverse events, we noted significantly less risk of bradycardia (RR 0.16, 95% CI 0.07 to 0.34; I2 = 0%; 11 studies, n = 1218; NNTB 14; GRADE: moderate quality), postoperative nausea and vomiting (PONV) (RR 0.52, 95% CI 0.28 to 0.97; I2 = 0%; six studies, n = 389; NNTB 16; GRADE: low quality) and overall signs of postoperative residual paralysis (RR 0.40, 95% CI 0.28 to 0.57; I2 = 0%; 15 studies, n = 1474; NNTB 13; GRADE: moderate quality) in the sugammadex group when compared with the neostigmine group. Finally, we found no significant differences between sugammadex and neostigmine regarding risk of serious adverse events (RR 0.54, 95% CI 0.13 to 2.25; I2 = 0%; 10 studies, n = 959; GRADE: low quality).\nApplication of trial sequential analysis (TSA) indicates superiority of sugammadex for outcomes such as recovery time from T2 to TOFR > 0.9, adverse events, and overall signs of postoperative residual paralysis.\nAuthors' conclusions\nReview results suggest that in comparison with neostigmine, sugammadex can more rapidly reverse rocuronium-induced neuromuscular block regardless of the depth of the block. Sugammadex 2 mg/kg is 10.22 minutes (~ 6.6 times) faster in reversing moderate neuromuscular blockade (T2) than neostigmine 0.05 mg/kg (GRADE: moderate quality), and sugammadex 4 mg/kg is 45.78 minutes (~ 16.8 times) faster in reversing deep neuromuscular blockade (PTC 1 to 5) than neostigmine 0.07 mg/kg (GRADE: low quality). With an NNTB of 8 to avoid an adverse event, sugammadex appears to have a better safety profile than neostigmine. Patients receiving sugammadex had 40% fewer adverse events compared with those given neostigmine. Specifically, risks of bradycardia (RR 0.16, NNTB 14; GRADE: moderate quality), PONV (RR 0.52, NNTB 16; GRADE: low quality), and overall signs of postoperative residual paralysis (RR 0.40, NNTB 13; GRADE: moderate quality) were reduced. Both sugammadex and neostigmine were associated with serious adverse events in less than 1% of patients, and data showed no differences in risk of serious adverse events between groups (RR 0.54; GRADE: low quality).\n\nGiven the provided abstract, please respond to the following query: \"Objective\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review systematically sets out to compare the benefits and harms of sugammadex and neostigmine. The evidence is current up to May 2017."
}
]
|
query-laysum | 3784 | [
{
"role": "user",
"content": "Abstract: Background\nRecurrent respiratory papillomatosis (RRP) is a benign condition of the mucosa of the upper aerodigestive tract. It is characterised by recurrent papillomatous lesions and is associated with human papillomavirus (HPV). Frequent recurrence and rapid papilloma growth are common and in part responsible for the onset of potentially life-threatening symptoms. Most patients afflicted by the condition will require repeated surgical treatments to maintain their airway, and these may result in scarring and voice problems. Photodynamic therapy introduces a light-sensitising agent, which is administered either orally or by injection. This substance (called a photo-sensitiser) is selectively retained in hyperplastic and neoplastic tissue, including papilloma. It is then activated by light of a specific wavelength and may be used as a sole or adjuvant treatment for RRP.\nObjectives\nTo assess the effects of photodynamic therapy in the management of recurrent respiratory papillomatosis (RRP) in children and adults.\nSearch methods\nWe searched the Cochrane Ear, Nose and Throat Disorders Group Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; EMBASE; CINAHL; Web of Science; Cambridge Scientific Abstracts; ICTRP and additional sources for published and unpublished trials. The date of the search was 27 January 2014.\nSelection criteria\nRandomised controlled trials utilising photodynamic therapy as sole or adjuvant therapy in participants of any age with proven RRP versus control intervention. Primary outcome measures were symptom improvement (respiratory distress/dyspnoea and voice quality), quality of life improvement and recurrence-free interval. Secondary outcomes included reduction in the frequency of surgical intervention, reduction in disease volume and adverse effects of treatment.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration. Meta-analysis was not possible and results are presented descriptively.\nMain results\nWe included one trial with a total of 23 participants. This study was at high risk of bias. None of our primary outcomes and only one of our secondary outcomes (reduction in volume of disease, assessed endoscopically) was measured in the study. There was no significant difference between the groups (very low-quality evidence). Adverse effects reported included airway swelling requiring intubation in a child with severe RRP a few hours after photodynamic therapy.\nAuthors' conclusions\nThere is insufficient evidence from high-quality randomised controlled trials to determine whether photodynamic therapy alters the course of disease or provides an added benefit to surgery in patients with recurrent respiratory papillomatosis. Multicentre randomised controlled trials with appropriate sample sizes and long-term follow-up are required to evaluate whether photodynamic therapy is of benefit. Outcomes such as improvement in symptoms (respiratory function and voice quality) and quality of life should be measured in future trials.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Recurrent respiratory papillomatosis is a condition of the mucosal lining of the upper airway, which leads to multiple benign, wart-like growths (papilloma). Although not cancerous, it can lead to serious problems, including hoarseness and airway obstruction. The main treatment is repeated surgical removal of the papilloma using a laser or cutting instrument. However, multiple surgical procedures carry the risk of complications and can also result in long-term scarring. Photodynamic therapy works through the application of a light-sensitising substance, which is then activated by light of a specific wavelength. A chemical reaction creates powerful active molecules that destroy the papilloma locally. It can be used on its own or as an additional treatment together with surgical removal. It has been proposed that photodynamic therapy slows the growth of the papilloma and results in fewer recurrences and therefore fewer surgical procedures."
}
]
|
query-laysum | 3785 | [
{
"role": "user",
"content": "Abstract: Background\nDornase alfa is currently used as a mucolytic to treat pulmonary disease (the major cause of morbidity and mortality) in cystic fibrosis. It reduces mucus viscosity in the lungs, promoting improved clearance of secretions. This is an update of a previously published review.\nObjectives\nTo determine whether the use of dornase alfa in cystic fibrosis is associated with improved mortality and morbidity compared to placebo or other medications that improve airway clearance, and to identify any adverse events associated with its use.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register which comprises references identified from comprehensive electronic database searches, handsearching relevant journals and abstracts from conferences. Date of the most recent search of the Group's Cystic Fibrosis Register: 12 October 2020.\nClinicaltrials.gov and the International Clinical Trials Registry Platform were also searched to identify unpublished or ongoing trials. Date of most recent search: 08 February 2021.\nSelection criteria\nAll randomised and quasi-randomised controlled trials comparing dornase alfa to placebo, standard therapy or other medications that improve airway clearance.\nData collection and analysis\nAuthors independently assessed trials against the inclusion criteria; two authors carried out analysis of methodological quality and data extraction. GRADE was used to assess the level of evidence.\nMain results\nThe searches identified 74 trials, of which 19 (2565 participants) met our inclusion criteria. 15 trials compared dornase alfa to placebo or no dornase alfa (2447 participants); two compared daily dornase to hypertonic saline (32 participants); one compared daily dornase alfa to hypertonic saline and alternate day dornase alfa (48 participants); one compared dornase alfa to mannitol and the combination of both drugs (38 participants). Trial duration varied from six days to three years.\nDornase alfa compared to placebo or no treatment\nDornase alfa probably improved forced expiratory volume at one second (FEV1) at one month (four trials, 248 participants), three months (one trial, 320 participants; moderate-quality evidence), six months (one trial, 647 participants; high-quality evidence) and two years (one trial, 410 participants). Limited low-quality evidence showed treatment may make little or no difference in quality of life. Dornase alfa probably reduced the number of pulmonary exacerbations in trials of up to two years (moderate-quality evidence). One trial that examined the cost of care, including the cost of dornase alfa, found that the cost savings from dornase alfa offset 18% to 38% of the medication costs.\nDornase alfa: daily versus alternate day\nOne cross-over trial (43 children) found little or no difference between treatment regimens for lung function, quality of life or pulmonary exacerbations (low-quality evidence).\nDornase alfa compared to other medications that improve airway clearance\nResults for these comparisons were mixed. One trial (43 children) showed dornase alfa may lead to a greater improvement in FEV1 compared to hypertonic saline (low-quality evidence), and one trial (23 participants) reported little or no differences in lung function between dornase alfa and mannitol or dornase alfa and dornase alfa plus mannitol (low-quality evidence). 
One trial (23 participants) found dornase alfa may improve quality of life compared to dornase alfa plus mannitol (low-quality evidence); other comparisons found little or no difference in this outcome (low-quality evidence). No trials in any comparison reported any difference between groups in the number of pulmonary exacerbations (low-quality evidence).\nWhen all comparisons are assessed, dornase alfa did not cause significantly more adverse effects than other treatments, except voice alteration and rash.\nAuthors' conclusions\nThere is evidence to show that, compared with placebo, therapy with dornase alfa may improve lung function in people with cystic fibrosis in trials lasting from one month to two years. There was a decrease in pulmonary exacerbations in trials of six months or longer, probably due to treatment. Voice alteration and rash appear to be the only adverse events reported with increased frequency in randomised controlled trials. There is not enough evidence to firmly conclude if dornase alfa is superior to other hyperosmolar agents in improving lung function.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the effect of using inhaled dornase alfa for treating lung disease in people with cystic fibrosis."
}
]
|
query-laysum | 3786 | [
{
"role": "user",
"content": "Abstract: Background\nThe subclavian arteries are two major arteries of the upper chest, below the collar bone, which come from the arch of the aorta. Endovascular treatment for stenosis of the subclavian arteries includes angioplasty alone, and with stenting. There is insufficient evidence to guide the use of stents following angioplasty for subclavian artery stenosis. This is the second update of a review first published in 2011.\nObjectives\nThe aim of this review was to determine whether stenting was more effective than angioplasty alone for stenosis of the subclavian artery.\nSearch methods\nFor this update, the Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, and LILACS databases, and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 2 February 2021.\nSelection criteria\nWe searched for randomised controlled trials of endovascular treatment of subclavian artery lesions that compared angioplasty alone and stent implantation.\nData collection and analysis\nTwo review authors independently evaluated studies to assess eligibility. Discrepancies were resolved by discussion. If there was no agreement, we asked a third review author to assess the study for inclusion. We planned to undertake data collection and analysis in accordance with recommendations described in the Cochrane Handbook for Systematic Reviews of Interventions, and assess the certainty of the evidence using a GRADE approach.\nMain results\nTo date, we have not identified any completed or ongoing randomised controlled trials that compare percutaneous transluminal angioplasty and stenting for subclavian artery stenosis.\nAuthors' conclusions\nThere is currently insufficient evidence to determine whether stenting is more effective than angioplasty alone for stenosis of the subclavian artery.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature for randomised controlled studies of endovascular treatment of subclavian artery stenosis that compared angioplasty alone, or with a stent implanted. In randomised controlled studies, the treatments or tests people receive are decided at random. These usually give the most reliable evidence about treatment effects."
}
]
|
query-laysum | 3787 | [
{
"role": "user",
"content": "Abstract: Background\nEarly accurate detection of all skin cancer types is important to guide appropriate management and improve morbidity and survival. Basal cell carcinoma (BCC) is usually a localised skin cancer but with potential to infiltrate and damage surrounding tissue, whereas cutaneous squamous cell carcinoma (cSCC) and melanoma are higher risk skin cancers with the potential to metastasise and ultimately lead to death. When used in conjunction with clinical or dermoscopic suspicion of malignancy, or both, reflectance confocal microscopy (RCM) may help to identify cancers eligible for non-surgical treatment without the need for a diagnostic biopsy, particularly in people with suspected BCC. Any potential benefit must be balanced against the risk of any misdiagnoses.\nObjectives\nTo determine the diagnostic accuracy of RCM for the detection of BCC, cSCC, or any skin cancer in adults with any suspicious lesion and lesions that are difficult to diagnose (equivocal); and to compare its accuracy with that of usual practice (visual inspection or dermoscopy, or both).\nSearch methods\nWe undertook a comprehensive search of the following databases from inception to August 2016: Cochrane Central Register of Controlled Trials; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design that evaluated the accuracy of RCM alone, or RCM in comparison to visual inspection or dermoscopy, or both, in adults with lesions suspicious for skin cancer compared with a reference standard of either histological confirmation or clinical follow-up, or both.\nData collection and analysis\nTwo review authors independently extracted data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold were missing. We estimated summary sensitivities and specificities using the bivariate hierarchical model. For computation of likely numbers of true-positive, false-positive, false-negative, and true-negative findings in the 'Summary of findings' tables, we applied summary sensitivity and specificity estimates to lower quartile, median and upper quartiles of the prevalence observed in the study groups. We also investigated the impact of observer experience.\nMain results\nThe review included 10 studies reporting on 11 study cohorts. All 11 cohorts reported data for the detection of BCC, including 2037 lesions (464 with BCC); and four cohorts reported data for the detection of cSCC, including 834 lesions (71 with cSCC). Only one study also reported data for the detection of BCC or cSCC using dermoscopy, limiting comparisons between RCM and dermoscopy. Studies were at high or unclear risk of bias across almost all methodological quality domains, and were of high or unclear concern regarding applicability of the evidence. Selective participant recruitment, unclear blinding of the reference test, and exclusions due to image quality or technical difficulties were observed. 
It was unclear whether studies were representative of populations eligible for testing with RCM, and test interpretation was often undertaken using images, remotely from the participant and the interpreter blinded to clinical information that would normally be available in practice.\nMeta-analysis found RCM to be more sensitive but less specific for the detection of BCC in studies of participants with equivocal lesions (sensitivity 94%, 95% confidence interval (CI) 79% to 98%; specificity 85%, 95% CI 72% to 92%; 3 studies) compared to studies that included any suspicious lesion (sensitivity 76%, 95% CI 45% to 92%; specificity 95%, 95% CI 66% to 99%; 4 studies), although CIs were wide. At the median prevalence of disease of 12.5% observed in studies including any suspicious lesion, applying these results to a hypothetical population of 1000 lesions results in 30 BCCs missed with 44 false-positive results (lesions misdiagnosed as BCCs). At the median prevalence of disease of 15% observed in studies of equivocal lesions, nine BCCs would be missed with 128 false-positive results in a population of 1000 lesions. Across both sets of studies, up to 15% of these false-positive lesions were observed to be melanomas mistaken for BCCs. There was some suggestion of higher sensitivities in studies with more experienced observers. Summary sensitivity and specificity could not be estimated for the detection of cSCC due to paucity of data.\nAuthors' conclusions\nThere is insufficient evidence for the use of RCM for the diagnosis of BCC or cSCC in either population group. A possible role for RCM in clinical practice is as a tool to avoid diagnostic biopsies in lesions with a relatively high clinical suspicion of BCC. The potential for, and consequences of, misclassification of other skin cancers such as melanoma as BCCs requires further research. Importantly, data are lacking that compare RCM to standard clinical practice (with or without dermoscopy).\n\nGiven the provided abstract, please respond to the following query: \"Why is improving the diagnosis of BCC or cSCC important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There are a number of different types of skin cancer. BCC and cSCC are usually localised skin cancers. Making the correct diagnosis is important because mistaking one skin cancer for another can lead to the wrong treatment being used or lead to a delay in effective treatment. A missed diagnosis of BCC (known as a false-negative result) can result in the missed BCC growing and causing disfigurement. A missed diagnosis of cSCC is more serious as it could spread to other parts of the body. Diagnosing a skin cancer when it is not actually present (a false-positive result) may result in unnecessary biopsy or treatment and can cause discomfort and worry to patients."
}
]
|
query-laysum | 3788 | [
{
"role": "user",
"content": "Abstract: Background\nThere are controversies about the amount of calories and the type of nutritional support that should be given to critically-ill people. Several authors advocate the potential benefits of hypocaloric nutrition support, but the evidence is inconclusive.\nObjectives\nTo assess the effects of prescribed hypocaloric nutrition support in comparison with standard nutrition support for critically-ill adults\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, Cochrane Library), MEDLINE, Embase and LILACS (from inception to 20 June 2017) with a specific strategy for each database. We also assessed three websites, conference proceedings and reference lists, and contacted leaders in the field and the pharmaceutical industry for undetected/unpublished studies. There was no restriction by date, language or publication status.\nSelection criteria\nWe included randomized and quasi-randomized controlled trials comparing hypocaloric nutrition support to normo- or hypercaloric nutrition support or no nutrition support (e.g. fasting) in adults hospitalized in intensive care units (ICUs).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We meta-analysed data for comparisons in which clinical heterogeneity was low. We conducted prespecified subgroup and sensitivity analyses, and post hoc analyses, including meta-regression. Our primary outcomes were: mortality (death occurred during the ICU and hospital stay, or 28- to 30-day all-cause mortality); length of stay (days stayed in the ICU and in the hospital); and Infectious complications. Secondary outcomes included: length of mechanical ventilation. We assessed the quality of evidence with GRADE.\nMain results\nWe identified 15 trials, with a total of 3129 ICU participants from university-associated hospitals in the USA, Colombia, Saudi Arabia, Canada, Greece, Germany and Iran. There are two ongoing studies. Participants suffered from medical and surgical conditions, with a variety of inclusion criteria. Four studies used parenteral nutrition and nine studies used only enteral nutrition; it was unclear whether the remaining two used parenteral nutrition. Most of them could not achieve the proposed caloric targets, resulting in small differences in the administered calories between intervention and control groups. Most studies were funded by the US government or non-governmental associations, but three studies received funding from industry. Five studies did not specify their funding sources.\nThe included studies suffered from important clinical and statistical heterogeneity. This heterogeneity did not allow us to report pooled estimates of the primary and secondary outcomes, so we have described them narratively.\nWhen comparing hypocaloric nutrition support with a control nutrition support, for hospital mortality (9 studies, 1775 participants), the risk ratios ranged from 0.23 to 5.54; for ICU mortality (4 studies, 1291 participants) the risk ratios ranged from 0.81 to 5.54, and for mortality at 30 days (7 studies, 2611 participants) the risk ratios ranged from 0.79 to 3.00. Most of these estimates included the null value. 
The quality of the evidence was very low due to unclear or high risk of bias, inconsistency and imprecision.\nParticipants who received hypocaloric nutrition support compared to control nutrition support had a range of mean hospital lengths of stay of 15.70 days lower to 10.70 days higher (10 studies, 1677 participants), a range of mean ICU lengths of stay of 11.00 days lower to 5.40 days higher (11 studies, 2942 participants) and a range of mean lengths of mechanical ventilation of 13.20 days lower to 8.36 days higher (12 studies, 3000 participants). The quality of the evidence for this outcome was very low due to unclear or high risk of bias in most studies, inconsistency and imprecision.\nThe risk ratios for infectious complications (10 studies, 2804 participants) of each individual study ranged from 0.54 to 2.54. The quality of the evidence for this outcome was very low due to unclear or high risk of bias, inconsistency and imprecision.\nWe were not able to explain the causes of the observed heterogeneity using subgroup and sensitivity analyses or meta-regression.\nAuthors' conclusions\nThe included studies had substantial clinical heterogeneity. We found very low-quality evidence about the effects of prescribed hypocaloric nutrition support on mortality in hospital, in the ICU and at 30 days, as well as in length of hospital and ICU stay, infectious complications and the length of mechanical ventilation. For these outcomes there is uncertainty about the effects of prescribed hypocaloric nutrition, since the range of estimates includes both appreciable benefits and harms.\nGiven these limitations, results must be interpreted with caution in the clinical field, considering the unclear balance of the risks and harms of this intervention. Future research addressing the clinical heterogeneity of participants and interventions, study limitations and sample size could clarify the effects of this intervention.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Critically-ill people experience major metabolic changes (one chemical is transformed through a series of steps into another chemical) during injury or sepsis (a life-threatening condition in which the body's response to infection causes injury to its own organs). They receive nutritional support to prevent or minimize some adverse effects. Nevertheless, both overfeeding and starvation can be harmful.\nThere is currently no agreement about the amount of calories we should give to these critically-ill people. Normal caloric feeding provides the estimated caloric needs. Hypocaloric feeding provides an intentionally lower amount of calories."
}
]
|
query-laysum | 3789 | [
{
"role": "user",
"content": "Abstract: Background\nReal-time video communication software such as Skype and FaceTime transmits live video and audio over the Internet, allowing counsellors to provide support to help people quit smoking. There are more than four billion Internet users worldwide, and Internet users can download free video communication software, rendering a video counselling approach both feasible and scalable for helping people to quit smoking.\nObjectives\nTo assess the effectiveness of real-time video counselling delivered individually or to a group in increasing smoking cessation, quit attempts, intervention adherence, satisfaction and therapeutic alliance, and to provide an economic evaluation regarding real-time video counselling.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialised Register, CENTRAL, MEDLINE, PubMed, PsycINFO and Embase to identify eligible studies on 13 August 2019. We searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov to identify ongoing trials registered by 13 August 2019. We checked the reference lists of included articles and contacted smoking cessation researchers for any additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), randomised trials, cluster RCTs or cluster randomised trials of real-time video counselling for current tobacco smokers from any setting that measured smoking cessation at least six months following baseline. The real-time video counselling intervention could be compared with a no intervention control group or another smoking cessation intervention, or both.\nData collection and analysis\nTwo authors independently extracted data from included trials, assessed the risk of bias and rated the certainty of the evidence using the GRADE approach. We performed a random-effects meta-analysis for the primary outcome of smoking cessation, using the most stringent measure of smoking cessation measured at the longest follow-up. Analysis was based on the intention-to-treat principle. We considered participants with missing data at follow-up for the primary outcome of smoking cessation to be smokers.\nMain results\nWe included two randomised trials with 615 participants. Both studies delivered real-time video counselling for smoking cessation individually, compared with telephone counselling. We judged one study at unclear risk of bias and one study at high risk of bias. There was no statistically significant treatment effect for smoking cessation (using the strictest definition and longest follow-up) across the two included studies when real-time video counselling was compared to telephone counselling (risk ratio (RR) 2.15, 95% confidence interval (CI) 0.38 to 12.04; 2 studies, 608 participants; I2 = 66%). We judged the overall certainty of the evidence for smoking cessation as very low due to methodological limitations, imprecision in the effect estimate reflected by the wide 95% CIs and inconsistency of cessation rates. There were no significant differences between real-time video counselling and telephone counselling reported for number of quit attempts among people who continued to smoke (mean difference (MD) 0.50, 95% CI –0.60 to 1.60; 1 study, 499 participants), mean number of counselling sessions completed (MD –0.20, 95% CI –0.45 to 0.05; 1 study, 566 participants), completion of all sessions (RR 1.13, 95% CI 0.71 to 1.79; 1 study, 43 participants) or therapeutic alliance (MD 1.13, 95% CI –0.24 to 2.50; 1 study, 398 participants). 
Participants in the video counselling arm were more likely than their telephone counselling counterparts to recommend the programme to a friend or family member (RR 1.06, 95% CI 1.01 to 1.11; 1 study, 398 participants); however, there were no between-group differences on satisfaction score (MD 0.70, 95% CI –1.16 to 2.56; 1 study, 29 participants).\nAuthors' conclusions\nThere is very little evidence about the effectiveness of real-time video counselling for smoking cessation. The existing research does not suggest a difference between video counselling and telephone counselling for assisting people to quit smoking. However, given the very low GRADE rating due to methodological limitations in the design, imprecision of the effect estimate and inconsistency of cessation rates, the smoking cessation results should be interpreted cautiously. High-quality randomised trials comparing real-time video counselling to telephone counselling are needed to increase the confidence of the effect estimate. Furthermore, there is currently no evidence comparing real-time video counselling to a control group. Such research is needed to determine whether video counselling increases smoking cessation.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "It is unclear how video counselling compares with telephone counselling in terms of helping people to quit smoking. People who used video counselling were more likely than those who used telephone counselling to recommend the programme to a friend or someone in their family, but we found no differences in how satisfied they were, the number of video or telephone sessions completed, whether all sessions were completed and in the relationship or bond with the counsellor."
}
]
|
query-laysum | 3790 | [
{
"role": "user",
"content": "Abstract: Background\nA significant number of patients who suffer with anxiety and related disorders (that is post-traumatic stress disorder (PTSD), social anxiety disorder (SAnD), panic disorder with or without agoraphobia (PD), specific phobia (SPh) and obsessive compulsive disorder (OCD)) fail to respond optimally to first-line treatment with medication or cognitive and behavioural therapies. The addition of d-cycloserine (DCS) to cognitive and behavioural therapies may improve treatment response by impacting the glutamatergic system. This systematic review aimed to investigate the effects of adding DCS to cognitive and behavioural therapies by synthesising data from relevant randomised controlled trials and following the guidelines recommended by Cochrane.\nObjectives\nTo assess the effect of DCS augmentation of cognitive and behavioural therapies compared to placebo augmentation of cognitive and behavioural therapies in the treatment of anxiety and related disorders. Additionally, to assess the efficacy and tolerability of DCS across different anxiety and related disorders.\nSearch methods\nThis review fully incorporates studies identified from a search of the Cochrane Depression, Anxiety and Neurosis Controlled Trials Register (CCDANCTR) to 12 March 2015. This register includes relevant randomised controlled trials (RCTs) from: the Cochrane Library (all years), EMBASE (1974 to date), MEDLINE (1950 to date), PsycINFO (1967 to date), the World Health Organization’s trials portal (ICTRP) and ClinicalTrials.gov . Reference lists from previous meta-analyses and reports of RCTs were also checked. No restrictions were placed on language, setting, date or publication status.\nSelection criteria\nAll RCTs of DCS augmentation of cognitive and behavioural therapies versus placebo augmentation of cognitive and behavioural therapies for anxiety and related disorders were included.\nData collection and analysis\nTwo authors (RO and TA) independently assessed RCTs for eligibility and inclusion, extracted outcomes and risk of bias data and entered these into a customised extraction form. Investigators were contacted to obtain missing data. In addition, data entry and analysis were performed by two review authors (KSW and HB).\nMain results\nTwenty-one published RCTs, with 788 participants in outpatient settings, were included in the review. Sixteen studies had an age range of 18 to 75 years, while four investigated paediatric populations aged 8 to 17 years and one included children, adolescents and adults. The 21 RCTs investigated OCD (number of RCTs (N) = 6), PTSD (N = 5), SAnD (N = 5), SPh (N = 3) and PD (N = 2). Most information from the studies was rated as having either low risk or unclear risk of bias.\nThere was no evidence of a difference between DCS augmentation of cognitive and behavioural therapies and placebo augmentation of cognitive and behavioural therapies for the treatment of anxiety and related disorders in adults at the endpoint (treatment responders, N = 9, risk ratio (RR) 1.10; 95% confidence interval (CI) 0.89 to 1.34; number of participants (n) = 449; low quality evidence) and between 1 and 12 months follow-up (N = 7, RR 1.08; 95% CI 0.90 to 1.31; n = 383). 
DCS augmentation of cognitive and behavioural therapies was not superior to placebo augmentation of cognitive and behavioural therapies for children and adolescents, both at the endpoint (N = 4, RR 1.01; 95% CI 0.78 to 1.31; n = 121; low quality evidence) and between 3 and 12 months follow-up (N = 3, RR 0.86; 95% CI 0.67 to 1.09; n = 91).\nThere was no evidence of a difference in treatment acceptability for DCS augmentation of cognitive and behavioural therapies compared with placebo augmentation of cognitive and behavioural therapies in adults (N = 16, RR 0.88; 95% CI 0.61 to 1.25; n = 740), nor in children and adolescents (N = 4, RR 0.90; 95% CI 0.17 to 4.69; n = 131). These conclusions were based on moderate quality evidence for adults, and very low quality evidence for children and adolescents. Although the observed difference was small, it is noteworthy that there was a high efficacy of exposure-based therapies alone in the included trials. Due to the limited number of studies, subgroup analysis of moderating factors for clinical and methodological effect could not take place.\nAuthors' conclusions\nThis review found no evidence of a difference between DCS augmentation of cognitive and behavioural therapies and placebo augmentation of cognitive and behavioural therapies for treating anxiety and related disorders in children, adolescents and adults. These findings are based on low quality evidence from heterogenous studies with small sample sizes and incomplete data for clinical response, which precludes us from drawing conclusions on the use of DCS augmentation of cognitive and behavioural therapies at this stage. Given there is some promising preliminary data from individual studies, further research is necessary to assess DCS compared with placebo augmentation of cognitive and behavioural therapies, and determine mechanisms of action as well as magnitude of effect in anxiety and related disorders.\n\nGiven the provided abstract, please respond to the following query: \"Who may be interested in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- People with anxiety and related disorders.\n- Families and friends of people who suffer from anxiety and related disorders.\n- General practitioners, psychiatrists, psychologists and pharmacists.\n- Professionals working in adult as well as child and adolescent mental health services."
}
]
|
query-laysum | 3791 | [
{
"role": "user",
"content": "Abstract: Background\nTinea capitis is a common contagious fungal infection of the scalp in children. Systemic therapy is required for treatment and to prevent spread. This is an update of the original Cochrane review.\nObjectives\nTo assess the effects of systemic antifungal drugs for tinea capitis in children.\nSearch methods\nWe updated our searches of the following databases to November 2015: the Cochrane Skin Group Specialised Register, CENTRAL (2015, Issue 10), MEDLINE (from 1946), EMBASE (from 1974), LILACS (from 1982), and CINAHL (from 1981). We searched five trial registers and checked the reference lists of studies for references to relevant randomised controlled trials (RCTs). We obtained unpublished, ongoing trials and grey literature via correspondence with experts in the field and from pharmaceutical companies.\nSelection criteria\nRCTs of systemic antifungal therapy in children with normal immunity under the age of 18 with tinea capitis confirmed by microscopy, growth of fungi (dermatophytes) in culture or both.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 25 studies (N = 4449); 4 studies (N = 2637) were new to this update.\nTerbinafine for four weeks and griseofulvin for eight weeks showed similar efficacy for the primary outcome of complete (i.e. clinical and mycological) cure in three studies involving 328 participants with Trichophyton species infections (84.2% versus 79.0%; risk ratio (RR) 1.06, 95% confidence interval (CI) 0.98 to 1.15; low quality evidence).\nComplete cure with itraconazole (two to six weeks) and griseofulvin (six weeks) was similar in two studies (83.6% versus 91.0%; RR 0.92, 95% CI 0.81 to 1.05; N = 134; very low quality evidence). In two studies, there was no difference between itraconazole and terbinafine for two to three weeks treatment (73.8% versus 78.8%; RR 0.93, 95% CI 0.72 to 1.19; N = 160; low quality evidence). In three studies, there was a similar proportion achieving complete cured with two to four weeks of fluconazole or six weeks of griseofulvin (41.4% versus 52.7%; RR 0.92, 95% CI 0.81 to 1.05; N = 615; moderate quality evidence). Current evidence for ketoconazole versus griseofulvin was limited. One study favoured griseofulvin (12 weeks) because ketoconazole (12 weeks) appeared less effective for complete cure (RR 0.76, 95% CI 0.62 to 0.94; low quality evidence). However, their effects appeared to be similar when the treatment lasted 26 weeks (RR 0.95, 95% CI 0.83 to 1.07; low quality evidence). Another study indicated that complete cure was similar for ketoconazole (12 weeks) and griseofulvin (12 weeks) (RR 0.89, 95% CI 0.57 to 1.39; low quality evidence). For one trial, there was no significant difference for complete cure between fluconazole (for two to three weeks) and terbinafine (for two to three weeks) (82.0% versus 94.0%; RR 0.87, 95% CI 0.75 to 1.01; N = 100; low quality evidence). For complete cure, we did not find a significant difference between fluconazole (for two to three weeks) and itraconazole (for two to three weeks) (82.0% versus 82.0%; RR 1.00, 95% CI 0.83 to 1.20; low quality evidence).\nThis update provides new data: in children with Microsporum infections, a meta-analysis of two studies found that the complete cure was lower for terbinafine (6 weeks) than for griseofulvin (6-12 weeks) (34.7% versus 50.9%; RR 0.68, 95% CI 0.53 to 0.86; N = 334; moderate quality evidence). 
In the original review, there was no significant difference in complete cure between terbinafine (four weeks) and griseofulvin (eight weeks) in children with Microsporum infections in one small study (27.2% versus 60.0%; RR 0.45, 95% CI 0.15 to 1.35; N = 21; low quality evidence).\nOne study provides new evidence that terbinafine and griseofulvin for six weeks show similar efficacy (49.5% versus 37.8%; RR 1.18, 95% CI 0.74 to 1.88; N = 1006; low quality evidence). However, in children infected with T. tonsurans, terbinafine was better than griseofulvin (52.1% versus 35.4%; RR 1.47, 95% CI 1.22 to 1.77; moderate quality evidence). For children infected with T. violaceum, these two regimens have similar effects (41.3% versus 45.1%; RR 0.91, 95% CI 0.68 to 1.24; low quality evidence). Additionally, three weeks of fluconazole was similar to six weeks of fluconazole in one study in 491 participants infected with T. tonsurans and M. canis (30.2% versus 34.1%; RR 0.88, 95% CI 0.68 to 1.14; low quality evidence).\nThe frequency of adverse events attributed to the study drugs was similar for terbinafine and griseofulvin (9.2% versus 8.3%; RR 1.11, 95% CI 0.79 to 1.57; moderate quality evidence), and severe adverse events were rare (0.6% versus 0.6%; RR 0.97, 95% CI 0.24 to 3.88; moderate quality evidence). Adverse events for terbinafine, griseofulvin, itraconazole, ketoconazole, and fluconazole were all mild and reversible.\nAll of the included studies were at either high or unclear risk of bias in at least one domain. Using GRADE to rate the overall quality of the evidence, lower quality evidence resulted in lower confidence in the estimate of effect.\nAuthors' conclusions\nNewer treatments including terbinafine, itraconazole and fluconazole are at least similar to griseofulvin in children with tinea capitis caused by Trichophyton species. Limited evidence suggests that terbinafine, itraconazole and fluconazole have similar effects, whereas ketoconazole may be less effective than griseofulvin in children infected with Trichophyton. With some interventions the proportion achieving complete clinical cure was in excess of 90% (e.g. one study of terbinafine or griseofulvin for Trichophyton infections), but in many of the comparisons tested, the proportion cured was much lower.\nNew evidence from this update suggests that terbinafine is more effective than griseofulvin in children with T. tonsurans infection.\nHowever, in children with Microsporum infections, new evidence suggests that the effect of griseofulvin is better than terbinafine. We did not find any evidence to support a difference in terms of adherence between four weeks of terbinafine versus eight weeks of griseofulvin. Not all treatments for tinea capitis are available in paediatric formulations but all have reasonable safety profiles.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence in this review was generally low to moderate, so further research is likely to have an important effect on our confidence in these results. Some evidence was even of very low quality. We still need more and better evidence to help us understand the effectiveness and adverse events of systemic antifungal drugs for tinea capitis in children."
}
]
|
query-laysum | 3792 | [
{
"role": "user",
"content": "Abstract: Background\nPenetrating keratoplasty is a corneal transplantation procedure in which a full-thickness cornea from the host is replaced by a graft from a donor. The use of various immunosuppressants to prevent graft rejection, the most common cause of graft failure in the late postoperative period, is increasing.\nObjectives\nTo assess the effectiveness of immunosuppressants in the prophylaxis of corneal allograft rejection after high- and normal-risk keratoplasty.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2015, Issue 4), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to May 2015), EMBASE (January 1980 to May 2015), China National Knowledge Infrastructure (CNKI) (January 1913 to February 2015), VIP database (January 1989 to February 2015), Wanfang Data (www.wanfangdata.com) (January 1990 to February 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov), and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the English language databases on 18 May 2015 and the Chinese language databases on 20 February 2015.\nSelection criteria\nWe included all randomised controlled trials (RCTs) assessing the use of immunosuppressants in the prevention of graft rejection, irrespective of publication language.\nData collection and analysis\nWe used standard procedures expected by Cochrane. The primary outcome was clear graft survival at 12 months after penetrating keratoplasty. Secondary outcomes included graft rejection, best-corrected visual acuity, and quality of life. We defined 'high-risk keratoplasty' as repeat keratoplasty and other indications of reduced graft survival.\nMain results\nWe included six studies conducted in Germany (three studies), Iran, India, and China. Three studies were conducted in people undergoing high-risk keratoplasty and investigated three different comparisons: systemic mycophenolate mofetil (MMF) versus no MMF; systemic MMF versus systemic cyclosporine A (CsA); and topical CsA versus placebo. One study compared topical tacrolimus to topical steroid in people with normal-risk keratoplasty, and two studies compared topical CsA to placebo in people experiencing graft rejection after normal-risk keratoplasty. Overall, we considered the trials to be at unclear or high risk of bias.\nMMF may not improve clear graft survival (risk ratio (RR) 1.06, 95% confidence interval (CI) 0.84 to 1.33, 1 RCT, 87 participants, low-quality evidence) but may reduce the risk of graft rejection (RR 0.49, 95% CI 0.22 to 1.08, 1 RCT, 87 participants, low-quality evidence) compared to no MMF. Visual acuity was not reported.\nIn 1 study of 52 people comparing systemic MMF and systemic CsA, there were no graft failures in the first year of follow-up. Data from the longest follow-up (three years) suggest that there may be little difference in the effect of these two treatments on clear graft survival (RR 1.10, 95% CI 0.90 to 1.35, low-quality evidence). There was low-quality evidence of an increased risk of graft rejection with systemic MMF compared to systemic CsA, but with wide CIs compatible with increased risk with systemic CsA (RR 1.48, 95% CI 0.56 to 3.93, low-quality evidence). 
Visual acuity was not reported.\nOne study of 84 people comparing topical CsA to placebo did not report clear graft survival at 1 year, which suggests that all grafts survived to 1 year. This study suggests that the use of topical CsA probably leads to little or no difference in graft rejection (RR 1.00, 95% CI 0.39 to 2.58, moderate-quality evidence). At one year, the mean difference (MD) between the two groups in visual acuity was 0.07 (95% CI -0.01 to 0.15, moderate-quality evidence).\nTopical CsA probably does not have an effect on clear graft survival in people experiencing graft rejection after normal-risk keratoplasty compared to placebo (RR 1.03, 95% CI 0.96 to 1.10, 2 RCTs, 283 participants, moderate-quality evidence). There were inconsistent findings on graft rejection, with one study reporting a reduced incidence of graft rejection in the CsA group (RR 0.35, 95% CI 0.14 to 0.87, 230 participants) but the other study reporting a higher average number of episodes of graft rejection in people treated with CsA (MD 1.30, 95% CI 0.39 to 2.21, 43 participants). Overall, we judged this to be low-quality evidence due to risk of bias and inconsistency. There was no evidence for a difference in visual acuity between the 2 groups at final follow-up (approximately 18 months, range 2 to 33 months) (MD 0.04, 95% CI -0.10 to 0.18, 1 RCT, 43 participants, low-quality evidence).\nIn 1 study comparing topical tacrolimus to topical steroid, the graft survived in all of the 12 treated participants and 20 control participants at 6 months. Graft rejection was rare (0 out of 12 versus 2 out of 20) (RR 0.32, 95% CI 0.02 to 6.21, low-quality evidence). Visual acuity was not reported.\nNone of the studies reported on quality of life. We identified an unpublished trial of basiliximab (Simulect) (NCT00409656), probably completed in 2005.\nAuthors' conclusions\nCurrent evidence on the effect of immunosuppressants in the prevention of graft failure and rejection after high- and normal-risk keratoplasty is largely low quality because the number of trials was limited, and, in general, the trials were small and at risk of bias. Future trials should be large enough to detect important clinical effects, conducted with a view to minimising the risk of bias, and they should measure outcomes important to patients.\n\nGiven the provided abstract, please respond to the following query: \"Key findings\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included six randomised controlled trials that enrolled a total of 561 people. The trials were conducted in Germany (three trials), Iran, India, and China.\nIn people with high-risk keratoplasty, one study compared systemic MMF with placebo, one study compared systemic MMF with systemic CsA, and one study compared CsA eye drops versus placebo.\nIn people with normal-risk keratoplasty, one study compared tacrolimus eye drops to steroid eye drops, and two studies compared CsA eye drops to placebo in people experiencing rejection after keratoplasty. All studies reported clear graft survival, incidence of graft rejection, and adverse effects.\nWe are uncertain as to the effects of immunosuppressants in the prevention of graft failure and rejection after high- and normal-risk keratoplasty, as the number of trials is limited, and, in general, the trials are small and at risk of bias. Future trials should be large enough to detect important clinical effects, conducted with a view to minimising the risk of bias, and they should measure outcomes important to patients."
}
]
|
query-laysum | 3793 | [
{
"role": "user",
"content": "Abstract: Background\nThe main complications of elevated systemic blood pressure (BP), coronary heart disease, ischaemic stroke, and peripheral vascular disease, are related to thrombosis rather than haemorrhage. Therefore, it is important to investigate if antithrombotic therapy may be useful in preventing thrombosis-related complications in patients with elevated BP.\nObjectives\nTo conduct a systematic review of the role of antiplatelet therapy and anticoagulation in patients with elevated BP, including elevations in systolic or diastolic BP alone or together.\nTo assess the effects of antiplatelet agents on total deaths or major thrombotic events or both in these patients versus placebo or other active treatment.\nTo assess the effects of oral anticoagulants on total deaths or major thromboembolic events or both in these patients versus placebo or other active treatment.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomised controlled trials (RCTs) up to January 2021: the Cochrane Hypertension Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 12), Ovid MEDLINE (from 1946), and Ovid Embase (from 1974). The World Health Organization International Clinical Trials Registry Platform and the US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) were searched for ongoing trials.\nSelection criteria\nRCTs in patients with elevated BP were included if they were ≥ 3 months in duration and compared antithrombotic therapy with control or other active treatment.\nData collection and analysis\nTwo review authors independently extracted data for inclusion criteria, our prespecified outcomes, and sources of bias. They assessed the risks and benefits of antiplatelet agents and anticoagulants by calculating odds ratios (OR), accompanied by the 95% confidence intervals (CI). They assessed risks of bias and applied GRADE criteria.\nMain results\nSix trials (61,015 patients) met the inclusion criteria and were included in this review.\nFour trials were primary prevention (41,695 patients; HOT, JPAD, JPPP, and TPT), and two secondary prevention (19,320 patients, CAPRIE and Huynh). Four trials (HOT, JPAD, JPPP, and TPT) were placebo-controlled and two studies (CAPRIE and Huynh) included active comparators.\nFour studies compared acetylsalicylic acid (ASA) versus placebo and found no evidence of a difference for all-cause mortality (OR 0.97, 95% CI 0.87 to 1.08; 3 studies, 35,794 participants; low-certainty evidence). We found no evidence of a difference for cardiovascular mortality (OR 0.98, 95% CI 0.82 to 1.17; 3 studies, 35,794 participants; low-certainty evidence). ASA reduced the risk of all non-fatal cardiovascular events (OR 0.63, 95% CI 0.45 to 0.87; 1 study (missing data in 3 studies), 2540 participants; low-certainty evidence) and the risk of all cardiovascular events (OR 0.86, 95% CI 0.77 to 0.96; 3 studies, 35,794 participants; low-certainty evidence). ASA increased the risk of major bleeding events (OR 1.77, 95% CI 1.34 to 2.32; 2 studies, 21,330 participants; high-certainty evidence).\nOne study (CAPRIE; ASA versus clopidogrel) included patients diagnosed with hypertension (mean age 62.5 years, 72% males, 95% Caucasians, mean follow-up: 1.91 years). 
It showed no evidence of a difference for all-cause mortality (OR 1.02, 95% CI 0.91 to 1.15; 1 study, 19,143 participants; high-certainty evidence) and for cardiovascular mortality (OR 1.08, 95% CI 0.94 to 1.26; 1 study, 19,143 participants; high-certainty evidence). ASA probably reduced the risk of non-fatal cardiovascular events (OR 1.10, 95% CI 1.00 to 1.22; 1 study, 19,143 participants; high-certainty evidence) and the risk of all cardiovascular events (OR 1.08, 95% CI 1.00 to 1.17; 1 study, 19,143 participants; high-certainty evidence) when compared to clopidogrel. Clopidogrel increased the risk of major bleeding events when compared to ASA (OR 1.35, 95% CI 1.14 to 1.61; 1 study, 19,143 participants; high-certainty evidence).\nIn one study (Huynh; ASA verus warfarin) patients with unstable angina or non–ST-segment elevation myocardial infarction, with prior coronary artery bypass grafting (CABG) were included (mean age 68 years, 79.8% males, mean follow-up: 1.1 year). There was no evidence of a difference for all-cause mortality (OR 0.98, 95% CI 0.06 to 16.12; 1 study, 91 participants; low-certainty evidence). Cardiovascular mortality, non-fatal cardiovascular events, and all cardiovascular events were not available. There was no evidence of a difference for major bleeding events (OR 0.13, 95% CI 0.01 to 2.60; 1 study, 91 participants; low-certainty evidence).\nAuthors' conclusions\nThere is no evidence that antiplatelet therapy modifies mortality in patients with elevated BP for primary prevention. ASA reduced the risk of cardiovascular events and increased the risk of major bleeding events.\nAntiplatelet therapy with ASA probably reduces the risk of non-fatal and all cardiovascular events when compared to clopidogrel. Clopidogrel increases the risk of major bleeding events compared to ASA in patients with elevated BP for secondary prevention.\nThere is no evidence that warfarin modifies mortality in patients with elevated BP for secondary prevention.\nThe benefits and harms of the newer drugs glycoprotein IIb/IIIa inhibitors, clopidogrel, prasugrel, ticagrelor, and non-vitamin K antagonist oral anticoagulants for patients with high BP have not been studied in clinical trials.\nFurther RCTs of antithrombotic therapy including newer agents and complete documentation of all benefits and harms are required in patients with elevated BP.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included six trials with a combined total of 61,015 patients in this review. Four trials were primary prevention (41,695 patients; HOT, JPAD, JPPP, and TPT) and two were secondary prevention (19,320 patients; CAPRIE and Huynh). Four trials were placebo-controlled (HOT, JPAD, JPPP, and TPT) and two trials included active comparators (CAPRIE and Huynh). CAPRIE 1996 included patients from 16 countries in Europe and USA with recent ischaemic stroke, recent myocardial infarction (MI) or symptomatic peripheral vascular disease (PVD). Mean patients' age was 62.5 years. 72% of the patients were males, and 95% of the patients were Caucasians. HOT 1998 included patients from 26 countries, aged 50 to 80 years (mean 61.5 years) with hypertension. In TPT 1998, men aged between 45 and 69 years (mean 57.5 years) at high risk of ischaemic heart disease were recruited from 108 practices in the UK. Huynh 2001 included patients from Canada with mean age of 67 and with unstable angina or non-ST-segment elevation MI, with prior coronary artery bypass grafting (CABG), and who were poor candidates for a revascularisation procedure. JPAD 2012 included patients from Japan with type 2 diabetes, mean age 65 years and 55% male. JPPP 2019 included Japanese patients with atherosclerotic risk factors (hypertension, diabetes mellitus, or dyslipidaemia). Median age was 70 years and 42% of patients were men."
}
]
|
query-laysum | 3794 | [
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a group of optic neuropathies characterized by progressive degeneration of the retinal ganglion cells, axonal loss and irreversible visual field defects. Glaucoma is classified as primary or secondary, and worldwide, primary glaucoma is a leading cause of irreversible blindness. Several subtypes of glaucoma exist, and primary open-angle glaucoma (POAG) is the most common. The etiology of POAG is unknown, but current treatments aim to reduce intraocular pressure (IOP), thus preventing the onset and progression of the disease. Compared with traditional antiglaucomatous treatments, rho kinase inhibitors (ROKi) have a different pharmacodynamic. ROKi is the only current treatment that effectively lowers IOP by modulating the drainage of aqueous humor through the trabecular meshwork and Schlemm's canal. As ROKi are introduced into the market more widely, it is important to assess the efficacy and potential AEs of the treatment.\nObjectives\nTo compare the efficacy and safety of ROKi with placebo or other glaucoma medication in people diagnosed with open-angle glaucoma (OAG), primary open-angle glaucoma (POAG) or ocular hypertension (OHT).\nSearch methods\nWe used standard Cochrane methods and searched databases on 11 December 2020.\nSelection criteria\nWe included randomized clinical trials examining commercially available ROKi-based monotherapy or combination therapy compared with placebo or other IOP-lowering medical treatments in people diagnosed with (P)OAG or OHT. We included trials where ROKi were administered according to official glaucoma guidelines. There were no restrictions regarding type, year or status of the publication.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently screened studies, extracted data, and evaluated risk of bias by using Cochrane's RoB 2 tool.\nMain results\nWe included 17 trials with 4953 participants diagnosed with (P)OAG or OHT. Fifteen were multicenter trials and 15 were masked trials. All participants were aged above 18 years. Trial duration varied from 24 hours to 12 months. Trials were conducted in the USA, Canada and Japan. Sixteen trials were funded by pharmaceutical companies, and one trial provided no information about funding sources. The trials compared ROKi monotherapy (netarsudil or ripasudil) or combination therapy with latanoprost (prostaglandin analog) or timolol (beta-blocker) with placebo, timolol, latanoprost or netarsudil. Reported outcomes were IOP and safety. Meta-analyses were applied to 13 trials (IOP reduction from baseline) and 15 trials (ocular AEs).\nOf the trials evaluating IOP, seven were at low risk, three had some concerns, and three were at high risk of bias. Three trials found that netarsudil monotherapy may be superior to placebo (mean difference [MD] 3.11 mmHg, 95% confidence interval [CI] 2.59 to 3.62; I2 = 0%; 155 participants; low-certainty evidence). Evidence from three trials found that timolol may be superior to netarsudil with an MD of 0.66 mmHg (95% CI 0.41 to 0.91; I2 = 0%; 1415 participants; low-certainty evidence). 
Evidence from four trials found that latanoprost may be superior to netarsudil with an MD of 0.97 mmHg (95% CI 0.67 to 1.27; I2 = 4%; 1283 participants; moderate-certainty evidence).\nEvidence from three trials showed that, compared with monotherapy with latanoprost, combination therapy with netarsudil and latanoprost probably led to an additional pooled mean IOP reduction from baseline of 1.64 mmHg (95% CI −2.16 to −1.11; 1114 participants). Evidence from three trials showed that, compared with monotherapy with netarsudil, combination therapy with netarsudil and latanoprost probably led to an additional pooled mean IOP reduction from baseline of 2.66 mmHg (95% CI −2.98 to −2.35; 1132 participants). The certainty of evidence was moderate. One trial showed that, compared with timolol monotherapy, combination therapy with ripasudil and timolol may lead to an IOP reduction from baseline of 0.75 mmHg (95% CI −1.29 to −0.21; 208 participants). The certainty of evidence was moderate.\nOf the trials assessing total ocular AEs, three were at low risk, four had some concerns, and eight were at high risk of bias.\nWe found very low-certainty evidence that netarsudil may lead to more ocular AEs compared with placebo, with 66 more ocular AEs per 100 person-months (95% CI 28 to 103; I2 = 86%; 4 trials, 188 participants). We found low-certainty evidence that netarsudil may lead to more ocular AEs compared with latanoprost, with 29 more ocular AEs per 100 person-months (95% CI 17 to 42; I2 = 95%; 4 trials, 1286 participants).\nWe found moderate-certainty evidence that, compared with timolol, netarsudil probably led to 21 additional ocular AEs (95% CI 14 to 27; I2 = 93%; 4 trials, 1678 participants). Data from three trials (1132 participants) showed no evidence of differences in the incidence rate of AEs between combination therapy with netarsudil and latanoprost and netarsudil monotherapy (1 more event per 100 person-months, 95% CI 0 to 3); however, the certainty of evidence was low. Similarly, we found low-certainty evidence that, compared with latanoprost, combination therapy with netarsudil and latanoprost may cause 29 more ocular events per 100 person-months (95% CI 11 to 47; 3 trials, 1116 participants). We found moderate-certainty evidence that, compared with timolol monotherapy, combination therapy with ripasudil and timolol probably causes 35 more ocular events per 100 person-months (95% CI 25 to 45; 1 trial, 208 participants). In all included trials, ROKi was reportedly not associated with any particular serious AEs.\nAuthors' conclusions\nThe current evidence suggests that in people diagnosed with OHT or (P)OAG, the hypotensive effect of netarsudil may be inferior to latanoprost and slightly inferior to timolol. Combining netarsudil and latanoprost probably further reduces IOP compared with monotherapy. Netarsudil as mono- or combination therapy may result in more ocular AEs. However, the certainty of evidence was very low or low for all comparisons except timolol. In general, AEs were described as mild, transient, and reversible upon treatment discontinuation. ROKi was not associated with any particular serious AEs. Future trials of sufficient size and follow-up should be conducted to provide reliable information about glaucoma progression, relevant IOP measurements and a detailed description of AEs using similar terminology. 
This would ensure the robustness and confidence of the results and assess the intermediate- and long-term efficacy and safety of ROKi.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to examine whether the effectiveness and safety of rho kinase inhibitor eye drops were better or worse than other medicines."
}
]
|
query-laysum | 3795 | [
{
"role": "user",
"content": "Abstract: Background\nArsenic is a common environmental toxin. Exposure to arsenic (particularly its inorganic form) through contaminated food and drinking water is an important public health burden worldwide, and is associated with increased risk of neurotoxicity, congenital anomalies, cancer, and adverse neurodevelopment in children. Arsenic is excreted following methylation reactions, which are mediated by folate. Provision of folate through folic acid supplements could facilitate arsenic methylation and excretion, thereby reducing arsenic toxicity.\nObjectives\nTo assess the effects of provision of folic acid (through fortified foods or supplements), alone or in combination with other nutrients, in lessening the burden of arsenic-related health outcomes and reducing arsenic toxicity in arsenic-exposed populations.\nSearch methods\nIn September 2020, we searched CENTRAL, MEDLINE, Embase, 10 other international databases, nine regional databases, and two trials registers.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs comparing the provision of folic acid (at any dose or duration), alone or in combination with other nutrients or nutrient supplements, with no intervention, placebo, unfortified food, or the same nutrient or supplements without folic acid, in arsenic-exposed populations of all ages and genders.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included two RCTs with 822 adults exposed to arsenic-contaminated drinking water in Bangladesh. The RCTs compared 400 µg/d (FA400) or 800 µg/d (FA800) folic acid supplements, given for 12 or 24 weeks, with placebo. One RCT, a multi-armed trial, compared FA400 plus creatine (3 g/d) to creatine alone.\nWe judged both RCTs at low risk of bias in all domains. Due to differences in co-intervention, arsenic exposure, and participants' nutritional status, we could not conduct meta-analyses, and therefore, provide a narrative description of the data.\nNeither RCT reported on cancer, all-cause mortality, neurocognitive function, or congenital anomalies.\nFolic acid supplements alone versus placebo\nBlood arsenic. In arsenic-exposed individuals, FA likely reduces blood arsenic concentrations compared to placebo (2 studies, 536 participants; moderate-certainty evidence).\nFor folate-deficient and folate-replete participants who received arsenic-removal water filters as a co-intervention, FA800 reduced blood arsenic levels more than placebo (percentage change (%change) in geometric mean (GM) FA800 −17.8%, 95% confidence intervals (CI) −25.0 to −9.8; placebo GM −9.5%, 95% CI −16.5 to −1.8; 1 study, 406 participants).\nIn one study with 130 participants with low baseline plasma folate, FA400 reduced total blood arsenic (%change FA400 mean (M) −13.62%, standard error (SE) ± 2.87; placebo M −2.49%, SE ± 3.25), and monomethylarsonic acid (MMA) concentrations (%change FA400 M −22.24%, SE ± 2.86; placebo M −1.24%, SE ± 3.59) more than placebo. Inorganic arsenic (InAs) concentrations reduced in both groups (%change FA400 M −18.54%, SE ± 3.60; placebo M −10.61%, SE ± 3.38). There was little to no change in dimethylarsinic acid (DMA) in either group.\nUrinary arsenic. 
In arsenic-exposed individuals, FA likely reduces the proportion of total urinary arsenic excreted as InAs (%InAs) and MMA (%MMA) and increases the proportion excreted as DMA (%DMA) to a greater extent than placebo (2 studies, 546 participants; moderate-certainty evidence), suggesting that FA enhances arsenic methylation.\nIn a mixed folate-deficient and folate-replete population (1 study, 352 participants) receiving arsenic-removal water filters as a co-intervention, groups receiving FA had a greater decrease in %InAs (within-person change FA400 M −0.09%, 95% CI −0.17 to −0.01; FA800 M −0.14%, 95% CI −0.21 to −0.06; placebo M 0.05%, 95% CI 0.00 to 0.10), a greater decrease in %MMA (within-person change FA400 M −1.80%, 95% CI −2.53 to −1.07; FA800 M −2.60%, 95% CI −3.35 to −1.85; placebo M 0.15%, 95% CI −0.37 to 0.68), and a greater increase in %DMA (within-person change FA400 M 3.25%, 95% CI 1.81 to 4.68; FA800 M 4.57%, 95% CI 3.20 to 5.95; placebo M −1.17%, 95% CI −2.18 to −0.17), compared to placebo.\nIn 194 participants with low baseline plasma folate, FA reduced %InAs (%change FA400 M −0.31%, SE ± 0.04; placebo M −0.13%, SE ± 0.04) and %MMA (%change FA400 M −2.6%, SE ± 0.37; placebo M −0.71%, SE ± 0.43), and increased %DMA (%change FA400 M 5.9%, SE ± 0.82; placebo M 2.14%, SE ± 0.71), more than placebo.\nPlasma homocysteine: In arsenic-exposed individuals, FA400 likely reduces homocysteine concentrations to a greater extent than placebo (2 studies, 448 participants; moderate-certainty evidence), in the mixed folate-deficient and folate-replete population receiving arsenic-removal water filters as a co-intervention (%change in GM FA400 −23.4%, 95% CI −27.1 to −19.5; placebo −1.3%, 95% CI −5.3 to 3.1; 1 study, 254 participants), and participants with low baseline plasma folate (within-person change FA400 M −3.06 µmol/L, SE ± 3.51; placebo M −0.05 µmol/L, SE ± 4.31; 1 study, 194 participants).\nFA supplements plus other nutrient supplements versus nutrient supplements alone\nIn arsenic-exposed individuals who received arsenic-removal water filters as a co-intervention, FA400 plus creatine may reduce blood arsenic concentrations more than creatine alone (%change in GM FA400 + creatine −14%, 95% CI −22.2 to −5.0; creatine −7.0%, 95% CI −14.8 to 1.5; 1 study, 204 participants; low-certainty evidence); may not change urinary arsenic methylation indices (FA400 + creatine: %InAs M 13.2%, SE ± 7.0; %MMA M 10.8, SE ± 4.1; %DMA M 76, SE ± 7.8; creatine: %InAs M 14.8, SE ± 5.5; %MMA M 12.8, SE ± 4.0; %DMA M 72.4, SE ±7.6; 1 study, 190 participants; low-certainty evidence); and may reduce homocysteine concentrations to a greater extent (%change in GM FA400 + creatinine −21%, 95% CI −25.2 to −16.4; creatine −4.3%, 95% CI −9.0 to 0.7; 1 study, 204 participants; low-certainty evidence) than creatine alone.\nAuthors' conclusions\nThere is moderate-certainty evidence that FA supplements may benefit blood arsenic concentration, urinary arsenic methylation profiles, and plasma homocysteine concentration versus placebo. There is low-certainty evidence that FA supplements plus other nutrients may benefit blood arsenic and plasma homocysteine concentrations versus nutrients alone. No studies reported on cancer, all-cause mortality, neurocognitive function, or congenital anomalies. 
Given the limited number of RCTs, more studies conducted in diverse settings are needed to assess the effects of FA on arsenic-related health outcomes and arsenic toxicity in arsenic-exposed adults and children.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for studies that had been published up to September 2020."
}
]
|
query-laysum | 3796 | [
{
"role": "user",
"content": "Abstract: Background\nMaternal hypotension is the most frequent complication of spinal anaesthesia for caesarean section. It can be associated with nausea or vomiting and may pose serious risks to the mother (unconsciousness, pulmonary aspiration) and baby (hypoxia, acidosis, neurological injury).\nObjectives\nTo assess the effects of prophylactic interventions for hypotension following spinal anaesthesia for caesarean section.\nSearch methods\nWe searched Cochrane Pregnancy and Childbirth's Trials Register (9 August 2016) and reference lists of retrieved studies.\nSelection criteria\nRandomised controlled trials, including full texts and abstracts, comparing interventions to prevent hypotension with placebo or alternative treatment in women having spinal anaesthesia for caesarean section. We excluded studies if hypotension was not an outcome measure.\nData collection and analysis\nTwo review authors independently assessed study quality and extracted data from eligible studies. We report 'Summary of findings' tables using GRADE.\nMain results\nWe included 125 studies involving 9469 women. Interventions were to prevent maternal hypotension following spinal anaesthesia only, and we excluded any interventions considered active treatment. All the included studies reported the review's primary outcome. Across 49 comparisons, we identified three intervention groups: intravenous fluids, pharmacological interventions, and physical interventions. Authors reported no serious adverse effects with any of the interventions investigated. Most trials reported hypotension requiring intervention and Apgar score of less than 8 at five minutes as the only outcomes. None of the trials included in the comparisons we describe reported admission to neonatal intensive care unit.\nCrystalloid versus control (no fluids)\nFewer women experienced hypotension in the crystalloid group compared with no fluids (average risk ratio (RR) 0.84, 95% confidence interval (CI) 0.72 to 0.98; 370 women; 5 studies; low-quality evidence). There was no clear difference between groups in numbers of women with nausea and vomiting (average RR 0.19, 95% CI 0.01 to 3.91; 1 study; 69 women; very low-quality evidence). No baby had an Apgar score of less than 8 at five minutes in either group (60 babies, low-quality evidence).\nColloid versus crystalloid\nFewer women experienced hypotension in the colloid group compared with the crystalloid group (average RR 0.69, 95% CI 0.58 to 0.81; 2009 women; 27 studies; verylow-quality evidence). 
There were no clear differences between groups for maternal hypertension requiring intervention (average RR 0.64, 95% CI 0.09 to 4.46, 3 studies, 327 women; very low-quality evidence), maternal bradycardia requiring intervention (average RR 0.98, 95% CI 0.54 to 1.78, 5 studies, 413 women; very low-quality evidence), nausea and/or vomiting (average RR 0.89, 95% CI 0.66 to 1.19, 14 studies, 1058 women, I² = 29%; very low-quality evidence), neonatal acidosis (average RR 0.83, 95% CI 0.15 to 4.52, 6 studies, 678 babies; very low-quality evidence), or Apgar score of less than 8 at five minutes (average RR 0.24, 95% CI 0.03 to 2.05, 10 studies, 730 babies; very low-quality evidence).\nEphedrine versus phenylephrine\nThere were no clear differences between ephedrine and phenylephrine groups for preventing maternal hypotension (average RR 0.92, 95% CI 0.71 to 1.18; 401 women; 8 studies; very low-quality evidence) or hypertension (average RR 1.72, 95% CI 0.71 to 4.16, 2 studies, 118 women, low-quality evidence). Rates of bradycardia were lower in the ephedrine group (average RR 0.37, 95% CI 0.21 to 0.64, 5 studies, 304 women, low-quality evidence). There was no clear difference in the number of women with nausea and/or vomiting (average RR 0.76, 95% CI 0.39 to 1.49, 4 studies, 204 women, I² = 37%, very low-quality evidence), or babies with neonatal acidosis (average RR 0.89, 95% CI 0.07 to 12.00, 3 studies, 175 babies, low-quality evidence). No baby had an Apgar score of less than 8 at five minutes in either group (321 babies; low-quality evidence).\nOndansetron versus control\nOndansetron administration was more effective than control (placebo saline) for preventing hypotension requiring treatment (average RR 0.67, 95% CI 0.54 to 0.83; 740 women, 8 studies, low-quality evidence), bradycardia requiring treatment (average RR 0.49, 95% CI 0.28 to 0.87; 740 women, 8 studies, low-quality evidence), and nausea and/or vomiting (average RR 0.35, 95% CI 0.24 to 0.51; 653 women, 7 studies, low-quality evidence). There was no clear difference between the groups in rates of neonatal acidosis (average RR 0.48, 95% CI 0.05 to 5.09; 134 babies; 2 studies, low-quality evidence) or Apgar scores of less than 8 at five minutes (284 babies, low-quality evidence).\nLower limb compression versus control\nLower limb compression was more effective than control for preventing hypotension (average RR 0.61, 95% CI 0.47 to 0.78, 11 studies, 705 women, I² = 65%, very low-quality evidence). There was no clear difference between the groups in rates of bradycardia (RR 0.63, 95% CI 0.11 to 3.56, 1 study, 74 women, very low-quality evidence) or nausea and/or vomiting (average RR 0.42, 95% CI 0.14 to 1.27, 4 studies, 276 women, I² = 32%, very-low quality evidence). No baby had an Apgar score of less than 8 at five minutes in either group (130 babies, very low-quality evidence).\nWalking versus lying\nThere was no clear difference between the groups for women with hypotension requiring treatment (RR 0.71, 95% CI 0.41 to 1.21, 1 study, 37 women, very low-quality evidence).\nMany included studies reported little to no information that would allow an assessment of their risk of bias, limiting our ability to draw meaningful conclusions. GRADE assessments of the quality of evidence ranged from very low to low. We downgraded evidence for limitations in study design, imprecision, and indirectness; most studies assessed only women scheduled for elective caesarean sections.\nExternal validity also needs consideration. 
Readers should question the use of colloids in this context given the serious potential side effects such as allergy and renal failure associated with their administration.\nAuthors' conclusions\nWhile interventions such as crystalloids, colloids, ephedrine, phenylephrine, ondansetron, or lower leg compression can reduce the incidence of hypotension, none have been shown to eliminate the need to treat maternal hypotension in some women. We cannot draw any conclusions regarding rare adverse effects associated with use of the interventions (for example colloids) due to the relatively small numbers of women studied.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that no single method completely prevents hypotension in women receiving spinal anaesthesia during caesarean birth. Administering intravenous fluids or certain medications, and compressing the legs with bandages, stockings, or inflatable devices may reduce the incidence of hypotension. However, we found the quality of the evidence to be low or very low, so there is still a need for large, high-quality studies using these clinically relevant interventions, either alone or in combination.\nFuture research in this setting could focus on combinations of these effective strategies or on new innovative strategies."
}
]
|
query-laysum | 3797 | [
{
"role": "user",
"content": "Abstract: Background\nBisphosphonates are considered to be the treatment of choice for people with Paget's disease of bone. However, the effects of bisphosphonates on patient-centred outcomes have not been extensively studied. There are insufficient data to determine whether reducing and maintaining biochemical markers of bone turnover to within the normal range improves quality of life and reduces the risk of complications.\nObjectives\nTo assess the benefits and harms of bisphosphonates for adult patients with Paget's disease of bone.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, ISI Web of Knowledge and trials registers up to March 2017. We searched regulatory agency published information for rare adverse events.\nSelection criteria\nRandomised controlled trials (RCTs) of bisphosphonates as treatment for Paget's disease in adults.\nData collection and analysis\nTwo review authors independently screened search results, extracted data and assessed studies for risk of bias. We used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe included 20 trials (25 reports, 3168 participants). Of these, 10 trials (801 participants) compared bisphosphonates (etidronate, tiludronate, ibandronate, pamidronate, olpadronate, alendronate, risedronate, zoledronate) versus placebo, seven compared two bisphosphonates (992 participants), one trial compared a bisphosphonates with a bisphosphonate plus calcitonin (44 participants), and two studies, the largest trial (1331 participants) and its interventional extension study (502 participants), compared symptomatic treatment and intensive treatment where the goal was to normalise alkaline phosphatase.\nMost studies were assessed at low or unclear risk of bias. Six of 10 studies comparing bisphosphonates versus placebo were assessed at high risk of bias, mainly around incomplete outcome data and selective outcome reporting.\nParticipant populations were reasonably homogeneous in terms of age (mean age 66 to 74 years) and sex (51% to 74% male). Most studies included participants who had elevated alkaline phosphatase levels whether or not bone pain was present. Mean follow-up was six months.\nBisphosphonates versus placebo\nBisphosphonates tripled the proportion (31% versus 9%) of participants whose bone pain disappeared (RR 3.42, 95% confidence interval (CI) 1.31 to 8.90; 2 studies, 205 participants; NNT 5, 95% CI 1 to 31; moderate-quality evidence). This result is clinically important. Data were consistent when pain change was measured as any reduction (RR 1.97, 95% CI 1.29 to 3.01; 7 studies, 481 participants).\nThere was uncertainty about differences in incident fractures: 1.4% fractures occurred in the bisphosphonates group and none in the placebo group (RR 0.89, 95% CI 0.18 to 4.31; 4 studies, 356 participants; very low-quality evidence).\nNone of the studies reported data on orthopaedic surgery, quality of life or hearing thresholds.\nResults regarding adverse effects and treatment discontinuation were uncertain. There was a 64% risk of mild gastrointestinal adverse events in intervention group participants and 48% in the control group (RR 1.32, 95% CI 0.91 to 1.92; 6 studies, 376 participants; low-quality evidence). 
The likelihood of study participants discontinuing due to adverse effects was slightly higher in intervention group participants (4.4%) than the control group (4.1%) (RR 1.01, 95% CI 0.41 to 2.52; 6 studies, 517 participants; low-quality evidence). Zoledronate was associated with an increased risk of transient fever or fatigue (RR 2.57, 95% CI 1.21 to 5.44; 1 study, 176 participants; moderate-quality evidence).\nBisphosphonates versus active comparator\nMore participants reported pain relief with zoledronate than pamidronate (RR 1.30, 95% CI 1.10 to 1.53; 1 study, 89 participants; NNT 5, 95% CI 3 to 11) or risedronate (RR 1.36, 95% CI 1.06 to 1.74; 1 study, 347 participants; NNT 7, 95% CI 4 to 24; very low quality evidence). This result is clinically important.\nThere was insufficient evidence to confirm or exclude differences in adverse effects of bisphosphonates (RR 1.05, 95% CI 0.95 to 1.76; 2 studies, 437 participants; low-quality evidence) and treatment discontinuation (2 studies, 437 participants) (RR 2.04, 95% CI 0.43 to 9.59; 2 studies, 437 participants; very low-quality evidence).\nIntensive versus symptomatic treatment\nThere was no consistent evidence of difference to response in bone pain, bodily pain or quality of life in participants who received intensive versus symptomatic treatment.\nInconclusive results were observed regarding fractures and orthopaedic procedures for intensive versus symptomatic treatment (intensive treatment for fracture: RR 1.84, 95% CI 0.76 to 4.44; absolute risk 8.1% versus 5.2%; orthopaedic procedures: RR 1.58, 95% CI 0.80 to 3.11; absolute risk 5.6% versus 3.0%; 1 study, 502 participants; low-quality evidence).\nThere was insufficient evidence to confirm or exclude an important difference in adverse effects between intensive and symptomatic treatment (RR 1.05, 95% CI 0.79 to 1.41; low-quality evidence).\nThere was insufficient evidence to confirm or exclude an important difference of risk of rare adverse events (including osteonecrosis of the jaw) from the regulatory agencies databases.\nAuthors' conclusions\nWe found moderate-quality evidence that bisphosphonates improved pain in people with Paget's disease of bone when compared with placebo. We are uncertain about the results of head-to-head studies investigating bisphosphonates. We found insufficient evidence of benefit in terms of pain or quality of life from intensive treatment. Information about adverse effects was limited, but serious side effects were rare, and rate of withdrawals due to side effects was low.\n\nGiven the provided abstract, please respond to the following query: \"What is Paget's disease of bone and what are bisphosphonates?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Paget's disease of bone is a chronic problem which usually affects one or a few bones. Paget's disease causes bone renewal and repair to become abnormal; bones become weak enlarged and misshapen, leading to pain, fractures and arthritis in joints close to affected bones.\nBisphosphonates are medications that slow down the bone remodelling process."
}
]
|
query-laysum | 3798 | [
{
"role": "user",
"content": "Abstract: Background\nExercise programmes are often recommended for managing ankylosing spondylitis (AS), to reduce pain and improve or maintain functional capacity.\nObjectives\nTo assess the benefits and harms of exercise programmes for people with AS.\nSearch methods\nWe searched CENTRAL, the Cochrane Library, MEDLINE Ovid, EMBASE Ovid, CINAHL EBSCO, PEDro, Scopus, and two trials registers to December 2018. We searched reference lists of identified systematic reviews and included studies, handsearched recent relevant conference proceedings, and contacted experts in the field.\nSelection criteria\nWe included reports of randomised controlled trials (RCT) of adults with AS that compared exercise therapy programmes with an inactive control (no intervention, waiting list) or usual care.\nData collection and analysis\nWe used standard Cochrane methodology.\nMain results\nWe included 14 RCTs with 1579 participants with AS. Most participants were male (70%), the median age was 45 years (range 39 to 47), and the mean symptom duration was nine years. The most frequently used exercises were those designed to help improve strength, flexibility, stretching, and breathing. Most exercise programmes were delivered along with drug therapy or a biological agent. We judged most of the studies at unclear or high risk of bias for several domains. All 14 studies provided data obtained immediately upon completion of the exercise programme. The median exercise programme duration was 12 weeks (interquartile range (IQR) 8 to 16). Three studies (146 participants) provided data for medium-term follow-up (< 24 weeks after completion of the exercise programmes), and one (63 participants) for long-term follow-up (> 24 weeks after completion of the exercise programmes). 
Nine studies compared exercise programmes to no intervention; five studies compared them to usual care (including physiotherapy, medication, or self-management).\nExercise programmes versus no intervention\nAll data were obtained immediately upon completion of the exercise programme.\nFor physical function, measured by a self-reporting questionnaire (the Bath Ankylosing Spondylitis Functional Index (BASFI) scale, 0 to 10; lower is better), moderate-quality evidence showed no important clinically meaningful improvement with exercise programmes (mean difference (MD) -1.3, 95% confidence interval (CI) -1.7 to -0.9; 7 studies, 312 participants; absolute reduction 13%, 95% CI 17% to 9%).\nFor pain, measured on a visual analogue scale (VAS, 0 to 10, lower is better), low-quality evidence showed an important clinically meaningful reduction of pain with exercise (MD -2.1, 95% CI -3.6 to -0.6; 6 studies, 288 participants; absolute reduction 21%, 95% CI 36% to 6%).\nFor patient global assessment of disease activity, measured by a self-reporting questionnaire (the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) scale, 0 to 10, lower is better), moderate-quality evidence showed no important clinically meaningful reduction with exercise (MD -0.9, 95% CI -1.3 to -0.5; 6 studies, 262 participants; absolute reduction 9%, 95% CI 13% to 5%).\nFor spinal mobility, measured by a self-reporting questionnaire (the Bath Ankylosing Spondylitis Metrology Index (BASMI) scale, 0 to 10, lower is better), very low-quality evidence showed an improvement with exercise (MD -0.7, 95% CI -1.3 to -0.1; 5 studies, 232 participants) with no important clinically meaningful benefit (absolute reduction 7%, 95% CI 13% to 1%).\nFor fatigue, measured on a VAS (0 to 10, lower is better), very low-quality evidence showed no important clinically meaningful reduction with exercise (MD -1.4, 95% CI -2.7 to -0.1; 2 studies, 72 participants; absolute reduction 14%, 95% CI 27% to 1%).\nExercise programmes versus usual care\nAll data were obtained immediately upon completion of the exercise programme.\nFor physical function, measured by the BASFI scale, moderate-quality evidence showed an improvement with exercise (MD -0.4, 95% CI -0.6 to -0.2; 5 studies, 1068 participants). There was no important clinically meaningful benefit (absolute reduction 4%, 95% CI 6% to 2%).\nFor pain, measured on a VAS (0 to 10, lower is better), moderate-quality evidence showed a reduction of pain with exercise (MD -0.5, 95% CI -0.9 to -0.1; 2 studies, 911 participants; absolute reduction 5%, 95% CI 9% to 1%). No important clinically meaningful benefit was found.\nFor patient global assessment of disease activity, measured by the BASDAI scale, low-quality evidence showed a reduction with exercise (MD -0.7, 95% CI -1.3 to -0.1; 5 studies, 1068 participants), but it was not clinically important (absolute reduction 7%, 95% CI 13% to 1%).\nFor spinal mobility, measured by the BASMI scale, very low-quality evidence found no important clinically meaningful improvement with exercise (MD -1.2, 95% CI -2.8 to 0.5; 2 studies, 85 participants; absolute reduction 12%, 95% CI 5% less to 28% more). There was no important clinically meaningful benefit.\nNone of the studies measured fatigue.\nAdverse effects\nWe found very low-quality evidence of the effect of exercise versus either no intervention or usual care.
We are uncertain of the potential for harm of exercises, due to low event rates, and a limited number of studies reporting events.\nAuthors' conclusions\nWe found moderate- to low-quality evidence that exercise programmes probably slightly improve function, may reduce pain, and probably slightly reduce global patient assessment of disease activity, when compared with no intervention, and measured upon completion of the programme. We found moderate- to low-quality evidence that exercise programmes probably have little or no effect on improving function or reducing pain, when compared with usual care, and may have little or no effect on reducing patient assessment of disease activity, when measured upon completion of the programmes. We are uncertain whether exercise programmes improve spinal mobility, reduce fatigue, or induce adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Exercise programmes versus usual care\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Exercise probably results in little or no improved function or reduced pain (moderate-quality evidence), and may have little or no effect in reducing patient-reported disease activity (low-quality evidence). We are uncertain of the effect on spinal mobility (very low-quality evidence).\nPhysical function. People who received usual care rated their function at 3.7 points on the BASFI; those who exercised rated it 0.4 points lower (4% absolute improvement).\nPain. People who received usual care rated their pain at 3.7 points on a 10-point VAS; those who exercised rated it 0.5 points lower (5% absolute improvement).\nPatient global assessment of disease activity. People who received usual care rated their disease activity at 3.7 points on the BASDAI; those who exercised rated it 0.7 points lower (7% absolute improvement).\nSpinal mobility. People who received usual care rated their spinal mobility at 8.9 points on the BASMI; those who exercised rated it 1.2 points lower (12% absolute improvement).\nNone of the studies measured fatigue."
}
]
|
query-laysum | 3799 | [
{
"role": "user",
"content": "Abstract: Background\nTraumatic eye complaints account for 3% of all hospital emergency department visits. The most common traumatic injury to the eye is blunt trauma, which accounts for 30% of these visits. Blunt trauma frequently leads to traumatic iridocyclitis, thus causing anterior uveitis. Iridocyclitis frequently causes tearing, photophobia, eye pain, and vision loss. These symptoms are a result of the inflammatory processes and ciliary spasms to iris muscles and sphincter. The inflammatory process is usually managed with topical corticosteroids, while the ciliary spasm is blunted by dilating the pupils with topical mydriatic agents, an adjuvant therapy. However, the effectiveness of mydriatic agents has not been quantified in terms of reduction of ocular pain and visual acuity loss.\nObjectives\nTo evaluate the effectiveness and safety of topical mydriatics as adjunctive therapy to topical corticosteroids for traumatic iridocyclitis.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) which contains the Cochrane Eyes and Vision Trials Register (2019, issue 6); Ovid MEDLINE; Embase.com; Cumulative Index to Nursing and Allied Health Literature (CINAHL) Plus; PubMed; ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 12 June 2019.\nSelection criteria\nWe planned to include randomized controlled trials (RCTs) that compared topical mydriatic agents in conjunction with topical corticosteroid therapy versus topical corticosteroids alone, in participants with traumatic iridocyclitis.\nData collection and analysis\nTwo review authors (JH, MK) independently screened titles and abstracts, then full-text reports, against eligibility criteria. We planned to have two authors independently extract data from included studies. We resolved differences in opinion by discussion.\nMain results\nThere were no eligible RCTs that compared the interventions of interest in people with traumatic iridocyclitis.\nAuthors' conclusions\nWe did not find any evidence from RCTs about the efficacy of topical mydriatic agents as an adjunctive therapy with topical corticosteroids for treating traumatic iridocyclitis. In the absence of these types of studies, we cannot draw any firm conclusions. Controlled trials that compare the combined use of topical mydriatic agents and corticosteroid drops against standard corticosteroid drops alone, in people with traumatic iridocyclitis are required. These may provide evidence about the efficacy and risk of topical mydriatic drops as adjuvant therapy for traumatic iridocyclitis.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no randomized controlled trials to help us answer our question. Therefore, we do not know whether the addition of mydriatics to corticosteroids is beneficial or causes more unwanted effects than corticosteroids alone in the treatment of traumatic iridocyclitis. We need researchers to conduct future studies that compare mydriatics plus corticosteroids against corticosteroids, so that we can assess the benefits and risks of mydriatics as an additional treatment for traumatic iridocyclitis."
}
]
|
query-laysum | 3800 | [
{
"role": "user",
"content": "Abstract: Background\nVitiligo is a chronic skin disorder characterised by patchy loss of skin colour. Some people experience itching before the appearance of a new patch. It affects people of any age or ethnicity, more than half of whom develop it before the age of 20 years. There are two main types: generalised vitiligo, the common symmetrical form, and segmental, affecting only one side of the body. Around 1% of the world's population has vitiligo, a disease causing white patches on the skin. Several treatments are available. Some can restore pigment but none can cure the disease.\nObjectives\nTo assess the effects of all therapeutic interventions used in the management of vitiligo.\nSearch methods\nWe updated our searches of the following databases to October 2013: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library (2013, Issue 10), MEDLINE, Embase, AMED, PsycINFO, CINAHL and LILACS. We also searched five trials databases, and checked the reference lists of included studies for further references to relevant randomised controlled trials (RCTs).\nSelection criteria\nRandomised controlled trials (RCTs) assessing the effects of treatments for vitiligo.\nData collection and analysis\nAt least two review authors independently assessed study eligibility and methodological quality, and extracted data.\nMain results\nThis update of the 2010 review includes 96 studies, 57 from the previous update and 39 new studies, totalling 4512 participants. Most of the studies, covering a wide range of interventions, had fewer than 50 participants. All of the studies assessed repigmentation, however only five reported on all of our three primary outcomes which were quality of life, > 75% repigmentation and adverse effects. Of our secondary outcomes, six studies measured cessation of spread but none assessed long-term permanence of repigmentation resulting from treatment at two years follow-up.\nMost of the studies assessed combination therapies which generally reported better results. New interventions include seven new surgical interventions.\nWe analysed the data from 25 studies which assessed our primary outcomes. We used the effect measures risk ratio (RR), and odds ratio (OR) with their 95% confidence intervals (CI) and where N is the number of participants in the study.\nWe were only able to analyse one of nine studies assessing quality of life and this showed no statistically significant improvement between the comparators.\nNine analyses from eight studies reported >75% repigmentation. In the following studies the repigmentation was better in the combination therapy group: calcipotriol plus PUVA (psoralen with UVA light) versus PUVA (paired OR 4.25, 95% CI 1.43 to 12.64, one study, N = 27); hydrocortisone-17-butyrate plus excimer laser versus excimer laser alone (RR 2.57, 95% CI 1.20 to 5.50, one study, N = 84); oral minipulse of prednisolone (OMP) plus NB-UVB (narrowband UVB) versus OMP alone (RR 7.41, 95% CI 1.03 to 53.26, one study, N = 47); azathioprine with PUVA versus PUVA alone (RR 17.77, 95% CI 1.08 to 291.82, one study, N = 58) and 8-Methoxypsoralen (8-MOP ) plus sunlight versus psoralen (RR 2.50, 95% CI 1.06 to 5.91, one study, N = 168). 
In these three studies ginkgo biloba was better than placebo (RR 4.40, 95% CI 1.08 to 17.95, one study, N = 47); clobetasol propionate was better than PUVAsol (PUVA with sunlight) (RR 4.70, 95% CI 1.14 to 19.39, one study, N = 45); split skin grafts with PUVAsol was better than minipunch grafts with PUVAsol (RR 1.89, 95% CI 1.25 to 2.85, one study, N = 64).\nWe performed one meta-analysis of three studies, in which we found a non-significant 60% increase in the proportion of participants achieving >75% repigmentation in favour of NB-UVB compared to PUVA (RR 1.60, 95% CI 0.74 to 3.45; I² = 0%).\nStudies assessing topical preparations, in particular topical corticosteroids, reported most adverse effects. However, in combination studies it was difficult to ascertain which treatment caused these effects. We performed two analyses from a pooled analysis of three studies on adverse effects. Where NB-UVB was compared to PUVA, the NB-UVB group reported less observations of nausea in three studies (RR 0.13, 95% CI 0.02 to 0.69; I² = 0% three studies, N = 156) and erythema in two studies (RR 0.73, 95% CI 0.55 to 0.98; I² = 0%, two studies, N = 106), but not itching in two studies (RR 0.57, 95% CI 0.20 to 1.60; I² = 0%, two studies, N = 106).\nVery few studies only assessed children or included segmental vitiligo. We found one study of psychological interventions but we could not include the outcomes in our statistical analyses. We found no studies evaluating micropigmentation, depigmentation, or cosmetic camouflage.\nAuthors' conclusions\nThis review has found some evidence from individual studies to support existing therapies for vitiligo, but the usefulness of the findings is limited by the different designs and outcome measurements and lack of quality of life measures. There is a need for follow-up studies to assess permanence of repigmentation as well as high-quality randomised trials using standardised measures and which also address quality of life.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Since the last update (2010), the design and reporting of vitiligo trials have not greatly improved. Only five studies met the criteria for a well-designed trial. Poor design, the number and complexity of the treatments and the fact that many of the studies assessed individual vitiligo patches in the same participant, made comparison of the studies difficult. Consequently, we could only perform one meta-analysis of three studies comparing NB-UVB with PUVA which showed that NB-UVB has fewer side effects and is marginally better than PUVA."
}
]
|
query-laysum | 3801 | [
{
"role": "user",
"content": "Abstract: Background\nRoutine monitoring of gastric residual in preterm infants on gavage feeds is a common practice used to guide initiation and advancement of feeds. It is believed that an increase in or an altered gastric residual may be predictive of necrotising enterocolitis (NEC). Withholding monitoring of gastric residual may take away the early indicator and thus may increase the risk of NEC. However, routine monitoring of gastric residual as a guide, in the absence of uniform standards, may lead to unnecessary delay in initiation and advancement of feeds and hence might result in a delay in establishing full enteral feeds. This in turn may increase the duration of total parenteral nutrition (TPN) and central venous line usage, increasing the risk of associated complications. Furthermore, delays in establishing full enteral feeds increase the risk of extrauterine growth restriction and neurodevelopmental impairment.\nObjectives\n• To assess the efficacy and safety of routine monitoring versus no monitoring of gastric residual in preterm infants\n• To assess the efficacy and safety of routine monitoring of gastric residual based on two different criteria for interrupting feeds or decreasing feed volume in preterm infants\nSearch methods\nWe conducted searches in Cochrane CENTRAL via CRS, Ovid MEDLINE, Embase and CINAHL in February 2022. We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials (RCTs), quasi- and cluster-RCTs.\nSelection criteria\nWe selected RCTs that compared routine monitoring versus no monitoring of gastric residual and trials that used two different criteria for gastric residual to interrupt feeds in preterm infants.\nData collection and analysis\nTwo authors independently assessed trial eligibility, risk of bias and extracted data. We analysed treatment effects in individual trials and reported risk ratio (RR) for dichotomous data, and mean difference (MD) for continuous data, with respective 95% confidence intervals (CI). We calculated the number needed to treat for an additional beneficial/harmful outcome (NNTB/NNTH) for dichotomous outcomes with significant results. We used GRADE to assess the certainty of evidence.\nMain results\nWe included five studies (423 infants) in this updated review.\nRoutine monitoring versus no routine monitoring of gastric residual in preterm infants\nFour RCTs with 336 preterm infants met the inclusion criteria for this comparison. Three studies were performed in infants with birth weight of < 1500 g, while one study included infants with birth weight between 750 g and 2000 g. 
The trials were unmasked but were otherwise of good methodological quality.\nRoutine monitoring of gastric residual:\n- probably has little or no effect on the risk of NEC (RR 1.08, 95% CI 0.46 to 2.57; 334 participants, 4 studies; moderate-certainty evidence);\n- probably increases the time to establish full enteral feeds (MD 3.14 days, 95% CI 1.93 to 4.36; 334 participants, 4 studies; moderate-certainty evidence);\n- may increase the time to regain birth weight (MD 1.70 days, 95% CI 0.01 to 3.39; 80 participants, 1 study; low-certainty evidence);\n- may increase the number of infants with feed interruption episodes (RR 2.21, 95% CI 1.53 to 3.20; NNTH 3, 95% CI 2 to 5; 191 participants, 3 studies; low-certainty evidence);\n- probably increases the number of TPN days (MD 2.57 days, 95% CI 1.20 to 3.95; 334 participants, 4 studies; moderate-certainty evidence);\n- probably increases the risk of invasive infection (RR 1.50, 95% CI 1.02 to 2.19; NNTH 10, 95% CI 5 to 100; 334 participants, 4 studies; moderate-certainty evidence);\n- may result in little or no difference in all-cause mortality before hospital discharge (RR 2.14, 95% CI 0.77 to 5.97; 273 participants, 3 studies; low-certainty evidence).\nQuality and volume of gastric residual compared to quality of gastric residual alone for feed interruption in preterm infants\nOne trial with 87 preterm infants met the inclusion criteria for this comparison. The trial included infants with 1500 g to 2000 g birth weight.\nUsing two different criteria of gastric residual for feed interruption:\n- may result in little or no difference in the incidence of NEC (RR 5.35, 95% CI 0.26 to 108.27; 87 participants; low-certainty evidence);\n- may result in little or no difference in time to establish full enteral feeds (MD -0.10 days, 95% CI -0.91 to 0.71; 87 participants; low-certainty evidence);\n- may result in little or no difference in time to regain birth weight (MD 1.00 days, 95% CI -0.37 to 2.37; 87 participants; low-certainty evidence);\n- may result in little or no difference in number of TPN days (MD 0.80 days, 95% CI -0.78 to 2.38; 87 participants; low-certainty evidence);\n- may result in little or no difference in the risk of invasive infection (RR 5.35, 95% CI 0.26 to 108.27; 87 participants; low-certainty evidence);\n- may result in little or no difference in all-cause mortality before hospital discharge (RR 3.21, 95% CI 0.13 to 76.67; 87 participants; low-certainty evidence).\n- we are uncertain about the effect of using two different criteria of gastric residual on the risk of feed interruption episodes (RR 3.21, 95% CI 0.13 to 76.67; 87 participants; very low-certainty evidence).\nAuthors' conclusions\nModerate-certainty evidence suggests routine monitoring of gastric residual has little or no effect on the incidence of NEC. Moderate-certainty evidence suggests monitoring gastric residual probably increases the time to establish full enteral feeds, the number of TPN days and the risk of invasive infection. Low-certainty evidence suggests monitoring gastric residual may increase the time to regain birth weight and the number of feed interruption episodes, and may have little or no effect on all-cause mortality before hospital discharge. Further RCTs are warranted to assess the effect on long-term growth and neurodevelopmental outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to look for evidence from studies that assessed whether routine monitoring of stomach aspirates is beneficial or harmful in premature babies."
}
]
|
query-laysum | 3802 | [
{
"role": "user",
"content": "Abstract: Background\nAtypical haemolytic uraemic syndrome (aHUS) is a rare disorder characterised by thrombocytopenia, microangiopathic haemolytic anaemia, and acute kidney injury. The condition is primarily caused by inherited or acquired dysregulation of complement regulatory proteins with ~40% of those affected aged < 18 years. Historically, kidney failure and death were common outcomes, however, improved understanding of the condition has led to discovery of novel therapies.\nObjectives\nTo evaluate the benefits and harms of interventions for aHUS.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies for randomised controlled studies (RCTs) up to 3 September 2020 using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov. MEDLINE(OVID) 1946 to 27 July 2020 and EMBASE (OVID) 1974 to 27 July 2020 were searched for non-RCTs.\nSelection criteria\nAll randomised and non-randomised clinical trials comparing an intervention with placebo, an intervention with supportive therapy, or two or more interventions for aHUS were included. Given the rare nature of the condition in question, prospective single-arm studies of any intervention for aHUS were also included.\nData collection and analysis\nTwo authors independently extracted pre-specified data from eligible studies and evaluated risk of bias using a newly developed tool based on existing Cochrane criteria. As statistical meta-analysis was not appropriate, qualitative analysis of data was then performed.\nMain results\nWe included five single-arm studies, all of which evaluated terminal complement inhibition for the treatment of aHUS. Four studies evaluated the short-acting C5 inhibitor eculizumab and one study evaluated the longer-acting C5 inhibitor ravulizumab. All included studies within the review were of non-randomised, single-arm design. Thus, risk of bias is high, and it is challenging to draw firm conclusions from this low-quality evidence. One hundred patients were included within three primary studies evaluating eculizumab, with further data reported from 37 patients in a secondary study. Fifty-eight patients were included in the ravulizumab study. After 26 weeks of eculizumab therapy there were no deaths and a 70% reduction in the number of patients requiring dialysis. Complete thrombotic microangiopathic (TMA) response was observed in 60% of patients at 26 weeks and 65% at two years. After 26 weeks of ravulizumab therapy four patients had died (7%) and complete TMA response was observed in 54% of patients. Substantial improvements were seen in estimated glomerular filtration rate and health-related quality of life in both eculizumab and ravulizumab studies. Serious adverse events occurred in 42% of patients, and meningococcal infection occurred in two patients, both treated with eculizumab.\nAuthors' conclusions\nWhen compared with historical data, terminal complement inhibition appears to offer favourable outcomes in patients with aHUS, based upon very low-quality evidence drawn from five single-arm studies. 
It is unlikely that an RCT will be conducted in aHUS and therefore careful consideration of future single-arm data as well as longer term follow-up data will be required to better understand treatment duration, adverse outcomes and risk of disease recurrence associated with terminal complement inhibition.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Haemolytic uraemic syndrome (HUS) is a condition involving blockages in small blood vessels leading to destruction of blood cells and dysfunction of several organs, most notably the kidneys. It is commonly caused by infections, often E.Coli, and can be associated with diarrhoea. A rare form of HUS known as atypical HUS (aHUS) is a more aggressive form of the disease caused by inherited or acquired abnormalities of proteins involved in controlling an aspect of our immune system known as “complement”. Almost half of cases involve patients aged less than 18 years. In the past a diagnosis of aHUS was associated with a poor prognosis with patients often progressing to kidney failure and death. More recently, an improved understanding of the condition has led to better treatments. This review aims to evaluate the usefulness of these treatments by systematically examining the available evidence in order to find out the most effective treatments available for aHUS."
}
]
|
query-laysum | 3803 | [
{
"role": "user",
"content": "Abstract: Background\nChronic obstructive pulmonary disease (COPD) is a respiratory condition causing accumulation of mucus in the airways, cough, and breathlessness; the disease is progressive and is the fourth most common cause of death worldwide. Current treatment strategies for COPD are multi-modal and aim to reduce morbidity and mortality and increase patients' quality of life by slowing disease progression and preventing exacerbations. Fixed-dose combinations (FDCs) of a long-acting beta2-agonist (LABA) plus a long-acting muscarinic antagonist (LAMA) delivered via a single inhaler are approved by regulatory authorities in the USA, Europe, and Japan for the treatment of COPD. Several LABA/LAMA FDCs are available and recent meta-analyses have clarified their utility versus their mono-components in COPD. Evaluation of the efficacy and safety of once-daily LABA/LAMA FDCs versus placebo will facilitate the comparison of different FDCs in future network meta-analyses.\nObjectives\nWe assessed the evidence for once-daily LABA/LAMA combinations (delivered in a single inhaler) versus placebo on clinically meaningful outcomes in patients with stable COPD.\nSearch methods\nWe identified trials from Cochrane Airways' Specialised Register (CASR) and also conducted a search of the US National Institutes of Health Ongoing Trials Register ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization International Clinical Trials Registry Platform (apps.who.int/trialsearch). We searched CASR and trial registries from their inception to 3 December 2018; we imposed no restriction on language of publication.\nSelection criteria\nWe included parallel-group and cross-over randomised controlled trials (RCTs) comparing once-daily LABA/LAMA FDC versus placebo. We included studies reported as full-text, those published as abstract only, and unpublished data. We excluded very short-term trials with a duration of less than 3 weeks. We included adults (≥ 40 years old) with a diagnosis of stable COPD. We included studies that allowed participants to continue using their ICS during the trial as long as the ICS was not part of the randomised treatment.\nData collection and analysis\nTwo review authors independently screened the search results to determine included studies, extracted data on prespecified outcomes of interest, and assessed the risk of bias of included studies; we resolved disagreements by discussion with a third review author. Where possible, we used a random-effects model to meta-analyse extracted data. We rated all outcomes using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) system and presented results in 'Summary of findings’ tables.\nMain results\nWe identified and included 22 RCTs randomly assigning 8641 people with COPD to either once-daily LABA/LAMA FDC (6252 participants) or placebo (3819 participants); nine studies had a cross-over design. Studies had a duration of between three and 52 weeks (median 12 weeks). The mean age of participants across the included studies ranged from 59 to 65 years and in 21 of 22 studies, participants had GOLD stage II or III COPD. Concomitant inhaled corticosteroid (ICS) use was permitted in all of the included studies (where stated); across the included studies, between 28% to 58% of participants were using ICS at baseline. 
Six studies evaluated the once-daily combination of IND/GLY (110/50 μg), seven studies evaluated TIO/OLO (2.5/5 or 5/5 μg), eight studies evaluated UMEC/VI (62.5/5, 125/25 or 500/25 μg) and one study evaluated ACD/FOR (200/6, 200/12 or 200/18 μg); all LABA/LAMA combinations were compared with placebo.\nThe risk of bias was generally considered to be low or unknown (insufficient detail provided), with only one study per domain considered to have a high risk of bias except for the domain 'other bias' which was determined to be at high risk of bias in four studies (in three studies, disease severity was greater at baseline in participants receiving LABA/LAMA compared with participants receiving placebo, which would be expected to shift the treatment effect in favour of placebo).\nCompared to the placebo, the pooled results for the primary outcomes for the once-daily LABA/LAMA arm were as follows: all-cause mortality, OR 1.88 (95% CI 0.81 to 4.36, low-certainty evidence); all-cause serious adverse events (SAEs), OR 1.06 (95% CI 0.88 to 1.28, high-certainty evidence); acute exacerbations of COPD (AECOPD), OR 0.53 (95% CI 0.36 to 0.78, moderate-certainty evidence); adjusted St George's Respiratory Questionnaire (SGRQ) score, MD -4.08 (95% CI -4.80 to -3.36, high-certainty evidence); proportion of SGRQ responders, OR 1.75 (95% CI 1.54 to 1.99). Compared with placebo, the pooled results for the secondary outcomes for the once-daily LABA/LAMA arm were as follows: adjusted trough forced expiratory volume in one second (FEV1), MD 0.20 L (95% CI 0.19 to 0.21, moderate-certainty evidence); adjusted peak FEV1, MD 0.31 L (95% CI 0.29 to 0.32, moderate-certainty evidence); and all-cause AEs, OR 0.95 (95% CI 0.86 to 1.04; high-certainty evidence). No studies reported data for the 6-minute walk test. The results were generally consistent across subgroups for different LABA/LAMA combinations and doses.\nAuthors' conclusions\nCompared with placebo, once-daily LABA/LAMA (either IND/GLY, UMEC/VI or TIO/OLO) via a combination inhaler is associated with a clinically significant improvement in lung function and health-related quality of life in patients with mild-to-moderate COPD; UMEC/VI appears to reduce the rate of exacerbations in this population. These conclusions are supported by moderate or high certainty evidence based on studies with an observation period of up to one year.\n\nGiven the provided abstract, please respond to the following query: \"Background to the review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "COPD is a disease of the lungs and is the fourth most common cause of death worldwide. People with COPD experience symptoms of cough, breathlessness and a build up of mucus, which become worse over time. Current treatments for COPD aim to manage these symptoms and improve the quality of life of people with the disease.\nA combination of a LABA plus a LAMA taken once-daily in a single inhaler (LABA/LAMA) has been shown to be more effective than taking each separately in individual inhalers. Several different combinations of inhaled LABA and LAMA are available (e.g. indacaterol/glycopyrronium, olodaterol/tiotropium, formoterol/aclidinium, and vilanterol/umeclidinium) and are used for the treatment of COPD. By gathering information from clinical trials that compare once-daily LABA/LAMA with placebo in a dummy inhaler we will provide information to help future research decide which combination is best for treating people with COPD."
}
]
|
query-laysum | 3804 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is an inherited disorder of hemoglobin, resulting in abnormal red blood cells. These are rigid and may block blood vessels leading to acute painful crises and other complications. Recent research has focused on therapies to rehydrate the sickled cells by reducing the loss of water and ions from them. Little is known about the effectiveness and safety of such drugs. This is an updated version of a previously published review.\nObjectives\nTo assess the relative risks and benefits of drugs to rehydrate sickled red blood cells.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register. We also searched online trials registries for any ongoing trials (01 July 2018).\nLast search of the Group's Haemoglobinopathies Trials Register: 08 October 2018.\nSelection criteria\nRandomized or quasi-randomized controlled trials of drugs to rehydrate sickled red blood cells compared to placebo or an alternative treatment.\nData collection and analysis\nBoth authors independently selected studies for inclusion, assessed study quality and extracted data.\nMain results\nOf the 51 studies identified, three met the inclusion criteria, including 524 people with sickle cell disease aged between 12 and 65 years of age. One study tested the effectiveness of zinc sulphate as compared to placebo and the remaining two assessed senicapoc versus placebo. No deaths were seen in any of the studies (low-quality evidence). The zinc sulphate study showed a significant reduction in painful crises (in a total of 145 participants) over one and a half years, mean difference -2.83 (95% confidence interval -3.51 to -2.15) (moderate-quality evidence). However, analysis was restricted due to limited statistical data. Changes to red blood cell parameters and blood counts were inconsistent (very low-quality evidence). No serious adverse events were noted in the study. The Phase II dose-finding study of senicapoc (a Gardos channel blocker) compared to placebo showed that the high dose senicapoc showed significant improvement in change in hemoglobin level, the number and proportion of dense red blood cells, red blood cell count and indices and hematocrit value (very low-quality evidence). The results with low-dose senicapoc were similar to the high-dose senicapoc group but of lesser magnitude. There was no difference in the frequency of painful crises between the three groups (low-quality evidence). A subsequent Phase III study of senicapoc was terminated early since there was no difference observed between the treatment and control groups in the primary end point of painful crises.\nAuthors' conclusions\nWhile the results of zinc for reducing sickle-related crises are encouraging, larger and longer-term multicenter studies are needed to evaluate the effectiveness of this therapy for people with sickle cell disease.\nWhile the Phase II and the prematurely terminated phase III studies of senicapoc showed that the drug improved red blood cell survival (depending on dose), this did not lead to fewer painful crises.\nGiven this is no longer an active area of research, this review will no longer be regularly updated.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was mixed across outcomes."
}
]
|
query-laysum | 3805 | [
{
"role": "user",
"content": "Abstract: Background\nMany people with cancer experience moderate to severe pain that requires treatment with strong opioids, such as oxycodone and morphine. Strong opioids are, however, not effective for pain in all people, neither are they well tolerated by all people. The aim of this review was to assess whether oxycodone is associated with better pain relief and tolerability than other analgesic options for adults with cancer pain. This is an updated Cochrane review previously published in 2017.\nObjectives\nTo assess the effectiveness and tolerability of oxycodone by any route of administration for pain in adults with cancer.\nSearch methods\nFor this update, we searched the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE and MEDLINE In-Process (Ovid), Embase (Ovid), Science Citation Index, Conference Proceedings Citation Index - Science (ISI Web of Science), BIOSIS (ISI), and PsycINFO (Ovid) to November 2021. We also searched four trial registries, checked the bibliographic references of relevant studies, and contacted the authors of the included studies. We applied no language, date, or publication status restrictions.\nSelection criteria\nWe included randomised controlled trials (parallel-group or cross-over) comparing oxycodone (any formulation or route of administration) with placebo or an active drug (including oxycodone) for cancer background pain in adults by examining pain intensity/relief, adverse events, quality of life, and participant preference.\nData collection and analysis\nTwo review authors independently sifted the search, extracted data and assessed the included studies using standard Cochrane methodology. We meta-analysed pain intensity data using the generic inverse variance method, and pain relief and adverse events using the Mantel-Haenszel method, or summarised these data narratively along with the quality of life and participant preference data. We assessed the overall certainty of the evidence using GRADE.\nMain results\nFor this update, we identified 19 new studies (1836 participants) for inclusion. In total, we included 42 studies which enrolled/randomised 4485 participants, with 3945 of these analysed for efficacy and 4176 for safety. The studies examined a number of different drug comparisons.\nControlled-release (CR; typically taken every 12 hours) oxycodone versus immediate-release (IR; taken every 4-6 hours) oxycodone\nPooled analysis of three of the four studies comparing CR oxycodone to IR oxycodone suggest that there is little to no difference between CR and IR oxycodone in pain intensity (standardised mean difference (SMD) 0.12, 95% confidence interval (CI) -0.1 to 0.34; n = 319; very low-certainty evidence). The evidence is very uncertain about the effect on adverse events, including constipation (RR 0.71, 95% CI 0.45 to 1.13), drowsiness/somnolence (RR 1.03, 95% CI 0.69 to 1.54), nausea (RR 0.85, 95% CI 0.56 to 1.28), and vomiting (RR 0.66, 95% CI 0.38 to 1.15) (very low-certainty evidence). There were no data available for quality of life or participant preference, however, three studies suggested that treatment acceptability may be similar between groups (low-certainty evidence).\nCR oxycodone versus CR morphine\nThe majority of the 24 studies comparing CR oxycodone to CR morphine reported either pain intensity (continuous variable), pain relief (dichotomous variable), or both. 
Pooled analysis indicated that pain intensity may be lower (better) after treatment with CR morphine than CR oxycodone (SMD 0.14, 95% CI 0.01 to 0.27; n = 882 in 7 studies; low-certainty evidence). This SMD is equivalent to a difference of 0.27 points on the Brief Pain Inventory scale (0-10 numerical rating scale), which is not clinically significant. Pooled analyses also suggested that there may be little to no difference in the proportion of participants achieving complete or significant pain relief (RR 1.02, 95% CI 0.95 to 1.10; n = 1249 in 13 studies; low-certainty evidence).\nThe RR for constipation (RR 0.75, 95% CI 0.66 to 0.86) may be lower after treatment with CR oxycodone than after CR morphine. Pooled analyses showed that, for most of the adverse events, the CIs were wide, including no effect as well as potential benefit and harm: drowsiness/somnolence (RR 0.88, 95% CI 0.74 to 1.05), nausea (RR 0.93, 95% CI 0.77 to 1.12), and vomiting (RR 0.81, 95% CI 0.63 to 1.04) (low or very low-certainty evidence). No data were available for quality of life. The evidence is very uncertain about the treatment effects on treatment acceptability and participant preference.\nOther comparisons\nThe remaining studies either compared oxycodone in various formulations or compared oxycodone to different alternative opioids. None found any clear superiority or inferiority of oxycodone for cancer pain, neither as an analgesic agent nor in terms of adverse event rates and treatment acceptability. The certainty of this evidence base was limited by the high or unclear risk of bias of the studies and by imprecision due to low or very low event rates or participant numbers for many outcomes.\nAuthors' conclusions\nThe conclusions have not changed since the previous version of this review (in 2017). We found low-certainty evidence that there may be little to no difference in pain intensity, pain relief and adverse events between oxycodone and other strong opioids including morphine, commonly considered the gold standard strong opioid. Although we identified a benefit for pain relief in favour of CR morphine over CR oxycodone, this was not clinically significant and did not persist following sensitivity analysis and so we do not consider this important. However, we found that constipation and hallucinations occurred less often with CR oxycodone than with CR morphine; but the certainty of this evidence was either very low or the finding did not persist following sensitivity analysis, so these findings should be treated with utmost caution. Our conclusions are consistent with other reviews and suggest that, while the reliability of the evidence base is low, given the absence of important differences within this analysis, it seems unlikely that larger head-to-head studies of oxycodone versus morphine are justified, although well-designed trials comparing oxycodone to other strong analgesics may well be useful. For clinical purposes, oxycodone or morphine can be used as first-line oral opioids for relief of cancer pain in adults.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In general, the studies showed no difference between oxycodone taken every 4-6 (immediate-release) or every 12 (controlled-release) hours. In general, the studies also showed no difference between oxycodone and other strong pain killers such as morphine.\nAll the strong painkillers examined in the studies also have a number of unwanted effects, such as vomiting, constipation, and drowsiness. Overall, these do not differ between oxycodone and the other strong painkillers. Hallucinations (where people experience imaginary things, e.g. hearing voices) are much less common as a side effect of strong painkillers, and we found that they were less likely with oxycodone than with morphine.\nOverall, we found that the current evidence is comprised of studies that contained small numbers of people, of which many (12.2%) did not finish the studies. However, since there was very little difference between oxycodone and morphine, more research in this area is unlikely to be undertaken. This is partly because recruitment and retention of participants in this context is challenging. Studies looking at oxycodone compared to other strong painkillers may be useful."
}
]
|
query-laysum | 3806 | [
{
"role": "user",
"content": "Abstract: Background\nEstablishment of a secure airway is a critical part of neonatal resuscitation in the delivery room and the neonatal intensive care unit. Videolaryngoscopy has the potential to facilitate successful endotracheal intubation, and decrease adverse consequences of a delay in airway stabilization. Videolaryngoscopy may enhance visualization of the glottis and intubation success in neonates. This is an update of a review first published in 2015, and updated in 2018.\nObjectives\nTo determine the effectiveness and safety of videolaryngoscopy compared to direct laryngoscopy in decreasing the time and attempts required for endotracheal intubation and increasing the success rate on first intubation attempt in neonates (0 to 28 days of age).\nSearch methods\nIn November 2022, we updated the search for trials evaluating videolaryngoscopy for neonatal endotracheal intubation in CENTRAL, MEDLINE, Embase, CINAHL, and BIOSIS. We also searched abstracts of the Pediatric Academic Societies, clinical trials registries (www.clinicaltrials.gov; www.controlled-trials.com), and reference lists of relevant studies.\nSelection criteria\nRandomized controlled trials (RCTs), quasi-RCTs, cluster-RCTs, or cross-over trials, in neonates (0 to 28 days of age), evaluating videolaryngoscopy with any device used for endotracheal intubation compared with direct laryngoscopy.\nData collection and analysis\nThree review authors performed data collection and analysis, as recommended by Cochrane Neonatal. Two review authors independently assessed studies identified by the search strategy for inclusion.\nWe used the GRADE approach to assess the certainty of the evidence.\nMain results\nThe updated search yielded 7786 references, from which we identified five additional RCTs for inclusion, seven ongoing trials, and five studies awaiting classification. Three studies were included in the previous version of the review. For this update, we included eight studies, which provided data on 759 intubation attempts in neonates. We included neonates of either sex, who were undergoing endotracheal intubation in international hospitals. Different videolaryngoscopy devices (including C-MAC, Airtraq, and Glidescope) were used in the studies.\nFor the primary outcomes; videolaryngoscopy may not reduce the time required for successful intubation when compared with direct laryngoscopy (mean difference [MD] 0.74, 95% confidence interval [CI] -0.19 to 1.67; 5 studies; 505 intubations; low-certainty evidence). Videolaryngoscopy may result in fewer intubation attempts (MD -0.08, 95% CI -0.15 to 0.00; 6 studies; 659 intubations; low-certainty evidence). Videolaryngoscopy may increase the success of intubation at the first attempt (risk ratio [RR] 1.24, 95% CI 1.13 to 1.37; risk difference [RD] 0.14, 95% CI 0.08 to 0.20; number needed to treat for an additional beneficial outcome [NNTB] 7, 95% CI 5 to 13; 8 studies; 759 intubation attempts; low-certainty evidence).\nFor the secondary outcomes; the evidence is very uncertain about the effect of videolaryngoscopy on desaturation or bradycardia episodes, or both, during intubation (RR 0.94, 95% CI 0.38 to 2.30; 3 studies; 343 intubations; very-low certainty evidence). Videolaryngoscopy may result in little to no difference in the lowest oxygen saturations during intubation compared with direct laryngoscopy (MD -0.76, 95% CI -5.74 to 4.23; 2 studies; 359 intubations; low-certainty evidence). 
Videolaryngoscopy likely results in a slight reduction in the incidence of airway trauma during intubation attempts compared with direct laryngoscopy (RR 0.21, 95% CI 0.05 to 0.79; RD -0.04, 95% CI -0.07 to -0.01; NNTB 25, 95% CI 14 to 100; 5 studies; 467 intubations; moderate-certainty evidence).\nThere were no data available on other adverse effects of videolaryngoscopy. We found a high risk of bias in areas of allocation concealment and performance bias in the included studies.\nAuthors' conclusions\nVideolaryngoscopy may increase the success of intubation on the first attempt and may result in fewer intubation attempts, but may not reduce the time required for successful intubation (low-certainty evidence). Videolaryngoscopy likely results in a reduced incidence of airway-related adverse effects (moderate-certainty evidence).\nThese results suggest that videolaryngoscopy may be more effective and potentially reduce harm when compared to direct laryngoscopy for endotracheal intubation in neonates.\nWell-designed, adequately powered RCTs are necessary to confirm the efficacy and safety of videolaryngoscopy in neonatal intubation.\n\nGiven the provided abstract, please respond to the following query: \"What is videolaryngoscopy?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Seeing the airway by a video while placing the breathing tube is called videolaryngoscopy. This may make it easier and safer to place the breathing tube. This also may help trainees when they are learning this life-saving skill."
}
]
|
query-laysum | 3807 | [
{
"role": "user",
"content": "Abstract: Background\nDiabetic ketoacidosis (DKA) is an acute, life-threatening complication of uncontrolled diabetes that mainly occurs in individuals with autoimmune type 1 diabetes, but it is not uncommon in some people with type 2 diabetes. The treatment of DKA is traditionally accomplished by the administration of intravenous infusion of regular insulin that is initiated in the emergency department and continued in an intensive care unit or a high-dependency unit environment. It is unclear whether people with DKA should be treated with other treatment modalities such as subcutaneous rapid-acting insulin analogues.\nObjectives\nTo assess the effects of subcutaneous rapid-acting insulin analogues for the treatment of diabetic ketoacidosis.\nSearch methods\nWe identified eligible trials by searching MEDLINE, PubMed, EMBASE, LILACS, CINAHL, and the Cochrane Library. We searched the trials registers WHO ICTRP Search Portal and ClinicalTrials.gov. The date of last search for all databases was 27 October 2015. We also examined reference lists of included randomised controlled trials (RCTs) and systematic reviews, and contacted trial authors.\nSelection criteria\nWe included trials if they were RCTs comparing subcutaneous rapid-acting insulin analogues versus standard intravenous infusion in participants with DKA of any age or sex with type 1 or type 2 diabetes, and in pregnant women.\nData collection and analysis\nTwo review authors independently extracted data, assessed studies for risk of bias, and evaluated overall study quality utilising the GRADE instrument. We assessed the statistical heterogeneity of included studies by visually inspecting forest plots and quantifying the diversity using the I² statistic. We synthesised data using random-effects model meta-analysis or descriptive analysis, as appropriate.\nMain results\nFive trials randomised 201 participants (110 participants to subcutaneous rapid-acting insulin analogues and 91 to intravenous regular insulin). The criteria for DKA were consistent with the American Diabetes Association criteria for mild or moderate DKA. The underlying cause of DKA was mostly poor compliance with diabetes therapy. Most trials did not report on type of diabetes. Younger diabetic participants and children were underrepresented in our included trials (one trial only). Four trials evaluated the effects of the rapid-acting insulin analogue lispro, and one the effects of the rapid-acting insulin analogue aspart. The mean follow-up period as measured by mean hospital stay ranged between two and seven days. Overall, risk of bias of the evaluated trials was unclear in many domains and high for performance bias for the outcome measure time to resolution of DKA.\nNo deaths were reported in the included trials (186 participants; 3 trials; moderate- (insulin lispro) to low-quality evidence (insulin aspart)). There was very low-quality evidence to evaluate the effects of subcutaneous insulin lispro versus intravenous regular insulin on the time to resolution of DKA: mean difference (MD) 0.2 h (95% CI -1.7 to 2.1); P = 0.81; 90 participants; 2 trials. In one trial involving children with DKA, the time to reach a glucose level of 250 mg/dL was similar between insulin lispro and intravenous regular insulin. There was very low-quality evidence to evaluate the effects of subcutaneous insulin aspart versus intravenous regular insulin on the time to resolution of DKA: MD -1 h (95% CI -3.2 to 1.2); P = 0.36; 30 participants; 1 trial. 
There was low-quality evidence to evaluate the effects of subcutaneous rapid-acting insulin analogues versus intravenous regular insulin on hypoglycaemic episodes: 6 of 80 insulin lispro-treated participants compared with 9 of 76 regular insulin-treated participants reported hypoglycaemic events; risk ratio (RR) 0.59 (95% CI 0.23 to 1.52); P = 0.28; 156 participants; 4 trials. For insulin aspart compared with regular insulin, RR for hypoglycaemic episodes was 1.00 (95% CI 0.07 to 14.55); P = 1.0; 30 participants; 1 trial; low-quality evidence. Socioeconomic effects as measured by length of mean hospital stay for insulin lispro compared with regular insulin showed a MD of -0.4 days (95% CI -1 to 0.2); P = 0.22; 90 participants; 2 trials; low-quality evidence and for insulin aspart compared with regular insulin 1.1 days (95% CI -3.3 to 1.1); P = 0.32; low-quality evidence. Data on morbidity were limited, but no specific events were reported for the comparison of insulin lispro with regular insulin. No trial reported on adverse events other than hypoglycaemic episodes, and no trial investigated patient satisfaction.\nAuthors' conclusions\nOur review, which provided mainly data on adults, suggests on the basis of mostly low- to very low-quality evidence that there are neither advantages nor disadvantages when comparing the effects of subcutaneous rapid-acting insulin analogues versus intravenous regular insulin for treating mild or moderate DKA.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Rapid-acting insulin analogues (artificial insulin such as insulin lispro, insulin aspart, or insulin glulisine) act more quickly than regular human insulin. In people with a specific type of life-threatening diabetic coma due to uncontrolled diabetes, called diabetic ketoacidosis, prompt administration of intravenous regular insulin is standard therapy. The rapid-acting insulin analogues, if injected subcutaneously, act faster than subcutaneously administered regular insulin. The need for a continuous intravenous infusion, an intervention that usually requires admission to an intensive care unit, can thereby be avoided. This means that subcutaneously given insulin analogues for diabetic ketoacidosis might be applied in the emergency department and a general medicine ward."
}
]
|
query-laysum | 3808 | [
{
"role": "user",
"content": "Abstract: Background\nOvarian cancer is the leading cause of death from gynaecological cancer in developed countries. Surgery and chemotherapy are considered its mainstay of treatment and the completeness of surgery is a major prognostic factor for survival in these women. Currently, computed tomography (CT) is used to preoperatively assess tumour resectability. If considered feasible, women will be scheduled for primary debulking surgery (i.e. surgical efforts to remove the bulk of tumour with the aim of leaving no visible (macroscopic) tumour). If primary debulking is not considered feasible (i.e. the tumour load is too extensive), women will receive neoadjuvant chemotherapy to reduce tumour load and subsequently undergo (interval) surgery. However, CT is imperfect in assessing tumour resectability, so additional imaging modalities can be considered to optimise treatment selection.\nObjectives\nTo assess the diagnostic accuracy of fluorodeoxyglucose-18 (FDG) PET/CT, conventional and diffusion-weighted (DW) MRI as replacement or add-on to abdominal CT, for assessing tumour resectability at primary debulking surgery in women with stage III to IV epithelial ovarian/fallopian tube/primary peritoneal cancer.\nSearch methods\nWe searched MEDLINE and Embase (OVID) for potential eligible studies (1946 to 23 February 2017). Additionally, ClinicalTrials.gov, WHO-ICTRP and the reference list of all relevant studies were searched.\nSelection criteria\nDiagnostic accuracy studies addressing the accuracy of preoperative FDG-PET/CT, conventional or DW-MRI on assessing tumour resectability in women with advanced stage (III to IV) epithelial ovarian/fallopian tube/primary peritoneal cancer who are scheduled to undergo primary debulking surgery.\nData collection and analysis\nTwo authors independently screened titles and abstracts for relevance and inclusion, extracted data and performed methodological quality assessment using QUADAS-2. The limited number of studies did not permit meta-analyses.\nMain results\nFive studies (544 participants) were included in the analysis. All studies performed the index test as replacement of abdominal CT. Two studies (366 participants) addressed the accuracy of FDG-PET/CT for assessing incomplete debulking with residual disease of any size (> 0 cm) with sensitivities of 1.0 (95% CI 0.54 to 1.0) and 0.66 (95% CI 0.60 to 0.73) and specificities of 1.0 (95% CI 0.80 to 1.0) and 0.88 (95% CI 0.80 to 0.93), respectively (low- and moderate-certainty evidence). Three studies (178 participants) investigated MRI for different target conditions, of which two investigated DW-MRI and one conventional MRI. The first study showed that DW-MRI determines incomplete debulking with residual disease of any size with a sensitivity of 0.94 (95% CI 0.83 to 0.99) and a specificity of 0.98 (95% CI 0.88 to 1.00) (low- and moderate-certainty evidence). For abdominal CT, the sensitivity for assessing incomplete debulking was 0.66 (95% CI 0.52 to 0.78) and the specificity 0.77 (95% CI 0.63 to 0.87) (low- and low-certainty evidence). The second study reported a sensitivity of DW-MRI of 0.75 (95% CI 0.35 to 0.97) and a specificity of 0.96 (95% CI 0.80 to 1.00) (very low-certainty evidence) for assessing incomplete debulking with residual disease > 1 cm. In the last study, the sensitivity for assessing incomplete debulking with residual disease of > 2 cm on conventional MRI was 0.91 (95% CI 0.59 to 1.00) and the specificity 0.97 (95% CI 0.87 to 1.00) (very low-certainty evidence). 
Overall, the certainty of evidence was very low to moderate (according to GRADE), mainly due to small sample sizes and imprecision.\nAuthors' conclusions\nStudies suggested a high specificity and moderate sensitivity for FDG-PET/CT and MRI to assess macroscopic incomplete debulking. However, the certainty of the evidence was insufficient to advise routine addition of FDG-PET/CT or MRI to clinical practice.\nIn a research setting, adding an alternative imaging method could be considered for women identified as suitable for primary debulking by abdominal CT, in an attempt to filter out false-negatives (i.e. debulking, feasible based on abdominal CT, unfeasible at actual surgery).\n\nGiven the provided abstract, please respond to the following query: \"Why is it important to determine the feasibility of ovarian tumour resection?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Ovarian cancer is a disease with a high mortality that affects 239,000 women each year across the world. By the time it is symptomatic and detected, cancer cells have spread throughout the abdomen in most women. Treatment consists of surgery to remove as much visible tumour as possible (also called debulking surgery) and chemotherapy. Randomised controlled trials have shown that in women where all visible cancer cannot be removed with surgery, giving chemotherapy first to shrink the tumour is an alternative treatment strategy. This can improve the number of women having successful removal of all visible tumour, known as macroscopic debulking. Therefore, it is important to determine beforehand if all visible tumour deposits can be removed by surgery, followed by chemotherapy, or if chemotherapy is needed first to reduce tumour size before surgery is performed.\nImaging with abdominal computed tomography (abdominal CT) is currently used to determine whether primary debulking surgery is feasible. However, it cannot determine the outcome correctly in all women. Other imaging techniques that can be used are positron emission tomography (PET) and magnetic resonance imaging (MRI). PET visualises glucose uptake by cells and allows detection of distant metastases and is frequently performed parallel with abdominal CT (FDG-PET/CT). MRI provides good soft tissue contrast to detect small lesions. These additional imaging techniques may improve treatment selection."
}
]
|
query-laysum | 3809 | [
{
"role": "user",
"content": "Abstract: Background\nManagement of the open abdomen is a considerable burden for patients and healthcare professionals. Various temporary abdominal closure techniques have been suggested for managing the open abdomen. In recent years, negative pressure wound therapy (NPWT) has been used in some centres for the treatment of non-trauma patients with an open abdomen; however, its effectiveness is uncertain.\nObjectives\nTo assess the effects of negative pressure wound therapy (NPWT) on primary fascial closure for managing the open abdomen in non-trauma patients in any care setting.\nSearch methods\nIn October 2021 we searched the Cochrane Wounds Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL EBSCO Plus. To identify additional studies, we also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses, and health technology reports. There were no restrictions with respect to language, date of publication, or study setting.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared NPWT with any other type of temporary abdominal closure (e.g. Bogota bag, Wittmann patch) in non-trauma patients with open abdomen in any care setting. We also included RCTs that compared different types of NPWT systems for managing the open abdomen in non-trauma patients.\nData collection and analysis\nTwo review authors independently performed the study selection process, risk of bias assessment, data extraction, and GRADE assessment of the certainty of evidence.\nMain results\nWe included two studies, involving 74 adults with open abdomen associated with various conditions, predominantly severe peritonitis (N = 55). The mean age of the participants was 52.8 years; the mean proportion of women was 39.2%. Both RCTs were carried out in single centres and were at high risk of bias.\nNegative pressure wound therapy versus Bogota bag\nWe included one study (40 participants) comparing NPWT with Bogota bag. We are uncertain whether NPWT reduces time to primary fascial closure of the abdomen (NPWT: 16.9 days versus Bogota bag: 20.5 days (mean difference (MD) -3.60 days, 95% confidence interval (CI) -8.16 to 0.96); very low-certainty evidence) or adverse events (fistulae formation, NPWT: 10% versus Bogota: 5% (risk ratio (RR) 2.00, 95% CI 0.20 to 20.33); very low-certainty evidence) compared with the Bogota bag. We are also uncertain whether NPWT reduces all-cause mortality (NPWT: 25% versus Bogota bag: 35% (RR 0.71, 95% CI 0.27 to 1.88); very low-certainty evidence) or length of hospital stay compared with the Bogota bag (NPWT mean: 28.5 days versus Bogota bag mean: 27.4 days (MD 1.10 days, 95% CI -13.39 to 15.59); very low-certainty evidence). The study did not report the proportion of participants with successful primary fascial closure of the abdomen, participant health-related quality of life, reoperation rate, wound infection, or pain.\nNegative pressure wound therapy versus any other type of temporary abdominal closure\nThere were no randomised controlled trials comparing NPWT with any other type of temporary abdominal closure.\nComparison of different negative pressure wound therapy devices\nWe included one study (34 participants) comparing different types of NPWT systems (Suprasorb CNP system versus ABThera system). 
We are uncertain whether the Suprasorb CNP system increases the proportion of participants with successful primary fascial closure of the abdomen compared with the ABThera system (Suprasorb CNP system: 88.2% versus ABThera system: 70.6% (RR 0.80, 95% CI 0.56 to 1.14); very low-certainty evidence). We are also uncertain whether the Suprasorb CNP system reduces adverse events (fistulae formation, Suprasorb CNP system: 0% versus ABThera system: 23.5% (RR 0.11, 95% CI 0.01 to 1.92); very low-certainty evidence), all-cause mortality (Suprasorb CNP system: 5.9% versus ABThera system: 17.6% (RR 0.33, 95% CI 0.04 to 2.89); very low-certainty evidence), or reoperation rate compared with the ABThera system (Suprasorb CNP system: 100% versus ABThera system: 100% (RR 1.00, 95% CI 0.90 to 1.12); very low-certainty evidence). The study did not report the time to primary fascial closure of the abdomen, participant health-related quality of life, length of hospital stay, wound infection, or pain.\nAuthors' conclusions\nBased on the available trial data, we are uncertain whether NPWT has any benefit in primary fascial closure of the abdomen, adverse events (fistulae formation), all-cause mortality, or length of hospital stay compared with the Bogota bag. We are also uncertain whether the Suprasorb CNP system has any benefit in primary fascial closure of the abdomen, adverse events, all-cause mortality, or reoperation rate compared with the ABThera system. Further research evaluating these outcomes as well as participant health-related quality of life, wound infection, and pain outcomes is required. We will update this review when data from the large studies that are currently ongoing are available.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared NPWT with any other type of temporary abdominal closure in non-trauma patients with open abdomen. We also included studies that compared different types of NPWT systems for managing the open abdomen in non-trauma patients. We had no restrictions with respect to language, date of publication, or where the study was conducted. We rated our confidence in the evidence, based on factors such as study methods and sizes."
}
]
|
query-laysum | 3810 | [
{
"role": "user",
"content": "Abstract: Background\nAs part of efforts to prevent childhood overweight and obesity, we need to understand the relationship between total fat intake and body fatness in generally healthy children.\nObjectives\nTo assess the effects and associations of total fat intake on measures of weight and body fatness in children and young people not aiming to lose weight.\nSearch methods\nFor this update we revised the previous search strategy and ran it over all years in the Cochrane Library, MEDLINE (Ovid), MEDLINE (PubMed), and Embase (Ovid) (current to 23 May 2017). No language and publication status limits were applied. We searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov for ongoing and unpublished studies (5 June 2017).\nSelection criteria\nWe included randomised controlled trials (RCTs) in children aged 24 months to 18 years, with or without risk factors for cardiovascular disease, randomised to a lower fat (30% or less of total energy (TE)) versus usual or moderate-fat diet (greater than 30%TE), without the intention to reduce weight, and assessed a measure of weight or body fatness after at least six months. We included prospective cohort studies if they related baseline total fat intake to weight or body fatness at least 12 months later.\nData collection and analysis\nWe extracted data on participants, interventions or exposures, controls and outcomes, and trial or cohort quality characteristics, as well as data on potential effect modifiers, and assessed risk of bias for all included studies. We extracted body weight and blood lipid levels outcomes at six months, six to 12 months, one to two years, two to five years and more than five years for RCTs; and for cohort studies, at baseline to one year, one to two years, two to five years, five to 10 years and more than 10 years. We planned to perform random-effects meta-analyses with relevant subgrouping, and sensitivity and funnel plot analyses where data allowed.\nMain results\nWe included 24 studies comprising three parallel-group RCTs (n = 1054 randomised) and 21 prospective analytical cohort studies (about 25,059 children completed). Twenty-three studies were conducted in high-income countries. No meta-analyses were possible, since only one RCT reported the same outcome at each time point range for all outcomes, and cohort studies were too heterogeneous to combine.\nEffects of dietary counselling to reduce total fat intake from RCTs\nTwo studies recruited children aged between 4 and 11 years and a third recruited children aged 12 to 13 years. Interventions were combinations of individual and group counselling, and education sessions in clinics, schools and homes, delivered by dieticians, nutritionists, behaviourists or trained, supervised teachers. Concerns about imprecision and poor reporting limited our confidence in our findings. In addition, the inclusion of hypercholesteraemic children in two trials raised concerns about applicability.\nOne study of dietary counselling to lower total fat intake found that the intervention may make little or no difference to weight compared with usual diet at 12 months (mean difference (MD) -0.50 kg, 95% confidence interval (CI) -1.78 to 0.78; n = 620; low-quality evidence) and at three years (MD -0.60 kg, 95% CI -2.39 to 1.19; n = 612; low-quality evidence). 
Education delivered as a classroom curriculum probably decreased BMI in children at 17 months (MD -1.5 kg/m2, 95% CI -2.45 to -0.55; 1 RCT; n = 191; moderate-quality evidence). The effects were smaller at longer term follow-up (five years: MD 0 kg/m2, 95% CI -0.63 to 0.63; n = 541; seven years; MD -0.10 kg/m2, 95% CI -0.75 to 0.55; n = 576; low-quality evidence).\nDietary counselling probably slightly reduced total cholesterol at 12 months compared to controls (MD -0.15 mmol/L, 95% CI -0.24 to -0.06; 1 RCT; n = 618; moderate-quality evidence), but may make little or no difference over longer time periods. Dietary counselling probably slightly decreased low-density lipoprotein (LDL) cholesterol at 12 months (MD -0.12 mmol/L, 95% CI -0.20 to -0.04; 1 RCT; n = 618, moderate-quality evidence) and at five years (MD -0.09, 95% CI -0.17 to -0.01; 1 RCT; n = 623; moderate-quality evidence), compared to controls. Dietary counselling probably made little or no difference to HDL-C at 12 months (MD -0.03 mmol/L, 95% CI -0.08 to 0.02; 1 RCT; n = 618; moderate-quality evidence), and at five years (MD -0.01 mmol/L, 95% CI -0.06 to 0.04; 1 RCT; n = 522; moderate-quality evidence). Likewise, counselling probably made little or no difference to triglycerides in children at 12 months (MD -0.01 mmol/L, 95% CI -0.08 to 0.06; 1 RCT; n = 618; moderate-quality evidence). Lower versus usual or modified fat intake may make little or no difference to height at seven years (MD -0.60 cm, 95% CI -2.06 to 0.86; 1 RCT; n = 577; low-quality evidence).\nAssociations between total fat intake, weight and body fatness from cohort studies\nOver half the cohort analyses that reported on primary outcomes suggested that as total fat intake increases, body fatness measures may move in the same direction. However, heterogeneous methods and reporting across cohort studies, and predominantly very low-quality evidence, made it difficult to draw firm conclusions and true relationships may be substantially different.\nAuthors' conclusions\nWe were unable to reach firm conclusions. Limited evidence from three trials that randomised children to dietary counselling or education to lower total fat intake (30% or less TE) versus usual or modified fat intake, but with no intention to reduce weight, showed small reductions in body mass index, total- and LDL-cholesterol at some time points with lower fat intake compared to controls. There were no consistent effects on weight, high-density lipoprotein (HDL) cholesterol or height. Associations in cohort studies that related total fat intake to later measures of body fatness in children were inconsistent and the quality of this evidence was mostly very low. Most studies were conducted in high-income countries, and may not be applicable in low- and middle-income settings. High-quality, longer-term studies are needed, that include low- and middle-income settings to look at both possible benefits and harms.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found no high-quality evidence with which to answer this question. Evidence from the cohort studies was generally of very low quality so we are uncertain about these results and cannot draw conclusions. For the three randomised controlled trials, the results that we were most interested in were generally of moderate- or low-quality evidence. We could not make any conclusions about children in low- and middle-income countries as 23 of the 24 studies were done in high-income countries. More high-quality, long-term studies are required that also include children from low- and middle-income settings."
}
]
|
query-laysum | 3811 | [
{
"role": "user",
"content": "Abstract: Background\nPatients with a low platelet count (thrombocytopenia) often require the insertion of central lines (central venous catheters (CVCs)). CVCs have a number of uses; these include: administration of chemotherapy; intensive monitoring and treatment of critically-ill patients; administration of total parenteral nutrition; and long-term intermittent intravenous access for patients requiring repeated treatments. Current practice in many countries is to correct thrombocytopenia with platelet transfusions prior to CVC insertion, in order to mitigate the risk of serious procedure-related bleeding. However, the platelet count threshold recommended prior to CVC insertion varies significantly from country to country. This indicates significant uncertainty among clinicians of the correct management of these patients. The risk of bleeding after a central line insertion appears to be low if an ultrasound-guided technique is used. Patients may therefore be exposed to the risks of a platelet transfusion without any obvious clinical benefit.\nObjectives\nTo assess the effects of different platelet transfusion thresholds prior to the insertion of a central line in patients with thrombocytopenia (low platelet count).\nSearch methods\nWe searched for randomised controlled trials (RCTs) in CENTRAL (The Cochrane Library 2015, Issue 2), MEDLINE (from 1946), EMBASE (from 1974), the Transfusion Evidence Library (from 1950) and ongoing trial databases to 23 February 2015.\nSelection criteria\nWe included RCTs involving transfusions of platelet concentrates, prepared either from individual units of whole blood or by apheresis, and given to prevent bleeding in patients of any age with thrombocytopenia requiring insertion of a CVC.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nOne RCT was identified that compared different platelet transfusion thresholds prior to insertion of a CVC in people with chronic liver disease. This study is still recruiting participants (expected recruitment: up to 165 participants) and is due to be completed in December 2017. There were no completed studies. There were no studies that compared no platelet transfusions to a platelet transfusion threshold.\nAuthors' conclusions\nThere is no evidence from RCTs to determine whether platelet transfusions are required prior to central line insertion in patients with thrombocytopenia, and, if a platelet transfusion is required, what is the correct platelet transfusion threshold. Further randomised trials with robust methodology are required to develop the optimal transfusion strategy for such patients. The one ongoing RCT involving people with cirrhosis will not be able to answer this review's questions, because it is a small study that assesses one patient group and does not address all of the comparisons included in this review. To detect an increase in the proportion of participants who had major bleeding from 1 in 100 to 2 in 100 would require a study containing at least 4634 participants (80% power, 5% significance).\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no evidence from randomised controlled trials to answer our review questions."
}
]
|
query-laysum | 3812 | [
{
"role": "user",
"content": "Abstract: Background\nHead injury is a common event and can cause a spectrum of motor and cognition disabilities. A frequent complication is seizures. Antiepileptic drugs (AED) such as phenytoin are often used in clinical practice with the hopes of preventing post-traumatic epilepsy. Whether immediate medical intervention following head trauma with either AEDs or neuroprotective drugs can alter the process of epileptogenesis and lead to a more favorable outcome is currently unknown. This review attempted to address the effectiveness of these treatment interventions. This review updates and expands on the earlier Cochrane review.\nObjectives\nTo compare the efficacy of antiepileptic drugs and neuroprotective agents with placebo, usual care or other pharmacologic agents for the prevention of post-traumatic epilepsy in people diagnosed with any severity of traumatic brain injury.\nSearch methods\nWe searched The Cochrane Epilepsy Group's specialized register, CENTRAL, MEDLINE, ClinicalTrials.gov and World Health Organization International Clinical Trials Registry Platform (ICTRP) in January 2015. We searched EMBASE, Biological Abstracts and National Research Register in September 2014 and SCOPUS in December 2013. The Cochrane Epilepsy Group performed handsearches of relevant journals.\nSelection criteria\nWe included randomized controlled trials (RCTs) that include AEDs or neuroprotective agents compared with placebo, another pharmacologic agent or a usual care group. The outcomes measured included a seizure occurring within one week of trauma (early seizure), seizure occurring later than one week post-trauma (late seizure), mortality and any adverse events.\nData collection and analysis\nTwo review authors independently assessed study quality and extracted the data. We calculated risk ratios (RR) and 95% confidence intervals (CI) for each outcome. We used random-effects models in the meta-analyses and performed pre-defined subgroup and sensitivity analyses.\nMain results\nThis review included 10 RCTs (reported in 12 articles) consisting of 2326 participants The methodological quality of the studies varied. The type of intervention was separated into three categories; AED versus placebo or standard care, alternative neuroprotective agent versus placebo or standard care and AED versus other AED. Treatment with an AED (phenytoin or carbamazepine) decreased the risk of early seizure compared with placebo or standard care (RR 0.42, 95% CI 0.23 to 0.73; very low quality evidence). There was no evidence of a difference in the risk of late seizure occurrence between AEDs and placebo or standard care (RR 0.91, 95% CI 0.57 to 1.46; very low quality evidence). There was no evidence of a significant difference in all-cause mortality between AEDs and placebo or standard care (RR 1.08 95% CI 0.79 to 1.46,very low quality of evidence). Only one study looked at other potentially neuroprotective agents (magnesium sulfate) compared with placebo. The risk ratios were: late seizure 1.07 (95% CI 0.53 to 2.17) and all-cause mortality 1.20 (95% CI 0.80 to 1.81). The risk ratio for occurrence of early seizure was not estimable.\nTwo studies looked at comparison of two AEDs (levetiracetam, valproate) with phenytoin used as the main comparator in each study. The risk ratio for all-cause mortality was 0.53 (95% CI 0.30 to 0.94). 
There was no evidence of treatment benefit of phenytoin compared with another AED for early seizures (RR 0.66, 95% CI 0.20 to 2.12) or late seizures (RR 0.77, 95% CI 0.46 to 1.30).\nOnly two studies reported adverse events. The RR of any adverse event with AED compared with placebo was 1.65 (95% CI 0.73 to 3.66; low quality evidence). There were insufficient data on adverse events in the other treatment comparisons.\nAuthors' conclusions\nThis review found low-quality evidence that early treatment with an AED compared with placebo or standard care reduced the risk of early post-traumatic seizures. There was no evidence to support a reduction in the risk of late seizures or mortality. There was insufficient evidence to make any conclusions regarding the effectiveness or safety of other neuroprotective agents compared with placebo or for the comparison of phenytoin, a traditional AED, with another AED.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies evaluating the effect of early administration of antiepileptic drugs or other potentially neuroprotective agents (which act by protecting the structure or function of nerves) on post-traumatic epilepsy. The primary outcomes of interest were early post-traumatic seizures (within one week of trauma) and late seizures (later than one week post-trauma). We also looked at death, time to late seizure and side effects. The evidence is current to January 2015."
}
]
|
query-laysum | 3813 | [
{
"role": "user",
"content": "Abstract: Background\nWorldwide at least 100 million people are thought to have prevalent cardiovascular disease (CVD). This population has a five times greater chance of suffering a recurrent cardiovascular event than people without known CVD. Secondary CVD prevention is defined as action aimed to reduce the probability of recurrence of such events. Drug interventions have been shown to be cost-effective in reducing this risk and are recommended in international guidelines. However, adherence to recommended treatments remains sub-optimal. In order to influence non-adherence, there is a need to develop scalable and cost-effective behaviour-change interventions.\nObjectives\nTo assess the effects of mobile phone text messaging in patients with established arterial occlusive events on adherence to treatment, fatal and non-fatal cardiovascular events, and adverse effects.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, the Conference Proceedings Citation Index - Science on Web of Science on 7 November 2016, and two clinical trial registers on 12 November 2016. We contacted authors of included studies for missing information and searched reference lists of relevant papers. We applied no language or date restrictions.\nSelection criteria\nWe included randomised trials with at least 50% of the participants with established arterial occlusive events. We included trials investigating interventions using short message service (SMS) or multimedia messaging service (MMS) with the aim to improve adherence to medication for the secondary prevention of cardiovascular events. Eligible comparators were no intervention or other modes of communication.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. In addition, we attempted to contact all authors on how the SMS were developed.\nMain results\nWe included seven trials (reported in 13 reports) with 1310 participants randomised. Follow-up ranged from one month to 12 months. Due to heterogeneity in the methods, population and outcome measures, we were unable to conduct meta-analysis on these studies. All seven studies reported on adherence, but using different methods and scales. Six out of seven trials showed a beneficial effect of mobile phone text messaging for medication adherence. Dale 2015a, reported significantly greater medication adherence score in the intervention group (Mean Difference (MD) 0.58, 95% confidence interval (CI) 0.19 to 0.97; 123 participants randomised) at six months. Khonsari 2015 reported less adherence in the control group (Relative Risk (RR) 4.09, 95% CI 1.82 to 9.18; 62 participants randomised) at eight weeks. Pandey 2014 (34 participants randomised) assessed medication adherence through self-reported logs with 90% adherence in the intervention group compared to 70% in the control group at 12 months. Park 2014a (90 participants randomised) reported a greater increase of the medication adherence score in the control group, but also measured adherence with an event monitoring system for a number of medications with adherence levels ranging from 84.1% adherence to 86.2% in the intervention group and 79.7% to 85.7% in the control group at 30 days. Quilici 2013, reported reduced odds of non-adherence in the intervention group (Odds Ratio (OR) 0.43, 95% CI 0.22 to 0.86, 521 participants randomised) at 30 days. 
Fang 2016, reported that participants given SMS alone had reduced odds of being non-adherent compared to telephone reminders (OR 0.40, 95% CI 0.18 to 0.63; 280 patients randomised). Kamal 2015 reported higher levels of adherence in the intervention arm (adjusted MD 0.54, 95% CI 0.22 to 0.85; 200 participants randomised).\nKhonsari 2015 was the only study to report fatal cardiovascular events and only reported two events, both in the control arm. No study reported on the other primary outcomes. No study reported repetitive thumb injury or road traffic crashes or other adverse events that were related to the intervention.\nFour authors replied to our questionnaire on SMS development. No study reported examining causes of non-adherence or provided SMS tailored to individual patient characteristics.\nThe included studies were small, heterogeneous and included participants recruited directly after acute events. All studies were assessed as having high risk of bias across at least one domain. Most of the studies came from high-income countries, with two studies conducted in upper middle-income countries (China, Malaysia), and one study from a lower middle-income country (Pakistan). The quality of the evidence was found to be very low. There were no obvious conflicts of interest from authors, although only two declared their funding.\nAuthors' conclusions\nWhile the results of this systematic review are promising, there is insufficient evidence to draw conclusions on the effectiveness of text message-based interventions for adherence to medications for secondary prevention of CVD. Sufficiently powered, high-quality randomised trials are needed, particularly in low- and middle-income countries.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Worldwide, at least 100 million people suffer from heart disease. While there are numerous cost-effective treatments, the majority of these individuals are not taking the medications that they need to keep themselves from suffering more heart problems. One possible method of helping people with heart disease to take their medications is through the use of text message-based reminders."
}
]
|
query-laysum | 3814 | [
{
"role": "user",
"content": "Abstract: Background\nPulmonary hypertension is a condition of complex aetiology that culminates in right heart failure and early death. Soluble guanylate cyclase (sGC) stimulators are a promising class of agents that have recently gained approval for use.\nObjectives\nTo evaluate the efficacy of sGC stimulators in pulmonary hypertension.\nSearch methods\nWe searched CENTRAL (Cochrane Central Register of Controlled Trials), MEDLINE, EMBASE and the reference lists of articles. Searches are current as of 12 February 2016.\nSelection criteria\nWe selected randomised controlled trials (RCTs) involving participants with pulmonary hypertension of all ages, severities and durations of treatment.\nData collection and analysis\nAW, MS and RW independently selected studies, assessed evidence quality and extracted data. This process was overseen by RT and SG. All included studies were sponsored by the drug manufacturer.\nMain results\nFive trials involving 962 participants are included in this review. All trials were of relatively short duration (< 16 weeks). Due to the heterogenous aetiology of pulmonary hypertension in participants, results are best considered according to each pulmonary hypertension subtype.\nPooled analysis shows a mean difference (MD) increase in six-minute walking distance (6MWD) of 30.13 metres (95% CI 5.29 to 54.96; participants = 659; studies = 3). On subgroup analysis, for pulmonary arterial hypertension (PAH) there was no effect noted (6MWD; MD 11.91 metres, 95% CI −44.92 to 68.75; participants = 398; studies = 2), and in chronic thromboembolic pulmonary hypertension (CTEPH) sGC stimulators improved 6MWD by an MD of 45 metres (95% CI 23.87 to 66.13; participants = 261; studies = 1). Data for left heart disease-associated PH was not available for pooling. Importantly, when participants receiving phosphodiesterase inhibitors were excluded, sGC stimulators increased 6MWD by a MD of 36 metres in PAH. The second primary outcome, mortality, showed no change on pooled analysis against placebo (Peto odds ratio (OR) 0.57, 95% CI 0.18 to 1.80).\nPooled secondary outcomes include an increase in World Health Organization (WHO) functional class (OR 1.53, 95% CI 0.87 to 2.72; participants = 858; studies = 4), no effect on clinical worsening (OR 0.45, 95% CI 0.17 to 1.14; participants = 842; studies = 3), and a reduction in mean pulmonary artery pressure (MD −2.77 mmHg, 95% CI −4.96 to −0.58; participants = 744; studies = 5). There was no significant difference in serious adverse events on pooled analysis (OR 1.12, 95% CI 0.66 to 1.90; participants = 818; studies = 5) or when analysed at PAH (MD −3.50, 95% CI −5.54 to −1.46; participants = 344; studies = 1), left heart disease associated subgroups (OR 1.56, 95% CI 0.78 to 3.13; participants = 159; studies = 2) or CTEPH subgroups (OR 1.29, 95% CI 0.65 to 2.56; participants = 261; studies = 1).\nIt is important to consider the results for PAH in the context of a person who is not also receiving a phosphodiesterase-V inhibitor, a contra-indication to sGC stimulator use. It should also be noted that CTEPH results are applicable to inoperable or recurrent CTEPH only.\nEvidence was rated according to the GRADE scoring system. One outcome was considered high quality, two were moderate, and eight were of low or very low quality, meaning that for many of the outcomes the true effect could differ substantially from our estimate. 
There were only minor concerns regarding the risk of bias in these trials, all being RCTs largely following the original protocol. Most trials employed an intention-to-treat analysis.\nAuthors' conclusions\nsGC stimulators improve pulmonary artery pressures in people with PAH (who are treatment naive or receiving a prostanoid or endothelin antagonist) or those with recurrent or inoperable CTEPH. In these settings this can be achieved without notable complication. However, sGC stimulators should not be taken by people also receiving phosphodiesterase-V inhibitors or nitrates due to the risks of hypotension, and there is currently no evidence supporting their use in pulmonary hypertension associated with left heart disease. There is no evidence supporting their use in children. These conclusions are based on data with limitations, including unavailable data from two of the trials.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Soluble guanylate cyclase stimulators appear to reduce lung pressures and improve exercise capacity in PAH and recurrent or inoperable CTEPH, but not in PH due to left heart disease. It is uncertain if these drugs have an effect on death rates and general health decline, or if they may be associated with serious side effects. There is evidence that suggests these drugs should not be taken at the same time as phosphodiesterase-V inhibitors."
}
]
|
query-laysum | 3815 | [
{
"role": "user",
"content": "Abstract: Background\nPlatelet transfusions are used in modern clinical practice to prevent and treat bleeding in people with thrombocytopenia. Although considerable advances have been made in platelet transfusion therapy since the mid-1970s, some areas continue to provoke debate especially concerning the use of prophylactic platelet transfusions for the prevention of thrombocytopenic bleeding.\nObjectives\nTo determine whether agents that can be used as alternatives, or adjuncts, to platelet transfusions for people with haematological malignancies undergoing intensive chemotherapy or stem cell transplantation are safe and effective at preventing bleeding.\nSearch methods\nWe searched 11 bibliographic databases and four ongoing trials databases including the Cochrane Central Register of Controlled Trials (CENTRAL, 2016, Issue 4), MEDLINE (OvidSP, 1946 to 19 May 2016), Embase (OvidSP, 1974 to 19 May 2016), PubMed (e-publications only: searched 19 May 2016), ClinicalTrials.gov, World Health Organization (WHO) ICTRP and the ISRCTN Register (searched 19 May 2016).\nSelection criteria\nWe included randomised controlled trials in people with haematological malignancies undergoing intensive chemotherapy or stem cell transplantation who were allocated to either an alternative to platelet transfusion (artificial platelet substitutes, platelet-poor plasma, fibrinogen concentrate, recombinant activated factor VII, desmopressin (DDAVP), or thrombopoietin (TPO) mimetics) or a comparator (placebo, standard care or platelet transfusion). We excluded studies of antifibrinolytic drugs, as they were the focus of another review.\nData collection and analysis\nTwo review authors screened all electronically derived citations and abstracts of papers identified by the review search strategy. Two review authors assessed risk of bias in the included studies and extracted data independently.\nMain results\nWe identified 16 eligible trials. Four trials are ongoing and two have been completed but the results have not yet been published (trial completion dates: April 2012 to February 2017). Therefore, the review included 10 trials in eight references with 554 participants. Six trials (336 participants) only included participants with acute myeloid leukaemia undergoing intensive chemotherapy, two trials (38 participants) included participants with lymphoma undergoing intensive chemotherapy and two trials (180 participants) reported participants undergoing allogeneic stem cell transplantation. Men and women were equally well represented in the trials. The age range of participants included in the trials was from 16 years to 81 years. All trials took place in high-income countries. 
The manufacturers of the agent sponsored eight trials that were under investigation, and two trials did not report their source of funding.\nNo trials assessed artificial platelet substitutes, fibrinogen concentrate, recombinant activated factor VII or desmopressin.\nNine trials compared a TPO mimetic to placebo or standard care; seven of these used pegylated recombinant human megakaryocyte growth and differentiation factor (PEG-rHuMGDF) and two used recombinant human thrombopoietin (rhTPO).\nOne trial compared platelet-poor plasma to platelet transfusion.\nWe considered that all the trials included in this review were at high risk of bias and meta-analysis was not possible in seven trials due to problems with the way data were reported.\nWe are very uncertain whether TPO mimetics reduce the number of participants with any bleeding episode (odds ratio (OR) 0.40, 95% confidence interval (CI) 0.10 to 1.62, one trial, 120 participants, very low quality evidence). We are very uncertain whether TPO mimetics reduce the risk of a life-threatening bleed after 30 days (OR 1.46, 95% CI 0.06 to 33.14, three trials, 209 participants, very low quality evidence); or after 90 days (OR 1.00, 95% CI 0.06 to 16.37, one trial, 120 participants, very low quality evidence). We are very uncertain whether TPO mimetics reduce platelet transfusion requirements after 30 days (mean difference -3.00 units, 95% CI -5.39 to -0.61, one trial, 120 participants, very low quality evidence). No deaths occurred in either group after 30 days (one trial, 120 participants, very low quality evidence). We are very uncertain whether TPO mimetics reduce all-cause mortality at 90 days (OR 1.00, 95% CI 0.24 to 4.20, one trial, 120 participants, very low quality evidence). No thromboembolic events occurred for participants treated with TPO mimetics or control at 30 days (two trials, 209 participants, very low quality evidence). We found no trials that looked at: number of days on which bleeding occurred, time from randomisation to first bleed or quality of life.\nOne trial with 18 participants compared platelet-poor plasma transfusion with platelet transfusion. We are very uncertain whether platelet-poor plasma reduces the number of participants with any bleeding episode (OR 16.00, 95% CI 1.32 to 194.62, one trial, 18 participants, very low quality evidence). We are very uncertain whether platelet-poor plasma reduces the number of participants with severe or life-threatening bleeding (OR 4.00, 95% CI 0.56 to 28.40, one trial, 18 participants, very low quality evidence). We found no trials that looked at: number of days on which bleeding occurred, time from randomisation to first bleed, number of platelet transfusions, all-cause mortality, thromboembolic events or quality of life.\nAuthors' conclusions\nThere is insufficient evidence to determine if platelet-poor plasma or TPO mimetics reduce bleeding for participants with haematological malignancies undergoing intensive chemotherapy or stem cell transplantation. To detect a decrease in the proportion of participants with clinically significant bleeding from 12 in 100 to 6 in 100 would require a trial containing at least 708 participants (80% power, 5% significance). The six ongoing trials will provide additional information about the TPO mimetic comparison (424 participants) but this will still be underpowered to demonstrate this level of reduction in bleeding. None of the included or ongoing trials include children. 
There are no completed or ongoing trials assessing artificial platelet substitutes, fibrinogen concentrate, recombinant activated factor VII or desmopressin in people undergoing intensive chemotherapy or stem cell transplantation for haematological malignancies.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was very low, making it difficult to draw conclusions or make recommendations regarding the usefulness and safety of thrombopoietin mimetics or platelet-poor plasma. There was no trial evidence for artificial platelets, fibrinogen concentrate, recombinant activated factor VII or desmopressin."
}
]
|
query-laysum | 3816 | [
{
"role": "user",
"content": "Abstract: Background\nCongenital cataracts are lens opacities in one or both eyes of babies or children present at birth. These may cause a reduction in vision severe enough to require surgery. Cataracts are proportionally the most treatable cause of visual loss in childhood, and are a particular problem in low-income countries, where early intervention may not be possible. Paediatric cataracts provide different challenges to those in adults. Intense inflammation, amblyopia (vision is obstructed by cataract from birth which prevents normal development of the visual system), posterior capsule opacification and uncertainty about the final trajectory of ocular growth parameters can affect results of treatment. Two options currently considered for children under 2 years of age with bilateral congenital cataracts are: (i) intraocular lens (IOL) implantation; or (ii) leaving a child with primary aphakia (no lens in the eye), necessitating the need for contact lenses or aphakic glasses. Other important considerations regarding surgery include the prevention of visual axis opacification (VAO), glaucoma and the route used to perform lensectomy.\nObjectives\nTo assess the effectiveness of infant cataract surgery or lensectomy to no surgery for bilateral congenital cataracts in children aged 2 years and under.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; which contains the Cochrane Eyes and Vision Trials Register; 2022, Issue 1); Ovid MEDLINE; Ovid Embase; the ISRCTN registry; ClinicalTrials.gov and the WHO ICTRP. The date of the search was 25 January 2022.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared infant cataract surgery or lensectomy to no surgery, in children with bilateral congenital cataracts aged 2 years and younger. This update (of a review published in 2001 and updated in 2006) does not include children over 2 years of age because they have a wider variety of aetiologies, and are therefore managed differently, and have contrasting outcomes.\nData collection and analysis\nWe used standard methods expected by Cochrane. Two review authors extracted data independently. We assessed the risk of bias of included studies using RoB 1 and assessed the certainty of the evidence using GRADE.\nMain results\nWe identified three RCTs that met our inclusion criteria with each trial comparing a different aspect of surgical intervention for this condition. The trials included a total of 79 participants under 2 years of age, were conducted in India and follow-up ranged from 1 to 5 years. Study participants and outcome assessors were not masked in these trials.\nOne study (60 children) compared primary IOL implantation with primary aphakia. The results from this study suggest that there may be little or no difference in visual acuity at 5 years comparing children with pseudophakia (mean logMAR 0.50) and aphakia (mean logMAR 0.59) (mean difference (MD) -0.09 logMAR, 95% confidence intervals (CIs) -0.24 to 0.06; 54 participants; very low-certainty evidence), but the evidence is very uncertain. The evidence is very uncertain as to the effect of IOL implantation compared with aphakia on visual axis opacification (VAO) (risk ratio (RR) 1.29, 95% CI 0.23 to 7.13; 54 participants; very low-certainty evidence). The trial investigators did not report on the cases of amblyopia. 
There was little evidence of a difference between the two groups in cases of glaucoma at 5 years follow-up (RR 0.86, 95% CI 0.24 to 3.10; 54 participants; very low-certainty evidence). Cases of retinal detachment and reoperation rates were not reported. The impact of IOL implantation on adverse effects is very uncertain because of the sparse data available: of the children who were pseudophakic, 1/29 needed a trabeculectomy and 8/29 developed posterior synechiae. In comparison, no trabeculectomies were needed in the aphakic group and 2/25 children had posterior synechiae (54 participants; very low-certainty evidence).\nThe second study (14 eyes of 7 children under 2 years of age) compared posterior optic capture of IOL without vitrectomy versus endocapsular implantations with anterior vitrectomy (commonly called 'in-the-bag surgery'). The authors did not report on visual acuity, amblyopia, glaucoma and reoperation rate. They had no cases of VAO in either group. The evidence is very uncertain as to the effect of in-the-bag implantation in children aged under 1 year. There was a higher incidence of inflammatory sequelae: 4/7 in-the-bag implantation eyes and 1/7 in optic capture eyes (P = 0.04, 7 participants; very low-certainty evidence). We graded the certainty of evidence as low or very low for imprecision in all outcomes because their statistical analysis reported that a sample size of 13 was needed in each group to achieve a power of 80%, whereas their subset of children under the age of 1 year had only 7 eyes in each group.\nThe third study (24 eyes of 12 children) compared a transcorneal versus pars plana route using a 25-gauge transconjunctival sutureless vitrectomy system. The evidence is very uncertain as to the effect of the route chosen on the incidence of VAO, with no cases reported at 1 year follow-up in either group. The investigators did not report on visual acuity, amblyopia, glaucoma, retinal detachment and reoperation rate. The pars plana route had the adverse effects of posterior capsule rupture in 2/12 eyes, and 1/12 eyes needing sutures. Conversely, 1/12 eyes operated on by the transcorneal route needed sutures. We graded the outcomes with very low-certainty because of the small sample size and the absence of a priori sample size calculation.\nAuthors' conclusions\nThere is no high level evidence for the effectiveness of one type of surgery for bilateral congenital cataracts over another, or whether surgery itself is better than primary aphakia. Further RCTs are required to inform modern practice about concerns, including the timing of surgery, age at which surgery should be undertaken, age for implantation of an IOL and development of complications, such as reoperations, glaucoma and retinal detachment. Standardising the methods used to measure visual function, along with objective monitoring of compliance with the use of aphakic glasses/contact lenses would greatly improve the quality of study data and enable more reliable interpretation of outcomes.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review updates our previous review. The evidence is up to date to 25 January 2022."
}
]
|
query-laysum | 3817 | [
{
"role": "user",
"content": "Abstract: Background\nPressure ulcers, which are localised injury to the skin or underlying tissue, or both, occur when people are unable to reposition themselves to relieve pressure on bony prominences. Pressure ulcers are often difficult to heal, painful, and impact negatively on the individual's quality of life. International guidelines suggest bed rest as a component of the treatment strategy to manage pressure ulcers among wheelchair users. The potential benefits and risks need to be considered when assessing the effectiveness of bed rest as an intervention for treating pressure ulcers in this population. Therefore, it was important to search and appraise existing research evidence in order to determine the impact of bed rest on the healing of pressure ulcers in wheelchair users.\nObjectives\nTo assess the impact of bed rest on pressure ulcer healing, in wheelchair users, of any age, who are living or being cared for in any setting.\nSearch methods\nIn October 2016 we searched: the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations, MEDLINE Daily and Epub Ahead of Print); Ovid EMBASE and EBSCO CINAHL Plus. We also searched clinical trials registries and conference proceedings and for ongoing and unpublished studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe considered randomised controlled trials (RCTs) and cluster-RCTs that evaluated the impact of bed rest on healing pressure ulcers in wheelchair users.\nData collection and analysis\nTwo review authors independently assessed titles and abstracts of the studies identified by the search strategy for their eligibility.\nMain results\nWe identified no studies that met the inclusion criteria.\nAuthors' conclusions\nWe set out to evaluate the research evidence, from randomised controlled trials, of the impact of bed rest on pressure ulcer healing in wheelchair users. No study met the inclusion criteria. It is uncertain whether bed rest makes a difference to the healing of pressure ulcers in wheelchair users. Well-designed trials addressing important clinical, quality of life and economic outcomes are required.\n\nGiven the provided abstract, please respond to the following query: \"What we found\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In October 2016 we searched widely through the medical literature for randomised controlled trials comparing bed rest with no bed rest for the healing of pressure ulcers in wheelchair users. We did not find any trials that had been conducted in this area. This means that we cannot say whether bed rest improves the healing of pressure ulcers in wheelchair users, or what the harms and benefits of this treatment might be. Trials are needed that compare pressure ulcer healing with and without bed rest among wheelchair users.\nThis plain language summary is up-to-date as of October 2016."
}
]
|
query-laysum | 3818 | [
{
"role": "user",
"content": "Abstract: Background\nImproved understanding and treatment of cystic fibrosis (CF) has led to longer life expectancy, which is accompanied by an increasingly complex regimen of treatments. Suboptimal adherence to the treatment plan, in the context of respiratory disease, has been found to be associated with poorer health outcomes. With digital technology being more accessible, it can be used to monitor adherence to inhaled therapies via chipped nebulisers, mobile phone apps and web-based platforms. This technology can allow monitoring of adherence as well as clinical outcomes, and allow feedback to both the person with CF and their healthcare team.\nObjectives\nTo assess the effects of using digital technology to monitor adherence to inhaled therapies and health status in adults and children with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches and handsearching of journals and conference abstract books.\nDate of last search: 28 October 2021.\nWe also searched Embase and three clinical trial registries and checked references of included studies.\nDate of last search: 9 November 2021.\nSelection criteria\nWe searched for randomised controlled trials (RCTs) looking at the effects of a digital technology for monitoring adherence of children and adults with CF to inhaled therapies.\nData collection and analysis\nTwo review authors screened the search results for studies eligible for inclusion in the review and extracted their data. We used Risk of Bias 2 for assessing study quality. We assessed the overall certainty of the evidence using GRADE.\nMain results\nWe included two studies in our review, with 628 participants aged five to 41 years. There was one study each for two different comparisons.\nNebuliser target inhalation mode versus standard inhalation mode\nThe included parallel study was carried out over 10 weeks after a run-in period of four to six weeks. The study compared the effects of a digitally enhanced inhalation mode (target inhalation mode) for nebulised antibiotics compared to standard mode in children attending a regional CF clinic in the United Kingdom. The study's primary outcome was the time taken to complete the inhaled treatment, but investigators also reported on adherence to therapy. The results showed that there may be an improvement in adherence with the target inhalation mode when this intervention is used (mean difference (MD) 24.0%, 95% confidence interval (CI) 2.95 to 45.05; low-certainty evidence). The target inhalation mode may make little or no difference to forced expiratory volume in one second (FEV1) % predicted (MD 1.00 % predicted, 95% CI -9.37 to 11.37; low-certainty evidence). The study did not report on treatment burden, quality of life (QoL) or pulmonary exacerbations.\neNebuliser with digital support versus eNebuliser without support\nOne large multicentre RCT monitored adherence via data-tracking nebulisers. The intervention group also receiving access to an online web-based platform, CFHealthHub, which offered tailored, flexible support from the study interventionist as well as access to their adherence data, educational and problem-solving information throughout the 12-month trial period. We graded all evidence as moderate certainty. 
Compared to usual care, the digital intervention probably improves adherence to inhaled therapy (MD 18%, 95% CI 12.90 to 23.10); probably leads to slightly reduced treatment burden (MD 5.1, 95% CI 1.79 to 8.41); and may lead to slightly improved FEV1 % predicted (MD 3.70, 95% CI -0.23 to 7.63). There is probably little or no difference in the incidence of pulmonary exacerbations or QoL between the two groups.\nAuthors' conclusions\nDigital monitoring plus tailored support via an online platform probably improves adherence to inhaled therapies and reduces treatment burden (but without a corresponding change in QoL) in the medium term (low- and moderate-certainty evidence). In a shorter time frame, technological enhancement of inhaling antibiotics may improve adherence to treatment (low-certainty evidence). There may be little or no effect on lung function with either intervention, and online monitoring probably makes no difference to pulmonary exacerbations.\nFuture research should assess the effect of digital technology on adherence in both children and adults. Consideration of adherence to the total treatment regimen is also important, as an improvement in adherence to inhaled therapies could come at the cost of adherence to other parts of the treatment regimen.\n\nGiven the provided abstract, please respond to the following query: \"What is CF?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "CF is a life-threatening, inherited condition where sticky, thick mucus builds up in the lungs and digestive system. Over time, the lungs become damaged and may eventually stop working properly."
}
]
|
query-laysum | 3819 | [
{
"role": "user",
"content": "Abstract: Background\nAutism spectrum disorder is a neurodevelopmental disorder characterised by social communication difficulties, restricted interests and repetitive behaviours. The clinical pathway for children with a diagnosis of autism spectrum disorder is varied, and current research suggests some children may not continue to meet diagnostic criteria over time.\nObjectives\nThe primary objective of this review was to synthesise the available evidence on the proportion of preschool children who have a diagnosis of autism spectrum disorder at baseline (diagnosed before six years of age) who continue to meet diagnostic criteria at follow-up one or more years later (up to 19 years of age).\nSearch methods\nWe searched MEDLINE, Embase, PsycINFO, and eight other databases in October 2017 and ran top-up searches up to July 2021. We also searched reference lists of relevant systematic reviews.\nSelection criteria\nTwo review authors independently assessed prospective and retrospective follow-up studies that used the same measure and process within studies to diagnose autism spectrum disorder at baseline and follow-up. Studies were required to have at least one year of follow-up and contain at least 10 participants. Participants were all aged less than six years at baseline assessment and followed up before 19 years of age.\nData collection and analysis\nWe extracted data on study characteristics and the proportion of children diagnosed with autism spectrum disorder at baseline and follow-up. We also collected information on change in scores on measures that assess the dimensions of autism spectrum disorder (i.e. social communication and restricted interests and repetitive behaviours). Two review authors independently extracted data on study characteristics and assessed risk of bias using a modified quality in prognosis studies (QUIPS) tool. We conducted a random-effects meta-analysis or narrative synthesis, depending on the type of data available. We also conducted prognostic factor analyses to explore factors that may predict diagnostic outcome.\nMain results\nIn total, 49 studies met our inclusion criteria and 42 of these (11,740 participants) had data that could be extracted. Of the 42 studies, 25 (60%) were conducted in North America, 13 (31%) were conducted in Europe and the UK, and four (10%) in Asia. Most (52%) studies were published before 2014. The mean age of the participants was 3.19 years (range 1.13 to 5.0 years) at baseline and 6.12 years (range 3.0 to 12.14 years) at follow-up. The mean length of follow-up was 2.86 years (range 1.0 to 12.41 years). The majority of the children were boys (81%), and just over half (60%) of the studies primarily included participants with intellectual disability (intelligence quotient < 70). The mean sample size was 272 (range 10 to 8564). Sixty-nine per cent of studies used one diagnostic assessment tool, 24% used two tools and 7% used three or more tools. Diagnosis was decided by a multidisciplinary team in 41% of studies. No data were available for the outcomes of social communication and restricted and repetitive behaviours and interests.\nOf the 42 studies with available data, we were able to synthesise data from 34 studies (69% of all included studies; n = 11,129) in a meta-analysis. 
In summary, 92% (95% confidence interval 89% to 95%) of participants continued to meet diagnostic criteria for autism spectrum disorder from baseline to follow-up one or more years later; however, the quality of the evidence was judged as low due to study limitations and inconsistency. The majority of the included studies (95%) were rated at high risk of bias. We were unable to explore the outcomes of change in social communication and restricted and repetitive behaviour and interests between baseline and follow-up as none of the included studies provided separate domain scores at baseline and follow-up. Details on conflict of interest were reported in 24 studies. Funding support was reported by 30 studies, 12 studies omitted details on funding sources and two studies reported no funding support. Declared funding sources were categorised as government, university or non-government organisation or charity groups. We considered it unlikely funding sources would have significantly influenced the outcomes, given the nature of prognosis studies.\nAuthors' conclusions\nOverall, we found that nine out of 10 children who were diagnosed with autism spectrum disorder before six years of age continued to meet diagnostic criteria for autism spectrum disorder a year or more later, however the evidence was uncertain. Confidence in the evidence was rated low using GRADE, due to heterogeneity and risk of bias, and there were few studies that included children diagnosed using a current classification system, such as the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) or the eleventh revision of the International Classification of Diseases (ICD-11). Future studies that are well-designed, prospective and specifically assess prognosis of autism spectrum disorder diagnoses are needed. These studies should also include contemporary diagnostic assessment methods across a broad range of participants and investigate a range of relevant prognostic factors.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have little confidence in the evidence because not all the studies provided data about everything that we were interested in, and the studies were done with different types of people and diagnostic assessments.\nFor the one in 10 children who no longer met diagnostic criteria for an autism diagnosis at follow-up, we were not able to tell whether they had 'grown out' of their autism because they became more mature over time, or because they had received intervention, or whether the original diagnosis was inaccurate."
}
]
|
query-laysum | 3820 | [
{
"role": "user",
"content": "Abstract: Background\nInfertility is a condition affecting 10% to 15% of couples of reproductive age. It is generally defined as \"the failure to achieve a clinical pregnancy after 12 months or more of regular unprotected sexual intercourse\". The treatment of infertility may involve manipulation of gametes or of the embryos themselves. These techniques are together known as assisted reproductive technology (ART). Practitioners are constantly seeking alternative or adjunct treatments, or both, in the hope that they may improve the outcome of assisted reproductive techniques. This Cochrane review focusses on the adjunct use of synthetic versions of two naturally-produced hormones, dehydroepiandrosterone (DHEA) and testosterone (T), in assisted reproduction.\nDHEA and its derivative testosterone are steroid hormones proposed to increase conception rates by positively affecting follicular response to gonadotrophin stimulation, leading to greater oocyte yields and, in turn, increased chance of pregnancy.\nObjectives\nTo assess the effectiveness and safety of DHEA and testosterone as pre- or co-treatments in subfertile women undergoing assisted reproduction.\nSearch methods\nWe searched the following electronic databases, trial registers and websites up to 12 March 2015: the Cochrane Central Register of Controlled Trials (CENTRAL), the Menstrual Disorders and Subfertility Group (MDSG) Specialised Register, MEDLINE, EMBASE, PsycINFO, CINAHL, electronic trial registers for ongoing and registered trials, citation indexes, conference abstracts in the Web of Science, PubMed and OpenSIGLE. We also carried out handsearches. There were no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing DHEA or testosterone as an adjunct treatment to any other active intervention, placebo, or no treatment in women undergoing assisted reproduction.\nData collection and analysis\nTwo review authors independently selected studies, extracted relevant data and assessed them for risk of bias. We pooled studies using fixed-effect models. We calculated odds ratios (ORs) for each dichotomous outcome. Analyses were stratified by type of treatment. There were no data for the intended groupings by dose, mode of delivery or after one/more than one cycle.\nWe assessed the overall quality of the evidence for the main findings using the GRADE working group methods.\nMain results\nWe included 17 RCTs with a total of 1496 participants. Apart from two trials, the trial participants were women identified as 'poor responders' to standard IVF protocols. The included trials compared either testosterone or DHEA treatment with placebo or no treatment.\nWhen DHEA was compared with placebo or no treatment, pre-treatment with DHEA was associated with higher rates of live birth or ongoing pregnancy (OR 1.88, 95% CI 1.30 to 2.71; eight RCTs, N = 878, I² statistic = 27%, moderate quality evidence). This suggests that in women with a 12% chance of live birth/ongoing pregnancy with placebo or no treatment, the live birth/ongoing pregnancy rate in women using DHEA will be between 15% and 26%. However, in a sensitivity analysis removing trials at high risk of performance bias, the effect size was reduced and no longer reached significance (OR 1.50, 95% CI 0.88 to 2.56; five RCTs, N = 306, I² statistic = 43%). There was no evidence of a difference in miscarriage rates (OR 0.58, 95% CI 0.29 to 1.17; eight RCTs, N = 950, I² statistic = 0%, moderate quality evidence). 
Multiple pregnancy data were available for five trials, with one multiple pregnancy in the DHEA group of one trial (OR 3.23, 95% CI 0.13 to 81.01; five RCTs, N = 267, very low quality evidence).\nWhen testosterone was compared with placebo or no treatment we found that pre-treatment with testosterone was associated with higher live birth rates (OR 2.60, 95% CI 1.30 to 5.20; four RCTs, N = 345, I² statistic = 0%, moderate evidence). This suggests that in women with an 8% chance of live birth with placebo or no treatment, the live birth rate in women using testosterone will be between 10% and 32%. On removal of studies at high risk of performance bias in a sensitivity analysis, the remaining study showed no evidence of a difference between the groups (OR 2.00, 95% CI 0.17 to 23.49; one RCT, N = 53). There was no evidence of a difference in miscarriage rates (OR 2.04, 95% CI 0.58 to 7.13; four RCTs, N = 345, I² = 0%, low quality evidence). Multiple pregnancy data were available for three trials, with four events in the testosterone group and one in the placebo/no treatment group (OR 3.09, 95% CI 0.48 to 19.98; three RCTs, N = 292, very low quality evidence).\nOne study compared testosterone with estradiol and reported no evidence of a difference in live birth rates (OR 2.06, 95% CI 0.43 to 9.87; one RCT, N = 46, very low quality evidence) or miscarriage rates (OR 0.70, 95% CI 0.11 to 4.64; one RCT, N = 46, very low quality evidence).\nThe quality of the evidence was moderate, the main limitations being lack of blinding in the included trials, inadequate reporting of study methods, and low event and sample sizes in some trials.\nAuthors' conclusions\nIn women identified as poor responders undergoing ART, pre-treatment with DHEA or testosterone may be associated with improved live birth rates. The overall quality of the evidence is moderate. There is insufficient evidence to draw any conclusions about the safety of either androgen. Definitive conclusions regarding the clinical role of either androgen awaits evidence from further well-designed studies.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Researchers in the Cochrane collaboration reviewed the evidence about the effectiveness and safety of androgens in women undergoing assisted reproduction. The androgens evaluated were dehydroepiandrosterone (DHEA) and testosterone (T) as pre- or co-treatments aiming to improve pregnancy and live birth rates in those women."
}
]
|
query-laysum | 3821 | [
{
"role": "user",
"content": "Abstract: Background\nPain is a common symptom with cancer, and 30% to 50% of all people with cancer will experience moderate to severe pain that can have a major negative impact on their quality of life. Opioid (morphine-like) drugs are commonly used to treat moderate or severe cancer pain, and are recommended for this purpose in the World Health Organization (WHO) pain treatment ladder. The most commonly-used opioid drugs are buprenorphine, codeine, fentanyl, hydrocodone, hydromorphone, methadone, morphine, oxycodone, tramadol, and tapentadol.\nObjectives\nTo provide an overview of the analgesic efficacy of opioids in cancer pain, and to report on adverse events associated with their use.\nMethods\nWe identified systematic reviews examining any opioid for cancer pain published to 4 May 2017 in the Cochrane Database of Systematic Reviews in the Cochrane Library. The primary outcomes were no or mild pain within 14 days of starting treatment, withdrawals due to adverse events, and serious adverse events.\nMain results\nWe included nine reviews with 152 included studies and 13,524 participants, but because some studies appeared in more than one review the number of unique studies and participants was smaller than this. Most participants had moderate or severe pain associated with a range of different types of cancer. Studies in the reviews typically compared one type of opioid or formulation with either a different formulation of the same opioid, or a different opioid; few included a placebo control. Typically the reviews titrated dose to effect, a balance between pain relief and adverse events. Various routes of administration of opioids were considered in the reviews; oral with most opioids, but transdermal administration with fentanyl, and buprenorphine. No review included studies of subcutaneous opioid administration. Pain outcomes reported were varied and inconsistent. The average size of included studies varied considerably between reviews: studies of older opioids, such as codeine, morphine, and methadone, had low average study sizes while those involving newer drugs tended to have larger study sizes.\nSix reviews reported a GRADE assessment (buprenorphine, codeine, hydromorphone, methadone, oxycodone, and tramadol), but not necessarily for all comparisons or outcomes. No comparative analyses were possible because there was no consistent placebo or active control. Cohort outcomes for opioids are therefore reported, as absolute numbers or percentages, or both.\nReviews on buprenorphine, codeine with or without paracetamol, hydromorphone, methadone, tramadol with or without paracetamol, tapentadol, and oxycodone did not have information about the primary outcome of mild or no pain at 14 days, although that on oxycodone indicated that average pain scores were within that range. Two reviews, on oral morphine and transdermal fentanyl, reported that 96% of 850 participants achieved that goal.\nAdverse event withdrawal was reported by five reviews, at rates of between 6% and 19%. 
Participants with at least one adverse event were reported by three reviews, at rates of between 11% and 77%.\nOur GRADE assessment of evidence quality was very low for all outcomes, because many studies in the reviews were at high risk of bias from several sources, including small study size.\nAuthors' conclusions\nThe amount and quality of evidence around the use of opioids for treating cancer pain is disappointingly low, although the evidence we have indicates that around 19 out of 20 people with moderate or severe pain who are given opioids and can tolerate them should have that pain reduced to mild or no pain within 14 days. This accords with the clinical experience in treating many people with cancer pain, but overstates to some extent the effectiveness found for the WHO pain ladder. Most people will experience adverse events, and help may be needed to manage the more common undesirable adverse effects such as constipation and nausea. Perhaps between 1 in 10 and 2 in 10 people treated with opioids will find these adverse events intolerable, leading to a change in treatment.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "At one level these are encouraging results, and generally agree with surveys of how well the WHO advice works in cancer pain. On another level, the quality of studies in the reviews was generally poor. We would like better study design, and especially better study reporting, which should include the outcome of pain reduced to a level where people with cancer can cope with it (no pain or mild pain).\nWe found that the Cochrane Reviews were of high quality.\nWe rated the quality of the evidence from studies using four levels: very low, low, moderate, or high. Very low quality evidence means that we are very uncertain about the results. High-quality evidence means that we are very confident in the results. We rated the evidence in the reviews as very low quality."
}
]
|
query-laysum | 3822 | [
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a leading cause of irreversible blindness worldwide and the second most common cause of blindness after cataracts. The primary treatment for glaucoma aims to lower intraocular pressure (IOP) with the use of topical medicines. Topical medication instillation techniques, such as eyelid closure and nasolacrimal occlusion when instilling drops, have been proposed as potential methods to increase ocular absorption and decrease systemic absorption of the drops.\nObjectives\nTo investigate the effectiveness of topical medication instillation techniques compared with usual care or another method of instillation of topical medication in the management of glaucoma or ocular hypertension.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2016, Issue 12), MEDLINE Ovid (1946 to 8 December 2016), Embase Ovid (1947 to 8 December 2016), PubMed (1948 to 8 December 2016), LILACS (Latin American and Caribbean Health Sciences Literature Database) (1982 to 8 December 2016), International Pharmaceutical Abstracts Database (1970 to 8 December 2016), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com) (last searched 13 May 2013), ClinicalTrials.gov (www.clinicaltrials.gov) (searched 8 December 2016) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en) (searched 8 December 2016). We did not use any date or language restrictions in the electronic searches for trials.\nSelection criteria\nWe included randomized controlled trials which had compared any topical medication instillation technique with usual care or a different method of instillation of topical medication.\nData collection and analysis\nTwo review authors independently screened records from the searches for eligibility, assessed the risk of bias, and extracted data. We followed methods recommended by Cochrane.\nMain results\nWe identified two trials (122 eyes of 61 participants) that had evaluated a topical medication instillation technique. We also identified two ongoing trials. Both included trials used a within-person design and administered prostaglandin monotherapy for glaucoma or ocular hypertension. Because the trials evaluated different instillation techniques and assessed different outcomes, we performed no meta-analysis.\nOne trial, conducted in the US, evaluated the effect of eyelid closure (one and three minutes) versus no eyelid closure on lowering IOP. At one to two weeks' follow-up, reduction in IOP was similar in the eyelid closure group and the no eyelid closure group (mean difference (MD) -0.33 mmHg, 95% confidence interval (CI) -0.8 to 1.5; 51 participants; moderate-certainty evidence).\nThe second trial, conducted in Italy, evaluated the effect of using an absorbent cloth to wipe excess fluid after instillation (fluid removal) versus not using an absorbent cloth (no removal) on reducing dermatologic adverse events. At four months' follow-up, eyelashes were shorter among eyes in the fluid removal group compared with the no fluid removal group (MD -1.70 mm, 95% CI -3.46 to 0.06; 10 participants; low-certainty evidence). 
Fewer eyes showed skin hyperpigmentation in the eyelid region towards the nose in the fluid removal group compared with the no removal group (RR 0.07, 95% CI 0.01 to 0.84; 10 participants; low-certainty evidence); however, the difference was uncertain in the eyelid region towards the temples (RR 0.44, 95% CI 0.07 to 2.66; 10 participants; low-certainty evidence). The effect on hypertrichosis (excessive hair growth) was uncertain between groups (RR 1.00, 95% CI 0.17 to 5.98; 10 participants; low-certainty evidence).\nNeither trial reported other outcomes specified for this review, including the proportion of participants with IOP less than 21 mmHg; participant-reported outcomes related to the ease, convenience, and comfort of instillation techniques; physiologic measurements of systemic absorption; escalation of therapy; mean change in visual fields; optic nerve progression; mean change in best-corrected visual acuity; proportion in whom glaucoma developed; quality of life outcomes; or cost-effectiveness outcomes. Neither trial reported data at follow-up times of more than four months.\nAuthors' conclusions\nEvidence to evaluate the effectiveness of topical medication instillation techniques for treatment of glaucoma is lacking. It is unclear what, if any, effects instillation techniques have on topical medical therapy for glaucoma.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found two studies that compared different ways of putting in eye drops. Both studies used prostaglandin alone for glaucoma or ocular hypertension. One study, in the US, evaluated the effect of closing the eyes for one and three minutes after putting in eye drops on lowering IOP. The second study, in Italy, looked at whether wiping off the excess fluid, after putting in the eye drops, led to fewer changes in the skin surrounding the eye.\nAfter one to two weeks in the US study, closing the eyelid for one to three minutes after putting in the eye drops found no difference on IOP compared with not closing the eyelid for a specified amount of time. After four months in the Italian study, eyelashes were shorter when participants had wiped their eyes to remove excess fluid when compared with participants who did not wipe their eyes.\nWe found two ongoing studies that do not have results yet.\nBased on only two studies and uncertain results, it is unclear if different ways of putting in eye drops have any effect on people with glaucoma."
}
]
|
query-laysum | 3823 | [
{
"role": "user",
"content": "Abstract: Background\nNon-selective beta-blockers are recommended for the prevention of bleeding in people with cirrhosis, portal hypertension and gastroesophageal varices. Carvedilol is a non-selective beta-blocker with additional intrinsic alpha1-blocking effects, which may be superior to traditional, non-selective beta-blockers in reducing portal pressure and, therefore, in reducing the risk of upper gastrointestinal bleeding.\nObjectives\nTo assess the beneficial and harmful effects of carvedilol compared with traditional, non-selective beta-blockers for adults with cirrhosis and gastroesophageal varices.\nSearch methods\nWe combined searches in the Cochrane Hepato-Biliary's Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, and Science Citation Index with manual searches. The last search update was 08 May 2018.\nSelection criteria\nWe included randomised clinical trials comparing carvedilol versus traditional, non-selective beta-blockers, irrespective of publication status, blinding, or language. We included trials evaluating both primary and secondary prevention of upper gastrointestinal bleeding in adults with cirrhosis and verified gastroesophageal varices.\nData collection and analysis\nThree review authors (AZ, RJ and LH), independently extracted data. The primary outcome measures were mortality, upper gastrointestinal bleeding and serious adverse events. We undertook meta-analyses and presented results using risk ratios (RR) or mean differences (MD), both with 95% confidence intervals (CIs), and I2 values as a marker of heterogeneity. We assessed bias control using the Cochrane Hepato-Biliary domains and the quality of the evidence with GRADE.\nMain results\nEleven trials fulfilled our inclusion criteria. One trial did not report clinical outcomes. We included the remaining 10 randomised clinical trials, involving 810 participants with cirrhosis and oesophageal varices, in our analyses. The intervention comparisons were carvedilol versus propranolol (nine trials), or nadolol (one trial). Six trials were of short duration (mean 6 (range 1 to 12) weeks), while four were of longer duration (13.5 (6 to 30) months). Three trials evaluated primary prevention; three evaluated secondary prevention; while four evaluated both primary and secondary prevention. We classified all trials as at 'high risk of bias'. We gathered mortality data from seven trials involving 507 participants; no events occurred in four of these. Sixteen of 254 participants receiving carvedilol and 19 of 253 participants receiving propranolol or nadolol died (RR 0.86, 95% CI 0.48 to 1.53; I2 = 0%, low-quality evidence). There appeared to be no differences between carvedilol versus traditional, non-selective beta-blockers and the risks of upper gastrointestinal bleeding (RR 0.77, 95% CI 0.43 to 1.37; 810 participants; 10 trials; I2 = 45%, very low-quality evidence) and serious adverse events (RR 0.97, 95% CI 0.67 to 1.42; 810 participants; 10 trials; I2 = 14%, low-quality evidence). Significantly more deaths, episodes of upper gastrointestinal bleeding and serious adverse events occurred in the long-term trials but there was not enough information to determine whether there were differences between carvedilol and traditional, non-selective beta-blockers, by trial duration. There was also insufficient information to detect differences in the effects of these interventions in trials evaluating primary or secondary prevention. 
There appeared to be no differences in the risk of non-serious adverse events between carvedilol versus its comparators (RR 0.55, 95% CI 0.23 to 1.29; 596 participants; 6 trials; I2 = 88%; very low-quality evidence). Use of carvedilol was associated with a greater reduction in hepatic venous pressure gradient than traditional, non-selective beta-blockers both in absolute (MD -1.75 mmHg, 95% CI -2.60 to -0.89; 368 participants; 6 trials; I2 = 0%; low-quality evidence) and percentage terms (MD -8.02%, 95% CI -11.49% to -4.55%; 368 participants; 6 trials; I2 = 0%; low-quality evidence). However, we did not observe a concomitant reduction in the number of participants who failed to achieve a sufficient haemodynamic response (RR 0.76, 95% CI 0.57 to 1.02; 368 participants; 6 trials; I2 = 42%; very low-quality evidence) or in clinical outcomes.\nAuthors' conclusions\nWe found no clear beneficial or harmful effects of carvedilol versus traditional, non-selective beta-blockers on mortality, upper gastrointestinal bleeding, serious or non-serious adverse events despite the fact that carvedilol was more effective at reducing the hepatic venous pressure gradient. However, the evidence was of low or very low quality, and hence the findings are uncertain. Additional evidence is required from adequately powered, long-term, double-blind, randomised clinical trials, which evaluate both clinical and haemodynamic outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We classified the evidence as of low or very low quality, so further trials are needed."
}
]
|
query-laysum | 3824 | [
{
"role": "user",
"content": "Abstract: Background\nThe widespread use of mobile technologies can potentially expand the use of telemedicine approaches to facilitate communication between healthcare providers, this might increase access to specialist advice and improve patient health outcomes.\nObjectives\nTo assess the effects of mobile technologies versus usual care for supporting communication and consultations between healthcare providers on healthcare providers' performance, acceptability and satisfaction, healthcare use, patient health outcomes, acceptability and satisfaction, costs, and technical difficulties.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase and three other databases from 1 January 2000 to 22 July 2019. We searched clinical trials registries, checked references of relevant systematic reviews and included studies, and contacted topic experts.\nSelection criteria\nRandomised trials comparing mobile technologies to support healthcare provider to healthcare provider communication and consultations compared with usual care.\nData collection and analysis\nWe followed standard methodological procedures expected by Cochrane and EPOC. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included 19 trials (5766 participants when reported), most were conducted in high-income countries. The most frequently used mobile technology was a mobile phone, often accompanied by training if it was used to transfer digital images. Trials recruited participants with different conditions, and interventions varied in delivery, components, and frequency of contact. We judged most trials to have high risk of performance bias, and approximately half had a high risk of detection, attrition, and reporting biases. Two studies reported data on technical problems, reporting few difficulties.\nMobile technologies used by primary care providers to consult with hospital specialists\nWe assessed the certainty of evidence for this group of trials as moderate to low.\nMobile technologies:\n- probably make little or no difference to primary care providers following guidelines for people with chronic kidney disease (CKD; 1 trial, 47 general practices, 3004 participants);\n- probably reduce the time between presentation and management of individuals with skin conditions, people with symptoms requiring an ultrasound, or being referred for an appointment with a specialist after attending primary care (4 trials, 656 participants);\n- may reduce referrals and clinic visits among people with some skin conditions, and increase the likelihood of receiving retinopathy screening among people with diabetes, or an ultrasound in those referred with symptoms (9 trials, 4810 participants when reported);\n- probably make little or no difference to patient-reported quality of life and health-related quality of life (2 trials, 622 participants) or to clinician-assessed clinical recovery (2 trials, 769 participants) among individuals with skin conditions;\n- may make little or no difference to healthcare provider (2 trials, 378 participants) or participant acceptability and satisfaction (4 trials, 972 participants) when primary care providers consult with dermatologists;\n- may make little or no difference for total or expected costs per participant for adults with some skin conditions or CKD (6 trials, 5423 participants).\nMobile technologies used by emergency physicians to consult with hospital specialists about people attending the emergency department\nWe assessed the certainty of evidence for this 
group of trials as moderate.\nMobile technologies:\n- probably slightly reduce the consultation time between emergency physicians and hospital specialists (median difference −12 minutes, 95% CI −19 to −7; 1 trial, 345 participants);\n- probably reduce participants’ length of stay in the emergency department by a few minutes (median difference −30 minutes, 95% CI −37 to −25; 1 trial, 345 participants).\nWe did not identify trials that reported on providers' adherence, participants’ health status and well-being, healthcare provider and participant acceptability and satisfaction, or costs.\nMobile technologies used by community health workers or home-care workers to consult with clinic staff\nWe assessed the certainty of evidence for this group of trials as moderate to low.\nMobile technologies:\n- probably make little or no difference in the number of outpatient clinic and community nurse consultations for participants with diabetes or older individuals treated with home enteral nutrition (2 trials, 370 participants) or hospitalisation of older individuals treated with home enteral nutrition (1 trial, 188 participants);\n- may lead to little or no difference in mortality among people living with HIV (RR 0.82, 95% CI 0.55 to 1.22) or diabetes (RR 0.94, 95% CI 0.28 to 3.12) (2 trials, 1152 participants);\n- may make little or no difference to participants' disease activity or health-related quality of life in participants with rheumatoid arthritis (1 trial, 85 participants);\n- probably make little or no difference for participant acceptability and satisfaction for participants with diabetes and participants with rheumatoid arthritis (2 trials, 178 participants).\nWe did not identify any trials that reported on providers' adherence, time between presentation and management, healthcare provider acceptability and satisfaction, or costs.\nAuthors' conclusions\nOur confidence in the effect estimates is limited. Interventions including a mobile technology component to support healthcare provider to healthcare provider communication and management of care may reduce the time between presentation and management of the health condition when primary care providers or emergency physicians use them to consult with specialists, and may increase the likelihood of receiving a clinical examination among participants with diabetes and those who required an ultrasound. They may decrease the number of people attending primary care who are referred to secondary or tertiary care in some conditions, such as some skin conditions and CKD. There was little evidence of effects on participants' health status and well-being, satisfaction, or costs.\n\nGiven the provided abstract, please respond to the following query: \"When primary healthcare workers use mobile technologies to consult with hospital specialists, they\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- probably make little or no difference to whether guidelines are followed for people with chronic kidney disease, or to health status or quality of life of people with psoriasis.\n- may increase the likelihood of retinopathy screening for people with diabetes, or receiving an ultrasound if referred with symptoms, and may reduce referrals or a visit to the clinic for people with a skin condition or referred for clinic follow-up for different health problems.\n- may make little or no difference to healthcare worker or patient satisfaction, or to how much it costs to deliver health care."
}
]
|
query-laysum | 3825 | [
{
"role": "user",
"content": "Abstract: Background\nFunctional endoscopic sinus surgery (FESS) has become a well-established approach for treating patients with chronic rhinosinusitis (CRS) refractory to medical management. However, the surgical outcome may be compromised by postoperative inflammation, polyposis and adhesions, which often require subsequent intervention. Bioabsorbable, steroid-eluting sinus stents are inserted into the nose, sinuses or both following surgery to prevent stenosis of the sinus openings during the postoperative healing period. The slow release of corticosteroid aims to decrease mucosal oedema and expedite wound healing. Whether a steroid-eluting stent offers any beneficial effects in terms of improving sinonasal symptoms has not been systematically reviewed.\nObjectives\nTo assess the safety and efficacy of steroid-eluting sinus stent placement in CRS patients after FESS.\nSearch methods\nThe Cochrane Ear, Nose and Throat Disorders Group (CENTDG) Trials Search Co-ordinator searched the CENTDG Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 4); PubMed; EMBASE; CINAHL; Web of Science; Clinicaltrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 14 May 2015.\nSelection criteria\nWe included all randomised controlled trials (RCTs) comparing steroid-eluting sinus stents with non-steroid-eluting sinus stents, nasal packing or no treatment in adult CRS patients undergoing FESS.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe identified no RCTs that met our inclusion criteria. Among the 159 records retrieved using our search strategy, 21 trials had the potential to be included given that they had tested sinus stents, spacers and packing materials for patients with CRS undergoing FESS. However, we excluded these trials from the review because they met some but not all of the inclusion criteria.\nAuthors' conclusions\nWe are unable to provide evidence to establish whether steroid-eluting sinus stents have potential advantages and disadvantages for patients with CRS undergoing FESS. Future, high-quality RCTs are needed to assess whether or not steroid-eluting sinus stents confer any beneficial effects, over those of surgery alone, when compared to non-steroid sinus stents.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic rhinosinusitis (CRS) is a chronic inflammatory disease of the nasal and paranasal sinus mucosa. Although the cause of CRS remains unclear, multiple factors are likely to contribute to its development. CRS is treated with medical therapy (topical or oral steroids) and surgery. While medical treatments offer relief for most patients with CRS, functional endoscopic sinus surgery (FESS) is commonly performed. However, the surgical outcome of FESS can be compromised by postoperative inflammation, polyposis (formation of polyps) and adhesions of the nasal mucous lining, which often require subsequent interventions. Postoperative treatments are therefore required to achieve optimal long-term results. Steroid-eluting sinus stents are devices that are inserted into the nose, sinuses or both, and gradually release a steroid after FESS. They are used to prevent stenosis (narrowing) of the sinus openings and provide anti-inflammatory benefits during the postoperative healing period. However, it is unknown whether steroid-eluting sinus stents offer any beneficial effect in improving nasal symptoms of patients with CRS."
}
]
|
query-laysum | 3827 | [
{
"role": "user",
"content": "Abstract: Background\nGraves' disease is the most common cause of hyperthyroidism. Both antithyroid medications and radioiodine are commonly used treatments but their frequency of use varies between regions and countries. Despite the commonness of the diagnosis, any possible differences between the two treatments with respect to long-term outcomes remain unknown.\nObjectives\nTo assess the effects of radioiodine therapy versus antithyroid medications for Graves' disease.\nSearch methods\nWe performed a systematic literature search in the Cochrane Library, MEDLINE and EMBASE and the trials registers ICTRP Search Portal and ClinicalTrials.gov. The date of the last search was September 2015 for all databases.\nSelection criteria\nRandomised controlled trials (RCTs) comparing the effects of radioiodine therapy versus antithyroid medications for Graves' disease with at least two years follow-up.\nData collection and analysis\nTwo authors independently screened titles and abstracts for relevance. One author carried out screening for inclusion, data extraction and 'Risk of bias' assessment and a second author checked this. We presented data not suitable for meta-analysis as descriptive data. We analysed the overall quality of evidence utilising the GRADE instrument.\nMain results\nWe included two RCTs involving 425 adult participants with Graves' disease in this review. Altogether 204 participants were randomised to radioiodine therapy and 221 to methimazole therapy. A single dose of radioiodine was administered. The duration of methimazole medication was 18 months. The period of follow-up was at least two years, depending on the outcome measured. For most outcome measures risk of bias was low; for the outcomes health-related quality of life as well as development and worsening of Graves' ophthalmopathy risks of performance bias and detection bias were high in at least one of the two RCTs.\nHealth-related quality of life appeared to be similar in the radioiodine and methimazole treatment groups, however no quantitative data were reported (425 participants; 2 trials; low quality evidence). The development and worsening of Graves' ophthalmopathy was observed in 76 of 202 radioiodine-treated participants (38%) and in 40 of 215 methimazole-treated participants (19%): risk ratio (RR) 1.94 (95% confidence interval (CI) 1.40 to 2.70); 417 participants; 2 trials; low quality evidence. A total of 35% to 56% of radioiodine-treated participants and 42% of participants treated with methimazole were smokers, which is associated with the risk of worsening or development of Graves' ophthalmopathy. Euthyroidism was not achieved by any participant being treated with radioiodine compared with 64/68 (94%) of participants after methimazole treatment (112 participants; 1 trial). In this trial thyroxine therapy was not introduced early in both treatment arms to avoid hypothyroidism. Recurrence of hyperthyroidism (relapse) in favour of radioiodine treatment showed a RR of 0.20 (95% CI 0.01 to 2.66); P value = 0.22; 417 participants; 2 trials; very low quality evidence. Heterogeneity was high (I² = 91%) and the RRs were 0.61 or 0.06 with non-overlapping CIs. 
Adverse events other than development of worsening of Graves' ophthalmopathy for radioiodine therapy were hypothyroidism (39 of 41 participants (95%) compared with 0% of participants receiving methimazole, however thyroxine treatment to avoid hypothyroidism was not introduced early in the radioiodine group - 104 participants; 1 trial; very low quality evidence) and drug-related adverse events for methimazole treatment (23 of 215 participants (11%) reported adverse effects likely related to methimazole therapy - 215 participants; 2 trials; very low quality evidence). The outcome measures all-cause mortality and bone mineral density were not reported in the included trials. One trial (174 participants) reported socioeconomic effects: costs based on the official hospital reimbursement system in Sweden for patients without relapse and methimazole treatment were USD 1126/1164 (young/older methimazole group) and for radioiodine treatment USD 1862. Costs for patients with relapse and methimazole treatment were USD 2284/1972 (young/older methimazole group) and for radioiodine treatment USD 2760.\nAuthors' conclusions\nThe only antithyroid drug investigated in the two included trials was methimazole, which might limit the applicability of our findings with regard to other compounds such as propylthiouracil. Results from two RCTs suggest that radioiodine treatment is associated with an increased risk of Graves' ophthalmopathy. Our findings suggest some benefit from radioiodine treatment for recurrence of hyperthyroidism (relapse) but there is uncertainty about the magnitude of the effect size.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Is radioiodine treatment better than antithyroid drugs for the therapy of Graves' disease?"
}
]
|
query-laysum | 3828 | [
{
"role": "user",
"content": "Abstract: Background\nChronic pain is common in adults, and often has a detrimental impact upon physical ability, well-being, and quality of life. Previous reviews have shown that certain antidepressants may be effective in reducing pain with some benefit in improving patients’ global impression of change for certain chronic pain conditions. However, there has not been a network meta-analysis (NMA) examining all antidepressants across all chronic pain conditions.\nObjectives\nTo assess the comparative efficacy and safety of antidepressants for adults with chronic pain (except headache).\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, LILACS, AMED and PsycINFO databases, and clinical trials registries, for randomised controlled trials (RCTs) of antidepressants for chronic pain conditions in January 2022.\nSelection criteria\nWe included RCTs that examined antidepressants for chronic pain against any comparator. If the comparator was placebo, another medication, another antidepressant, or the same antidepressant at different doses, then we required the study to be double-blind. We included RCTs with active comparators that were unable to be double-blinded (e.g. psychotherapy) but rated them as high risk of bias. We excluded RCTs where the follow-up was less than two weeks and those with fewer than 10 participants in each arm.\nData collection and analysis\nTwo review authors separately screened, data extracted, and judged risk of bias. We synthesised the data using Bayesian NMA and pairwise meta-analyses for each outcome and ranked the antidepressants in terms of their effectiveness using the surface under the cumulative ranking curve (SUCRA). We primarily used Confidence in Meta-Analysis (CINeMA) and Risk of Bias due to Missing Evidence in Network meta-analysis (ROB-MEN) to assess the certainty of the evidence. Where it was not possible to use CINeMA and ROB-MEN due to the complexity of the networks, we used GRADE to assess the certainty of the evidence.\nOur primary outcomes were substantial (50%) pain relief, pain intensity, mood, and adverse events. Our secondary outcomes were moderate pain relief (30%), physical function, sleep, quality of life, Patient Global Impression of Change (PGIC), serious adverse events, and withdrawal.\nMain results\nThis review and NMA included 176 studies with a total of 28,664 participants. The majority of studies were placebo-controlled (83), and parallel−armed (141). The most common pain conditions examined were fibromyalgia (59 studies); neuropathic pain (49 studies) and musculoskeletal pain (40 studies). The average length of RCTs was 10 weeks. Seven studies provided no useable data and were omitted from the NMA. The majority of studies measured short-term outcomes only and excluded people with low mood and other mental health conditions.\nAcross efficacy outcomes, duloxetine was consistently the highest-ranked antidepressant with moderate- to high-certainty evidence. In duloxetine studies, standard dose was equally efficacious as high dose for the majority of outcomes. Milnacipran was often ranked as the next most efficacious antidepressant, although the certainty of evidence was lower than that of duloxetine. 
There was insufficient evidence to draw robust conclusions for the efficacy and safety of any other antidepressant for chronic pain.\nPrimary efficacy outcomes\nDuloxetine standard dose (60 mg) showed a small to moderate effect for substantial pain relief (odds ratio (OR) 1.91, 95% confidence interval (CI) 1.69 to 2.17; 16 studies, 4490 participants; moderate-certainty evidence) and continuous pain intensity (standardised mean difference (SMD) −0.31, 95% CI −0.39 to −0.24; 18 studies, 4959 participants; moderate-certainty evidence). For pain intensity, milnacipran standard dose (100 mg) also showed a small effect (SMD −0.22, 95% CI −0.39 to 0.06; 4 studies, 1866 participants; moderate-certainty evidence). Mirtazapine (30 mg) had a moderate effect on mood (SMD −0.5, 95% CI −0.78 to −0.22; 1 study, 406 participants; low-certainty evidence), while duloxetine showed a small effect (SMD −0.16, 95% CI −0.22 to −0.1; 26 studies, 7952 participants; moderate-certainty evidence); however it is important to note that most studies excluded participants with mental health conditions, and so average anxiety and depression scores tended to be in the 'normal' or 'subclinical' ranges at baseline already.\nSecondary efficacy outcomes\nAcross all secondary efficacy outcomes (moderate pain relief, physical function, sleep, quality of life, and PGIC), duloxetine and milnacipran were the highest-ranked antidepressants with moderate-certainty evidence, although effects were small. For both duloxetine and milnacipran, standard doses were as efficacious as high doses.\nSafety\nThere was very low-certainty evidence for all safety outcomes (adverse events, serious adverse events, and withdrawal) across all antidepressants. We cannot draw any reliable conclusions from the NMAs for these outcomes.\nAuthors' conclusions\nOur review and NMAs show that despite studies investigating 25 different antidepressants, the only antidepressant we are certain about for the treatment of chronic pain is duloxetine. Duloxetine was moderately efficacious across all outcomes at standard dose. There is also promising evidence for milnacipran, although further high-quality research is needed to be confident in these conclusions. Evidence for all other antidepressants was low certainty. As RCTs excluded people with low mood, we were unable to establish the effects of antidepressants for people with chronic pain and depression. There is currently no reliable evidence for the long-term efficacy of any antidepressant, and no reliable evidence for the safety of antidepressants for chronic pain at any time point.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• We are only confident in the effectiveness of one antidepressant: duloxetine. We found that a standard dose (60 mg) was effective, and that there is no benefit to using a higher dose.\n• We are uncertain about unwanted effects for any antidepressant as the data for this were very poor. Future research should address this.\n• In clinical practice for chronic pain, a standard dose of duloxetine may be considered before trying other antidepressants.\n• Adopting a person-centred approach is critical. Pain is a very individual experience and certain medications may work for people even while the research evidence is inconclusive or unavailable. Future studies should last longer and focus on unwanted effects of antidepressants."
}
]
|
query-laysum | 3829 | [
{
"role": "user",
"content": "Abstract: Background\nOsteoporosis is characterized by low bone mass and micro-architectural deterioration of bone tissue leading to increased bone fragility. In people with beta-thalassaemia, osteoporosis represents an important cause of morbidity and is due to a number of factors. First, ineffective erythropoiesis causes bone marrow expansion, leading to reduced trabecular bone tissue with cortical thinning. Second, excessive iron loading causes endocrine dysfunction, leading to increased bone turnover. Lastly, disease complications can result in physical inactivity, with a subsequent reduction in optimal bone mineralization.\nTreatments for osteoporosis in people with beta-thalassaemia include bisphosphonates (e.g. clodronate, pamidronate, alendronate; with or without hormone replacement therapy (HRT)), calcitonin, calcium, zinc supplementation, hydroxyurea, and HRT alone (for preventing hypogonadism). Denosumab, a fully human monoclonal antibody, inhibits bone resorption and increases bone mineral density (BMD). Finally, strontium ranelate simultaneously promotes bone formation and inhibits bone resorption, thus contributing to a net gain in BMD, increased bone strength, and reduced fracture risk.\nThis is an update of a previously published Cochrane Review.\nObjectives\nTo review the evidence on the efficacy and safety of treatment for osteoporosis in people with beta-thalassaemia.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group’s Haemoglobinopathies Trials Register, which includes references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. We also searched online trial registries.\nDate of most recent search: 4 August 2022.\nSelection criteria\nRandomized controlled trials (RCTs) in people with beta-thalassaemia with: a BMD Z score below −2 standard deviations (SDs) for children aged under 15 years, adult males (aged 15 to 50 years) and premenopausal females aged over 15 years; or a BMD T score below −2.5 SDs for postmenopausal females and males aged over 50 years.\nData collection and analysis\nTwo review authors assessed the eligibility and risk of bias of the included RCTs, and extracted and analysed data. We assessed the certainty of the evidence using GRADE.\nMain results\nWe included six RCTs (298 participants). Active interventions included bisphosphonates (3 trials, 169 participants), zinc supplementation (1 trial, 42 participants), denosumab (1 trial, 63 participants), and strontium ranelate (1 trial, 24 participants). The certainty of the evidence ranged from moderate to very low and was downgraded mainly due to concerns surrounding imprecision (low participant numbers), but also risk of bias issues related to randomization, allocation concealment, and blinding.\nBisphosphonates versus placebo or no treatment\nTwo RCTs compared bisphosphonates to placebo or no treatment. After two years, one trial (25 participants) found that alendronate and clodronate may increase BMD Z score compared to placebo at the femoral neck (mean difference (MD) 0.40, 95% confidence interval (CI) 0.22 to 0.58) and the lumbar spine (MD 0.14, 95% CI 0.05 to 0.23). One trial (118 participants) reported that neridronate compared to no treatment may increase BMD at the lumbar spine and total hip at six and 12 months; for the femoral neck, the study found increased BMD in the neridronate group at 12 months only. 
All results were of very low-certainty.\nThere were no major adverse effects of treatment. Participants in the neridronate group reported less back pain; we considered this representative of improved quality of life (QoL), though the certainty of the evidence was very low. One participant in the neridronate trial (116 participants) sustained multiple fractures as a result of a traffic accident. No trials reported BMD at the wrist or mobility.\nDifferent doses of bisphosphonate compared\nOne 12-month trial (26 participants) assessed different doses of pamidronate (60 mg versus 30 mg) and found a difference in BMD Z score favouring the 60 mg dose at the lumbar spine (MD 0.43, 95% CI 0.10 to 0.76) and forearm (MD 0.87, 95% CI 0.23 to 1.51), but no difference at the femoral neck (very low-certainty evidence). This trial did not report fracture incidence, mobility, QoL, or adverse effects of treatment.\nZinc versus placebo\nOne trial (42 participants) showed zinc supplementation probably increased BMD Z score compared to placebo at the lumbar spine after 12 months (MD 0.15, 95% CI 0.10 to 0.20; 37 participants) and 18 months (MD 0.34, 95% CI 0.28 to 0.40; 32 participants); the same was true for BMD at the hip after 12 months (MD 0.15, 95% CI 0.11 to 0.19; 37 participants) and 18 months (MD 0.26, 95% CI 0.21 to 0.31; 32 participants). The evidence for these results was of moderate certainty. The trial did not report BMD at the wrist, fracture incidence, mobility, QoL, or adverse effects of treatment.\nDenosumab versus placebo\nBased on one trial (63 participants), we are unsure about the effect of denosumab on BMD Z score at the lumbar spine, femoral neck, and wrist joint after 12 months compared to placebo (low-certainty evidence). This trial did not report fracture incidence, mobility, QoL, or adverse effects of treatment, but the investigators reported a reduction in bone pain measured on a visual analogue scale in the denosumab group after 12 months of treatment compared to placebo (MD −2.40 cm, 95% CI −3.80 to −1.00).\nStrontium ranelate\nOne trial (24 participants) only narratively reported an increase in BMD Z score at the lumbar spine in the intervention group and no corresponding change in the control group (very low-certainty evidence). This trial also found a reduction in back pain measured on a visual analogue scale after 24 months in the strontium ranelate group compared to the placebo group (MD −0.70 cm (95% CI −1.30 to −0.10); we considered this measure representative of improved quality of life.\nAuthors' conclusions\nBisphosphonates may increase BMD at the femoral neck, lumbar spine, and forearm compared to placebo after two years' therapy. Zinc supplementation probably increases BMD at the lumbar spine and hip after 12 months. Denosumab may make little or no difference to BMD, and we are uncertain about the effect of strontium on BMD.\nWe recommend further long-term RCTs on different bisphosphonates and zinc supplementation therapies in people with beta-thalassaemia-associated osteoporosis.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": ""
}
]
|
query-laysum | 3830 | [
{
"role": "user",
"content": "Abstract: Background\nWomen between the ages of 15 and 24 years have high rates of unintended pregnancy; over half of women in this age group want to avoid pregnancy. However, women under age 25 years have higher typical contraceptive failure rates within the first 12 months of use than older women. High discontinuation rates may also be a problem in this population. Concern that adolescents and young women will not find hormonal or intrauterine contraceptives acceptable or effective might deter healthcare providers from recommending these contraceptive methods.\nObjectives\nTo compare the contraceptive failure (pregnancy) rates and to examine the continuation rates for hormonal and intrauterine contraception among young women aged 25 years and younger.\nSearch methods\nWe searched until 4 August 2015 for randomized controlled trials (RCTs) that compared hormonal or intrauterine methods of contraception in women aged 25 years and younger. Computerized databases included the Cochrane Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, POPLINE, CINAHL, and LILACS. We also searched for current trials via ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP).\nSelection criteria\nWe considered RCTs in any language that reported the contraceptive failure rates for hormonal or intrauterine contraceptive methods, when compared with another contraceptive method, for women aged 25 years and younger. The other contraceptive method could have been another intrauterine contraceptive, another hormonal contraceptive or different dose of the same method, or a non-hormonal contraceptive. Treatment duration must have been at least three months. Eligible trials had to include the primary outcome of contraceptive failure rate (pregnancy). The secondary outcome was contraceptive continuation rate.\nData collection and analysis\nOne author conducted the primary data extraction and entered the information into Review Manager. Another author performed an independent data extraction and verified the initial entry. For dichotomous outcomes, we computed the Mantel-Haenszel odds ratio (OR) with 95% confidence interval (CI). Because of disparate interventions and outcome measures, we did not conduct meta-analysis.\nMain results\nFive trials met the inclusion criteria. The studies included a total of 1503 women, with a mean of 301 participants. The trials compared the following contraceptives: combined oral contraceptive (COC) versus transdermal contraceptive patch, vaginal contraceptive ring, or levonorgestrel intrauterine system 20 µg/day (LNG-IUS 20); LNG-IUS 12 µg/day (LNG-IUS 12) versus LNG-IUS 16 µg/day (LNG-IUS 16); and LNG-IUS 20 versus the copper T380A intrauterine device (IUD). In the trials comparing two different types of methods, the study arms did not differ significantly for contraceptive efficacy or continuation. The sample sizes were small for two of those studies. The only significant outcome was that a COC group had a higher proportion of women who discontinued for 'other personal reasons' compared with the group assigned to the LNG-IUS 20 (OR 0.27, 95% CI 0.09 to 0.85), which may have little clinic relevance. The trial comparing LNG-IUS 12 versus LNG-IUS 16 showed similar efficacy over one and three years. In three trials that examined different LNG-IUS, continuation was at least 75% at 6 to 36 months.\nAuthors' conclusions\nWe considered the overall quality of evidence to be moderate to low. Limitations were due to trial design or limited reporting. 
Different doses in the LNG-IUS did not appear to influence efficacy over three years. In another study, continuation of the LNG-IUS appeared at least as high as that for the COC. The current evidence was insufficient to compare efficacy and continuation rates for hormonal and intrauterine contraceptive methods in women aged 25 years and younger.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found five trials that enrolled had a total of 1503 women. Some studies looked at different types of IUC, while others compared pills versus a vaginal ring, skin patch, or IUC. No study showed any major difference between the groups in pregnancy or continued use. Some of the trials were too small to find a difference. Women kept using IUC at least as long as pills in one study. IUC may be useful for women in this age group. Studies of different birth control with more women would help determine which methods work the best for young women. Overall, the quality of the results was moderate to low."
}
]
|
query-laysum | 3831 | [
{
"role": "user",
"content": "Abstract: Background\nProviding effective positive pressure ventilation is considered to be the single most important component of successful neonatal resuscitation. Ventilation is frequently initiated manually with bag and face mask (BMV) followed by endotracheal intubation if respiratory depression continues. These techniques may be difficult to perform successfully resulting in prolonged resuscitation or neonatal asphyxia. The laryngeal mask airway (LMA) may achieve initial ventilation and successful resuscitation faster than a bag-mask device or endotracheal intubation.\nObjectives\nAmong newborns requiring positive pressure ventilation for cardio-pulmonary resuscitation, is LMA more effective than BMV or endotracheal intubation for successful resuscitation? When BMV is either insufficient or ineffective, is effective positive pressure ventilation and successful resuscitation achieved faster with the LMA compared to endotracheal intubation?\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2017, Issue 1), MEDLINE via PubMed (1966 to 15 February 2017), Embase (1980 to 15 February 2017), and CINAHL (1982 to 15 February 2017). We also searched clinical trials registers, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials that compared LMA for neonatal resuscitation with either BMV or endotracheal intubation and reported on any outcomes related to neonatal resuscitation specified in this review.\nData collection and analysis\nTwo review authors independently evaluated studies for risk of bias assessments, and extracted data using Cochrane Neonatal criteria. Categorical treatment effects were described as relative risks and continuous treatment effects were described as the mean difference, with 95% confidence intervals (95% CI) of estimates.\nMain results\nWe included seven trials that involved a total of 794 infants. Five studies compared LMA with BMV and three studies compared LMA with endotracheal intubation. We added six new studies for this update (754 infants).\nLMA was associated with less need for endotracheal intubation than BMV (typical risk ratio (RR) 0.24, 95% CI 0.12 to 0.47 and typical risk difference (RD) -0.14, 95% CI -0.14 to -0.06; 5 studies, 661 infants; moderate-quality evidence) and shorter ventilation time (mean difference (MD) -18.90 seconds, 95% CI -24.35 to -13.44; 4 studies, 610 infants). Babies resuscitated with LMA were less likely to require admission to neonatal intensive care unit (NICU) (typical RR 0.60, 95% CI 0.40 to 0.90 and typical RD -0.18, 95% CI -0.31 to -0.04; 2 studies,191 infants; moderate-quality evidence). There was no difference in deaths or hypoxic ischaemic encephalopathy (HIE) events.\nCompared to endotracheal intubation, there were no clinically significant differences in insertion time or failure to correctly insert the device (typical RR 0.95, 95% CI 0.17 to 5.42; 3 studies, 158 infants; very low-quality evidence). There was no difference in deaths or HIE events.\nAuthors' conclusions\nLMA can achieve effective ventilation during neonatal resuscitation in a time frame consistent with current neonatal resuscitation guidelines. 
Compared to BMV, LMA is more effective in terms of shorter resuscitation and ventilation times, and less need for endotracheal intubation (low- to moderate-quality evidence). However, in trials comparing LMA with BMV, over 80% of infants in both trial arms responded to the allocated intervention. In studies that allowed LMA rescue of infants failing with BMV, it was possible to avoid intubation in the majority. It is important that the clinical community uses the LMA more proactively to provide effective ventilation when a newborn is not responding to BMV, before attempting intubation or initiating chest compressions.\nLMA was found to offer comparable efficacy to endotracheal intubation (very low- to low-quality evidence). It therefore offers an alternative airway device when attempts at endotracheal intubation are unsuccessful during resuscitation.\nMost studies enrolled infants with birth weight over 1500 g or 34 or more weeks' gestation. As such, there is a lack of evidence to support LMA use in more premature infants.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most newborns are vigorous at birth, but a small number need to be helped with breathing (assisted ventilation) in the delivery room. Infants who do not have effective breathing soon after birth can become severely depressed. Providing rapid effective ventilation in the delivery room is very important. Ventilation is often started using a manually-pumped oxygen bag to force air into a close-fitting face mask held over the infant's nose and mouth. If breathing remains depressed after using the manual resuscitation bag, a tube is placed directly into the infant's large airway (endotracheal intubation). Bag and mask ventilation and endotracheal intubation may not be possible when infants have airway obstructions or head and face abnormalities that obstruct the normal flow of air into their lungs and/or obstructing the view of the airway by the medical personnel attempting intubation. The LMA is an alternative to bag and mask ventilation and endotracheal intubation. LMA is a small mask attached to a silicone tube fitted into the throat to provide positive pressure ventilation into the airway."
}
]
|
query-laysum | 3832 | [
{
"role": "user",
"content": "Abstract: Background\nLaparoscopic appendectomy is amongst the most common general surgical procedures performed in the developed world. Arguably, the most critical part of this procedure is effective closure of the appendix stump to prevent catastrophic intra-abdominal complications from a faecal leak into the abdominal cavity.\n\nA variety of methods to close the appendix stump are used worldwide; these can be broadly divided into traditional ligatures (such as intracorporeal or extracorporeal ligatures or Roeder loops) and mechanical devices (such as stapling devices, clips, or electrothermal devices). However, the optimal method remains unclear.\nObjectives\nTo compare all surgical techniques now used for appendix stump closure during laparoscopic appendectomy.\nSearch methods\nIn June 2017, we searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 6) in the Cochrane Library, MEDLINE Ovid (1946 to 14 June 2017), Embase Ovid (1974 to 14 June 2017), Science Citation Index - Expanded (14 June 2017), China Biological Medicine Database (CBM), the World Health Organization International Trials Registry Platform search portal, ClinicalTrials.gov, Current Controlled Trials, the Chinese Clinical Trials Register, and the EU Clinical Trials Register (all in June 2017). We searched the reference lists of relevant publications as well as meeting abstracts and Conference Proceedings Citation Index to look for additional relevant clinical trials.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared mechanical appendix stump closure (stapler, clips, or electrothermal devices) versus ligation (Endoloop, Roeder loop, or intracorporeal knot techniques) for uncomplicated appendicitis.\nData collection and analysis\nTwo review authors identified trials for inclusion, collected data, and assessed risk of bias independently. We performed the meta-analysis using Review Manager 5. We calculated the odds ratio (OR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes, with 95% confidence intervals (CIs).\nMain results\nWe included eight randomised studies encompassing 850 participants. Five studies compared titanium clips versus ligature, two studies compared an endoscopic stapler device versus ligature, and one study compared an endoscopic stapler device, titanium clips, and ligature. In our analyses of primary outcomes, we found no differences in total complications (OR 0.97, 95% CI 0.27 to 3.50, 8 RCTs, very low-quality evidence), intraoperative complications (OR 0.93, 95% CI 0.34 to 2.55, 8 RCTs, very low-quality evidence), or postoperative complications (OR 0.80, 95% CI 0.21 to 3.13, 8 RCTs, very low-quality evidence) between ligature and all types of mechanical devices. However, our analyses of secondary outcomes revealed that use of mechanical devices saved approximately nine minutes of total operating time when compared with use of a ligature (mean difference (MD) -9.04 minutes, 95% CI -12.97 to -5.11 minutes, 8 RCTs, very low-quality evidence). However, this finding did not translate into a clinically or statistically significant reduction in inpatient hospital stay (MD 0.02 days, 95% CI -0.12 to 0.17 days, 8 RCTs, very low-quality evidence). Available information was insufficient for reliable comparison of total hospital costs and postoperative pain/quality of life between the two approaches. Overall, evidence across all analyses was of very low quality, with substantial potential for confounding factors. 
Given the limitations of all studies in terms of bias and the low quality of available evidence, a clear conclusion regarding superiority of any one particular type of mechanical device over another is not possible.\nAuthors' conclusions\nEvidence is insufficient at present to advocate omission of conventional ligature-based appendix stump closure in favour of any single type of mechanical device over another in uncomplicated appendicitis.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence used to derive our conclusions was generally of low quality. The studies we included for each analysis were vulnerable to different types of bias and contained inconsistencies and imprecision in their results due to small numbers of participants and events in each included study arm. It is likely that future research will substantially change our conclusions; further studies in this field are needed."
}
]
|
query-laysum | 3833 | [
{
"role": "user",
"content": "Abstract: Background\nCurrent recommendations for people with irritable bowel syndrome (IBS) to partake in physical activity are based on low-level evidence, do not incorporate evidence from all available randomised controlled trials (RCTs) and provide little information regarding potential adverse effects.\nObjectives\nTo assess the benefits and harms of physical activity interventions in adults diagnosed with irritable bowel syndrome and to explore possible effect moderators including type, setting and nature of physical activity interventions.\nSearch methods\nWe searched nine electronic databases including CENTRAL, MEDLINE and Embase to 5 November 2021. We handsearched reference lists and sought unpublished studies through trial registries.\nSelection criteria\nWe included RCTs involving adults (aged 18 years or older) diagnosed with IBS and conducted in any setting comparing a physical activity intervention with no intervention, usual care or wait-list control group or another physical activity intervention group and assessing a validated measure of symptoms, quality of life or bowel movement.\nData collection and analysis\nAt least two review authors independently selected studies for inclusion, extracted study data, and performed risk of bias and GRADE assessments to assess the certainty of evidence. We pooled studies that evaluated similar outcomes using a random-effects meta-analysis, and synthesised data from other studies narratively.\nMain results\nWe included 11 RCTs with data for 622 participants. Most (10/11) were set in high- or middle- to high-income countries, with five involving supervised physical activity, three unsupervised activity and three a mix of supervised and unsupervised activity. No trial was at low risk of bias. Four trials specified a minimally important difference for at least one assessed outcome measure. Data for 10 trials were obtained from published journal articles, with data for one obtained from an unpublished Masters degree thesis.\nIrritable bowel syndrome symptoms\nSix RCTs assessed the effectiveness of a physical activity intervention compared with usual care on global symptoms of IBS. Meta-analysis of five studies showed an observed improvement in reported symptoms following physical activity (standardised mean difference (SMD) –0.93, 95% confidence interval (CI) –1.44 to –0.42; 185 participants). We rated the certainty of evidence for this outcome as very low due to unclear and high risk of bias, inconsistency and imprecision from sparse data. This means physical activity may improve IBS symptoms but the evidence is very uncertain. The results of the remaining study supported the meta-analysis but were at unclear risk of bias and sample size was small.\nTwo studies assessed the effectiveness of a yoga intervention compared with a walking intervention on global IBS symptoms. Meta-analysis of these two studies found no conclusive evidence of an effect of yoga compared with walking on IBS symptoms (SMD –1.16, 95% CI –3.93 to 1.62; 124 participants). We rated the certainty of evidence as very low, meaning the evidence is very uncertain about the effect of yoga interventions compared with walking interventions on IBS symptoms.\nTwo studies assessed the effectiveness of a physical activity intervention (yoga) compared with medication. 
One reported no observed difference in global IBS symptoms, though CIs were wide, suggesting uncertainty in the observed estimates and risk of bias was high (MD –1.20, 95% CI –2.65 to 0.25; 21 participants). We excluded IBS symptom data for the remaining study as it used a non-validated method.\nOne study compared a yoga intervention with a dietary intervention and reported an observed improvement in symptoms with both interventions but neither intervention was superior to the other.\nQuality of life\nFive RCTs assessed the impact of physical activity on self-reported quality of life compared with usual care. Meta-analysis of data from four studies found no improvement in quality of life following a physical activity intervention (SMD 1.17, 95% CI –0.30 to 2.64; 134 participants; very low certainty due to risk of bias, inconsistency and imprecision). We rated the certainty of evidence as very low, meaning the evidence is very uncertain about the effect of physical activity interventions on quality-of-life outcomes in people with IBS.\nOne study assessed the impact on quality of life of a yoga intervention compared with walking and observed an improvement in the yoga group (MD 53.45, 95% CI 38.85 to 68.05; 97 participants ).\nOne study reported no observed difference in quality of life between a yoga and a dietary intervention.\nAbdominal pain\nTwo trials assessed the impact of physical activity compared with usual care on reported abdominal pain. Meta-analysis found no improvement in abdominal pain with physical activity compared with usual care (SMD 0.01, 95% CI –0.48 to 0.50; 64 participants). We rated the certainty of the evidence as very low due to risk of bias and imprecision, meaning the evidence is very uncertain about the effect of physical activity interventions on abdominal pain in people with IBS.\nOne study assessing the impact of a yoga intervention compared with walking advice reported no observed differences between groups on abdominal pain.\nOne study comparing a yoga intervention with a dietary intervention found neither intervention had a more beneficial impact than the other and both interventions did not conclusively reduce abdominal pain.\nThere was insufficient evidence to adequately assess adverse effects associated with physical activity due to a lack of reporting in trials. One study reported a musculoskeletal injury in a yoga intervention group but this did not result in withdrawal from the study.\nAuthors' conclusions\nFindings from a small body of evidence suggest that physical activity comprising of yoga, treadmill exercise or support to increase physical activity may improve symptoms but not quality of life or abdominal pain in people diagnosed with IBS but we have little confidence in these conclusions due to the very low certainty of evidence.\nThe numbers of reported adverse events were low and the certainty of these findings was very low for all comparisons, so no conclusions can be drawn.\nDiscussions with patients considering physical activity as part of symptom management should address the uncertainty in the evidence to ensure fully informed decisions. If deemed sufficiently important to patients and healthcare providers, higher quality research is needed to enable more certain conclusions.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 11 randomised controlled trials involving 622 people with irritable bowel syndrome. The biggest trial was in 102 people and the smallest was in 20 people. Six trials were conducted in high-income countries worldwide and two were conducted in a low- to middle-income country. One study included people with irritable bowel syndrome where constipation was the main stool pattern, two included people where diarrhoea was the main stool pattern and five included people with a mixed stool pattern.\nFive trials assessed a yoga physical activity intervention, three assessed advice to increase physical activity levels, two assessed treadmill exercise, and one assessed a Qigong (breathing and slow movements) intervention. Seven trials involved a 12-week intervention period, two involved an eight-week period and one a six-week period. The longest trial lasted six months."
}
]
|
query-laysum | 3834 | [
{
"role": "user",
"content": "Abstract: Background\nVery preterm infants often require respiratory support and are therefore exposed to an increased risk of bronchopulmonary dysplasia (chronic lung disease) and later neurodevelopmental disability. Caffeine is widely used to prevent and treat apnea (temporal cessation of breathing) associated with prematurity and facilitate extubation. Though widely recognized dosage regimes have been used for decades, higher doses have been suggested to further improve neonatal outcomes. However, observational studies suggest that higher doses may be associated with harm.\nObjectives\nTo determine the effects of higher versus standard doses of caffeine on mortality and major neurodevelopmental disability in preterm infants with (or at risk of) apnea, or peri-extubation.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and clinicaltrials.gov in May 2022. The reference lists of relevant articles were also checked to identify additional studies.\nSelection criteria\nWe included randomized (RCTs), quasi-RCTs and cluster-RCTs, comparing high-dose to standard-dose strategies in preterm infants. High-dose strategies were defined as a high-loading dose (more than 20 mg of caffeine citrate/kg) or a high-maintenance dose (more than 10 mg of caffeine citrate/kg/day). Standard-dose strategies were defined as a standard-loading dose (20 mg or less of caffeine citrate/kg) or a standard-maintenance dose (10 mg or less of caffeine citrate/kg/day). We specified three additional comparisons according to the indication for commencing caffeine: 1) prevention trials, i.e. preterm infants born at less than 34 weeks' gestation, who are at risk for apnea; 2) treatment trials, i.e. preterm infants born at less than 37 weeks' gestation, with signs of apnea; 3) extubation trials: preterm infants born at less than 34 weeks' gestation, prior to planned extubation.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We evaluated treatment effects using a fixed-effect model with risk ratio (RR) for categorical data and mean, standard deviation (SD), and mean difference (MD) for continuous data.\nMain results\nWe included seven trials enrolling 894 very preterm infants (reported in Comparison 1, i.e. 
any indication).\nTwo studies included infants for apnea prevention (Comparison 2), four studies for apnea treatment (Comparison 3) and two for extubation management (Comparison 4); in one study, indication for caffeine administration was both apnea treatment and extubation management (reported in Comparison 1, Comparison 3 and Comparison 4).\nIn the high-dose groups, loading and maintenance caffeine doses ranged from 30 mg/kg to 80 mg/kg, and 12 mg/kg to 30 mg/kg, respectively; in the standard-dose groups, loading and maintenance caffeine doses ranged from 6 mg/kg to 25 mg/kg, and 3 mg/kg to 10 mg/kg, respectively.\nTwo studies had three study groups: infants were randomized in three different doses (two of them matched our definition of high dose and one matched our definition of standard dose); high-dose caffeine and standard-dose caffeine were compared to theophylline administration (the latter is included in a separate review).\nSix of the seven included studies compared high-loading and high-maintenance dose to standard-loading and standard-maintenance dose, whereas in one study standard-loading dose and high-maintenance dose was compared to standard-loading dose and standard-maintenance dose.\nHigh-dose caffeine strategies (administration for any indication) may have little or no effect on mortality prior to hospital discharge (risk ratio (RR) 0.86, 95% confidence of interval (CI) 0.53 to 1.38; risk difference (RD) -0.01, 95% CI -0.05 to 0.03; I² for RR and RD = 0%; 5 studies, 723 participants; low-certainty evidence). Only one study enrolling 74 infants reported major neurodevelopmental disability in children aged three to five years (RR 0.79, 95% CI 0.51 to 1.24; RD -0.15, 95% CI -0.42 to 0.13; 46 participants; very low-certainty evidence). No studies reported the outcome mortality or major neurodevelopmental disability in children aged 18 to 24 months and 3 to 5 years. Five studies reported bronchopulmonary dysplasia at 36 weeks' postmenstrual age (RR 0.75, 95% CI 0.60 to 0.94; RD -0.08, 95% CI -0.15 to -0.02; number needed to benefit (NNTB) = 13; I² for RR and RD = 0%; 723 participants; moderate-certainty evidence). High-dose caffeine strategies may have little or no effect on side effects (RR 1.66, 95% CI 0.86 to 3.23; RD 0.03, 95% CI -0.01 to 0.07; I² for RR and RD = 0%; 5 studies, 593 participants; low-certainty evidence). The evidence is very uncertain for duration of hospital stay (data reported in three studies could not be pooled in meta-analysis because outcomes were expressed as medians and interquartile ranges) and seizures (RR 1.42, 95% CI 0.79 to 2.53; RD 0.14, 95% CI -0.09 to 0.36; 1 study, 74 participants; very low-certainty evidence).\nWe identified three ongoing trials conducted in China, Egypt, and New Zealand.\nAuthors' conclusions\nHigh-dose caffeine strategies in preterm infants may have little or no effect on reducing mortality prior to hospital discharge or side effects. We are very uncertain whether high-dose caffeine strategies improves major neurodevelopmental disability, duration of hospital stay or seizures. No studies reported the outcome mortality or major neurodevelopmental disability in children aged 18 to 24 months and 3 to 5 years. High-dose caffeine strategies probably reduce the rate of bronchopulmonary dysplasia.\nRecently completed and future trials should report long-term neurodevelopmental outcome of children exposed to different caffeine dosing strategies in the neonatal period. 
Data from extremely preterm infants are needed, as this population is exposed to the highest risk for mortality and morbidity. However, caution is required when administering high doses in the first hours of life, when the risk for intracranial bleeding is highest. Observational studies might provide useful information regarding potential harms of the highest doses.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if high-caffeine dose was better than standard-dose caffeine in newborns born too early to improve:\n• mortality prior to hospital discharge;\n• long-term development at age 18 to 24 months of age and at 3 to 5 years.\nWe also wanted to find out if high-caffeine dose was associated with any unwanted effects."
}
]
|
query-laysum | 3835 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review previously published in 2018.\nThe incidence of seizures following supratentorial craniotomy for non-traumatic pathology has been estimated to be between 15% to 20%; however, the risk of experiencing a seizure appears to vary from 3% to 92% over a five-year period. Postoperative seizures can precipitate the development of epilepsy; seizures are most likely to occur within the first month of cranial surgery. The use of antiepileptic drugs (AEDs) administered pre- or postoperatively to prevent seizures following cranial surgery has been investigated in a number of randomised controlled trials (RCTs).\nObjectives\nTo determine the efficacy and safety of AEDs when used prophylactically in people undergoing craniotomy and to examine which AEDs are most effective.\nSearch methods\nFor the latest update we searched the following databases on 29 September 2019: Cochrane Epilepsy Group Specialized Register, CENTRAL, MEDLINE, ClinicalTrials.gov, and the WHO International Clinical Trials Registry Platform (ICTRP). We did not apply any language restrictions.\nSelection criteria\nWe included RCTs of people with no history of epilepsy who were undergoing craniotomy for either therapeutic or diagnostic reasons. We included trials with adequate randomisation methods and concealment; these could either be blinded or unblinded parallel trials. We did not stipulate a minimum treatment period, and we included trials using active drugs or placebo as a control group.\nData collection and analysis\nThree review authors (JW, JG, YD) independently selected trials for inclusion, extracted data and assessed risk of bias. We resolved any disagreements through discussion. Outcomes investigated included the number of participants experiencing seizures (early (occurring within first week following craniotomy), and late (occurring after first week following craniotomy)), the number of deaths and the number of people experiencing disability and adverse effects. Due to the heterogeneous nature of the trials, we did not combine data from the included trials in a meta-analysis; we presented the findings of the review in narrative format. Visual comparisons of outcomes are presented in forest plots.\nMain results\nWe included 10 RCTs (N = 1815), which were published between 1983 and 2015. Three trials compared a single AED (phenytoin) with placebo or no treatment. One, three-armed trial compared two AEDs (phenytoin, carbamazepine) with no treatment. A second three-armed trial compared phenytoin, phenobarbital with no treatment. Of these five trials comparing AEDs with placebo or no treatment, two trials reported a statistically significant advantage for AED treatment compared to controls for early seizure occurrence; all other comparisons showed no clear or statistically significant differences between AEDs and control treatment. None of the trials that were head-to-head comparisons of AEDs (phenytoin versus sodium valproate, phenytoin versus phenobarbital, levetiracetam versus phenytoin, zonisamide versus phenobarbital) reported any statistically significant differences between treatments for either early or late seizure occurrence.\nOnly five trials reported incidences of death. One trial reported statistically significantly fewer deaths in the carbamazepine and no-treatment groups compared with the phenytoin group after 24 months of treatment, but not after six months of treatment. 
Incidences of adverse effects of treatment were poorly reported; however, three trials did show that significantly more adverse events occurred on phenytoin compared to valproate, placebo, or no treatment. No trials reported any results relating to functional outcomes such as disability.\nWe considered the evidence to be of low certainty for all reported outcomes due to methodological issues and variability of comparisons made in the trials.\nAuthors' conclusions\nThere is limited, low-certainty evidence to suggest that AED treatment administered prophylactically is either effective or not effective in the prevention of postcraniotomy (early or late) seizures. The current evidence base is limited due to the different methodologies employed in the trials and inconsistencies in the reporting of outcomes including deaths and adverse events. Further evidence from good-quality, contemporary trials is required in order to assess the clinical effectiveness of prophylactic AED treatment compared to placebo or no treatment, or other AEDs in preventing postcraniotomy seizures in this select group of patients.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to September 2019. Ten trials met our inclusion criteria, and included 1815 people. Three trials compared phenytoin (an AED) with a placebo (sham) or no treatment. One trial compared the AEDs phenytoin or carbamazepine with no treatment. One trial compared the AEDs phenytoin or phenobarbital with no treatment. Five other trials were head-to-head trials (where one drug is directly compared against another drug) of AEDs (phenytoin versus valproate; zonisamide versus phenobarbital and levetiracetam versus phenytoin)."
}
]
|
query-laysum | 3836 | [
{
"role": "user",
"content": "Abstract: Background\nMechanical ventilation often causes major distress and anxiety in patients. The sensation of breathlessness, frequent suctioning, inability to talk, uncertainty regarding surroundings or condition, discomfort, isolation from others, and fear contribute to high levels of anxiety. Side effects of analgesia and sedation may lead to the prolongation of mechanical ventilation and, subsequently, to a longer length of hospitalization and increased cost. Therefore, non-pharmacological interventions should be considered for anxiety and stress management. Music interventions have been used to reduce anxiety and distress and improve physiological functioning in medical patients; however, their efficacy for mechanically ventilated patients needs to be evaluated. This review was originally published in 2010 and was updated in 2014.\nObjectives\nTo update the previously published review that examined the effects of music therapy or music medicine interventions (as defined by the authors) on anxiety and other outcomes in mechanically ventilated patients. Specifically, the following objectives are addressed in this review.\n1. To conduct a meta-analysis to compare the effects of participation in standard care combined with music therapy or music medicine interventions with standard care alone.\n2. To compare the effects of patient-selected music with researcher-selected music.\n3. To compare the effects of different types of music interventions (e.g., music therapy versus music medicine).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 2), MEDLINE (1950 to March 2014), CINAHL (1980 to March 2014), EMBASE (1980 to March 2014), PsycINFO (1967 to March 2014), LILACS (1982 to March 2014), Science Citation Index (1980 to March 2014), www.musictherapyworld.net (1 March 2008) (database is no longer functional), CAIRSS for Music (to March 2014), Proquest Digital Dissertations (1980 to March 2014), ClinicalTrials.gov (2000 to March 2014), Current Controlled Trials (1998 to March 2014), the National Research Register (2000 to September 2007), and NIH CRISP (all to March 2014). We handsearched music therapy journals and reference lists, and contacted relevant experts to identify unpublished manuscripts. There was no language restriction. The original search was performed in January 2010.\nSelection criteria\nWe included all randomized and quasi-randomized controlled trials that compared music interventions and standard care with standard care alone for mechanically ventilated patients.\nData collection and analysis\nTwo review authors independently extracted the data and assessed the methodological quality of included studies. We contacted authors to obtain missing data where needed. Where possible, results for continuous outcomes were presented in meta-analyses using mean differences and standardized mean differences. Post-test scores were used. In cases of significant baseline difference, we used change scores. For dichotomous outcomes, we presented the results as risk ratios.\nMain results\nWe identified six new trials for this update. In total, the evidence for this review rests on 14 trials (805 participants). Music listening was the main intervention used, and 13 of the studies did not include a trained music therapist. Results indicated that music listening may be beneficial for anxiety reduction in mechanically ventilated patients. 
Specifically, music listening resulted, on average, in an anxiety reduction that was 1.11 standard deviation units greater (95% CI -1.75 to -0.47, P = 0.0006) than in the standard care group. This is considered a large and clinically significant effect. Findings indicated that listening to music consistently reduced respiratory rate and systolic blood pressure, suggesting a relaxation response. Furthermore, one large-scale study reported greater reductions in sedative and analgesic intake in the music listening group compared to the control group, and two other studies reported trends for reduction in sedative and analgesic intake for the music group. One study found significantly higher sedation scores in the music listening group compared to the control group.\nNo strong evidence was found for reduction in diastolic blood pressure and mean arterial pressure. Furthermore, inconsistent results were found for reduction in heart rate with seven studies reporting greater heart rate reductions in the music listening group and one study a slightly greater reduction in the control group. Music listening did not improve oxygen saturation levels.\nFour studies examined the effects of music listening on hormone levels but the results were mixed and no conclusions could be drawn.\nNo strong evidence was found for an effect of music listening on mortality rate but this evidence rested on only two trials.\nMost trials were assessed to be at high risk of bias because of lack of blinding. Blinding of outcome assessors is often impossible in music therapy and music medicine studies that use subjective outcomes, unless the music intervention is compared to another treatment intervention. Because of the high risk of bias, these results need to be interpreted with caution.\nNo studies could be found that examined the effects of music interventions on quality of life, patient satisfaction, post-discharge outcomes, or cost-effectiveness. No adverse events were identified.\nAuthors' conclusions\nThis updated systematic review indicates that music listening may have a beneficial effect on anxiety in mechanically ventilated patients. These findings are consistent with the findings of three other Cochrane systematic reviews on the use of music interventions for anxiety reduction in medical patients. The review furthermore suggests that music listening consistently reduces respiratory rate and systolic blood pressure. Finally, results indicate a possible beneficial impact on the consumption of sedatives and analgesics. Therefore, we conclude that music interventions may provide a viable anxiety management option to mechanically ventilated patients.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence on the effect of music interventions compared to standard care on anxiety and other outcomes in mechanically ventilated patients."
}
]
|
query-laysum | 3837 | [
{
"role": "user",
"content": "Abstract: Background\nThis review updates part of an earlier Cochrane Review titled \"Pregabalin for acute and chronic pain in adults\", and considers only neuropathic pain (pain from damage to nervous tissue). Antiepileptic drugs have long been used in pain management. Pregabalin is an antiepileptic drug used in management of chronic pain conditions.\nObjectives\nTo assess the analgesic efficacy and adverse effects of pregabalin for chronic neuropathic pain in adults.\nSearch methods\nWe searched CENTRAL, MEDLINE, and Embase for randomised controlled trials from January 2009 to April 2018, online clinical trials registries, and reference lists.\nSelection criteria\nWe included randomised, double-blind trials of two weeks' duration or longer, comparing pregabalin (any route of administration) with placebo or another active treatment for neuropathic pain, with participant-reported pain assessment.\nData collection and analysis\nTwo review authors independently extracted data and assessed trial quality and biases. Primary outcomes were: at least 30% pain intensity reduction over baseline; much or very much improved on the Patient Global Impression of Change (PGIC) Scale (moderate benefit); at least 50% pain intensity reduction; or very much improved on PGIC (substantial benefit). We calculated risk ratio (RR) and number needed to treat for an additional beneficial (NNTB) or harmful outcome (NNTH). We assessed the quality of the evidence using GRADE.\nMain results\nWe included 45 studies lasting 2 to 16 weeks, with 11,906 participants - 68% from 31 new studies. Oral pregabalin doses of 150 mg, 300 mg, and 600 mg daily were compared with placebo. Postherpetic neuralgia, painful diabetic neuropathy, and mixed neuropathic pain predominated (85% of participants). High risk of bias was due mainly to small study size (nine studies), but many studies had unclear risk of bias, mainly due to incomplete outcome data, size, and allocation concealment.\nPostherpetic neuralgia: More participants had at least 30% pain intensity reduction with pregabalin 300 mg than with placebo (50% vs 25%; RR 2.1 (95% confidence interval (CI) 1.6 to 2.6); NNTB 3.9 (3.0 to 5.6); 3 studies, 589 participants, moderate-quality evidence), and more had at least 50% pain intensity reduction (32% vs 13%; RR 2.5 (95% CI 1.9 to 3.4); NNTB 5.3 (3.9 to 8.1); 4 studies, 713 participants, moderate-quality evidence). More participants had at least 30% pain intensity reduction with pregabalin 600 mg than with placebo (62% vs 24%; RR 2.5 (95% CI 2.0 to 3.2); NNTB 2.7 (2.2 to 3.7); 3 studies, 537 participants, moderate-quality evidence), and more had at least 50% pain intensity reduction (41% vs 15%; RR 2.7 (95% CI 2.0 to 3.5); NNTB 3.9 (3.1 to 5.5); 4 studies, 732 participants, moderate-quality evidence). 
Somnolence and dizziness were more common with pregabalin than with placebo (moderate-quality evidence): somnolence 300 mg 16% versus 5.5%, 600 mg 25% versus 5.8%; dizziness 300 mg 29% versus 8.1%, 600 mg 35% versus 8.8%.\nPainful diabetic neuropathy: More participants had at least 30% pain intensity reduction with pregabalin 300 mg than with placebo (47% vs 42%; RR 1.1 (95% CI 1.01 to 1.2); NNTB 22 (12 to 200); 8 studies, 2320 participants, moderate-quality evidence), more had at least 50% pain intensity reduction (31% vs 24%; RR 1.3 (95% CI 1.2 to 1.5); NNTB 22 (12 to 200); 11 studies, 2931 participants, moderate-quality evidence), and more had PGIC much or very much improved (51% vs 30%; RR 1.8 (95% CI 1.5 to 2.0); NNTB 4.9 (3.8 to 6.9); 5 studies, 1050 participants, moderate-quality evidence). More participants had at least 30% pain intensity reduction with pregabalin 600 mg than with placebo (63% vs 52%; RR 1.2 (95% CI 1.04 to 1.4); NNTB 9.6 (5.5 to 41); 2 studies, 611 participants, low-quality evidence), and more had at least 50% pain intensity reduction (41% vs 28%; RR 1.4 (95% CI 1.2 to 1.7); NNTB 7.8 (5.4 to 14); 5 studies, 1015 participants, low-quality evidence). Somnolence and dizziness were more common with pregabalin than with placebo (moderate-quality evidence): somnolence 300 mg 11% versus 3.1%, 600 mg 15% versus 4.5%; dizziness 300 mg 13% versus 3.8%, 600 mg 22% versus 4.4%.\nMixed or unclassified post-traumatic neuropathic pain: More participants had at least 30% pain intensity reduction with pregabalin 600 mg than with placebo (48% vs 36%; RR 1.2 (1.1 to 1.4); NNTB 8.2 (5.7 to 15); 4 studies, 1367 participants, low-quality evidence), and more had at least 50% pain intensity reduction (34% vs 20%; RR 1.5 (1.2 to 1.9); NNTB 7.2 (5.4 to 11); 4 studies, 1367 participants, moderate-quality evidence). Somnolence (12% vs 3.9%) and dizziness (23% vs 6.2%) were more common with pregabalin.\nCentral neuropathic pain: More participants had at least 30% pain intensity reduction with pregabalin 600 mg than with placebo (44% vs 28%; RR 1.6 (1.3 to 2.0); NNTB 5.9 (4.1 to 11); 3 studies, 562 participants, low-quality evidence) and at least 50% pain intensity reduction (26% vs 15%; RR 1.7 (1.2 to 2.3); NNTB 9.8 (6.0 to 28); 3 studies, 562 participants, low-quality evidence). Somnolence (32% vs 11%) and dizziness (23% vs 8.6%) were more common with pregabalin.\nOther neuropathic pain conditions: Studies show no evidence of benefit for 600 mg pregabalin in HIV neuropathy (2 studies, 674 participants, moderate-quality evidence) and limited evidence of benefit in neuropathic back pain or sciatica, neuropathic cancer pain, or polyneuropathy.\nSerious adverse events, all conditions: Serious adverse events were no more common with placebo than with pregabalin 300 mg (3.1% vs 2.6%; RR 1.2 (95% CI 0.8 to 1.7); 17 studies, 4112 participants, high-quality evidence) or pregabalin 600 mg (3.4% vs 3.4%; RR 1.1 (95% CI 0.8 to 1.5); 16 studies, 3995 participants, high-quality evidence).\nAuthors' conclusions\nEvidence shows efficacy of pregabalin in postherpetic neuralgia, painful diabetic neuralgia, and mixed or unclassified post-traumatic neuropathic pain, and absence of efficacy in HIV neuropathy; evidence of efficacy in central neuropathic pain is inadequate. Some people will derive substantial benefit with pregabalin; more will have moderate benefit, but many will have no benefit or will discontinue treatment. 
There were no substantial changes since the 2009 review.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Neuropathic pain comes from damage to the nervous system. It is different from pain messages that are carried along healthy nerves from damaged tissue (for example, from a fall or a cut, or from an arthritic knee). Neuropathic pain is often treated by different medicines (drugs) from those used for pain from damaged tissue, which we often think of as painkillers. Medicines that are sometimes used to treat depression or epilepsy can be effective in some people with neuropathic pain. One of these is pregabalin. Our definition of a good result was a high level of pain relief and ability to keep taking the medicine without side effects making people stop."
}
]
|
query-laysum | 3838 | [
{
"role": "user",
"content": "Abstract: Background\nCurrent guidelines recommend oral anticoagulation therapy for patients with atrial fibrillation (AF) with one or more risk factors for stroke; however, anticoagulation control (time in therapeutic range (TTR)) with vitamin K antagonists (VKAs) is dependent on many factors. Educational and behavioural interventions may impact patients' ability to maintain their international normalised ratio (INR) control. This is an updated version of the original review first published in 2013.\nObjectives\nTo evaluate the effects of educational and behavioural interventions for oral anticoagulation therapy (OAT) on TTR in patients with AF.\nSearch methods\nWe updated searches from the previous review by searching the Cochrane Central Register of Controlled Trials (CENTRAL) and the Database of Abstracts of Reviews of Effects (DARE) in The Cochrane Library (January 2016, Issue 1), MEDLINE Ovid (1949 to February week 1 2016), EMBASE Classic + EMBASE Ovid (1980 to Week 7 2016), PsycINFO Ovid (1806 to Week 1 February 2016) and CINAHL Plus with Full Text EBSCO (1937 to 16/02/2016). We applied no language restrictions.\nSelection criteria\nWe included randomised controlled trials evaluating the effect of any educational and behavioural intervention compared with usual care, no intervention, or intervention in combination with other self-management techniques among adults with AF who were eligible for, or currently receiving, OAT.\nData collection and analysis\nTwo of the review authors independently selected studies and extracted data. Risk of bias was assessed using the Cochrane 'Risk of bias' tool. We included outcome data on TTR, decision conflict (patient's uncertainty in making health-related decisions), percentage of INRs in the therapeutic range, major bleeding, stroke and thromboembolic events, patient knowledge, patient satisfaction, quality of life (QoL), beliefs about medication, illness perceptions, and anxiety and depression. We pooled data for three outcomes - TTR, anxiety and depression, and decision conflict - and reported mean differences (MD). Where insufficient data were present to conduct a meta-analysis, we reported effect sizes and confidence intervals (CI) from the included studies. We evaluated the quality of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework.\nMain results\nEleven trials with a total of 2246 AF patients (ranging from 14 to 712 by study) were included within the review. Studies included education, decision aids, and self-monitoring plus education interventions. The effect of self-monitoring plus education on TTR was uncertain compared with usual care (MD 6.31, 95% CI -5.63 to 18.25, I2 = 0%, 2 trials, 69 participants, very low-quality evidence). We found small but positive effects of education on anxiety (MD -0.62, 95% CI -1.21 to -0.04, I2 = 0%, 2 trials, 587 participants, low-quality evidence) and depression (MD -0.74, 95% CI -1.34 to -0.14, I2 = 0%, 2 trials, 587 participants, low-quality evidence) compared with usual care. The effect of decision aids on decision conflict favoured usual care (MD -0.1, 95% CI -0.17 to -0.02, I2 = 0%, 2 trials, 721 participants, low-quality evidence).\nAuthors' conclusions\nThis review demonstrates that there is insufficient evidence to draw definitive conclusions regarding the impact of educational or behavioural interventions on TTR in AF patients receiving OAT. 
Thus, more trials are needed to examine the impact of interventions on anticoagulation control in AF patients and the mechanisms by which they are successful. It is also important to explore the psychological implications for patients suffering from this long-term chronic condition.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This is an update of the original review first published in 2013. We searched scientific databases in February 2016 and found 11 randomised clinical trials including 2246 adults with atrial fibrillation who were taking oral anticoagulant medication. The trials we found compared education, decision aids, and self-monitoring plus education to usual care, over any length of time."
}
]
|
query-laysum | 3839 | [
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane review published in 2010 (Issue 7).\nCarcinoma of the uterine cervix is the second most common cancer and the third leading cause of cancer death among women. Radiotherapy has been used successfully to treat cervical cancer for nearly a century. The combination of external beam radiotherapy (EBRT) and intracavity brachytherapy (ICBT) has become a standard treatment for cervical cancer. Whether high dose rate (HDR) or low dose rate (LDR) brachytherapy improves outcomes in terms of local control rates, survival and complications for women with cervical cancer remains controversial.\nObjectives\nTo assess the efficacy and safety of HDR versus LDR ICBT in combination with EBRT for women with uterine cervical cancer.\nSearch methods\nWe searched the Cochrane Gynaecological Cancer Group Specialised Register and the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 1), MEDLINE (1966 to March 2014), EMBASE (1974 to March 2014), and the Chinese Biomedical Literature Database (CBM) (1978 to March 2014) for relevant original, published trials.\nSelection criteria\nRandomised controlled trials (RCTs) and quasi-RCTs that compared HDR with LDR ICBT, combined with EBRT, for women with locally advanced uterine cervical cancer.\nData collection and analysis\nTwo authors independently extracted the data using standardised forms. Primary outcome measures included overall survival (OS), relapse-free survival (RFS) and pelvic control rate, while secondary outcomes included rates of recurrence and complications.\nMain results\nFour studies involving 1265 women met the inclusion criteria. In our meta-analysis to compare HDR and LDR ICBT, the pooled risk ratios (RRs) were 0.95 (95% confidence interval (CI) 0.79 to 1.15), 0.93 (95% CI 0.84 to 1.04) and 0.79 (95% CI 0.52 to 1.20) for 3-, 5- and 10-year overall survival rates respectively; and 0.95 (95% CI 0.84 to 1.07) and 1.02 (0.88 to 1.19) for 5- and 10-year disease-specific survival (DSS) rates respectively. The RR for RFS was 1.04 (95% CI 0.71 to 1.52) and 0.96 (95% CI 0.81 to 1.14) at 3- and 5- years. For local control rates the RR was 0.95 (95% CI 0.86 to 1.05) and 0.95 (95% CI 0.87 to 1.05) at 3- and 5- years; with a RR of 1.09 (95% CI 0.83 to 1.43) for locoregional recurrence, 0.79 (95% CI 0.40 to 1.53) for local and distant recurrence, 2.23 (95% CI 0.78 to 6.34) for para-aortic lymph node metastasis, and 0.99 (95% CI 0.72 to 1.35) for distance metastasis. For bladder, rectosigmoid and small bowel complications, the RR was 1.33 (95% CI 0.53 to 3.34), 1.00 (95% CI 0.52 to 1.91) and 3.37 (95% CI 1.06 to 10.72) respectively. These results indicated that there were no significant differences except for increased small bowel complications with HDRs (P = 0.04).\nAuthors' conclusions\nSince the last version of this review, no new studies were identified for inclusion in this review to provide additional information. This review showed no significant differences between HDR and LDR ICBT when considering OS, DSS, RFS, local control rate, recurrence, metastasis and treatment related complications for women with cervical carcinoma. Due to some potential advantages of HDR ICBT (rigid immobilization, outpatient treatment, patient convenience, accuracy of source and applicator positioning, individualized treatment) we recommend the use of HDR ICBT for all clinical stages of cervix cancer. 
The overall risk of bias was high for the included studies as many of the items were either of high or unclear risk. The GRADE assessment of the quality of the evidence was low to moderate.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Despite the practical advantages of HDR, controversy still persists regarding the efficacy and safety of HDR compared to LDR brachytherapy. We conducted this review to assess the efficacy and safety of HDR versus LDR ICBT for women with locally advanced uterine cervical cancer."
}
]
|
query-laysum | 3840 | [
{
"role": "user",
"content": "Abstract: Background\nThe choice of antibiotic, and the use of single or combined therapy are controversial areas in the treatment of respiratory infection due to Pseudomonas aeruginosa in cystic fibrosis (CF). Advantages of combination therapy include wider range of modes of action, possible synergy and reduction of resistant organisms; advantages of monotherapy include lower cost, ease of administration and reduction of drug-related toxicity. Current evidence does not provide a clear answer and the use of intravenous antibiotic therapy in CF requires further evaluation. This is an update of a previously published review.\nObjectives\nTo assess the effectiveness of single compared to combination intravenous anti-pseudomonal antibiotic therapy for treating people with CF.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register, comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings.\nMost recent search of the Group's Trials Register: 07 October 2020.\nWe also searched online trials registries on 16 November 2020.\nSelection criteria\nRandomised controlled trials (RCTs) comparing a single intravenous anti-pseudomonal antibiotic with a combination of that antibiotic plus a second anti-pseudomonal antibiotic in people with CF.\nData collection and analysis\nTwo authors independently assessed trial quality and extracted data. We assessed the certainty of the data using GRADE.\nMain results\nWe identified 59 trials, of which we included eight trials (356 participants) comparing a single anti-pseudomonal agent to a combination of the same antibiotic and one other. There was a wide variation in the individual antibiotics used in each trial. In total, the trials included seven comparisons of a beta-lactam antibiotic (penicillin-related or third generation cephalosporin) with a beta-lactam-aminoglycoside combination and three comparisons of an aminoglycoside with a beta-lactam-aminoglycoside combination.\nThere was considerable heterogeneity amongst these trials, leading to difficulties in performing the review and interpreting the results. These results should be interpreted cautiously. Six of the included trials were published between 1977 and 1988; these were single-centre trials with flaws in the randomisation process and small sample size. Overall, the methodological quality was poor and the certainty of the evidence ranged from low to moderate.\nThe review did not find any differences between monotherapy and combination therapy in either the short term or in the long term for the outcomes of different lung function measures, bacteriological outcome measures, need for additional treatment, adverse effects, quality of life or symptom scores.\nAuthors' conclusions\nThe results of this review are inconclusive. The review raises important methodological issues. There is a need for an RCT which needs to be well-designed in terms of adequate randomisation allocation, blinding, power and long-term follow-up. Results need to be standardised to a consistent method of reporting, in order to validate the pooling of results from multiple trials.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence about the different effects of using a single intravenous (given directly into a vein) antibiotic compared to using a combination of intravenous antibiotics in people with cystic fibrosis infected with Pseudomonas aeruginosa."
}
]
|
query-laysum | 3841 | [
{
"role": "user",
"content": "Abstract: Background\nCOVID-19 illness is highly variable, ranging from infection with no symptoms through to pneumonia and life-threatening consequences. Symptoms such as fever, cough, or loss of sense of smell (anosmia) or taste (ageusia), can help flag early on if the disease is present. Such information could be used either to rule out COVID-19 disease, or to identify people who need to go for COVID-19 diagnostic tests. This is the second update of this review, which was first published in 2020.\nObjectives\nTo assess the diagnostic accuracy of signs and symptoms to determine if a person presenting in primary care or to hospital outpatient settings, such as the emergency department or dedicated COVID-19 clinics, has COVID-19.\nSearch methods\nWe undertook electronic searches up to 10 June 2021 in the University of Bern living search database. In addition, we checked repositories of COVID-19 publications. We used artificial intelligence text analysis to conduct an initial classification of documents. We did not apply any language restrictions.\nSelection criteria\nStudies were eligible if they included people with clinically suspected COVID-19, or recruited known cases with COVID-19 and also controls without COVID-19 from a single-gate cohort. Studies were eligible when they recruited people presenting to primary care or hospital outpatient settings. Studies that included people who contracted SARS-CoV-2 infection while admitted to hospital were not eligible. The minimum eligible sample size of studies was 10 participants. All signs and symptoms were eligible for this review, including individual signs and symptoms or combinations. We accepted a range of reference standards.\nData collection and analysis\nPairs of review authors independently selected all studies, at both title and abstract, and full-text stage. They resolved any disagreements by discussion with a third review author. Two review authors independently extracted data and assessed risk of bias using the QUADAS-2 checklist, and resolved disagreements by discussion with a third review author. Analyses were restricted to prospective studies only. We presented sensitivity and specificity in paired forest plots, in receiver operating characteristic (ROC) space and in dumbbell plots. We estimated summary parameters using a bivariate random-effects meta-analysis whenever five or more primary prospective studies were available, and whenever heterogeneity across studies was deemed acceptable.\nMain results\nWe identified 90 studies; for this update we focused on the results of 42 prospective studies with 52,608 participants. Prevalence of COVID-19 disease varied from 3.7% to 60.6% with a median of 27.4%. Thirty-five studies were set in emergency departments or outpatient test centres (46,878 participants), three in primary care settings (1230 participants), two in a mixed population of in- and outpatients in a paediatric hospital setting (493 participants), and two overlapping studies in nursing homes (4007 participants). The studies did not clearly distinguish mild COVID-19 disease from COVID-19 pneumonia, so we present the results for both conditions together.\nTwelve studies had a high risk of bias for selection of participants because they used a high level of preselection to decide whether reverse transcription polymerase chain reaction (RT-PCR) testing was needed, or because they enrolled a non-consecutive sample, or because they excluded individuals while they were part of the study base. 
We rated 36 of the 42 studies as high risk of bias for the index tests because there was little or no detail on how, by whom and when, the symptoms were measured. For most studies, eligibility for testing was dependent on the local case definition and testing criteria that were in effect at the time of the study, meaning most people who were included in studies had already been referred to health services based on the symptoms that we are evaluating in this review.\nThe applicability of the results of this review iteration improved in comparison with the previous reviews. This version has more studies of people presenting to ambulatory settings, which is where the majority of assessments for COVID-19 take place. Only three studies presented any data on children separately, and only one focused specifically on older adults.\nWe found data on 96 symptoms or combinations of signs and symptoms. Evidence on individual signs as diagnostic tests was rarely reported, so this review reports mainly on the diagnostic value of symptoms. Results were highly variable across studies. Most had very low sensitivity and high specificity. RT-PCR was the most often used reference standard (40/42 studies).\nOnly cough (11 studies) had a summary sensitivity above 50% (62.4%, 95% CI 50.6% to 72.9%)); its specificity was low (45.4%, 95% CI 33.5% to 57.9%)). Presence of fever had a sensitivity of 37.6% (95% CI 23.4% to 54.3%) and a specificity of 75.2% (95% CI 56.3% to 87.8%). The summary positive likelihood ratio of cough was 1.14 (95% CI 1.04 to 1.25) and that of fever 1.52 (95% CI 1.10 to 2.10). Sore throat had a summary positive likelihood ratio of 0.814 (95% CI 0.714 to 0.929), which means that its presence increases the probability of having an infectious disease other than COVID-19.\nDyspnoea (12 studies) and fatigue (8 studies) had a sensitivity of 23.3% (95% CI 16.4% to 31.9%) and 40.2% (95% CI 19.4% to 65.1%) respectively. Their specificity was 75.7% (95% CI 65.2% to 83.9%) and 73.6% (95% CI 48.4% to 89.3%). The summary positive likelihood ratio of dyspnoea was 0.96 (95% CI 0.83 to 1.11) and that of fatigue 1.52 (95% CI 1.21 to 1.91), which means that the presence of fatigue slightly increases the probability of having COVID-19.\nAnosmia alone (7 studies), ageusia alone (5 studies), and anosmia or ageusia (6 studies) had summary sensitivities below 50% but summary specificities over 90%. Anosmia had a summary sensitivity of 26.4% (95% CI 13.8% to 44.6%) and a specificity of 94.2% (95% CI 90.6% to 96.5%). Ageusia had a summary sensitivity of 23.2% (95% CI 10.6% to 43.3%) and a specificity of 92.6% (95% CI 83.1% to 97.0%). Anosmia or ageusia had a summary sensitivity of 39.2% (95% CI 26.5% to 53.6%) and a specificity of 92.1% (95% CI 84.5% to 96.2%). The summary positive likelihood ratios of anosmia alone and anosmia or ageusia were 4.55 (95% CI 3.46 to 5.97) and 4.99 (95% CI 3.22 to 7.75) respectively, which is just below our arbitrary definition of a 'red flag', that is, a positive likelihood ratio of at least 5. The summary positive likelihood ratio of ageusia alone was 3.14 (95% CI 1.79 to 5.51).\nTwenty-four studies assessed combinations of different signs and symptoms, mostly combining olfactory symptoms. 
By combining symptoms with other information such as contact or travel history, age, gender, and a local recent case detection rate, some multivariable prediction scores reached a sensitivity as high as 90%.\nAuthors' conclusions\nMost individual symptoms included in this review have poor diagnostic accuracy. Neither absence nor presence of symptoms are accurate enough to rule in or rule out the disease. The presence of anosmia or ageusia may be useful as a red flag for the presence of COVID-19. The presence of cough also supports further testing. There is currently no evidence to support further testing with PCR in any individuals presenting only with upper respiratory symptoms such as sore throat, coryza or rhinorrhoea.\nCombinations of symptoms with other readily available information such as contact or travel history, or the local recent case detection rate may prove more useful and should be further investigated in an unselected population presenting to primary care or hospital outpatient settings.\nThe diagnostic accuracy of symptoms for COVID-19 is moderate to low and any testing strategy using symptoms as selection mechanism will result in both large numbers of missed cases and large numbers of people requiring testing. Which one of these is minimised, is determined by the goal of COVID-19 testing strategies, that is, controlling the epidemic by isolating every possible case versus identifying those with clinically important disease so that they can be monitored or treated to optimise their prognosis. The former will require a testing strategy that uses very few symptoms as entry criterion for testing, the latter could focus on more specific symptoms such as fever and anosmia.\n\nGiven the provided abstract, please respond to the following query: \"What did we want to find out?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Symptoms and signs of COVID-19 are varied and may indicate other diseases, not just COVID-19. We wanted to know how accurate diagnosis of COVID-19 is, based on symptoms and signs from medical examination. We were interested in people with suspected COVID-19, who go to their doctor, outpatient test centres or hospital."
}
]
|
query-laysum | 3842 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is one of six looking at the primary medical management options for patients with chronic rhinosinusitis.\nChronic rhinosinusitis is common and is characterised by inflammation of the lining of the nose and paranasal sinuses leading to nasal blockage, rhinorrhoea, facial pressure/pain and loss of sense of smell. The condition can occur with or without nasal polyps. The use of topical (intranasal) corticosteroids has been widely advocated for the treatment of chronic rhinosinusitis given the belief that inflammation is a major component of this condition.\nObjectives\nTo assess the effects of intranasal corticosteroids in people with chronic rhinosinusitis.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 8); MEDLINE; EMBASE; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 11 August 2015.\nSelection criteria\nRandomised controlled trials (RCTs) with a follow-up period of at least three months comparing intranasal corticosteroids (e.g. beclomethasone dipropionate, triamcinolone acetonide, flunisolide, budesonide) against placebo or no treatment in patients with chronic rhinosinusitis.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were disease-specific health-related quality of life (HRQL), patient-reported disease severity and the commonest adverse event - epistaxis. Secondary outcomes included general HRQL, endoscopic nasal polyp score, computerised tomography (CT) scan score and the adverse events of local irritation or other systemic adverse events. We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nWe included 18 RCTs with a total of 2738 participants. Fourteen studies had participants with nasal polyps and four studies had participants without nasal polyps. Only one study was conducted in children.\nIntranasal corticosteroids versus placebo or no intervention\nOnly one study (20 adult participants without polyps) measured our primary outcome disease-specific HRQL using the Rhinosinusitis Outcome Measures-31 (RSOM-31). They reported no significant difference (numerical data not available) (very low quality evidence).\nOur second primary outcome, disease severity , was measured using the Chronic Sinusitis Survey in a second study (134 participants without polyps), which found no important difference (mean difference (MD) 2.84, 95% confidence interval (CI) -5.02 to 10.70; scale 0 to 100). Another study (chronic rhinosinusitis with nasal polyps) reported an increased chance of improvement in the intranasal corticosteroids group (RR 2.78, 95% CI 1.76 to 4.40; 109 participants). The quality of the evidence was low.\nSix studies provided data on at least two of the individualsymptoms used in the EPOS 2012 criteria to define chronic rhinosinusitis (nasal blockage, rhinorrhoea, loss of sense of smell and facial pain/pressure). When all four symptoms in the EPOS criteria were available on a scale of 0 to 3 (higher = more severe symptoms), the average MD in change from baseline was -0.26 (95% CI -0.37 to -0.15; 243 participants; two studies; low quality evidence). 
Although there were more studies and participants when only nasal blockage and rhinorrhoea were considered (MD -0.31, 95% CI -0.38 to -0.24; 1702 participants; six studies), the MD was almost identical to when loss of sense of smell was also considered (1345 participants, four studies; moderate quality evidence).\nWhen considering the results for the individual symptoms, benefit was shown in the intranasal corticosteroids group. The effect size was larger for nasal blockage (MD -0.40, 95% CI -0.52 to -0.29; 1702 participants; six studies) than for rhinorrhoea (MD -0.25, 95% CI -0.33 to -0.17; 1702 participants; six studies) or loss of sense of smell (MD -0.19, 95% CI -0.28 to -0.11; 1345 participants; four studies). There was heterogeneity in the analysis for facial pain/pressure (MD -0.27, 95% CI -0.56 to 0.02; 243 participants; two studies). The quality of the evidence was moderate for nasal blockage, rhinorrhoea and loss of sense of smell, but low for facial pain/pressure.\nThere was an increased risk of epistaxis with intranasal corticosteroids (risk ratio (RR) 2.74, 95% CI 1.88 to 4.00; 2508 participants; 13 studies; high quality evidence).\nConsidering our secondary outcome, general HRQL, one study (134 participants without polyps) measured this using the SF-36 and reported a statistically significant benefit only on the general health subscale. The quality of the evidence was very low.\nIt is unclear whether there is a difference in the risk of local irritation (RR 0.94, 95% CI 0.53 to 1.64; 2124 participants; 11 studies) (low quality evidence).\nNone of the studies treated or followed up patients long enough to provide meaningful data on the risk of osteoporosis or stunted growth (children).\nOther comparisons\nWe identified no other studies that compared intranasal corticosteroids plus co-intervention A versus placebo plus co-intervention A.\nAuthors' conclusions\nMost of the evidence available was from studies in patients with chronic rhinosinusitis with nasal polyps. There is little information about quality of life (very low quality evidence). For disease severity, there seems to be improvement for all symptoms (low quality evidence), a moderate-sized benefit for nasal blockage and a small benefit for rhinorrhoea (moderate quality evidence). The risk of epistaxis is increased (high quality evidence), but these data included all levels of severity; small streaks of blood may not be a major concern for patients. It is unclear whether there is a difference in the risk of local irritation (low quality evidence).\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Most of the evidence available was from studies in patients with chronic rhinosinusitis with nasal polyps. There is little information about quality of life and the quality of this evidence is very low. For disease severity, there seems to be improvement for all symptoms (low quality evidence), a moderate-sized benefit for nasal blockage and a small benefit for rhinorrhoea (moderate quality evidence). The risk of nosebleeds is increased (high quality evidence), but this included all levels of severity; for some patients small streaks of blood may not be a major concern. It is unclear whether there is a difference in the risk of local irritation (low quality evidence)."
}
]
|