dataset (stringclasses: 1 value) | id (stringlengths: 1-5) | messages (listlengths: 2-2) |
---|---|---|
query-laysum | 22844 | [
{
"role": "user",
"content": "Abstract: Background\nNeonatal parenteral nutrition may be delivered via peripheral cannulas or central venous catheters (umbilical or percutaneous). As the result of complications associated with umbilical catheters, many neonatal units prefer to use percutaneous catheters after initial stabilisation. Although they can be difficult to place, these catheters may be more stable than peripheral cannulae and require less frequent replacement. These delivery methods may be associated with different risks of adverse events, including acquired invasive infection and extravasation injury.\nObjectives\nTo determine the effects of infusion of parenteral nutrition via percutaneous central venous catheters versus peripheral cannulae on nutrient input, growth and development and complications among hospitalised neonates receiving parenteral nutrition in terms of adverse consequences such as bacteraemia or invasive fungal infection, cardiac tamponade or other extravasation injuries.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 5), MEDLINE (1966 to June 2015) and EMBASE (1980 to June 2015), as well as conference proceedings and previous reviews.\nSelection criteria\nRandomised controlled trials that compared delivery of intravenous fluids (primarily parenteral nutrition) via percutaneous central venous catheters versus peripheral cannulae in hospitalised neonates.\nData collection and analysis\nWe extracted data using standard methods of the Cochrane Neonatal Group, with separate evaluation of trial quality and data extraction by two review authors.\nMain results\nWe found six trials recruiting a total of 549 infants. One trial showed that use of a percutaneous central venous catheter was associated with a smaller deficit between prescribed and actual nutrient intake during the trial period (mean difference (MD) -7.1%, 95% confidence interval (CI) -11.02 to -3.2). Infants in the percutaneous central venous catheter group needed significantly fewer catheters/cannulae (MD -4.3, 95% CI -5.24, -3.43). Meta-analysis of data from all trials revealed no evidence of an effect on the incidence of invasive infection (typical risk ratio (RR) 0.95, 95% CI 0.72 to 1.25; typical risk difference (RD) -0.01, 95% CI -0.08 to 0.06).\nAuthors' conclusions\nData from one small trial suggest that use of percutaneous central venous catheters to deliver parenteral nutrition increases nutrient input. The significance of this in relation to long-term growth and developmental outcomes is unclear. Three trials suggest that use of percutaneous central venous catheters decreases the number of catheters/cannulae needed to deliver nutrition. No evidence suggests that percutaneous central venous catheter use increases risks of adverse events, particularly invasive infection, although none of the included trials was large enough to rule out an effect on uncommon severe adverse events such as pericardial effusion.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In newborn infants receiving parenteral nutrition, does delivery into deep veins (via percutaneous central venous catheters) versus superficial veins (via peripheral cannulae) affect nutrition, growth and development, and adverse events including infection or skin damage?"
}
] |
query-laysum | 22845 | [
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease comprises a group of genetic haemoglobin disorders. The predominant symptom associated with sickle cell disease is pain resulting from the occlusion of small blood vessels by abnormally 'sickle-shaped' red blood cells. There are other complications, including chronic organ damage and prolonged painful erection of the penis, known as priapism. Severity of sickle cell disease is variable, and treatment is usually symptomatic. Priapism affects up to half of all men with sickle cell disease, however, there is no consistency in treatment. We therefore need to know the best way of treating this complication in order to offer an effective interventional approach to all affected individuals. This is an update of a previously published review.\nObjectives\nTo assess the benefits and risks of different treatments for stuttering (repeated short episodes) and fulminant (lasting for six hours or more) priapism in sickle cell disease.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, which comprises references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. We also searched trial registries.\nDate of the most recent search of the Group's Haemoglobinopathies Trials Register: 09 September 2019.\nDate of most recent search of trial registries and of Embase: 01 October 2019.\nSelection criteria\nAll randomised or quasi-randomised controlled trials comparing non-surgical or surgical treatment with placebo or no treatment, or with another intervention for stuttering or fulminant priapism.\nData collection and analysis\nThe authors independently extracted data and assessed the risk of bias of the trials.\nMain results\nThree trials with 102 participants were identified and met the criteria for inclusion in this review. These trials compared stilboestrol to placebo, sildenafil to placebo and a four-arm trial which compared ephedrine or etilefrine to placebo and ranged in duration from two weeks to six months. All of the trials were conducted in an outpatient setting in Jamaica, Nigeria and the UK. None of the trials measured our first primary outcome, detumescence. However, all three trials reported on the reduction in frequency of stuttering priapism, our second primary outcome; and from the evidence included in this review, we are uncertain whether stilboestrol, etilefrine or ephedrine reduce the frequency of stuttering priapism as the certainty of the evidence has been assessed as very low. Additionally, we conclude that sildenafil may make little or no difference (low-certainty evidence). Two trials reported on immediate side effects and we are uncertain whether etilefrine or ephedrine reduce the occurrence of these (very low-certainty of evidence) and also conclude that sildenafil may make little or no difference in side effects (low-quality evidence). Given that all of the trials were at risk of bias and all had low participant numbers, we considered the certainty of the evidence to be low to very low.\nAuthors' conclusions\nThere is a lack of evidence for the benefits or risks of the different treatments for both stuttering and fulminant priapism in sickle cell disease. 
This systematic review has clearly identified the need for well-designed, adequately-powered, multicentre randomised controlled trials assessing the effectiveness of specific interventions for priapism in sickle cell disease.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We considered the quality of evidence to be low to very low as all of the trials were at risk of bias and all had low participant numbers."
}
] |
query-laysum | 22846 | [
{
"role": "user",
"content": "Abstract: Background\nMore than 400,000 cases of oropharyngeal squamous cell cancer (OPSCC) are diagnosed every year worldwide and this is rising. Much of the increase has been attributed to human papillomavirus (HPV). HPV-positive OPSCC patients are often younger and have significantly improved survival relative to HPV-negative patients. Traditional management of OPSCC has been with radiotherapy with or without chemotherapy, as this was shown to have similar survival to open surgery but with significantly lower morbidity. Techniques have evolved, however, with the development of computerised planning and intensity-modulated radiotherapy, and of minimally invasive surgical techniques. Acute and late toxicities associated with chemoradiotherapy are a significant burden for OPSCC patients and with an ever-younger cohort, any strategies that could decrease treatment-associated morbidity should be investigated.\nObjectives\nTo assess the effects of de-intensified adjuvant (chemo)radiotherapy in comparison to standard adjuvant (chemo)radiotherapy in patients treated with minimally invasive transoral surgery (transoral robotic surgery or transoral laser microsurgery) for resectable HPV-positive oropharyngeal squamous cell carcinoma.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Ovid Embase; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 26 April 2018.\nSelection criteria\nRandomised controlled trials (RCTs) in patients with carcinoma of the oropharynx (as defined by the World Health Organization classification C09, C10). Cancers included were primary HPV-positive squamous cell tumours originating from the oropharyngeal mucosa. Tumours were classified as T1-4a with or without nodal spread and with no evidence of distant metastatic spread. The intervention was minimally invasive transoral surgery followed by de-intensified adjuvant therapy (either omission of chemotherapy or reduced-dose radiotherapy). The comparator was minimally invasive transoral surgery followed by standard concurrent chemoradiotherapy or standard-dose radiotherapy. The treatments received were of curative intent and patients had not undergone any prior intervention, other than diagnostic biopsy.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were overall survival (disease-related survival was to be studied where possible) and disease-free survival, measured at one, two, three and five years. Our secondary outcomes included assessment of swallowing ability and voice, measured at one, six, 12 and 24 months. We planned to use GRADE to assess the quality of evidence for each outcome.\nMain results\nWe did not identify any completed RCTs that met our inclusion criteria. However, three eligible studies are in progress:\nADEPT is a phase III trial comparing postoperative radiotherapy with or without cisplatin in HPV-positive T1-4a OPSCC patients. Included patients must have received minimally invasive surgery and demonstrated extra-capsular spread from disease in the neck.\nECOG-E3311 is a phase II trial of treatment for HPV-positive locally advanced OPSCC (stages III-IVa + IVb without distant metastasis). Patients are stratified after minimally invasive surgery. 
Medium-risk patients are randomised to either standard or reduced-dose radiotherapy.\nPATHOS is a phase III trial of treatment for HPV-positive OPSCC (T1-3, N0-2b). Patients are stratified after minimally invasive surgery. Medium-risk patients are randomised to either standard or reduced-dose radiotherapy. High-risk patients are randomised to radiotherapy with or without concurrent cisplatin.\nAuthors' conclusions\nThis review highlights the current lack of high-quality randomised controlled trials studying treatment de-escalation after minimally invasive surgery in patients with HPV-positive OPSCC. However, trials that will meet the inclusion criteria for this review are in progress with results expected between 2021 and 2023.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "More than 400,000 cases of cancer of the throat are diagnosed each year and this is increasing, with HPV being a significant factor. Throat cancer caused by this virus often affects younger patients but has a better prognosis than non-viral throat cancer. Traditional treatment for throat cancer is with radiotherapy and chemotherapy as this had been shown to have similar survival outcomes to surgery but with fewer side effects. However, treatments have evolved, such as computerised planning and improvements in radiotherapy, and the development of keyhole surgery, which have the potential for fewer side effects. Chemotherapy and radiotherapy do have long-term negative effects on quality of life. With younger patients being affected, any way of reducing these side effects should be investigated."
}
] |
query-laysum | 22847 | [
{
"role": "user",
"content": "Abstract: Background\nConventional mechanical ventilation (CMV) is a common therapy for neonatal respiratory failure. While CMV facilitates gas exchange, it may simultaneously injure the lungs. Positive end-expiratory pressure (PEEP) has received less attention than other ventilation parameters when considering this benefit–risk balance. While an appropriate PEEP level may result in clinical benefits, both inappropriately low or high levels may cause harm. An appropriate PEEP level may also be best achieved by an individualized approach.\nObjectives\n1. To compare the effects of PEEP levels in preterm infants requiring CMV for respiratory distress syndrome (RDS). We compare both: zero end-expiratory pressure (ZEEP) (0 cm H2O) versus any PEEP and low (< 5 cm H2O) vs high (≥ 5 cm H2O) PEEP.\n2. To compare the effects of PEEP levels in preterm infants requiring CMV for bronchopulmonary dysplasia (BPD). We compare both: ZEEP (0 cm H2O) vs any PEEP and low (< 5 cm H2O) versus high (≥ 5 cm H2O) PEEP.\n3. To compare the effects of different methods for individualizing PEEP to an optimal level in preterm newborn infants requiring CMV for RDS.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials, MEDLINE via PubMed, Embase, and CINAHL to 14 February 2018. We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials.\nSelection criteria\nWe included all randomized or quasi-randomized controlled trials studying preterm infants born at less than 37 weeks' gestational age, requiring CMV and undergoing randomization to either different PEEP levels (RDS or BPD); or, two or more alternative methods for individualizing PEEP levels (RDS only). We included cross-over trials but limited outcomes to those from the first cross-over period.\nData collection and analysis\nWe performed data collection and analysis according to the recommendations of the Cochrane Neonatal Review Group. We used the GRADE approach to assess the quality of evidence for prespecified key clinically relevant outcomes.\nMain results\nFour trials met the inclusion criteria. Two cross-over trials with 28 participants compared different PEEP levels in infants with RDS. Meta-analysis was limited to short-term measures of pulmonary gas exchange and showed no differences between low and high PEEP.\nWe identified no trials comparing PEEP levels in infants with BPD.\nTwo trials enrolling 44 participants compared different methods for individualizing PEEP in infants with RDS. Both trials compared an oxygenation-guided lung-recruitment maneuver (LRM) with gradual PEEP level titrations for individualizing PEEP to routine care (control). Meta-analysis showed no difference between LRM and control on mortality by hospital discharge (risk ratio (RR) 1.00, 95% confidence interval (CI) 0.17 to 5.77); there was no statistically significant difference on BPD, with an effect estimate favoring LRM (RR 0.25, 95% CI 0.03 to 2.07); and a statistically significant difference favoring LRM for the outcome of duration of ventilatory support (mean difference –1.06 days, 95% CI –1.85 to –0.26; moderate heterogeneity, I2 = 67%). Short-term oxygenation measures also favored LRM. 
We graded the quality of the evidence as low for all key outcomes due to risk of bias and imprecision of the effect estimates.\nAuthors' conclusions\nThere continues to be insufficient evidence to guide PEEP level selection for preterm infants on CMV for RDS or BPD. Low-quality data suggests that selecting PEEP levels through the application of an oxygenation-guided LRM may result in clinical benefit. Well-conducted randomized trials, particularly to further evaluate the potential benefits of oxygenation-guided LRMs, are needed.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "how do different positive end-expiratory pressure (PEEP) levels, or different approaches for selecting PEEP levels, compare on their effects in preterm infants with respiratory distress syndrome (RDS) or bronchopulmonary dysplasia (BPD)?"
}
] |
query-laysum | 22848 | [
{
"role": "user",
"content": "Abstract: Background\nElective hysterectomy is commonly performed for benign gynaecological conditions. Hysterectomy can be performed abdominally, laparoscopically, or vaginally, with or without laparoscopic assistance. Antibiotic prophylaxis consists of administration of antibiotics to reduce the rate of postoperative infection, which otherwise affects 40%-50% of women after vaginal hysterectomy, and more than 20% after abdominal hysterectomy. No Cochrane review has systematically assessed evidence on this topic.\nObjectives\nTo determine the effectiveness and safety of antibiotic prophylaxis in women undergoing elective hysterectomy.\nSearch methods\nWe searched electronic databases to November 2016 (including the Cochrane Gynaecology and Fertility Group Specialised Register, the Cochrane Central Register of Studies (CRSO), MEDLINE, Embase, PsycINFO, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL), as well as clinical trials registers, conference abstracts, and reference lists of relevant articles.\nSelection criteria\nAll randomised controlled trials (RCTs) comparing use of antibiotics versus placebo or other antibiotics as prophylaxis in women undergoing elective hysterectomy.\nData collection and analysis\nWe used Cochrane standard methodological procedures.\nMain results\nWe included in this review 37 RCTs, which performed 20 comparisons of various antibiotics versus placebo and versus one another (6079 women). The quality of the evidence ranged from very low to moderate. The main limitations of study findings were risk of bias due to poor reporting of methods, imprecision due to small samples and low event rates, and inadequate reporting of adverse effects.\nAny antibiotic versus placebo\nVaginal hysterectomy\nLow-quality evidence shows that women who received antibiotic prophylaxis had fewer total postoperative infections (risk ratio (RR) 0.28, 95% confidence interval (CI) 0.19 to 0.40; four RCTs, N = 293; I2 = 85%), less urinary tract infection (UTI) (RR 0.58, 95% CI 0.43 to 0.77; eight RCTs, N = 1473; I2 = 44%), fewer pelvic infections (RR 0.28, 95% CI 0.20 to 0.39; 11 RCTs, N = 1693; I2 = 57%), and fewer postoperative fevers (RR 0.43, 95% CI 0.34 to 0.54; nine RCTs, N = 1562; I2 = 47%) than women who did not receive such prophylaxis. This suggests that antibiotic prophylaxis reduces the average risk of postoperative infection from about 34% to 7% to 14%. Whether this treatment has led to differences in rates of other serious infection remains unclear (RR 0.20, 95% CI 0.01 to 4.10; one RCT, N = 146; very low-quality evidence).\nData were insufficient for comparison of adverse effects.\nAbdominal hysterectomy\nWomen who received antibiotic prophylaxis of any class had fewer total postoperative infections (RR 0.38, 95% CI 0.21 to 0.67; one RCT, N = 158; low-quality evidence), abdominal wound infections (RR 0.51, 95% CI 0.36 to 0.73; 11 RCTs, N = 2247; I2 = 6%; moderate-quality evidence), UTIs (RR 0.41, 95% CI 0.31 to 0.53; 11 RCTs, N = 2705; I2 = 28%; moderate-quality evidence), pelvic infections (RR 0.50, 95% CI 0.35 to 0.71; 11 RCTs, N = 1883; I2 = 11%; moderate-quality evidence), and postoperative fevers (RR 0.59, 95% CI 0.50 to 0.70; 11 RCTs, N = 2394; I2 = 55%; moderate-quality evidence) than women who did not receive prophylaxis, suggesting that antibiotic prophylaxis reduces the average risk of postoperative infection from about 16% to 1% to 6%. 
Whether this treatment has led to differences in rates of other serious infection remains unclear (RR 0.44, 95% CI 0.12 to 1.69; two RCTs, N = 476; I2 = 29%; very low-quality evidence).\nIt is unclear whether rates of adverse effects differed between groups (RR 1.80, 95% CI 0.62 to 5.18; two RCTs, N = 430; I2 = 0%; very low-quality evidence).\nHead-to-head comparisons between antibiotics\nVaginal hysterectomy\nWe identified four comparisons: cephalosporin versus penicillin (two RCTs, N = 470), cephalosporin versus tetracycline (one RCT, N = 51), antiprotozoal versus lincosamide (one RCT, N = 80), and cephalosporin versus antiprotozoal (one RCT, N = 78). Data show no evidence of differences between groups for any of the primary outcomes, except that fewer cases of total postoperative infection and postoperative fever were reported in women who received cephalosporin than in those who received antiprotozoal.\nOnly one comparison (cephalosporin vs penicillin; two RCTs, N = 451) yielded data on adverse effects and showed no differences between groups.\nAbdominal hysterectomy\nWe identified only one comparison: cephalosporin versus penicillin (N = 220). Data show no evidence of differences between groups for any of the primary outcomes. Adverse effects were not reported.\nCombined antibiotics versus single antibiotics\nVaginal hysterectomy\nWe identified three comparisons: cephalosporin plus antiprotozoal versus cephalosporin (one RCT, N = 78), cephalosporin plus antiprotozoal versus antiprotozoal (one RCT, N = 78), and penicillin plus antiprotozoal versus penicillin (one RCT, N = 18). Data were unavailable for most outcomes, including adverse effects. We found no evidence of differences between groups, except that fewer women receiving cephalosporin with antiprotozoal received a diagnosis of total postoperative infection, UTI, or postoperative fever compared with women receiving antiprotozoal.\nAbdominal hysterectomy\nWe identified one comparison (penicillin plus antiprotozoal vs penicillin only; two RCT, N = 155). Whether differences between groups occurred was unclear. Adverse effects were not reported.\nComparison of cephalosporins in different regimens\nSingle small trials addressed dose comparisons and provided no data for most outcomes, including adverse effects. Whether differences between groups occurred was unclear. No trials compared route of administration.\nThe quality of evidence for all head-to-head and dose comparisons was very low owing to very serious imprecision and serious risk of bias related to poor reporting of methods.\nAuthors' conclusions\nAntibiotic prophylaxis appears to be effective in preventing postoperative infection in women undergoing elective vaginal or abdominal hysterectomy, regardless of the dose regimen. However, evidence is insufficient to show whether use of prophylactic antibiotics influences rates of adverse effects. Similarly, evidence is insufficient to show which (if any) individual antibiotic, dose regimen, or route of administration is safest and most effective. The most recent studies included in this review were 14 years old at the time of our search. Thus findings from included studies may not reflect current practice in perioperative and postoperative care and may not show locoregional antimicrobial resistance patterns.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Are antibiotics effective and safe for preventing postoperative infection in women undergoing elective (non-urgent) hysterectomy?"
}
] |
query-laysum | 22849 | [
{
"role": "user",
"content": "Abstract: Background\nA thyroid nodule is a discrete lesion within the thyroid gland that might be palpable and is ultrasonographically distinct from the surrounding thyroid parenchyma. Thyroid nodules are more common as age increases and occur more frequently in women. Benign thyroid nodules often cause pressure symptoms and cosmetic complaints. In China and many other countries, doctors use Chinese herbal medicines (CHM) to treat thyroid nodules.\nObjectives\nTo assess the effects of Chinese herbal medicines in the treatment of benign thyroid nodules in adults.\nSearch methods\nReview authors searched the following electronic databases: The Cochrane Library, MEDLINE, EMBASE, the Chinese Biomedical Literature Database (CBM), the China National Knowledge Infrastructure (CNKI), VIP information (a Chinese database), WANFANG Data (a Chinese database), the Chinese Conference Papers Database and the Chinese Dissertation Database (all searched up to April 2013).\nSelection criteria\nRandomised controlled trials comparing CHM or CHM plus levothyroxine versus levothyroxine, placebo or no treatment in adults with benign thyroid nodules.\nData collection and analysis\nTwo review authors independently extracted data, assessed studies for risk of bias and evaluated overall study quality according to GRADE (Grading of Recommendations Assessment, Development and Evaluation), with differences resolved by consensus.\nMain results\nWe included one randomised trial involving 152 participants with a randomisation ratio of 2:1 (CHM vs no treatment). The trial applied adequate sequence generation; however, allocation concealment was unclear. Duration of treatment was three months, and follow-up six months. Our a priori defined outcomes of interest (i.e. nodule volume reduction ≥ 50%; pressure symptoms, cosmetic complaints or both; health-related quality of life; all-cause mortality; cancer occurrence; changes in number and size of thyroid nodules; changes in thyroid volume; and socioeconomic effects) were not investigated in the included study. Thyrotropin (TSH), thyroxine (T4) and tri-iodothyronine (T3) serum levels were normal in both groups before and after the trial was conducted. No adverse events were reported (low quality evidence).\nAuthors' conclusions\nFirm evidence cannot be found to support or refute the use of Chinese herbal medicines for benign thyroid nodules in adults.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We could identify only one randomised controlled trial involving 152 participants with benign thyroid nodules. The intervention group (98 participants) was treated with a combination of two Chinese herbal medicines, and no treatment was given to the control group (54 participants). The duration of treatment was three months, and overall follow-up six months."
}
] |
query-laysum | 22850 | [
{
"role": "user",
"content": "Abstract: Background\nShared care has been used in the management of many chronic conditions with the assumption that it delivers better care than primary or specialty care alone; however, little is known about the effectiveness of shared care.\nObjectives\nTo determine the effectiveness of shared care health service interventions designed to improve the management of chronic disease across the primary/specialty care interface. This is an update of a previously published review.\nSecondary questions include the following:\n1. Which shared care interventions or portions of shared care interventions are most effective?\n2. What do the most effective systems have in common?\nSearch methods\nWe searched MEDLINE, Embase and the Cochrane Library to 12 October 2015.\nSelection criteria\nOne review author performed the initial abstract screen; then two review authors independently screened and selected studies for inclusion. We considered randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after studies (CBAs) and interrupted time series analyses (ITS) evaluating the effectiveness of shared care interventions for people with chronic conditions in primary care and community settings. The intervention was compared with usual care in that setting.\nData collection and analysis\nTwo review authors independently extracted data from the included studies, evaluated study quality and judged the certainty of the evidence using the GRADE approach. We conducted a meta-analysis of results when possible and carried out a narrative synthesis of the remainder of the results. We presented the results in a 'Summary of findings' table, using a tabular format to show effect sizes for all outcome types.\nMain results\nWe identified 42 studies of shared care interventions for chronic disease management (N = 18,859), 39 of which were RCTs, two CBAs and one an NRCT. Of these 42 studies, 41 examined complex multi-faceted interventions and lasted from six to 24 months. Overall, our confidence in results regarding the effectiveness of interventions ranged from moderate to high certainty. Results showed probably few or no differences in clinical outcomes overall with a tendency towards improved blood pressure management in the small number of studies on shared care for hypertension, chronic kidney disease and stroke (mean difference (MD) 3.47, 95% confidence interval (CI) 1.68 to 5.25)(based on moderate-certainty evidence). Mental health outcomes improved, particularly in response to depression treatment (risk ratio (RR) 1.40, 95% confidence interval (CI) 1.22 to 1.62; six studies, N = 1708) and recovery from depression (RR 2.59, 95% CI 1.57 to 4.26; 10 studies, N = 4482) in studies examining the 'stepped care' design of shared care interventions (based on high-certainty evidence). Investigators noted modest effects on mean depression scores (standardised mean difference (SMD) -0.29, 95% CI -0.37 to -0.20; six studies, N = 3250). Differences in patient-reported outcome measures (PROMs), processes of care and participation and default rates in shared care services were probably limited (based on moderate-certainty evidence). Studies probably showed little or no difference in hospital admissions, service utilisation and patient health behaviours (with evidence of moderate certainty).\nAuthors' conclusions\nThis review suggests that shared care improves depression outcomes and probably has mixed or limited effects on other outcomes. 
Methodological shortcomings, particularly inadequate length of follow-up, may account in part for these limited effects. Review findings support the growing evidence base for shared care in the management of depression, particularly stepped care models of shared care. Shared care interventions for other conditions should be developed within research settings, with account taken of the complexity of such interventions and awareness of the need to carry out longer studies to test effectiveness and sustainability over time.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We have defined shared care across the primary/specialty interface as joint participation of primary care physicians and specialty care physicians in planned delivery of care. This may be informed by enhanced information exchange, over and above routine discharge and referral notices. This approach has the potential to improve the management of chronic disease while leading to better outcomes than are attained by primary or specialty care alone."
}
] |
query-laysum | 22851 | [
{
"role": "user",
"content": "Abstract: Background\nPrimary dysmenorrhoea is the most common form of period pain and affects up to three-quarters of women at some stage of their reproductive life. Primary dysmenorrhoea is pain in the absence of any organic cause and is characterised by cramping pain in the lower abdomen, starting within the first eight to 72 hours of menstruation.This review examines the currently available evidence supporting the use of acupuncture (stimulation of points on the body using needles) and acupressure (stimulation of points on the body using pressure) to treat primary dysmenorrhoea.\nObjectives\nTo determine the effectiveness and safety of acupuncture and acupressure in the treatment of primary dysmenorrhoea when compared with a placebo, no treatment, or conventional medical treatment.\nSearch methods\nWe searched the following databases: the Cochrane Menstrual Disorders and Subfertility Group Trials Register (to September 2015), Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library), MEDLINE, EMBASE, PsycINFO, CINAHL and Chinese databases including Chinese Biomedical Literature Database (CBM), China National Knowledge Infrastructure (CNKI), VIP database and registers of ongoing trials.\nSelection criteria\nWe included all published and unpublished randomised controlled trials (RCTs) comparing acupuncture with sham acupuncture or placebo control, usual care, pharmacological treatment or no treatment. We included the following modes of treatment: acupuncture, electro-acupuncture, and acupressure. Participants were women of reproductive age with primary dysmenorrhoea during the majority of the menstrual cycles or for three consecutive menstrual cycles, and moderate to severe symptoms.\nData collection and analysis\nWe calculated odds ratios (ORs) for dichotomous outcomes and mean differences (MDs) or standardised mean differences (SMDs) for continuous outcomes, with 95% confidence intervals (CIs). We pooled the data where appropriate. Our primary outcomes was pain. Secondary outcomes included menstrual symptoms, quality of life, and adverse effects.\nMain results\nWe included 42 RCTs (4640 women). Acupuncture or acupressure was compared with a sham/placebo group, medication, no treatment or other treatment. Many of the continuous data were not suitable for calculation of means, mainly due to evidence of skew.\n1. Acupuncture studies\nAcupuncture versus sham or placebo control (6 RCTs)\nFindings were inconsistent and inconclusive. However, the only study in the review that was at low risk of bias in all domains found no evidence of a difference between the groups at three, six or 12 months. The overall quality of the evidence was low. No studies reported adverse events.\nAcupuncture versus NSAIDs\nSeven studies reported visual analogue scale (VAS) pain scores, but were unsuitable for pooling due to extreme heterogeneity (I² = 94%). In all studies the scores were lower in the acupuncture group, with the mean difference varying across studies from 0.64 to 4 points on a VAS 0 - 10 scale (low-quality evidence). Four RCTs reported rates of pain relief, and found a benefit for the acupuncture group (OR 4.99, 95% CI 2.82 to 8.82, 352 women, I² = 0%, low-quality evidence). 
Adverse events were less common in the acupuncture group (OR 0.10, 95% CI 0.02 to 0.44, 4 RCTs, 239 women, 4 trials, I² = 15%, low-quality evidence).\nAcupuncture versus no treatment\nData were unsuitable for analysis, but pain scores were lower in the acupuncture group in all six studies reporting this outcome. The quality of the evidence was low. No studies reported adverse events.\n2. Acupressure studies\nNo studies of acupressure reported adverse events.\nAcupressure versus sham or placebo control\nData were unsuitable for pooling, but two studies reported a mean benefit of one to three points on a 0 - 10 VAS pain scale. Another four studies reported data unsuitable for analysis: all found that pain scores were lower in the acupuncture group. No studies reported adverse events. The quality of the evidence was low.\nAcupressure versus NSAIDs\nOne study reported this outcome, using a 0 - 3 pain scale. The score was higher (indicating more pain) in the acupressure group (MD 0.39 points, 95% CI 0.21 to 0.57, 136 women, very low-quality evidence).\nAcupressure versus no treatment\nThere was no clear evidence of a difference between the groups on a VAS 0 - 10 pain scale (MD -0.96 points, 95% CI -2.54 to 0.62, 2 trials, 140 women, I² = 83%, very low-quality evidence).\nAuthors' conclusions\nThere is insufficient evidence to demonstrate whether or not acupuncture or acupressure are effective in treating primary dysmenorrhoea, and for most comparisons no data were available on adverse events. The quality of the evidence was low or very low for all comparisons. The main limitations were risk of bias, poor reporting, inconsistency and risk of publication bias.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Dysmenorrhoea, also known as period pain, is commonly experienced by younger women. Symptoms may include cramping pain in the lower abdomen that may spread to the lower back or front thigh, nausea, vomiting, diarrhoea, headache, fatigue, anxiety, and dizziness. Use of conventional treatments is high, but 20% to 25% of women still have inadequate pain relief. We therefore need other approaches to effectively manage symptoms of period pain. Acupuncture (the stimulation of points on the body using needles) and acupressure (stimulation of points on the body using pressure) are used to treat period pain, although we do not know how effective it is in reducing such pain."
}
] |
query-laysum | 22852 | [
{
"role": "user",
"content": "Abstract: Background\nTraumatic brain injury (TBI) is a major cause of death and disability, with an estimated 5.5 million people experiencing severe TBI worldwide every year. Observational clinical studies of people with TBI suggest an association between raised body temperature and unfavourable outcome, although this relationship is inconsistent. Additionally, preclinical models suggest that reducing temperature to 35 °C to 37.5 °C improves biochemical and histopathological outcomes compared to reducing temperature to a lower threshold of 33 °C to 35 °C. It is unknown whether reducing body temperature to 35 °C to 37.5 °C in people admitted to hospital with TBI is beneficial, has no effect, or causes harm. This is an update of a review last published in 2014.\nObjectives\nTo assess the effects of pharmacological interventions or physical interventions given with the intention of reducing body temperature to 35 °C to 37.5 °C in adults and children admitted to hospital after TBI.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Web of Science, and PubMed on 28 November 2019. We searched clinical trials registers, grey literature and references lists of reviews, and we carried out forward citation searches of included studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) with participants of any age admitted to hospital following TBI. We included interventions that aimed to reduce body temperature to 35 °C to 37.5 °C: these included pharmacological interventions (such as paracetamol, or non-steroidal anti-inflammatory drugs), or physical interventions (such as surface cooling devices, bedside fans, or cooled intravenous fluids). Eligible comparators were placebo or usual care.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, and assessed risks of bias. We assessed the certainty of the evidence with GRADE.\nMain results\nWe included one RCT with 41 participants. This study recruited adult participants admitted to two intensive care units in Australia, and evaluated a pharmacological intervention. Researchers gave participants 1 g paracetamol or a placebo intravenously at four-hourly intervals for 72 hours.\nWe could not be certain whether intravenous paracetamol influenced mortality at 28 days (risk ratio 2.86, 95% confidence interval 0.32 to 25.24). We judged the evidence for this outcome to be very low certainty, meaning we have very little confidence in this effect estimate, and the true result may be substantially different to this effect. We downgraded the certainty for imprecision (because the evidence was from a single study with very few participants), and study limitations (because we noted a high risk of selective reporting bias). This study was otherwise at low risk of bias.\nThe included study did not report the primary outcome for this review, which was the number of people with a poor outcome at the end of follow-up (defined as death or dependency, as measured on a scale such as the Glasgow Outcome Score), or any of our secondary outcomes, which included the number of people with further intracranial haemorrhage, extracranial haemorrhage, abnormal intracranial pressure, or pneumonia or other serious infections.\nThe only other completed trial that we found was of a physical intervention that compared advanced fever control (using a surface cooling device) versus conventional fever control in 12 participants. 
The trial was published as an abstract only, with insufficient details to allow inclusion, so we have added this to the 'studies awaiting classification' section, pending further information from the study authors or publication of the full study report.\nWe identified four ongoing studies that will contribute evidence to future updates of the review if they measure relevant outcomes and, in studies with a mixed population, report data separately for participants with TBI.\nAuthors' conclusions\nOne small study contributed very low-certainty evidence for mortality to this review. The uncertainty is largely driven by limited research into reduction of body temperature to 35 °C to 37.5 °C in people with TBI. Further research that evaluates pharmacological or physical interventions, or both, may increase certainty in this field. We propose that future updates of the review, and ongoing and future research in this field, incorporate outcomes that are important to the people receiving the interventions, including side effects of any pharmacological agent (e.g. nausea or vomiting), and discomfort caused by physical therapies.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found only one small study with 41 people who had recently been admitted to the intensive care unit after a traumatic head injury. This study assessed the effects of paracetamol, given intravenously (through a needle or tube inserted into a vein) for 72 hours, compared to an intravenous salt solution that was disguised to look like the paracetamol solution.\nWe found no full reports of studies that gave people with traumatic brain injury other medicines or physical cooling treatments, such as blankets, fans, or ice to cool the surface of the skin, or cooled intravenous fluids. We found a short report about one very small study of a physical cooling treatment. The report did not give enough information for us to include its data, but we may include it in future if more information becomes available.\nWe also found four ongoing studies, which may provide more evidence for the review once they have completed. Some studies include participants who have other types of brain injury. We will only be able to include these studies in the review if they report the findings for people with traumatic brain injury separately from people with other conditions."
}
] |
query-laysum | 22853 | [
{
"role": "user",
"content": "Abstract: Background\nPhenylketonuria is an inherited disease treated with dietary restriction of the amino acid phenylalanine. The diet is initiated in the neonatal period to prevent learning disability; however, it is restrictive and can be difficult to follow. Whether the diet can be relaxed or discontinued during adolescence or should be continued for life remains a controversial issue, which we aim to address in this review. This is an updated version of a previously published review.\nObjectives\nTo assess the effects of a low-phenylalanine diet commenced early in life for people with phenylketonuria. To assess the possible effects of relaxation or termination of the diet on intelligence, neuropsychological outcomes and mortality, growth, nutritional status, eating behaviour and quality of life.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches, handsearches of relevant journals and abstract books of conference proceedings.\nMost recent search of the Inborn Errors of Metabolism Trials Register: 30 April 2020.\nSelection criteria\nAll randomised or quasi-randomised controlled trials comparing a low-phenylalanine diet to relaxation or termination of dietary restrictions in people with phenylketonuria.\nData collection and analysis\nTwo authors independently assessed study eligibility and methodological quality, and subsequently extracted the data.\nMain results\nWe included four studies in this review (251 participants), and found few significant differences between treatment and comparison groups for the outcomes of interest. Blood phenylalanine levels were significantly lower in participants with phenylketonuria following a low-phenylalanine diet compared to those on a less restricted diet, mean difference (MD) at three months -698.67 (95% confidence interval (CI) -869.44 to -527.89). Intelligence quotient was significantly higher in participants who continued the diet than in those who stopped the diet, MD after 12 months 5.00 (95% CI 0.40 to 9.60). However, these results came from a single study.\nAuthors' conclusions\nThe results of non-randomised studies have concluded that a low-phenylalanine diet is effective in reducing blood phenylalanine levels and improving intelligence quotient and neuropsychological outcomes. We were unable to find any randomised controlled studies that have assessed the effect of a low-phenylalanine diet versus no diet from diagnosis. In view of evidence from non-randomised studies, such a study would be unethical and it is recommended that low-phenylalanine diet should be commenced at the time of diagnosis. There is uncertainty about the precise level of phenylalanine restriction and when, if ever, the diet should be relaxed. This should be addressed by randomised controlled studies; however, no new studies are expected in this area so we do not plan to update this review.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review included four studies with 251 people with phenylketonuria. One study looked at the effects of stopping a low-phenylalanine diet at four years of age, a second study looked at the effect of returning to low-phenylalanine diets in children who had previously relaxed their diet, a third study looked at the effect of increasing phenylalanine intake in children who had continued on a low-phenylalanine diet since diagnosis and the fourth (a large, multicentre study) investigated the use of a strict low-phenylalanine diet compared to a moderately strict low-phenylalanine diet in newly diagnosed children."
}
] |
query-laysum | 22854 | [
{
"role": "user",
"content": "Abstract: Background\nDrinking is influenced by youth perceptions of how their peers drink. These perceptions are often incorrect, overestimating peer drinking norms. If inaccurate perceptions can be corrected, young people may drink less.\nObjectives\nTo determine whether social norms interventions reduce alcohol-related negative consequences, alcohol misuse or alcohol consumption when compared with a control (ranging from assessment only/no intervention to other educational or psychosocial interventions) among university and college students.\nSearch methods\nThe following electronic databases were searched up to July 2015: the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE, PsycINFO. The Cumulative Index to Nursing and Allied Health Literature (CINAHL) only to March 2008. Reference lists of included studies and review articles were manually searched. No restriction based on language or date was applied.\nSelection criteria\nRandomised controlled trials or cluster-randomised controlled trials that compared a social normative intervention versus no intervention, alcohol education leaflet or other 'non-normative feedback' alcohol intervention and reported on alcohol consumption or alcohol-related problems in university or college students.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane. Each outcome was analysed by mode of delivery: mailed normative feedback (MF); web/computer normative feedback (WF); individual face-to-face normative feedback (IFF); group face-to-face normative feedback (GFF); and normative marketing campaign (MC).\nMain results\nA total of 70 studies (44,958 participants) were included in the review, and 63 studies (42,784 participants) in the meta-analyses. Overall, the risk of bias assessment showed that these studies provided moderate or low quality evidence.\nOutcomes at four or more months post-intervention were of particular interest to assess when effects were sustained beyond the immediate short term. We have reported pooled effects across delivery modes only for those analyses for which heterogeneity across delivery modes is not substantial (I2 < 50%).\nAlcohol-related problems at four or more months: IFF standardised mean difference (SMD) -0.14, 95% confidence interval (CI) -0.24 to -0.04 (participants = 2327; studies = 11; moderate quality evidence), equivalent to a decrease of 1.28 points in the 69-point alcohol problems scale score. No effects were found for WF or MF.\nBinge drinking at four or more months: results pooled across delivery modes: SMD -0.06, 95% CI -0.11 to -0.02 (participants = 11,292; studies = 16; moderate quality evidence), equivalent to 2.7% fewer binge drinkers if 30-day prevalence is 43.9%.\nDrinking quantity at four or more months: results pooled across delivery modes: SMD -0.08, 95% CI -0.12 to -0.04 (participants = 21,169; studies = 32; moderate quality evidence), equivalent to a reduction of 0.9 drinks consumed each week, from a baseline of 13.7 drinks per week.\nDrinking frequency at four or more months: WF SMD -0.11, 95% CI -0.17 to -0.04 (participants = 9929; studies = 10; moderate quality evidence), equivalent to a decrease of 0.17 drinking days/wk, from a baseline of 2.74 days/wk; IFF SMD -0.21, 95% CI -0.31 to -0.10 (participants = 1464; studies = 8; moderate quality evidence), equivalent to a decrease of 0.32 drinking days/wk, from a baseline of 2.74 days/wk. 
No effects were found for GFF or MC.\nEstimated blood alcohol concentration (BAC) at four or more months: peak BAC results pooled across delivery modes: SMD -0.08, 95% CI -0.17 to 0.00 (participants = 7198; studies = 11; low quality evidence), equivalent to a reduction in peak BAC from an average of 0.144% to 0.135%. No effects were found for typical BAC with IFF.\nAuthors' conclusions\nThe results of this review indicate that no substantive meaningful benefits are associated with social norms interventions for prevention of alcohol misuse among college/university students. Although some significant effects were found, we interpret the effect sizes as too small, given the measurement scales used in the studies included in this review, to be of relevance for policy or practice. Moreover, the significant effects are not consistent for all misuse measures, heterogeneity was a problem in some analyses and bias cannot be discounted as a potential cause of these findings.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, only low or moderate quality evidence was noted for the effects reported in this review. Problems with study quality could result in estimates of social norms effects that are too high, so we cannot rule out the chance that the effects observed in this review may be overstated.\nThe U.S. National Institutes of Health provided funding for just under half (33/70) of the studies included in this review. Eighteen studies provided no information about funding, and only 13 papers had a clear conflict of interest statement."
}
] |
query-laysum | 22855 | [
{
"role": "user",
"content": "Abstract: Background\nAsthma is the most common chronic lung condition worldwide, affecting 334 million adults and children globally. Despite the availability of effective treatment, such as inhaled corticosteroids (ICS), adherence to maintenance medication remains suboptimal. Poor ICS adherence leads to increased asthma symptoms, exacerbations, hospitalisations, and healthcare utilisation. Importantly, suboptimal use of asthma medication is a key contributor to asthma deaths. The impact of digital interventions on adherence and asthma outcomes is unknown.\nObjectives\nTo determine the effectiveness of digital interventions for improving adherence to maintenance treatments in asthma.\nSearch methods\nWe identified trials from the Cochrane Airways Trials Register, which contains studies identified through multiple electronic searches and handsearches of other sources. We also searched trial registries and reference lists of primary studies. We conducted the most recent searches on 1 June 2020, with no restrictions on language of publication. A further search was run in October 2021, but studies were not fully incorporated.\nSelection criteria\nWe included randomised controlled trials (RCTs) including cluster- and quasi-randomised trials of any duration in any setting, comparing a digital adherence intervention with a non-digital adherence intervention or usual care. We included adults and children with a clinical diagnosis of asthma, receiving maintenance treatment.\nData collection and analysis\nWe used standard methodological procedures for data collection. We used GRADE to assess quantitative outcomes where data were available.\nMain results\nWe included 40 parallel randomised controlled trials (RCTs) involving adults and children with asthma (n = 15,207), of which eight are ongoing studies. Of the included studies, 30 contributed data to at least one meta-analysis. The total number of participants ranged from 18 to 8517 (median 339). Intervention length ranged from two to 104 weeks. Most studies (n = 29) reported adherence to maintenance medication as their primary outcome; other outcomes such as asthma control and quality of life were also commonly reported. Studies had low or unclear risk of selection bias but high risk of performance and detection biases due to inability to blind the participants, personnel, or outcome assessors. A quarter of the studies had high risk of attrition bias and selective outcome reporting. We examined the effect of digital interventions using meta-analysis for the following outcomes: adherence (16 studies); asthma control (16 studies); asthma exacerbations (six studies); unscheduled healthcare utilisation (four studies); lung function (seven studies); and quality of life (10 studies).\nPooled results showed that patients receiving digital interventions may have increased adherence (mean difference of 14.66 percentage points, 95% confidence interval (CI) 7.74 to 21.57; low-certainty evidence); this is likely to be clinically significant in those with poor baseline medication adherence. Subgroup analysis by type of intervention was significant (P = 0.001), with better adherence shown with electronic monitoring devices (EMDs) (23 percentage points over control, 95% CI 10.84 to 34.16; seven studies), and with short message services (SMS) (12 percentage points over control, 95% CI 6.22 to 18.03; four studies). 
No significant subgroup differences were seen for interventions having an in-person component versus fully digital interventions, adherence feedback, one or multiple digital components to the intervention, or participant age. Digital interventions were likely to improve asthma control (standardised mean difference (SMD) 0.31 higher, 95% CI 0.17 to 0.44; moderate-certainty evidence) - a small but likely clinically significant effect. They may reduce asthma exacerbations (risk ratio 0.53, 95% CI 0.32 to 0.91; low-certainty evidence).\nDigital interventions may result in a slight change in unscheduled healthcare utilisation, although some studies reported no or a worsened effect. School or work absence data could not be included for meta-analysis due to the heterogeneity in reporting and the low number of studies. They may result in little or no difference in lung function (forced expiratory volume in one second (FEV1)): there was an improvement of 3.58% predicted FEV1, 95% CI 1.00% to 6.17%; moderate-certainty evidence); however, this is unlikely to be clinically significant as the FEV1 change is below 12%. Digital interventions likely increase quality of life (SMD 0.26 higher, 95% CI 0.07 to 0.45; moderate-certainty evidence); however, this is a small effect that may not be clinically significant. Acceptability data showed positive attitudes towards digital interventions. There were no data on cost-effectiveness or adverse events.\nOur confidence in the evidence was reduced by risk of bias and inconsistency.\nAuthors' conclusions\nOverall, digital interventions may result in a large increase in adherence (low-certainty evidence). There is moderate-certainty evidence that digital adherence interventions likely improve asthma control to a degree that is clinically significant, and likely increase quality of life, but there is little or no improvement in lung function. The review found low-certainty evidence that digital interventions may reduce asthma exacerbations. Subgroup analyses show that EMDs may improve adherence by 23% and SMS interventions by 12%, and interventions with an in-person element and adherence feedback may have greater benefits for asthma control and adherence, respectively. Future studies should include percentage adherence as a routine outcome measure to enable comparison between studies and meta-analysis, and use validated questionnaires to assess adherence and outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the information\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is some uncertainty about our results because the studies were quite different from each other. These differences mean that we cannot be completely sure what the real benefit is, as the benefits may be due to other factors not directly related to the technology - for example, being involved in a study can improve medication-taking. Sometimes the studies did not give us enough information for us to include them with the other studies to work out their effectiveness. We had concerns about a quarter of the studies where people did not finish the study, and we were uncertain whether studies reported everything they measured."
}
] |
query-laysum
|
22856
|
[
{
"role": "user",
"content": "Abstract: Background\nPain is a common feature of childhood and adolescence around the world, and for many young people, that pain is chronic. The World Health Organization (WHO) guidelines for pharmacological treatments for children's persisting pain acknowledge that pain in children is a major public health concern of high significance in most parts of the world. Views on children's pain have changed over time and relief of pain is now seen as important. In the past, pain was largely dismissed and was frequently left untreated, and it was assumed that children quickly forgot about painful experiences.\nWe designed a suite of seven reviews in chronic non-cancer pain and cancer pain (looking at antidepressants, antiepileptic drugs, non-steroidal anti-inflammatory drugs, opioids, and paracetamol) to review the evidence for children's pain using pharmacological interventions.\nAs one of the leading causes of mortality and morbidity for children and adolescents in the world today, childhood cancer (and its associated pain) is a major health concern. Cancer pain in infants, children, and adolescents is primarily nociceptive pain with negative long term effects. Cancer-related pain is generally caused directly by the tumour itself such as compressing on the nerve or inflammation of the organs. Cancer-related pain generally occurs as a result of perioperative procedures, nerve damage caused by radiation or chemotherapy treatments, or mucositis. However, this review focused on pain caused directly by the tumour itself such as nerve infiltration, external nerve compression, and other inflammatory events.\nOpioids are used worldwide for the treatment of pain. Currently available opioids include: buprenorphine, codeine, fentanyl, hydromorphone, methadone, morphine, oxycodone, and tramadol. Opioids are generally available in healthcare settings across most developed countries but access may be restricted in developing countries. To achieve adequate pain relief in children using opioids, with an acceptable grade of adverse effects, the recommended method is to start with a low dose gradually titrated to effect or unacceptable adverse effect in the child.\nObjectives\nTo assess the analgesic efficacy, and adverse events, of opioids used to treat cancer-related pain in children and adolescents aged between birth and 17 years, in any setting.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online, MEDLINE via Ovid and Embase via Ovid from inception to 22 February 2017. We also searched the reference lists of retrieved studies and reviews, and searched online clinical trial registries.\nSelection criteria\nRandomised controlled trials (RCTs), with or without blinding, of any dose, and any route, treating cancer-related pain in children and adolescents, comparing opioids with placebo or an active comparator.\nData collection and analysis\nTwo review authors independently assessed studies for eligibility. We planned to use dichotomous data to calculate risk ratio and number needed to treat for one additional event, using standard methods. We planned to assess GRADE (Grading of Recommendations Assessment, Development and Evaluation) and planned to create a 'Summary of findings' table.\nMain results\nNo studies were identified that were eligible for inclusion in this review. 
Several studies tested opioids on adults with cancer-related pain, but none in participants aged from birth to 17 years.\nThere is no evidence to support or refute the use of opioids for treating cancer-related pain in children and adolescents.\nAuthors' conclusions\nThere is no evidence from randomised controlled trials to support or refute the use of opioids to treat chronic cancer-related pain in children and adolescents. We are unable to comment about efficacy or harm from the use of opioids to treat chronic cancer-related pain in children and adolescents.\n\nGiven the provided abstract, please respond to the following query: \"Bottom line\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is no evidence from randomised controlled trials to support or contradict the suggestion that opioids in any dose will reduce cancer-related pain in children or adolescents."
}
] |
query-laysum
|
22857
|
[
{
"role": "user",
"content": "Abstract: Background\nOlder patients with multiple health problems (multi-morbidity) value being involved in decision-making about their health care. However, they are less frequently involved than younger patients. To maximise quality of life, day-to-day function, and patient safety, older patients require support to identify unmet healthcare needs and to prioritise treatment options.\nObjectives\nTo assess the effects of interventions for older patients with multi-morbidity aiming to involve them in decision-making about their health care during primary care consultations.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; all years to August 2018), in the Cochrane Library; MEDLINE (OvidSP) (1966 to August 2018); Embase (OvidSP) (1988 to August 2018); PsycINFO (OvidSP) (1806 to August 2018); the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (Ovid) (1982 to September 2008), then in Ebsco (2009 to August 2018); Centre for Reviews and Dissemination Databases (Database of Abstracts and Reviews of Effects (DARE)) (all years to August 2018); the Health Technology Assessment (HTA) Database (all years to August 2018); the Ongoing Reviews Database (all years to August 2018); and Dissertation Abstracts International (1861 to August 2018).\nSelection criteria\nWe sought randomised controlled trials (RCTs), cluster-RCTs, and quasi-RCTs of interventions to involve patients in decision-making about their health care versus usual care/control/another intervention, for patients aged 65 years and older with multi-morbidity in primary care.\nData collection and analysis\nWe used standard Cochrane methodological procedures. Meta-analysis was not possible; therefore we prepared a narrative synthesis.\nMain results\nWe included three studies involving 1879 participants: two RCTs and one cluster-RCT. Interventions consisted of:\n· patient workshop and individual coaching using behaviour change techniques;\n· individual patient coaching utilising cognitive-behavioural therapy and motivational interviewing; and\n· holistic patient review, multi-disciplinary practitioner training, and organisational change.\nNo studies reported the primary outcome ‘patient involvement in decision-making’ or the primary adverse outcome ‘less patient involvement as a result of the intervention’.\nComparing interventions (patient workshop and individual coaching, holistic patient review plus practitioner training, and organisational change) to usual care: we are uncertain whether interventions had any effect on patient reports of high self-rated health (risk ratio (RR) 1.40, 95% confidence interval (CI) 0.36 to 5.49; very low-certainty evidence) or on patient enablement (mean difference (MD) 0.60, 95% CI -9.23 to 10.43; very low-certainty evidence) compared with usual care. Interventions probably had no effect on health-related quality of life (adjusted difference in means 0.00, 95% CI -0.02 to 0.02; moderate-certainty evidence) or on medication adherence (MD 0.06, 95% CI -0.05 to 0.17; moderate-certainty evidence) but probably improved the number of patients discussing their priorities (adjusted odds ratio 1.85, 95% CI 1.44 to 2.38; moderate-certainty evidence) and probably increased the number of nurse consultations (incident rate ratio from adjusted multi-level Poisson model 1.37, 95% CI 1.17 to 1.61; moderate-certainty evidence) compared with usual care. Practitioner outcomes were not measured. 
Interventions were not reported to adversely affect rates of participant death or anxiety, emergency department attendance, or hospital admission compared with usual care.\nComparing interventions (patient workshop and coaching, individual patient coaching) to attention-control conditions: we are uncertain whether interventions affect patient-reported high self-rated health (RR 0.38, 95% CI 0.15 to 1.00, favouring attention control, with very low-certainty evidence; RR 2.17, 95% CI 0.85 to 5.52, favouring the intervention, with very low-certainty evidence). We are uncertain whether interventions affect patient enablement and engagement by increasing either patient activation (MD 1.20, 95% CI -8.21 to 10.61; very low-certainty evidence) or self-efficacy (MD 0.29, 95% CI -0.21 to 0.79; very low-certainty evidence); or whether interventions affect the number of general practice visits (MD 0.51, 95% CI -0.34 to 1.36; very low-certainty evidence), compared to attention-control conditions. The intervention may however lead to more patient-reported changes in management of their health conditions (RR 1.82, 95% CI 1.35 to 2.44; low-certainty evidence). Practitioner outcomes were not measured. Interventions were not reported to adversely affect emergency department attendance nor hospital admission when compared with attention control.\nComparing one form of intervention with another: not measured.\nThere was 'unclear' risk across studies for performance bias, detection bias, and reporting bias; however, no aspects were 'high' risk. Evidence was downgraded via GRADE, most often because of 'small sample size' and 'evidence from a single study'.\nAuthors' conclusions\nLimited available evidence does not allow a robust conclusion regarding the objectives of this review. Whilst patient involvement in decision-making is seen as a key mechanism for improving care, it is rarely examined as an intervention and was not measured by included studies. Consistency in design, analysis, and evaluation of interventions would enable a greater likelihood of robust conclusions in future reviews.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "None of the studies reported the main outcome ‘patient involvement in decision-making about their health care’ nor whether there was less patient involvement as a result of the intervention. Interventions were not found to increase adverse outcomes such as death, anxiety, emergency department attendance, or hospital admissions..\nWe are uncertain whether interventions for involving older people with more than one long-term health problem in decision-making about their health care can improve their self-rated health or healthcare engagement, or make any difference in self-efficacy (one's belief in one's ability to succeed in specific situations) or in the overall number of general practice visits. We can report that these interventions probably make little or no difference in patients' quality of life but probably increase the number of patients discussing their priorities, and are associated with more patient consultations with nurses, when compared to usual care. Interventions may be associated with more changes in the management of health conditions when considered from the patient’s perspective when compared with a control group.\nThe quality of the evidence was limited by small studies, and by studies choosing to measure different outcomes, resulting in lack of data that could be combined in analyses."
}
] |
query-laysum
|
22858
|
[
{
"role": "user",
"content": "Abstract: Background\nCervical artery dissection (CeAD) is a pathological bleed or tear, or both, in the wall of the carotid or vertebral arteries as they course through the neck, and is a leading cause of stroke in young people.\nObjectives\nTo assess the effectiveness of surgical and radiological interventions versus best medical treatment alone for treating symptomatic cervical artery dissection.\nSearch methods\nWe performed comprehensive searches of the Cochrane Stroke Group Trials Register (last searched March 2020), the Cochrane Central Register of Controlled Trials (CENTRAL), 2020, Issue 4, in the Cochrane Library (searched March 2020), MEDLINE (1946 to March 2020) and Embase (1974 to March 2020). We searched relevant ongoing trials and research registers (searched March 2020), checked references in all relevant papers for additional eligible studies, and contacted authors and researchers in the field.\nSelection criteria\nRandomised controlled trials (RCTs) and controlled clinical trials (CCTs) of either surgical or endovascular intervention for the management of symptomatic CeAD were eligible for inclusion. Only studies with anticoagulants or antiplatelet treatment as the control group were included. Two review authors planned to independently extract data.\nData collection and analysis\nPrimary outcomes were ipsilateral stroke and disability. Secondary outcomes were death, any stroke, or transient ischaemic attack, residual stenosis (> 50%), recurrence of cervical dissection, expanding pseudoaneurysm, or major bleeding. We analysed the studies according to the first choice of treatment. We planned to assess for risk of bias and apply GRADE criteria for any included studies.\nMain results\nWe did not find any completed RCTs or CCTs undertaken in this area of research.\nAuthors' conclusions\nNo RCTs or CCTs compared either surgery or endovascular therapy with control. Thus, there is no available evidence to support their use for the treatment of extracranial cervical artery dissection in addition to antithrombotic therapy in people who continue to have neurological symptoms when treated with antithrombotic therapy alone.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review was conducted to establish whether a surgical operation or a keyhole endovascular treatment, such placement of stent in addition to blood-thinning medication would improve outcomes in people with cervical artery dissection for whom treatment with blood-thinning medication alone does not improve persistent or worsening symptoms of stroke. The outcomes which we wished to assess are permanent brain damage and long-term disability from stroke."
}
] |
query-laysum
|
22859
|
[
{
"role": "user",
"content": "Abstract: Background\nGovernance arrangements include changes in rules or processes that determine authority and accountability for health policies, organisations, commercial products and health professionals, as well as the involvement of stakeholders in decision-making. Changes in governance arrangements can affect health and related goals in numerous ways, generally through changes in authority, accountability, openness, participation and coherence. A broad overview of the findings of systematic reviews can help policymakers, their technical support staff and other stakeholders to identify strategies for addressing problems and improving the governance of their health systems.\nObjectives\nTo provide an overview of the available evidence from up-to-date systematic reviews about the effects of governance arrangements for health systems in low-income countries. Secondary objectives include identifying needs and priorities for future evaluations and systematic reviews on governance arrangements and informing refinements of the framework for governance arrangements outlined in the overview.\nMethods\nWe searched Health Systems Evidence in November 2010 and PDQ Evidence up to 17 December 2016 for systematic reviews. We did not apply any date, language or publication status limitations in the searches. We included well-conducted systematic reviews of studies that assessed the effects of governance arrangements on patient outcomes (health and health behaviours), the quality or utilisation of healthcare services, resource use (health expenditures, healthcare provider costs, out-of-pocket payments, cost-effectiveness), healthcare provider outcomes (such as sick leave), or social outcomes (such as poverty, employment) and that were published after April 2005. We excluded reviews with limitations that were important enough to compromise the reliability of the findings of the review. Two overview authors independently screened reviews, extracted data and assessed the certainty of evidence using GRADE. We prepared SUPPORT Summaries for eligible reviews, including key messages, 'Summary of findings' tables (using GRADE to assess the certainty of the evidence) and assessments of the relevance of findings to low-income countries.\nMain results\nWe identified 7272 systematic reviews and included 21 of them in this overview (19 primary reviews and 2 supplementary reviews). We focus here on the results of the 19 primary reviews, one of which had important methodological limitations. 
The other 18 were reliable (with only minor limitations).\nWe grouped the governance arrangements addressed in the reviews into five categories: authority and accountability for health policies (three reviews); authority and accountability for organisations (two reviews); authority and accountability for commercial products (three reviews); authority and accountability for health professionals (seven reviews); and stakeholder involvement (four reviews).\nOverall, we found desirable effects for the following interventions on at least one outcome, with moderate- or high-certainty evidence and no moderate- or high-certainty evidence of undesirable effects.\nDecision-making about what is covered by health insurance\n- Placing restrictions on the medicines reimbursed by health insurance systems probably decreases the use of and spending on these medicines (moderate-certainty evidence).\nStakeholder participation in policy and organisational decisions\n- Participatory learning and action groups for women probably improve newborn survival (moderate-certainty evidence).\n- Consumer involvement in preparing patient information probably improves the quality of the information and patient knowledge (moderate-certainty evidence).\nDisclosing performance information to patients and the public\n- Disclosing performance data on hospital quality to the public probably encourages hospitals to implement quality improvement activities (moderate-certainty evidence).\n- Disclosing performance data on individual healthcare providers to the public probably leads people to select providers that have better quality ratings (moderate-certainty evidence).\nAuthors' conclusions\nInvestigators have evaluated a wide range of governance arrangements that are relevant for low-income countries using sound systematic review methods. These strategies have been targeted at different levels in health systems, and studies have assessed a range of outcomes. Moderate-certainty evidence shows desirable effects (with no undesirable effects) for some interventions. However, there are important gaps in the availability of systematic reviews and primary studies for the all of the main categories of governance arrangements.\n\nGiven the provided abstract, please respond to the following query: \"What are the effects of different ways of organising stakeholder involvement in governing health services?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Four reviews were included and the key findings are that:\n- participatory learning and action groups for women probably improve newborn survival (moderate-certainty evidence) and may improve maternal survival (low-certainty evidence);\n- disclosing performance data on health insurance scheme quality to the public may lead people to select health plans that have better quality ratings or to avoid those with worse ratings and may lead to slight improvements in clinical outcomes for health insurance schemes (low-certainty evidence);\n- disclosing performance data on hospital quality to the public may lead to little or no difference in people's selection of hospitals (low-certainty evidence), probably encourages hospitals to implement quality improvement activities (moderate-certainty evidence) and may lead to slight improvements in hospital clinical outcomes (low-certainty evidence);\n- disclosing performance on individual healthcare providers to the public probably leads people to select providers that have better quality ratings (moderate-certainty evidence).\nNo studies evaluated the effects of stakeholder participation in policy and organisational decisions."
}
] |
query-laysum
|
22860
|
[
{
"role": "user",
"content": "Abstract: Background\nInternational clinical practice guidelines routinely recommend that cardiac patients participate in rehabilitation programmes for comprehensive secondary prevention. However, data show that only a small proportion of these patients utilise rehabilitation.\nObjectives\nFirst, to assess interventions provided to increase patient enrolment in, adherence to, and completion of cardiac rehabilitation. Second, to assess intervention costs and associated harms, as well as interventions intended to promote equitable CR utilisation in vulnerable patient subpopulations.\nSearch methods\nReview authors performed a search on 10 July 2018, to identify studies published since publication of the previous systematic review. We searched the Cochrane Central Register of Controlled Trials (CENTRAL); the National Health Service (NHS) Centre for Reviews and Dissemination (CRD) databases (Health Technology Assessment (HTA) and Database of Abstracts of Reviews of Effects (DARE)), in the Cochrane Library (Wiley); MEDLINE (Ovid); Embase (Elsevier); the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCOhost); and Conference Proceedings Citation Index - Science (CPCI-S) on Web of Science (Clarivate Analytics). We checked the reference lists of relevant systematic reviews for additional studies and also searched two clinical trial registers. We applied no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) in adults with myocardial infarction, with angina, undergoing coronary artery bypass graft surgery or percutaneous coronary intervention, or with heart failure who were eligible for cardiac rehabilitation. Interventions had to aim to increase utilisation of comprehensive phase II cardiac rehabilitation. We included only studies that measured one or more of our primary outcomes. Secondary outcomes were harms and costs, and we focused on equity.\nData collection and analysis\nTwo review authors independently screened the titles and abstracts of all identified references for eligibility, and we obtained full papers of potentially relevant trials. Two review authors independently considered these trials for inclusion, assessed included studies for risk of bias, and extracted trial data independently. We resolved disagreements through consultation with a third review author. We performed random-effects meta-regression for each outcome and explored prespecified study characteristics.\nMain results\nOverall, we included 26 studies with 5299 participants (29 comparisons). Participants were primarily male (64.2%). Ten (38.5%) studies included patients with heart failure. We assessed most studies as having low or unclear risk of bias. Sixteen studies (3164 participants) reported interventions to improve enrolment in cardiac rehabilitation, 11 studies (2319 participants) reported interventions to improve adherence to cardiac rehabilitation, and seven studies (1567 participants) reported interventions to increase programme completion. Researchers tested a variety of interventions to increase utilisation of cardiac rehabilitation. In many studies, this consisted of contacts made by a healthcare provider during or shortly after an acute care hospitalisation.\nLow-quality evidence shows an effect of interventions on increasing programme enrolment (19 comparisons; risk ratio (RR) 1.27, 95% confidence interval (CI) 1.13 to 1.42). 
Meta-regression revealed that the intervention deliverer (nurse or allied healthcare provider; P = 0.02) and the delivery format (face-to-face; P = 0.01) were influential in increasing enrolment. Low-quality evidence shows interventions to increase adherence were effective (nine comparisons; standardised mean difference (SMD) 0.38, 95% CI 0.20 to 0.55), particularly when they were delivered remotely, such as in home-based programs (SMD 0.56, 95% CI 0.37 to 0.76). Moderate-quality evidence shows interventions to increase programme completion were also effective (eight comparisons; RR 1.13, 95% CI 1.02 to 1.25), but those applied in multi-centre studies were less effective than those given in single-centre studies, leading to questions regarding generalisability. A moderate level of statistical heterogeneity across intervention studies reflects heterogeneity in intervention approaches. There was no evidence of small-study bias for enrolment (insufficient studies to test for this in the other outcomes).\nWith regard to secondary outcomes, no studies reported on harms associated with the interventions. Only two studies reported costs. In terms of equity, trialists tested interventions designed to improve utilisation among women and older patients. Evidence is insufficient for quantitative assessment of whether women-tailored programmes were associated with increased utilisation, and studies that assess motivating women are needed. For older participants, again while quantitative assessment could not be undertaken, peer navigation may improve enrolment.\nAuthors' conclusions\nInterventions may increase cardiac rehabilitation enrolment, adherence and completion; however the quality of evidence was low to moderate due to heterogeneity of the interventions used, among other factors. Effects on enrolment were larger in studies targeting healthcare providers, training nurses, or allied healthcare providers to intervene face-to-face; effects on adherence were larger in studies that tested remote interventions. More research is needed, particularly to discover the best ways to increase programme completion.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Strategies to increase enrolment were effective, particularly those that targeted healthcare providers, training nurses, or allied healthcare providers to intervene face-to-face. Interventions to increase adherence to programmes and to increase completion were effective, but it remains unclear which specific strategies were implemented.\nWe found no studies providing information about potential harms and two studies reporting costs of these strategies to increase use of cardiac rehabilitation. Some studies provided interventions to increase rehabilitation utilisation in women and older patients. Evidence was insufficient for quantitative assessment of whether women-tailored programmes were associated with increased utilisation, but motivating women appears key. For older participants, qualitative analysis suggested that peer support or postdischarge visits may improve enrolment, and group sessions promoting self-regulation skills may increase completion."
}
] |
query-laysum
|
22861
|
[
{
"role": "user",
"content": "Abstract: Background\nHypertension is a major public health problem that increases the risk of cardiovascular and kidney diseases. Several studies have shown an inverse association between calcium intake and blood pressure, as small reductions in blood pressure have been shown to produce rapid reductions in vascular disease risk even in individuals with normal blood pressure ranges. This is the first update of the review to evaluate the effect of calcium supplementation in normotensive individuals as a preventive health measure.\nObjectives\nTo assess the efficacy and safety of calcium supplementation versus placebo or control for reducing blood pressure in normotensive people and for the prevention of primary hypertension.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for randomised controlled trials up to September 2020: the Cochrane Hypertension Specialised Register, CENTRAL (2020, Issue 9), Ovid MEDLINE, Ovid Embase, the WHO International Clinical Trials Registry Platform, and the US National Institutes of Health Ongoing Trials Register, ClinicalTrials.gov. We also contacted authors of relevant papers regarding further published and unpublished work. The searches had no language restrictions.\nSelection criteria\nWe selected trials that randomised normotensive people to dietary calcium interventions such as supplementation or food fortification versus placebo or control. We excluded quasi-random designs. The primary outcomes were hypertension (defined as blood pressure ≥ 140/90 mmHg) and blood pressure measures.\nData collection and analysis\nTwo review authors independently selected trials for inclusion, abstracted the data and assessed the risks of bias. We used the GRADE approach to assess the certainty of evidence.\nMain results\nThe 2020 updated search identified four new trials. We included a total of 20 trials with 3512 participants, however we only included 18 for the meta-analysis with 3140 participants. None of the studies reported hypertension as a dichotomous outcome. The effect on systolic and diastolic blood pressure was: mean difference (MD) -1.37 mmHg, 95% confidence interval (CI) -2.08, -0.66; 3140 participants; 18 studies; I2 = 0%, high-certainty evidence; and MD -1.45, 95% CI -2.23, -0.67; 3039 participants; 17 studies; I2 = 45%, high-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for those younger than 35 years was: MD -1.86, 95% CI -3.45, -0.27; 452 participants; eight studies; I2 = 19%, moderate-certainty evidence; MD -2.50, 95% CI -4.22, -0.79; 351 participants; seven studies ; I2 = 54%, moderate-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for those 35 years or older was: MD -0.97, 95% CI -1.83, -0.10; 2688 participants; 10 studies; I2 = 0%, high-certainty evidence; MD -0.59, 95% CI -1.13, -0.06; 2688 participants; 10 studies; I2 = 0%, high-certainty evidence, respectively. The effect on systolic and diastolic blood pressure for women was: MD -1.25, 95% CI -2.53, 0.03; 1915 participants; eight studies; I2 = 0%, high-certainty evidence; MD -1.04, 95% CI -1.86, -0.22; 1915 participants; eight studies; I2 = 4%, high-certainty evidence, respectively. 
The effect on systolic and diastolic blood pressure for men was MD -2.14, 95% CI -3.71, -0.59; 507 participants; five studies; I2 = 8%, moderate-certainty evidence; MD -1.99, 95% CI -3.25, -0.74; 507 participants; five studies; I2 = 41%, moderate-certainty evidence, respectively. The effect was consistent in both genders regardless of baseline calcium intake.\nThe effect on systolic blood pressure was: MD -0.02, 95% CI -2.23, 2.20; 302 participants; 3 studies; I2 = 0%, moderate-certainty evidence with doses less than 1000 mg; MD -1.05, 95% CI -1.91, -0.19; 2488 participants; 9 studies; I2 = 0%, high-certainty evidence with doses 1000 to 1500 mg; and MD -2.79, 95% CI -4.71, 0.86; 350 participants; 7 studies; I2 = 0%, moderate-certainty evidence with doses more than 1500 mg. The effect on diastolic blood pressure was: MD -0.41, 95% CI -2.07, 1.25; 201 participants; 2 studies; I2 = 0, moderate-certainty evidence; MD -2.03, 95% CI -3.44, -0.62 ; 1017 participants; 8 studies; and MD -1.35, 95% CI -2.75, -0.05; 1821 participants; 8 studies; I2 = 51%, high-certainty evidence, respectively.\nNone of the studies reported adverse events.\nAuthors' conclusions\nAn increase in calcium intake slightly reduces both systolic and diastolic blood pressure in normotensive people, particularly in young people, suggesting a role in the prevention of hypertension. The effect across multiple prespecified subgroups and a possible dose response effect reinforce this conclusion. Even small reductions in blood pressure could have important health implications for reducing vascular disease. A 2 mmHg lower systolic blood pressure is predicted to produce about 10% lower stroke mortality and about 7% lower mortality from ischaemic heart disease.\nThere is a great need for adequately-powered clinical trials randomising young people. Subgroup analysis should involve basal calcium intake, age, sex, basal blood pressure, and body mass index. We also require assessment of side effects, optimal doses and the best strategy to improve calcium intake.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found high quality of evidence for systolic and diastolic blood pressure in both men and women. The quality of evidence was also high for participants 35 years or older and moderate for younger people.\nThe quality of evidence was high for doses of calcium of 1000 to 1500 mg/day and was moderate for lower or higher doses.\nFive of the 18 trials were industry funded."
}
] |
query-laysum
|
22862
|
[
{
"role": "user",
"content": "Abstract: Background\nVenous thromboembolism (VTE) is a collective term for two conditions: deep vein thrombosis (DVT) and pulmonary embolism (PE). A proportion of people with VTE have no underlying or immediately predisposing risk factors and the VTE is referred to as unprovoked. Unprovoked VTE can often be the first clinical manifestation of an underlying malignancy. This has raised the question of whether people with an unprovoked VTE should be investigated for an underlying cancer. Treatment for VTE is different in cancer and non-cancer patients and a correct diagnosis would ensure that people received the optimal treatment for VTE to prevent recurrence and further morbidity. Furthermore, an appropriate cancer diagnosis at an earlier stage could avoid the risk of cancer progression and lead to improvements in cancer-related mortality and morbidity. This is the third update of the review first published in 2015.\nObjectives\nTo determine whether testing for undiagnosed cancer in people with a first episode of unprovoked VTE (DVT of the lower limb or PE) is effective in reducing cancer- or VTE-related mortality and morbidity and to determine which tests for cancer are best at identifying treatable cancers early.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase and CINAHL databases and World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 5 May 2021. We also undertook reference checking to identify additional studies.\nSelection criteria\nRandomised and quasi-randomised trials in which people with an unprovoked VTE were allocated to receive specific tests for identifying cancer or clinically indicated tests only were eligible for inclusion.\nData collection and analysis\nTwo review authors independently selected studies, assessed risk of bias and extracted data. We assessed the certainty of the evidence using GRADE criteria. We resolved any disagreements by discussion. The main outcomes of interest were all-cause mortality, cancer-related mortality and VTE-related mortality.\nMain results\nNo new studies were identified for this 2021 update. In total, four studies with 1644 participants are included. Two studies assessed the effect of extensive tests including computed tomography (CT) scanning versus tests at the physician's discretion, while the other two studies assessed the effect of standard testing plus positron emission tomography (PET)/CT scanning versus standard testing alone. For extensive tests including CT versus tests at the physician's discretion, the certainty of the evidence, as assessed according to GRADE, was low due to risk of bias (early termination of the studies). When comparing standard testing plus PET/CT scanning versus standard testing alone, the certainty of evidence was moderate due to a risk of detection bias. The certainty of the evidence was downgraded further as detection bias was present in one study with a low number of events.\nWhen comparing extensive tests including CT versus tests at the physician's discretion, pooled analysis on two studies showed that testing for cancer was consistent with either benefit or no benefit on cancer-related mortality (odds ratio (OR) 0.49, 95% confidence interval (CI) 0.15 to 1.67; 396 participants; 2 studies; low-certainty evidence). 
One study (201 participants) showed that, overall, malignancies were less advanced at diagnosis in extensively tested participants than in participants in the control group. In total, 9/13 participants diagnosed with cancer in the extensively tested group had a T1 or T2 stage malignancy compared to 2/10 participants diagnosed with cancer in the control group (OR 5.00, 95% CI 1.05 to 23.76; low-certainty evidence). There was no clear difference in detection of advanced stages between extensive tests versus tests at the physician's discretion: one participant in the extensively tested group had stage T3 compared with four participants in the control group (OR 0.25, 95% CI 0.03 to 2.28; low-certainty evidence). In addition, extensively tested participants were diagnosed earlier than control group (mean: 1 month with extensive tests versus 11.6 months with tests at physician's discretion to cancer diagnosis from the time of diagnosis of VTE). Extensive testing did not increase the frequency of an underlying cancer diagnosis (OR 1.32, 95% CI 0.59 to 2.93; 396 participants; 2 studies; low-certainty evidence). Neither study measured all-cause mortality, VTE-related morbidity and mortality, complications of anticoagulation, adverse effects of cancer tests, participant satisfaction or quality of life.\nWhen comparing standard testing plus PET/CT screening versus standard testing alone, standard testing plus PET/CT screening was consistent with either benefit or no benefit on all-cause mortality (OR 1.22, 95% CI 0.49 to 3.04; 1248 participants; 2 studies; moderate-certainty evidence), cancer-related mortality (OR 0.55, 95% CI 0.20 to 1.52; 1248 participants; 2 studies; moderate-certainty evidence) or VTE-related morbidity (OR 1.02, 95% CI 0.48 to 2.17; 854 participants; 1 study; moderate-certainty evidence). Regarding stage of cancer, there was no clear difference for detection of early (OR 1.78, 95% 0.51 to 6.17; 394 participants; 1 study; low-certainty evidence) or advanced (OR 1.00, 95% CI 0.14 to 7.17; 394 participants; 1 study; low-certainty evidence) stages of cancer. There was also no clear difference in the frequency of an underlying cancer diagnosis (OR 1.71, 95% CI 0.91 to 3.20; 1248 participants; 2 studies; moderate-certainty evidence). Time to cancer diagnosis was 4.2 months in the standard testing group and 4.0 months in the standard testing plus PET/CT group (P = 0.88). Neither study measured VTE-related mortality, complications of anticoagulation, adverse effects of cancer tests, participant satisfaction or quality of life.\nAuthors' conclusions\nSpecific testing for cancer in people with unprovoked VTE may lead to earlier diagnosis of cancer at an earlier stage of the disease. However, there is currently insufficient evidence to draw definitive conclusions concerning the effectiveness of testing for undiagnosed cancer in people with a first episode of unprovoked VTE (DVT or PE) in reducing cancer- or VTE-related morbidity and mortality. The results could be consistent with either benefit or no benefit. Further good-quality large-scale randomised controlled trials are required before firm conclusions can be made.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This Cochrane review updates our previous evidence. The evidence is current to May 2021."
}
] |
query-laysum
|
22863
|
[
{
"role": "user",
"content": "Abstract: Background\nUrinary tract infections (UTIs) are very common, affecting more than 7 million people worldwide. Whilst many people may only experience a single episode in their lifetime and are generally responsive to standard antibiotics, a significant proportion of adults and children (approximately 15% to 25%) are chronic symptomatic UTI sufferers. Certain population groups are at greater risk than others, such as immunosuppressed and people with chronic kidney disease.\nD-mannose is a sugar part of normal human metabolism found within most diets. The mechanism of action is to prevent bacterial adherence to the uroepithelial cells. The D-mannose-based inhibitors can block uropathogenic Escherichia coli adhesion and invasion of the uroepithelial cells. The bacteria are then understood to essentially be eliminated by urination.\nEarly pilot studies on animals and humans have trialled concentrated forms of D-mannose (tablets or sachets) in doses ranging from 200 mg up to 2 to 3 g and found possible efficacy in reducing UTI symptoms or recurrence.\nAlthough the anti-adhesive effects of D-mannose have been well-established, only recently have we seen a small number of pilot studies and small clinical trials conducted.\nObjectives\nTo assess the benefits and harms of D-mannose for preventing and treating UTIs in adults and children.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Register of Studies up to 22 February 2022 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included RCTs measuring and reporting the effect of D-mannose, in any combination and any formulation, to prevent or treat UTIs in adults and children, females and males, in any setting (including perioperative).\nAuthors independently assessed the retrieved titles and abstracts and, where necessary, the full text to determine which satisfied the inclusion criteria.\nData collection and analysis\nData extraction was independently carried out by two authors using a standard data extraction form. Methodological quality of the included studies was assessed using the Cochrane risk of bias tool. Data entry was carried out by one author and cross-checked by another author. The certainty of the evidence was assessed using the GRADE approach.\nMain results\nWe included seven RCTs (719 participants) in adult females and males who had either acute cystitis or a history of recurrent (at least two episodes in six months or three episodes in 12 months) UTIs (symptomatic or asymptomatic). Two were prevention studies, four were prevention and treatment studies (two perioperative and one in people with multiple sclerosis), and one was a treatment study. Time periods ranged from 15 days to six months. No two studies were comparable (by dose or treatments), and we could not undertake meta-analyses.\nIndividual studies reported no clear evidence to determine whether D-mannose is more or less effective in preventing or treating UTIs.\nD-mannose (2 g) had uncertain effects on symptomatic and bacteriuria-confirmed UTIs when compared to no treatment (1 study, 205 participants; very low certainty evidence) and antibiotics (nitrofurantoin 50 mg) (1 study, 206 participants; very low certainty evidence). 
D-mannose, in combination with herbal supplements, had uncertain effects on symptomatic and bacteria-confirmed UTI and pain when compared to no treatment (1 study, 40 participants; very low certainty evidence). D-mannose 500 mg plus supplements (N-acetylcysteine and Morinda citrifolia fruit extract) had uncertain effects on symptomatic and bacteriuria-confirmed UTIs when compared to an antibiotic (prulifloxacin 400 mg) (1 study, 75 participants; very low certainty evidence).\nAdverse events were very few and poorly reported; none were serious (mostly diarrhoea and vaginal burning).\nOverall, the quality of the evidence is poor. Most studies were judged to have unclear or high risk of bias across most domains. Data was sparse and addressed very few outcomes. The GRADE evaluation was rated as very low certainty evidence due to very serious limitations in the study design or execution (high risk of bias across all studies) and sparse data (single study data and small sample sizes).\nAuthors' conclusions\nThere is currently little to no evidence to support or refute the use of D-mannose to prevent or treat UTIs in all populations.\nThis review highlights the severe lack of high-quality RCTs testing the efficacy of D-mannose for UTIs in any population. Despite UTIs being one of the most common adult infections (affecting 50% of women at least once in their lifetime) and the growing global antimicrobial resistance, we found very few studies that adequately test this alternative treatment.\nFuture research in this field requires, in the first instance, a single adequately powered RCT comparing D-mannose with placebo.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed all of the evidence on D-mannose (tablets or powder) to see whether it can prevent or treat UTIs in adults and children. The evidence is current to 22 February 2022."
}
] |
query-laysum
|
22864
|
[
{
"role": "user",
"content": "Abstract: Background\nChanging population demographics have led to an increasing number of functionally dependent older people who require care and medical treatment. In many countries, government policy aims to shift resources into the community from institutional care settings with the expectation that this will reduce costs and improve the quality of care compared.\nObjectives\nTo assess the effects of long-term home or foster home care versus institutional care for functionally dependent older people.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Library, MEDLINE, Embase, CINAHL, and two trials registers to November 2015.\nSelection criteria\nWe included randomised and non-randomised trials, controlled before-after studies and interrupted time series studies complying with the EPOC study design criteria and comparing the effects of long-term home care versus institutional care for functionally dependent older people.\nData collection and analysis\nTwo reviewers independently extracted data and assessed the risk of bias of each included study. We reported the results narratively, as the substantial heterogeneity across studies meant that meta-analysis was not appropriate.\nMain results\nWe included 10 studies involving 16,377 participants, all of which were conducted in high income countries. Included studies compared community-based care with institutional care (care homes). The sample size ranged from 98 to 11,803 (median N = 204). There was substantial heterogeneity in the healthcare context, interventions studied, and outcomes assessed. One study was a randomised trial (N = 112); other included studies used designs that had potential for bias, particularly due lack of randomisation, baseline imbalances, and non-blinded outcome assessment. Most studies did not select (or exclude) participants for any specific disease state, with the exception of one study that only included patients if they had a stroke. All studies had methodological limitations, so readers should interpret results with caution.\nIt is uncertain whether long-term home care compared to nursing home care decreases mortality risk (2 studies, N = 314, very-low certainty evidence). Estimates ranged from a nearly three-fold increased risk of mortality in the homecare group (risk ratio (RR) 2.89, 95% confidence interval (CI) 1.57 to 5.32) to a 62% relative reduction (RR 0.38, 95% CI 0.17 to 0.61). We did not pool data due to the high degree of heterogeneity (I2 = 94%).\nIt is uncertain whether the intervention has a beneficial effect on physical function, as the certainty of evidence is very low (5 studies, N = 1295). Two studies reported that participants who received long-term home care had improved activities of daily living compared to those in a nursing home, whereas a third study reported that all participants performed equally on physical function.\nIt is uncertain whether long-term home care improves happiness compared to nursing home care (RR 1.97, 95% CI 1.27 to 3.04) or general satisfaction because the certainty of evidence was very low (2 studies, N = 114).\nThe extent to which long-term home care was associated to more or fewer adverse health outcomes than nursing home care was not reported.\nIt is uncertain whether long-term home care compared to nursing home care decreases the risk of hospital admission (very low-certainty evidence, N = 14,853). 
RR estimates ranged from 2.75 (95% CI 2.59 to 2.92), showing an increased risk for those receiving care at home, to 0.82 (95% CI 0.72 to 0.93), showing a slightly reduced risk for the same group. We did not pool data due to the high degree of heterogeneity (I2 = 99%).\nAuthors' conclusions\nThere are insufficient high-quality published data to support any particular model of care for functionally dependent older people. Community-based care was not consistently beneficial across all the included studies; there were some data suggesting that community-based care may be associated with improved quality of life and physical function compared to institutional care. However, community alternatives to institutional care may be associated with increased risk of hospitalisation. Future studies should assess healthcare utilisation, perform economic analysis, and consider caregiver burden.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for studies that had been published up to November 2015."
}
] |
query-laysum
|
22865
|
[
{
"role": "user",
"content": "Abstract: Background\nCancer is a common disease and radiotherapy is one well-established treatment for some solid tumours. Hyperbaric oxygenation therapy (HBOT) may improve the ability of radiotherapy to kill hypoxic cancer cells, so the administration of radiotherapy while breathing hyperbaric oxygen may result in a reduction in mortality and recurrence.\nObjectives\nTo assess the benefits and harms of administering radiotherapy for the treatment of malignant tumours while breathing HBO.\nSearch methods\nIn September 2017 we searched the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Library Issue 8, 2017, MEDLINE, Embase, and the Database of Randomised Trials in Hyperbaric Medicine using the same strategies used in 2011 and 2015, and examined the reference lists of included articles.\nSelection criteria\nRandomised and quasi-randomised studies comparing the outcome of malignant tumours following radiation therapy while breathing HBO versus air or an alternative sensitising agent.\nData collection and analysis\nThree review authors independently evaluated the quality of and extracted data from the included trials.\nMain results\nWe included 19 trials in this review (2286 participants: 1103 allocated to HBOT and 1153 to control).\nFor head and neck cancer, there was an overall reduction in the risk of dying at both one year and five years after therapy (risk ratio (RR) 0.83, 95% confidence interval (CI) 0.70 to 0.98, number needed to treat for an additional beneficial outcome (NNTB) = 11 and RR 0.82, 95% CI 0.69 to 0.98, high-quality evidence), and some evidence of improved local tumour control immediately following irradiation (RR with HBOT 0.58, 95% CI 0.39 to 0.85, moderate-quality evidence due to imprecision). There was a lower incidence of local recurrence of tumour when using HBOT at both one and five years (RR at one year 0.66, 95% CI 0.56 to 0.78, high-quality evidence; RR at five years 0.77, 95% CI 0.62 to 0.95, moderate-quality evidence due to inconsistency between trials). There was also some evidence with regard to the chance of metastasis at five years (RR with HBOT 0.45 95% CI 0.09 to 2.30, single trial moderate quality evidence imprecision). No trials reported a quality of life assessment. Any benefits come at the cost of an increased risk of severe local radiation reactions with HBOT (severe radiation reaction RR 2.64, 95% CI 1.65 to 4.23, high-quality evidence). However, the available evidence failed to clearly demonstrate an increased risk of seizures from acute oxygen toxicity (RR 4.3, 95% CI 0.47 to 39.6, moderate-quality evidence).\nFor carcinoma of the uterine cervix, there was no clear benefit in terms of mortality at either one year or five years (RR with HBOT at one year 0.88, 95% CI 0.69 to 1.11, high-quality evidence; RR at five years 0.95, 95% CI 0.80 to 1.14, moderate-quality evidence due to inconsistency between trials). Similarly, there was no clear evidence of a benefit of HBOT in the reported rate of local recurrence (RR with HBOT at one year 0.82, 95% CI 0.63 to 1.06, high-quality evidence; RR at five years 0.85, 95% CI 0.65 to 1.13, moderate-quality evidence due to inconsistency between trials). We also found no clear evidence for any effect of HBOT on the rate of development of metastases at both two years and five years (two years RR with HBOT 1.05, 95% CI 0.84 to 1.31, high quality evidence; five years RR 0.79, 95% CI 0.50 to 1.26, moderate-quality evidence due to inconsistency). 
There were, however, increased adverse effects with HBOT. The risk ratio for a severe radiation injury at the time of treatment with HBOT was 2.05, 95% CI 1.22 to 3.46, high-quality evidence. No trials reported any failure of local tumour control, quality of life assessments, or the risk of seizures during treatment.\nWith regard to the treatment of urinary bladder cancer, there was no clear evidence of a benefit in terms of mortality from HBOT at one year (RR 0.97, 95% CI 0.74 to 1.27, high-quality evidence), nor any benefit in the risk of developing metastases at two years (RR 2.0, 95% CI 0.58 to 6.91, moderate-quality evidence due to imprecision). No trial reported on failure of local control, local recurrence, quality of life, or adverse effects.\nWhen all cancer types were combined, there was evidence for an increased risk of severe radiation tissue injury during the course of radiotherapy with HBOT (RR 2.35, 95% CI 1.66 to 3.33, high-quality evidence) and of oxygen toxic seizures during treatment (RR with HBOT 6.76, 95% CI 1.16 to 39.31, moderate-quality evidence due to imprecision).\nAuthors' conclusions\nWe found evidence that HBOT improves local tumour control, mortality, and local tumour recurrence for cancers of the head and neck. These benefits may only occur with unusual fractionation schemes. Hyperbaric oxygenation therapy is associated with severe tissue radiation injury. Given the methodological and reporting inadequacies of the included studies, our results demand a cautious interpretation. More research is needed for head and neck cancer, but is probably not justified for uterine cervical or bladder cancer. There is little evidence available concerning malignancies at other anatomical sites.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For people with solid cancers, we asked if the combination of radiotherapy and hyperbaric oxygen (HBO) breathing could reduce mortality and the chance of cancer spread when compared to radiotherapy alone or to radiotherapy and an alternative approach to reducing mortality and cancer spread."
}
] |
query-laysum
|
22866
|
[
{
"role": "user",
"content": "Abstract: Background\nElectromotive drug administration (EMDA) is the use of electrical current to improve the delivery of intravesical agents to reduce the risk of recurrence in people with non-muscle invasive bladder cancer (NMIBC). It is unclear how effective this is in comparison to other forms of intravesical therapy.\nObjectives\nTo assess the effects of intravesical EMDA for the treatment of NMIBC.\nSearch methods\nWe performed a comprehensive search using multiple databases (CENTRAL, MEDLINE, EMBASE), two clinical trial registries and a grey literature repository. We searched reference lists of relevant publications and abstract proceedings. We applied no language restrictions. The last search was February 2017.\nSelection criteria\nWe searched for randomised studies comparing EMDA of any intravesical agent used to reduce bladder cancer recurrence in conjunction with transurethral resection of bladder tumour (TURBT).\nData collection and analysis\nTwo review authors independently screened the literature, extracted data, assessed risk of bias and rated quality of evidence (QoE) according to GRADE on a per outcome basis.\nMain results\nWe included three trials with 672 participants that described five distinct comparisons. The same principal investigator conducted all three trials. All studies used mitomycin C (MMC) as the chemotherapeutic agent for EMDA.\n1. Postoperative MMC-EMDA induction versus postoperative Bacillus Calmette-Guérin (BCG) induction: based on one study with 72 participants with carcinoma in situ (CIS) and concurrent pT1 urothelial carcinoma, we are uncertain (very low QoE) about the effect of MMC-EMDA on time to recurrence (risk ratio (RR) 1.06, 95% confidence interval (CI) 0.64 to 1.76; corresponding to 30 more per 1000 participants, 95% CI 180 fewer to 380 more). There was no disease progression in either treatment arm at three months' follow-up. We are uncertain (very low QoE) about serious adverse events (RR 0.75, 95% CI 0.18 to 3.11).\n2. Postoperative MMC-EMDA induction versus MMC-passive diffusion (PD) induction: based on one study with 72 participants with CIS and concurrent pT1 urothelial carcinoma, postoperative MMC-EMDA may (low QoE) reduce disease recurrence (RR 0.65, 95% CI 0.44 to 0.98; corresponding to 147 fewer per 1000 participants, 95% CI 235 fewer to 8 fewer). There was no disease progression in either treatment arm at three months' follow-up. We are uncertain (very low QoE) about the effect of MMC-EMDA on serious adverse events (RR 1.50, 95% CI 0.27 to 8.45).\n3. Postoperative MMC-EMDA with sequential BCG induction and maintenance versus postoperative BCG induction and maintenance: based on one study with 212 participants with pT1 urothelial carcinoma of the bladder with or without CIS, postoperative MMC-EMDA with sequential BCG may result (low QoE) in a longer time to recurrence (hazard ratio (HR) 0.51, 95% CI 0.34 to 0.77; corresponding to 181 fewer per 1000 participants, 95% CI 256 fewer to 79 fewer) and time to progression (HR 0.36, 95% CI 0.17 to 0.75; corresponding to 63 fewer per 1000 participants, 95% CI 82 fewer to 24 fewer). We are uncertain (very low QoE) about the effect of MMC-EMDA on serious adverse events (RR 1.02, 95% CI 0.21 to 4.94).\n4. 
Single-dose, preoperative MMC-EMDA versus single-dose, postoperative MMC-PD: based on one study with 236 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC-EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.47, 95% CI 0.32 to 0.69; corresponding to 247 fewer per 1000 participants, 95% CI 341 fewer to 130 fewer) for a median follow-up of 86 months. We are uncertain (very low QoE) about the effect of MMC-EMDA on time to progression (HR 0.81, 95% CI 0.00 to 259.93; corresponding to 34 fewer per 1000 participants, 95% CI 193 fewer to 807 more) and serious adverse events (RR 0.79, 95% CI 0.30 to 2.05).\n5. Single-dose, preoperative MMC-EMDA versus TURBT alone: based on one study with 233 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC-EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.40, 95% CI 0.28 to 0.57; corresponding to 304 fewer per 1000 participants, 95% CI 390 fewer to 198 fewer) for a median follow-up of 86 months. We are uncertain (very low QoE) about the effect of MMC-EMDA on time to progression (HR 0.74, 95% CI 0.00 to 247.93; corresponding to 49 fewer per 1000 participants, 95% CI 207 fewer to 793 more) or serious adverse events (HR 1.74, 95% CI 0.52 to 5.77).\nAuthors' conclusions\nWhile the use of EMDA to administer intravesical MMC may result in a delay in time to recurrence in select patient populations, we are uncertain about its impact on serious adverse events in all settings. Common reasons for downgrading the QoE were study limitations and imprecision. A potential role for EMDA-based administration of MMC may lie in settings where more established agents (such as BCG) are not available. In the setting of low or very low QoE for most comparisons, our confidence in the effect estimates is limited and the true effect sizes may be substantially different from those reported here.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are very unsure whether the use of an electrical current to give a course of MMC after endoscopic surgery is better or worse compared to giving a course of Bacillus Calmette-Guérin (BCG; vaccine usually used in tuberculosis) or MMC without electrical current. MMC given with electrical current together with BCG given over a long period of time may be better than BCG alone in delaying the tumour from coming back and from spreading into the deeper layer of the bladder wall. Giving one dose of MMC with electrical current before endoscopic surgery may be better than one dose of MMC without electric current after surgery or surgery alone without further treatment. We are uncertain whether people are more or less likely to have bad reactions from these treatments."
}
] |
query-laysum
|
22867
|
[
{
"role": "user",
"content": "Abstract: Background\nThis systematic review update summarizes the current evidence on the use of natalizumab for induction of remission in Crohn's disease (CD).\nObjectives\nTo determine the efficacy and safety of natalizumab for induction of remission in CD.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, the Cochrane IBD Group Specialized Trials Register, and clinicaltrials.gov from inception to 10 May 2018.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing natalizumab to a placebo or control therapy for induction of remission in CD.\nData collection and analysis\nTwo authors independently screened studies, extracted data and assessed methodological quality using the Cochrane risk of bias tool. The primary outcome was failure to enter clinical remission. Secondary outcomes included clinical response, mean change in Crohn's Disease Activity Index (CDAI), adverse events (AEs), withdrawal due to AEs and serious AEs. For dichotomous outcomes, we calculated the risk ratio (RR) and 95% confidence interval (95% CI). For continuous outcomes we calculated the mean difference (MD) and 95% CI. Data were pooled for meta-analysis when the interventions, patient groups and outcomes were sufficiently similar (determined by consensus). We used GRADE to assess the overall quality of the evidence.\nMain results\nA total of five RCTs (1771 participants) were included. Four studies (1692 participants) compared one, two or three infusions of natalizumab (300 mg or 3 mg/kg or 6mg/kg) to placebo. One study (79 participants) compared three infusions of natalizumab (300 mg) and infliximab (5 mg/kg) to infliximab and placebo. Four studies were rated as low risk of bias. One study was rated as unclear risk of bias for selective reporting.\nOne, two and three infusions of natalizumab were superior to placebo for induction of remission and clinical response. Infusions were administered at weeks zero, four and eight. After one infusion, 76% (849/1117) of natalizumab participants failed to enter remission at 4 weeks compared to 83% (411/494) of placebo participants (RR 0.91, 95% CI 0.86 to 0.96, 3 studies, GRADE high quality). At 4 weeks, the RR for clinical response was 0.78 (95% CI 0.66 to 0.92, 3 studies, 1611 participants, GRADE moderate quality). After two infusions, after 8 weeks, 66% (693/1049) of natalizumab participants failed to enter remission compared to 77% (382/494) of placebo participants (RR 0.85, 95% CI 0.76 to 0.95; 3 studies, GRADE moderate quality). At 8 weeks, the RR for clinical response was 0.73 (95% CI 0.58 to 0.91, 3 studies, 1543 participants, GRADE low quality). After three infusions, at 12 weeks, 61% (596/983) of natalizumab participants failed to enter remission compared to 73% (313/431) of placebo participants (RR 0.85, 95% CI 0.78 to 0.92, 2 studies, GRADE high quality). At 12 weeks, the RR for clinical response was 0.76 (95% CI 0.67 to 0.86, 2 studies, 1414 participants, GRADE high quality). One study (507 participants) reported on change in CADI from baseline. Natalizumab participants had a larger drop in mean CDAI scores than placebo participants at 4, 8 and 12 weeks.\nThe rates of AEs, withdrawals due to AEs and serious AEs were similar across groups at 4, 8 and 12 weeks. After one infusion, 74% (50/68) of natalizumab participants experienced an AE compared to 81% (51/63) of placebo participants (RR 0.91, 95% CI 0.75 to 1.09, GRADE moderate quality). 
Withdrawal due to an AE occurred in 1% (1/68) of natalizumab participants and 3% of placebo participants (RR 0.46, 95% CI 0.04 to 4.98, GRADE low quality). SAEs occurred in 10% (7/68) of natalizumab participants compared to 11% (7/63) of placebo participants (RR 0.93, 95% CI 0.34 to 2.49, GRADE low quality). After two infusions, 86% (57/66) of natalizumab participants experienced an AE compared to 81% (51/63) of placebo participants (RR 1.07, 95% CI 0.92 to 1.24, GRADE moderate quality). Withdrawal due to an AE occurred in 3% (2/66) natalizumab participants compared to 3% (2/63) placebo participants (RR 0.95, 95% CI 0.14 to 6.57, GRADE low quality). SAEs occurred in 9% (6/66) of natalizumab participants and 11% (7/63) of placebo participants (RR 0.82, 95% CI 0.29 to 2.30, GRADE low quality). After three infusions, 86% (848/984) of natalizumab participants experienced an AE compared to 83% (359/431) placebo participants (RR 1.03, 95% CI 0.98 to 1.08, GRADE high quality). Withdrawals due to AEs occurred in 8% (82/984) of natalizumab participants compared to 10% (45/431) of placebo participants (RR 0.86, 95% CI 0.59 to 1.26, GRADE moderate quality). SAEs occurred in 7% (65/983) of natalizumab participants and 8% (36/431) of placebo participants (RR 0.76. 95% CI 0.37 to 1.56, GRADE low quality). Adverse events included headache, nausea, nasopharyngitis, abdominal pain, fatigue, vomiting, and exacerbation of CD.\nThe study comparing combination therapy with natalizumab and infliximab to infliximab and placebo demonstrated similar remission rates at 10 weeks. Sixty-four per cent (33/52) of participants assigned to natalizumab and infliximab failed to achieve remission compared to 70% (19/27) assigned to placebo and infliximab (RR 0.90, 95% CI 0.65 to 1.24, GRADE moderate quality). The rates of AEs (moderate quality evidence), withdrawals due to AEs (low quality evidence) and serious AEs (low quality evidence) were similar across groups at 10 weeks. Adverse events included headache, exacerbation of CD, nausea, and nasopharyngitis.\nNatalizumab is associated with the development of progressive multifocal leukoencephalopathy (PML) resulting in some patient deaths. There are currently no tests which can reliably predict those at risk of developing PML.\nAuthors' conclusions\nHigh quality data suggest that natalizumab is effective for induction of clinical remission and response in some patients with moderately to severely active CD. However, none of the included studies had the power to detect rare but serious adverse events such as PML. Due to the association with PML, and the availability of alternative agents that are not associated with PML, natalizumab is not likely to be used in patients who fail currently available medical therapy. The use of natalizumab in select patients (e.g. patients allergic to different biologics) needs to be carefully considered against the potential risk of developing PML. Futher studies of natalizumab are not likely to be done.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the quality of evidence for each outcome was generally high."
}
] |
query-laysum
|
22868
|
[
{
"role": "user",
"content": "Abstract: Background\nThe frequency of skin ulceration makes an important contributor to the morbidity burden in people with sickle cell disease. Many treatment options are available to the healthcare professional, although it is uncertain which treatments have been assessed for effectiveness in people with sickle cell disease. This is an update of a previously published Cochrane Review.\nObjectives\nTo assess the clinical effectiveness and harms of interventions for treating leg ulcers in people with sickle cell disease.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register.\nWe searched LILACS (1982 to January 2020), ISI Web of Knowledge (1985 to January 2020), and the Clinical Trials Search Portal of the World Health Organization (January 2020). We checked the reference lists of all the trials identified. We also contacted those groups or individuals who may have completed relevant randomised trials in this area.\nDate of the last search of the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register: 13 January 2020; date of the last search of the Cochrane Wounds Group Trials Register: 17 February 2017.\nSelection criteria\nRandomised controlled trials of interventions for treating leg ulcers in people with sickle cell disease compared to placebo or an alternative treatment.\nData collection and analysis\nTwo authors independently selected studies for inclusion. All three authors independently assessed the risk of bias of the included studies and extracted data. We used GRADE to assess the quality of the evidence.\nMain results\nSix studies met the inclusion criteria (198 participants with 250 ulcers). Each trial investigated a different intervention and within this review we have grouped these as systemic pharmaceutical interventions (L-cartinine, arginine butyrate, isoxsuprine) and topical pharmaceutical interventions (Solcoseryl® cream, arginine-glycine-aspartic acid (RGD) peptide dressing and topical antibiotics). No trials on non-pharmaceutical interventions were included in the review. All trials had an overall unclear or high risk of bias, and drug companies sponsored four of them. We were unable to pool findings due to the heterogeneity in outcome definitions, and inconsistency between the units of randomisation and analysis.\nThree interventions reported on the change in ulcer size (arginine butyrate, RGD peptide, L-cartinine). Of these, only arginine butyrate showed a reduction of ulcer size compared with a control group, mean reduction -5.10 cm² (95% CI -9.65 to -0.55), but we are uncertain whether this reduces ulcer size compared to standard care alone as the certainty of the evidence has been assessed as very low. Three trials reported on complete leg ulcer closure (isoxsuprine, arginine butyrate, RGD peptide matrix; very low quality of evidence). None reported a clinical benefit. No trial reported on: the time to complete ulcer healing; ulcer-free survival following treatment for sickle cell leg ulcers; quality of life measures; incidence of amputation or harms.\nAuthors' conclusions\nGiven the very low quality of the evidence identified in this updated Cochrane Review we are uncertain whether any of the assessed pharmaceutical interventions reduce ulcer size or result in leg ulcer closure in treated participants compared to controls. However, this intervention was assessed as having a high risk of bias due to inadequacies in the single trial report. 
Other included studies were also assessed as having an unclear or high risk of bias. The harm profile of the all interventions remains inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Given the very low quality of the evidence we are uncertain whether any of the assessed pharmaceutical interventions reduce ulcer size or result in leg ulcer closure. None of the included trials reported on our other outcomes of interest: the time to complete ulcer healing; ulcer-free survival following treatment for sickle cell leg ulcers; quality of life measures; incidence of amputation; or harms."
}
] |
query-laysum
|
22869
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatic veno-occlusive disease (VOD) is a severe complication after haematopoietic stem cell transplantation (HSCT). Different drugs with different mechanisms of action have been tried in HSCT recipients to prevent hepatic VOD. However, it is uncertain whether high-quality evidence exists to support any prophylactic therapy.\nObjectives\nWe aimed to determine the effects of various prophylactic therapies on the incidence of hepatic VOD, overall survival, mortality, quality of life (QOL), and the safety of these therapies in people undergoing HSCT.\nSearch methods\nWe searched the Cochrane Central Registe of Controlled Trials (CENTRAL), MEDLINE, EMBASE, conference proceedings of three international haematology-oncology societies and two trial registries in January 2015, together with reference checking, citation searching and contact with study authors to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing prophylactic therapies with placebo or no treatment, or comparing different therapies for hepatic VOD in people undergoing HSCT.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 14 RCTs. Four trials (612 participants) compared ursodeoxycholic acid with or without additional treatment versus placebo or no treatment or same additional treatment. Two trials (259 participants) compared heparin with no treatment. Two trials (106 participants) compared low molecular weight heparin (LMWH) with placebo or no treatment. One trial (360 participants) compared defibrotide with no treatment. One trial (34 participants) compared glutamine with placebo. Two trials (383 participants) compared fresh frozen plasma (FFP) with or without additional treatment versus no treatment or same additional treatment. One trial (30 participants) compared antithrombin III with heparin versus heparin. One trial compared heparin (47 participants) with LMWH (46 participants) and prostaglandin E1 (PGE1) (47 participants). No trial investigated the effects of danaparoid. The RCTs included participants of both genders with wide age range and disease spectrum undergoing autologous or allogeneic HSCT. Funding was provided by government sources (two studies), research fund (one study), pharmaceutical companies that manufactured defibrotide and ursodeoxycholic acid (two studies), or unclear source (nine studies). All RCTs had high risk of bias because of lack of blinding of participants and study personnel, or other risks of bias (mainly differences in baseline characteristics of comparison groups).\nResults showed that ursodeoxycholic acid may reduce the incidence of hepatic VOD (risk ratio (RR) 0.60, 95% confidence interval (CI) 0.40 to 0.88; number needed to treat for an additional beneficial outcome (NNTB) 15, 95% CI 7 to 50, low quality of evidence), but there was no evidence of difference in overall survival (hazard ratio (HR) 0.83, 95% CI 0.59 to 1.18, low quality of evidence). It may reduce all-cause mortality (RR 0.70, 95% CI 0.50 to 0.99; NNTB 17, 95% CI 8 to 431, low quality of evidence) and mortality due to hepatic VOD (RR 0.27, 95% CI 0.09 to 0.87; NNTB 34, 95% CI 16 to 220, very low quality of evidence). 
There was no evidence of difference in the incidence of hepatic VOD between treatment and control groups for heparin (RR 0.47, 95% CI 0.18 to 1.26, very low quality of evidence), LMWH (RR 0.27, 95% CI 0.06 to 1.18, very low quality of evidence), defibrotide (RR 0.62, 95% CI 0.38 to 1.02, low quality of evidence), glutamine (no hepatic VOD in either group, very low quality of evidence), FFP (RR 0.66, 95% CI 0.20 to 2.17, very low quality of evidence), antithrombin III (RR 0.13, 95% CI 0.01 to 2.15, very low quality of evidence), between heparin and LMWH (RR 1.96, 95% CI 0.80 to 4.77, very low quality of evidence), between heparin and PGE1 (RR 1.20, 95% CI 0.58 to 2.50, very low quality of evidence), and between LMWH and PGE1 (RR 0.61, 95% CI 0.24 to 1.55, very low quality of evidence). There was no evidence of difference in survival between treatment and control groups for heparin (92.6% vs. 88.7%) and defibrotide (HR 1.04, 95% CI 0.54 to 2.02, low quality of evidence). There were no data on survival for trials of LMWH, glutamine, FFP, antithrombin III, between heparin and LMWH, between heparin and PGE1, and between LMWH and PGE1. There were no data on quality of life (QoL) for any trials. Eleven trials reported adverse events. There was no evidence of difference in the frequency of adverse events between treatment and control groups except for one trial showing that defibrotide resulted in more adverse events compared with no treatment (RR 18.79, 95% CI 1.10 to 320.45). These adverse events included coagulopathy, gastrointestinal disorders, haemorrhage and microangiopathy. The quality of evidence was low or very low due to bias of study design, and inconsistent and imprecise results.\nAuthors' conclusions\nThere is low or very low quality evidence that ursodeoxycholic acid may reduce the incidence of hepatic VOD, all-cause mortality and mortality due to VOD in HSCT recipients. However, the optimal regimen is not well-defined. There is insufficient evidence to support the use of heparin, LMWH, defibrotide, glutamine, FFP, antithrombin III, and PGE1. Further high-quality RCTs are needed.\n\nGiven the provided abstract, please respond to the following query: \"Review Question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed evidence about the effects of medications to prevent blockage of small veins in the liver (veno-occlusive disease or VOD) in people who undergo blood-forming stem cell transplantation (HSCT)."
}
] |
query-laysum
|
22870
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is now a rising commitment to acknowledge the role patients and families play in contributing to their safety. This review focuses on one type of involvement in safety - patient and family involvement in escalation of care for serious life-threatening conditions i.e. helping secure a step-up to urgent or emergency care - which has been receiving increasing policy and practice attention. This review was concerned with the negotiation work that patient and family members undertake across the emergency care escalation pathway, once contact has been made with healthcare staff. It includes interventions aiming to improve detection of symptoms, communication of concerns and staff response to these concerns.\nObjectives\nTo assess the effects of interventions designed to increase patient and family involvement in escalation of care for acute life-threatening illness on patient and family outcomes, treatment outcomes, clinical outcomes, patient and family experience and adverse events.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP) ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform from 1 Jan 2000 to 24 August 2018. The search was updated on 21 October 2019.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster-randomised controlled trials where the intervention focused on patients and families working with healthcare professionals to ensure care received for acute deterioration was timely and appropriate. A key criterion was to include an interactive element of rehearsal, role play, modelling, shared language, group work etc. to the intervention to help patients and families have agency in the process of escalation of care. The interventions included components such as enabling patients and families to detect changes in patients' conditions and to speak up about these changes to staff. We also included studies where the intervention included a component targeted at enabling staff response.\nData collection and analysis\nSeven of the eight authors were involved in screening; two review authors independently extracted data and assessed the risk of bias of included studies, with any disagreements resolved by discussion to reach consensus. Primary outcomes included patient and family outcomes, treatment outcomes, clinical outcomes, patient and family experience and adverse events. Our advisory group (four users and four providers) ensured that the review was of relevance and could inform policy and practice.\nMain results\nWe included nine studies involving 436,684 patients and family members and one ongoing study. The published studies focused on patients with specific conditions such as coronary artery disease, ischaemic stroke, and asthma, as well as pregnant women, inpatients on medical surgical wards, older adults and high-risk patients with a history of poor self-management.\nWhile all studies tested interventions versus usual care, for four studies the usual care group also received educational or information strategies. Seven of the interventions involved face-to-face, interactional education/coaching sessions aimed at patients/families while two provided multi-component education programmes which included components targeted at staff as well as patients/families. 
All of the interventions included: (1) an educational component about the acute condition and preparedness for future events such as stroke or change in fetal movements: (2) an engagement element (self-monitoring, action plans); while two additionally focused on shared language or communication skills.\nWe had concerns about risk of bias for all but one of the included studies in respect of one or more criteria, particularly regarding blinding of participants and personnel. Our confidence in results regarding the effectiveness of interventions was moderate to low.\nLow-certainty evidence suggests that there may be moderate improvement in patients’ knowledge of acute life-threatening conditions, danger signs, appropriate care-seeking responses, and preparedness capacity between interactional patient-facing interventions and multi-component programmes and usual care at 12 months (MD 4.20, 95% CI 2.44 to 5.97, 2 studies, 687 participants). Four studies in total assessed knowledge (3,086 participants) but we were unable to include two other studies in the pooled analysis due to differences in the way outcome measures were reported. One found no improvement in knowledge but higher symptom preparedness at 12 months. The other study found an improvement in patients’ knowledge about symptoms and appropriate care-seeking responses in the intervention group at 18 months compared with usual care.\nLow-certainty evidence from two studies, each using a different measure, meant that we were unable to determine the effects of patient-based interventions on self-efficacy. Self-efficacy was higher in the intervention group in one study but there was no difference in the other compared with usual care.\nWe are uncertain whether interactional patient-facing and multi-component programmes improve time from the start of patient symptoms to treatment due to low-certainty evidence for this outcome. We were unable to combine the data due to differences in outcome measures. Three studies found that arrival times or prehospital delay time was no different between groups. One found that delay time was shorter in the intervention group.\nModerate-certainty evidence suggests that multi-component interventions probably have little or no impact on mortality rates. Only one study on a pregnant population was eligible for inclusion in the review, which found no difference between groups in rates of stillbirth. In terms of unintended events, we found that interactional patient-facing interventions to increase patient and family involvement in escalation of care probably have few adverse effects on patient's anxiety levels (moderate-certainty evidence).\nNone of the studies measured or reported patient and family perceptions of involvement in escalation of care or patient and family experience of patient care. Reported outcomes related to healthcare professionals were also not reported in any studies.\nAuthors' conclusions\nOur review identified that interactional patient-facing interventions and multi-component programmes (including staff) to increase patient and family involvement in escalation of care for acute life-threatening illness may improve patient and family knowledge about danger signs and care-seeking responses, and probably have few adverse effects on patient’s anxiety levels when compared to usual care. Multi-component interventions probably have little impact on mortality rates. Further high-quality trials are required using multi-component interventions and a focus on relational elements of care. 
Cognitive and behavioural outcomes should be included at patient and staff level.\n\nGiven the provided abstract, please respond to the following query: \"Increasing patient and family involvement\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Education and coaching are available to help patients and their families, and healthcare professionals work together to make sure patients and families can secure emergency care when needed. These strategies focus on:\n- helping patients and their families to notice changes in a patient's condition and tell healthcare staff about them;\n- empowering patients and families to feel confident about arranging for urgent or emergency care;\n- healthcare staff giving patients and families a chance to talk about their concerns, and actively listening to them during an emergency consultation; and\n- training healthcare staff to respond appropriately when patients and their families raise concerns about a patient's condition."
}
] |
query-laysum
|
22871
|
[
{
"role": "user",
"content": "Abstract: Background\nCommunication about end of life (EoL) and EoL care is critically important for providing quality care as people approach death. Such communication is often complex and involves many people (patients, family members, carers, health professionals). How best to communicate with people in the period approaching death is not known, but is an important question for quality of care at EoL worldwide. This review fills a gap in the evidence on interpersonal communication (between people and health professionals) in the last year of life, focusing on interventions to improve interpersonal communication and patient, family member and carer outcomes.\nObjectives\nTo assess the effects of interventions designed to improve verbal interpersonal communication about EoL care between health practitioners and people affected by EoL.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, PsycINFO, and CINAHL from inception to July 2018, without language or date restrictions. We contacted authors of included studies and experts and searched reference lists to identify relevant papers. We searched grey literature sources, conference proceedings, and clinical trials registries in September 2019. Database searches were re-run in June 2021 and potentially relevant studies listed as awaiting classification or ongoing.\nSelection criteria\nThis review assessed the effects of interventions, evaluated in randomised and quasi-randomised trials, intended to enhance interpersonal communication about EoL care between patients expected to die within 12 months, their family members and carers, and health practitioners involved in their care. Patients of any age from birth, in any setting or care context (e.g. acute catastrophic injury, chronic illness), and all health professionals involved in their care were eligible. All communication interventions were eligible, as long as they included interpersonal interaction(s) between patients and family members or carers and health professionals. Interventions could be simple or complex, with one or more communication aims (e.g. to inform, skill, engage, support). Effects were sought on outcomes for patients, family and carers, health professionals and health systems, including adverse (unintended) effects.\nTo ensure this review's focus was maintained on interpersonal communication in the last 12 months of life, we excluded studies that addressed specific decisions, shared or otherwise, and the tools involved in such decision-making. We also excluded studies focused on advance care planning (ACP) reporting ACP uptake or completion as the primary outcome. Finally, we excluded studies of communication skills training for health professionals unless patient outcomes were reported as primary outcomes.\nData collection and analysis\nStandard Cochrane methods were used, including dual review author study selection, data extraction and quality assessment of the included studies.\nMain results\nEight trials were included. All assessed intervention effects compared with usual care. Certainty of the evidence was low or very low. All outcomes were downgraded for indirectness based on the review’s purpose, and many were downgraded for imprecision and/or inconsistency. Certainty was not commonly downgraded for methodological limitations.\nA summary of the review's findings is as follows.\nKnowledge and understanding (four studies, low-certainty evidence; one study without usable data): interventions to improve communication (e.g. 
question prompt list, with or without patient and physician training) may have little or no effect on knowledge of illness and prognosis, or information needs and preferences, although studies were small and measures used varied across trials.\nEvaluation of the communication (six studies measuring several constructs (communication quality, patient-centredness, involvement preferences, doctor-patient relationship, satisfaction with consultation), most low-certainty evidence): across constructs there may be minimal or no effects of interventions to improve EoL communication, and there is uncertainty about effects of interventions such as a patient-specific feedback sheet on quality of communication.\nDiscussions of EoL or EoL care (six studies measuring selected outcomes, low- or very low-certainty evidence): a family conference intervention may increase duration of EoL discussions in an intensive care unit (ICU) setting, while use of a structured serious illness conversation guide may lead to earlier discussions of EoL and EoL care (each assessed by one study). We are uncertain about effects on occurrence of discussions and question asking in consultations, and there may be little or no effect on content of communication in consultations.\nAdverse outcomes or unintended effects (limited evidence): there is insufficient evidence to determine whether there are adverse outcomes associated with communication interventions (e.g. question prompt list, family conference, structured discussions) for EoL and EoL care. Patient and/or carer anxiety was reported by three studies, but judged as confounded. No other unintended consequences, or worsening of desired outcomes, were reported.\nPatient/carer quality of life (four studies, low-certainty evidence; two without useable data): interventions to improve communication may have little or no effect on quality of life.\nHealth practitioner outcomes (three studies, low-certainty evidence; two without usable data): interventions to improve communication may have little or no effect on health practitioner outcomes (satisfaction with communication during consultation; one study); effects on other outcomes (knowledge, preparedness to communicate) are unknown.\nHealth systems impacts: communication interventions (e.g. structured EoL conversations) may have little or no effect on carer or clinician ratings of quality of EoL care (satisfaction with care, symptom management, comfort assessment, quality of care) (three studies, low-certainty evidence), or on patients' self-rated care and illness, or numbers of care goals met (one study, low-certainty evidence). Communication interventions (e.g. question prompt list alone or with nurse-led communication skills training) may slightly increase mean consultation length (two studies), but other health service impacts (e.g. hospital admissions) are unclear.\nAuthors' conclusions\nFindings of this review are inconclusive for practice. Future research might contribute meaningfully by seeking to fill gaps for populations not yet studied in trials; and to develop responsive outcome measures with which to better assess the effects of communication on the range of people involved in EoL communication episodes. Mixed methods and/or qualitative research may contribute usefully to better understand the complex interplay between different parties involved in communication, and to inform development of more effective interventions and appropriate outcome measures. 
Co-design of such interventions and outcomes, involving the full range of people affected by EoL communication and care, should be a key underpinning principle for future research in this area.\n\nGiven the provided abstract, please respond to the following query: \"Why is communication at the end of life important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When people are in the last year of their life, it is important that they receive high-quality care (refer to ACSQHC 2015 and 2015b references for more on care at end of life). Communication about EoL is a critical part of such care. It helps patients and their families and carers to understand what is happening, to know what to expect and what their options are, to ask questions and receive support, and to be involved in decisions and planning as much as they would like to be. Communication about EoL is not always done well and this can have negative effects. Understanding how to improve such communication between the different people involved in care at EoL (patients, family members, carers, health professionals) is important to help ensure that people receive the best possible care in the time leading up to death."
}
] |
query-laysum
|
22872
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is now a rising commitment to acknowledge the role patients and families play in contributing to their safety. This review focuses on one type of involvement in safety - patient and family involvement in escalation of care for serious life-threatening conditions i.e. helping secure a step-up to urgent or emergency care - which has been receiving increasing policy and practice attention. This review was concerned with the negotiation work that patient and family members undertake across the emergency care escalation pathway, once contact has been made with healthcare staff. It includes interventions aiming to improve detection of symptoms, communication of concerns and staff response to these concerns.\nObjectives\nTo assess the effects of interventions designed to increase patient and family involvement in escalation of care for acute life-threatening illness on patient and family outcomes, treatment outcomes, clinical outcomes, patient and family experience and adverse events.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP) ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform from 1 Jan 2000 to 24 August 2018. The search was updated on 21 October 2019.\nSelection criteria\nWe included randomised controlled trials (RCTs) and cluster-randomised controlled trials where the intervention focused on patients and families working with healthcare professionals to ensure care received for acute deterioration was timely and appropriate. A key criterion was to include an interactive element of rehearsal, role play, modelling, shared language, group work etc. to the intervention to help patients and families have agency in the process of escalation of care. The interventions included components such as enabling patients and families to detect changes in patients' conditions and to speak up about these changes to staff. We also included studies where the intervention included a component targeted at enabling staff response.\nData collection and analysis\nSeven of the eight authors were involved in screening; two review authors independently extracted data and assessed the risk of bias of included studies, with any disagreements resolved by discussion to reach consensus. Primary outcomes included patient and family outcomes, treatment outcomes, clinical outcomes, patient and family experience and adverse events. Our advisory group (four users and four providers) ensured that the review was of relevance and could inform policy and practice.\nMain results\nWe included nine studies involving 436,684 patients and family members and one ongoing study. The published studies focused on patients with specific conditions such as coronary artery disease, ischaemic stroke, and asthma, as well as pregnant women, inpatients on medical surgical wards, older adults and high-risk patients with a history of poor self-management.\nWhile all studies tested interventions versus usual care, for four studies the usual care group also received educational or information strategies. Seven of the interventions involved face-to-face, interactional education/coaching sessions aimed at patients/families while two provided multi-component education programmes which included components targeted at staff as well as patients/families. 
All of the interventions included: (1) an educational component about the acute condition and preparedness for future events such as stroke or change in fetal movements: (2) an engagement element (self-monitoring, action plans); while two additionally focused on shared language or communication skills.\nWe had concerns about risk of bias for all but one of the included studies in respect of one or more criteria, particularly regarding blinding of participants and personnel. Our confidence in results regarding the effectiveness of interventions was moderate to low.\nLow-certainty evidence suggests that there may be moderate improvement in patients’ knowledge of acute life-threatening conditions, danger signs, appropriate care-seeking responses, and preparedness capacity between interactional patient-facing interventions and multi-component programmes and usual care at 12 months (MD 4.20, 95% CI 2.44 to 5.97, 2 studies, 687 participants). Four studies in total assessed knowledge (3,086 participants) but we were unable to include two other studies in the pooled analysis due to differences in the way outcome measures were reported. One found no improvement in knowledge but higher symptom preparedness at 12 months. The other study found an improvement in patients’ knowledge about symptoms and appropriate care-seeking responses in the intervention group at 18 months compared with usual care.\nLow-certainty evidence from two studies, each using a different measure, meant that we were unable to determine the effects of patient-based interventions on self-efficacy. Self-efficacy was higher in the intervention group in one study but there was no difference in the other compared with usual care.\nWe are uncertain whether interactional patient-facing and multi-component programmes improve time from the start of patient symptoms to treatment due to low-certainty evidence for this outcome. We were unable to combine the data due to differences in outcome measures. Three studies found that arrival times or prehospital delay time was no different between groups. One found that delay time was shorter in the intervention group.\nModerate-certainty evidence suggests that multi-component interventions probably have little or no impact on mortality rates. Only one study on a pregnant population was eligible for inclusion in the review, which found no difference between groups in rates of stillbirth. In terms of unintended events, we found that interactional patient-facing interventions to increase patient and family involvement in escalation of care probably have few adverse effects on patient's anxiety levels (moderate-certainty evidence).\nNone of the studies measured or reported patient and family perceptions of involvement in escalation of care or patient and family experience of patient care. Reported outcomes related to healthcare professionals were also not reported in any studies.\nAuthors' conclusions\nOur review identified that interactional patient-facing interventions and multi-component programmes (including staff) to increase patient and family involvement in escalation of care for acute life-threatening illness may improve patient and family knowledge about danger signs and care-seeking responses, and probably have few adverse effects on patient’s anxiety levels when compared to usual care. Multi-component interventions probably have little impact on mortality rates. Further high-quality trials are required using multi-component interventions and a focus on relational elements of care. 
Cognitive and behavioural outcomes should be included at patient and staff level.\n\nGiven the provided abstract, please respond to the following query: \"Why we did this Cochrane Review\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We wanted to find out if education and coaching strategies could help patients and families to recognise when changes in a health condition are life‑threatening and act to help secure emergency care."
}
] |
query-laysum
|
22873
|
[
{
"role": "user",
"content": "Abstract: Background\nPotential benefits of rapamycin or rapalogs for treating people with tuberous sclerosis complex (TSC) have been shown. Currently everolimus (a rapalog) is only approved for TSC-associated renal angiomyolipoma and subependymal giant cell astrocytoma (SEGA), but not other manifestations of TSC. A systematic review needs to establish evidence for rapamycin or rapalogs for various manifestations in TSC. This is an updated review.\nObjectives\nTo determine the effectiveness of rapamycin or rapalogs in people with TSC for decreasing tumour size and other manifestations and to assess the safety of rapamycin or rapalogs in relation to their adverse effects.\nSearch methods\nWe identified relevant studies from the Cochrane-Central-Register-of-Controlled-Trials (CENTRAL), Ovid MEDLINE and ongoing trials registries with no language restrictions. We searched conference proceedings and abstract books of conferences.\nDate of the last searches: 15 July 2022.\nSelection criteria\nRandomised controlled trials (RCTs) or quasi-RCTs of rapamycin or rapalogs in people with TSC.\nData collection and analysis\nTwo review authors independently extracted data and assessed the risk of bias of each study; a third review author verified the extracted data and risk of bias decisions. We assessed the certainty of the evidence using GRADE.\nMain results\nThe current update added seven RCTs, bringing the total number to 10 RCTs (with 1008 participants aged 3 months to 65 years; 484 males). All TSC diagnoses were by consensus criteria as a minimum. In parallel studies, 645 participants received active interventions and 340 placebo. Evidence is low-to-high certainty and study quality is mixed; mostly a low risk of bias across domains, but one study had a high risk of performance bias (lack of blinding) and three studies had a high risk of attrition bias. Manufacturers of the investigational products supported eight studies.\nSystemic administration\nSix studies (703 participants) administered everolimus (rapalog) orally. More participants in the intervention arm reduced renal angiomyolipoma size by 50% (risk ratio (RR) 24.69, 95% confidence interval (CI) 3.51 to 173.41; P = 0.001; 2 studies, 162 participants, high-certainty evidence). In the intervention arm, more participants in the intervention arm reduced SEGA tumour size by 50% (RR 27.85, 95% CI 1.74 to 444.82; P = 0.02; 1 study; 117 participants; moderate-certainty evidence) ,and reported more skin responses (RR 5.78, 95% CI 2.30 to 14.52; P = 0.0002; 2 studies; 224 participants; high-certainty evidence). In one 18-week study (366 participants), the intervention led to 25% fewer seizures (RR 1.63, 95% CI 1.27 to 2.09; P = 0.0001) or 50% fewer seizures (RR 2.28, 95% CI 1.44 to 3.60; P = 0.0004); but there was no difference in numbers being seizure-free (RR 5.30, 95% CI 0.69 to 40.57; P = 0.11) (moderate-certainty evidence). One study (42 participants) showed no difference in neurocognitive, neuropsychiatry, behavioural, sensory and motor development (low-certainty evidence).\nTotal adverse events (AEs) did not differ between groups (RR 1.09, 95% CI 0.97 to 1.22; P = 0.16; 5 studies; 680 participants; high-certainty evidence). 
However, the intervention group experienced more AEs resulting in withdrawal, interruption of treatment, or reduced dose (RR 2.61, 95% CI 1.58 to 4.33; P = 0.0002; 4 studies; 633 participants; high-certainty evidence and also reported more severe AEs (RR 2.35, 95% CI 0.99 to 5.58; P = 0.05; 2 studies; 413 participants; high-certainty evidence).\nTopical (skin) administration\nFour studies (305 participants) administered rapamycin topically. More participants in the intervention arm showed a response to skin lesions (RR 2.72, 95% CI 1.76 to 4.18; P < 0.00001; 2 studies; 187 participants; high-certainty evidence) and more participants in the placebo arm reported a deterioration of skin lesions (RR 0.27, 95% CI 0.15 to 0.49; 1 study; 164 participants; high-certainty evidence). More participants in the intervention arm responded to facial angiofibroma at one to three months (RR 28.74, 95% CI 1.78 to 463.19; P = 0.02) and three to six months (RR 39.39, 95% CI 2.48 to 626.00; P = 0.009; low-certainty evidence). Similar results were noted for cephalic plaques at one to three months (RR 10.93, 95% CI 0.64 to 186.08; P = 0.10) and three to six months (RR 7.38, 95% CI 1.01 to 53.83; P = 0.05; low-certainty evidence). More participants on placebo showed a deterioration of skin lesions (RR 0.27, 95% CI 0.15 to 0.49; P < 0.0001; 1 study; 164 participants; moderate-certainty evidence). The intervention arm reported a higher general improvement score (MD -1.01, 95% CI -1.68 to -0.34; P < 0.0001), but no difference specifically in the adult subgroup (MD -0.75, 95% CI -1.58 to 0.08; P = 0.08; 1 study; 36 participants; moderate-certainty evidence). Participants in the intervention arm reported higher satisfaction than with placebo (MD -0.92, 95% CI -1.79 to -0.05; P = 0.04; 1 study; 36 participants; low-certainty evidence), although again with no difference among adults (MD -0.25, 95% CI -1.52 to 1.02; P = 0.70; 1 study; 18 participants; low-certainty evidence). Groups did not differ in change in quality of life at six months (MD 0.30, 95% CI -1.01 to 1.61; P = 0.65; 1 study; 62 participants; low-certainty evidence).\nTreatment led to a higher risk of any AE compared to placebo (RR 1.72, 95% CI 1.10, 2.67; P = 0.02; 3 studies; 277 participants; moderate-certainty evidence); but no difference between groups in severe AEs (RR 0.78, 95% CI 0.19 to 3.15; P = 0.73; 1 study; 179 participants; moderate-certainty evidence).\nAuthors' conclusions\nOral everolimus reduces the size of SEGA and renal angiomyolipoma by 50%, reduces seizure frequency by 25% and 50% and implements beneficial effects on skin lesions with no difference in the total number of AEs compared to placebo; however, more participants in the treatment group required a dose reduction, interruption or withdrawal and marginally more experienced serious AEs compared to placebo.\nTopical rapamycin increases the response to skin lesions and facial angiofibroma, an improvement score, satisfaction and the risk of any AE, but not severe adverse events.\nWith caution regarding the risk of severe AEs, this review supports oral everolimus for renal angiomyolipoma, SEGA, seizure, and skin lesions, and topical rapamycin for facial angiofibroma.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Tuberous sclerosis complex is a genetic disease caused by mutations in TSC1 or TSC2 genes that affects several organs such as the brain, kidneys, heart, lungs and skin. The incidence is one in approximately 6000. Previous studies have shown potential benefits of rapamycin or rapalogs for treating people with tuberous sclerosis complex. Although everolimus (a rapalog) is currently approved by the FDA (USA Food and Drug Administration) and EMA (European Medicines Agency) for tumours associated with tuberous sclerosis complex (renal angiomyolipoma and subependymal giant cell astrocytoma), the use of these drugs for treating other symptoms of the condition has not yet been established. This review aims to bring together clinical trials in this area to establish the clinical value of rapamycin and rapalogs for various symptoms of tuberous sclerosis complex."
}
] |
query-laysum
|
22874
|
[
{
"role": "user",
"content": "Abstract: Background\nPersonalised care planning is a collaborative process used in chronic condition management in which patients and clinicians identify and discuss problems caused by or related to the patient's condition, and develop a plan for tackling these. In essence it is a conversation, or series of conversations, in which they jointly agree goals and actions for managing the patient's condition.\nObjectives\nTo assess the effects of personalised care planning for adults with long-term health conditions compared to usual care (i.e. forms of care in which active involvement of patients in treatment and management decisions is not explicitly attempted or achieved).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, ProQuest, clinicaltrials.gov and WHO International Clinical Trials Registry Platform to July 2013.\nSelection criteria\nWe included randomised controlled trials and cluster-randomised trials involving adults with long-term conditions where the intervention included collaborative (between individual patients and clinicians) goal setting and action planning. We excluded studies where there was little or no opportunity for the patient to have meaningful influence on goal selection, choice of treatment or support package, or both.\nData collection and analysis\nTwo of three review authors independently screened citations for inclusion, extracted data, and assessed risk of bias. The primary outcomes were effects on physical health, psychological health, subjective health status, and capabilities for self management. Secondary outcomes included effects on health-related behaviours, resource use and costs, and type of intervention. A patient advisory group of people with experience of living with long-term conditions advised on various aspects of the review, including the protocol, selection of outcome measures and emerging findings.\nMain results\nWe included 19 studies involving a total of 10,856 participants. Twelve of these studies focused on diabetes, three on mental health, one on heart failure, one on end-stage renal disease, one on asthma, and one on various chronic conditions. All 19 studies included components that were intended to support behaviour change among patients, involving either face-to-face or telephone support. All but three of the personalised care planning interventions took place in primary care or community settings; the remaining three were located in hospital clinics. There was some concern about risk of bias for each of the included studies in respect of one or more criteria, usually due to inadequate or unclear descriptions of research methods.\nPhysical health\nNine studies measured glycated haemoglobin (HbA1c), giving a combined mean difference (MD) between intervention and control of -0.24% (95% confidence interval (CI) -0.35 to -0.14), a small positive effect in favour of personalised care planning compared to usual care (moderate quality evidence).\nSix studies measured systolic blood pressure, a combined mean difference of -2.64 mm/Hg (95% CI -4.47 to -0.82) favouring personalised care (moderate quality evidence). 
The pooled results from four studies showed no significant effect on diastolic blood pressure, MD -0.71 mm/Hg (95% CI -2.26 to 0.84).\nWe found no evidence of an effect on cholesterol (LDL-C), standardised mean difference (SMD) 0.01 (95% CI -0.09 to 0.11) (five studies) or body mass index, MD -0.11 (95% CI -0.35 to 0.13) (four studies).\nA single study of people with asthma reported that personalised care planning led to improvements in lung function and asthma control.\nPsychological health\nSix studies measured depression. We were able to pool results from five of these, giving an SMD of -0.36 (95% CI -0.52 to -0.20), a small effect in favour of personalised care (moderate quality evidence). The remaining study found greater improvement in the control group than the intervention group.\nFour other studies used a variety of psychological measures that were conceptually different so could not be pooled. Of these, three found greater improvement for the personalised care group than the usual care group and one was too small to detect differences in outcomes.\nSubjective health status\nTen studies used various patient-reported measures of health status (or health-related quality of life), including both generic health status measures and condition-specific ones. We were able to pool data from three studies that used the SF-36 or SF-12, but found no effect on the physical component summary score SMD 0.16 (95% CI -0.05 to 0.38) or the mental component summary score SMD 0.07 (95% CI -0.15 to 0.28) (moderate quality evidence). Of the three other studies that measured generic health status, two found improvements related to personalised care and one did not.\nFour studies measured condition-specific health status. The combined results showed no difference between the intervention and control groups, SMD -0.01 (95% CI -0.11 to 0.10) (moderate quality evidence).\nSelf-management capabilities\nNine studies looked at the effect of personalised care on self-management capabilities using a variety of outcome measures, but they focused primarily on self efficacy. We were able to pool results from five studies that measured self efficacy, giving a small positive result in favour of personalised care planning: SMD 0.25 (95% CI 0.07 to 0.43) (moderate quality evidence).\nA further five studies measured other attributes that contribute to self-management capabilities. The results from these were mixed: two studies found evidence of an effect on patient activation, one found an effect on empowerment, and one found improvements in perceived interpersonal support.\nOther outcomes\nPooled data from five studies on exercise levels showed no effect due to personalised care planning, but there was a positive effect on people's self-reported ability to carry out self-care activities: SMD 0.35 (95% CI 0.17 to 0.52).\nWe found no evidence of adverse effects due to personalised care planning.\nThe effects of personalised care planning were greater when more stages of the care planning cycle were completed, when contacts between patients and health professionals were more frequent, and when the patient's usual clinician was involved in the process.\nAuthors' conclusions\nPersonalised care planning leads to improvements in certain indicators of physical and psychological health status, and people's capability to self-manage their condition when compared to usual care. 
The effects are not large, but they appear greater when the intervention is more comprehensive, more intensive, and better integrated into routine care.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with long-term health conditions play an important part in managing their own health. But some of the tasks involved can be complicated, and require confidence and skill. Such tasks include taking medicines properly, monitoring symptoms, adopting or maintaining healthy lifestyles, managing their emotions, solving practical problems, knowing when and how to seek medical advice or community support, and coping with the impact of the condition(s) on their daily lives. Personalised care planning aims to provide support from health professionals that is tailored to the needs of individual patients. Such support recognises patients’ concerns, and helps them become more able to manage their own health. Personalised care planning is a conversation, or series of conversations, between a patient and a clinician when they jointly agree on goals and actions for managing the patient's health problems."
}
] |
query-laysum
|
22875
|
[
{
"role": "user",
"content": "Abstract: Background\nOvarian cancer is the seventh most frequent cancer diagnosis worldwide, and the eighth leading cause of cancer mortality. Epithelial ovarian cancer is the most common kind, accounting for 90% of cases. First-line therapy for women with epithelial ovarian cancer consists of a combination of cytoreductive surgery and platinum and taxane-based chemotherapy. However, more than 50% of women with epithelial ovarian cancer will experience a relapse and require further chemotherapy and at some point develop resistance to platinum-based drugs.\nCurrently, guidance on the use of most chemotherapy drugs, including taxanes, is unclear for women whose epithelial ovarian cancer has recurred. Paclitaxel, topotecan, pegylated liposomal doxorubicin hydrochloride, trabectedin and gemcitabine are all licensed for use in the UK at the discretion of clinicians, following discussion with the women as to potential adverse effects. Taxanes can be given in once-weekly regimens (at a lower dose) or three-weekly regimens (at a higher dose), which may have differences in the severity of side effects and effectiveness. As relapsed disease suggests incurable disease, it is all the more important to consider side effects and the impact of treatment schedules, as well as quality of life, and not only the life-prolonging effects of treatment.\nObjectives\nTo assess the efficacy and toxicity of different taxane monotherapy regimens for women with recurrent epithelial ovarian, tubal or primary peritoneal cancer.\nSearch methods\nWe searched CENTRAL, MEDLINE and Embase, up to 22 March 2022. Other related databases and trial registries were searched, as well as grey literature, and no additional studies were identified. A total of 1500 records were identified.\nSelection criteria\nWe included randomised controlled trials of taxane monotherapy for adult women diagnosed with recurrent epithelial ovarian, tubal or primary peritoneal cancer, previously treated with platinum-based chemotherapy. We included trials comparing two or more taxane monotherapy regimens. Participants could be experiencing their first recurrence of disease or any line of recurrence.\nData collection and analysis\nTwo review authors screened, independently assessed studies, and extracted data from the included studies. The clinical outcomes we examined were overall survival, response rate, progression-free survival, neurotoxicity, neutropenia, alopecia, and quality of life. We performed statistical analyses using fixed-effect and random-effects models following standard Cochrane methodology. We rated the certainty of evidence according to the GRADE approach.\nMain results\nOur literature search yielded 1500 records of 1466 studies; no additional studies were identified by searching grey literature or handsearching. We uploaded the search results into Covidence. After the exclusion of 92 duplicates, we screened titles and abstracts of 1374 records. Of these, we identified 24 studies for full-text screening. We included four parallel-group randomised controlled trials (RCTs). All trials were multicentred and conducted in a hospital setting. The studies included 981 eligible participants with recurrent epithelial ovarian cancer, tubal or primary peritoneal cancer, with a median age ranging from 56 to 62 years. All participants had a WHO (World Health Organization) performance status of between 0 and 2. The proportion of participants with serous histology ranged from 56% to 85%.
Participants included women who had platinum-sensitive (71%) and platinum-resistant (29%) relapse. Some participants were taxane pre-treated (5.6%), whilst the majority were taxane-naive (94.4%). No studies were classified as having a high risk of bias for any of the domains in the Cochrane risk of bias tool.\nWe found that there may be little or no difference in overall survival (OS) between weekly paclitaxel and three-weekly paclitaxel, but the evidence is very uncertain (risk ratio (RR) of 0.94, 95% confidence interval (CI) 0.66 to 1.33, two studies, 263 participants, very low-certainty evidence). Similarly, there may be little or no difference in response rate (RR of 1.07, 95% CI 0.78 to 1.48, two studies, 263 participants, very low-certainty evidence) and progression-free survival (PFS) (RR of 0.83, 95% CI 0.46 to 1.52, two studies, 263 participants, very low-certainty evidence) between weekly and three-weekly paclitaxel, but the evidence is very uncertain.\nWe found differences in the chemotherapy-associated adverse events between the weekly and three-weekly paclitaxel regimens. The weekly paclitaxel regimen may result in a reduction in neutropenia (RR 0.51, 95% CI 0.27 to 0.95, two studies, 260 participants, low-certainty evidence) and alopecia (RR 0.58, 95% CI 0.46 to 0.73, one study, 205 participants, low-certainty evidence). There may be little or no difference in neurotoxicity, but the evidence was very low-certainty and we cannot exclude an effect (RR 0.53, 95% CI 0.19 to 1.45, two studies, 260 participants).\nWhen examining the effect of paclitaxel dosage in the three-weekly regimen, the 250 mg/m2 paclitaxel regimen probably causes more neurotoxicity compared to the 175 mg/m2 regimen (RR 0.41, 95% CI 0.21 to 0.80, one study, 330 participants, moderate-certainty evidence).\nQuality-of-life data were not extractable from any of the included studies.\nAuthors' conclusions\nFewer people may experience neutropenia when given weekly rather than three-weekly paclitaxel (low-certainty evidence), although it may make little or no difference to the risk of developing neurotoxicity (very low-certainty evidence). This is based on the participants receiving lower doses of drug more often. However, our confidence in this result is low and the true effect may be substantially different from the estimate of the effect. Weekly paclitaxel probably reduces the risk of alopecia, although the rates in both arms were high (46% versus 79%) (low-certainty evidence). A change to weekly from three-weekly chemotherapy could be considered to reduce the likelihood of toxicity, as it may have little or no negative impact on response rate (very low-certainty evidence), PFS (very low-certainty evidence) or OS (very low-certainty evidence).\nThree-weekly paclitaxel, given at a dose of 175 mg/m2 compared to a higher dose, probably reduces the risk of neurotoxicity. We are moderately confident in this result; the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. A change to 175 mg/m2 paclitaxel (from a higher dose), if a three-weekly regimen is used, probably has little or no negative impact on PFS or OS (very low-certainty evidence).\n\nGiven the provided abstract, please respond to the following query: \"Methods\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included randomised-controlled trials (RCTs) (clinical studies where people are randomly put into one of two or more treatment groups) of two or more taxane treatments for women with relapsed epithelial ovarian cancer. The clinical outcomes we examined were: 1) overall survival - how long a participant survives after their diagnosis; 2) response rate - how many participants have scan or blood test evidence that their ovarian cancer reduces in size in response to chemotherapy; 3) progression-free survival - the amount of time a participant lives with the disease without evidence of it continuing to grow; 4) neurotoxicity - how many participants experienced nerve damage leading to sensory or motor co-ordination problems, usually in their hands and feet; 5) neutropenia - how many participants experienced a dangerous drop in white blood cells (neutrophils) which can lead to infection; 6) alopecia - how many participants experienced high levels of hair loss; 7) quality of life - a measure of the overall impact of the disease and treatment on the participants' daily life; this is recorded by a questionnaire.\nWe performed statistical analyses following Cochrane methodology."
}
] |
query-laysum
|
22876
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of a review first published in 2011, and last updated in 2017.\nMost people with epilepsy have a good prognosis, but up to 30% of people continue to have seizures despite several regimens of antiepileptic drugs. In this review, we summarized the current evidence regarding eslicarbazepine acetate (ESL) when used as an add-on treatment for drug-resistant focal epilepsy.\nObjectives\nTo evaluate the efficacy and tolerability of ESL when used as an add-on treatment for people with drug-resistant focal epilepsy.\nSearch methods\nFor this update, we searched the following databases on 10 September 2020: Cochrane Register of Studies (CRS Web) and MEDLINE (Ovid). CRS Web includes randomized or quasi-randomized, controlled trials from Specialized Registers of Cochrane Review Groups including Epilepsy, CENTRAL, PubMed, Embase, ClinicalTrials.gov and the WHO ICTRP. There were no language restrictions. We reviewed the reference lists of retrieved studies and contacted the manufacturers of ESL and experts in the field for information about any unpublished or ongoing studies.\nSelection criteria\nRandomized placebo-controlled double-blind add-on trials of ESL in people with drug-resistant focal epilepsy.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted data. Outcomes investigated included 50% or greater reduction in seizure frequency, seizure freedom, treatment withdrawal, adverse effects and drug interactions. Primary analyses were by intention to treat (ITT). The dose-response relationship was evaluated in regression models.\nMain results\nWe included seven trials (2185 participants, aged 2 to 77 years), which were at low or unclear risk of bias apart from a high risk of attrition bias; all studies were funded by the pharmaceutical company, BIAL.\nThe overall risk ratio (RR) for 50% or greater reduction in seizure frequency was 1.57 (95% confidence interval (CI) 1.34 to 1.83). For adults, the RR was 1.71 (95% CI 1.42 to 2.05; 5 studies, 1799 participants; moderate-certainty evidence); for children aged six to 18 years, the RR was 1.35 (95% CI 0.98 to 1.87; 2 studies, 322 participants; moderate-certainty evidence). Dose regression analysis showed evidence that ESL reduced seizure frequency with an increase in efficacy with increasing doses of ESL. ESL was associated with seizure freedom (RR 3.16, 95% CI 1.73 to 5.78; 6 studies, 1922 participants; moderate-certainty evidence). Participants were more likely to have ESL withdrawn for adverse effects (RR 2.72, 95% CI 1.66 to 4.46; 7 studies, 2185 participants; moderate-certainty evidence), but not for any reason (RR 1.25, 95% CI 0.93 to 1.70; 7 studies, 2185 participants; moderate-certainty evidence). The following adverse effects were associated with ESL: dizziness (RR 2.77, 99% CI 1.85 to 4.15); nausea (RR 2.55, 99% CI 1.39 to 4.67); somnolence (RR 1.75, 99% CI 1.18 to 2.61); diplopia (RR 4.07, 99% CI 1.86 to 8.89); and vomiting (RR 2.37, 99% CI 1.19 to 4.74). Overall, the certainty of the evidence was moderate due to a high discontinuation rate in studies of adults.\nAuthors' conclusions\nESL reduces seizure frequency when used as an add-on treatment for adults with drug-resistant focal epilepsy. The trials included in this review were of short-term duration. 
In addition, this update found that ESL may reduce seizure frequency in children from 6 to 18 years of age; however, the results are inconclusive.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Altogether the seven trials used good methods, but information was missing from the trials for between 10% and 45% of people taking the medicine in five trials of adults. This missing information may have introduced uncertainty into the results so the evidence in this review is of moderate quality. More research is needed to look at the long-term effects of eslicarbazepine acetate and to explore how well it works in children with epilepsy."
}
] |
query-laysum
|
22877
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic health conditions in children and adolescents can have profound impacts on education, well-being and health. They are described as non-communicable illnesses that are prolonged in duration, do not resolve spontaneously, and are rarely cured completely. Due to variations in the definition of chronic health conditions and how they are measured, prevalence estimates vary considerably and have been reported to be as high as 44% in children and adolescents. Of young people with a chronic health condition, an estimated 5% are affected by severe conditions characterised by limitations to daily activities impacting their ability to attend school. School attendance is important for academic and social skill development as well as well-being. When children and adolescents are absent from school due to a chronic health condition, school engagement can be affected. Disengagement from school is associated with poorer academic achievement, social-emotional functioning and career choices. Education support services for children and adolescents with chronic health conditions aim to prevent disengagement from school, education and learning during periods when their illness causes them to miss school. However, there is limited evidence on the effectiveness of educational support interventions at improving school engagement and educational/learning outcomes for children and adolescents with chronic health conditions.\nObjectives\nTo describe the nature of educational support interventions for children and adolescents with a chronic health condition, and to examine the effectiveness of these interventions on school engagement and academic achievement.\nSearch methods\nWe searched eight electronic databases which span the health/medical, social sciences and education disciplines between 18 and 25 January 2021: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), Embase (Ovid), CINAHL (EBSCO), PsycINFO (EBSCO), ERIC (Education Resources Information Center), Applied Social Sciences Index and Abstracts: ASSIA (ProQuest), and PubMed (from 2019). We also searched five grey literature trials registers and databases between 8 and 12 February 2021 to identify additional published and unpublished studies, theses and conference abstracts, as well as snowballing reference lists of included studies.\nSelection criteria\nRandomised controlled trials (RCTs), controlled before-and-after studies and interrupted time series studies that met the inclusion criteria were selected. Other inclusion criteria were: participants - must include children or adolescents (aged four to 18 years) with a chronic health condition; intervention - must include educational support; outcomes - must report the primary outcomes (i.e. school engagement or academic achievement) or secondary outcomes (i.e. quality of life, transition to school/school re-entry, mental health or adverse outcomes).\nData collection and analysis\nTwo people independently screened titles and abstracts, and full-text articles, to identify included studies. Where disagreements arose between reviewers, the two reviewers discussed the discrepancy. If resolution could not be achieved, the issues were discussed with a senior reviewer to resolve the matter. We extracted study characteristic data and risk of bias data from the full texts of included studies using a data extraction form before entering the information into Review Manager 5.4.1.
Two people independently extracted data, assessed risk of bias of individual studies and undertook GRADE assessments of the quality of the evidence.\nMeta-analysis was not possible due to the small number of studies for each outcome. Our synthesis, therefore, used vote-counting based on the direction of the effect/impact of the intervention.\nMain results\nThe database searches identified 14,202 titles and abstracts. Grey literature and reference list searches did not identify any additional studies that met the inclusion criteria. One hundred and twelve full-text studies were assessed for eligibility, of which four studies met the eligibility criteria for inclusion in the review. All studies were randomised controlled studies with a combined total of 359 participants. All included studies were disease-specific; three studies focused on children with cancer, and one study focused on children with Attention Deficit Hyperactivity Disorder (ADHD).\nThere was evidence that education support improved school engagement with three of four studies favouring the intervention. Three studies measured academic achievement but only two studies provided effect estimates. Based on the vote-counting method, we found contradictory results from the studies: one study showed a positive direction of effect and the other study showed a negative direction of effect. One study measured transition back to school and found a positive impact of education support favouring the intervention (SMD 0.18, 95% CI -0.46 to 0.96, no P value reported). The result came from a single study with a small sample size (n = 30), and produced a confidence interval that indicated the possibility of a very small or no effect. The overall certainty of evidence for these three outcomes was judged to be 'very low'.\nTwo of four studies measured mental health (measured as self-esteem). Both studies reported a positive impact of education support interventions on mental health; this was the only outcome for which the overall certainty of evidence was judged to be 'low' rather than 'very low'.\nNo studies measured or reported quality of life or adverse effects.\nRisk of bias (selection, performance, detection, attrition, reporting and other bias) was assessed using the Cochrane risk of bias tool for randomised trials (version 1). Overall risk of bias for all studies was assessed as 'high risk' because all studies had at least one domain at high risk of bias.\nAuthors' conclusions\nThis review has demonstrated the infancy of quality research on the effectiveness of education support interventions for children and adolescents with chronic health conditions. At best, we can say that we are uncertain whether education support interventions improve either academic achievement or school engagement. Of the secondary outcomes, we are also uncertain whether education support interventions improve transition back to school, or school re-entry. However, we suggest there is some evidence that education support may slightly improve mental health, measured as self-esteem. Given the current state of the evidence of the effectiveness of education support interventions for children and adolescents with chronic health conditions, we highlight some important implications for future research in this field to strengthen the evidence that can inform effective practice and policy.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is the evidence?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up-to-date to January 2021."
}
] |
query-laysum
|
22878
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with a low platelet count (thrombocytopenia) often require lumbar punctures or an epidural anaesthetic. Lumbar punctures can be diagnostic (haematological malignancies, subarachnoid haematoma, meningitis) or therapeutic (spinal anaesthetic, administration of chemotherapy). Epidural catheters are placed for administration of epidural anaesthetic. Current practice in many countries is to correct thrombocytopenia with platelet transfusions prior to lumbar punctures and epidural anaesthesia, in order to mitigate the risk of serious procedure-related bleeding. However, the platelet count threshold recommended prior to these procedures varies significantly from country to country. This indicates significant uncertainty among clinicians regarding the correct management of these patients. The risk of bleeding appears to be low, but if bleeding occurs it can be very serious (spinal haematoma). Consequently, people may be exposed to the risks of a platelet transfusion without any obvious clinical benefit.\nThis is an update of a Cochrane Review first published in 2016.\nObjectives\nTo assess the effects of different platelet transfusion thresholds prior to a lumbar puncture or epidural anaesthesia in people with thrombocytopenia (low platelet count).\nSearch methods\nWe searched for randomised controlled trials (RCTs), non-randomised controlled trials (nRCTs), controlled before-after studies (CBAs), interrupted time series studies (ITSs), and cohort studies in CENTRAL (the Cochrane Library 2018, Issue 1), MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1950), and ongoing trial databases to 13 February 2018.\nSelection criteria\nWe included RCTs, nRCTs, CBAs, ITSs, and cohort studies involving transfusions of platelet concentrates, prepared either from individual units of whole blood or by apheresis, and given to prevent bleeding in people of any age with thrombocytopenia requiring insertion of a lumbar puncture needle or epidural catheter.\nThe original review only included RCTs.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane for including RCTs, nRCTs, CBAs, and ITSs. Two review authors independently assessed studies for eligibility and risk of bias and extracted data. Results were only expressed narratively.\nMain results\nWe identified no completed or ongoing RCTs, nRCTs, CBAs, or ITSs. No studies included people undergoing an epidural procedure. No studies compared different platelet count thresholds prior to a procedure.\nIn this update we identified three retrospective cohort studies that contained participants who did and did not receive platelet transfusions prior to lumbar puncture procedures. All three studies were carried out in people with cancer, most of whom had a haematological malignancy. Two studies were in children, and one was in adults.\nThe number of participants receiving platelet transfusions prior to the lumbar puncture procedures was not reported in one study. 
We therefore only summarised in a narrative form the relevant outcomes from two studies (150 participants; 129 children and 21 adults), in which the number of participants who received the transfusion was given.\nWe judged the overall risk of bias for all reported outcomes for both studies as 'serious' based on the ROBINS-I tool.\nNo procedure-related major bleeding occurred in the two studies that reported this outcome (2 studies, 150 participants, no cases, very low-quality evidence).\nThere was no evidence of a difference in the risk of minor bleeding (traumatic tap) between participants who received platelet transfusions before a lumbar puncture and those who did not receive a platelet transfusion before the procedure (2 studies, 150 participants, very low-quality evidence). One of the 14 adults who received a platelet transfusion experienced minor bleeding (traumatic tap; defined as at least 500 x 10⁶/L red blood cells in the cerebrospinal fluid); none of the seven adults who did not receive a platelet transfusion experienced this event. Ten children experienced minor bleeding (traumatic taps; defined as at least 100 x 10⁶/L red blood cells in the cerebrospinal fluid), six out of the 57 children who received a platelet transfusion and four out of the 72 children who did not receive a platelet transfusion.\nNo serious adverse events occurred in the one study that reported this outcome (1 study, 21 participants, very low-quality evidence).\nWe found no studies that evaluated all-cause mortality within 30 days from the lumbar puncture procedure, length of hospital stay, proportion of participants who received platelet transfusions, or quality of life.\nAuthors' conclusions\nWe found no evidence from RCTs or non-randomised studies on which to base an assessment of the correct platelet transfusion threshold prior to insertion of a lumbar puncture needle or epidural catheter. There are no ongoing registered RCTs assessing the effects of different platelet transfusion thresholds prior to the insertion of a lumbar puncture or epidural anaesthesia in people with thrombocytopenia. Any future study would need to be very large to detect a difference in the risk of bleeding. A study would need to be designed with at least 47,030 participants to be able to detect an increase in the number of people who had major procedure-related bleeding from 1 in 1000 to 2 in 1000. The use of a central data collection register or routinely collected electronic records (big data) is likely to be the only method to systematically gather data relevant to this population.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Platelets are found in the blood and are an essential part of a blood clot. A low platelet count increases the risk of bleeding. People with a low platelet count often require a lumbar puncture or epidural anaesthetic for administration of treatment or to aid in diagnosis.\nA lumbar puncture is usually performed by inserting a needle between the bones (vertebrae) of the spine in the lower back into the fluid surrounding the spinal cord (the bundle of nerves that runs down the spine and connects the brain with the body). Lumbar punctures are performed either to obtain a sample of this fluid or to administer treatment into the fluid (chemotherapy or an anaesthetic). The lumbar puncture needle is removed immediately after any fluid samples have been taken or treatment administered.\nAn epidural involves inserting a needle with a larger diameter than a lumbar puncture needle. The epidural needle passes through the same tissues as the lumbar puncture needle, but stops short of penetrating the fluid sac surrounding the spinal cord. Instead any treatment is injected into the space just outside the fluid sac (called the epidural space). A small tube (an epidural catheter) is often passed through the epidural needle and left in position so that additional local anaesthetic medicines can be given.\nCurrent practice in many countries is to increase the platelet count above a prespecified level with platelet transfusions (platelets given by injection into a vein) to prevent serious bleeding due to the lumbar puncture or epidural anaesthetic. Although the risk of bleeding appears to be low, if bleeding does occur, it can be very serious. Due to a lack of evidence the platelet count level recommended prior to lumbar puncture or epidural anaesthetic varies significantly from country to country. This means that doctors are uncertain about what is the correct platelet count level, or if a platelet transfusion is required. Consequently, people may be exposed to the risks of a platelet transfusion (e.g. allergic reaction, infection) without any obvious clinical benefit."
}
] |
query-laysum
|
22879
|
[
{
"role": "user",
"content": "Abstract: Background\nAmbient air pollution is associated with a large burden of disease in both high-income countries (HICs) and low- and middle-income countries (LMICs). To date, no systematic review has assessed the effectiveness of interventions aiming to reduce ambient air pollution.\nObjectives\nTo assess the effectiveness of interventions to reduce ambient particulate matter air pollution in reducing pollutant concentrations and improving associated health outcomes.\nSearch methods\nWe searched a range of electronic databases with diverse focuses, including health and biomedical research (CENTRAL, Cochrane Public Health Group Specialised Register, MEDLINE, Embase, PsycINFO), multidisciplinary research (Scopus, Science Citation Index), social sciences (Social Science Citation Index), urban planning and environment (Greenfile), and LMICs (Global Health Library regional indexes, WHOLIS). Additionally, we searched grey literature databases, multiple online trial registries, references of included studies and the contents of relevant journals in an attempt to identify unpublished and ongoing studies, and studies not identified by our search strategy. The final search date for all databases was 31 August 2016.\nSelection criteria\nEligible for inclusion were randomized and cluster randomized controlled trials, as well as several non-randomized study designs, including controlled interrupted time-series studies (cITS-EPOC), interrupted time-series studies adhering to EPOC standards (ITS-EPOC), interrupted time-series studies not adhering to EPOC standards (ITS), controlled before-after studies adhering to EPOC standards (CBA-EPOC), and controlled before-after studies not adhering to EPOC standards (CBA); these were classified as main studies. Additionally, we included uncontrolled before-after studies (UBA) as supporting studies. We included studies that evaluated interventions to reduce ambient air pollution from industrial, residential, vehicular and multiple sources, with respect to their effect on mortality, morbidity and several air pollutant concentrations. We did not restrict studies based on the population, setting or comparison.\nData collection and analysis\nAfter a calibration exercise among the author team, two authors independently assessed studies for inclusion, extracted data and assessed risk of bias. We conducted data extraction, risk of bias assessment and evidence synthesis only for main studies; we mapped supporting studies with regard to the types of intervention and setting. To assess risk of bias, we used the Graphic Appraisal Tool for Epidemiological studies (GATE) for correlation studies, as modified and employed by the Centre for Public Health Excellence at the UK National Institute for Health and Care Excellence (NICE). For each intervention category, i.e. those targeting industrial, residential, vehicular and multiple sources, we synthesized evidence narratively, as well as graphically using harvest plots.\nMain results\nWe included 42 main studies assessing 38 unique interventions. These were heterogeneous with respect to setting; interventions were implemented in countries across the world, but most (79%) were implemented in HICs, with the remaining scattered across LMICs. Most interventions (76%) were implemented in urban or community settings.\nWe identified a heterogeneous mix of interventions, including those aiming to address industrial (n = 5), residential (n = 7), vehicular (n = 22), and multiple sources (n = 4). 
Some specific interventions, such as low emission zones and stove exchanges, were assessed by several studies, whereas others, such as a wood burning ban, were only assessed by a single study.\nMost studies assessing health and air quality outcomes used routine monitoring data. Studies assessing health outcomes mostly investigated effects in the general population, while few studies assessed specific subgroups such as infants, children and the elderly. No identified studies assessed unintended or adverse effects.\nThe judgements regarding the risk of bias of studies were mixed. Regarding health outcomes, we appraised eight studies (47%) as having no substantial risk of bias concerns, five studies (29%) as having some risk of bias concerns, and four studies (24%) as having serious risk of bias concerns. Regarding air quality outcomes, we judged 11 studies (31%) as having no substantial risk of bias concerns, 16 studies (46%) as having some risk of bias concerns, and eight studies (23%) as having serious risk of bias concerns.\nThe evidence base, comprising non-randomized studies only, was of low or very low certainty for all intervention categories and primary outcomes. The narrative and graphical synthesis showed that evidence for effectiveness was mixed across the four intervention categories. For interventions targeting industrial, residential and multiple sources, a similar pattern emerged for both health and air quality outcomes, with essentially all studies observing either no clear association in either direction or a significant association favouring the intervention. The evidence base for interventions targeting vehicular sources was more heterogeneous, as a small number of studies did observe a significant association favouring the control. Overall, however, the evidence suggests that the assessed interventions do not worsen air quality or health.\nAuthors' conclusions\nGiven the heterogeneity across interventions, outcomes, and methods, it was difficult to derive overall conclusions regarding the effectiveness of interventions in terms of improved air quality or health. Most included studies observed either no significant association in either direction or an association favouring the intervention, with little evidence that the assessed interventions might be harmful. The evidence base highlights the challenges related to establishing a causal relationship between specific air pollution interventions and outcomes. In light of these challenges, the results on effectiveness should be interpreted with caution; it is important to emphasize that lack of evidence of an association is not equivalent to evidence of no association.\nWe identified limited evidence for several world regions, notably Africa, the Middle East, Eastern Europe, Central Asia and Southeast Asia; decision-makers should prioritize the development and implementation of interventions in these settings. In the future, as new policies are introduced, decision-makers should consider a built-in evaluation component, which could facilitate more systematic and comprehensive evaluations. These could assess effectiveness, but also aspects of feasibility, fidelity and acceptability.\nThe production of higher quality and more uniform evidence would be helpful in informing decisions. 
Researchers should strive to sufficiently account for confounding, assess the impact of methodological decisions through the conduct and communication of sensitivity analyses, and improve the reporting of methods, and other aspects of the study, most importantly the description of the intervention and the context in which it is implemented.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We investigated whether measures put into place to reduce outdoor air pollution have actually reduced air pollution and improved health."
}
] |
query-laysum
|
22880
|
[
{
"role": "user",
"content": "Abstract: Background\nCarotid patch angioplasty may reduce the risk of acute occlusion or long-term restenosis of the carotid artery and subsequent ischaemic stroke in people undergoing carotid endarterectomy (CEA). This is an update of a Cochrane Review originally published in 1995 and updated in 2008.\nObjectives\nTo assess the safety and efficacy of routine or selective carotid patch angioplasty with either a venous patch or a synthetic patch compared with primary closure in people undergoing CEA. We wished to test the primary hypothesis that carotid patch angioplasty results in a lower rate of severe arterial restenosis and therefore fewer recurrent strokes and stroke-related deaths, without a considerable increase in perioperative complications.\nSearch methods\nWe searched the Cochrane Stroke Group trials register, CENTRAL, MEDLINE, Embase, two other databases, and two trial registries in September 2021.\nSelection criteria\nRandomised controlled trials and quasi-randomised trials comparing carotid patch angioplasty with primary closure in people undergoing CEA.\nData collection and analysis\nTwo review authors independently assessed eligibility and risk of bias; extracted data; and determined the certainty of evidence using the GRADE approach. Outcomes of interest included stroke, death, significant complications related to surgery, and artery restenosis or occlusion during the perioperative period (within 30 days of the operation) or during long-term follow-up.\nMain results\nWe included 11 trials involving 2100 participants undergoing 2304 CEA operations. The quality of trials was generally poor. Follow-up varied from hospital discharge to five years. Compared with primary closure, carotid patch angioplasty may make little or no difference to reduction in risk of any stroke during the perioperative period (odds ratio (OR) 0.57, 95% confidence interval (CI) 0.31 to 1.03; P = 0.063; 8 studies, 1769 participants; very low-certainty evidence), but may lower the risk of any stroke during long-term follow-up (OR 0.49, 95% CI 0.27 to 0.90; P = 0.022; 7 studies, 1332 participants; very low-certainty evidence). In the included studies, carotid patch angioplasty resulted in a lower risk of ipsilateral stroke during the perioperative period (OR 0.31, 95% CI 0.15 to 0.63; P = 0.001; 7 studies, 1201 participants; very low-certainty evidence), and during long-term follow-up (OR 0.32, 95% CI 0.16 to 0.63; P = 0.001; 6 studies, 1141 participants; very low-certainty evidence). The intervention was associated with a reduction in the risk of any stroke or death during long-term follow-up (OR 0.59, 95% CI 0.42 to 0.84; P = 0.003; 6 studies, 1019 participants; very low-certainty evidence). In addition, the included studies suggest that carotid patch angioplasty may reduce the risk of perioperative arterial occlusion (OR 0.18, 95% CI 0.08 to 0.41; P < 0.0001; 7 studies, 1435 participants; low-certainty evidence), and may reduce the risk of restenosis during long-term follow-up (OR 0.24, 95% CI 0.17 to 0.34; P < 0.00001; 8 studies, 1719 participants; low-certainty evidence). The studies recorded very few arterial complications, including haemorrhage, infection, cranial nerve palsies and pseudo-aneurysm formation, with either patch or primary closure. 
We found no correlation between the use of patch angioplasty and the risk of either perioperative or long-term stroke-related death or all-cause death rates.\nAuthors' conclusions\nCompared with primary closure, carotid patch angioplasty may reduce the risk of perioperative arterial occlusion and long-term restenosis of the operated artery. It would appear to reduce the risk of ipsilateral stroke during the perioperative and long-term period and reduce the risk of any stroke in the long-term when compared with primary closure. However, the evidence is uncertain due to the limited quality of included trials.\n\nGiven the provided abstract, please respond to the following query: \"What is carotid endarterectomy?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "About 20% of strokes result from narrowing of the carotid artery (the main artery supplying blood to the brain). A narrowed carotid artery can be treated with a surgical procedure called carotid endarterectomy, which involves cutting the artery open and removing fatty substances called plaques. This widens the artery and so reduces the risk of stroke. However, there is a small possibility that the operation itself can cause a stroke."
}
] |
query-laysum
|
22881
|
[
{
"role": "user",
"content": "Abstract: Background\nDelirium is defined as a disturbance in attention, awareness and cognition with reduced ability to direct, focus, sustain and shift attention, and reduced orientation to the environment. Critically ill patients in the intensive care unit (ICU) frequently develop ICU delirium. It can profoundly affect both them and their families because it is associated with increased mortality, longer duration of mechanical ventilation, longer hospital and ICU stay and long-term cognitive impairment. It also results in increased costs for society.\nObjectives\nTo assess existing evidence for the effect of preventive interventions on ICU delirium, in-hospital mortality, the number of delirium- and coma-free days, ventilator-free days, length of stay in the ICU and cognitive impairment.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, BIOSIS, International Web of Science, Latin American Caribbean Health Sciences Literature, CINAHL from 1980 to 11 April 2018 without any language limits. We adapted the MEDLINE search for searching the other databases. Furthermore, we checked references, searched citations and contacted study authors to identify additional studies. We also checked the following trial registries: Current Controlled Trials; ClinicalTrials.gov; and CenterWatch.com (all on 24 April 2018).\nSelection criteria\nWe included randomized controlled trials (RCTs) of adult medical or surgical ICU patients receiving any intervention for preventing ICU delirium. The control could be standard ICU care, placebo or both. We assessed the quality of evidence with GRADE.\nData collection and analysis\nWe checked titles and abstracts to exclude obviously irrelevant studies and obtained full reports on potentially relevant ones. Two review authors independently extracted data. If possible we conducted meta-analyses, otherwise we synthesized data narratively.\nMain results\nThe electronic search yielded 8746 records. We included 12 RCTs (3885 participants) comparing usual care with the following interventions: commonly used drugs (four studies); sedation regimens (four studies); physical therapy or cognitive therapy, or both (one study); environmental interventions (two studies); and preventive nursing care (one study). We found 15 ongoing studies and five studies awaiting classification. The participants were 48 to 70 years old; 48% to 74% were male; the mean acute physiology and chronic health evaluation (APACHE II) score was 14 to 28 (range 0 to 71; higher scores correspond to more severe disease and a higher risk of death). With the exception of one study, all participants were mechanically ventilated in medical or surgical ICUs or mixed. The studies were overall at low risk of bias. Six studies were at high risk of detection bias due to lack of blinding of outcome assessors. We report results for the two most commonly explored approaches to delirium prevention: pharmacologic and a non-pharmacologic intervention.\nHaloperidol versus placebo (two RCTs, 1580 participants)\nThe event rate of ICU delirium was measured in one study including 1439 participants. No difference was identified between groups, (risk ratio (RR) 1.01, 95% confidence interval (CI) 0.87 to 1.17) (moderate-quality evidence). 
Haloperidol versus placebo neither reduced nor increased in-hospital mortality (RR 0.98, 95% CI 0.80 to 1.22; 2 studies; 1580 participants; moderate-quality evidence); the number of delirium- and coma-free days (mean difference (MD) -0.60, 95% CI -1.37 to 0.17; 2 studies, 1580 participants; moderate-quality evidence); the number of ventilator-free days (mean 23.8; MD -0.30, 95% CI -0.93 to 0.33; 1 study; 1439 participants; high-quality evidence); or the length of ICU stay (MD 0.18, 95% CI -0.60 to 0.97; 2 studies, 1580 participants; high-quality evidence). None of the studies measured cognitive impairment. In one study there were three serious adverse events in the intervention group and five in the placebo group; in the other there were five serious adverse events and three patients died, one in each group. None of the serious adverse events were judged to be related to interventions received (moderate-quality evidence).\nPhysical and cognitive therapy interventions (one study, 65 participants)\nThe study did not measure the event rate of ICU delirium. A physical and cognitive therapy intervention versus standard care neither reduced nor increased in-hospital mortality (RR 0.94, 95% CI 0.40 to 2.20, I² = 0; 1 study, 65 participants; very low-quality evidence); the number of delirium- and coma-free days (MD -2.8, 95% CI -10.1 to 4.6, I² = 0; 1 study, 65 participants; very low-quality evidence); the number of ventilator-free days within the first 28/30 days (median 27.4 (IQR 0 to 29.2) versus 25 (IQR 0 to 28.9); 1 study, 65 participants; very low-quality evidence); length of ICU stay (MD 1.23, 95% CI -0.68 to 3.14, I² = 0; 1 study, 65 participants; very low-quality evidence); cognitive impairment measured by the MMSE (Mini-Mental State Examination, with higher scores indicating better function) (MD 0.97, 95% CI -0.19 to 2.13, I² = 0; 1 study, 30 participants; very low-quality evidence); or cognitive impairment measured by the Dysexecutive questionnaire (DEX, with lower scores indicating better function) (MD -8.76, 95% CI -19.06 to 1.54, I² = 0; 1 study, 30 participants; very low-quality evidence). One patient experienced acute back pain accompanied by hypotensive urgency during physical therapy.\nAuthors' conclusions\nThere is probably little or no difference between haloperidol and placebo for preventing ICU delirium, but further studies are needed to increase our confidence in the findings. There is insufficient evidence to determine the effects of physical and cognitive intervention on delirium. The effects of other pharmacological interventions, sedation, environmental, and preventive nursing interventions are unclear and warrant further investigation in large multicentre studies. Five studies are awaiting classification and we identified 15 ongoing studies, evaluating pharmacological interventions, sedation regimens, physical and occupational therapy combined or separately, and environmental interventions, that may alter the conclusions of the review in future.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our findings suggest that there may be little or no difference between haloperidol and placebo for preventing ICU delirium, but further studies are needed to reduce imprecision and increase our confidence in the findings. More studies of physical and cognitive therapy are needed as there is insufficient evidence to determine whether these non-pharmacological approaches can prevent delirium in the ICU. Additional research is required to explore the benefits and harms of other approaches to prevent delirium in the ICU such as sedation and changes in the ICU environment, and nursing care tailored to prevent delirium."
}
] |
query-laysum
|
22882
|
[
{
"role": "user",
"content": "Abstract: Background\nThe myelodysplastic syndrome (MDS) comprises a diverse group of haematopoietic stem cell disorders. Due to symptomatic anaemia, most people with MDS require supportive therapy including repeated red blood cell (RBC) transfusions. In combination with increased iron absorption, this contributes to the accumulation of iron resulting in secondary iron overload and the risk of organ dysfunction and reduced life expectancy. Since the human body has no natural means of removing excess iron, iron chelation therapy, i.e. the pharmacological treatment of iron overload, is usually recommended. However, it is unclear whether or not the newer oral chelator deferasirox leads to relevant benefit.\nObjectives\nTo evaluate the effectiveness and safety of oral deferasirox for managing iron overload in people with myelodysplastic syndrome (MDS).\nSearch methods\nWe searched the following databases up to 03 April 2014: MEDLINE, EMBASE, The Cochrane Library, Biosis Previews, Web of Science, Derwent Drug File and four trial registries: Current Controlled Trials (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov), ICTRP (www.who.int./ictrp/en/), and German Clinical Trial Register (www.drks.de).\nSelection criteria\nRandomised controlled trials (RCTs) comparing deferasirox with no therapy, placebo or with another iron-chelating treatment schedule.\nData collection and analysis\nWe did not identify any trials eligible for inclusion in this review.\nMain results\nNo trials met our inclusion criteria. However, we identified three ongoing and one completed trial (published as an abstract only and in insufficient detail to permit us to decide on inclusion) comparing deferasirox with deferoxamine, placebo or no treatment.\nAuthors' conclusions\nWe planned to report evidence from RCTs that evaluated the effectiveness of deferasirox compared to either placebo, no treatment or other chelating regimens, such as deferoxamine, in people with MDS. However, we did not identify any completed RCTs addressing this question.\nWe found three ongoing and one completed RCT (published as an abstract only and in insufficient detail) comparing deferasirox with deferoxamine, placebo or no treatment and data will hopefully be available soon. These results will be important to inform physicians and patients on the advantages and disadvantages of this treatment option.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We aimed to review the evidence about the effects of deferasirox (an oral therapeutic option) on reducing iron overload in people with myelodysplastic syndrome (MDS), which is a diverse group of haematopoietic stem cell disorders."
}
] |
query-laysum
|
22883
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with chronic obstructive pulmonary disease (COPD) often become transiently hypoxaemic (low oxygen levels in blood) on exercise, necessitating oxygen therapy to improve breathlessness and exercise capacity and to reduce disability. Ambulatory oxygen therapy refers to provision of oxygen therapy during exercise and activities of daily living. Ambulatory oxygen therapy is often used by patients on long-term oxygen therapy (LTOT) during exercise or by non-LTOT users with or without resting hypoxaemia when they show evidence of exercise de-saturation and demonstrate improvement in exercise capacity with supplemental oxygen.\nObjectives\nTo determine the longer-term efficacy of ambulatory oxygen therapy only in patients with COPD who do not meet the criteria for LTOT, with respect to improvement in exercise capacity, mortality, quality of life and other relevant measures of improvement.\nSearch methods\nThe Cochrane Airways Group Specialised Register, including MEDLINE, EMBASE and CINAHL, was searched. Online clinical trial registers, including Controlled Clinical Trials (www.controlled-trials.com), government registries (clinicaltrials.gov) and World Health Organization (WHO) registries (www.who.int/trialsearch), were screened for ongoing and recently completed studies. Bibliographies of included studies were searched for additional trials that may meet the inclusion criteria and were not retrieved by the above search strategy. Authors of identified trials were contacted to provide other published and unpublished studies. Searches were current as of November 2012.\nSelection criteria\nRandomised controlled trials (RCTs) that compare ambulatory oxygen therapy provided through portable oxygen cylinders/battery-powered devices or liquid oxygen canisters versus placebo air cylinders, usual medical care or co-intervention in study participants with COPD who did not meet criteria for LTOT.\nData collection and analysis\nWe used standard methods as expected by The Cochrane Collaboration.\nMain results\nFour studies met the inclusion criteria (331 participants), with two studies producing a statistically and clinically significant benefit in favour of the intervention for dyspnoea post exercise.The quality of life domain for all four included studies produced a statistically significant benefit for the subcategories of dyspnoea and fatigue, in favour of the oxygen group (dyspnoea mean difference (MD) 0.28, 95% confidence interval (CI) 0.10 to 0.45; P value 0.002; fatigue MD 0.17, 95% CI 0.04 to 0.31; P value 0.009). No evidence of any effect was reported for survival, and limited benefits were observed for exercise capacity (as measured by step test and distance walk test), with one study showing a statistically significant improvement in the number of steps taken in the oxygen group for group N-of-1 studies only. No other statistically significant benefits were observed for exercise capacity among the other trials or individual N-of-1 studies.\nAuthors' conclusions\nIn patients with COPD with moderate hypoxia, current evidence on ambulatory oxygen therapy reveals improvements in dyspnoea post exercise and in the dyspnoea and fatigue domain of quality of life. However, evidence for the clinical utility and effectiveness of ambulatory oxygen in improving mortality and exercise capacity was not evident in this review. 
Methodologically rigorous RCTs with sufficient power to detect a difference are required to investigate the role of ambulatory oxygen in the management of COPD.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of evidence from the studies in this review was moderate. The way the studies were conducted (methods) was not fully reported in all cases. Most studies were lacking the pre-published study plan (protocol)."
}
] |
query-laysum
|
22884
|
[
{
"role": "user",
"content": "Abstract: Background\nMany surgical approaches are available to treat varicose veins secondary to chronic venous insufficiency. One of the least invasive techniques is the ambulatory conservative hemodynamic correction of venous insufficiency method (in French 'cure conservatrice et hémodynamique de l'insuffisance veineuse en ambulatoire' (CHIVA)), an approach based on venous hemodynamics with deliberate preservation of the superficial venous system. This is the second update of the review first published in 2013.\nObjectives\nTo compare the efficacy and safety of the CHIVA method with alternative therapeutic techniques to treat varicose veins.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL, AMED, and the World Health Organisation International Clinical Trials Registry Platform and ClinicalTrials.gov trials registries to 19 October 2020. We also searched PUBMED to 19 October 2020 and checked the references of relevant articles to identify additional studies.\nSelection criteria\nWe included randomized controlled trials (RCTs) that compared CHIVA to other therapeutic techniques to treat varicose veins.\nData collection and analysis\nTwo review authors independently assessed and selected studies, extracted data, and performed quantitative analysis from the selected papers. A third author solved any disagreements. We assessed the risk of bias in included trials with the Cochrane risk of bias tool. We calculated the risk ratio (RR), mean difference (MD), number of people needed to treat for an additional beneficial outcome (NNTB), and the number of people needed to treat for an additional harmful outcome (NNTH), with 95% confidence intervals (CI). We evaluated the certainty of the evidence using GRADE. The main outcomes of interest were the recurrence of varicose veins and side effects.\nMain results\nFor this update, we identified two new additional studies. In total, we included six RCTs with 1160 participants (62% women) and collected from them eight comparisons. Three RCTs compared CHIVA with vein stripping. One RCT compared CHIVA with compression dressings in people with venous ulcers. The new studies included three comparisons, one compared CHIVA with vein stripping and radiofrequency ablation (RFA), and one compared CHIVA with vein stripping and endovenous laser therapy. We judged the certainty of the evidence for our outcomes as low to very low due to inconsistency, imprecision caused by the low number of events and risk of bias. The overall risk of bias across studies was high because neither participants nor personnel were blinded to the interventions. Two studies attempted to blind outcome assessors, but the characteristics of the surgery limited concealment.\nFive studies reported the outcome clinical recurrence of varicose veins with a follow-up of 18 months to 10 years. CHIVA may make little or no difference to the recurrence of varicose veins in the lower limb compared to stripping (RR 0.74, 95% CI 0.46 to 1.20; 5 studies, 966 participants; low-certainty evidence). We are uncertain whether CHIVA reduced recurrence compared to compression dressing (RR 0.23, 95% CI 0.06 to 0.96; 1 study, 47 participants; very low-certainty evidence). 
CHIVA may make little or no difference to clinical recurrence compared to RFA (RR 2.02, 95% CI 0.74 to 5.53; 1 study, 146 participants; low-certainty evidence) and endovenous laser (RR 0.20, 95% CI 0.01 to 4.06; 1 study, 100 participants; low-certainty evidence).\nWe found no clear difference between CHIVA and stripping for the side effects of limb infection (RR 0.83, 95% CI 0.33 to 2.10; 3 studies, 746 participants; low-certainty evidence), and superficial vein thrombosis (RR 1.05, 95% CI 0.51 to 2.17; 4 studies, 846 participants; low-certainty evidence). CHIVA may slightly reduce nerve injury (RR 0.14, 95% CI 0.02 to 0.98; NNTH 9, 95% CI 5 to 100; 4 studies, 846 participants; low-certainty evidence) and hematoma compared to stripping (RR 0.59, 95% CI 0.37 to 0.97; NNTH 11, 95% CI 5 to 100; 2 studies, 245 participants; low-certainty evidence). For bruising, one study found no differences between groups, while another study found reduced rates of bruising in the CHIVA group compared to the stripping group. Compared to RFA, CHIVA may make little or no difference to rates of limb infection, superficial vein thrombosis, nerve injury or hematoma, but may cause more bruising (RR 1.15, 95% CI 1.04 to 1.28; NNTH 8, 95% CI 5 to 25; 1 study, 144 participants; low-certainty evidence). Compared to endovenous laser, CHIVA may make little or no difference to rates of limb infection, superficial vein thrombosis, nerve injury or hematoma. The study comparing CHIVA versus compression did not report side effects.\nAuthors' conclusions\nThere may be little or no difference in the recurrence of varicose veins when comparing CHIVA to stripping (low-certainty evidence), but CHIVA may slightly reduce nerve injury and hematoma in the lower limb (low-certainty evidence). Very limited evidence means we are uncertain of any differences in recurrence when comparing CHIVA with compression (very low-certainty evidence). CHIVA may make little or no difference to recurrence compared to RFA (low-certainty evidence), but may result in more bruising (low-certainty evidence). CHIVA may make little or no difference to recurrence and side effects compared to endovenous laser therapy (low-certainty evidence). However, we based these conclusions on a small number of trials with a high risk of bias as the effects of surgery could not be concealed, and the results were imprecise due to the low number of events. New RCTs are needed to confirm these results and to compare CHIVA with approaches other than open surgery.\n\nGiven the provided abstract, please respond to the following query: \"How did we identify and evaluate the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "First, we searched the medical literature for randomized controlled trials (RCTs), clinical studies where people are randomly put into one or more treatment groups. This type of study provides the most robust evidence about the effects of treatment. We compared the results and summarized the evidence from all the studies. Finally, we assessed how certain the evidence was. To do this, we considered factors such as the way studies were conducted, study sizes and consistency of findings across studies. Based on our assessments, we categorized the evidence as very low, low, moderate or high certainty."
}
] |
query-laysum
|
22885
|
[
{
"role": "user",
"content": "Abstract: Background\nExcess dietary sodium consumption is a risk factor for high blood pressure, stroke and cardiovascular disease. Currently, dietary sodium consumption in almost every country is too high. Excess sodium intake is associated with high blood pressure, which is common and costly and accounts for significant burden of disease. A large number of jurisdictions worldwide have implemented population-level dietary sodium reduction initiatives. No systematic review has examined the impact of these initiatives.\nObjectives\n• To assess the impact of population-level interventions for dietary sodium reduction in government jurisdictions worldwide.\n• To assess the differential impact of those initiatives by social and economic indicators.\nSearch methods\nWe searched the following electronic databases from their start date to 5 January 2015: the Cochrane Central Register of Controlled Trials (CENTRAL); Cochrane Public Health Group Specialised Register; MEDLINE; MEDLINE In Process & Other Non-Indexed Citations; EMBASE; Effective Public Health Practice Project Database; Web of Science; Trials Register of Promoting Health Interventions (TRoPHI) databases; and Latin American Caribbean Health Sciences Literature (LILACS). We also searched grey literature, other national sources and references of included studies.\nThis review was conducted in parallel with a comprehensive review of national sodium reduction efforts under way worldwide (Trieu 2015), through which we gained additional information directly from country contacts.\nWe imposed no restrictions on language or publication status.\nSelection criteria\nWe included population-level initiatives (i.e. interventions that target whole populations, in this case, government jurisdictions, worldwide) for dietary sodium reduction, with at least one pre-intervention data point and at least one post-intervention data point of comparable jurisdiction. We included populations of all ages and the following types of study designs: cluster-randomised, controlled pre-post, interrupted time series and uncontrolled pre-post. We contacted study authors at different points in the review to ask for missing information.\nData collection and analysis\nTwo review authors extracted data, and two review authors assessed risk of bias for each included initiative.\nWe analysed the impact of initiatives by using estimates of sodium consumption from dietary surveys or urine samples. All estimates were converted to a common metric: salt intake in grams per day. We analysed impact by computing the mean change in salt intake (grams per day) from pre-intervention to post-intervention.\nMain results\nWe reviewed a total of 881 full-text documents. From these, we identified 15 national initiatives, including more than 260,000 people, that met the inclusion criteria. None of the initiatives were provided in lower-middle-income or low-income countries. All initiatives except one used an uncontrolled pre-post study design.\nBecause of high levels of study heterogeneity (I2 > 90%), we focused on individual initiatives rather than on pooled results.\nTen initiatives provided sufficient data for quantitative analysis of impact (64,798 participants). As required by the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) method, we graded the evidence as very low due to the risk of bias of the included studies, as well as variation in the direction and size of effect across the studies. 
Five of these showed mean decreases in average daily salt intake per person from pre-intervention to post-intervention, ranging from 1.15 grams/day less (Finland) to 0.35 grams/day less (Ireland). Two initiatives showed a mean increase in salt intake from pre-intervention to post-intervention: Canada (1.66) and Switzerland (0.80 grams/day more per person); however, in both countries the pre-intervention data point was from several years prior to the initiation of the intervention. The remaining initiatives did not show a statistically significant mean change.\nSeven of the 10 initiatives were multi-component and incorporated intervention activities of a structural nature (e.g. food product reformulation, food procurement policy in specific settings). Of those seven initiatives, four showed a statistically significant mean decrease in salt intake from pre-intervention to post-intervention, ranging from Finland to Ireland (see above), and one showed a statistically significant mean increase in salt intake from pre-intervention to post-intervention (Switzerland; see above).\nNine initiatives permitted quantitative analysis of differential impact by sex (men and women separately). For women, three initiatives (China, Finland, France) showed a statistically significant mean decrease, four (Austria, Netherlands, Switzerland, United Kingdom) showed no significant change and two (Canada, United States) showed a statistically significant mean increase in salt intake from pre-intervention to post-intervention. For men, five initiatives (Austria, China, Finland, France, United Kingdom) showed a statistically significant mean decrease, three (Netherlands, Switzerland, United States) showed no significant change and one (Canada) showed a statistically significant mean increase in salt intake from pre-intervention to post-intervention.\nInformation was insufficient to indicate whether a differential change in mean salt intake occurred from pre-intervention to post-intervention by other axes of equity included in the PROGRESS framework (e.g. education, place of residence).\nWe identified no adverse effects of these initiatives.\nThe number of initiatives was insufficient to permit other subgroup analyses, including stratification by intervention type, economic status of country and duration (or start year) of the initiative.\nMany studies had methodological strengths, including large, nationally representative samples of the population and rigorous measurement of dietary sodium intake. However, all studies were scored as having high risk of bias, reflecting the observational nature of the research and the use of an uncontrolled study design. The quality of evidence for the main outcome was low. We could perform a sensitivity analysis only for impact.\nAuthors' conclusions\nPopulation-level interventions in government jurisdictions for dietary sodium reduction have the potential to result in population-wide reductions in salt intake from pre-intervention to post-intervention, particularly if they are multi-component (more than one intervention activity) and incorporate intervention activities of a structural nature (e.g. food product reformulation), and particularly amongst men. Heterogeneity across studies was significant, reflecting different contexts (population and setting) and initiative characteristics.
Implementation of future initiatives should embed more effective means of evaluation to help us better understand the variation in the effects.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Five of the 10 countries included in the quantitative analysis (China, Finland, France, Ireland and England) showed a decrease in salt intake after the intervention. Two of the 10 countries (Canada, Switzerland) showed an increase in salt intake after the intervention, however, in both countries the only data available were from several years prior to the intervention starting. Because the initiatives were very different, we cannot present an overall finding of whether these types of initiatives work.\nWhen we focused on the subset of seven countries whose salt reduction initiatives included multiple components and were not focused solely on educating the public, we found that more than half (four of seven) showed a decrease in salt intake from pre-intervention to post-intervention.\nWhen we examined the nine initiatives that analysed men and women separately, we found that amongst men, more than half (five of nine) showed a decrease in salt intake after the intervention. Amongst women, the pattern of findings was less clear, with three of nine interventions showing a decrease, two showing an increase and four showing no change in salt intake.\nLow-bias study designs, such as randomised controlled trials, typically are not suitable for evaluating complex initiatives such as these; therefore, we rated all of the studies included in this review as having low methodological quality. Large nationally representative samples of the population and careful measurement of dietary sodium intake were strengths of several studies. However, because of study design limitations, the trustworthiness of study results is not clear.\nOverall, our results show that national government initiatives have the potential to achieve population-wide reductions in salt intake, especially amongst men, and particularly if they employ more than one strategy and include structural activities such as food product reformulation (i.e. food companies putting less salt in food products). The wide variation of results across the studies we found presents a challenge in interpreting the current evidence and this warrants more research to help us understand this."
}
] |
query-laysum
|
22886
|
[
{
"role": "user",
"content": "Abstract: Background\nPeptic ulcer disease is the cause of dyspepsia in about 10% of people. Ninety-five percent of duodenal and 70% of gastric ulcers are associated with Helicobacter pylori. Eradication of H. pylori reduces the relapse rate of ulcers but the magnitude of this effect is uncertain. This is an update of Ford AC, Delaney B, Forman D, Moayyedi P. Eradication therapy for peptic ulcer disease in Helicobacter pylori-positive patients. Cochrane Database of Systematic Reviews 2006, Issue 2. Art. No.: CD003840. DOI: 10.1002/14651858.CD003840.pub4.\nObjectives\nTo assess the proportion of peptic ulcers healed and the proportion of participants who remained free from relapse with eradication therapy against placebo or other pharmacological therapies in H. pylori-positive people.\nTo assess the proportion of participants that achieved complete relief of symptoms and improvement in quality of life scores.\nTo compare the incidence of adverse effects/drop-outs (total number for each drug) associated with the different treatments.\nTo assess the proportion of participants in whom successful eradication was achieved.\nSearch methods\nIn this update, we identified trials by searching the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE (1950 to March 2016) and Ovid EMBASE (1980 to March 2016). To identify further relevant trials, we handsearched reference lists from trials selected by electronic searching, and published abstracts from conference proceedings from the United European Gastroenterology Week (published in Gut) and Digestive Disease Week (published in Gastroenterology). The search was last updated in March 2016. We contacted members of Cochrane Upper GI and Pancreatic Diseases, and experts in the field and asked them to provide details of outstanding clinical trials and any relevant unpublished materials.\nSelection criteria\nWe analysed randomised controlled trials of short- and long-term treatment of peptic ulcer disease in H. pylori-positive adults. Participants received at least one week of H. pylori eradication compared with ulcer healing drug, placebo or no treatment. Trials were included if they reported assessment from two weeks onwards.\nData collection and analysis\nWe collected data on ulcer healing, recurrence, relief of symptoms and adverse effects. We calculated the risk ratio (RR) with 95% confidence intervals (CI) using both fixed-effect and random-effects models with Review Manager software (RevMan 5.3) based on intention-to-treat analysis as far as possible.\nMain results\nA total of 55 trials were included for one or more outcomes for this review.\nIn duodenal ulcer healing, eradication therapy was superior to ulcer healing drug (UHD) (34 trials, 3910 participants, RR of ulcer persisting = 0.66, 95% confidence interval (CI) 0.58 to 0.76; 381/2286 (adjusted proportion: 12.4%) in eradication therapy plus UHD versus 304/1624 (18.7%) in UHD; low quality evidence) and no treatment (two trials, 207 participants, RR 0.37, 95% CI 0.26 to 0.53; 30/125 (adjusted proportion: 21.7%) in eradication therapy versus 48/82 (58.5%) in no treatment; low quality evidence).\nIn gastric ulcer healing, the differences were imprecise between eradication therapy and UHD (15 trials, 1974 participants, RR 1.23, 95% CI 0.90 to 1.68; 220/1192 (adjusted proportion: 16.0%) in eradication therapy plus UHD versus 102/782 (13.0%) in UHD; very low quality evidence). 
In preventing duodenal ulcer recurrence, the differences were imprecise between maintenance therapy with H. pylori eradication therapy and maintenance therapy with UHD (four trials, 319 participants, RR of ulcer recurring 0.73; 95% CI 0.42 to 1.25; 19/159 (adjusted proportion: 11.9%) in eradication therapy versus 26/160 (16.3%) in UHD; very low quality evidence), but eradication therapy was superior to no treatment (27 trials, 2509 participants, RR 0.20, 95% CI 0.15 to 0.26; 215/1501 (adjusted proportion: 12.9%) in eradication therapy versus 649/1008 (64.4%) in no treatment; very low quality evidence).\nIn preventing gastric ulcer recurrence, eradication therapy was superior to no treatment (12 trials, 1476 participants, RR 0.31, 95% CI 0.22 to 0.45; 116/697 (adjusted proportion: 16.3%) in eradication therapy versus 356/679 (52.4%) in no treatment; very low quality evidence). None of the trials reported the proportion of people with gastric ulcer not healed after initial therapy between H. pylori eradication therapy and no active treatment or the proportion of people with recurrent gastric ulcer or peptic ulcers during maintenance therapy between H. pylori eradication therapy and ulcer healing drug therapy.\nAuthors' conclusions\nAdding a one to two-week course of H. pylori eradication therapy is an effective treatment for people with H. pylori-positive duodenal ulcer when compared to ulcer healing drugs alone and no treatment. H. pylori eradication therapy is also effective in preventing recurrence of duodenal and gastric ulcer compared to no treatment. There is currently no evidence that H. pylori eradication therapy is an effective treatment in people with gastric ulcer or that it is effective in preventing recurrence of duodenal ulcer compared to ulcer healing drug. However, confidence intervals were wide and significant benefits or harms of H. pylori eradication therapy in acute ulcer healing of gastric ulcers compared to no treatment, and in preventing recurrence of duodenal ulcers compared to ulcer healing drugs cannot be ruled out.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Adding a one to two-week course of H. pylori eradication therapy speeds up ulcer healing for people with H. pylori-positive duodenal ulcer when compared to ulcer-healing drugs alone and no treatment. H. pylori eradication therapy is also effective in preventing recurrence of duodenal and gastric ulcer (ulcer returning after initial healing) compared to no treatment. There is currently no evidence that H. pylori eradication therapy is an effective treatment in people with gastric ulcer or that it is effective in preventing recurrence of duodenal ulcer compared to ulcer-healing drugs. However, because of the small number of studies included for the last two comparisons, significant benefits or harms of H. pylori eradication therapy in acute ulcer healing of gastric ulcers compared to no treatment and in preventing recurrence of duodenal ulcers compared to ulcer healing drugs cannot be ruled out."
}
] |
query-laysum
|
22887
|
[
{
"role": "user",
"content": "Abstract: Background\nComorbid depression and substance use disorders are common and have poorer outcomes than either disorder alone. While effective psychological treatments for depression or substance use disorders are available, relatively few randomised controlled trials (RCTs) have examined the efficacy of these treatments in people with these comorbid disorders.\nObjectives\nTo assess the efficacy of psychological interventions delivered alone or in combination with pharmacotherapy for people diagnosed with comorbid depression and substance use disorders.\nSearch methods\nWe searched the following databases up to February 2019: Cochrane Central Register of Controlled Trials, PubMed, Embase, CINAHL, Google Scholar and clinical trials registers. All systematic reviews identified, were handsearched for relevant articles.\nSelection criteria\nThe review includes data from RCTs of psychological treatments for people diagnosed with comorbid depression and substance use disorders, using structured clinical interviews. Studies were included if some of the sample were experiencing another mental health disorder (e.g. anxiety); however, studies which required a third disorder as part of their inclusion criteria were not included. Studies were included if psychological interventions (with or without pharmacotherapy) were compared with no treatment, delayed treatment, treatment as usual or other psychological treatments.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nSeven RCTs of psychological treatments with a total of 608 participants met inclusion criteria. All studies were published in the USA and predominately consisted of Caucasian samples. All studies compared different types of psychological treatments. Two studies compared Integrated Cognitive Behavioural Therapy (ICBT) with Twelve Step Facilitation (TSF), another two studies compared Interpersonal Psychotherapy for Depression (IPT-D) with other treatment (Brief Supportive Therapy (BST) or Psychoeducation). The other three studies compared different types or combinations of psychological treatments. No studies compared psychological interventions with no treatment or treatment as usual control conditions. The studies included a diverse range of participants (e.g. veterans, prisoners, community adults and adolescents).\nAll studies were at high risk of performance bias, other main sources were selection, outcome detection and attrition bias. Due to heterogeneity between studies only two meta-analyses were conducted. The first meta-analysis focused on two studies (296 participants) comparing ICBT to TSF. Very low-quality evidence revealed that while the TSF group had lower depression scores than the ICBT group at post-treatment (mean difference (MD) 4.05, 95% confidence interval (CI) 1.43 to 6.66; 212 participants), there was no difference between groups in depression symptoms (MD 1.53, 95% CI -1.73 to 4.79; 181 participants) at six- to 12-month follow-up. At post-treatment there was no difference between groups in proportion of days abstinent (MD -2.84, 95% CI -8.04 to 2.35; 220 participants), however, the ICBT group had a greater proportion of days abstinent than the TSF group at the six- to 12-month follow-up (MD 10.76, 95% CI 3.10 to 18.42; 189 participants). 
There were no differences between the groups in treatment attendance (MD -1.27, 95% CI -6.10 to 3.56; 270 participants) or treatment retention (RR 0.95, 95% CI 0.72 to 1.25; 296 participants).\nThe second meta-analysis was conducted with two studies (64 participants) comparing IPT-D with other treatment (Brief Supportive Psychotherapy/Psychoeducation). Very low-quality evidence indicated IPT-D resulted in significantly lower depressive symptoms at post-treatment (MD -0.54, 95% CI -1.04 to -0.04; 64 participants), but this effect was not maintained at three-month follow-up (MD 3.80, 95% CI -3.83 to 11.43) in the one study reporting follow-up outcomes (38 participants; IPT-D versus Psychoeducation). Substance use was examined separately in each study, due to heterogeneity in outcomes. Both studies found very low-quality evidence of no significant differences in substance use outcomes at post-treatment (percentage of days abstinent, IPT-D versus Brief Supportive Psychotherapy; MD -2.70, 95% CI -28.74 to 23.34; 26 participants) or at three-month follow-up (relative risk of relapse, IPT-D versus Psychoeducation; RR 0.67, 95% CI 0.30 to 1.50; 38 participants). There was also very low-quality evidence for no significant differences between groups in treatment retention (RR 1.00, 95% CI 0.81 to 1.23; 64 participants). No adverse events were reported in any study.\nAuthors' conclusions\nThe conclusions of this review are limited due to the low number and very poor quality of included studies. No conclusions can be made about the efficacy of psychological interventions (delivered alone or in combination with pharmacotherapy) for the treatment of comorbid depression and substance use disorders, as they are yet to be compared with no treatment or treatment as usual in this population. In terms of differences between psychotherapies, although some significant effects were found, the effects were too inconsistent and small, and the evidence of too poor quality, to be of relevance to practice.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "No conclusions about the effectiveness of psychological interventions for the treatment of comorbid depression and substance use disorders can be made, due to the low number of studies found and very low quality of the evidence. More high-quality studies comparing psychological interventions versus no treatment, delayed treatment, treatment as usual and other psychological interventions are needed."
}
] |
query-laysum
|
22888
|
[
{
"role": "user",
"content": "Abstract: Background\nThe choice of the appropriate perioperative thromboprophylaxis for people with cancer depends on the relative benefits and harms of different anticoagulants.\nObjectives\nTo systematically review the evidence for the relative efficacy and safety of anticoagulants for perioperative thromboprophylaxis in people with cancer.\nSearch methods\nThis update of the systematic review was based on the findings of a comprehensive literature search conducted on 14 June 2018 that included a major electronic search of Cochrane Central Register of Controlled Trials (CENTRAL, 2018, Issue 6), MEDLINE (Ovid), and Embase (Ovid); handsearching of conference proceedings; checking of references of included studies; searching for ongoing studies; and using the 'related citation' feature in PubMed.\nSelection criteria\nRandomized controlled trials (RCTs) that enrolled people with cancer undergoing a surgical intervention and assessed the effects of low-molecular weight heparin (LMWH) to unfractionated heparin (UFH) or to fondaparinux on mortality, deep venous thrombosis (DVT), pulmonary embolism (PE), bleeding outcomes, and thrombocytopenia.\nData collection and analysis\nUsing a standardized form, we extracted data in duplicate on study design, participants, interventions outcomes of interest, and risk of bias. Outcomes of interest included all-cause mortality, PE, symptomatic venous thromboembolism (VTE), asymptomatic DVT, major bleeding, minor bleeding, postphlebitic syndrome, health related quality of life, and thrombocytopenia. We assessed the certainty of evidence for each outcome using the GRADE approach (GRADE Handbook).\nMain results\nOf 7670 identified unique citations, we included 20 RCTs with 9771 randomized people with cancer receiving preoperative prophylactic anticoagulation. We identified seven reports for seven new RCTs for this update.\nThe meta-analyses did not conclusively rule out either a beneficial or harmful effect of LMWH compared with UFH for the following outcomes: mortality (risk ratio (RR) 0.82, 95% confidence interval (CI) 0.63 to 1.07; risk difference (RD) 9 fewer per 1000, 95% CI 19 fewer to 4 more; moderate-certainty evidence), PE (RR 0.49, 95% CI 0.17 to 1.47; RD 3 fewer per 1000, 95% CI 5 fewer to 3 more; moderate-certainty evidence), symptomatic DVT (RR 0.67, 95% CI 0.27 to 1.69; RD 3 fewer per 1000, 95% CI 7 fewer to 7 more; moderate-certainty evidence), asymptomatic DVT (RR 0.86, 95% CI 0.71 to 1.05; RD 11 fewer per 1000, 95% CI 23 fewer to 4 more; low-certainty evidence), major bleeding (RR 1.01, 95% CI 0.69 to 1.48; RD 0 fewer per 1000, 95% CI 10 fewer to 15 more; moderate-certainty evidence), minor bleeding (RR 1.01, 95% CI 0.76 to 1.33; RD 1 more per 1000, 95% CI 34 fewer to 47 more; moderate-certainty evidence), reoperation for bleeding (RR 0.93, 95% CI 0.57 to 1.50; RD 4 fewer per 1000, 95% CI 22 fewer to 26 more; moderate-certainty evidence), intraoperative transfusion (mean difference (MD) -35.36 mL, 95% CI -253.19 to 182.47; low-certainty evidence), postoperative transfusion (MD 190.03 mL, 95% CI -23.65 to 403.72; low-certainty evidence), and thrombocytopenia (RR 3.07, 95% CI 0.32 to 29.33; RD 6 more per 1000, 95% CI 2 fewer to 82 more; moderate-certainty evidence). LMWH was associated with lower incidence of wound hematoma (RR 0.70, 95% CI 0.54 to 0.92; RD 26 fewer per 1000, 95% CI 39 fewer to 7 fewer; moderate-certainty evidence). 
The meta-analyses found the following additional results for the outcomes of intraoperative blood loss (MD -6.75 mL, 95% CI -85.49 to 71.99; moderate-certainty evidence) and postoperative drain volume (MD 30.18 mL, 95% CI -36.26 to 96.62; moderate-certainty evidence).\nIn addition, the meta-analyses did not conclusively rule out either a beneficial or harmful effect of LMWH compared with fondaparinux for the following outcomes: any VTE (DVT or PE, or both; RR 2.51, 95% CI 0.89 to 7.03; RD 57 more per 1000, 95% CI 4 fewer to 228 more; low-certainty evidence), major bleeding (RR 0.74, 95% CI 0.45 to 1.23; RD 8 fewer per 1000, 95% CI 16 fewer to 7 more; low-certainty evidence), minor bleeding (RR 0.83, 95% CI 0.34 to 2.05; RD 8 fewer per 1000, 95% CI 33 fewer to 52 more; low-certainty evidence), thrombocytopenia (RR 0.35, 95% CI 0.04 to 3.30; RD 14 fewer per 1000, 95% CI 20 fewer to 48 more; low-certainty evidence), any PE (RR 3.13, 95% CI 0.13 to 74.64; RD 2 more per 1000, 95% CI 1 fewer to 78 more; low-certainty evidence) and postoperative drain volume (MD -20.00 mL, 95% CI -114.34 to 74.34; low-certainty evidence).\nAuthors' conclusions\nWe found no difference between perioperative thromboprophylaxis with LMWH versus UFH, or between LMWH and fondaparinux, in their effects on mortality, thromboembolic outcomes, major bleeding, or minor bleeding in people with cancer. There was a lower incidence of wound hematoma with LMWH compared to UFH.\n\nGiven the provided abstract, please respond to the following query: \"Reliability of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The reliability of evidence varied from low to moderate across the outcomes of interest."
}
] |
query-laysum
|
22889
|
[
{
"role": "user",
"content": "Abstract: Background\nDelirium is an acute neuropsychological disorder that is common in hospitalised patients. It can be distressing to patients and carers and it is associated with serious adverse outcomes. Treatment options for established delirium are limited and so prevention of delirium is desirable. Non-pharmacological interventions are thought to be important in delirium prevention.\nObjectives\nTo assess the effectiveness of non-pharmacological interventions designed to prevent delirium in hospitalised patients outside intensive care units (ICU).\nSearch methods\nWe searched ALOIS, the specialised register of the Cochrane Dementia and Cognitive Improvement Group, with additional searches conducted in MEDLINE, Embase, PsycINFO, CINAHL, LILACS, Web of Science Core Collection, ClinicalTrials.gov and the World Health Organization Portal/ICTRP to 16 September 2020. There were no language or date restrictions applied to the electronic searches, and no methodological filters were used to restrict the search.\nSelection criteria\nWe included randomised controlled trials (RCTs) of single and multicomponent non-pharmacological interventions for preventing delirium in hospitalised adults cared for outside intensive care or high dependency settings. We only included non-pharmacological interventions which were designed and implemented to prevent delirium.\nData collection and analysis\nTwo review authors independently examined titles and abstracts identified by the search for eligibility and extracted data from full-text articles. Any disagreements on eligibility and inclusion were resolved by consensus. We used standard Cochrane methodological procedures. The primary outcomes were: incidence of delirium; inpatient and later mortality; and new diagnosis of dementia. We included secondary and adverse outcomes as pre-specified in the review protocol. We used risk ratios (RRs) as measures of treatment effect for dichotomous outcomes and between-group mean differences for continuous outcomes. The certainty of the evidence was assessed using GRADE. A complementary exploratory analysis was undertaker using a Bayesian component network meta-analysis fixed-effect model to evaluate the comparative effectiveness of the individual components of multicomponent interventions and describe which components were most strongly associated with reducing the incidence of delirium.\nMain results\nWe included 22 RCTs that recruited a total of 5718 adult participants. Fourteen trials compared a multicomponent delirium prevention intervention with usual care. Two trials compared liberal and restrictive blood transfusion thresholds. The remaining six trials each investigated a different non-pharmacological intervention. Incidence of delirium was reported in all studies.\nUsing the Cochrane risk of bias tool, we identified risks of bias in all included trials. All were at high risk of performance bias as participants and personnel were not blinded to the interventions. 
Nine trials were at high risk of detection bias due to lack of blinding of outcome assessors and three more were at unclear risk in this domain.\nPooled data showed that multi-component non-pharmacological interventions probably reduce the incidence of delirium compared to usual care (10.5% incidence in the intervention group, compared to 18.4% in the control group, risk ratio (RR) 0.57, 95% confidence interval (CI) 0.46 to 0.71, I2 = 39%; 14 studies; 3693 participants; moderate-certainty evidence, downgraded due to risk of bias).\nThere may be little or no effect of multicomponent interventions on inpatient mortality compared to usual care (5.2% in the intervention group, compared to 4.5% in the control group, RR 1.17, 95% CI 0.79 to 1.74, I2 = 15%; 10 studies; 2640 participants; low-certainty evidence downgraded due to inconsistency and imprecision).\nNo studies of multicomponent interventions reported data on new diagnoses of dementia.\nMulticomponent interventions may result in a small reduction of around a day in the duration of a delirium episode (mean difference (MD) -0.93, 95% CI -2.01 to 0.14 days, I2 = 65%; 351 participants; low-certainty evidence downgraded due to risk of bias and imprecision). The evidence is very uncertain about the effect of multicomponent interventions on delirium severity (standardised mean difference (SMD) -0.49, 95% CI -1.13 to 0.14, I2=64%; 147 participants; very low-certainty evidence downgraded due to risk of bias and serious imprecision). Multicomponent interventions may result in a reduction in hospital length of stay compared to usual care (MD -1.30 days, 95% CI -2.56 to -0.04 days, I2=91%; 3351 participants; low-certainty evidence downgraded due to risk of bias and inconsistency), but little to no difference in new care home admission at the time of hospital discharge (RR 0.77, 95% CI 0.55 to 1.07; 536 participants; low-certainty evidence downgraded due to risk of bias and imprecision). Reporting of other adverse outcomes was limited.\nOur exploratory component network meta-analysis found that re-orientation (including use of familiar objects), cognitive stimulation and sleep hygiene were associated with reduced risk of incident delirium. Attention to nutrition and hydration, oxygenation, medication review, assessment of mood and bowel and bladder care were probably associated with a reduction in incident delirium but estimates included the possibility of no benefit or harm. Reducing sensory deprivation, identification of infection, mobilisation and pain control all had summary estimates that suggested potential increases in delirium incidence, but the uncertainty in the estimates was substantial.\nEvidence from two trials suggests that use of a liberal transfusion threshold over a restrictive transfusion threshold probably results in little to no difference in incident delirium (RR 0.92, 95% CI 0.62 to 1.36; I2 = 9%; 294 participants; moderate-certainty evidence downgraded due to risk of bias).\nSix other interventions were examined, but evidence for each was limited to single studies and we identified no evidence of delirium prevention.\nAuthors' conclusions\nThere is moderate-certainty evidence regarding the benefit of multicomponent non-pharmacological interventions for the prevention of delirium in hospitalised adults, estimated to reduce incidence by 43% compared to usual care. We found no evidence of an effect on mortality. 
There is emerging evidence that these interventions may reduce hospital length of stay, with a trend towards reduced delirium duration, although the effect on delirium severity remains uncertain. Further research should focus on implementation and detailed analysis of the components of the interventions to support more effective, tailored practice recommendations.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There were some limitations in the studies which may affect the results. In many included studies the people in the study and sometimes researchers were aware of who was and was not receiving the intervention.\nThere was very little information about people living with dementia, who are at greater risk of experiencing delirium."
}
] |
query-laysum
|
22890
|
[
{
"role": "user",
"content": "Abstract: Background\nDementia, a global health priority, has no current cure. Around 50 million people worldwide currently live with dementia, and this number is expected to treble by 2050. Some health conditions and lifestyle behaviours can increase or decrease the risk of dementia and are known as 'predictors'. Prognostic models combine such predictors to measure the risk of future dementia. Models that can accurately predict future dementia would help clinicians select high-risk adults in middle age and implement targeted risk reduction.\nObjectives\nOur primary objective was to identify multi-domain prognostic models used in middle-aged adults (aged 45 to 65 years) for predicting dementia or cognitive impairment. Eligible multi-domain prognostic models involved two or more of the modifiable dementia predictors identified in a 2020 Lancet Commission report and a 2019 World Health Organization (WHO) report (less education, hearing loss, traumatic brain injury, hypertension, excessive alcohol intake, obesity, smoking, depression, social isolation, physical inactivity, diabetes mellitus, air pollution, poor diet, and cognitive inactivity). Our secondary objectives were to summarise the prognostic models, to appraise their predictive accuracy (discrimination and calibration) as reported in the development and validation studies, and to identify the implications of using dementia prognostic models for the management of people at a higher risk for future dementia.\nSearch methods\nWe searched MEDLINE, Embase, PsycINFO, CINAHL, and ISI Web of Science Core Collection from inception until 6 June 2022. We performed forwards and backwards citation tracking of included studies using the Web of Science platform.\nSelection criteria\nWe included development and validation studies of multi-domain prognostic models. The minimum eligible follow-up was five years. Our primary outcome was an incident clinical diagnosis of dementia based on validated diagnostic criteria, and our secondary outcome was dementia or cognitive impairment determined by any other method.\nData collection and analysis\nTwo review authors independently screened the references, extracted data using a template based on the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS), and assessed risk of bias and applicability of included studies using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). We synthesised the C-statistics of models that had been externally validated in at least three comparable studies.\nMain results\nWe identified 20 eligible studies; eight were development studies and 12 were validation studies. There were 14 unique prognostic models: seven models with validation studies and seven models with development-only studies. The models included a median of nine predictors (range 6 to 34); the median number of modifiable predictors was five (range 2 to 11). The most common modifiable predictors in externally validated models were diabetes, hypertension, smoking, physical activity, and obesity. In development-only models, the most common modifiable predictors were obesity, diabetes, hypertension, and smoking. No models included hearing loss or air pollution as predictors. Nineteen studies had a high risk of bias according to the PROBAST assessment, mainly because of inappropriate analysis methods, particularly lack of reported calibration measures. 
Applicability concerns were low for 12 studies, as their population, predictors, and outcomes were consistent with those of interest for this review. Applicability concerns were high for nine studies, as they lacked baseline cognitive screening or excluded an age group within the range of 45 to 65 years.\nOnly one model, Cardiovascular Risk Factors, Ageing, and Dementia (CAIDE), had been externally validated in multiple studies, allowing for meta-analysis. The CAIDE model included eight predictors (four modifiable predictors): age, education, sex, systolic blood pressure, body mass index (BMI), total cholesterol, physical activity and APOEƐ4 status. Overall, our confidence in the prediction accuracy of CAIDE was very low; our main reasons for downgrading the certainty of the evidence were high risk of bias across all the studies, high concern of applicability, non-overlapping confidence intervals (CIs), and a high degree of heterogeneity.\nThe summary C-statistic was 0.71 (95% CI 0.66 to 0.76; 3 studies; very low-certainty evidence) for the incident clinical diagnosis of dementia, and 0.67 (95% CI 0.61 to 0.73; 3 studies; very low-certainty evidence) for dementia or cognitive impairment based on cognitive scores. Meta-analysis of calibration measures was not possible, as few studies provided these data.\nAuthors' conclusions\nWe identified 14 unique multi-domain prognostic models used in middle-aged adults for predicting subsequent dementia. Diabetes, hypertension, obesity, and smoking were the most common modifiable risk factors used as predictors in the models. We performed meta-analyses of C-statistics for one model (CAIDE), but the summary values were unreliable. Owing to lack of data, we were unable to meta-analyse the calibration measures of CAIDE. This review highlights the need for further robust external validations of multi-domain prognostic models for predicting future risk of dementia in middle-aged adults.\n\nGiven the provided abstract, please respond to the following query: \"What are prediction tools?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To develop prediction tools, researchers observe a group of people over years to see how many with such risk factors develop dementia. The tools assign a higher risk score to people who have a higher chance of getting dementia later in life, based on the presence or absence of risk factors in middle age."
}
] |
query-laysum
|
22891
|
[
{
"role": "user",
"content": "Abstract: Background\nCryptococcal meningitis is a severe fungal infection that occurs primarily in the setting of advanced immunodeficiency and remains a major cause of HIV-related deaths worldwide. The best induction therapy to reduce mortality from HIV-associated cryptococcal meningitis is unclear, particularly in resource-limited settings where management of drug-related toxicities associated with more potent antifungal drugs is a challenge.\nObjectives\nTo evaluate the best induction therapy to reduce mortality from HIV-associated cryptococcal meningitis; to compare side effect profiles of different therapies.\nSearch methods\nWe searched the Cochrane Infectious Diseases Group Specialized Register, CENTRAL, MEDLINE (PubMed), Embase (Ovid), LILACS (BIREME), African Index Medicus, and Index Medicus for the South-East Asia Region (IMSEAR) from 1 January 1980 to 9 July 2018. We also searched the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP), ClinicalTrials.gov, and the ISRCTN registry; and abstracts of select conferences published between 1 July 2014 and 9 July 2018.\nSelection criteria\nWe included randomized controlled trials that compared antifungal induction therapies used for the first episode of HIV-associated cryptococcal meningitis. Comparisons could include different individual or combination therapies, or the same antifungal therapies with differing durations of induction (less than two weeks or two or more weeks, the latter being the current standard of care). We included data regardless of age, geographical region, or drug dosage. We specified no language restriction.\nData collection and analysis\nTwo review authors independently screened titles and abstracts identified by the search strategy. We obtained the full texts of potentially eligible studies to assess eligibility and extracted data using standardized forms. The main outcomes included mortality at 2 weeks, 10 weeks, and 6 months; mean rate of cerebrospinal fluid fungal clearance in the first two weeks of treatment; and Division of AIDS (DAIDS) grade three or four laboratory events. Using random-effects models we determined pooled risk ratio (RR) and 95% confidence interval (CI) for dichotomous outcomes and mean differences (MD) and 95% CI for continuous outcomes. For the direct comparison of 10-week mortality, we assessed the certainty of the evidence using the GRADE approach. We performed a network meta-analysis using multivariate meta-regression. We modelled treatment differences (RR and 95% CI) and determined treatment rankings for two-week and 10-week mortality outcomes using surface under the cumulative ranking curve (SUCRA). We assessed transitivity by comparing distribution of effect modifiers between studies, local inconsistency through a node-splitting approach, and global inconsistency using design-by-treatment interaction modelling. For the network meta-analysis, we applied a modified GRADE approach for assessing the certainty of the evidence for 10-week mortality.\nMain results\nWe included 13 eligible studies that enrolled 2426 participants and compared 21 interventions. All studies were carried out in adults, and all but two studies were conducted in resource-limited settings, including 11 of 12 studies with 10-week mortality data.\nIn the direct pairwise comparisons evaluating 10-week mortality, one study from four sub-Saharan African countries contributed data to several key comparisons. 
At 10 weeks these data showed that those on the regimen of one-week amphotericin B deoxycholate (AmBd) and flucytosine (5FC) followed by fluconazole (FLU) on days 8 to 14 had lower mortality when compared to (i) two weeks of AmBd and 5FC (RR 0.62, 95% CI 0.42 to 0.93; 228 participants, 1 study), (ii) two weeks of AmBd and FLU (RR 0.58, 95% CI 0.39 to 0.86; 227 participants, 1 study), (iii) one week of AmBd with two weeks of FLU (RR 0.49, 95% CI 0.34 to 0.72; 224 participants, 1 study), and (iv) two weeks of 5FC and FLU (RR 0.68, 95% CI 0.47 to 0.99; 338 participants, 1 study). The evidence for each of these comparisons was of moderate certainty. For other outcomes, this shortened one-week AmBd and 5FC regimen had similar fungal clearance (MD 0.05 log10 CFU/mL/day, 95% CI -0.02 to 0.12; 186 participants, 1 study) as well as lower risk of grade three or four anaemia (RR 0.31, 95% CI 0.16 to 0.60; 228 participants, 1 study) compared to the two-week regimen of AmBd and 5FC.\nFor 10-week mortality, the comparison of two weeks of 5FC and FLU with two weeks of AmBd and 5FC (RR 0.92, 95% CI 0.69 to 1.23; 340 participants, 1 study) or two weeks of AmBd and FLU (RR 0.85, 95% CI 0.64 to 1.13; 339 participants, 1 study) did not show a difference in mortality, with moderate-certainty evidence for both comparisons.\nWhen two weeks of combination AmBd and 5FC was compared with AmBd alone, pooled data showed lower mortality at 10 weeks (RR 0.66, 95% CI 0.46 to 0.95; 231 participants, 2 studies, moderate-certainty evidence).\nWhen two weeks of AmBd and FLU was compared to AmBd alone, there was no difference in 10-week mortality in pooled data (RR 0.94, 95% CI 0.55 to 1.62; 371 participants, 3 studies, low-certainty evidence).\nOne week of AmBd and 5FC followed by FLU on days 8 to 14 was the best induction therapy regimen after comparison with 11 other regimens for 10-week mortality in the network meta-analysis, with an overall SUCRA ranking of 88%.\nAuthors' conclusions\nIn resource-limited settings, one-week AmBd- and 5FC-based therapy is probably superior to other regimens for treatment of HIV-associated cryptococcal meningitis. An all-oral regimen of two weeks 5FC and FLU may be an alternative in settings where AmBd is unavailable or intravenous therapy cannot be safely administered. We found no mortality benefit of combination two weeks AmBd and FLU compared to AmBd alone. Given the absence of data from studies in children, and limited data from high-income countries, our findings provide limited guidance for treatment in these patients and settings.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Shorter initial treatment with one week of combined amphotericin B deoxycholate and flucytosine probably results in lower risk of death than longer treatment with two weeks of combination amphotericin B deoxycholate and flucytosine that has traditionally been recommended in treatment guidelines. The shorter treatment likely results in similar clearance of the infection with less toxicity from the drugs used for treatment. Where amphotericin B deoxycholate cannot be given, two weeks of combined flucytosine with fluconazole is likely a good treatment option. Given the absence of data from studies in children, and limited data from high-income countries, our findings provide limited guidance for treatment in these patients and settings."
}
] |
query-laysum
|
22892
|
[
{
"role": "user",
"content": "Abstract: Background\nPatients with longstanding ulcerative colitis and colonic Crohn's disease have an increased risk of colorectal cancer (CRC) compared with the general population. This review assessed the evidence that endoscopic surveillance may prolong life by allowing earlier detection of CRC or its pre-cursor lesion, dysplasia, in patients with inflammatory bowel disease (IBD).\nObjectives\nTo assess the effectiveness of cancer surveillance programs for diagnosis of IBD-associated colorectal cancer and in reducing the mortality rate from colorectal cancer in patients with IBD.\nSearch methods\nWe searched MEDLINE, EMBASE, CENTRAL and clinical clinicaltrials.gov from inception to 19 September 2016. We also searched conference abstracts and reference lists to identify additional studies.\nSelection criteria\nPotentially relevant articles were reviewed independently and unblinded by two authors to determine eligibility. Randomised controlled trials (RCTs) or observational studies (cohort or case control) assessing any form of endoscopic surveillance aimed at early detection of CRC were considered for inclusion. Studies had to have a no surveillance comparison group to be eligible for inclusion.\nData collection and analysis\nEligible studies were reviewed in duplicate and the results of the primary research trials were independently extracted by two authors. The primary outcome was detection of CRC. Secondary outcomes included death from CRC, time to cancer detection, time to death and adverse events. Deaths from CRC were derived from life tables, survival curves or where possible, by calculating life tables from the data provided. The presence of significant heterogeneity among studies was tested by the chi-square test. Because this is a relatively insensitive test, a P value of less than 0.1 was considered statistically significant. Provided statistical heterogeneity was not present, the fixed effects model was used for the pooling of data. The 2x2 tables were combined into a summary test statistic using the pooled odds ratio (OR) and 95% confidence intervals as described by Cochrane and Mantel and Haenszel. The methodological quality of the included studies was assessed using the Newcastle-Ottawa scale for non-randomised studies The overall quality of the evidence supporting the primary and selected secondary outcomes was assessed using the GRADE criteria.\nMain results\nNo RCTs were identified. Five observational studies (N = 7199) met the inclusion criteria. The studies scored well on the Newcastle-Ottawa scale, but due to the nature of observational studies, a high risk of bias was assigned to all the studies. Three studies were pooled to assess the rate of cancer detected in the surveillance group compared to the non-surveillance group. The studies found a significantly higher rate of cancer detection in the non surveillance group compared to the surveillance group. CRC was detected in 1.83% (53/2895) of patients in the surveillance group compared to 3.17% (135/4256) of patients in the non-surveillance group (OR 0.58, 95% CI 0.42 to 0.80; P = 0.0009). Four studies were pooled to assess the death rate associated with CRC in patients who underwent surveillance compared to patients who did not undergo surveillance. There was a significantly lower death rate associated with CRC in the surveillance group compared to the non-surveillance group. 
Eight per cent (15/176) of patients in the surveillance group died from CRC compared to 22% (79/354) of patients in the non-surveillance group (OR 0.36, 95% CI 0.19 to 0.69, P=0.002). Data were pooled from two studies to examine the rate of early stage versus late stage colorectal cancer (Duke stages A & B compared to Duke stages C & D) in patients who underwent surveillance compared to patients who do not undergo surveillance. A significantly higher rate of early stage CRC (Duke A & B) was detected in the surveillance group compared to the non-surveillance group. Sixteen per cent (17/110) of patients in the surveillance group had early stage CRC compared to 8% (9/117) of patients in the non-surveillance group (OR 5.40, 95% CI 1.51 to 19.30; P = 0.009). A higher rate of late stage CRC (Duke C & D) was observed in the non-surveillance group compared to the surveillance group. Nine per cent (10/110) of patients in the surveillance group had late stage CRC compared to 16% (19/117) of patients in the non-surveillance group (OR 0.46, 95% CI 0.08 to 2.51; P = 0.37). A GRADE analysis indicated that the quality of the data was very low for all of these outcomes. The included studies did not report on the other pre-specified outcomes including time to cancer detection, time to death and adverse events.\nAuthors' conclusions\nThe current data suggest that colonoscopic surveillance in IBD may reduce the development of both CRC and the rate of CRC-associated death through early detection, although the quality of the evidence is very low. The detection of earlier stage CRC in the surveillance group may explain some of the survival benefit observed. RCTs assessing the efficacy of endoscopic surveillance in people with IBD are unlikely to be undertaken due to ethical considerations.\n\nGiven the provided abstract, please respond to the following query: \"What is colon cancer?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Long term inflammation associated with IBD leads to an increased risk of colon cancer compared to the risk in people without IBD. Colon cancer is a malignant tumour arising from the inner wall of the large intestine (the colon)."
}
] |
query-laysum
|
22893
|
[
{
"role": "user",
"content": "Abstract: Background\nFactor Xa inhibitors and vitamin K antagonists (VKAs) are now recommended in treatment guidelines for preventing stroke and systemic embolic events in people with atrial fibrillation (AF). This is an update of a Cochrane review previously published in 2013.\nObjectives\nTo assess the effectiveness and safety of treatment with factor Xa inhibitors versus VKAs for preventing cerebral or systemic embolic events in people with AF.\nSearch methods\nWe searched the trials registers of the Cochrane Stroke Group and the Cochrane Heart Group (September 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) (August 2017), MEDLINE (1950 to April 2017), and Embase (1980 to April 2017). We also contacted pharmaceutical companies, authors and sponsors of relevant published trials. We used outcome data from marketing authorisation applications of apixaban, edoxaban and rivaroxaban that were submitted to regulatory authorities in Europe and the USA.\nSelection criteria\nWe included randomised controlled trials (RCTs) that directly compared the effects of long-term treatment (lasting more than four weeks) with factor Xa inhibitors versus VKAs for preventing cerebral and systemic embolism in people with AF.\nData collection and analysis\nThe primary efficacy outcome was the composite endpoint of all strokes and systemic embolic events. Two review authors independently extracted data, and assessed the quality of the trials and the risk of bias. We calculated a weighted estimate of the typical treatment effect across trials using the odds ratio (OR) with 95% confidence interval (CI) by means of a fixed-effect model. In case of moderate or high heterogeneity of treatment effects, we used a random-effects model to compare the overall treatment effects. We also performed a pre-specified sensitivity analysis excluding any open-label studies.\nMain results\nWe included data from 67,688 participants randomised into 13 RCTs. The included trials directly compared dose-adjusted warfarin with either apixaban, betrixaban, darexaban, edoxaban, idraparinux, idrabiotaparinux, or rivaroxaban. The majority of the included data (approximately 90%) was from apixaban, edoxaban, and rivaroxaban.\nThe composite primary efficacy endpoint of all strokes (both ischaemic and haemorrhagic) and non-central nervous systemic embolic events was reported in all of the included studies. Treatment with a factor Xa inhibitor significantly decreased the number of strokes and systemic embolic events compared with dose-adjusted warfarin in participants with AF (OR 0.89, 95% CI 0.82 to 0.97; 13 studies; 67,477 participants; high-quality evidence).\nTreatment with a factor Xa inhibitor significantly reduced the number of major bleedings compared with warfarin (OR 0.78, 95% CI 0.73 to 0.84; 13 studies; 67,396 participants; moderate-quality evidence). There was, however, statistically significant and high heterogeneity (I2 = 83%). When we repeated this analysis using a random-effects model, it did not show a statistically significant decrease in the number of major bleedings (OR 0.88, 95% CI 0.66 to 1.17). A pre-specified sensitivity analysis excluding all open-label studies showed that treatment with a factor Xa inhibitor significantly reduced the number of major bleedings compared with warfarin (OR 0.75, 95% CI 0.69 to 0.81), but high heterogeneity was also observed in this analysis (I2 = 72%). 
The same sensitivity analysis using a random-effects model also showed a statistically significant decrease in the number of major bleedings in participants treated with factor Xa inhibitors (OR 0.76, 95% CI 0.60 to 0.96).\nTreatment with a factor Xa inhibitor significantly reduced the risk of intracranial haemorrhages (ICHs) compared with warfarin (OR 0.50, 95% CI 0.42 to 0.59; 12 studies; 66,259 participants; high-quality evidence). We observed moderate, but statistically significant heterogeneity (I2 = 55%). The pre-specified sensitivity analysis excluding open-label studies showed that treatment with a factor Xa inhibitor significantly reduced the number of ICHs compared with warfarin (OR 0.47, 95% CI 0.40 to 0.56), with low, non-statistically significant heterogeneity (I2 = 27%).\nTreatment with a factor Xa inhibitor also significantly reduced the number of all-cause deaths compared with warfarin (OR 0.89, 95% CI 0.83 to 0.95; 10 studies; 65,624 participants; moderate-quality evidence).\nAuthors' conclusions\nTreatment with factor Xa inhibitors significantly reduced the number of strokes and systemic embolic events compared with warfarin in people with AF. The absolute effect of factor Xa inhibitors compared with warfarin treatment was, however, rather small. Factor Xa inhibitors also reduced the number of ICHs, all-cause deaths and major bleedings compared with warfarin, although the evidence for a reduction in the latter is less robust.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that factor Xa inhibitors when compared with warfarin, which was used as comparator in all trials, reduced the number of strokes in people with atrial fibrillation. This reduction was, however, rather small. Factor Xa inhibitors also appeared to reduce the number of serious bleedings (including those into the brain) and the number of people dying from any cause compared with warfarin."
}
] |
query-laysum
|
22894
|
[
{
"role": "user",
"content": "Abstract: Background\nCataract surgery is the most common ambulatory incisional surgery performed in the USA. Cystoid macular edema (CME), the accumulation of fluid in the central retina due to leakage from dilated capillaries, is the most common cause of vision impairment following cataract surgery. Acute CME, defined as CME of less than four months' duration, often resolves spontaneously. CME that persists for four months or longer is termed chronic CME. Non-steroidal anti-inflammatory drugs (NSAIDs) have been used to treat CME. This update adds new evidence and analyses to the previously published review.\nObjectives\nTo examine the effectiveness of NSAIDs in the treatment of CME following cataract surgery.\nSearch methods\nWe searched the CENTRAL (2022, Issue 3); Ovid MEDLINE; Embase; PubMed; LILACS; mRCT (discontinued in 2014, last searched August 2011), ClinicalTrials.gov, and WHO ICTRP databases. We did not use any date or language restrictions in the electronic search for trials. We last searched the electronic databases on 20 March 2022.\nSelection criteria\nWe included randomized controlled trials evaluating the effects of NSAIDs for CME following cataract surgery.\nData collection and analysis\nTwo review authors independently screened all titles and abstracts, reviewed full-text publications against eligibility criteria, independently extracted data from newly included trials and assessed risk of bias for each included trial. We contacted trial authors for clarification or to request missing information. We provided a narrative synthesis of all included trials and their results. For continuous and dichotomous outcomes, we separately performed pooled analysis and reported mean difference (MD) and risk ratio (RR) as well as the associated 95% confidence interval (CI) whenever feasible. Two review authors independently graded the overall certainty of the evidence for each outcome using the GRADE approach.\nMain results\nWe included nine trials with a total of 390 participants (393 eyes). Study participants' mean age was 72.2 years (interquartile range [IQR] 68.8 to 73.6) and 72% were women (IQR 69% to 74%). Three trials included participants with acute CME, and four included participants with chronic CME; the remaining two trials enrolled both participants with acute and chronic CME or participants with unknown CME duration. We assessed trials as having unclear (33%) or high risk of bias (67%).\nVisual improvement of two or more lines at the end of treatment\nData from one trial in participants with acute CME show no treatment effect of topical ketorolac compared to placebo (RR 2.00, 95% CI 0.46 to 8.76; 22 participants). Data from a three-arm trial in participants with acute CME demonstrate that, when compared with topical prednisolone, topical ketorolac (RR 1.33, 95% CI 0.58 to 3.07; 17 participants) or topical ketorolac and prednisolone combination therapy (RR 1.78, 95% CI 0.86 to 3.69; 17 participants) may have little or no effect on visual improvement. Results of subgroup analysis from two studies in participants with chronic CME suggest that, after treatment for 90 days or longer, NSAIDs may increase participants' likelihood of visual improvement by 1.87 fold (RR 2.87, 95% CI 1.58 to 5.22; I2 = 33%; 2 trials, 121 participants) relative to placebo. However, there was no evidence of treatment effects in the subgroup with two months of treatment or less (RR 0.72, 95% CI 0.30 to 1.73; P = 0.19, I2 = 41%; 2 trials, 34 participants). 
Overall, this evidence is very low certainty.\nA single-study estimate in patients with mixed CME indicates that topical diclofenac may increase the likelihood of visual improvement by 40% when compared to topical ketorolac (RR 1.40, 95% CI 1.02 to 1.94; 68 participants). However, the same trial reported no difference between the groups in mean final visual acuity in Snellen lines (MD 0.40, 95% CI −0.93 to 1.73). A three-arm trial in patients with mixed CME reporting visual changes in ETDRS letters in comparisons between ketorolac and diclofenac (34 participants) or bromfenac (34 participants) suggests no evidence of effects. Overall, NSAIDs may slightly improve visual acuity in participants with mixed CME but the evidence is very uncertain.\nPersistence of improvement of vision one month after discontinuation of treatment\nOne trial of participants with chronic CME tested oral indomethacin (RR 0.40, 95% CI 0.10 to 1.60; 20 participants) and the other compared topical ketorolac to placebo (RR 4.00, 95% CI 0.51 to 31.1; 26 participants). While there is no evidence of treatment effects, evidence suggests substantial between-group heterogeneity (P = 0.07, I2 = 69.9%; very low-certainty evidence). None of the trials in patients with acute or mixed CME reported this outcome.\nProportion of participants with improvement in leakage on fundus fluorescein angiography\nOne three-arm trial in participants with acute CME shows that, when compared with topical prednisolone, there is no treatment benefit of topical ketorolac (RR 1.11, 95% CI 0.45 to 2.75; 17 participants) or topical ketorolac and topical prednisolone combination therapy (RR 1.56, 95% CI 0.72 to 3.38; 17 participants). This evidence is very low certainty.\nThe combined estimate from two trials in participants with chronic CME indicates NSAIDs have little to no effect over placebo on improving leakage (RR 1.93, 95% CI 0.62 to 6.02; 40 participants; very low-certainty evidence). Neither of the trials in patients with mixed CME reported this outcome.\nProportion of participants with improved contrast sensitivity\nVery low-certainty evidence from one trial in participants with acute CME shows no treatment benefit of ketorolac (RR 1.11, 95% CI 0.45 to 2.75; 17 participants) or ketorolac and prednisolone combination therapy (RR 1.78, 95% CI 0.86 to 3.69; 17 participants) compared with topical prednisolone. None of the trials in patients with chronic or mixed CME reported this outcome.\nProportion of participants with improved central macular thickness on optical coherence tomography; measures of quality of life\nNo included trial reported these outcomes.\nAdverse effects\nMost trials observed no differences in ocular adverse events, such as corneal toxicity or elevated intraocular pressure, between comparison groups.\nAuthors' conclusions\nEvidence on effects of NSAIDs in patients with CME is very uncertain and further investigation is warranted. Our findings are limited by small sample sizes, and heterogeneity in interventions, assessments, and reporting of clinically important outcomes.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that randomly assigned adults with CME after cataract surgery to be treated with NSAIDs, alone or with other medications. Studies compared NSAIDs with placebo (a dummy medicine with no active ingredients) or other active treatment (steroid or another NSAID). We compared and summarized the results of the studies and rated our confidence in the evidence, based on factors such as study methods and sizes."
}
] |
query-laysum
|
22895
|
[
{
"role": "user",
"content": "Abstract: Background\nApproximately 30% of hospitalised older adults experience hospital-associated functional decline. Exercise interventions that promote in-hospital activity may prevent deconditioning and thereby maintain physical function during hospitalisation. This is an update of a Cochrane Review first published in 2007.\nObjectives\nTo evaluate the benefits and harms of exercise interventions for acutely hospitalised older medical inpatients on functional ability, quality of life (QoL), participant global assessment of success and adverse events compared to usual care or a sham-control intervention.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was May 2021.\nSelection criteria\nWe included randomised or quasi-randomised controlled trials evaluating an in-hospital exercise intervention in people aged 65 years or older admitted to hospital with a general medical condition. We excluded people admitted for elective reasons or surgery.\nData collection and analysis\nWe used standard Cochrane methods. Our major outcomes were 1. independence with activities of daily living; 2. functional mobility; 3. new incidence of delirium during hospitalisation; 4. QoL; 5. number of falls during hospitalisation; 6. medical deterioration during hospitalisation and 7. participant global assessment of success. Our minor outcomes were 8. death during hospitalisation; 9. musculoskeletal injuries during hospitalisation; 10. hospital length of stay; 11. new institutionalisation at hospital discharge; 12. hospital readmission and 13. walking performance. We used GRADE to assess certainty of evidence for each major outcome.\nWe categorised exercise interventions as: rehabilitation-related activities (interventions designed to increase physical activity or functional recovery, but did not follow a specified exercise protocol); structured exercise (interventions that included an exercise intervention protocol but did not include progressive resistance training); and progressive resistance exercise (interventions that included an element of progressive resistance training).\nMain results\nWe included 24 studies (nine rehabilitation-related activity interventions, six structured exercise interventions and nine progressive resistance exercise interventions) with 7511 participants. All studies compared exercise interventions to usual care; two studies, in addition to usual care, used sham interventions. Mean ages ranged from 73 to 88 years, and 58% of participants were women.\nSeveral studies were at high risk of bias. The most common domain assessed at high risk of bias was measurement of the outcome, and five studies (21%) were at high risk of bias arising from the randomisation process.\nExercise may have no clinically important effect on independence in activities of daily living at discharge from hospital compared to controls (16 studies, 5174 participants; low-certainty evidence). Five studies used the Barthel Index (scale: 0 to 100, higher scores representing greater independence). Mean scores at discharge in the control groups ranged from 42 to 96 points, and independence in activities of daily living was 1.8 points better (0.43 worse to 4.12 better) with exercise compared to controls. The minimally clinical important difference (MCID) is estimated to be 11 points.\nWe are uncertain regarding the effect of exercise on functional mobility at discharge from the hospital compared to controls (8 studies, 2369 participants; very low-certainty evidence). 
Three studies used the Short Physical Performance Battery (SPPB) (scale: 0 to 12, higher scores representing better function) to measure functional mobility. Mean scores at discharge in the control groups ranged from 3.7 to 4.9 points on the SPPB, and the estimated effect of the exercise interventions was 0.78 points better (0.02 worse to 1.57 better). A change of 1 point on the SPPB represents an MCID.\nWe are uncertain regarding the effect of exercise on the incidence of delirium during hospitalisation compared to controls (7 trials, 2088 participants; very low-certainty evidence). The incidence of delirium during hospitalisation was 88/1091 (81 per 1000) in the control group compared with 70/997 (73 per 1000; range 47 to 114) in the exercise group (RR 0.90, 95% CI 0.58 to 1.41).\nExercise interventions may result in a small clinically unimportant improvement in QoL at discharge from the hospital compared to controls (4 studies, 875 participants; low-certainty evidence). Mean QoL on the EuroQol 5 Dimensions (EQ-5D) visual analogue scale (VAS) (scale: 0 to 100, higher scores representing better QoL) ranged between 48.9 and 64.7 in the control group at discharge from the hospital, and QoL was 6.04 points better (0.9 better to 11.18 better) with exercise. A change of 10 points on the EQ-5D VAS represents an MCID.\nNo studies measured participant global assessment of success.\nExercise interventions did not affect the risk of falls during hospitalisation (moderate-certainty evidence). The incidence of falls was 31/899 (34 per 1000) in the control group compared with 31/888 (34 per 1000; range 20 to 57) in the exercise group (RR 0.99, 95% CI 0.59 to 1.65).\nWe are uncertain regarding the effect of exercise on the incidence of medical deterioration during hospitalisation (very low-certainty evidence). The incidence of medical deterioration in the control group was 101/1417 (71 per 1000) compared with 96/1313 (73 per 1000; range 44 to 120) in the exercise group (RR 1.02, 95% CI 0.62 to 1.68).\nSubgroup analyses by different intervention categories and by the use of a sham intervention were not meaningfully different from the main analyses.\nAuthors' conclusions\nExercise may make little difference to independence in activities of daily living or QoL, but probably does not result in more falls in older medical inpatients. We are uncertain about the effect of exercise on functional mobility, incidence of delirium and medical deterioration. Certainty of evidence was limited by risk of bias and inconsistency. Future primary research on the effect of exercise on acute hospitalisation could focus on more consistent and uniform reporting of participant's characteristics including their baseline level of functional ability, as well as exercise dose, intensity and adherence that may provide an insight into the reasons for the observed inconsistencies in findings.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 24 studies with 7511 participants, of whom 58% were women. The average ages of participants in the studies ranged from 73 to 88 years. Thirteen studies were from Europe, six from Oceania, four from North America and one from South America. Participants were admitted to hospital with a wide range of illnesses or medical conditions such as infections, heart failure, kidney failure, bleeding in the stomach or gut, and vertigo.\nThe types of exercise treatments and the amount of exercise that people were asked to do varied considerably. Nine studies classified the exercise treatment as rehabilitation-related activities (treatments designed to increase physical activity, but that did not follow a specific exercise programme). Six studies consisted of structured exercise (a specific exercise programme that every person in the treatment group performed). The exercise may have varied depending on the individual person's ability, but the treatment did not involve progressive strength training. With progressive strength training people exercise their muscles against some type of resistance that is progressively increased as their strength improves. Nine studies provided an element of progressive resistance training."
}
] |
query-laysum
|
22896
|
[
{
"role": "user",
"content": "Abstract: Background\nPostoperative pancreatic fistula is a common and serious complication following pancreaticoduodenectomy. Duct-to-mucosa pancreaticojejunostomy has been used in many centers to reconstruct pancreatic digestive continuity following pancreatoduodenectomy, however, its efficacy and safety are uncertain.\nObjectives\nTo assess the benefits and harms of duct-to-mucosa pancreaticojejunostomy versus other types of pancreaticojejunostomy for the reconstruction of pancreatic digestive continuity in participants undergoing pancreaticoduodenectomy, and to compare the effects of different duct-to-mucosa pancreaticojejunostomy techniques.\nSearch methods\nWe searched the Cochrane Library (2021, Issue 1), MEDLINE (1966 to 9 January 2021), Embase (1988 to 9 January 2021), and Science Citation Index Expanded (1982 to 9 January 2021).\nSelection criteria\nWe included all randomized controlled trials (RCTs) that compared duct-to-mucosa pancreaticojejunostomy with other types of pancreaticojejunostomy (e.g. invagination pancreaticojejunostomy, binding pancreaticojejunostomy) in participants undergoing pancreaticoduodenectomy. We also included RCTs that compared different types of duct-to-mucosa pancreaticojejunostomy in participants undergoing pancreaticoduodenectomy.\nData collection and analysis\nTwo review authors independently identified the studies for inclusion, collected the data, and assessed the risk of bias. We performed the meta-analyses using Review Manager 5. We calculated the risk ratio (RR) for dichotomous outcomes and the mean difference (MD) for continuous outcomes with 95% confidence intervals (CIs). For all analyses, we used the random-effects model. We used the Cochrane RoB 1 tool to assess the risk of bias. We used GRADE to assess the certainty of the evidence for all outcomes.\nMain results\nWe included 11 RCTs involving a total of 1696 participants in the review. One RCT was a dual-center study; the other 10 RCTs were single-center studies conducted in: China (4 studies); Japan (2 studies); USA (1 study); Egypt (1 study); Germany (1 study); India (1 study); and Italy (1 study). The mean age of participants ranged from 54 to 68 years. All RCTs were at high risk of bias.\nDuct-to-mucosa versus any other type of pancreaticojejunostomy\nWe included 10 RCTs involving 1472 participants comparing duct-to-mucosa pancreaticojejunostomy with invagination pancreaticojejunostomy: 732 participants were randomized to the duct-to-mucosa group, and 740 participants were randomized to the invagination group after pancreaticoduodenectomy. Comparing the two techniques, the evidence is very uncertain for the rate of postoperative pancreatic fistula (grade B or C; RR 1.45, 95% CI 0.64 to 3.26; 7 studies, 1122 participants; very low-certainty evidence), postoperative mortality (RR 0.77, 95% CI 0.39 to 1.49; 10 studies, 1472 participants; very low-certainty evidence), rate of surgical reintervention (RR 1.12, 95% CI 0.65 to 1.95; 10 studies, 1472 participants; very low-certainty evidence), rate of postoperative bleeding (RR 0.85, 95% CI 0.51 to 1.42; 9 studies, 1275 participants; very low-certainty evidence), overall rate of surgical complications (RR 1.12, 95% CI 0.92 to 1.36; 5 studies, 750 participants; very low-certainty evidence), and length of hospital stay (MD -0.41 days, 95% CI -1.87 to 1.04; 4 studies, 658 participants; very low-certainty evidence). 
The studies did not report adverse events or quality of life outcomes.\nOne type of duct-to-mucosa pancreaticojejunostomy versus a different type of duct-to-mucosa pancreaticojejunostomy\nWe included one RCT involving 224 participants comparing duct-to-mucosa pancreaticojejunostomy using the modified Blumgart technique with duct-to-mucosa pancreaticojejunostomy using the traditional interrupted technique: 112 participants were randomized to the modified Blumgart group, and 112 participants were randomized to the traditional interrupted group after pancreaticoduodenectomy. Comparing the two techniques, the evidence is very uncertain for the rate of postoperative pancreatic fistula (grade B or C; RR 1.51, 95% CI 0.61 to 3.75; 1 study, 210 participants; very low-certainty evidence), postoperative mortality (there were no deaths in either group; 1 study, 210 participants; very low-certainty evidence), rate of surgical reintervention (RR 1.93, 95% CI 0.18 to 20.91; 1 study, 210 participants; very low-certainty evidence), rate of postoperative bleeding (RR 2.89, 95% CI 0.12 to 70.11; 1 study, 210 participants; very low-certainty evidence), overall rate of surgical complications (RR 1.10, 95% CI 0.80 to 1.51; 1 study, 210 participants; very low-certainty evidence), and length of hospital stay (15 days versus 15 days; 1 study, 210 participants; very low-certainty evidence). The study did not report adverse events or quality of life outcomes.\nAuthors' conclusions\nThe evidence is very uncertain about the effects of duct-to-mucosa pancreaticojejunostomy compared to invagination pancreaticojejunostomy on any of the outcomes, including rate of postoperative pancreatic fistula (grade B or C), postoperative mortality, rate of surgical reintervention, rate of postoperative bleeding, overall rate of surgical complications, and length of hospital stay. The evidence is also very uncertain whether duct-to-mucosa pancreaticojejunostomy using the modified Blumgart technique is superior, equivalent or inferior to duct-to-mucosa pancreaticojejunostomy using the traditional interrupted technique. None of the studies reported adverse events or quality of life outcomes.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date to January 2021."
}
] |
query-laysum
|
22897
|
[
{
"role": "user",
"content": "Abstract: Background\nPitavastatin is the newest statin on the market, and the dose-related magnitude of effect of pitavastatin on blood lipids is not known.\nObjectives\nPrimary objective\nTo quantify the effects of various doses of pitavastatin on the surrogate markers: LDL cholesterol, total cholesterol, HDL cholesterol and triglycerides in participants with and without cardiovascular disease.\nTo compare the effect of pitavastatin on surrogate markers with other statins.\nSecondary objectives\nTo quantify the effect of various doses of pitavastatin on withdrawals due to adverse effects.\nSearch methods\nThe Cochrane Hypertension Information Specialist searched the following databases for trials up to March 2019: the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 2, 2019), MEDLINE (from 1946), Embase (from 1974), the World Health Organization International Clinical Trials Registry Platform, and ClinicalTrials.gov. We also contacted authors of relevant papers regarding further published and unpublished work. The searches had no language restrictions.\nSelection criteria\nRCT and controlled before-and-after studies evaluating the dose response of different fixed doses of pitavastatin on blood lipids over a duration of three to 12 weeks in participants of any age with and without cardiovascular disease.\nData collection and analysis\nTwo review authors independently assessed eligibility criteria for studies to be included, and extracted data. We entered data from RCT and controlled before-and-after studies into Review Manager 5 as continuous and generic inverse variance data, respectively. Withdrawals due to adverse effects (WDAE) information was collected from the RCTs. We assessed all included trials using the Cochrane 'Risk of bias' tool under the categories of allocation (selection bias), blinding (performance bias and detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), and other potential sources of bias.\nMain results\nForty-seven studies (five RCTs and 42 before-and-after studies) evaluated the dose-related efficacy of pitavastatin in 5436 participants. The participants were of any age with and without cardiovascular disease, and pitavastatin effects were studied within a treatment period of three to 12 weeks. Log dose-response data over doses of 1 mg to 16 mg revealed strong linear dose-related effects on blood total cholesterol and LDL cholesterol and triglycerides. There was no dose-related effect of pitavastatin on blood HDL cholesterol, which was increased by 4% on average by pitavastatin. Pitavastatin 1 mg/day to 16 mg/day reduced LDL cholesterol by 33.3% to 54.7%, total cholesterol by 23.3% to 39.0% and triglycerides by 13.0% to 28.1%. For every two-fold dose increase, there was a 5.35% (95% CI 3.32 to 7.38) decrease in blood LDL cholesterol, a 3.93% (95% CI 2.35 to 5.50) decrease in blood total cholesterol and a 3.76% (95% CI 1.03 to 6.48) decrease in blood triglycerides. The certainty of evidence for these effects was judged to be high. When compared to other statins for its effect to reduce LDL cholesterol, pitavastatin is about 6-fold more potent than atorvastatin, 1.7-fold more potent than rosuvastatin, 77-fold more potent than fluvastatin and 3.3-fold less potent than cerivastatin. 
In the placebo group, none of 109 participants withdrew due to an adverse effect; across all doses of pitavastatin, three of 262 participants withdrew due to an adverse effect.\nAuthors' conclusions\nPitavastatin lowers blood total cholesterol, LDL cholesterol and triglycerides in a dose-dependent linear fashion. Based on the effect on LDL cholesterol, pitavastatin is about 6-fold more potent than atorvastatin, 1.7-fold more potent than rosuvastatin, 77-fold more potent than fluvastatin and 3.3-fold less potent than cerivastatin. There were not enough data to determine the risk of withdrawal due to adverse effects with pitavastatin.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked at research up to March 2019."
}
] |
query-laysum
|
22898
|
[
{
"role": "user",
"content": "Abstract: Background\nExternal root resorption is a pathological process, which tends to occur following a wide range of mechanical or chemical stimuli such as infection, pressure, trauma or orthodontic tooth movement. Although it is predominantly detected by radiography, in some cases root resorption may be identified by clinical symptoms such as pain, swelling and mobility of the tooth. Treatment alternatives are case-dependent and aim to address the cause of the resorption and aid the regeneration of the resorptive lesion.\nObjectives\nTo evaluate the effectiveness of any interventions that can be used in the management of external root resorption in permanent teeth.\nSearch methods\nThe following electronic databases were searched: the Cochrane Oral Health Group Trials Register (to 14 October 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, 2015, Issue 9), MEDLINE via OVID (1946 to 14 October 2015) and EMBASE via OVID (1980 to 14 October 2015). We searched the US National Institutes of Health Trials Register (http://clinicaltrials.gov) and the WHO Clinical Trials Registry Platform for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials of permanent teeth with any type of external root resorption, which has been confirmed by clinical and radiological examination, comparing one type of intervention (root canal medications and canal filling, splinting or extraction of teeth or the surgical removal of any relevant pathology) with another, or with placebo or no treatment.\nData collection and analysis\nTwo review authors screened search records independently. Full papers were obtained for potentially relevant trials. If data had been extracted, the statistical guidelines set out in the Cochrane Handbook would have been followed.\nMain results\nNo randomised controlled trials that met the inclusion criteria were identified. However, we identified one ongoing study that is potentially relevant to this review and will be assessed when it is published.\nAuthors' conclusions\nWe were unable to identify any reports of randomised controlled trials regarding the efficacy of different interventions for the management of external root resorption. In view of the lack of reliable evidence on this topic, clinicians must decide on the most appropriate means of managing this condition according to their clinical experience with regard to patient-related factors. There is a need for well designed and conducted clinical trials on this topic, which conform to the CONSORT statement (www.consort-statement.org/).\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review has been conducted to assess different interventions for managing the reabsorption of the tooth root."
}
] |
query-laysum
|
22899
|
[
{
"role": "user",
"content": "Abstract: Background\nPlatinum-based therapy, including cisplatin, carboplatin, oxaliplatin or a combination of these, is used to treat a variety of paediatric malignancies. Unfortunately, one of the most important adverse effects is the occurrence of hearing loss or ototoxicity. There is a wide variation in the reported prevalence of platinum-induced ototoxicity and the associated risk factors. More insight into the prevalence of and risk factors for platinum-induced hearing loss is essential in order to develop less ototoxic treatment protocols for the future treatment of children with cancer and to develop adequate follow-up protocols for childhood cancer survivors treated with platinum-based therapy.\nObjectives\nTo evaluate the existing evidence on the association between childhood cancer treatment including platinum analogues and the occurrence of hearing loss.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2015, Issue 8), MEDLINE (PubMed) (1945 to 23 September 2015) and EMBASE (Ovid) (1980 to 23 September 2015). In addition, we searched reference lists of relevant articles and the conference proceedings of the International Society for Paediatric Oncology (2008 to 2014), the American Society of Pediatric Hematology/Oncology (2008 to 2015) and the International Conference on Long-Term Complications of Treatment of Children and Adolescents for Cancer (2010 to 2015). Experts in the field provided information on additional studies.\nSelection criteria\nAll study designs, except case reports, case series (i.e. a description of non-consecutive participants) and studies including fewer than 100 participants treated with platinum-based therapy who had an ototoxicity assessment, examining the association between childhood cancer treatment including platinum analogues and the occurrence of hearing loss.\nData collection and analysis\nTwo review authors independently performed the study selection. One review author performed data extraction and risk of bias assessment, which was checked by another review author.\nMain results\nWe identified 13 eligible cohort studies including 2837 participants with a hearing test after treatment with a platinum analogue for different types of childhood cancers. All studies had methodological limitations, with regard to both internal (risk of bias) and external validity. Participants were treated with cisplatin, carboplatin or both, in varying doses. The reported prevalence of hearing loss varied considerably between 0% and 90.1%; none of the studies provided data on tinnitus. Three studies reported a prevalence of 0%, but none of these studies provided a definition for hearing loss and there might be substantial or even complete overlap in included participants between these three studies. When only studies that did provide a definition for hearing loss were included, the prevalence of hearing loss still varied widely between 1.7% and 90.1%. All studies were very heterogeneous with regard to, for example, definitions of hearing loss, used diagnostic tests, participant characteristics, (prior) anti-tumour treatment, other ototoxic drugs and length of follow-up. Therefore, pooling of results was not possible.\nOnly two studies included a control group of people who had not received platinum treatment. In one study, the prevalence of hearing loss was 67.1% (95% confidence interval (CI) 59.3% to 74.1%) in platinum-treated participants, while in the control participants it was 7.4% (95% CI 6.2% to 8.8%). 
However, hearing loss was detected by screening in survivors treated with platinum analogues and by clinical presentation in control participants. It is uncertain what the effect of this difference in follow-up/diagnostic testing was. In the other study, the prevalence of hearing loss was 20.1% (95% CI 17.4% to 23.2%) in platinum-treated participants and 0.4% (95% CI 0.12% to 1.6%) in control participants. As neither study was a randomized controlled trial or controlled clinical trial, the calculation of a risk ratio was not feasible as it is very likely that both groups differed more than only the platinum treatment.\nOnly two studies evaluated possible risk factors using multivariable analysis. One study identified a significantly higher risk of hearing loss in people treated with cisplatin 400 mg/m2 plus carboplatin 1700 mg/m2 as compared to treatment with cisplatin 400 mg/m2 or less, irrespective of the definition of hearing loss. They also identified a significantly higher risk of hearing loss in people treated with non-anthracycline aminoglycosides antibiotics (using a surrogate marker) as compared to people not treated with them, for three out of four definitions of hearing loss. The other study reported that age at treatment (odds ratio less than 1 for each single-unit increase) and single maximum cisplatin dose (odds ratio greater than 1 for each single-unit increase) were significant predictors for hearing loss, while gender was not.\nAuthors' conclusions\nThis systematic review shows that children treated with platinum analogues are at risk for developing hearing loss, but the exact prevalence and risk factors remain unclear. There were no data available for tinnitus. Based on the currently available evidence we can only advise that children treated with platinum analogues are screened for ototoxicity in order to make it possible to diagnose hearing loss early and to take appropriate measures. However, we are unable to give recommendations for specific follow-up protocols including frequency of testing. Counselling regarding the prevention of noise pollution can be considered, such as the use of noise-limiting equipment, avoiding careers with excess noise and ototoxic medication. Before definitive conclusions on the prevalence and associated risk factors of platinum-induced ototoxicity can be made, more high-quality research is needed. Accurate and transparent reporting of findings will make it possible for readers to appraise the results of these studies critically.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence on the association between childhood cancer treatment including platinum analogues and the occurrence of hearing loss."
}
] |
query-laysum
|
22900
|
[
{
"role": "user",
"content": "Abstract: Background\nSkeletal muscle cramps are common and often occur in association with pregnancy, advanced age, exercise or motor neuron disorders (such as amyotrophic lateral sclerosis). Typically, such cramps have no obvious underlying pathology, and so are termed idiopathic. Magnesium supplements are marketed for the prophylaxis of cramps but the efficacy of magnesium for this purpose remains unclear.\nThis is an update of a Cochrane Review first published in 2012, and performed to identify and incorporate more recent studies.\nObjectives\nTo assess the effects of magnesium supplementation compared to no treatment, placebo control or other cramp therapies in people with skeletal muscle cramps.\nSearch methods\nOn 9 September 2019, we searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, LILACS, CINAHL Plus, AMED, and SPORTDiscus. We also searched WHO-ICTRP and ClinicalTrials.gov for registered trials that might be ongoing or unpublished, and ISI Web of Science for studies citing the studies included in this review.\nSelection criteria\nRandomized controlled trials (RCTs) of magnesium supplementation (in any form) to prevent skeletal muscle cramps in any patient group (i.e. all clinical presentations of cramp). We considered comparisons of magnesium with no treatment, placebo control, or other therapy.\nData collection and analysis\nTwo review authors independently selected trials for inclusion and extracted data. Two review authors assessed risk of bias. We attempted to contact all study authors when questions arose and obtained participant-level data for four of the included trials, one of which was unpublished. We collected all data on adverse effects from the included RCTs.\nMain results\nWe identified 11 trials (nine parallel-group, two cross-over) enrolling a total of 735 individuals, amongst whom 118 cross-over participants additionally served as their own controls. Five trials enrolled women with pregnancy-associated leg cramps (408 participants) and five trials enrolled people with idiopathic cramps (271 participants, with 118 additionally crossed over to control). Another study enrolled 29 people with liver cirrhosis, only some of whom suffered muscle cramps. All trials provided magnesium as an oral supplement, except for one trial which provided magnesium as a series of slow intravenous infusions. Nine trials compared magnesium to placebo, one trial compared magnesium to no treatment, calcium carbonate or vitamin B, and another trial compared magnesium to vitamin E or calcium. We judged the single trial in people with liver cirrhosis and all five trials in participants with pregnancy-associated leg cramps to be at high risk of bias. In contrast, we rated the risk of bias high in only one of five trials in participants with idiopathic rest cramps.\nFor idiopathic cramps, largely in older adults (mean age 61.6 to 69.3 years) presumed to have nocturnal leg cramps (the commonest presentation), differences in measures of cramp frequency when comparing magnesium to placebo were small, not statistically significant, and showed minimal heterogeneity (I² = 0% to 12%). 
This includes the primary endpoint, percentage change from baseline in the number of cramps per week at four weeks (mean difference (MD) −9.59%, 95% confidence interval (CI) −23.14% to 3.97%; 3 studies, 177 participants; moderate-certainty evidence); and the difference in the number of cramps per week at four weeks (MD −0.18 cramps/week, 95% CI −0.84 to 0.49; 5 studies, 307 participants; moderate-certainty evidence). The percentage of individuals experiencing a 25% or better reduction in cramp rate from baseline was also no different (RR 1.04, 95% CI 0.84 to 1.29; 3 studies, 177 participants; high-certainty evidence). Similarly, no statistically significant difference was found at four weeks in measures of cramp intensity or cramp duration. This includes the number of participants rating their cramps as moderate or severe at four weeks (RR 1.33, 95% CI 0.81 to 2.21; 2 studies, 91 participants; moderate-certainty evidence); and the percentage of participants with the majority of cramp durations of one minute or more at four weeks (RR 1.83, 95% CI 0.74 to 4.53, 1 study, 46 participants; low-certainty evidence).\nWe were unable to perform meta-analysis for trials of pregnancy-associated leg cramps. The single study comparing magnesium to no treatment failed to find statistically significant benefit on a three-point ordinal scale of overall treatment efficacy. Of the three trials comparing magnesium to placebo, one found no benefit on frequency or intensity measures, another found benefit for both, and a third reported inconsistent results for frequency that could not be reconciled. The single study in people with liver cirrhosis was small and had limited reporting of cramps, but found no difference in terms of cramp frequency or cramp intensity.\nOur analysis of adverse events pooled all studies, regardless of the setting in which cramps occurred. Major adverse events (occurring in 2 out of 72 magnesium recipients and 3 out of 68 placebo recipients), and withdrawals due to adverse events, were not significantly different from placebo. However, in the four studies for which it could be determined, more participants experienced minor adverse events in the magnesium group than in the placebo group (RR 1.51, 95% CI 0.98 to 2.33; 4 studies, 254 participants; low-certainty evidence). Overall, oral magnesium was associated with mostly gastrointestinal adverse events (e.g. diarrhoea), experienced by 11% (10% in control) to 37% (14% in control) of participants.\nAuthors' conclusions\nIt is unlikely that magnesium supplementation provides clinically meaningful cramp prophylaxis to older adults experiencing skeletal muscle cramps. In contrast, for those experiencing pregnancy-associated rest cramps the literature is conflicting and further research in this population is needed. We found no RCTs evaluating magnesium for exercise-associated muscle cramps or disease-state-associated muscle cramps (for example amyotrophic lateral sclerosis/motor neuron disease) other than a single small (inconclusive) study in people with liver cirrhosis, only some of whom suffered cramps.\n\nGiven the provided abstract, please respond to the following query: \"Key results and certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The combined results of five seemingly reliable studies suggest, with moderate-certainty, that magnesium is unlikely to reduce the frequency or severity of muscle cramps in older adults. In contrast, the five studies in pregnant women had important limitations to their reliability (both in study design and reporting of results), did not consistently show benefit, and could not be combined. As a result we are very uncertain as to whether pregnant women experiencing muscle cramps would benefit from magnesium. The single study in people with liver cirrhosis reported no difference in cramp frequency or intensity but was too small for conclusions to be drawn.\nMore research on magnesium in pregnant women is needed. The same is true for those who suffer cramps associated with medical conditions, or while exercising. However older adult cramp sufferers appear unlikely to benefit from this therapy. Major side effects were infrequent and participants withdrew from the study at similar rates when given magnesium or placebo. However minor side effects, mostly diarrhoea (as would be expected from magnesium salts) and nausea, were common and affected roughly 11% (10% in control) to 37% (14% in control) of participants.\nThe review is up to date to September 2019."
}
] |
query-laysum
|
22901
|
[
{
"role": "user",
"content": "Abstract: Background\nUpper gastrointestinal bleeding is one of the most frequent causes of morbidity and mortality in the course of liver cirrhosis. People with liver disease frequently have haemostatic abnormalities such as hyperfibrinolysis. Therefore, antifibrinolytic amino acids have been proposed to be used as supplementary interventions alongside any of the primary treatments for upper gastrointestinal bleeding in people with liver diseases. This is an update of this Cochrane review.\nObjectives\nTo assess the beneficial and harmful effects of antifibrinolytic amino acids for upper gastrointestinal bleeding in people with acute or chronic liver disease.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Controlled Trials Register (February 2015), Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 2 of 12, 2015), MEDLINE (Ovid SP) (1946 to February 2015), EMBASE (Ovid SP) (1974 to February 2015), Science Citation Index EXPANDED (1900 to February 2015), LILACS (1982 to February 2015), World Health Organization Clinical Trials Search Portal (accessed 26 February 2015), and the metaRegister of Controlled Trials (accessed 26 February 2015). We scrutinised the reference lists of the retrieved publications.\nSelection criteria\nRandomised clinical trials irrespective of blinding, language, or publication status for assessment of benefits and harms. Observational studies for assessment of harms.\nData collection and analysis\nWe planned to summarise data from randomised clinical trials using standard Cochrane methodologies and assessed according to the GRADE approach.\nMain results\nWe found no randomised clinical trials assessing antifibrinolytic amino acids for treating upper gastrointestinal bleeding in people with acute or chronic liver disease. We did not identify quasi-randomised, historically controlled, or observational studies in which we could assess harms.\nAuthors' conclusions\nThis updated Cochrane review identified no randomised clinical trials assessing the benefits and harms of antifibrinolytic amino acids for upper gastrointestinal bleeding in people with acute or chronic liver disease. The benefits and harms of antifibrinolytic amino acids need to be tested in randomised clinical trials. Unless randomised clinical trials are conducted to assess the trade-off between benefits and harms, we cannot recommend or refute antifibrinolytic amino acids for upper gastrointestinal bleeding in people with acute or chronic liver diseases.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed antifibrinolytic amino acids for upper gastrointestinal bleeding in people with acute or chronic liver disease."
}
] |
query-laysum
|
22902
|
[
{
"role": "user",
"content": "Abstract: Background\nThere is evidence that implantable cardioverter-defibrillator (ICD) for primary prevention in people with an ischaemic cardiomyopathy improves survival rate. The evidence supporting this intervention in people with non-ischaemic cardiomyopathy is not as definitive, with the recently published DANISH trial finding no improvement in survival rate. A systematic review of all eligible studies was needed to evaluate the benefits and harms of using ICDs for primary prevention in people with non-ischaemic cardiomyopathy.\nObjectives\nTo evaluate the benefits and harms of using compared to not using ICD for primary prevention in people with non-ischaemic cardiomyopathy receiving optimal medical therapy.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and the Web of Science Core Collection on 10 October 2018. For ongoing or unpublished clinical trials, we searched the US National Institutes of Health Ongoing Trials Register ClinicalTrials.gov, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), and the ISRCTN registry. To identify economic evaluation studies, we conducted a separate search to 31 March 2015 of the NHS Economic Evaluation Database, and from March 2015 to October 2018 on MEDLINE and Embase.\nSelection criteria\nWe included randomised controlled trials involving adults with chronic non-ischaemic cardiomyopathy due to a left ventricular systolic dysfunction with an ejection fraction of 35% or less (New York Heart Association (NYHA) type I-IV). Participants in the intervention arm should have received ICD in addition to optimal medical therapy, while those in the control arm received optimal medical therapy alone. We included studies with cardiac resynchronisation therapy when it was appropriately balanced in the experimental and control groups.\nData collection and analysis\nThe primary outcomes were all-cause mortality, cardiovascular mortality, sudden cardiac death, and adverse events associated with the intervention. The secondary outcomes were non-cardiovascular death, health-related quality of life, hospitalisation for heart failure, first ICD-related hospitalisation, and cost. We abstracted the log (hazard ratio) and its variance from trial reports for time-to-event survival data. We extracted the raw data necessary to calculate the risk ratio. We summarised data on quality of life and cost-effectiveness narratively. We assessed the certainty of evidence for all outcomes using GRADE.\nMain results\nWe identified six eligible randomised trials with a total of 3128 participants. The use of ICD plus optimal medical therapy versus optimal medical therapy alone decreases the risk of all-cause mortality (hazard ratio (HR) 0.78, 95% confidence interval (CI) 0.66 to 0.92; participants = 3128; studies = 6; high-certainty evidence). An average of 24 patients need to be treated with ICD to prevent one additional death from any cause (number needed to treat for an additional beneficial outcome (NNTB) = 24). Individuals younger than 65 derive more benefit than individuals older than 65 (HR 0.51, 95% CI 0.29 to 0.91; participants = 348; studies = 1) (NNTB = 10). When added to medical therapy, ICDs probably decrease cardiovascular mortality compared to not adding them (risk ratio (RR) 0.75, 95% CI 0.46 to 1.21; participants = 1781; studies = 4; moderate-certainty evidence) (possibility of both plausible benefit and no effect). 
Implantable cardioverter-defibrillator was also found to decrease sudden cardiac deaths (HR 0.45, 95% CI 0.29 to 0.70; participants = 1677; studies = 3; high-certainty evidence). An average of 25 patients need to be treated with an ICD to prevent one additional sudden cardiac death (NNTB = 25). We found that ICDs probably increase adverse events (possibility of both plausible harm and benefit), but likely have little or no effect on non-cardiovascular mortality (RR 1.17, 95% CI 0.81 to 1.68; participants = 1781; studies = 4; moderate-certainty evidence) (possibility of both plausible benefit and no effect). Finally, using ICD therapy probably has little or no effect on quality of life, however shocks from the device cause a deterioration in quality of life. No study reported the outcome of first ICD-related hospitalisations.\nAuthors' conclusions\nThe use of ICD in addition to medical therapy in people with non-ischaemic cardiomyopathy decreases all-cause mortality and sudden cardiac deaths and probably decreases mortality from cardiovascular causes compared to medical therapy alone. Their use probably increases the risk for adverse events. However, these devices come at a high cost, and shocks from ICDs cause a deterioration in quality of life.\n\nGiven the provided abstract, please respond to the following query: \"What are the main results of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In this review, we synthesised the results of six major clinical trials that were identified through an extensive search to better understand the efficacy of ICDs in patients with non-ischaemic heart disease. Our analysis involved 3128 participants.\nWe found high-certainty evidence suggesting that the use of these devices (compared to no such use) in people with non-ischaemic heart disease decreases the risk of dying from any cause and of dying suddenly. There is moderate-certainty evidence suggesting that people on such devices probably have more complications than those not on these devices. The current evidence suggests that adding ICD therapy probably has little or no effect on quality of life when compared to not adding it, but electric shocks from ICDs worsen quality of life."
}
] |
query-laysum
|
22903
|
[
{
"role": "user",
"content": "Abstract: Background\nRecovery strategies are often used with the intention of preventing or minimising muscle soreness after exercise. Whole-body cryotherapy, which involves a single or repeated exposure(s) to extremely cold dry air (below -100 °C) in a specialised chamber or cabin for two to four minutes per exposure, is currently being advocated as an effective intervention to reduce muscle soreness after exercise.\nObjectives\nTo assess the effects (benefits and harms) of whole-body cryotherapy (extreme cold air exposure) for preventing and treating muscle soreness after exercise in adults.\nSearch methods\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register, the Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, the British Nursing Index and the Physiotherapy Evidence Database. We also searched the reference lists of articles, trial registers and conference proceedings, handsearched journals and contacted experts.The searches were run in August 2015.\nSelection criteria\nWe aimed to include randomised and quasi-randomised trials that compared the use of whole-body cryotherapy (WBC) versus a passive or control intervention (rest, no treatment or placebo treatment) or active interventions including cold or contrast water immersion, active recovery and infrared therapy for preventing or treating muscle soreness after exercise in adults. We also aimed to include randomised trials that compared different durations or dosages of WBC. Our prespecified primary outcomes were muscle soreness, subjective recovery (e.g. tiredness, well-being) and adverse effects.\nData collection and analysis\nTwo review authors independently screened search results, selected studies, assessed risk of bias and extracted and cross-checked data. Where appropriate, we pooled results of comparable trials. The random-effects model was used for pooling where there was substantial heterogeneity. We assessed the quality of the evidence using GRADE.\nMain results\nFour laboratory-based randomised controlled trials were included. These reported results for 64 physically active predominantly young adults (mean age 23 years). All but four participants were male. Two trials were parallel group trials (44 participants) and two were cross-over trials (20 participants). The trials were heterogeneous, including the type, temperature, duration and frequency of WBC, and the type of preceding exercise. None of the trials reported active surveillance of predefined adverse events. All four trials had design features that carried a high risk of bias, potentially limiting the reliability of their findings. The evidence for all outcomes was classified as 'very low' quality based on the GRADE criteria.\nTwo comparisons were tested: WBC versus control (rest or no WBC), tested in four studies; and WBC versus far-infrared therapy, also tested in one study. No studies compared WBC with other active interventions, such as cold water immersion, or different types and applications of WBC.\nAll four trials compared WBC with rest or no WBC. 
There was very low quality evidence for lower self-reported muscle soreness (pain at rest) scores after WBC at 1 hour (standardised mean difference (SMD) -0.77, 95% confidence interval (CI) -1.42 to -0.12; 20 participants, 2 cross-over trials); 24 hours (SMD -0.57, 95% CI -1.48 to 0.33) and 48 hours (SMD -0.58, 95% CI -1.37 to 0.21), both with 38 participants, 2 cross-over studies, 1 parallel group study; and 72 hours (SMD -0.65, 95% CI -2.54 to 1.24; 29 participants, 1 cross-over study, 1 parallel group study). Of note is that the 95% CIs also included either no between-group differences or a benefit in favour of the control group. One small cross-over trial (9 participants) found no difference in tiredness but better well-being after WBC at 24 hours post exercise. There was no report of adverse events.\nOne small cross-over trial involving nine well-trained runners provided very low quality evidence of lower levels of muscle soreness after WBC, when compared with infrared therapy, at 1 hour follow-up, but not at 24 or 48 hours. The same trial found no difference in well-being but less tiredness after WBC at 24 hours post exercise. There was no report of adverse events.\nAuthors' conclusions\nThere is insufficient evidence to determine whether whole-body cryotherapy (WBC) reduces self-reported muscle soreness, or improves subjective recovery, after exercise compared with passive rest or no WBC in physically active young adult males. There is no evidence on the use of this intervention in females or elite athletes. The lack of evidence on adverse events is important given that the exposure to extreme temperature presents a potential hazard. Further high-quality, well-reported research in this area is required and must provide detailed reporting of adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All four studies compared WBC with either passive rest or no treatment. These provided some evidence that WBC may reduce muscle soreness (pain at rest) at 1, 24, 48 and 72 hours after exercise. However, the evidence also included the possibility that WBC may not make a difference or may make the pain worse. There was some weak evidence that WBC may improve well-being at 24 hours. There was no report and probably no monitoring of adverse events in these four studies.\nOne very small study also compared WBC with far-infrared therapy and reported lower levels of muscle soreness one hour after the treatment."
}
] |
query-laysum
|
22904
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane Review published in Issue 11, 2006 of the Cochrane Database of Systematic Reviews.\nEpilepsy is a common neurological condition in which abnormal electrical discharges from the brain cause recurrent unprovoked seizures. It is believed that with effective drug treatment up to 70% of individuals with active epilepsy have the potential to become seizure-free, and to go into long-term remission shortly after starting drug therapy with a single antiepileptic drug (AED) in monotherapy.\nThe correct choice of first-line AED for individuals with newly diagnosed seizures is of great importance. It is important that the choice of AEDs for an individual is made using the highest quality evidence regarding the potential benefits and harms of the various treatments.\nCarbamazepine or lamotrigine are recommended as first-line treatments for new onset focal seizures and as a first- or second-line treatment for generalised tonic-clonic seizures. Performing a synthesis of the evidence from existing trials will increase the precision of the results for outcomes relating to efficacy and tolerability and may assist in informing a choice between the two drugs.\nObjectives\nTo review the time to treatment failure, remission and first seizure with lamotrigine compared to carbamazepine when used as monotherapy in people with focal onset seizures (simple or complex focal and secondarily generalised) or generalised onset tonic-clonic seizures (with or without other generalised seizure types).\nSearch methods\nWe conducted the first searches for this review in 1997. For the most recent update, we searched the Cochrane Epilepsy Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online (CRSO), MEDLINE, Clinical Trials.gov and the WHO International Clinical Trials Registry Platform on 26 February 2018, without language restrictions\nSelection criteria\nRandomised controlled trials comparing monotherapy with either carbamazepine or lamotrigine in children or adults with focal onset seizures or generalised onset tonic-clonic seizures\nData collection and analysis\nThis was an individual participant data (IPD) review. Our primary outcome was time to treatment failure and our secondary outcomes were time to first seizure post randomisation, time to six-month, 12-month and 24-month remission, and incidence of adverse events. We used Cox proportional hazards regression models to obtain trial-specific estimates of hazard ratios (HRs) with 95% confidence intervals (CIs), using the generic inverse variance method to obtain the overall pooled HR and 95% CI.\nMain results\nWe included 14 trials in this review. Individual participant data were available for 2572 participants out of 3787 eligible individuals from nine out of 14 trials: 68% of the potential data. 
For remission outcomes, a HR of less than one indicated an advantage for carbamazepine; and for first seizure and treatment failure outcomes, a HR of less than one indicated an advantage for lamotrigine.\nThe main overall results were: time to treatment failure for any reason related to treatment (pooled HR adjusted for seizure type: 0.71, 95% CI 0.62 to 0.82, moderate-quality evidence), time to treatment failure due to adverse events (pooled HR adjusted for seizure type: 0.55 (95% CI 0.45 to 0.66, moderate-quality evidence), time to treatment failure due to lack of efficacy (pooled HR for all participants: 1.03 (95% CI 0.75 to 1.41), moderate-quality evidence) showing a significant advantage for lamotrigine compared to carbamazepine in terms of treatment failure for any reason related to treatment and treatment failure due to adverse events, but no different between drugs for treatment failure due to lack of efficacy.\nTime to first seizure (pooled HR adjusted for seizure type: 1.26, 95% CI 1.12 to 1.41, high-quality evidence) and time to six-month remission (pooled HR adjusted for seizure type: 0.86, 95% CI 0.76 to 0.97, high-quality evidence), showed a significant advantage for carbamazepine compared to lamotrigine for first seizure and six-month remission. We found no difference between the drugs for time to 12-month remission (pooled HR for all participants 0.91, 95% CI 0.77 to 1.07, high-quality evidence) or time to 24-month remission (HR for all participants 1.00, 95% CI 0.80 to 1.25, high-quality evidence), however only two trials followed up participants for more than one year so evidence is limited.\nThe results of this review are applicable mainly to individuals with focal onset seizures; 88% of included individuals experienced seizures of this type at baseline. Up to 50% of the limited number of individuals classified as experiencing generalised onset seizures at baseline may have had their seizure type misclassified, therefore we recommend caution when interpreting the results of this review for individuals with generalised onset seizures.\nThe most commonly reported adverse events for both of the drugs across all of the included trials were dizziness, fatigue, gastrointestinal disturbances, headache and skin problems. The rate of adverse events was similar across the two drugs.\nThe methodological quality of the included trials was generally good, however there is some evidence that the design choice of masked or open-label treatment may have influenced the treatment failure and withdrawal rates of the trials. Hence, we judged the quality of the evidence for the primary outcome of treatment failure to be moderate for individuals with focal onset seizures and low for individuals with generalised onset seizures. For efficacy outcomes (first seizure, remission), we judged the quality of evidence to be high for individuals with focal onset seizures and moderate for individuals with generalised onset seizures.\nAuthors' conclusions\nModerate quality evidence indicates that treatment failure for any reason related to treatment or due to adverse events occurs significantly earlier on carbamazepine than lamotrigine, but the results for time to first seizure suggested that carbamazepine may be superior in terms of seizure control. The choice between these first-line treatments must be made with careful consideration. 
We recommend that future trials should be designed to the highest quality possible with consideration of masking, choice of population, classification of seizure type, duration of follow-up, choice of outcomes and analysis, and presentation of results.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "For people with focal onset seizures, we judged the quality of the evidence to be high for the outcomes of seizure recurrence and remission of seizures and we judged the quality of the evidence to be moderate for the outcome of treatment failure. The design of the trials (specifically, whether the people and treating clinicians knew which medication they were taking) may have influenced the rates of withdrawal from treatments. Up to 50% of people in the trials used in our results may have been wrongly classified as having generalised seizures; for people with generalised onset seizures, we judged the quality of the evidence to be moderate for the outcomes of seizure recurrence and remission of seizures and low quality for the outcome of treatment failure."
}
] |
query-laysum
|
22905
|
[
{
"role": "user",
"content": "Abstract: Background\nPulmonary embolism (PE) is a potentially life-threatening condition in which a clot can migrate from the deep veins, most commonly in the leg, to the lungs. Conventional treatment of PE used unfractionated heparin (UFH), low molecular weight heparin (LMWH), fondaparinux, and vitamin K antagonists (VKAs). Recently, two forms of direct oral anticoagulants (DOACs) have been developed: oral direct thrombin inhibitors (DTIs) and oral factor Xa inhibitors. DOACs have characteristics that may be favourable to conventional treatment, including oral administration, a predictable effect, no need for frequent monitoring or re-dosing, and few known drug interactions. This review reports the efficacy and safety of these drugs in the long-term treatment of PE (minimum duration of three months). This is an update of a Cochrane Review first published in 2015.\nObjectives\nTo assess the efficacy and safety of oral DTIs and oral factor Xa inhibitors versus conventional anticoagulants for the long-term treatment of PE.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase and CINAHL databases, the World Health Organization International Clinical Trials Registry Platform and the ClinicalTrials.gov trials registers to 2 March 2022. We checked the reference lists of relevant articles for additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) in which people with a PE confirmed by standard imaging techniques were allocated to receive an oral DTI or an oral factor Xa inhibitor compared with a conventional anticoagulant or compared with each other for the long-term treatment of PE (minimum duration three months).\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were recurrent PE, recurrent venous thromboembolism (VTE), and deep vein thrombosis (DVT). Secondary outcomes were all-cause mortality, major bleeding, and health-related quality of life. We used GRADE to assess the certainty of evidence for each outcome.\nMain results\nWe identified five additional RCTs with 1484 participants for this update. Together with the previously included trials, we have included ten RCTs with a total of 13,073 participants. Two studies investigated an oral DTI (dabigatran) and eight studies investigated oral factor Xa inhibitors (three rivaroxaban, three apixaban, and two edoxaban). The studies were of good methodological quality overall.\nMeta-analysis showed no clear difference in the efficacy and safety of oral DTI compared with conventional anticoagulation in preventing recurrent PE (odds ratio (OR) 1.02, 95% confidence interval (CI) 0.50 to 2.04; 2 studies, 1602 participants; moderate-certainty evidence), recurrent VTE (OR 0.93, 95% CI 0.52 to 1.66; 2 studies, 1602 participants; moderate-certainty evidence), DVT (OR 0.79, 95% CI 0.29 to 2.13; 2 studies, 1602 participants; moderate-certainty evidence), and major bleeding (OR 0.50, 95% CI 0.15 to 1.68; 2 studies, 1527 participants; moderate-certainty evidence). 
We downgraded the certainty of evidence by one level for imprecision due to the low number of events.\nThere was also no clear difference between the oral factor Xa inhibitors and conventional anticoagulation in the prevention of recurrent PE (OR 0.92, 95% CI 0.66 to 1.29; 3 studies, 8186 participants; moderate-certainty evidence), recurrent VTE (OR 0.83, 95% CI 0.66 to 1.03; 8 studies, 11,416 participants; moderate-certainty evidence), DVT (OR 0.77, 95% CI 0.48 to 1.25; 2 studies, 8151 participants; moderate-certainty evidence), all-cause mortality (OR 1.16, 95% CI 0.79 to 1.70; 1 study, 4817 participants; moderate-certainty evidence) and major bleeding (OR 0.71, 95% CI 0.36 to 1.41; 8 studies, 11,447 participants; low-certainty evidence); the heterogeneity for major bleeding was significant (I2 = 79%). We downgraded the certainty of the evidence to moderate and low because of imprecision due to the low number of events and inconsistency due to clinical heterogeneity. None of the included studies measured health-related quality of life.\nAuthors' conclusions\nAvailable evidence shows there is probably little or no difference between DOACs and conventional anticoagulation in the prevention of recurrent PE, recurrent VTE, DVT, all-cause mortality, and major bleeding. The certainty of evidence was moderate or low. Future large clinical trials are required to identify if individual drugs differ in effectiveness and bleeding risk, and to explore effect differences in subgroups, including people with cancer and obesity.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We are moderately confident in this evidence. This was because the number of events involved in the studies was small and there were differences in how individual studies were carried out."
}
] |
query-laysum
|
22906
|
[
{
"role": "user",
"content": "Abstract: Background\nAbout 10% of women of reproductive age suffer from endometriosis, a costly chronic disease causing pelvic pain and subfertility. Laparoscopy is the gold standard diagnostic test for endometriosis, but is expensive and carries surgical risks. Currently, there are no non-invasive tests available in clinical practice to accurately diagnose endometriosis. This review assessed the diagnostic accuracy of combinations of different non-invasive testing modalities for endometriosis and provided a summary of all the reviews in the non-invasive tests for endometriosis series.\nObjectives\nTo estimate the diagnostic accuracy of any combination of non-invasive tests for the diagnosis of pelvic endometriosis (peritoneal and/or ovarian or deep infiltrating) compared to surgical diagnosis as a reference standard. The combined tests were evaluated as replacement tests for diagnostic surgery and triage tests to assist decision-making to undertake diagnostic surgery for endometriosis.\nSearch methods\nWe did not restrict the searches to particular study designs, language or publication dates. We searched CENTRAL to July 2015, MEDLINE and EMBASE to May 2015, as well as the following databases to April 2015: CINAHL, PsycINFO, Web of Science, LILACS, OAIster, TRIP, ClinicalTrials.gov, DARE and PubMed.\nSelection criteria\nWe considered published, peer-reviewed, randomised controlled or cross-sectional studies of any size, including prospectively collected samples from any population of women of reproductive age suspected of having one or more of the following target conditions: ovarian, peritoneal or deep infiltrating endometriosis (DIE). We included studies comparing the diagnostic test accuracy of a combination of several testing modalities with the findings of surgical visualisation of endometriotic lesions.\nData collection and analysis\nThree review authors independently collected and performed a quality assessment of the data from each study by using the QUADAS-2 tool. For each test, the data were classified as positive or negative for the surgical detection of endometriosis and sensitivity and specificity estimates were calculated. The bivariate model was planned to obtain pooled estimates of sensitivity and specificity whenever sufficient data were available. The predetermined criteria for a clinically useful test to replace diagnostic surgery were a sensitivity of 0.94 and a specificity of 0.79 to detect endometriosis. We set the criteria for triage tests at a sensitivity of 0.95 and above and a specificity of 0.50 and above, which 'rules out' the diagnosis with high accuracy if there is a negative test result (SnOUT test), or a sensitivity of 0.50 and above and a specificity of 0.95 and above, which 'rules in' the diagnosis with high accuracy if there is a positive result (SpIN test).\nMain results\nEleven eligible studies included 1339 participants. All the studies were of poor methodological quality. Seven studies evaluated pelvic endometriosis, one study considered DIE and/or ovarian endometrioma, two studies differentiated endometrioma from other ovarian cysts and one study addressed mapping DIE at specific anatomical sites. Fifteen different diagnostic combinations were assessed, including blood, urinary or endometrial biomarkers, transvaginal ultrasound (TVUS) and clinical history or examination. 
We did not pool estimates of sensitivity and specificity, as each study analysed independent combinations of the non-invasive tests.\nTests that met the criteria for a replacement test were: a combination of serum IL-6 (cut-off >15.4 pg/ml) and endometrial PGP 9.5 for pelvic endometriosis (sensitivity 1.00 (95% confidence interval (CI) 0.91 to 1.00), specificity 0.93 (95% CI, 0.80, 0.98) and the combination of vaginal examination and transvaginal ultrasound (TVUS) for rectal endometriosis (sensitivity 0.96 (95% CI 0.86 to 0.99), specificity 0.98 (95% CI 0.94 to 1.00)). Tests that met the criteria for SpIN triage tests for pelvic endometriosis were: 1. a multiplication of urine vitamin-D-binding protein (VDBP) and serum CA-125 (cut-off >2755) (sensitivity 0.74 (95% CI 0.60 to 0.84), specificity 0.97 (95% CI 0.86 to 1.00)) and 2. a combination of history (length of menses), serum CA-125 (cut-off >35 U/ml) and endometrial leukocytes (sensitivity 0.61 (95% CI 0.54 to 0.69), specificity 0.95 (95% CI 0.91 to 0.98)). For endometrioma, the following combinations qualified as SpIN test: 1. TVUS and either serum CA-125 (cut-off ≥25 U/ml) or CA 19.9 (cut-off ≥12 U/ml) (sensitivity 0.79 (95% CI 0.64 to 0.91), specificity 0.97 (95% CI 0.91 to 1.00)); 2. TVUS and serum CA 19.9 (cut-off ≥12 U/ml) (sensitivity 0.54 (95% CI 0.37 to 0.70), specificity 0.97 (95% CI 0.91 to 1.0)); 3-4. TVUS and serum CA-125 (cut-off ≥20 U/ml or cut-off ≥25 U/ml) (sensitivity 0.69 (95% CI 0.49 to 0.85), specificity 0.96 (95% CI 0.88 to 0.99)); 5. TVUS and serum CA-125 (cut-off ≥35 U/ml) (sensitivity 0.52 (95% CI 0.33 to 0.71), specificity 0.97 (95% CI 0.90 to 1.00)). A combination of vaginal examination and TVUS reached the threshold for a SpIN test for obliterated pouch of Douglas (sensitivity 0.87 (95% CI 0.69 to 0.96), specificity 0.98 (95% CI 0.95 to 1.00)), vaginal wall endometriosis (sensitivity 0.82 (95% CI 0.60 to 0.95), specificity 0.99 (95% CI 0.97 to 1.0)) and rectovaginal septum endometriosis (sensitivity 0.88 (95% CI 0.47 to 1.00), specificity 0.99 (95% CI 0.96 to 1.00)).\nAll the tests were evaluated in individual studies and displayed wide CIs. Due to the heterogeneity and high risk of bias of the included studies, the clinical utility of the studied combination diagnostic tests for endometriosis remains unclear.\nAuthors' conclusions\nNone of the biomarkers evaluated in this review could be evaluated in a meaningful way and there was insufficient or poor-quality evidence. Laparoscopy remains the gold standard for the diagnosis of endometriosis and using any non-invasive tests should only be undertaken in a research setting.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Several studies identified the combined tests that might be of value in diagnosing endometriosis, but there are too few reports to be sure of their diagnostic benefit.\nThe reports were of low methodological quality, which is why these results cannot be considered reliable unless confirmed in large high-quality studies. Overall, there is not enough evidence to demonstrate benefit of any combined non-invasive test for use in clinical practice for the diagnosis of endometriosis over the current ‘gold standard’ of diagnostic laparoscopy."
}
] |
query-laysum
|
22907
|
[
{
"role": "user",
"content": "Abstract: Background\nCognitive stimulation (CS) is an intervention for people with dementia offering a range of enjoyable activities providing general stimulation for thinking, concentration and memory, usually in a social setting, such as a small group. CS is distinguished from other approaches such as cognitive training and cognitive rehabilitation by its broad focus and social elements, aiming to improve domains such as quality of life (QoL) and mood as well as cognitive function.\nRecommended in various guidelines and widely implemented internationally, questions remain regarding different modes of delivery and the clinical significance of any benefits. A systematic review of CS is important to clarify its effectiveness and place practice recommendations on a sound evidence base. This review was last updated in 2012.\nObjectives\nTo evaluate the evidence for the effectiveness of CS for people with dementia, including any negative effects, on cognition and other relevant outcomes, accounting where possible for differences in its implementation.\nSearch methods\nWe identified trials from a search of the Cochrane Dementia and Cognitive Improvement Group Specialized Register, last searched on 3 March 2022. We used the search terms: cognitive stimulation, reality orientation, memory therapy, memory groups, memory support, memory stimulation, global stimulation, cognitive psychostimulation. We performed supplementary searches in a number of major healthcare databases and trial registers to ensure the search was up-to-date and comprehensive.\nSelection criteria\nWe included all randomised controlled trials (RCTs) of CS for dementia published in peer review journals in the English language incorporating a measure of cognitive change.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. As CS is a psychosocial intervention, we did not expect those receiving or delivering CS to be blinded to the nature of the intervention. Where necessary, we contacted study authors requesting data not provided in the papers. Where appropriate, we undertook subgroup analysis by modality (individual versus group), number of sessions and frequency, setting (community versus care home), type of control condition and dementia severity. We used GRADE methods to assess the overall quality of evidence for each outcome.\nMain results\nWe included 37 RCTs (with 2766 participants), 26 published since the previous update. Most evaluated CS groups; eight examined individual CS. Participants' median age was 79.7 years. Sixteen studies included participants resident in care homes or hospitals. Study quality showed indications of improvement since the previous review, with few areas of high risk of bias. Assessors were clearly blinded to treatment allocation in most studies (81%) and most studies (81%) reported use of a treatment manual by those delivering the intervention. However, in a substantial number of studies (59%), we could not find details on all aspects of the randomisation procedures, leading us to rate the risk of selection bias as unclear.\nWe entered data in the meta-analyses from 36 studies (2704 participants; CS: 1432, controls: 1272). The primary analysis was on changes evident immediately following the treatment period (median length 10 weeks; range 4 to 52 weeks). Only eight studies provided data allowing evaluation of whether effects were subsequently maintained (four at 6- to 12-week follow-up; four at 8- to 12-month follow-up). 
No negative effects were reported. Overall, we found moderate-quality evidence for a small benefit in cognition associated with CS (standardised mean difference (SMD) 0.40, 95% CI 0.25 to 0.55). In the 25 studies, with 1893 participants, reporting the widely used MMSE (Mini-Mental State Examination) test for cognitive function in dementia, there was moderate-quality evidence of a clinically important difference of 1.99 points between CS and controls (95% CI: 1.24, 2.74).\nIn secondary analyses, with smaller total sample sizes, again examining the difference between CS and controls on changes immediately following the intervention period, we found moderate-quality evidence of a slight improvement in self-reported QoL (18 studies, 1584 participants; SMD: 0.25 [95% CI: 0.07, 0.42]) as well as in QoL ratings made by proxies (staff or caregivers). We found high-quality evidence for clinically relevant improvements in staff/interviewer ratings of communication and social interaction (5 studies, 702 participants; SMD: 0.53 [95% CI: 0.36, 0.70]) and for slight benefits in instrumental Activities of Daily Living, self-reported depressed mood, staff/interviewer-rated anxiety and general behaviour rating scales. We found moderate-quality evidence for slight improvements in behaviour that challenges and in basic Activities of Daily Living and low-quality evidence for a slight improvement in staff/interviewer-rated depressed mood. A few studies reported a range of outcomes for family caregivers. We found moderate-quality evidence that overall CS made little or no difference to caregivers' mood or anxiety.\nWe found a high level of inconsistency between studies in relation to both cognitive outcomes and QoL. In exploratory subgroup analyses, we did not identify an effect of modality (group versus individual) or, for group studies, of setting (community versus care home), total number of group sessions or type of control condition (treatment-as-usual versus active controls). However, we did find improvements in cognition were larger where group sessions were more frequent (twice weekly or more versus once weekly) and where average severity of dementia among participants at the start of the intervention was 'mild' rather than 'moderate'. Imbalance in numbers of studies and participants between subgroups and residual inconsistency requires these exploratory findings to be interpreted cautiously.\nAuthors' conclusions\nIn this updated review, now with a much more extensive evidence base, we have again identified small, short-term cognitive benefits for people with mild to moderate dementia participating in CS programmes. From a smaller number of studies, we have also found clinically relevant improvements in communication and social interaction and slight benefits in a range of outcomes including QoL, mood and behaviour that challenges. There are relatively few studies of individual CS, and further research is needed to delineate the effectiveness of different delivery methods (including digital and remote, individual and group) and of multi-component programmes. We have identified that the frequency of group sessions and level of dementia severity may influence the outcomes of CS, and these aspects should be studied further. There remains an evidence gap in relation to the potential benefits of longer-term CS programmes and their clinical significance.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 37 studies involving 2766 participants with mild or moderate dementia and an average age of 79 years. The biggest study involved 356 participants, the smallest 13. The studies were conducted in 17 countries from five continents, with most in Europe. Fewer than half (16) included participants living in care homes or hospitals. The length of the trials varied from four weeks to two years. Sessions per week varied from one to six. The overall number of sessions varied from eight to 520. Most studies lasted for around 10 weeks, with around 20 sessions. Most studies offered CS in groups, with just eight examining individual CS."
}
] |
query-laysum
|
22908
|
[
{
"role": "user",
"content": "Abstract: Background\nIndomethacin is used as standard therapy to close a patent ductus arteriosus (PDA) but is associated with reduced blood flow to several organs. Ibuprofen, another cyclo-oxygenase inhibitor, may be as effective as indomethacin with fewer adverse effects.\nObjectives\nTo determine the effectiveness and safety of ibuprofen compared with indomethacin, other cyclo-oxygenase inhibitor(s), placebo, or no intervention for closing a patent ductus arteriosus in preterm, low-birth-weight, or preterm and low-birth-weight infants.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2017, Issue 10), MEDLINE via PubMed (1966 to 30 November 2017), Embase (1980 to 30 November 2017), and CINAHL (1982 to 30 November 2017). We searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.\nSelection criteria\nRandomised or quasi-randomised controlled trials of ibuprofen for the treatment of a PDA in preterm, low birth weight, or both preterm and low-birth-weight newborn infants.\nData collection and analysis\nData collection and analysis conformed to the methods of the Cochrane Neonatal Review Group. We used the GRADE approach to assess the quality of evidence.\nMain results\nWe included 39 studies enrolling 2843 infants.\nIbuprofen (IV) versus placebo: IV Ibuprofen (3 doses) reduced the failure to close a PDA compared with placebo (typical relative risk (RR); 0.62 (95% CI 0.44 to 0.86); typical risk difference (RD); -0.18 (95% CI -0.30 to -0.06); NNTB 6 (95% CI 3 to 17); I2 = 65% for RR and I2 = 0% for RD; 2 studies, 206 infants; moderate-quality the evidence). One study reported decreased failure to close a PDA after single or three doses of oral ibuprofen compared with placebo (64 infants; RR 0.26, 95% CI 0.11 to 0.62; RD -0.44, 95% CI -0.65 to -0.23; NNTB 2, 95% CI 2 to 4; I2 test not applicable).\nIbuprofen (IV or oral) compared with indomethacin (IV or oral): Twenty-four studies (1590 infants) comparing ibuprofen (IV or oral) with indomethacin (IV or oral) found no significant differences in failure rates for PDA closure (typical RR 1.07, 95% CI 0.92 to 1.24; typical RD 0.02, 95% CI -0.02 to 0.06; I2 = 0% for both RR and RD; moderate-quality evidence). A reduction in NEC (necrotising enterocolitis) was noted in the ibuprofen (IV or oral) group (18 studies, 1292 infants; typical RR 0.68, 95% CI 0.49 to 0.94; typical RD -0.04, 95% CI -0.07 to -0.01; NNTB 25, 95% CI 14 to 100; I2 = 0% for both RR and RD; moderate-quality evidence). There was a statistically significant reduction in the proportion of infants with oliguria in the ibuprofen group (6 studies, 576 infants; typical RR 0.28, 95% CI 0.14 to 0.54; typical RD -0.09, 95% CI -0.14 to -0.05; NNTB 11, 95% CI 7 to 20; I2 = 24% for RR and I2 = 69% for RD; moderate-quality evidence). The serum/plasma creatinine levels 72 hours after initiation of treatment were statistically significantly lower in the ibuprofen group (11 studies, 918 infants; MD -8.12 µmol/L, 95% CI -10.81 to -5.43). For this comparison, there was high between-study heterogeneity (I2 = 83%) and low-quality evidence.\nIbuprofen (oral) compared with indomethacin (IV or oral): Eight studies (272 infants) reported on failure rates for PDA closure in a subgroup of the above studies comparing oral ibuprofen with indomethacin (IV or oral). 
There was no significant difference between the groups (typical RR 0.96, 95% CI 0.73 to 1.27; typical RD -0.01, 95% CI -0.12 to 0.09; I2 = 0% for both RR and RD). The risk of NEC was reduced with oral ibuprofen compared with indomethacin (IV or oral) (7 studies, 249 infants; typical RR 0.41, 95% CI 0.23 to 0.73; typical RD -0.13, 95% CI -0.22 to -0.05; NNTB 8, 95% CI 5 to 20; I2 = 0% for both RR and RD). There was low-quality evidence for these two outcomes. There was a decreased risk of failure to close a PDA with oral ibuprofen compared with IV ibuprofen (5 studies, 406 infants; typical RR 0.38, 95% CI 0.26 to 0.56; typical RD -0.22, 95% CI -0.31 to -0.14; NNTB 5, 95% CI 3 to 7; moderate-quality evidence). There was a decreased risk of failure to close a PDA with high-dose versus standard-dose of IV ibuprofen (3 studies, 190 infants; typical RR 0.37, 95% CI 0.22 to 0.61; typical RD -0.26, 95% CI -0.38 to -0.15; NNTB 4, 95% CI 3 to 7; I2 = 4% for RR and 0% for RD; moderate-quality evidence).\nEarly versus expectant administration of IV ibuprofen, echocardiographically-guided IV ibuprofen treatment versus standard IV ibuprofen treatment, continuous infusion of ibuprofen versus intermittent boluses of ibuprofen, and rectal ibuprofen versus oral ibuprofen were studied in too few trials to allow for precise estimates of any clinical outcomes.\nAuthors' conclusions\nIbuprofen is as effective as indomethacin in closing a PDA. Ibuprofen reduces the risk of NEC and transient renal insufficiency. Therefore, of these two drugs, ibuprofen appears to be the drug of choice. The effectiveness of ibuprofen versus paracetamol is assessed in a separate review. Oro-gastric administration of ibuprofen appears as effective as IV administration. To make further recommendations, studies are needed to assess the effectiveness of high-dose versus standard-dose ibuprofen, early versus expectant administration of ibuprofen, echocardiographically-guided versus standard IV ibuprofen, and continuous infusion versus intermittent boluses of ibuprofen. Studies are lacking evaluating the effect of ibuprofen on longer-term outcomes in infants with PDA.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for randomised controlled trials (clinical studies where people are randomly put into one of two or more treatment groups) in preterm infants (born at less than 37 weeks into pregnancy), low-birth-weight (weighing less than 2500 g) infants, or preterm and low-birth-weight infants with a PDA. The treatments were ibuprofen, indomethacin, another cyclo-oxygenase inhibitor, placebo or no treatment. The evidence is current to 30 November 2017."
}
] |
query-laysum
|
22909
|
[
{
"role": "user",
"content": "Abstract: Background\nNon-steroidal antiandrogens and castration are the main therapy options for advanced stages of prostate cancer. However, debate regarding the value of these treatment options continues.\nObjectives\nTo assess the effects of non-steroidal antiandrogen monotherapy compared with luteinising hormone–releasing hormone agonists or surgical castration monotherapy for treating advanced stages of prostate cancer.\nSearch methods\nWe searched the Cochrane Prostatic Diseases and Urologic Cancers Group Specialized Register (PROSTATE), the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science with Conference Proceedings, three trial registries and abstracts from three major conferences to 23 December 2013, together with reference lists, and contacted selected experts in the field and manufacturers.\nSelection criteria\nWe included randomised controlled trials comparing non-steroidal antiandrogen monotherapy with medical or surgical castration monotherapy for men in advanced stages of prostate cancer.\nData collection and analysis\nOne review author screened all titles and abstracts; only citations that were clearly irrelevant were excluded at this stage. Then, two review authors independently examined full-text reports, identified relevant studies, assessed the eligibility of studies for inclusion, assessed trial quality and extracted data. We contacted the study authors to request additional information. We used Review Manager 5 for data synthesis and used the fixed-effect model for heterogeneity less than 50%; we used the random-effects model for substantial or considerable heterogeneity.\nMain results\nEleven studies involving 3060 randomly assigned participants were included in this review. The quality of evidence is hampered by risk of bias. Use of non-steroidal antiandrogens decreased overall survival (hazard ratio (HR) 1.24, 95% confidence interval (CI) 1.05 to 1.48, six studies, 2712 participants) and increased clinical progression (one year: risk ratio (RR) 1.25, 95% CI 1.08 to 1.45, five studies, 2067 participants; 70 weeks: RR 1.26, 95% CI 1.08 to 1.45, six studies, 2373 participants; two years: RR 1.14, 95% CI 1.04 to 1.25, three studies, 1336 participants), as well as treatment failure (one year: RR 1.19, 95% CI 1.02 to 1.38, four studies, 1539 participants; 70 weeks: RR 1.27, 95% CI 1.05 to 1.52, five studies, 1845 participants; two years: RR 1.14, 95% CI 1.05 to 1.24, two studies, 808 participants), compared with medical or surgical castration. The quality of evidence for overall survival, clinical progression and treatment failure was rated as moderate according to GRADE. Predefined subgroup analyses showed that use of non-steroidal antiandrogens, compared with castration, was less favourable for overall survival, clinical progression (at one year, 70 weeks, two years) and treatment failure (at one year, 70 weeks, two years) in men with metastatic disease. Use of non-steroidal antiandrogens also increased the risk for treatment discontinuation due to adverse events (RR 1.82, 95% CI 1.13 to 2.94, eight studies, 1559 participants), including events such as breast pain (RR 22.97, 95% CI 14.79 to 35.67, eight studies, 2670 participants), gynaecomastia (RR 8.43, 95% CI 3.19 to 22.28, nine studies, 2774 participants) and asthenia (RR 1.77, 95% CI 1.36 to 2.31, five studies, 2073 participants). 
The risk of other adverse events, such as hot flashes (RR 0.23, 95% CI 0.19 to 0.27, nine studies, 2774 participants), haemorrhage (RR 0.07, 95% CI 0.01 to 0.54, two studies, 546 participants), nocturia (RR 0.38, 95% CI 0.20 to 0.69, one study, 480 participants), fatigue (RR 0.52, 95% CI 0.31 to 0.88, one study, 51 participants), loss of sexual interest (RR 0.50, 95% CI 0.30 to 0.83, one study, 51 participants) and urinary frequency (RR 0.22, 95% CI 0.11 to 0.47, one study, 480 participants) was decreased when non-steroidal antiandrogens were used. The quality of evidence for breast pain, gynaecomastia and hot flashes was rated as moderate according to GRADE. The effects of non-steroidal antiandrogens on cancer-specific survival and biochemical progression remained unclear.\nAuthors' conclusions\nCurrently available evidence suggests that use of non-steroidal antiandrogen monotherapy compared with medical or surgical castration monotherapy for advanced prostate cancer is less effective in terms of overall survival, clinical progression, treatment failure and treatment discontinuation due to adverse events. Evidence quality was rated as moderate according to GRADE. Further research is likely to have an important impact on results for patients with advanced but non-metastatic prostate cancer treated with non-steroidal antiandrogen monotherapy. However, we believe that research is likely not necessary on non-steroidal antiandrogen monotherapy for men with metastatic prostate cancer. Only high-quality, randomised controlled trials with long-term follow-up should be conducted. If further research is planned to investigate biochemical progression, studies with standardised follow-up schedules using measurements of prostate-specific antigen based on current guidelines should be conducted.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence on the effects of androgen suppression monotherapies (non-steroidal antiandrogens compared with medical or surgical castration monotherapy) in men with advanced prostate cancer."
}
] |
query-laysum
|
22910
|
[
{
"role": "user",
"content": "Abstract: Background\nUS Centers for Disease Control guidelines recommend replacement of peripheral intravenous catheters (PIVC) no more frequently than every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bloodstream infection. Catheter insertion is an unpleasant experience for patients and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation or infection. Costs associated with routine replacement may be considerable. This is the third update of a review first published in 2010.\nObjectives\nTo assess the effects of removing peripheral intravenous catheters when clinically indicated compared with removing and re-siting the catheter routinely.\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase and CINAHL and World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 18 April 2018. We also undertook reference checking, and contacted researchers and manufacturers to identify additional studies.\nSelection criteria\nWe included randomised controlled trials that compared routine removal of PIVC with removal only when clinically indicated, in hospitalised or community-dwelling patients receiving continuous or intermittent infusions.\nData collection and analysis\nThree review authors independently reviewed trials for inclusion, extracted data, and assessed risk of bias using Cochrane methods. We used GRADE to assess the overall evidence certainty.\nMain results\nThis update contains two new trials, taking the total to nine included studies with 7412 participants. Eight trials were conducted in acute hospitals and one in a community setting. We rated the overall certainty of evidence as moderate for most outcomes, due to serious risk of bias for unblinded outcome assessment or imprecision, or both. Because outcome assessment was unblinded in all of the trials, none met our criteria for high methodological quality.\nPrimary outcomes\nSeven trials (7323 participants), assessed catheter-related bloodstream infection (CRBSI). There is no clear difference in the incidence of CRBSI between the clinically indicated (1/3590) and routine change (2/3733) groups (risk ratio (RR) 0.61, 95% confidence interval (CI) 0.08 to 4.68), low-certainty evidence (downgraded twice for serious imprecision).\nAll trials reported incidence of thrombophlebitis and we combined the results from seven of these in the analysis (7323 participants). We excluded two studies in the meta-analysis because they contributed to high heterogeneity. There is no clear difference in the incidence of thrombophlebitis whether catheters were changed according to clinical indication or routinely (RR 1.07, 95% CI 0.93 to 1.25; clinically indicated 317/3590; 3-day change 307/3733, moderate-certainty evidence, downgraded once for serious risk of bias). The result was unaffected by whether the infusion was continuous or intermittent. Six trials provided thrombophlebitis rates by number of device days (32,709 device days). There is no clear difference between groups (RR 0.90, 95% CI 0.76 to 1.08; clinically indicated 248/17,251; 3-day change 236/15,458; moderate-certainty evidence, downgraded once for serious risk of bias).\nOne trial (3283 participants), assessed all-cause blood stream infection (BSI). 
We found no clear difference in the all-cause BSI rate between the two groups (RR 0.47, 95% CI 0.15 to 1.53; clinically indicated: 4/1593 (0.02%); routine change 9/1690 (0.05%); moderate-certainty evidence, downgraded one level for serious imprecision).\nThree trials (4244 participants), investigated costs; clinically indicated removal probably reduces device-related costs by approximately AUD 7.00 compared with routine removal (MD −6.96, 95% CI −9.05 to −4.86; moderate-certainty evidence, downgraded once for serious risk of bias).\nSecondary outcomes\nSix trials assessed infiltration (7123 participants). Routine replacement probably reduces infiltration of fluid into surrounding tissues compared with a clinically indicated change (RR 1.16 (95% CI 1.06 to 1.26; routine replacement 747/3638 (20.5%); clinically indicated 834/3485 (23.9%); moderate-certainty evidence, downgraded once for serious risk of bias).\nMeta-analysis of seven trials (7323 participants), found that rates of catheter failure due to blockage were probably lower in the routine-replacement group compared to the clinically indicated group (RR 1.14, 95% CI 1.01 to 1.29; routine-replacement 519/3733 (13.9%); clinically indicated 560/3590 (15.6%); moderate-certainty evidence, downgraded once for serious risk of bias).\nFour studies (4606 participants), reported local infection rates. It is uncertain if there are differences between groups (RR 4.96, 95% CI 0.24 to 102.98; clinically indicated 2/2260 (0.09%); routine replacement 0/2346 (0.0%); very low-certainty evidence, downgraded one level for serious risk of bias and two levels for very serious imprecision).\nOne trial (3283 participants), found no clear difference in the incidence of mortality when clinically indicated removal was compared with routine removal (RR 1.06, 95% CI 0.27 to 4.23; low-certainty evidence, downgraded two levels for very serious imprecision).\nOne small trial (198 participants) reported no clear difference in device-related pain between clinically indicated and routine removal groups (MD −0.60, 95% CI −1.44 to 0.24; low-certainty evidence, downgraded one level for serious risk of bias and one level for serious imprecision).\nThe pre-planned outcomes 'number of catheter re-sites per patient', and 'satisfaction' were not reported by any studies included in this review.\nAuthors' conclusions\nThere is moderate-certainty evidence of no clear difference in rates of CRBSI, thrombophlebitis, all-cause BSI, mortality and pain between clinically indicated or routine replacement of PIVC. We are uncertain if local infection is reduced or increased when catheters are changed when clinically indicated. There is moderate-certainty evidence that infiltration and catheter blockage is probably lower when PIVC are changed routinely; and moderate-certainty evidence that clinically indicated removal probably reduces device-related costs. The addition of two new trials for this update found no further evidence to support changing catheters every 72 to 96 hours. Healthcare organisations may consider changing to a policy whereby catheters are changed only if there is a clinical indication to do so, for example, if there were signs of infection, blockage or infiltration. This would provide significant cost savings, spare patients the unnecessary pain of routine re-sites in the absence of clinical indications and would reduce time spent by busy clinicians on this intervention. 
To minimise PIVC-related complications, staff should inspect the insertion site at each shift change and remove the catheter if signs of inflammation, infiltration, occlusion, infection or blockage are present, or if the catheter is no longer needed for therapy.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The overall quality of the evidence was judged to be moderate for most outcomes, which leaves us uncertain of our findings. The uncertainty is largely due to outcomes, such as phlebitis, being assessed by people who were aware of the group allocation, which may or may not affect their decision about whether a problem is present or absent."
}
] |
query-laysum
|
22911
|
[
{
"role": "user",
"content": "Abstract: Background\nPanic disorder is common in the general population. It is often associated with other psychiatric disorders, such as drug dependence, major depression, bipolar disorder, social phobia, specific phobia and generalised anxiety disorder. Azapirones are a class of drugs used as anxiolytics. They are associated with less drowsiness, psychomotor impairment, alcohol potentiation and potential for addiction or abuse than benzodiazepines. However, azapirones are not widely used in the treatment of panic disorder and evidence for their efficacy is unclear. It is important to find out if azapirones are effective and acceptable in the treatment of panic disorder.\nObjectives\nTo assess the effects of azapirones on panic disorder in adults, specifically:\n1. to determine the efficacy of azapirones in alleviating symptoms of panic disorder, with or without agoraphobia, in comparison with placebo;\n2. to review the acceptability of azapirones in panic disorder, with or without agoraphobia, in comparison with placebo; and\n3. to investigate adverse effects of azapirones in panic disorder with or without agoraphobia, including general prevalence of adverse effects, compared with placebo.\nSearch methods\nWe searched the Cochrane Depression Anxiety and Neurosis Group Trials Specialised Register (CCDANCTR, search date: 10th January 2014), which includes relevant randomised controlled trials from The Cochrane Library (all years), MEDLINE (1950-), EMBASE (1974-), and PsycINFO (1967-).\nSelection criteria\nRandomised controlled trials that compared azapirones with placebo for panic disorder in adults.\nData collection and analysis\nThree review authors independently identified studies, assessed trial quality and extracted data. We contacted study authors for additional information.\nMain results\nThree studies involving 170 participants compared the azapirone buspirone with placebo. No study provided enough usable information on our primary efficacy outcome (response). For our primary acceptability outcome, moderate-quality evidence indicated that azapirones had lower acceptability than placebo: risk ratio (RR) for dropouts for any reason 2.13 (95% confidence interval (CI) 1.11 to 4.07; 3 studies, 170 participants. Evidence for secondary efficacy outcomes were of low quality. Results on efficacy between azapirone and placebo in terms of agoraphobia (standardised mean difference (SMD) -0.01, 95% CI -0.56 to 0.53; 1 study, 52 participants), general anxiety (mean difference (MD) -2.20, 95% CI -5.45 to 1.06; 2 studies, 115 participants) and depression (MD -1.80, 95% CI -5.60 to 2.00; 1 study, 52 participants) were uncertain. None of the studies provided information for the assessment of allocation concealment or sequence generation. Conflicts of interest were not explicitly expressed. The risk of attrition bias was rated high for all three studies. Information on adverse effects other than dropouts for any reason was insufficient to include in the analyses.\nAuthors' conclusions\nThe efficacy of azapirones is uncertain due to the lack of meta-analysable data for the primary outcome and low-quality evidence for secondary efficacy outcomes. A small amount of moderate-quality evidence suggested that the acceptability of azapirones for panic disorder was lower than for placebo. However, only trials of one azapirone (namely buspirone) were included in this review; this, combined with the small sample size, limits our conclusions. 
If further research is to be conducted, studies with larger sample sizes, with different azapirones and with less risk of bias are necessary to draw firm conclusions regarding azapirones for panic disorder.\n\nGiven the provided abstract, please respond to the following query: \"What does the evidence from the review tell us?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There was not enough information to work out whether azapirones are any more or less effective than placebo in causing substantial improvements in panic disorder overall.\nA small amount of moderate-quality evidence suggests that the acceptability of azapirones for panic disorder is lower than for placebo.\nThere was not enough information to compare any differences in adverse effects caused by azapirones and placebo."
}
] |
query-laysum
|
22912
|
[
{
"role": "user",
"content": "Abstract: Background\nDiabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision.\nObjectives\nTo assess the effects of laser photocoagulation for diabetic retinopathy compared to no treatment or deferred treatment.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2014), EMBASE (January 1980 to June 2014), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 3 June 2014.\nSelection criteria\nWe included randomised controlled trials (RCTs) where people (or eyes) with diabetic retinopathy were randomly allocated to laser photocoagulation or no treatment or deferred treatment. We excluded trials of lasers that are no longer in routine use. Our primary outcome was the proportion of people who lost 15 or more letters (3 lines) of best-corrected visual acuity (BCVA) as measured on a logMAR chart at 12 months. We also looked at longer-term follow-up of the primary outcome at two to five years. Secondary outcomes included mean best corrected distance visual acuity, severe visual loss, mean near visual acuity, progression of diabetic retinopathy, quality of life, pain, loss of driving licence, vitreous haemorrhage and retinal detachment.\nData collection and analysis\nWe used standard methods as expected by the Cochrane Collaboration. Two review authors selected studies and extracted data.\nMain results\nWe identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy. Four of the studies evaluated panretinal photocoagulation with argon laser and one study investigated selective photocoagulation of non-perfusion areas. Three studies compared laser treatment to no treatment and two studies compared laser treatment to deferred laser treatment. All studies were at risk of performance bias because the treatment and control were different and no study attempted to produce a sham treatment. 
Three studies were considered to be at risk of attrition bias.\nAt 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).\nNone of the studies reported near visual acuity or patient-relevant outcomes such as quality of life, pain, loss of driving licence or adverse effects such as retinal detachment.\nWe did not plan any subgroup analyses, but there was a difference in baseline risk in participants with non-proliferative retinopathy compared to those with proliferative retinopathy. With the small number of included studies we could not do a formal subgroup analysis comparing effect in proliferative and non-proliferative retinopathy.\nAuthors' conclusions\nThis review provides evidence that laser photocoagulation is beneficial in treating proliferative diabetic retinopathy. We judged the evidence to be moderate or low, depending on the outcome. This is partly related to reporting of trials conducted many years ago, after which panretinal photocoagulation has become the mainstay of treatment of proliferative diabetic retinopathy.\nFuture Cochrane Reviews on variations in the laser treatment protocol are planned. Future research on laser photocoagulation should investigate the combination of laser photocoagulation with newer treatments such as anti-vascular endothelial growth factors (anti-VEGFs).\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Is laser photocoagulation an effective treatment for diabetic retinopathy?"
}
] |
query-laysum
|
22913
|
[
{
"role": "user",
"content": "Abstract: Background\nCycling is an attractive form of transport. It is beneficial to the individual as a form of physical activity that may fit more readily into an individual’s daily routine, such as for cycling to work and to the shops, than other physical activities such as visiting a gym. Cycling is also beneficial to the wider community and the environment as a result of fewer motorised journeys. Cyclists are seen as vulnerable road users who are frequently in close proximity to larger and faster motorised vehicles. Cycling infrastructure aims to make cycling both more convenient and safer for cyclists. This review is needed to guide transport planning.\nObjectives\nTo:\n1. evaluate the effects of different types of cycling infrastructure on reducing cycling injuries in cyclists, by type of infrastructure;\n2. evaluate the effects of cycling infrastructure on reducing the severity of cycling injuries in cyclists;\n3. evaluate the effects of cycling infrastructure on reducing cycling injuries in cyclists with respect to age, sex and social group.\nSearch methods\nWe ran the most recent search on 2nd March 2015. We searched the Cochrane Injuries Group Specialised Register, CENTRAL (The Cochrane Library), MEDLINE (OvidSP), Embase Classic + Embase(OvidSP), PubMed and 10 other databases. We searched websites, handsearched conference proceedings, screened reference lists of included studies and previously published reviews and contacted relevant organisations.\nSelection criteria\nWe included randomised controlled trials, cluster randomised controlled trials, controlled before-after studies, and interrupted time series studies which evaluated the effect of cycling infrastructure (such as cycle lanes, tracks or paths, speed management, roundabout design) on cyclist injury or collision rates. Studies had to include a comparator, that is, either no infrastructure or a different type of infrastructure. We excluded studies that assessed collisions that occurred as a result of competitive cycling.\nData collection and analysis\nTwo review authors examined the titles and abstracts of papers obtained from searches to determine eligibility. Two review authors extracted data from the included trials and assessed the risk of bias. We carried out a meta-analysis using the random-effects model where at least three studies reported the same intervention and outcome. Where there were sufficient studies, as a secondary analysis we accounted for changes in cyclist exposure in the calculation of the rate ratios. We rated the quality of the evidence as ‘high’, ‘moderate’, ‘low’ or ‘very low’ according to the GRADE approach for the installation of cycle routes and networks.\nMain results\nWe identified 21 studies for inclusion in the review: 20 controlled before-after (CBA) studies and one interrupted time series (ITS) study. These evaluated a range of infrastructure including cycle lanes, advanced stop lines, use of colour, cycle tracks, cycle paths, management of the road network, speed management, cycle routes and networks, roundabout design and packages of measures. No studies reported medically-attended or self-reported injuries. There was no evidence that cycle lanes reduce the rate of cycle collisions (rate ratio 1.21, 95% CI 0.70 to 2.08). Taking into account cycle flow, there was no difference in collisions for cyclists using cycle routes and networks compared with cyclists not using cycle routes and networks (RR 0.40, 95% CI 0.15 to 1.05). 
There was statistically significant heterogeneity between the studies (I² = 75%, Chi² = 8.00 df = 2, P = 0.02) for the analysis adjusted for cycle flow. We judged the quality of the evidence regarding cycle routes and networks as very low and we are very uncertain about the estimate. These analyses are based on findings from CBA studies.\nFrom data presented narratively, the use of 20 mph speed restrictions in urban areas may be effective at reducing cyclist collisions. Redesigning specific parts of cycle routes that may be particularly busy or complex in terms of traffic movement may be beneficial to cyclists in terms of reducing the risk of collision. Generally, the conversion of intersections to roundabouts may increase the number of cycle collisions. In particular, the conversion of intersections to roundabouts with cycle lanes marked as part of the circulating carriageway increased cycle collisions. However, the conversion of intersections with and without signals to roundabouts with cycle paths may reduce the odds of collision. Both continuing a cycle lane across the mouth of a side road with a give way line onto the main road, and cycle tracks, may increase the risk of injury collisions in cyclists. However, these conclusions are uncertain, being based on a narrative review of findings from included studies. There is a lack of evidence that cycle paths or advanced stop lines either reduce or increase injury collisions in cyclists. There is also insufficient evidence to draw any robust conclusions concerning the effect of cycling infrastructure on cycling collisions in terms of severity of injury, sex, age, and level of social deprivation of the casualty.\nIn terms of quality of the evidence, there was little matching of intervention and control sites. In many studies, the comparability of the control area to the intervention site was unclear and few studies provided information on other cycling infrastructures that may be in place in the control and intervention areas. The majority of studies analysed data routinely collected by organisations external to the study team, thus reducing the risk of bias in terms of systematic differences in assessing outcomes between the control and intervention groups. Some authors did not take regression-to-mean effects into account when examining changes in collisions. Longer data collection periods pre- and post-installation would allow for regression-to-mean effects and also seasonal and time trends in traffic volume to be observed. Few studies adjusted cycle collision rates for exposure.\nAuthors' conclusions\nGenerally, there is a lack of high quality evidence to be able to draw firm conclusions as to the effect of cycling infrastructure on cycling collisions. There is a lack of rigorous evaluation of cycling infrastructure.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review aimed to answer the question \"what effect do different types of cycling infrastructure have on cycling injuries and collisions?\". Cycling infrastructure involves changes which are made to the road design or management of the road for cyclists. We aimed to include studies which looked at the effects of three types of cycling infrastructure:\n1. that which aims to manage the shared use of the road space for both motor vehicles and cyclists, for example, cycle lanes and shared use of a bus lane;\n2. that which separates cycle traffic from motorised traffic and may include special routes just for cycle traffic, for example, cycle tracks and cycle paths. These may be shared with pedestrians;\n3. management of the roads to include separation of motor vehicle and cycle traffic (for example, traffic rules that ban certain types of traffic from making particular turns) and cycle turns at traffic signals.\nComparisons were made with either routes or crossings that either did not have cycling infrastructure in place or had a different type of infrastructure. We were interested in studies with both adults and children. The primary outcome of interest was cycling injuries suffered as a result of a cycling collision. Secondary outcomes were collision rates for cyclists; and cycle counts, that is the number of cyclists using the infrastructure."
}
] |
query-laysum
|
22914
|
[
{
"role": "user",
"content": "Abstract: Background\nUrinary catheterisation is a common procedure, with approximately 15% to 25% of all people admitted to hospital receiving short-term (14 days or less) indwelling urethral catheterisation at some point during their care. However, the use of urinary catheters is associated with an increased risk of developing urinary tract infection. Catheter-associated urinary tract infection (CAUTI) is one of the most common hospital-acquired infections. It is estimated that around 20% of hospital-acquired bacteraemias arise from the urinary tract and are associated with mortality of around 10%.\nThis is an update of a Cochrane Review first published in 2005 and last published in 2007.\nObjectives\nTo assess the effects of strategies for removing short-term (14 days or less) indwelling catheters in adults.\nSearch methods\nWe searched the Cochrane Incontinence Specialised Register, which contains trials identified from CENTRAL, MEDLINE, MEDLINE In-Process, MEDLINE Epub Ahead of Print, CINAHL, ClinicalTrials.gov, WHO ICTRP, and handsearching of journals and conference proceedings (searched 17 March 2020), and reference lists of relevant articles.\nSelection criteria\nWe included all randomised controlled trials (RCTs) and quasi-RCTs that evaluated the effectiveness of practices undertaken for the removal of short-term indwelling urethral catheters in adults for any reason in any setting.\nData collection and analysis\nTwo review authors performed abstract and full-text screening of all relevant articles. At least two review authors independently performed risk of bias assessment, data abstraction and GRADE assessment.\nMain results\nWe included 99 trials involving 12,241 participants. We judged the majority of trials to be at low or unclear risk of selection and detection bias, with a high risk of performance bias. We also deemed most trials to be at low risk of attrition and reporting bias. None of the trials reported on quality of life. The majority of participants across the trials had undergone some form of surgical procedure.\nThirteen trials involving 1506 participants compared the removal of short-term indwelling urethral catheters at one time of day (early morning removal group between 6 am to 7 am) versus another (late night removal group between 10 pm to midnight). Catheter removal late at night may slightly reduce the risk of requiring recatheterisation compared with early morning (RR 0.71, 95% CI 0.53 to 0.96; 10 RCTs, 1920 participants; low-certainty evidence). We are uncertain if there is any difference between early morning and late night removal in the risk of developing symptomatic CAUTI (RR 1.00, 95% CI 0.61 to 1.63; 1 RCT, 41 participants; very low-certainty evidence). We are uncertain whether the time of day makes a difference to the risk of dysuria (RR 2.20; 95% CI 0.70 to 6.86; 1 RCT, 170 participants; low-certainty evidence).\nSixty-eight trials involving 9247 participants compared shorter versus longer durations of catheterisation. 
Shorter durations may increase the risk of requiring recatheterisation compared with longer durations (RR 1.81, 95% CI 1.35 to 2.41; 44 trials, 5870 participants; low-certainty evidence), but probably reduce the risk of symptomatic CAUTI (RR 0.52, 95% CI 0.45 to 0.61; 41 RCTs, 5759 participants; moderate-certainty evidence) and may reduce the risk of dysuria (RR 0.42, 95% CI 0.20 to 0.88; 7 RCTs; 1398 participants; low-certainty evidence).\nSeven trials involving 714 participants compared policies of clamping catheters versus free drainage. There may be little to no difference between clamping and free drainage in terms of the risk of requiring recatheterisation (RR 0.82, 95% CI 0.55 to 1.21; 5 RCTs; 569 participants; low-certainty evidence). We are uncertain if there is any difference in the risk of symptomatic CAUTI (RR 0.99, 95% CI 0.60 to 1.63; 2 RCTs, 267 participants; very low-certainty evidence) or dysuria (RR 0.84, 95% CI 0.46 to 1.54; 1 trial, 79 participants; very low-certainty evidence).\nThree trials involving 402 participants compared the use of prophylactic alpha blockers versus no intervention or placebo. We are uncertain if prophylactic alpha blockers before catheter removal has any effect on the risk of requiring recatheterisation (RR 1.18, 95% CI 0.58 to 2.42; 2 RCTs, 184 participants; very low-certainty evidence) or risk of symptomatic CAUTI (RR 0.20, 95% CI 0.01 to 4.06; 1 trial, 94 participants; very low-certainty evidence). None of the included trials investigating prophylactic alpha blockers reported the number of participants with dysuria.\nAuthors' conclusions\nThere is some evidence to suggest the removal of indwelling urethral catheters late at night rather than early in the morning may reduce the number of people who require recatheterisation. It appears that catheter removal after shorter compared to longer durations probably reduces the risk of symptomatic CAUTI and may reduce the risk of dysuria. However, it may lead to more people requiring recatheterisation. The other evidence relating to the risk of symptomatic CAUTI and dysuria is too uncertain to allow us to draw any conclusions.\nDue to the low certainty of the majority of the evidence presented here, the results of further research are likely to change our findings and to have a further impact on clinical practice. This systematic review has highlighted the need for a standardised set of core outcomes, which should be measured and reported by all future trials comparing strategies for the removal of short-term urinary catheters. Future trials should also study the effects of short-term indwelling urethral catheter removal on non-surgical patients.\n\nGiven the provided abstract, please respond to the following query: \"Early-morning compared to late-night removal\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Late-night catheter removal might reduce the risk of needing to have the catheter reinserted compared with early-morning removal. We are uncertain if there is any difference between early-morning and late-night removal for developing UTI or painful urination."
}
] |
query-laysum
|
22915
|
[
{
"role": "user",
"content": "Abstract: Background\nScoliosis in patients with Duchenne muscular dystrophy (DMD) is usually progressive and is treated with surgery. However, it is unclear whether the existing evidence is sufficiently scientifically rigorous to support a recommendation for spinal surgery for most patients with DMD and scoliosis. This is an updated review, and an updated search was undertaken in which no new studies were found for inclusion.\nObjectives\nTo determine the effectiveness and safety of spinal surgery in patients with DMD with scoliosis. We intended to test whether spinal surgery is effective in increasing survival and improving respiratory function, quality of life, and overall functioning, and whether spinal surgery is associated with severe adverse effects.\nSearch methods\nOn 16 June 2015 we searched the Cochrane Neuromuscular Disease Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and CINAHL Plus. We also searched ProQuest Dissertation and Thesis database (January 1980 to June 2015), the National Institutes of Health Clinical Trials Database (6 January 2015), and the WHO International Clinical Trials Registry Platform (17 June 2015), and checked references. We imposed no language restrictions.\nSelection criteria\nWe planned to include controlled clinical trials using random or quasi-random allocation of treatment evaluating all forms of spinal surgery for scoliosis in patients with DMD in the review. The control interventions would have been no treatment, non-operative treatment, or a different form of spinal surgery.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration. Two review authors independently examined the search results and evaluated the study characteristics against inclusion criteria in order to decide which studies to include in the review.\nMain results\nOf the 49 relevant studies we found, none met the inclusion criteria for the review because they were not clinical trials, but prospective or retrospective reviews of case series.\nAuthors' conclusions\nSince no randomized controlled clinical trials were available to evaluate the effectiveness of scoliosis surgery in patients with DMD, we can make no good evidence-based conclusion to guide clinical practice. Patients with scoliosis should be informed as to the uncertainty of benefits and potential risks of surgery for scoliosis. Randomized controlled trials are needed to investigate the effectiveness of scoliosis surgery, in terms of quality of life, functional status, respiratory function, and life expectancy.\n\nGiven the provided abstract, please respond to the following query: \"Key results and quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 49 relevant studies, however they were not clinical trials but prospective or retrospective reviews of case series. The quality of evidence was very low because no clinical trial was available. This is an updated review, and an updated search was undertaken in which no new studies were found."
}
] |
query-laysum
|
22916
|
[
{
"role": "user",
"content": "Abstract: Background\nIntraperitoneal adhesions are associated with considerable co-morbidity and have large financial and public health repercussions. They have secondary effects that include chronic pelvic pain, dyspareunia, subfertility and bowel obstruction. In women with adhesions, subsequent surgery is more difficult, often takes longer, and is associated with a higher complication rate (Broek 2013). The significant burden of adhesions has led to the development of several anti-adhesion agents, although there is disagreement as to their relative effectiveness.\nObjectives\nTo summarise evidence derived from Cochrane systematic reviews on the clinical safety and effectiveness of solid agents, gel agents, liquid agents and pharmacological agents, used as adjuvants to prevent formation of adhesions after gynaecological pelvic surgery.\nMethods\nThe Cochrane Database of Systematic Reviews was searched using the keyword 'adhesion' up to August 2014. The Cochrane information management system was also searched for any titles or protocols of reviews in progress. Two review authors independently extracted information from the reviews, with disagreements being resolved by a third review author. The quality of the included reviews was described in a narrative manner, and the AMSTAR tool was used to formally assess each review included in this overview. The quality of evidence provided in the original reviews was described using GRADE methods.\nMain results\nWe included two reviews, one with 18 studies comparing solid agents (oxidised regenerated cellulose expanded polytetrafluoroethylene, sodium hyaluronate and carboxymethylcellulose, and fibrin sheets) with control or with each other. The other review included 29 studies which compared liquid agents (4% icodextrin, 32% dextran, crystalloids), gel agents (carboxymethylcellulose and polyethylene oxide, polyethylene glycol gels, hyaluronic acid based gel, 0.5% ferric hyaluronate gel, sodium hyaluronate spray) and pharmacological agents (gonadotrophin-releasing hormone agonist, reteplase plasminogen activator, N,O-carboxymethyl chitosan, steroid agents, intraperitoneal noxytioline, intraperitoneal heparin, systemic promethazine) with control or each other. Both reviews met all of the criteria of the AMSTAR assessment.\nThe reviews included as outcomes both the primary outcomes of this overview (pelvic pain, pregnancy, live birth rate and quality of life (QoL)) and our secondary outcomes (adverse effects, presence or absence of adhesions at second–look laparoscopy (SLL) and adhesion score). However, neither of the reviews identified any primary studies of solid, gel or pharmacological agents that reported any of our primary outcomes. The only studies in either review that reported any of our primary outcomes were studies comparing liquid agents versus control (saline or Hartmann's solution), which reported pelvic pain (two studies), live birth (two studies) and pregnancy (three studies).\nAn external source of funding was stated for 25 of the 47 studies across both reviews; in 24 of these studies the funding was commercial.\nSolid agents (18 studies)\nNone of our primary outcomes were reported. Adverse events were reported as an outcome by only 9 of the 18 studies. 
These reported no adverse events.\nLiquid agents (nine studies)\nThere was no evidence of a difference between liquid agents and control (saline or Hartmann's solution) with respect to pelvic pain (odds ratio (OR) 0.65, 95% confidence interval (CI) 0.37 to 1.14, 1 study, n = 286, moderate quality evidence), pregnancy rate (OR 0.64, 95% CI 0.36 to 1.14, 3 studies, n = 310, moderate quality evidence) or live birth rate (OR 0.67, 95% CI 0.29 to 1.58, 2 studies, n = 208, moderate quality evidence). No studies of liquid agents reported QoL. Adverse events were not reported as an outcome by any of the nine studies.\nGel agents (seven studies)\nNone of our primary outcomes were reported. Adverse events were not reported as an outcome by any of the seven studies.\nPharmacological agents (seven studies)\nNone of our primary outcomes were reported. Adverse events were reported as an outcome by only one of the seven primary studies. This study reported no evidence of difference in ectopic pregnancy rates between intraperitoneal noxytioline and no treatment (OR 4.91, 95% CI 0.45 to 53.27, 1 study, n = 33, low quality evidence).\nAuthors' conclusions\nThere is insufficient evidence to allow us to draw any conclusions about the effectiveness and safety of anti-adhesion agents in gynaecological surgery, due to the lack of data on pelvic pain, fertility outcomes, quality of life or safety. A substantial proportion of research in this field has been funded by private companies that manufacture these agents, and further high powered, independent trials will be needed before definitive conclusions can be made.\n\nGiven the provided abstract, please respond to the following query: \"Search results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our search up to 31 August 2014 identified two Cochrane reviews. One focused on solid agents, while the other focused on liquid and gel anti-adhesion agents as well as drugs that may prevent adhesions from forming. All reviews were high quality, though the quality of specific comparisons in each review ranged from low to high as a result of the limitations of the original studies. The characteristics of the women who underwent the trials were recorded poorly among the trials."
}
] |
query-laysum
|
22917
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with end-stage kidney disease (ESKD) treated with dialysis are frequently affected by major depression. Dialysis patients have prioritised depression as a critically important clinical outcome in nephrology trials. Psychological and social support are potential treatments for depression, although a Cochrane review in 2005 identified zero eligible studies. This is an update of the Cochrane review first published in 2005.\nObjectives\nTo assess the effect of using psychosocial interventions versus usual care or a second psychosocial intervention for preventing and treating depression in patients with ESKD treated with dialysis.\nSearch methods\nWe searched Cochrane Kidney and Transplant's Register of Studies up to 21 June 2019 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included randomised controlled trials (RCTs) and quasi-RCTs of psychosocial interventions for prevention and treatment of depression among adults treated with long-term dialysis. We assessed effects of interventions on changes in mental state (depression, anxiety, cognition), suicide, health-related quality of life (HRQoL), withdrawal from dialysis treatment, withdrawal from intervention, death (any cause), hospitalisation and adverse events.\nData collection and analysis\nTwo authors independently selected studies for inclusion and extracted study data. We applied the Cochrane 'Risk of Bias' tool and used the GRADE process to assess evidence certainty. We estimated treatment effects using random-effects meta-analysis. Results for continuous outcomes were expressed as a mean difference (MD) or as a standardised mean difference (SMD) when investigators used different scales. Dichotomous outcomes were expressed as risk ratios. All estimates were reported together with 95% confidence intervals (CI).\nMain results\nWe included 33 studies enrolling 2056 participants. Twenty-six new studies were added to this 2019 update. Seven studies originally excluded from the 2005 review were included as they met the updated review eligibility criteria, which have been expanded to include RCTs in which participants did not meet criteria for depression as an inclusion criterion.\nPsychosocial interventions included acupressure, cognitive-behavioural therapy, counselling, education, exercise, meditation, motivational interviewing, relaxation techniques, social activity, spiritual practices, support groups, telephone support, visualisation, and voice-recording of a psychological intervention.\nThe duration of study follow-up ranged between three weeks and one year. Studies included between nine and 235 participants. The mean study age ranged between 36.1 and 73.9 years.\nRandom sequence generation and allocation concealment were at low risk of bias in eight and one studies respectively. One study reported low risk methods for blinding of participants and investigators, and outcome assessment was blinded in seven studies. 
Twelve studies were at low risk of attrition bias, eight studies were at low risk of selective reporting bias, and 21 studies were at low risk of other potential sources of bias.\nCognitive behavioural therapy probably improves depressive symptoms measured using the Beck Depression Inventory (4 studies, 230 participants: MD -6.10, 95% CI -8.63 to -3.57), based on moderate certainty evidence. Cognitive behavioural therapy compared to usual care probably improves HRQoL measured either with the Kidney Disease Quality of Life Instrument Short Form or the Quality of Life Scale, with a 0.5 standardised mean difference representing a moderate effect size (4 studies, 230 participants: SMD 0.51, 95% CI 0.19 to 0.83) , based on moderate certainty evidence. Cognitive behavioural therapy may reduce major depression symptoms (one study) and anxiety, and increase self-efficacy (one study). Cognitive behavioural therapy studies did not report hospitalisation.\nWe found low-certainty evidence that counselling may slightly reduce depressive symptoms measured with the Beck Depression Inventory (3 studies, 99 participants: MD -3.84, 95% CI -6.14 to -1.53) compared to usual care. Counselling reported no difference in HRQoL (one study). Counselling studies did not measure risk of major depression, suicide, or hospitalisation.\nExercise may reduce or prevent major depression (3 studies, 108 participants: RR 0.47, 95% CI 0.27 to 0.81), depression of any severity (3 studies, 108 participants: RR 0.69, 95% CI 0.54 to 0.87) and improve HRQoL measured with Quality of Life Index score (2 studies, 64 participants: MD 3.06, 95% CI 2.29 to 3.83) compared to usual care with low certainty. With moderate certainty, exercise probably improves depression symptoms measured with the Beck Depression Inventory (3 studies, 108 participants: MD -7.61, 95% CI -9.59 to -5.63). Exercise may reduce anxiety (one study). No exercise studies measured suicide risk or withdrawal from dialysis.\nWe found moderate-certainty evidence that relaxation techniques probably reduce depressive symptoms measured with the Beck Depression Inventory (2 studies, 122 participants: MD -5.77, 95% CI -8.76 to -2.78). Relaxation techniques reported no difference in HRQoL (one study). Relaxation studies did not measure risk of major depression or suicide.\nSpiritual practices have uncertain effects on depressive symptoms measured either with the Beck Depression Inventory or the Brief Symptom Inventory (2 studies, 116 participants: SMD -1.00, 95% CI -3.52 to 1.53; very low certainty evidence). No differences between spiritual practices and usual care were reported on anxiety (one study), and HRQoL (one study). No study of spiritual practices evaluated effects on suicide risk, withdrawal from dialysis or hospitalisation.\nThere were few or no data on acupressure, telephone support, meditation and adverse events related to psychosocial interventions.\nAuthors' conclusions\nCognitive behavioural therapy, exercise or relaxation techniques probably reduce depressive symptoms (moderate-certainty evidence) for adults with ESKD treated with dialysis. Cognitive behavioural therapy probably increases health-related quality of life. Evidence for spiritual practices, acupressure, telephone support, and meditation is of low certainty . 
Similarly, evidence for effects of psychosocial interventions on suicide risk, major depression, hospitalisation, withdrawal from dialysis, and adverse events is of low or very low certainty.\n\nGiven the provided abstract, please respond to the following query: \"What is the issue?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Depression is frequently experienced by people treated with dialysis. Dialysis patients consider treatments that help with depression to be a high priority. Despite that fact that psychosocial interventions have been shown to decrease depression in various chronic diseases, we are very uncertain about whether treatments prevent or treat depression for dialysis patients as studies are rare."
}
] |
query-laysum
|
22918
|
[
{
"role": "user",
"content": "Abstract: Background\nHepatitis is a viral infection of the liver. It is mainly transmitted between people through contact with infected blood, frequently from mother to baby in-utero. Hepatitis B poses significant risk to the fetus and up to 85% of infants infected by their mothers at birth develop chronic hepatitis B virus (HBV) infection. Hepatitis B immunoglobulin (HBIG) is a purified solution of human immunoglobulin that could be administered to the mother, newborn, or both. HBIG offers protection against HBV infection when administered to pregnant women who test positive for hepatitis B envelope antigen (HBeAg) or hepatitis B surface antigen (HBsAg), or both. When HBIG is administered to pregnant women, the antibodies passively diffuse across the placenta to the child. This materno-fetal diffusion is maximal during the third trimester of pregnancy. Up to 1% to 9% infants born to HBV-carrying mothers still have HBV infection despite the newborn receiving HBIG plus active HBV vaccine in the immediate neonatal period. This suggests that additional intervention such as HBIG administration to the mother during the antenatal period could be beneficial to reduce the transmission rate in utero.\nObjectives\nTo determine the benefits and harms of hepatitis B immunoglobulin (HBIG) administration to pregnant women during their third trimester of pregnancy for the prevention of mother-to-child transmission of hepatitis B virus infection.\nSearch methods\nWe searched the The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE Ovid, Embase Ovid, Science Citation Index Expanded (Web of Science), SCOPUS, African Journals OnLine, and INDEX MEDICUS up to June 2016. We searched ClinicalTrials.gov and portal of the WHO International Clinical Trials Registry Platform (ICTRP) in December 2016.\nSelection criteria\nWe included randomised clinical trials comparing HBIG versus placebo or no intervention in pregnant women with HBV.\nData collection and analysis\nTwo authors extracted data independently. We analysed dichotomous outcome data using risk ratio (RR) and continuous outcome data using mean difference (MD) with 95% confidence intervals (CI). For meta-analyses, we used a fixed-effect model and a random-effects model, along with an assessment of heterogeneity. If there were statistically significant discrepancies in the results, we reported the more conservative point estimate. If the two estimates were equal, we used the estimate with the widest CI as our main result. We assessed bias control using the Cochrane Hepato-Biliary Group suggested bias risk domains and risk of random errors using Trial Sequential Analysis (TSA). We assessed the quality of the evidence using GRADE.\nMain results\nAll 36 included trials originated from China and were at overall high risk of bias. The trials included 6044 pregnant women who were HBsAg, HBeAg, or hepatitis B virus DNA (HBV-DNA) positive. Only seven trials reported inclusion of HBeAg-positive mothers. All 36 trials compared HBIG versus no intervention. None of the trials used placebo.\nMost of the trials assessed HBIG 100 IU (two trials) and HBIG 200 IU (31 trials). The timing of administration of HBIG varied; 30 trials administered three doses of HBIG 200 IU at 28, 32, and 36 weeks of pregnancy. None of the trials reported all-cause mortality or other serious adverse events in the mothers or babies. 
Serological signs of hepatitis B infection of the newborns were reported as HBsAg, HBeAg, and HBV-DNA positive results at end of follow-up. Twenty-nine trials reported HBsAg status in newborns (median 1.2 months of follow-up after birth; range 0 to 12 months); seven trials reported HBeAg status (median 1.1 months of follow-up after birth; range 0 to 12 months); and 16 trials reported HBV-DNA status (median 1.2 months of follow-up; range 0 to 12 months). HBIG reduced mother-to-child transmission (MTCT) of HBsAg when compared with no intervention (179/2769 (6%) with HBIG versus 537/2541 (21%) with no intervention; RR 0.30, TSA-adjusted CI 0.20 to 0.52; I2 = 36%; 29 trials; 5310 participants; very low quality evidence). HBIG also reduced MTCT as measured by HBV-DNA positivity (104/1112 (9%) with HBIG versus 382/1018 (38%) with no intervention; RR 0.25, TSA-adjusted CI 0.22 to 0.27; I2 = 84%; 16 trials; 2130 participants; low quality evidence). TSA supported both results. Meta-analysis showed that maternal HBIG did not decrease HBeAg in newborns compared with no intervention (184/889 (21%) with HBIG versus 232/875 (27%) with no intervention; RR 0.68, TSA-adjusted CI 0.04 to 6.37; I2 = 90%; 7 trials; 1764 participants; very low quality evidence). TSA could neither support nor refute this observation as data were too sparse. None of the trials reported adverse events of the immunoglobulins in the newborns, local or systemic adverse events in the mothers, or cost-effectiveness of treatment.\nAuthors' conclusions\nDue to the very low to low quality evidence found in this review, we are uncertain of the benefit of antenatal HBIG administration to HBV-infected mothers on newborn outcomes such as HBsAg, HBV-DNA, and HBeAg status compared with no intervention. HBsAg and HBeAg are surrogate outcomes (raising a risk of indirectness), and the findings for these outcomes need to be interpreted with caution. We found no data on newborn mortality or maternal mortality or both, or other serious adverse events. Well-designed randomised clinical trials are needed to determine the benefits and harms of HBIG versus placebo in prevention of MTCT of HBV.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The studies assessed only hepatitis B surface antigen (HBsAg) (proteins on the surface of the HBV that cause immune system of the body to make antibodies when exposed to HBV), hepatitis B virus DNA (HBV-DNA) (self-dividing material of the HBV which carries its genetic information), and hepatitis B envelope antigen (HBeAg) (blood proteins that shows that the virus is still active in the liver) status in newborns. There was no information about the effects of HBIG on death from all causes (newborn or mother), antibodies to hepatitis B core antigen (proteins made by the immune system which bind to HBV and cause them to be destroyed), cost-effectiveness of HBIG, and side effects.\nAntenatal (before birth) HBIG might have an effect on preventing mother-to-child transmission of HBV as more treated babies than non-treated babies had no HBsAg or HBV-DNA; however, both results could have been affected by the way the trials were conducted and were at high risk of bias. The authors could draw no conclusions about the side effects of HBIG for pregnant women with HBV infection. Well-designed clinical trials with low risks of bias are needed to establish the benefits and harms of HBIG compared with no treatment in pregnant women with HBV."
}
] |
query-laysum
|
22919
|
[
{
"role": "user",
"content": "Abstract: Background\nThe effectiveness of interventions to increase influenza vaccination uptake in people aged 60 years and older varies by country and participant characteristics. This review updates versions published in 2010 and 2014.\nObjectives\nTo assess access, provider, system, and societal interventions to increase the uptake of influenza vaccination in people aged 60 years and older in the community.\nSearch methods\nWe searched CENTRAL, which includes the Cochrane Acute Respiratory Infections Group's Specialised Register, MEDLINE, Embase, CINAHL, and ERIC for this update, as well as WHO ICTRP and ClinicalTrials.gov for ongoing studies to 7 December 2017. We also searched reference lists of included studies.\nSelection criteria\nRandomised controlled trials (RCTs) and cluster-RCTs of interventions to increase influenza vaccination in people aged 60 years or older in the community.\nData collection and analysis\nWe used standard methodological procedures as specified by Cochrane.\nMain results\nWe included 3 new RCTs for this update (total 61 RCTs; 1,055,337 participants). Trials involved people aged 60 years and older living in the community in high-income countries. Heterogeneity limited some meta-analyses. We assessed studies as at low risk of bias for randomisation (38%), allocation concealment (11%), blinding (44%), and selective reporting (100%). Half (51%) had missing data. We assessed the evidence as low-quality. We identified three levels of intervention intensity: low (e.g. postcards), medium (e.g. personalised phone calls), and high (e.g. home visits, facilitators).\nIncreasing community demand (12 strategies, 41 trials, 53 study arms, 767,460 participants)\nOne successful intervention that could be meta-analysed was client reminders or recalls by letter plus leaflet or postcard compared to reminder (odds ratio (OR) 1.11, 95% confidence interval (CI) 1.07 to 1.15; 3 studies; 64,200 participants). Successful interventions tested by single studies were patient outreach by retired teachers (OR 3.33, 95% CI 1.79 to 6.22); invitations by clinic receptionists (OR 2.72, 95% CI 1.55 to 4.76); nurses or pharmacists educating and nurses vaccinating patients (OR 152.95, 95% CI 9.39 to 2490.67); medical students counselling patients (OR 1.62, 95% CI 1.11 to 2.35); and multiple recall questionnaires (OR 1.13, 95% CI 1.03 to 1.24).\nSome interventions could not be meta-analysed due to significant heterogeneity: 17 studies tested simple reminders (the 95% CI was entirely above unity in 11 trials implying all 11 interventions increased vaccination rates); 16 tested personalised reminders (the 95% CI was entirely above unity in 12 trials implying all 12 interventions increased vaccination rates ); 2 investigated customised compared to form letters (the 95% CI was above unity in both trials implying both interventions increased vaccination rates); and 4 studies examined the impact of health risk appraisals (the 95% CI was above unity in all 4 trials implying all 4 interventions increased vaccination rates). One study of a lottery for free groceries was not effective.\nEnhancing vaccination access (6 strategies, 8 trials, 10 arms, 9353 participants)\nWe meta-analysed results from 2 studies of home visits (OR 1.30, 95% CI 1.05 to 1.61), and 2 studies that tested free vaccine compared to patient payment for vaccine (OR 2.36, 95% CI 1.98 to 2.82). 
We were unable to conduct meta-analyses of 2 studies of home visits by nurses plus a physician care plan (the 95% CI was entirely above unity in both trials implying both interventions increased vaccination rates) and 2 studies of free vaccine compared to no intervention (the 95% CI was entirely above unity in both trials implying both interventions increased vaccination rates). One study of group visits (OR 27.2, 95% CI 1.60 to 463.3) was effective, and 1 study of home visits compared to safety interventions was not.\nProvider- or system-based interventions (11 strategies, 15 trials, 17 arms, 278,524 participants)\nOne successful intervention that could be meta-analysed focused on payments to physicians (OR 2.22, 95% CI 1.77 to 2.77). Successful interventions tested by individual studies were: reminding physicians to vaccinate all patients (OR 2.47, 95% CI 1.53 to 3.99); posters in clinics presenting vaccination rates and encouraging competition between doctors (OR 2.03, 95% CI 1.86 to 2.22); and chart reviews and benchmarking to the rates achieved by the top 10% of physicians (OR 3.43, 95% CI 2.37 to 4.97).\nWe were unable to meta-analyse 4 studies that looked at physician reminders (the 95% CI was entirely above unity in 3 trials implying all 3 interventions increased vaccination rates) and 3 studies of facilitator encouragement of vaccination (the 95% CI was entirely above unity in 2 trials implying both interventions increased vaccination rates). Interventions that were not effective were: comparing letters on discharge from hospital to letters to general practitioners; posters plus postcards versus posters alone; educational reminders, academic detailing, and peer comparisons compared to mailed educational materials; educational outreach plus feedback to teams versus written feedback; and an intervention to increase staff vaccination rates.\nInterventions at the societal level\nNo studies reported on societal-level interventions.\nStudy funding sources\nStudies were funded by government health organisations (n = 33), foundations (n = 9), organisations that provided healthcare services in the studies (n = 3), and a pharmaceutical company offering free vaccines (n = 1). Fifteen studies did not report study funding sources.\nAuthors' conclusions\nWe identified interventions that demonstrated significant positive effects of low (postcards), medium (personalised phone calls), and high (home visits, facilitators) intensity that increase community demand for vaccination, enhance access, and improve provider/system response. The overall GRADE assessment of the evidence was moderate quality. Conclusions are unchanged from the 2014 review.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does increasing demand, vaccination access, and provider activity increase influenza vaccination rates in people aged 60 years and older living in the community?"
}
] |
query-laysum
|
22920
|
[
{
"role": "user",
"content": "Abstract: Background\nProvision of assistive technology is a widely used intervention for people with rheumatoid arthritis. Assistive technology is any item used to increase or maintain functional ability in individuals with disabilities. It includes a wide range of products, from low-technology devices to technologically complex equipment. Yet, there are few systematic reviews on the effectiveness of assistive technology in this population.\nObjectives\nTo assess the benefits of assistive technology for adults with rheumatoid arthritis in terms of improving functional ability and reducing pain, and to assess potential adverse effects in terms of psychological discomfort, personal injury or material damage related to device use.\nSearch methods\nWe searched the following databases: CENTRAL (The Cochrane Library), MEDLINE, EMBASE, CINAHL, AMED, ISI Web of Science, PEDro, and OTseeker (to October 2008). In addition, we scanned reference lists, sought grey literature, and had personally communicated with authors. We updated the literature searches in January 2009.\nSelection criteria\nIncluded study designs were randomised controlled trials, clinical controlled trials, controlled before and after studies, and interrupted time series where the effectiveness of assistive technology was evaluated. In addition, comparative observational studies were included if addressing adverse effects.\nData collection and analysis\nTwo authors independently selected trials, extracted data, and assessed study quality. Investigators were contacted to obtain missing information.\nMain results\nOnly one randomised controlled trial with 29 participants was included. The study compared the use of an eye drop device to a standard bottle in people with rheumatoid arthritis suffering from persistent dry eyes. The study was considered to have low quality of evidence. The proportions with observed difficulties when using the device to squeeze out drops and getting the drops in the eyes were 10% and 14%, respectively. This compared to 52% and 52% when using the standard bottle (P = 0.001; P = 0.003, respectively). The proportions of participants reporting difficulties with squeezing the bottle, controlling the number of drops, and aiming the drops when using the device were 40%, 44%, and 46% respectively, while using the standard bottle the proportions with difficulties were 72%, 84%, and 76% (P = 0.001; P = 0.003; P = 0.031, respectively).\nAuthors' conclusions\nOnly one trial met the inclusion criteria for this review. Thus, there is very limited evidence for the effect of assistive technology for adults with rheumatoid arthritis and, therefore, an urgent need for high-quality research addressing the effectiveness of commonly used interventions.\n\nGiven the provided abstract, please respond to the following query: \"The review shows that in people with rheumatoid arthritis\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- An eye drop device may improve the ability to squeeze out drops, aim drops, and control the number of drops. \n- An eye drop device may prevent side effects caused by the eye drop bottle touching the eye itself."
}
] |
query-laysum
|
22921
|
[
{
"role": "user",
"content": "Abstract: Background\nInfertility is a prevalent problem that has significant consequences for individuals, families, and the community. Modifiable lifestyle factors may affect the chance of people with infertility having a baby. However, no guideline is available about what preconception advice should be offered. It is important to determine what preconception advice should be given to people with infertility and to evaluate whether this advice helps them make positive behavioural changes to improve their lifestyle and their chances of conceiving.\nObjectives\nTo assess the safety and effectiveness of preconception lifestyle advice on fertility outcomes and lifestyle behavioural changes for people with infertility.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register of controlled trials, CENTRAL, MEDLINE, Embase, PsycINFO, AMED, CINAHL, trial registers, Google Scholar, and Epistemonikos in January 2021; we checked references and contacted field experts to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs), randomised cross-over studies, and cluster-randomised studies that compared at least one form of preconception lifestyle advice with routine care or attention control for people with infertility.\nData collection and analysis\nWe used standard methodological procedures recommended by Cochrane. Primary effectiveness outcomes were live birth and ongoing pregnancy. Primary safety outcomes were adverse events and miscarriage. Secondary outcomes included reported behavioural changes in lifestyle, birth weight, gestational age, clinical pregnancy, time to pregnancy, quality of life, and male factor infertility outcomes. We assessed the overall quality of evidence using GRADE criteria.\nMain results\nWe included in the review seven RCTs involving 2130 participants. Only one RCT included male partners. Three studies compared preconception lifestyle advice on a combination of topics with routine care or attention control. Four studies compared preconception lifestyle advice on one topic (weight, alcohol intake, or smoking) with routine care for women with infertility and specific lifestyle characteristics. The evidence was of low to very low-quality. The main limitations of the included studies were serious risk of bias due to lack of blinding, serious imprecision, and poor reporting of outcome measures.\nPreconception lifestyle advice on a combination of topics versus routine care or attention control \nPreconception lifestyle advice on a combination of topics may result in little to no difference in the number of live births (risk ratio (RR) 0.93, 95% confidence interval (CI) 0.79 to 1.10; 1 RCT, 626 participants), but the quality of evidence was low. No studies reported on adverse events or miscarriage. Due to very low-quality evidence, we are uncertain whether preconception lifestyle advice on a combination of topics affects lifestyle behavioural changes: body mass index (BMI) (mean difference (MD) -1.06 kg/m², 95% CI -2.33 to 0.21; 1 RCT, 180 participants), vegetable intake (MD 12.50 grams/d, 95% CI -8.43 to 33.43; 1 RCT, 264 participants), alcohol abstinence in men (RR 1.08, 95% CI 0.74 to 1.58; 1 RCT, 210 participants), or smoking cessation in men (RR 1.01, 95% CI 0.91 to 1.12; 1 RCT, 212 participants). 
Preconception lifestyle advice on a combination of topics may result in little to no difference in the number of women with adequate folic acid supplement use (RR 0.98, 95% CI 0.95 to 1.01; 2 RCTs, 850 participants; I² = 4%), alcohol abstinence (RR 1.07, 95% CI 0.99 to 1.17; 1 RCT, 607 participants), and smoking cessation (RR 1.01, 95% CI 0.98 to 1.04; 1 RCT, 606 participants), on low quality evidence. No studies reported on other behavioural changes.\nPreconception lifestyle advice on weight versus routine care \nStudies on preconception lifestyle advice on weight were identified only in women with infertility and obesity. Compared to routine care, we are uncertain whether preconception lifestyle advice on weight affects the number of live births (RR 0.94, 95% CI 0.62 to 1.43; 2 RCTs, 707 participants; I² = 68%; very low-quality evidence), adverse events including gestational diabetes (RR 0.78, 95% CI 0.48 to 1.26; 1 RCT, 317 participants; very low-quality evidence), hypertension (RR 1.07, 95% CI 0.66 to 1.75; 1 RCT, 317 participants; very low-quality evidence), or miscarriage (RR 1.50, 95% CI 0.95 to 2.37; 1 RCT, 577 participants; very low-quality evidence). Regarding lifestyle behavioural changes for women with infertility and obesity, preconception lifestyle advice on weight may slightly reduce BMI (MD -1.30 kg/m², 95% CI -1.58 to -1.02; 1 RCT, 574 participants; low-quality evidence). Due to very low-quality evidence, we are uncertain whether preconception lifestyle advice affects the percentage of weight loss, vegetable and fruit intake, alcohol abstinence, or physical activity. No studies reported on other behavioural changes.\nPreconception lifestyle advice on alcohol intake versus routine care \nStudies on preconception lifestyle advice on alcohol intake were identified only in at-risk drinking women with infertility. We are uncertain whether preconception lifestyle advice on alcohol intake affects the number of live births (RR 1.15, 95% CI 0.53 to 2.50; 1 RCT, 37 participants; very low-quality evidence) or miscarriages (RR 1.31, 95% CI 0.21 to 8.34; 1 RCT, 37 participants; very low-quality evidence). One study reported on behavioural changes for alcohol consumption but not as defined in the review methods. No studies reported on adverse events or other behavioural changes.\nPreconception lifestyle advice on smoking versus routine care \nStudies on preconception lifestyle advice on smoking were identified only in smoking women with infertility. No studies reported on live birth, ongoing pregnancy, adverse events, or miscarriage. One study reported on behavioural changes for smoking but not as defined in the review methods.\nAuthors' conclusions\nLow-quality evidence suggests that preconception lifestyle advice on a combination of topics may result in little to no difference in the number of live births. Evidence was insufficient to allow conclusions on the effects of preconception lifestyle advice on adverse events and miscarriage and on safety, as no studies were found that looked at these outcomes, or the studies were of very low quality. This review does not provide clear guidance for clinical practice in this area. However, it does highlight the need for high-quality RCTs to investigate preconception lifestyle advice on a combination of topics and to assess relevant effectiveness and safety outcomes in men and women with infertility.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence was of low to very low quality. The main limitations of the evidence were poor study methods in included studies (lack of blinding) and lack of (precision in) findings for live birth, safety outcomes, and reported behavioural changes."
}
] |
query-laysum
|
22922
|
[
{
"role": "user",
"content": "Abstract: Background\nVirtual reality (VR) computer technology creates a simulated environment, perceived as comparable to the real world, with which users can actively interact. The effectiveness of VR distraction on acute pain intensity in children is uncertain.\nObjectives\nTo assess the effectiveness and adverse effects of virtual reality (VR) distraction interventions for children (0 to 18 years) with acute pain in any healthcare setting.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, PsycINFO and four trial registries to October 2019. We also searched reference lists of eligible studies, handsearched relevant journals and contacted study authors.\nSelection criteria\nRandomised controlled trials (RCTs), including cross-over and cluster-RCTs, comparing VR distraction to no distraction, non-VR distraction or other VR distraction.\nData collection and analysis\nWe used standard Cochrane methodological processes. Two reviewers assessed risk of bias and extracted data independently. The primary outcome was acute pain intensity (during procedure, and up to one hour post-procedure). Secondary outcomes were adverse effects, child satisfaction with VR, pain-related distress, parent anxiety, rescue analgesia and cost. We used GRADE and created 'Summary of findings' tables.\nMain results\nWe included 17 RCTs (1008 participants aged four to 18 years) undergoing various procedures in healthcare settings. We did not pool data because the heterogeneity in population (i.e. diverse ages and developmental stages of children and their different perceptions and reactions to pain) and variations in procedural conditions (e.g. phlebotomy, burn wound dressings, physical therapy sessions), and consequent level of pain experienced, made statistical pooling of data impossible. We narratively describe results.\nWe judged most studies to be at unclear risk of selection bias, high risk of performance and detection bias, and high risk of bias for small sample sizes. Across all comparisons and outcomes, we downgraded the certainty of evidence to low or very low due to serious study limitations and serious or very serious indirectness. We also downgraded some of the evidence for very serious imprecision.\n1: VR distraction versus no distraction\nAcute pain intensity: during procedure\nSelf-report: one study (42 participants) found no beneficial effect of non-immersive VR (very low-certainty evidence).\nObserver-report: no data.\nBehavioural measurements (observer-report): two studies, 62 participants; low-certainty evidence. One study (n = 42) found no beneficial effect of non-immersive VR. One study (n = 20) found a beneficial effect favouring immersive VR.\nAcute pain intensity: post-procedure\nSelf-report: 10 studies, 461 participants; very low-certainty evidence. Four studies (n = 95) found no beneficial effect of immersive and semi-immersive or non-immersive VR. Five studies (n = 357) found a beneficial effect favouring immersive VR. Another study (n = 9) reported less pain in the VR group.\nObserver-report: two studies (216 participants; low-certainty evidence) found a beneficial effect of immersive VR, as reported by primary caregiver/parents or nurses. One study (n = 80) found a beneficial effect of immersive VR, as reported by researchers.\nBehavioural measurements (observer-report): one study (42 participants) found no beneficial effect of non-immersive VR (very low-certainty evidence).\nAdverse effects: five studies, 154 participants; very low-certainty evidence. 
Three studies (n = 53) reported no adverse effects. Two studies (n = 101) reported mild adverse effects (e.g. nausea) in the VR group.\n2: VR distraction versus other non-VR distraction\nAcute pain intensity:during procedure\nSelf-report, observer-report and behavioural measurements (observer-report): two studies, 106 participants:\nSelf-report: one study (n = 65) found a beneficial effect favouring immersive VR and one (n = 41) found no evidence of a difference in mean pain change scores (very low-certainty evidence).\nObserver-report: one study (n = 65) found a beneficial effect favouring immersive VR and one (n = 41) found no evidence of a difference in mean pain change scores (low-certainty evidence).\nBehavioural measurements (observer-report): one study (n = 65) found a beneficial effect favouring immersive VR and one (n = 41) reported a difference in mean pain change scores with fewer pain behaviours in VR group (low-certainty evidence).\nAcute pain intensity: post-procedure\nSelf-report: eight studies, 575 participants; very low-certainty evidence. Two studies (n = 146) found a beneficial effect favouring immersive VR. Two studies (n = 252) reported a between-group difference favouring immersive VR. One study (n = 59) found no beneficial effect of immersive VR versus television and Child Life non-VR distraction. One study (n = 18) found no beneficial effect of semi-immersive VR. Two studies (n = 100) reported no between-group difference.\nObserver-report: three studies, 187 participants; low-certainty evidence. One study (n = 81) found a beneficial effect favouring immersive VR for parent, nurse and researcher reports. One study (n = 65) found a beneficial effect favouring immersive VR for caregiver reports. Another study (n = 41) reported no evidence of a difference in mean pain change scores.\nBehavioural measurements (observer-report): two studies, 106 participants; low-certainty evidence. One study (n = 65) found a beneficial effect favouring immersive VR. Another study (n = 41) reported no evidence of a difference in mean pain change scores.\nAdverse effects: six studies, 429 participants; very low-certainty evidence. Three studies (n = 229) found no evidence of a difference between groups. Two studies (n = 141) reported no adverse effects in VR group. One study (n = 59) reported no beneficial effect in reducing estimated cyber-sickness before and after VR immersion.\n3: VR distraction versus other VR distraction\nWe did not identify any studies for this comparison.\nAuthors' conclusions\nWe found low-certainty and very low-certainty evidence of the effectiveness of VR distraction compared to no distraction or other non-VR distraction in reducing acute pain intensity in children in any healthcare setting. This level of uncertainty makes it difficult to interpret the benefits or lack of benefits of VR distraction for acute pain in children. Most of the review primary outcomes were assessed by only two or three small studies. We found limited data for adverse effects and other secondary outcomes. Future well-designed, large, high-quality trials may have an important impact on our confidence in the results.\n\nGiven the provided abstract, please respond to the following query: \"How did we identify and evaluate the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the medical literature for randomised controlled studies (clinical studies where people are randomly put into one of two or more treatment groups), because these provide the most robust evidence about the effects of a treatment. We compared and summarized their results. Finally, we rated our confidence in the evidence, based on factors such as study methods and sizes, and the consistency of findings across studies."
}
] |
query-laysum
|
22923
|
[
{
"role": "user",
"content": "Abstract: Background\nScalpels or electrosurgery can be used to make abdominal incisions. The potential benefits of electrosurgery may include reduced blood loss, dry and rapid separation of tissue, and reduced risk of cutting injury to surgeons. Postsurgery risks possibly associated with electrosurgery may include poor wound healing and complications such as surgical site infection.\nObjectives\nTo assess the effects of electrosurgery compared with scalpel for major abdominal incisions.\nSearch methods\nThe first version of this review included studies published up to February 2012. In October 2016, for this first update, we searched the Cochrane Wounds Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Ovid MEDLINE (including In-Process & Other Non-Indexed Citations), Ovid Embase, EBSCO CINAHL Plus, and the registry for ongoing trials (www.clinicaltrials.gov). We did not apply date or language restrictions.\nSelection criteria\nStudies considered in this analysis were randomised controlled trials (RCTs) that compared electrosurgery to scalpel for creating abdominal incisions during major open abdominal surgery. Incisions could be any orientation (vertical, oblique, or transverse) and surgical setting (elective or emergency). Electrosurgical incisions were made through major layers of the abdominal wall, including subcutaneous tissue and the musculoaponeurosis (a sheet of connective tissue that attaches muscles), regardless of the technique used to incise the skin and peritoneum. Scalpel incisions were made through major layers of abdominal wall including skin, subcutaneous tissue, and musculoaponeurosis, regardless of the technique used to incise the abdominal peritoneum. Primary outcomes analysed were wound infection, time to wound healing, and wound dehiscence. Secondary outcomes were postoperative pain, wound incision time, wound-related blood loss, and adhesion or scar formation.\nData collection and analysis\nTwo review authors independently carried out study selection, data extraction, and risk of bias assessment. When necessary, we contacted trial authors for missing data. We calculated risk ratios (RR) and 95% confidence intervals (CI) for dichotomous data, and mean differences (MD) and 95% CI for continuous data.\nMain results\nThe updated search found seven additional RCTs making a total of 16 included studies (2769 participants). All studies compared electrosurgery to scalpel and were considered in one comparison. Eleven studies, analysing 2178 participants, reported on wound infection. There was no clear difference in wound infections between electrosurgery and scalpel (7.7% for electrosurgery versus 7.4% for scalpel; RR 1.07, 95% CI 0.74 to 1.54; low-certainty evidence downgraded for risk of bias and serious imprecision). None of the included studies reported time to wound healing.\nIt is uncertain whether electrosurgery decreases wound dehiscence compared to scalpel (2.7% for electrosurgery versus 2.4% for scalpel; RR 1.21, 95% CI 0.58 to 2.50; 1064 participants; 6 studies; very low-certainty evidence downgraded for risk of bias and very serious imprecision).\nThere was no clinically important difference in incision time between electrosurgery and scalpel (MD -45.74 seconds, 95% CI -88.41 to -3.07; 325 participants; 4 studies; moderate-certainty evidence downgraded for serious imprecision). 
There was no clear difference in incision time per wound area between electrosurgery and scalpel (MD -0.58 seconds/cm2, 95% CI -1.26 to 0.09; 282 participants; 3 studies; low-certainty evidence downgraded for very serious imprecision).\nThere was no clinically important difference in mean blood loss between electrosurgery and scalpel (MD -20.10 mL, 95% CI -28.16 to -12.05; 241 participants; 3 studies; moderate-certainty evidence downgraded for serious imprecision). Two studies reported on mean wound-related blood loss per wound area; however, we were unable to pool the studies due to considerable heterogeneity. It was uncertain whether electrosurgery decreased wound-related blood loss per wound area. We could not reach a conclusion on the effects of the two interventions on pain and appearance of scars for various reasons such as small number of studies, insufficient data, the presence of conflicting data, and different measurement methods.\nAuthors' conclusions\nThe certainty of evidence was moderate to very low due to risk of bias and imprecise results. Low-certainty evidence shows no clear difference in wound infection between the scalpel and electrosurgery. There is a need for more research to determine the relative effectiveness of scalpel compared with electrosurgery for major abdominal incisions.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "During abdominal surgery, surgeons need to cut through several layers of abdominal wall tissue before reaching the target operation site. To do this, surgeons can use either a sharp-bladed scalpel or an electrosurgical device that burns through tissue using a precise high-frequency current (known as electrosurgery). It is thought that, compared to using a scalpel, electrosurgery may result in less blood loss, more rapid tissue separation, and a lower risk of surgeons cutting themselves. We wanted to find out about the benefits and risks of these two techniques, and to compare them in terms of safety and other measures such as risk of infection and pain."
}
] |
query-laysum
|
22924
|
[
{
"role": "user",
"content": "Abstract: Background\nFast-track cardiac care is a complex intervention involving several components of care during cardiac anaesthesia and in the postoperative period, with the ultimate aim of early extubation after surgery, to reduce length of stay in the intensive care unit and in the hospital. Safe and effective fast-track cardiac care may reduce hospital costs. This is an update of a Cochrane review first published in 2003, updated in 2012 and updated now in 2016.\nObjectives\nTo determine the safety and effectiveness of fast-track cardiac care compared with conventional (not fast-track) care in adult patients undergoing cardiac surgery. Fast-track cardiac care intervention includes administration of low-dose opioid-based general anaesthesia or use of a time-directed extubation protocol, or both.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 5), MEDLINE (January 2012 to May 2015), Embase (January 2012 to May 2015), the Cumulative Index to Nursing and Allied Health Literature (CINAHL; January 2012 to May 2015) and the Institute for Scientific Information (ISI) Web of Science (January 2012 to May 2015), along with reference lists of articles, to identify additional trials. We applied no language restrictions.\nSelection criteria\nWe included all randomized controlled trials of adult cardiac surgical patients (coronary artery bypass grafts, aortic valve replacement, mitral valve replacement) that compared fast-track cardiac care and conventional (not fast-track) care groups. We focused on the following fast-track interventions, which were designed for early extubation after surgery: administration of low-dose opioid-based general anaesthesia during cardiac surgery and use of a time-directed extubation protocol after surgery. The primary outcome was risk of mortality. Secondary outcomes included postoperative complications, reintubation within 24 hours of surgery, time to extubation, length of stay in the intensive care unit and in the hospital, quality of life after surgery and hospital costs.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted study data. We contacted study authors for additional information. We calculated a Peto odds ratio (OR) for risk of mortality and used a random-effects model to report risk ratio (RR), mean difference (MD) and 95% confidence intervals (95% CIs) for all secondary outcomes.\nMain results\nWe included 28 trials (4438 participants) in the updated review. We considered most participants to be at low to moderate risk of death after surgery. We assessed two studies as having low risk of bias and 11 studies high risk of bias. 
Investigators reported no differences in risk of mortality within the first year after surgery between low-dose versus high-dose opioid-based general anaesthesia groups (OR 0.53, 95% CI 0.25 to 1.12; eight trials, 1994 participants, low level of evidence) and between a time-directed extubation protocol versus usual care (OR 0.80, 95% CI 0.45 to 1.45; 10 trials, 1802 participants, low level of evidence).\nResearchers noted no significant differences between low-dose and high-dose opioid-based anaesthesia groups in the following postoperative complications: myocardial infarction (RR 0.98, 95% CI 0.48 to 1.99; eight trials, 1683 participants, low level of evidence), stroke (RR 1.17, 95% CI 0.36 to 3.78; five trials, 562 participants, low level of evidence) and tracheal reintubation (RR 1.77, 95% CI 0.38 to 8.27; five trials, 594 participants, low level of evidence).\nComparisons with usual care revealed no significant differences in the risk of postoperative complications associated with a time-directed extubation protocol: myocardial infarction (RR 0.59, 95% CI 0.27 to 1.31; eight trials, 1378 participants, low level of evidence), stroke (RR 0.85, 95% CI 0.33 to 2.16; 11 trials, 1646 participants, low level of evidence) and tracheal reintubation (RR 1.34, 95% CI 0.74 to 2.41; 12 trials, 1261 participants, low level of evidence).\nAlthough levels of heterogeneity were high, low-dose opioid anaesthesia was associated with reduced time to extubation (reduction of 4.3 to 10.5 hours, 14 trials, 2486 participants, low level of evidence) and length of stay in the intensive care unit (reduction of 0.4 to 7.0 hours, 12 trials, 1394 participants, low level of evidence). Use of a time-directed extubation protocol was associated with reduced time to extubation (reduction of 3.7 to 8.8 hours, 16 trials, 2024 participants, low level of evidence) and length of stay in the intensive care unit (reduction of 3.9 to 10.5 hours, 13 trials, 1888 participants, low level of evidence). However, these two fast-track care interventions were not associated with reduced total length of stay in the hospital (low level of evidence).\nAuthors' conclusions\nLow-dose opioid-based general anaesthesia and time-directed extubation protocols for fast-track interventions have risks of mortality and major postoperative complications similar to those of conventional (not fast-track) care, and therefore appear to be safe for use in patients considered to be at low to moderate risk. These fast-track interventions reduced time to extubation and shortened length of stay in the intensive care unit but did not reduce length of stay in the hospital.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In the past, adults were given high-dose opioid-based anaesthesia for cardiac surgery and were provided with mechanical breathing support overnight in an intensive care unit after surgery. Now, many surgical units remove the tube that provides mechanical breathing support when the patient is on the operating table or within hours after cardiac surgery. They use time-directed protocols for removing breathing support. Some patients recover in an intensive care unit (ICU) or in a dedicated unit outside the ICU. It is important to improve hospital efficiency by using safe fast-track interventions."
}
] |
query-laysum
|
22925
|
[
{
"role": "user",
"content": "Abstract: Background\nPolycystic ovary syndrome (PCOS) is a common condition affecting 8% to 13% of reproductive-aged women. In the past clomiphene citrate (CC) used to be the first-line treatment in women with PCOS. Ovulation induction with letrozole should be the first-line treatment according to new guidelines, but the use of letrozole is off-label. Consequently, CC is still commonly used. Approximately 20% of women on CC do not ovulate. Women who are CC-resistant can be treated with gonadotrophins or other medical ovulation-induction agents. These medications are not always successful, can be time-consuming and can cause adverse events like multiple pregnancies and cycle cancellation due to an excessive response. Laparoscopic ovarian drilling (LOD) is a surgical alternative to medical treatment. There are risks associated with surgery, such as complications from anaesthesia, infection, and adhesions.\nObjectives\nTo evaluate the effectiveness and safety of LOD with or without medical ovulation induction compared with medical ovulation induction alone for women with anovulatory polycystic PCOS and CC-resistance.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group (CGFG) trials register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL and two trials registers up to 8 October 2019, together with reference checking and contact with study authors and experts in the field to identify additional studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of women with anovulatory PCOS and CC resistance who underwent LOD with or without medical ovulation induction versus medical ovulation induction alone, LOD with assisted reproductive technologies (ART) versus ART, LOD with second-look laparoscopy versus expectant management, or different techniques of LOD.\nData collection and analysis\nTwo review authors independently selected studies, assessed risks of bias, extracted data and evaluated the quality of the evidence using the GRADE method. The primary effectiveness outcome was live birth and the primary safety outcome was multiple pregnancy. Pregnancy, miscarriage, ovarian hyperstimulation syndrome (OHSS), ovulation, costs, and quality of life were secondary outcomes.\nMain results\nThis updated review includes 38 trials (3326 women). The evidence was very low- to moderate-quality; the main limitations were due to poor reporting of study methods, with downgrading for risks of bias (randomisation and allocation concealment) and lack of blinding.\nLaparoscopic ovarian drilling with or without medical ovulation induction versus medical ovulation induction alone\nPooled results suggest LOD may decrease live birth slightly when compared with medical ovulation induction alone (odds ratio (OR) 0.71, 95% confidence interval (CI) 0.54 to 0.92; 9 studies, 1015 women; I2 = 0%; low-quality evidence). The evidence suggest that if the chance of live birth following medical ovulation induction alone is 42%, the chance following LOD would be between 28% and 40%. The sensitivity analysis restricted to only RCTs with low risk of selection bias suggested there is uncertainty whether there is a difference between the treatments (OR 0.90, 95% CI 0.59 to 1.36; 4 studies, 415 women; I2 = 0%, low-quality evidence). LOD probably reduces multiple pregnancy rates (Peto OR 0.34, 95% CI 0.18 to 0.66; 14 studies, 1161 women; I2 = 2%; moderate-quality evidence). 
This suggests that if we assume the risk of multiple pregnancy following medical ovulation induction is 5.0%, the risk following LOD would be between 0.9% and 3.4%.\nRestricting to RCTs that followed women for six months after LOD and six cycles of ovulation induction only, the results for live birth were consistent with the main analysis.\nThere may be little or no difference between the treatments for the likelihood of a clinical pregnancy (OR 0.86, 95% CI 0.72 to 1.03; 21 studies, 2016 women; I2 = 19%; low-quality evidence). There is uncertainty about the effect of LOD compared with ovulation induction alone on miscarriage (OR 1.11, 95% CI 0.78 to 1.59; 19 studies, 1909 women; I2 = 0%; low-quality evidence). OHSS was a very rare event. LOD may reduce OHSS (Peto OR 0.25, 95% CI 0.07 to 0.91; 8 studies, 722 women; I2 = 0%; low-quality evidence).\nUnilateral LOD versus bilateral LOD\nDue to the small sample size, the quality of evidence is insufficient to justify a conclusion on live birth (OR 0.83, 95% CI 0.24 to 2.78; 1 study, 44 women; very low-quality evidence).\nThere were no data available on multiple pregnancy.\nThe likelihood of a clinical pregnancy is uncertain between the treatments, due to the quality of the evidence and the large heterogeneity between the studies (OR 0.57, 95% CI 0.39 to 0.84; 7 studies, 470 women; I2 = 60%, very low-quality evidence). Due to the small sample size, the quality of evidence is not sufficient to justify a conclusion on miscarriage (OR 1.02, 95% CI 0.31 to 3.33; 2 studies, 131 women; I2 = 0%; very low-quality evidence).\nOther comparisons\nDue to lack of evidence and very low-quality data there is uncertainty whether there is a difference for any of the following comparisons: LOD with IVF versus IVF, LOD with second-look laparoscopy versus expectant management, monopolar versus bipolar LOD, and adjusted thermal dose versus fixed thermal dose.\nAuthors' conclusions\nLaparoscopic ovarian drilling with and without medical ovulation induction may decrease the live birth rate in women with anovulatory PCOS and CC resistance compared with medical ovulation induction alone. But the sensitivity analysis restricted to only RCTs at low risk of selection bias suggests there is uncertainty whether there is a difference between the treatments, due to uncertainty around the estimate. Moderate-quality evidence shows that LOD probably reduces the number of multiple pregnancies. Low-quality evidence suggests that there may be little or no difference between the treatments for the likelihood of a clinical pregnancy, and there is uncertainty about the effect of LOD compared with ovulation induction alone on miscarriage. LOD may result in less OHSS.\nThe quality of evidence is insufficient to justify a conclusion on live birth, clinical pregnancy or miscarriage rate for the analysis of unilateral LOD versus bilateral LOD. There were no data available on multiple pregnancy.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our main analysis with low-quality evidence shows that LOD with and without medical ovulation induction may decrease the live birth rate slightly in women with anovulatory PCOS and CC-resistance compared with medical ovulation induction alone. Analysis including only the higher-quality RCTs shows uncertainty about any difference between the treatments. The evidence suggests that if the chance of live birth following medical ovulation induction alone is 44%, the chance following LOD would be between 32% and 52%. Moderate-quality evidence shows that LOD probably reduces the number of multiple pregnancies. The evidence suggests that if we assume the chance of a multiple pregnancy following medical ovulation induction alone to be 5.0%, the chance following LOD would be between 0.9% and 3.4%.\nThere may be little or no difference between the treatments for clinical pregnancy, and there is uncertainty about the effect of LOD compared with ovulation induction alone on miscarriage. Ovarian hyperstimulation syndrome (OHSS) may occur less often following LOD.\nThe quality of the evidence is not sufficient to justify a conclusion on live birth, clinical pregnancy or miscarriage for the analysis of unilateral LOD versus bilateral LOD.\nThe results of the primary outcomes for the other interventions were insufficient to enable us to draw any conclusions."
}
] |
query-laysum
|
22926
|
[
{
"role": "user",
"content": "Abstract: Background\nPreterm infants (< 37 weeks' post-menstrual age (PMA)) are often delayed in attaining oral feeding. Normal oral feeding is suggested as an important outcome for the timing of discharge from the hospital and can be an early indicator of neuromotor integrity and developmental outcomes. A range of oral stimulation interventions may help infants to develop sucking and oromotor co-ordination, promoting earlier oral feeding and earlier hospital discharge. This is an update of our 2016 review.\nObjectives\nTo determine the effectiveness of oral stimulation interventions for attainment of oral feeding in preterm infants born before 37 weeks' PMA.\nSearch methods\nSearches were run in March 2022 of the following databases: CENTRAL via CRS Web; MEDLINE and Embase via Ovid. We also searched clinical trials databases and the reference lists of retrieved articles for randomised controlled trials (RCTs) and quasi-randomised trials. Searches were limited by date 2016 (the date of the search for the original review) forward.\nNote: Due to circumstances beyond our control (COVID and staffing shortages at the editorial base of Cochrane Neonatal), publication of this review, planned for mid 2021, was delayed. Thus, although searches were conducted in 2022 and results screened, potentially relevant studies found after September 2020 have been placed in the section, Awaiting Classification, and not incorporated into our analysis.\nSelection criteria\nRandomised and quasi-randomised controlled trials comparing a defined oral stimulation intervention with no intervention, standard care, sham treatment or non-oral intervention (e.g. body stroking protocols or gavage adjustment protocols) in preterm infants and reporting at least one of the specified outcomes.\nData collection and analysis\nFollowing the updated search, two review authors screened the titles and abstracts of studies and full-text copies when needed to identify trials for inclusion in the review. The primary outcomes of interest were time (days) to exclusive oral feeding, time (days) spent in NICU, total hospital stay (days), and duration (days) of parenteral nutrition. All review and support authors contributed to independent extraction of data and analysed assigned studies for risk of bias across the five domains of bias using the Cochrane Risk of Bias assessment tool. The GRADE system was used to rate the certainty of the evidence. Studies were divided into two groups for comparison: intervention versus standard care and intervention versus other non-oral or sham intervention. We performed meta-analysis using a fixed-effect model.\nMain results\nWe included 28 RCTs (1831 participants). Most trials had methodological weaknesses, particularly in relation to allocation concealment and masking of study personnel.\nOral stimulation compared with standard care\nFollowing meta-analysis, it is uncertain whether oral stimulation reduces the time to transition to oral feeding compared with standard care (mean difference (MD) -4.07 days, 95% confidence interval (CI) -4.81 to -3.32 days, 6 studies, 292 infants; I2 =85%, very low-certainty evidence due to serious risk of bias and inconsistency). Time (days) spent in the neonatal intensive care unit (NICU) was not reported. It is uncertain whether oral stimulation reduces the duration of hospitalisation (MD -4.33, 95% CI -5.97 to -2.68 days, 5 studies, 249 infants; i2 =68%, very low-certainty evidence due to serious risk of bias and inconsistency). 
Duration (days) of parenteral nutrition was not reported.\nOral stimulation compared with non-oral intervention\nFollowing meta-analysis, it is uncertain whether oral stimulation reduces the time to transition to exclusive oral feeding compared with a non-oral intervention (MD -7.17, 95% CI -8.04 to -6.29 days, 10 studies, 574 infants; I2 =80%, very low-certainty evidence due to serious risk of bias, inconsistency and precision). Time (days) spent in the NICU was not reported. Oral stimulation may reduce the duration of hospitalisation (MD -6.15, 95% CI -8.63 to -3.66 days, 10 studies, 591 infants; I2 =0%, low-certainty evidence due to serious risk of bias). Oral stimulation may have little or no effect on the duration (days) of parenteral nutrition exposure (MD -2.85, 95% CI -6.13 to 0.42, 3 studies, 268 infants; very low-certainty evidence due to serious risk of bias, inconsistency and imprecision).\nAuthors' conclusions\nThere remains uncertainty about the effects of oral stimulation (versus either standard care or a non-oral intervention) on transition times to oral feeding, duration of intensive care stay, hospital stay, or exposure to parenteral nutrition for preterm infants. Although we identified 28 eligible trials in this review, only 18 provided data for meta-analyses. Methodological weaknesses, particularly in relation to allocation concealment and masking of study personnel and caregivers, inconsistency between trials in effect size estimates (heterogeneity), and imprecision of pooled estimates were the main reasons for assessing the evidence as low or very low certainty.\nMore well-designed trials of oral stimulation interventions for preterm infants are warranted. Such trials should attempt to mask caregivers to treatment when possible, paying particular attention to blinding of outcome assessors. There are currently 32 ongoing trials. Outcome measures that reflect improvements in oral motor skill development as well as longer term outcome measures beyond six months of age need to be defined and used by researchers to capture the full impact of these interventions.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review included randomised controlled trials (RCTs) that explored oral stimulation by finger stimulation only in preterm infants. Review authors identified studies to be included by searching electronic databases, clinical trials registers, peer-reviewed journals and published conference proceedings. Search is up-to-date as of March 2022."
}
] |
query-laysum
|
22927
|
[
{
"role": "user",
"content": "Abstract: Background\nPrevention of chronic chagasic cardiomyopathy (CCC) by treating infected populations with trypanocidal therapy (TT) remains a challenge. Despite a renewed enthusiasm for TT, uncertainty regarding its efficacy, concerns about its safety and limited availability remain barriers for a wider use of conventional drugs. We have updated a previous version of this review.\nObjectives\nTo systematically search, appraise, identify and extract data from eligible studies comparing the outcome of cohorts of seropositive individuals to Trypanosoma cruzi exposed to TT versus placebo or no treatment.\nSearch methods\nWe sought eligible studies in electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL), Issue 1, 2014); MEDLINE (Ovid, 1946 to January week 5 2014); EMBASE (Ovid, 1980 to 2014 week 6) and LILACS (up to 6 May 2010)) by combining terms related with the disease and the treatment. The search also included a Google search, handsearch for references in review or selected articles, and search of expert files. We applied no language restrictions.\nSelection criteria\nReview authors screened the retrieved references for eligibility (those dealing with human participants treated with TT) and then assessed the pre-selected studies in full for inclusion. We included randomised controlled trials (RCTs) and observational studies that provided data on either mortality or clinical progression of CCC after at least four years of follow-up.\nData collection and analysis\nTeams of two review authors independently carried out the study selection, data extraction and risk of bias assessment, with a referee resolving disagreement within the pairs. Data collection included study design, characteristics of the population and interventions or exposures and outcome measures. We defined categories of outcome data as parasite-related (positive serology, xenodiagnosis or polymerase chain reaction (PCR) after TT) and participant-related (including efficacy outcomes such as progression towards CCC, all-cause mortality and side effects of TT). We reported pooled outcome data as Mantel-Haenszel odds ratios (OR) or standardised mean differences (SMD) along with 95% confidence intervals (CI), using a random-effects model. I2 statistics provided an estimate of heterogeneity across studies. We conducted an exploratory meta-regression analysis of the relationship between positive-serology and progression of CCC or mortality.\nMain results\nWe included 13 studies involving 4229 participants (six RCTs, n = 1096, five RCTs of intermediate risk of bias, one RCT of high risk of bias; four non-randomised experiments, n = 1639 and three observational studies, n = 1494). Ten studies tested nitroderivative drugs nifurtimox or benznidazole (three exposed participants to allopurinol, one to itraconazole). Five studies were conducted in Brazil, five in Argentina, one in Bolivia, one in Chile and one in Venezuela.\nTT was associated with substantial, but heterogeneous reductions on parasite-related outcomes such as positive serology (9 studies, OR 0.21, 95% CI 0.10 to 0.44, I2 = 76%), positive PCR (2 studies, OR 0.50, 95% CI 0.27 to 0.92, I2 = 0%), positive xenodiagnosis after treatment (6 studies, OR 0.35, 95% CI 0.14 to 0.86, I2 = 79%), or reduction on antibody titres (3 studies, SMD -0.56, 95% CI -0.89 to -0.23, I2 = 28%). Efficacy data on patient-related outcomes was largely from non-RCTs. 
TT with nitroderivatives was associated with potentially important, but imprecise and inconsistent reductions in progression of CCC (4 studies, 106 events, OR 0.74, 95% CI 0.32 to 1.73, I2 = 66%) and mortality after TT (6 studies, 99 events, OR 0.55, 95% CI 0.26 to 1.14, I2 = 48%). The overall median incidence of any severe side effects among 1475 individuals from five studies exposed to TT was 2.7%, and the overall discontinuation of this two-month therapy in RCTs (5 studies, 134 events) was 20.5% (versus 4.3% among controls) and 10.4% in other five studies (125 events).\nAuthors' conclusions\nDespite the evidence that TT reduced parasite-related outcomes, the low quality and inconsistency of the data for patient-important outcomes must be treated with caution. More geographically diverse RCTs testing newer forms of TT are warranted in order to 1. estimate efficacy more precisely, 2. explore factors potentially responsible for the heterogeneity of results and 3. increase knowledge on the efficacy/tolerance balance of conventional TT.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We identified 13 studies comparing the outcomes of 4229 people after receiving TT or placebo. Five of these studies were from Argentina, five from Brazil, one from Venezuela, one from Chile and one from Bolivia.\nReceiving TT was associated with a 50% to 90% smaller chance of having circulating antibodies or parasitic material, as compared with non-treated people. However, the results on progression towards Chagas disease or death indicate smaller benefits. Furthermore, the results were also statistically inconclusive, did not rule out potential harm and had substantial variation across studies conducted in different countries or testing different drugs. About one in five individuals treated abandoned the treatment and one in 40 treated individuals had a severe reaction (needing hospitalisation, additional treatments or interruption of this treatment).\nWe conclude that although TT may reduce the progression of Chagas disease, better quality studies are warranted before its use can be generally recommended for chronically infected individuals. New data should bring more certainty of the efficacy of TT and provide a precise evaluation of the balance between benefits and harms. Because of the variations across studies, these studies should include populations from more regions and test newer drugs."
}
] |
query-laysum
|
22928
|
[
{
"role": "user",
"content": "Abstract: Background\nThis review is one of six looking at the primary medical management options for patients with chronic rhinosinusitis.\nChronic rhinosinusitis is common and is characterised by inflammation of the lining of the nose and paranasal sinuses leading to nasal blockage, nasal discharge, facial pressure/pain and loss of sense of smell. The condition can occur with or without nasal polyps. Systemic and topical antibiotics are used with the aim of eliminating infection in the short term (and some to reduce inflammation in the long term), in order to normalise nasal mucus and improve symptoms.\nObjectives\nTo assess the effects of systemic and topical antibiotics in people with chronic rhinosinusitis.\nSearch methods\nThe Cochrane ENT Information Specialist searched the Cochrane ENT Trials Register; CENTRAL (2015, Issue 8); MEDLINE; EMBASE; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 29 September 2015.\nSelection criteria\nRandomised controlled trials (RCTs) with a follow-up period of at least three months comparing systemic or topical antibiotic treatment to (a) placebo or (b) no treatment or (c) other pharmacological interventions.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. Our primary outcomes were disease-specific health-related quality of life (HRQL), patient-reported disease severity and the commonest adverse event - gastrointestinal disturbance. Secondary outcomes included general HRQL, endoscopic nasal polyp score, computerised tomography (CT) scan score and the adverse events of suspected allergic reaction (rash or skin irritation) and anaphylaxis or other very serious reactions. We used GRADE to assess the quality of the evidence for each outcome; this is indicated in italics.\nMain results\nWe included five RCTs (293 participants), all of which compared systemic antibiotics with placebo or another pharmacological intervention.\nThe varying study characteristics made comparison difficult. Four studies recruited only adults and one only children. Three used macrolide, one tetracycline and one a cephalosporin-type antibiotic. Three recruited only patients with chronic rhinosinusitis without nasal polyps, one recruited patients with chronic rhinosinusitis with nasal polyps and one had a mixed population. Three followed up patients for 10 to 12 weeks after treatment had finished.\nSystemic antibiotics versus placebo\nThree studies compared antibiotics with placebo (176 participants).\nOne study (64 participants, without polyps) reported disease-specific HRQL using the SNOT-20 (0 to 5, 0 = best quality of life). At the end of treatment (three months) the SNOT-20 score was lower in the group receiving macrolide antibiotics than the placebo group (mean difference (MD) -0.54 points, 95% confidence interval (CI) -0.98 to -0.10), corresponding to a moderate effect size favouring antibiotics (moderate quality evidence). Three months after treatment, it is uncertain if there was a difference between groups.\nOne study (33 participants, with polyps) provided information on gastrointestinal disturbances and suspected allergic reaction (rash or skin irritation) after a short course of tetracycline antibiotic compared with placebo. 
We are very uncertain if antibiotics were associated with an increase in gastrointestinal disturbances (risk ratio (RR) 1.36, 95% CI 0.22 to 8.50) or skin irritation (RR 6.67, 95% CI 0.34 to 128.86) (very low quality evidence).\nSystemic antibiotics plus saline irrigation and intranasal corticosteroids versus placebo plus saline irrigation and intranasal corticosteroids\nOne study (60 participants, some with and some without polyps) compared a three-month course of macrolide antibiotic with placebo; all participants also used saline irrigation and 70% used intranasal corticosteroids. Disease-specific HRQL was reported using SNOT-22 (0 to 110, 0 = best quality of life). Data were difficult to interpret (highly skewed and baseline imbalances) and it is unclear if there was an important difference at any time point (low quality evidence). To assess patient-reported disease severity participants rated the effect of treatment on a five-point scale (-2 for \"desperately worse\" to 2 for \"cured\") at the end of treatment (three months). For improvement in symptoms there was no difference between the antibiotics and placebo groups; the RR was 1.50 (95% CI 0.81 to 2.79; very low quality evidence), although there were also slightly more people who felt worse after treatment in the antibiotics group. There was no demonstrable difference in the rate of gastrointestinal disturbances between the groups (RR 1.07, 95% CI 0.16 to 7.10). General HRQL was measured using the SF-36. The authors stated that there was no difference between groups at the end of treatment (12 weeks) or two weeks later.\nSystemic antibiotics versus intranasal corticosteroids\nOne study (43 participants, without polyps) compared a three-month course of macrolide antibiotic with intranasal corticosteroids. Patient-reported disease severity was assessed using a composite symptom score (0 to 40; 0 = no symptoms). It is very uncertain if there was a difference as patient-reported disease severity was similar between groups (MD -0.32, 95% CI -2.11 to 1.47; low quality evidence).\nSystemic antibiotics versus oral corticosteroids\nOne study (28 participants, with polyps) compared a short course of tetracycline antibiotic (unclear duration, ~20 days) with a 20-day course of oral corticosteroids. We were unable to extract data on any of the primary efficacy outcomes. It is uncertain if there was a difference in gastrointestinal disturbances (RR 1.00, 95% CI 0.16 to 6.14) or skin irritation (RR 2.00, 95% CI 0.20 to 19.62) as the results for these outcomes were similar between groups (very low quality evidence).\nAuthors' conclusions\nWe found very little evidence that systemic antibiotics are effective in patients with chronic rhinosinusitis. We did find moderate quality evidence of a modest improvement in disease-specific quality of life in adults with chronic rhinosinusitis without polyps receiving three months of a macrolide antibiotic. 
The size of improvement was moderate (0.5 points on a five-point scale) and only seen at the end of the three-month treatment; by three months later no difference was found.\nDespite a general understanding that antibiotics can be associated with adverse effects, including gastrointestinal disturbances, the results in this review were very uncertain because the studies were small and few events were reported.\nNo RCTs of topical antibiotics met the inclusion criteria.\nMore research in this area, particularly evaluating longer-term outcomes and adverse effects, is required.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic rhinosinusitis is a common condition that is defined as inflammation of the nose and paranasal sinuses (a group of air-filled spaces behind the nose, eyes and cheeks). Patients experience at least two or more of the following symptoms for at least 12 weeks: blocked nose, discharge from their nose or runny nose, pain or pressure in their face and/or a reduced sense of smell (hyposmia). Some people will also have nasal polyps, which are grape-like swellings of the normal nasal lining inside the nasal passage and sinuses."
}
] |
query-laysum
|
22929
|
[
{
"role": "user",
"content": "Abstract: Background\nSubfertility due to anovulation is a common problem in women. First-line oral treatment is with antioestrogens such as clomiphene citrate, but resistance may be apparent with clomiphene. Alternative and adjunctive treatments have been used including tamoxifen, dexamethasone, and bromocriptine. The effectiveness of these is to be determined.\nObjectives\nTo determine the relative effectiveness of antioestrogen agents including clomiphene alone or in combination with other medical therapies in women with subfertility associated with anovulation, possibly caused by polycystic ovarian syndrome.\nSearch methods\nWe conducted a search of the Cochrane Gynaecology and Fertility Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PsycINFO, and CINAHL (all from inception to August 2016) to identify relevant randomised controlled trials (RCTs). We searched the United Kingdom National Institute for Clinical Excellence (NICE) guidelines and the references of relevant reviews and RCTs. We also searched the clinical trial registries for ongoing trials (inception until August 2016).\nSelection criteria\nWe considered RCTs comparing oral antioestrogen agents for ovulation induction (alone or in conjunction with medical therapies) in anovulatory subfertility. We excluded insulin-sensitising agents, aromatase inhibitors, and hyperprolactinaemic infertility.\nData collection and analysis\nTwo review authors independently performed data extraction and quality assessment. The primary outcome was live birth; secondary outcomes were pregnancy, ovulation, miscarriage, multiple pregnancy, ovarian hyperstimulation syndrome, and adverse effects.\nMain results\nThis is a substantive update of a previous review. We identified an additional 13 studies in the 2016 update. The review now includes 28 RCTs (3377 women) and five RCTs awaiting classification. Five of the 28 included trials reported live birth/ongoing pregnancy. Secondary outcomes were poorly reported.\nThe quality of the evidence ranged from low to very low. The primary reasons for downgrading the evidence were imprecision and risk of bias associated with poor reporting.\nAntioestrogen versus placebo\nLive birth rate, miscarriage rate, multiple pregnancy rate, and ovarian hyperstimulation syndrome (OHSS)\nNo data were reported for these outcomes.\nClinical pregnancy rate\nClomiphene citrate was associated with an increased chance of a clinical pregnancy compared with placebo, though the size of the benefit was very uncertain (odds ratio (OR) 5.91, 95% confidence interval (CI) 1.77 to 19.68; 3 studies; 133 women; low-quality evidence). If the chance of a clinical pregnancy was 5% in the placebo group, then between 8% and 50% of women would have a clinical pregnancy in the clomiphene group.\nClomiphene citrate versus tamoxifen\nLive birth rate\nThere was no clear evidence of a difference in the chance of a live birth between the clomiphene citrate and tamoxifen groups (OR 1.24, 95% CI 0.59 to 2.62; 2 studies; 195 women; low-quality evidence). If 20% of women in the tamoxifen group had a live birth, then between 13% and 40% of women in the clomiphene citrate group would have a live birth.\nMiscarriage rate\nThere was no clear evidence of a difference in the chance of a miscarriage between the clomiphene citrate and tamoxifen groups (OR 1.81, 95% CI 0.80 to 4.12; 4 studies; 653 women; low-quality evidence). 
If 3% of women in the tamoxifen group had a miscarriage, then between 2% and 10% in the clomiphene citrate group would have a miscarriage.\nClinical pregnancy rate\nThere was no clear evidence of a difference in the chance of a clinical pregnancy between the clomiphene citrate and tamoxifen groups (OR 1.30, 95% CI 0.92 to 1.85; 5 studies; 757 women; I2 = 69%; low-quality evidence). If 22% of women in the tamoxifen group had a clinical pregnancy, then between 21% and 35% in the clomiphene citrate group would have a clinical pregnancy.\nMultiple pregnancy rate\nThere was insufficient evidence of a difference in the chance of a multiple pregnancy between the clomiphene citrate group (OR 2.34, 95% CI 0.34 to 16.04; 3 studies; 567 women; very low-quality evidence). If 0% of women in the tamoxifen group had a multiple pregnancy, then between 0% and 0.5% of women in the clomiphene group would have a multiple pregnancy.\nOHSS\nThere were no instances of OHSS in either the clomiphene citrate or the tamoxifen group reported from three studies.\nClomiphene citrate with tamoxifen versus tamoxifen alone\nClinical pregnancy rate\nThere was insufficient evidence to determine whether there was a difference between groups (OR 3.32, 95% CI 0.12 to 91.60; 1 study; 20 women; very low-quality evidence). No data were reported for the other outcomes.\nOther comparisons of interest\nLimited evidence suggested that compared with a gonadotropin, clomiphene citrate was associated with a reduced chance of a pregnancy, ongoing pregnancy, or live birth, with no clear evidence of a difference in multiple pregnancy rates.\nThe comparison of clomiphene citrate plus medical adjunct versus clomiphene alone was limited by the number of trials reporting the comparison and poor reporting of clinical outcomes relevant to this systematic review and by the number of adjuncts reported (ketoconazole, bromocriptine, dexamethasone, combined oral contraceptive, human chorionic gonadotropin, hormone supplementation). The addition of dexamethasone or combined oral contraceptive suggested a possible benefit in pregnancy outcomes, but findings were very uncertain and further research is required to confirm this.\nThere was limited evidence suggesting that a 10-day regimen of clomiphene citrate improves pregnancy outcomes compared with a 5-day regimen. Data for early versus late regimens of clomiphene citrate were insufficient to be able to make a judgement on differences for pregnancy outcomes.\nAuthors' conclusions\nWe found evidence suggesting that clomiphene citrate improves the chance of a clinical pregnancy compared with placebo, but may reduce the chance of live birth or ongoing pregnancy when compared with a gonadotropin. Due to low event rates, we advise caution interpreting these data.\nThe comparison of clomiphene citrate plus medical adjunctive versus clomiphene alone was limited by the number of trials reporting the comparison. The evidence was very low quality and no firm conclusions could be drawn, but very limited evidence suggested a benefit from adjunctive dexamethasone or combined oral contraceptives. Low-quality evidence suggested that a 10-day regimen of clomiphene citrate improves pregnancy rates compared with a 5-day regimen, but further research is required.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Do antioestrogens including clomiphene improve fertility in women with anovulation associated with polycystic ovary syndrome?"
}
] |
query-laysum | 22930 | [
{
"role": "user",
"content": "Abstract: Background\nMelanoma has one of the fastest rising incidence rates of any cancer. It accounts for a small percentage of skin cancer cases but is responsible for the majority of skin cancer deaths. Although history-taking and visual inspection of a suspicious lesion by a clinician are usually the first in a series of ‘tests’ to diagnose skin cancer, dermoscopy has become an important tool to assist diagnosis by specialist clinicians and is increasingly used in primary care settings. Dermoscopy is a magnification technique using visible light that allows more detailed examination of the skin compared to examination by the naked eye alone. Establishing the additive value of dermoscopy over and above visual inspection alone across a range of observers and settings is critical to understanding its contribution for the diagnosis of melanoma and to future understanding of the potential role of the growing number of other high-resolution image analysis techniques.\nObjectives\nTo determine the diagnostic accuracy of dermoscopy alone, or when added to visual inspection of a skin lesion, for the detection of cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in adults. We separated studies according to whether the diagnosis was recorded face-to-face (in-person), or based on remote (image-based), assessment.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: CENTRAL; MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design that evaluated dermoscopy in adults with lesions suspicious for melanoma, compared with a reference standard of either histological confirmation or clinical follow-up. Data on the accuracy of visual inspection, to allow comparisons of tests, was included only if reported in the included studies of dermoscopy.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold were missing. We estimated accuracy using hierarchical summary receiver operating characteristic (SROC),methods. Analysis of studies allowing direct comparison between tests was undertaken. To facilitate interpretation of results, we computed values of sensitivity at the point on the SROC curve with 80% fixed specificity and values of specificity with 80% fixed sensitivity. We investigated the impact of in-person test interpretation; use of a purposely developed algorithm to assist diagnosis; observer expertise; and dermoscopy training.\nMain results\nWe included a total of 104 study publications reporting on 103 study cohorts with 42,788 lesions (including 5700 cases), providing 354 datasets for dermoscopy. The risk of bias was mainly low for the index test and reference standard domains and mainly high or unclear for participant selection and participant flow. Concerns regarding the applicability of study findings were largely scored as ‘high’ concern in three of four domains assessed. 
Selective participant recruitment, lack of reproducibility of diagnostic thresholds and lack of detail on observer expertise were particularly problematic.\nThe accuracy of dermoscopy for the detection of invasive melanoma or atypical intraepidermal melanocytic variants was reported in 86 datasets; 26 for evaluations conducted in person (dermoscopy added to visual inspection), and 60 for image-based evaluations (diagnosis based on interpretation of dermoscopic images). Analyses of studies by prior testing revealed no obvious effect on accuracy; analyses were hampered by the lack of studies in primary care, lack of relevant information and the restricted inclusion of lesions selected for biopsy or excision. Accuracy was higher for in-person diagnosis compared to image-based evaluations (relative diagnostic odds ratio (RDOR) 4.6, 95% confidence interval (CI) 2.4 to 9.0; P < 0.001).\nWe compared accuracy for (a), in-person evaluations of dermoscopy (26 evaluations; 23,169 lesions and 1664 melanomas),versus visual inspection alone (13 evaluations; 6740 lesions and 459 melanomas), and for (b), image-based evaluations of dermoscopy (60 evaluations; 13,475 lesions and 2851 melanomas),versus image-based visual inspection (11 evaluations; 1740 lesions and 305 melanomas). For both comparisons, meta-analysis found dermoscopy to be more accurate than visual inspection alone, with RDORs of (a), 4.7 (95% CI 3.0 to 7.5; P < 0.001), and (b), 5.6 (95% CI 3.7 to 8.5; P < 0.001). For a), the predicted difference in sensitivity at a fixed specificity of 80% was 16% (95% CI 8% to 23%; 92% for dermoscopy + visual inspection versus 76% for visual inspection), and predicted difference in specificity at a fixed sensitivity of 80% was 20% (95% CI 7% to 33%; 95% for dermoscopy + visual inspection versus 75% for visual inspection). For b) the predicted differences in sensitivity was 34% (95% CI 24% to 46%; 81% for dermoscopy versus 47% for visual inspection), at a fixed specificity of 80%, and predicted difference in specificity was 40% (95% CI 27% to 57%; 82% for dermoscopy versus 42% for visual inspection), at a fixed sensitivity of 80%.\nUsing the median prevalence of disease in each set of studies ((a), 12% for in-person and (b), 24% for image-based), for a hypothetical population of 1000 lesions, an increase in sensitivity of (a), 16% (in-person), and (b), 34% (image-based), from using dermoscopy at a fixed specificity of 80% equates to a reduction in the number of melanomas missed of (a), 19 and (b), 81 with (a), 176 and (b), 152 false positive results. An increase in specificity of (a), 20% (in-person), and (b), 40% (image-based), at a fixed sensitivity of 80% equates to a reduction in the number of unnecessary excisions from using dermoscopy of (a), 176 and (b), 304 with (a), 24 and (b), 48 melanomas missed.\nThe use of a named or published algorithm to assist dermoscopy interpretation (as opposed to no reported algorithm or reported use of pattern analysis), had no significant impact on accuracy either for in-person (RDOR 1.4, 95% CI 0.34 to 5.6; P = 0.17), or image-based (RDOR 1.4, 95% CI 0.60 to 3.3; P = 0.22), evaluations. This result was supported by subgroup analysis according to algorithm used. We observed higher accuracy for observers reported as having high experience and for those classed as ‘expert consultants’ in comparison to those considered to have less experience in dermoscopy, particularly for image-based evaluations. 
Evidence for the effect of dermoscopy training on test accuracy was very limited but suggested associated improvements in sensitivity.\nAuthors' conclusions\nDespite the observed limitations in the evidence base, dermoscopy is a valuable tool to support the visual inspection of a suspicious skin lesion for the detection of melanoma and atypical intraepidermal melanocytic variants, particularly in referred populations and in the hands of experienced users. Data to support its use in primary care are limited, however, it may assist in triaging suspicious lesions for urgent referral when employed by suitably trained clinicians. Formal algorithms may be of most use for dermoscopy training purposes and for less expert observers, however reliable data comparing approaches using dermoscopy in person are lacking.\n\nGiven the provided abstract, please respond to the following query: \"Who do the results of this review apply to?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Sixty-six studies were undertaken in Europe (77%), with the remainder undertaken in North America (6 studies), Asia (4), Oceania (4), or were multicentre (7). Mean age ranged from 30 to 58 years (reported in 26 studies). The percentage of individuals with melanoma ranged between 1% and 41% for dermoscopy in-person studies (median 12%), and between 3% and 61% in studies using dermoscopy images (median 24%). Almost all of the studies were carried out in referral settings rather than in primary care. In the majority of studies the lesions were unlikely to be representative of the range of those seen in practice, for example only including skin lesions of a certain size or with a specific appearance. In addition variation in the expertise of clinicians performing visual inspection and the definition used for a positive dermoscopy test result across studies makes it unclear as to how dermoscopy should be carried out and by people with different levels of clinical expertise in order to achieve the accuracy observed in studies."
}
] |
query-laysum | 22931 | [
{
"role": "user",
"content": "Abstract: Background\nAlveolar bone changes following tooth extraction can compromise prosthodontic rehabilitation. Alveolar ridge preservation (ARP) has been proposed to limit these changes and improve prosthodontic and aesthetic outcomes when implants are used. This is an update of the Cochrane Review first published in 2015.\nObjectives\nTo assess the clinical effects of various materials and techniques for ARP after tooth extraction compared with extraction alone or other methods of ARP, or both, in patients requiring dental implant placement following healing of extraction sockets.\nSearch methods\nCochrane Oral Health's Information Specialist searched the following databases: Cochrane Oral Health's Trials Register (to 19 March 2021), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2021, Issue 2), MEDLINE Ovid (1946 to 19 March 2021), Embase Ovid (1980 to 19 March 2021), Latin American and Caribbean Health Science Information database (1982 to 19 March 2021), Web of Science Conference Proceedings (1990 to 19 March 2021), Scopus (1966 to 19 March 2021), ProQuest Dissertations and Theses (1861 to 19 March 2021), and OpenGrey (to 19 March 2021). The US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov) and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. No restrictions were placed on the language or date of publication when searching the electronic databases. A number of journals were also handsearched.\nSelection criteria\nWe included all randomised controlled trials (RCTs) on the use of ARP techniques with at least six months of follow-up. Outcome measures were: changes in the bucco-lingual/palatal width of alveolar ridge, changes in the vertical height of the alveolar ridge, complications, the need for additional augmentation prior to implant placement, aesthetic outcomes, implant failure rates, peri-implant marginal bone level changes, changes in probing depths and clinical attachment levels at teeth adjacent to the extraction site, and complications of future prosthodontic rehabilitation.\nData collection and analysis\nWe selected trials, extracted data, and assessed risk of bias in duplicate. Corresponding authors were contacted to obtain missing information. We estimated mean differences (MD) for continuous outcomes and risk ratios (RR) for dichotomous outcomes, with 95% confidence intervals (95% CI). We constructed 'Summary of findings' tables to present the main findings and assessed the certainty of the evidence using GRADE.\nMain results\nWe included 16 RCTs conducted worldwide involving a total of 524 extraction sites in 426 adult participants. We assessed four trials as at overall high risk of bias and the remaining trials at unclear risk of bias. 
Nine new trials were included in this update with six new trials in the category of comparing ARP to extraction alone and three new trials in the category of comparing different grafting materials.\nARP versus extraction: from the seven trials comparing xenografts with extraction alone, there is very low-certainty evidence of a reduction in loss of alveolar ridge width (MD -1.18 mm, 95% CI -1.82 to -0.54; P = 0.0003; 6 studies, 184 participants, 201 extraction sites), and height (MD -1.35 mm, 95% CI -2.00 to -0.70; P < 0.0001; 6 studies, 184 participants, 201 extraction sites) in favour of xenografts, but we found no evidence of a significant difference for the need for additional augmentation (RR 0.68, 95% CI 0.29 to 1.62; P = 0.39; 4 studies, 154 participants, 156 extraction sites; very low-certainty evidence) or in implant failure rate (RR 1.00, 95% CI 0.07 to 14.90; 2 studies, 70 participants/extraction sites; very low-certainty evidence). From the one trial comparing alloplasts versus extraction, there is very low-certainty evidence of a reduction in loss of alveolar ridge height (MD -3.73 mm; 95% CI -4.05 to -3.41; 1 study, 15 participants, 60 extraction sites) in favour of alloplasts. This single trial did not report any other outcomes.\nDifferent grafting materials for ARP: three trials (87 participants/extraction sites) compared allograft versus xenograft, two trials (37 participants, 55 extraction sites) compared alloplast versus xenograft, one trial (20 participants/extraction sites) compared alloplast with and without membrane, one trial (18 participants, 36 extraction sites) compared allograft with and without synthetic cell-binding peptide P-15, and one trial (30 participants/extraction sites) compared alloplast with different particle sizes. The evidence was of very low certainty for most comparisons and insufficient to determine whether there are clinically significant differences between different ARP techniques based on changes in alveolar ridge width and height, the need for additional augmentation prior to implant placement, or implant failure.\nWe found no trials which evaluated parameters relating to clinical attachment levels, specific aesthetic or prosthodontic outcomes for any of the comparisons.\nNo serious adverse events were reported with most trials indicating that the procedure was uneventful. Among the complications reported were delayed healing with partial exposure of the buccal plate at suture removal, postoperative pain and swelling, moderate glazing, redness and oedema, membrane exposure and partial loss of grafting material, and fibrous adhesions at the cervical part of previously preserved sockets, for the comparisons xenografts versus extraction, allografts versus xenografts, alloplasts versus xenografts, and alloplasts with and without membrane.\nAuthors' conclusions\nARP techniques may minimise the overall changes in residual ridge height and width six months after extraction but the evidence is very uncertain. There is lack of evidence of any differences in the need for additional augmentation at the time of implant placement, implant failure, aesthetic outcomes, or any other clinical parameters due to lack of information or long-term data. There is no evidence of any clinically significant difference between different grafting materials and barriers used for ARP. 
Further long-term RCTs that follow CONSORT guidelines (www.consort-statement.org) are necessary.\n\nGiven the provided abstract, please respond to the following query: \"How did we identify and evaluate the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "First, we searched the medical literature for studies that compared:\n- ARP against no ARP; or\n- different ARP materials or techniques.\nWe then compared the results, and summarised the evidence from all the studies. Finally, we rated our confidence in the evidence, based on factors such as study methods and sizes, and the consistency of findings across studies."
}
] |
query-laysum | 22932 | [
{
"role": "user",
"content": "Abstract: Background\nHepatic encephalopathy is a disorder of brain function as a result of liver failure or portosystemic shunt or both. Both hepatic encephalopathy (clinically overt) and minimal hepatic encephalopathy (not clinically overt) significantly impair patient’s quality of life and daily functioning, and represent a significant burden on healthcare resources. Probiotics are live micro-organisms, which when administered in adequate amounts, may confer a health benefit on the host.\nObjectives\nTo determine the beneficial and harmful effects of probiotics in any dosage, compared with placebo or no intervention, or with any other treatment for people with any grade of acute or chronic hepatic encephalopathy. This review did not consider the primary prophylaxis of hepatic encephalopathy.\nSearch methods\nWe searched The Cochrane Hepato-Biliary Group Controlled Trials Register, CENTRAL, MEDLINE, Embase, Science Citation Index Expanded, conference proceedings, reference lists of included trials, and the World Health Organization International Clinical Trials Registry Platform until June 2016.\nSelection criteria\nWe included randomised clinical trials that compared probiotics in any dosage with placebo or no intervention, or with any other treatment in people with hepatic encephalopathy.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration. We conducted random-effects model meta-analysis due to obvious heterogeneity of participants and interventions. We defined a P value of 0.05 or less as significant. We expressed dichotomous outcomes as risk ratio (RR) and continuous outcomes as mean difference (MD) with 95% confidence intervals (CI).\nMain results\nWe included 21 trials with 1420 participants, of these, 14 were new trials. Fourteen trials compared a probiotic with placebo or no treatment, and seven trials compared a probiotic with lactulose. The trials used a variety of probiotics; the most commonly used group of probiotic was VSL#3, a proprietary name for a group of eight probiotics. Duration of administration ranged from 10 days to 180 days. Eight trials declared their funding source, of which six were independently funded and two were industry funded. The remaining 13 trials did not disclose their funding source. We classified 19 of the 21 trials at high risk of bias.\nWe found no effect on all-cause mortality when probiotics were compared with placebo or no treatment (7 trials; 404 participants; RR 0.58, 95% CI 0.23 to 1.44; low-quality evidence). No-recovery (as measured by incomplete resolution of symptoms) was lower for participants treated with probiotic (10 trials; 574 participants; RR 0.67, 95% CI 0.56 to 0.79; moderate-quality evidence). Adverse events were lower for participants treated with probiotic than with no intervention when considering the development of overt hepatic encephalopathy (10 trials; 585 participants; RR 0.29, 95% CI 0.16 to 0.51; low-quality evidence), but effects on hospitalisation and change of/or withdrawal from treatment were uncertain (hospitalisation: 3 trials, 163 participants; RR 0.67, 95% CI 0.11 to 4.00; very low-quality evidence; change of/or withdrawal from treatment: 9 trials, 551 participants; RR 0.70, 95% CI 0.46 to 1.07; very low-quality evidence). Probiotics may slightly improve quality of life compared with no intervention (3 trials; 115 participants; results not meta-analysed; low-quality evidence). 
Plasma ammonia concentration was lower for participants treated with probiotic (10 trials; 705 participants; MD -8.29 μmol/L, 95% CI -13.17 to -3.41; low-quality evidence). There were no reports of septicaemia attributable to probiotic in any trial.\nWhen probiotics were compared with lactulose, the effects on all-cause mortality were uncertain (2 trials; 200 participants; RR 5.00, 95% CI 0.25 to 102.00; very low-quality evidence); lack of recovery (7 trials; 430 participants; RR 1.01, 95% CI 0.85 to 1.21; very low-quality evidence); adverse events considering the development of overt hepatic encephalopathy (6 trials; 420 participants; RR 1.17, 95% CI 0.63 to 2.17; very low-quality evidence); hospitalisation (1 trial; 80 participants; RR 0.33, 95% CI 0.04 to 3.07; very low-quality evidence); intolerance leading to discontinuation (3 trials; 220 participants; RR 0.35, 95% CI 0.08 to 1.43; very low-quality evidence); change of/or withdrawal from treatment (7 trials; 490 participants; RR 1.27, 95% CI 0.88 to 1.82; very low-quality evidence); quality of life (results not meta-analysed; 1 trial; 69 participants); and plasma ammonia concentration overall (6 trials; 325 participants; MD -2.93 μmol/L, 95% CI -9.36 to 3.50; very low-quality evidence). There were no reports of septicaemia attributable to probiotic in any trial.\nAuthors' conclusions\nThe majority of included trials suffered from a high risk of systematic error (‘bias’) and a high risk of random error (‘play of chance’). Accordingly, we consider the evidence to be of low quality. Compared with placebo or no intervention, probiotics probably improve recovery and may lead to improvements in the development of overt hepatic encephalopathy, quality of life, and plasma ammonia concentrations, but probiotics may lead to little or no difference in mortality. Whether probiotics are better than lactulose for hepatic encephalopathy is uncertain because the quality of the available evidence is very low. High-quality randomised clinical trials with standardised outcome collection and data reporting are needed to further clarify the true efficacy of probiotics.\n\nGiven the provided abstract, please respond to the following query: \"Funding\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Eight trials declared their funding source, of which six were independently funded and two were industry funded. The remaining 13 trials did not disclose their funding source."
}
] |
query-laysum | 22933 | [
{
"role": "user",
"content": "Abstract: Background\nInositol is an essential nutrient required by human cells in culture for growth and survival. Inositol promotes maturation of several components of surfactant and may play a critical role in fetal and early neonatal life. A drop in inositol levels in infants with respiratory distress syndrome (RDS) can be a sign that their illness will be severe.\nObjectives\nTo assess the effectiveness and safety of supplementary inositol in preterm infants with or without respiratory distress syndrome (RDS) in reducing adverse neonatal outcomes including: death (neonatal and infant deaths), bronchopulmonary dysplasia (BPD), retinopathy of prematurity (ROP), intraventricular haemorrhage (IVH), periventricular leukomalacia (PVL), necrotizing enterocolitis (NEC) and sepsis.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2018, Issue 11), MEDLINE via PubMed (1966 to 5 November 2018), Embase (1980 to 5 November 2018), and CINAHL (1982 to 5 November 2018). We searched clinical trial databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials (RCT) and quasi-randomised trials.\nSelection criteria\nWe included all randomised controlled trials of inositol supplementation of preterm infants compared with a control group that received a placebo or no intervention. Outcomes included neonatal death, infant death, bronchopulmonary dysplasia (BPD), retinopathy of prematurity (ROP), intraventricular haemorrhage (IVH), necrotizing enterocolitis (NEC) and sepsis.\nData collection and analysis\nThe three review authors independently abstracted data on neonatal outcomes and resolved any disagreements through discussion and consensus. Outcomes were reported as typical risk ratio (RR), risk difference (RD) and number needed to treat for an additional beneficial outcome (NNTB) or number needed to treat for an additional harmful outcome (NNTH). We used the GRADE approach to assess the quality of evidence.\nMain results\nSix published randomised controlled trials were identified, with a total of 1177 infants. Study quality varied for the comparison 'Inositol supplementation to preterm infants (repeat doses in any amount and any duration of treatment) versus control' and interim analyses had occurred in several trials for the outcomes of interest. In this comparison, neonatal death was found to be significantly reduced (typical RR 0.53, 95% CI 0.31 to 0.91; typical RD −0.09, 95% CI −0.16 to −0.01; NNTB 11, 95% CI 6 to 100; 3 trials, 355 neonates). Infant deaths were not reduced (typical RR 0.89, 95% CI 0.71 to 1.13; typical RD −0.02, 95% CI −0.07 to 0.02; 5 trials, 1115 infants) (low-quality evidence). ROP stage 2 or higher or stage 3 or higher was not significantly reduced (typical RR 0.89, 95% CI 0.75 to 1.06; typical RD −0.04, 95% CI −0.10 to 0.02; 3 trials, 810 infants) (moderate-quality evidence). There were no significant findings for ROP (any stage), NEC (suspected or proven), sepsis, IVH grade greater than II (moderate-quality evidence). For the comparison 'Inositol supplementation IV initially followed by enteral administration (repeat doses of 80 mg/kg/day) in preterm infants born at less than 30 weeks' postmenstrual age (PMA) compared to placebo for preterm infants at risk for or having respiratory distress syndrome' the results from two studies of high quality were included (N = 760 neonates). 
Recruitment to the larger study (N = 638) was terminated because of a higher rate of deaths in the inositol group. We did not downgrade the quality of the study. The meta-analyses of the outcomes of 'Type 1 ROP or death before determination of ROP outcome using the adjudicated ROP outcome', 'Type 1 ROP including adjudicated ROP outcome', 'All-cause mortality (outcome collected through first event: death, hospital discharge, hospital transfer, or 120 days after birth)' and 'Severe IVH (grade 3 or 4)' did not show significant findings (moderate-quality evidence). There were no significant findings for the outcomes 'BPD or death by it prior to 37 weeks' postmenstrual age (outcomes collected through first event: death, hospital discharge, hospital transfer, or 120 days after birth)', 'Late onset sepsis (> 72 hours of age)', and 'Suspected or proven NEC' (high-quality evidence).\nAuthors' conclusions\nBased on the evidence from randomised controlled trials to date, inositol supplementation does not result in important reductions in the rates of infant deaths, ROP stage 3 or higher, type 1 ROP, IVH grades 3 or 4, BPD, NEC, or sepsis. These conclusions are based mainly on two recent randomised controlled trials in neonates less than 30 weeks' postmenstrual age (N = 760), the most vulnerable population. Currently inositol supplementation should not be routinely instituted as part of the nutritional management of preterm infants with or without RDS. It is important that infants who have been enrolled in the trials included in this review are followed to assess any effects of inositol supplementation on long-term outcomes in childhood. We do not recommend any additional trials in neonates.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Does the administration of supplementary inositol reduce adverse outcomes in preterm infants with or without respiratory distress syndrome (RDS)?"
}
] |
query-laysum | 22934 | [
{
"role": "user",
"content": "Abstract: Background\nPostoperative pancreatic fistula (POPF) is one of the most frequent and potentially life-threatening complications following pancreatic surgery. Fibrin sealants have been used in some centres to reduce POPF rate. However, the use of fibrin sealant during pancreatic surgery is controversial. This is an update of a Cochrane Review last published in 2020.\nObjectives\nTo evaluate the benefits and harms of fibrin sealant use for the prevention of POPF (grade B or C) in people undergoing pancreatic surgery compared to no fibrin sealant use.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, two other databases, and five trials registers on 09 March 2023, together with reference checking, citation searching, and contacting study authors to identify additional studies.\nSelection criteria\nWe included all randomised controlled trials (RCTs) that compared fibrin sealant (fibrin glue or fibrin sealant patch) versus control (no fibrin sealant or placebo) in people undergoing pancreatic surgery.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included 14 RCTs, randomising 1989 participants, comparing fibrin sealant use versus no fibrin sealant use for different locations: stump closure reinforcement (eight trials), pancreatic anastomosis reinforcement (five trials), or main pancreatic duct occlusion (two trials). Six RCTs were carried out in single centres; two in dual centres; and six in multiple centres. One RCT was conducted in Australia; one in Austria; two in France; three in Italy; one in Japan; two in the Netherlands; two in South Korea; and two in the USA. The mean age of the participants ranged from 50.0 years to 66.5 years. All RCTs were at high risk of bias.\nApplication of fibrin sealants to pancreatic stump closure reinforcement after distal pancreatectomy\nWe included eight RCTs involving 1119 participants: 559 were randomised to the fibrin sealant group and 560 to the control group after distal pancreatectomy. Fibrin sealant use may result in little to no difference in the rate of POPF (risk ratio (RR) 0.94, 95% confidence interval (CI) 0.73 to 1.21; 5 studies, 1002 participants; low-certainty evidence) and overall postoperative morbidity (RR 1.20, 95% CI 0.98 to 1.48; 4 studies, 893 participants; low-certainty evidence). After fibrin sealant use, approximately 199 people (155 to 256 people) out of 1000 developed POPF compared with 212 people out of 1000 when no fibrin sealant was used. The evidence is very uncertain about the effect of fibrin sealant use on postoperative mortality (Peto odds ratio (OR) 0.39, 95% CI 0.12 to 1.29; 7 studies, 1051 participants; very low-certainty evidence) and total length of hospital stay (mean difference (MD) 0.99 days, 95% CI -1.83 to 3.82; 2 studies, 371 participants; very low-certainty evidence). Fibrin sealant use may reduce the reoperation rate slightly (RR 0.40, 95% CI 0.18 to 0.90; 3 studies, 623 participants; low-certainty evidence). Serious adverse events were reported in five studies (732 participants), and there were no serious adverse events related to fibrin sealant use (low-certainty evidence). The studies did not report quality of life or cost-effectiveness.\nApplication of fibrin sealants to pancreatic anastomosis reinforcement after pancreaticoduodenectomy\nWe included five RCTs involving 519 participants: 248 were randomised to the fibrin sealant group and 271 to the control group after pancreaticoduodenectomy. 
The evidence is very uncertain about the effect of fibrin sealant use on the rate of POPF (RR 1.34, 95% CI 0.72 to 2.48; 3 studies, 323 participants; very low-certainty evidence), postoperative mortality (Peto OR 0.24, 95% CI 0.05 to 1.06; 5 studies, 517 participants; very low-certainty evidence), reoperation rate (RR 0.74, 95% CI 0.33 to 1.66; 3 studies, 323 participants; very low-certainty evidence), and total hospital cost (MD -1489.00 US dollars, 95% CI -3256.08 to 278.08; 1 study, 124 participants; very low-certainty evidence). After fibrin sealant use, approximately 130 people (70 to 240 people) out of 1000 developed POPF compared with 97 people out of 1000 when no fibrin sealant was used. Fibrin sealant use may result in little to no difference both in overall postoperative morbidity (RR 1.02, 95% CI 0.87 to 1.19; 4 studies, 447 participants; low-certainty evidence) and in total length of hospital stay (MD -0.33 days, 95% CI -2.30 to 1.63; 4 studies, 447 participants; low-certainty evidence). Serious adverse events were reported in two studies (194 participants), and there were no serious adverse events related to fibrin sealant use (very low-certainty evidence). The studies did not report quality of life.\nApplication of fibrin sealants to pancreatic duct occlusion after pancreaticoduodenectomy\nWe included two RCTs involving 351 participants: 188 were randomised to the fibrin sealant group and 163 to the control group after pancreaticoduodenectomy. The evidence is very uncertain about the effect of fibrin sealant use on postoperative mortality (Peto OR 1.41, 95% CI 0.63 to 3.13; 2 studies, 351 participants; very low-certainty evidence), overall postoperative morbidity (RR 1.16, 95% CI 0.67 to 2.02; 2 studies, 351 participants; very low-certainty evidence), and reoperation rate (RR 0.85, 95% CI 0.52 to 1.41; 2 studies, 351 participants; very low-certainty evidence). Fibrin sealant use may result in little to no difference in the total length of hospital stay (median 16 to 17 days versus 17 days; 2 studies, 351 participants; low-certainty evidence). Serious adverse events were reported in one study (169 participants; low-certainty evidence): more participants developed diabetes mellitus when fibrin sealants were applied to pancreatic duct occlusion, both at three months' follow-up (33.7% fibrin sealant group versus 10.8% control group; 29 participants versus 9 participants) and 12 months' follow-up (33.7% fibrin sealant group versus 14.5% control group; 29 participants versus 12 participants). The studies did not report POPF, quality of life, or cost-effectiveness.\nAuthors' conclusions\nBased on the current available evidence, fibrin sealant use may result in little to no difference in the rate of POPF in people undergoing distal pancreatectomy. The evidence is very uncertain about the effect of fibrin sealant use on the rate of POPF in people undergoing pancreaticoduodenectomy. The effect of fibrin sealant use on postoperative mortality is uncertain in people undergoing either distal pancreatectomy or pancreaticoduodenectomy.\n\nGiven the provided abstract, please respond to the following query: \"Comparison 1\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 8 studies with 1119 participants who underwent removal of the 'tail' of the pancreas. In this operation, surgical glue is applied to the pancreatic stump. Compared with no glue, surgical glue may make little to no difference to:\n- postoperative pancreatic fistula rate;\n- overall complication rate.\nSurgical glue may reduce the additional operation rate slightly compared to no glue. There were no harmful or unwanted effects related to surgical glue use in these studies. However, we do not know if its use has an effect on:\n- death rate;\n- total length of hospital stay.\nThese 8 studies did not report on participant health-related quality of life/health status and hospital cost."
}
] |
query-laysum | 22935 | [
{
"role": "user",
"content": "Abstract: Background\nBrexpiprazole is the newest antipsychotic drug approved as an adjunctive for treatment-resistant depression. This systematic review provides comprehensive, high-quality evidence for the effects of brexpiprazole in major depressive disorder (MDD) based on existing and emerging randomised controlled trials (RCTs).\nObjectives\nTo evaluate the benefits and harms of brexpiprazole as monotherapy or adjunct treatment in the acute and longer-term treatment of MDD compared to placebo or other antidepressive agents.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was November 2021.\nSelection criteria\nWe included RCTs in people with a history of non-response to one or more antidepressant monotherapies comparing brexpiprazole as monotherapy or adjunct treatment with placebo or other antidepressive agents.\nData collection and analysis\nWe used standard Cochrane methods. Primary outcomes were 1. response to treatment, measured as number of participants who achieved 50% or more reduction of the score on a validated scale for depression, 2. dropouts due to any reason, and 3. dropouts due to adverse effects, all measured at eight weeks (acute treatment). Our secondary outcomes were 4. number of participants who responded to treatment after two and 18 weeks, number of participants who achieved remission after eight weeks, 5. total number of participants with any adverse events, 6. social functioning or adjustment, and 7. health-related quality of life. We used GRADE to assess certainty of evidence for each outcome.\nMain results\nWe retrieved nine studies with 3424 participants with treatment-resistant depression, defined as lack of response to at least two trials of antidepressant monotherapy. The age of participants was 18 to 65 years in seven studies, 18 to 75 years in one study, and older than 65 years in one study. Inclusion criteria for all studies required a diagnosis of MDD with a current major depressive episode per Diagnostic and Statistical Manual of Mental Disorders Text Revision Fourth Edition criteria and additional criteria on minimal scores on Montgomery–Åsberg Depression Rating Scale (MADRS) and Hamilton Depression Rating Scale which varied across studies. Eight studies compared brexpiprazole as adjunctive treatment to an antidepressant versus placebo as adjunctive treatment to an antidepressant. One study added brexpiprazole or placebo to a combination of antidepressant and ketamine. Four studies used fixed dosages and five studies used flexible dosages of brexpiprazole. The manufacturer sponsored eight studies and one study was independently funded.\nBrexpiprazole was superior in achieving response at eight weeks (odds ratio (OR) 1.47, 95% confidence interval (CI) 1.23 to 1.75; 8 studies, 3409 participants; high-certainty evidence). More participants randomised to brexpiprazole dropped out due to any cause at eight weeks (OR 1.37, 95% CI 1.02 to 1.83; 7 studies, 2523 participants; high-certainty evidence) or due to adverse effects (OR 2.88, 95% CI 1.37 to 6.05; 6 studies, 2472 participants; high-certainty evidence). Brexpiprazole was superior to placebo when added to an antidepressant for achieving remission at eight weeks (OR 1.46, 95% CI 1.19 to 1.79; 7 studies, 3358 participants; high-certainty evidence). 
When response was defined based on a change in the score on MADRS, brexpiprazole also demonstrated better efficacy than placebo (mean difference (MD) –1.39, 95% CI –1.96 to –0.82; 8 studies, 3263 participants; high-certainty evidence). Compared to placebo, add-on brexpiprazole is probably more likely to result in akathisia (OR 2.95, 95% CI 2.06 to 4.21; 7 studies, 3358 participants; moderate-certainty evidence) and weight gain (OR 3.14, 95% CI 2.19 to 4.49; 9 studies, 3424 participants; moderate-certainty evidence).\nThe risk of bias varied between low and unclear for most studies. Only one study was at high risk of selective reporting bias and other bias; however, this study had a small sample size and its impact on overall risk of bias was not considered significant. The certainty of the evidence on five of the main outcomes (response to treatment, measured as number of participants who achieved response and change in depressive symptoms, remission, dropouts due to any cause, dropouts due to adverse effects) was high. The certainty of evidence for akathisia and weight gain was downgraded one level to moderate due to indirectness.\nAuthors' conclusions\nOur review supports the superior efficacy of add-on brexpiprazole in comparison to placebo in adults with treatment-resistant depression for the short-term and acute treatment of depression. Our findings were limited by the small number of studies and the moderate-certainty evidence for the most common adverse effects (akathisia and weight gain). There was insufficient evidence for the effects of brexpiprazole for depression in older people or children. In addition, we found no studies that compared brexpiprazole to other antidepressants or where brexpiprazole was added after lack of response to only one antidepressant. We also lacked sufficient data to establish the long-term efficacy of brexpiprazole as adjunctive treatment. Active comparators, long-term studies in different age groups and cost-effect analyses are needed to more precisely establish the clinical role of brexpiprazole in the treatment of MDD.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases for studies that reported the effects of brexpiprazole versus placebo or antidepressants for depression. We included only studies in which the participants were assigned randomly to one or the other treatment. Neither them nor the team that was evaluating the effects of the treatment knew what was prescribed and this helped to objectively evaluate the results without the knowledge of what was prescribed impacting on the outcome. We compared and summarised the results and evaluated how confident we were that the results of the studies represented the real effects of the medication. We based our confidence on different factors such as the size of the effects, the methods used, and what and how the results were reported."
}
] |
query-laysum | 22936 | [
{
"role": "user",
"content": "Abstract: Background\nEarly accurate detection of all skin cancer types is essential to guide appropriate management and to improve morbidity and survival. Melanoma and cutaneous squamous cell carcinoma (cSCC) are high-risk skin cancers which have the potential to metastasise and ultimately lead to death, whereas basal cell carcinoma (BCC) is usually localised with potential to infiltrate and damage surrounding tissue. Anxiety around missing early curable cases needs to be balanced against inappropriate referral and unnecessary excision of benign lesions. Computer-assisted diagnosis (CAD) systems use artificial intelligence to analyse lesion data and arrive at a diagnosis of skin cancer. When used in unreferred settings ('primary care'), CAD may assist general practitioners (GPs) or other clinicians to more appropriately triage high-risk lesions to secondary care. Used alongside clinical and dermoscopic suspicion of malignancy, CAD may reduce unnecessary excisions without missing melanoma cases.\nObjectives\nTo determine the accuracy of CAD systems for diagnosing cutaneous invasive melanoma and atypical intraepidermal melanocytic variants, BCC or cSCC in adults, and to compare its accuracy with that of dermoscopy.\nSearch methods\nWe undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.\nSelection criteria\nStudies of any design that evaluated CAD alone, or in comparison with dermoscopy, in adults with lesions suspicious for melanoma or BCC or cSCC, and compared with a reference standard of either histological confirmation or clinical follow-up.\nData collection and analysis\nTwo review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold were missing. We estimated summary sensitivities and specificities separately by type of CAD system, using the bivariate hierarchical model. We compared CAD with dermoscopy using (a) all available CAD data (indirect comparisons), and (b) studies providing paired data for both tests (direct comparisons). We tested the contribution of human decision-making to the accuracy of CAD diagnoses in a sensitivity analysis by removing studies that gave CAD results to clinicians to guide diagnostic decision-making.\nMain results\nWe included 42 studies, 24 evaluating digital dermoscopy-based CAD systems (Derm–CAD) in 23 study cohorts with 9602 lesions (1220 melanomas, at least 83 BCCs, 9 cSCCs), providing 32 datasets for Derm–CAD and seven for dermoscopy. Eighteen studies evaluated spectroscopy-based CAD (Spectro–CAD) in 16 study cohorts with 6336 lesions (934 melanomas, 163 BCC, 49 cSCCs), providing 32 datasets for Spectro–CAD and six for dermoscopy. These consisted of 15 studies using multispectral imaging (MSI), two studies using electrical impedance spectroscopy (EIS) and one study using diffuse-reflectance spectroscopy. Studies were incompletely reported and at unclear to high risk of bias across all domains. 
Included studies inadequately address the review question, due to an abundance of low-quality studies, poor reporting, and recruitment of highly selected groups of participants.\nAcross all CAD systems, we found considerable variation in the hardware and software technologies used, the types of classification algorithm employed, methods used to train the algorithms, and which lesion morphological features were extracted and analysed across all CAD systems, and even between studies evaluating CAD systems. Meta–analysis found CAD systems had high sensitivity for correct identification of cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in highly selected populations, but with low and very variable specificity, particularly for Spectro–CAD systems. Pooled data from 22 studies estimated the sensitivity of Derm–CAD for the detection of melanoma as 90.1% (95% confidence interval (CI) 84.0% to 94.0%) and specificity as 74.3% (95% CI 63.6% to 82.7%). Pooled data from eight studies estimated the sensitivity of multispectral imaging CAD (MSI–CAD) as 92.9% (95% CI 83.7% to 97.1%) and specificity as 43.6% (95% CI 24.8% to 64.5%). When applied to a hypothetical population of 1000 lesions at the mean observed melanoma prevalence of 20%, Derm–CAD would miss 20 melanomas and would lead to 206 false-positive results for melanoma. MSI–CAD would miss 14 melanomas and would lead to 451 false diagnoses for melanoma. Preliminary findings suggest CAD systems are at least as sensitive as assessment of dermoscopic images for the diagnosis of invasive melanoma and atypical intraepidermal melanocytic variants. We are unable to make summary statements about the use of CAD in unreferred populations, or its accuracy in detecting keratinocyte cancers, or its use in any setting as a diagnostic aid, because of the paucity of studies.\nAuthors' conclusions\nIn highly selected patient populations all CAD types demonstrate high sensitivity, and could prove useful as a back-up for specialist diagnosis to assist in minimising the risk of missing melanomas. However, the evidence base is currently too poor to understand whether CAD system outputs translate to different clinical decision–making in practice. Insufficient data are available on the use of CAD in community settings, or for the detection of keratinocyte cancers. The evidence base for individual systems is too limited to draw conclusions on which might be preferred for practice. Prospective comparative studies are required that evaluate the use of already evaluated CAD systems as diagnostic aids, by comparison to face–to–face dermoscopy, and in participant populations that are representative of those in which the test would be used in practice.\n\nGiven the provided abstract, please respond to the following query: \"MSI-CAD results for diagnosis of melanoma\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The main results for MSI–CAD are based on eight studies including 2401 lesions. In a group of 1000 people, of whom 200 (20%) actually do have melanoma*, then:\n- An estimated 637 people will have an MSI–CAD result suggesting that a melanoma is present, and of these 451 (71%) will not actually have a melanoma (false-positive result)\n- Of the 363 people with an MSI–CAD result indicating that no melanoma is present, 14 (4%) will in fact have a melanoma (false-negative result)\nMSI–CAD detects more melanomas, but possibly produces more false-positive results (an increase in unnecessary surgery)."
}
] |
query-laysum | 22937 | [
{
"role": "user",
"content": "Abstract: Background\nEducational meetings are used widely by health personnel to provide continuing medical education and to promote implementation of innovations or translate new knowledge to change practice within healthcare systems. Previous reviews have concluded that educational meetings can result in small changes in behaviour, but that effects vary considerably. Investigations into which characteristics of educational meetings might lead to greater impact have yielded varying results, and factors that might explain heterogeneity in effects remain unclear. This is the second update of this Cochrane Review.\nObjectives\n• To assess the effects of educational meetings on professional practice and healthcare outcomes\n• To investigate factors that might explain the heterogeneity of these effects\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, ERIC, Science Citation Index Expanded (ISI Web of Knowledge), and Social Sciences Citation Index (last search in November 2016).\nSelection criteria\nWe sought randomised trials examining the effects of educational meetings on professional practice and patient outcomes.\nData collection and analysis\nTwo review authors independently extracted data and assessed risk of bias. One review author assessed the certainty of evidence (GRADE) and discussed with a second review author. We included studies in the primary analysis that reported baseline data and that we judged to be at low or unclear risk of bias. For each comparison of dichotomous outcomes, we measured treatment effect as risk difference adjusted for baseline compliance. We expressed adjusted risk difference values as percentages, and we noted that values greater than zero favour educational meetings. For continuous outcomes, we measured treatment effect as per cent change relative to the control group mean post test, adjusted for baseline performance; we expressed values as percentages and noted that values greater than zero favour educational meetings. We report means and 95% confidence intervals (CIs) and, when appropriate, medians and interquartile ranges to facilitate comparisons to previous versions of this review. We analysed professional and patient outcomes separately and analysed 22 variables that were hypothesised a priori to explain heterogeneity. 
We explored heterogeneity by using univariate meta-regression and by inspecting violin plots.\nMain results\nWe included 215 studies involving more than 28,167 health professionals, including 142 new studies for this update.\nEducational meetings as the single intervention or the main component of a multi-faceted intervention compared with no intervention\n• Probably slightly improve compliance with desired practice when compared with no intervention (65 comparisons, 7868 health professionals for dichotomous outcomes (adjusted risk difference 6.79%, 95% CI 6.62% to 6.97%; median 4.00%; interquartile range 0.29% to 13.00%); 28 comparisons, 2577 health professionals for continuous outcomes (adjusted relative percentage change 44.36%, 95% CI 41.98% to 46.75%; median 20.00%; interquartile range 6.00% to 65.00%))\n• Probably slightly improve patient outcomes compared with no intervention (15 comparisons, 2530 health professionals for dichotomous outcomes (adjusted risk difference 3.30%, 95% CI 3.10% to 3.51%; median 0.10%; interquartile range 0.00% to 4.00%); 28 comparisons, 2294 health professionals for continuous outcomes (adjusted relative percentage change 8.35%, 95% CI 7.46% to 9.24%; median 2.00%; interquartile range -1.00% to 21.00%))\nThe certainty of evidence for this comparison is moderate.\nEducational meetings alone compared with other interventions\n• May improve compliance with desired practice when compared with other interventions (6 studies, 1402 health professionals for dichotomous outcomes (adjusted risk difference 9.99%, 95% CI 9.47% to 10.52%; median 16.5%; interquartile range 0.80% to 16.50%); 2 studies, 72 health professionals for continuous outcomes (adjusted relative percentage change 12.00%, 95% CI 9.16% to 14.84%; median 12.00%; interquartile range 0.00% to 24.00%))\nNo studies met the inclusion criteria for patient outcome measurements. 
The certainty of evidence for this comparison is low.\nInteractive educational meetings compared with didactic (lecture-based) educational meetings\n• We are uncertain of effects on compliance with desired practice (3 studies, 370 health professionals for dichotomous outcomes; 1 study, 192 health professionals for continuous outcomes) or on patient outcomes (1 study, 54 health professionals for continuous outcomes), as the certainty of evidence is very low\nAny other comparison of different formats and durations of educational meetings\n• We are uncertain of effects on compliance with desired practice (1 study, 19 health professionals for dichotomous outcomes; 1 study, 20 health professionals for continuous outcomes) or on patient outcomes (1 study, 113 health professionals for continuous outcomes), as the certainty of evidence is very low.\nFactors that might explain heterogeneity of effects\nMeta-regression suggests that larger estimates of effect are associated with studies judged to be at high risk of bias, with studies that had unit of analysis errors, and with studies in which the unit of analysis was the provider rather than the patient.\nImproved compliance with desired practice may be associated with: shorter meetings; poor baseline compliance; better attendance; shorter follow-up; professionals provided with additional take-home material; explicit building of educational meetings on theory; targeting of low- versus high-complexity behaviours; targeting of outcomes with high versus low importance; goal of increasing rather than decreasing behaviour; teaching by opinion leaders; and use of didactic versus interactive teaching methods.\nPre-specified exploratory analyses of behaviour change techniques suggest that improved compliance with desired practice may be associated with use of a greater number of behaviour change techniques; goal-setting; provision of feedback; provision for social comparison; and provision for social support. Compliance may be decreased by the use of follow-up prompts, skills training, and barrier identification techniques.\nAuthors' conclusions\nCompared with no intervention, educational meetings as the main component of an intervention probably slightly improve professional practice and, to a lesser extent, patient outcomes. Educational meetings may improve compliance with desired practice to a greater extent than other kinds of behaviour change interventions, such as text messages, fees, or office systems. Our findings suggest that multi-strategy approaches might positively influence the effects of educational meetings.\nAdditional trials of educational meetings compared with no intervention are unlikely to change the review findings; therefore we will not further update this review comparison in the future. However, we note that randomised trials comparing different types of education are needed.\n\nGiven the provided abstract, please respond to the following query: \"Educational meetings alone compared with other strategies to change healthcare professionals’ behaviour\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Educational meetings alone compared with other strategies to change healthcare professionals’ behaviour\n- Healthcare professionals may be more likely to follow recommended practices (low-certainty evidence)\n- We do not know about effects on patient health because we found no relevant studies"
}
] |
query-laysum
|
22938
|
[
{
"role": "user",
"content": "Abstract: Background\nPersistent pulmonary hypertension of the newborn (PPHN) is a disease entity that describes a physiology in which there is persistence of increased pulmonary arterial pressure. PPHN is characterised by failure to adapt to a functional postnatal circulation with a fall in pulmonary vascular resistance. PPHN is responsible for impairment in oxygenation and significant neonatal mortality and morbidity. Prostanoids and their analogues may be useful therapeutic interventions due to their pulmonary vasodilatory and immunomodulatory effects.\nObjectives\nPrimary objective\n• To determine the efficacy and safety of prostanoids and their analogues (iloprost, treprostinil, and beraprost) in decreasing mortality and the need for extracorporeal membrane oxygenation (ECMO) among neonates with PH\nSecondary objective\n• To determine the efficacy and safety of prostanoids and their analogues (iloprost, treprostinil, and beraprost) in decreasing neonatal morbidity (necrotizing enterocolitis (NEC), chronic lung disease (CLD), retinopathy of prematurity (ROP), intraventricular hemorrhage (IVH), periventricular leukomalacia (PVL), length of hospital stay, and duration of mechanical ventilation) and improving neurodevelopmental outcomes among neonates with PH\nComparisons\n• Prostanoids and their analogues at any dosage or duration used to treat PPHN versus ‘standard treatment without these agents’, placebo, or inhaled nitric oxide (iNO) therapy\n• Prostanoids and their analogues at any dosage or duration used to treat refractory PPHN as an ‘add-on’ therapy to iNO versus iNO alone\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 9), MEDLINE via PubMed (1966 to 16 September 2018), Embase (1980 to 16 September 2018), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 16 September 2018). We also searched clinical trials databases, conference proceedings of the Pediatric Academic Societies (1990 to 16 September 2018), and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials. We contacted authors who have published in this field as discerned from the reference lists of identified clinical trials and review authors' personal files.\nSelection criteria\nRandomized and quasi-randomized controlled trials evaluating prostanoids or their analogues (at any dose, route of administration, or duration) used in neonates at any gestational age less than 28 days' postnatal age for confirmed or suspected PPHN.\nData collection and analysis\nWe used the standard methods of Cochrane Neonatal to conduct a systematic review and to assess the methodological quality of included studies (neonatal.cochrane.org/en/index.html). Three review authors independently assessed the titles and abstracts of studies identified by the search strategy and obtained full-text versions for assessment if necessary. We designed forms for trial inclusion or exclusion and for data extraction. 
We planned to use the GRADE approach to assess the quality of evidence.\nMain results\nWe did not identify any eligible neonatal trials evaluating prostanoids or their analogues as sole agents in the treatment of PPHN.\nAuthors' conclusions\nImplications for practice\nCurrently, no evidence shows the use of prostanoids or their analogues as pulmonary vasodilators and sole therapeutic agents for the treatment of PPHN in neonates (age 28 days or less).\nImplications for research\nThe safety and efficacy of different preparations and doses and routes of administration of prostacyclins and their analogues in neonates must be established. Well-designed, adequately powered, randomized, multi-center trials are needed to address the efficacy and safety of prostanoids and their analogues in the treatment of PPHN. These trials should evaluate long-term neurodevelopmental and pulmonary outcomes, in addition to short-term outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Persistent pulmonary hypertension of the neonate (PPHN) is a life-threatening condition. Before birth, a baby’s nourishment and oxygen are obtained through the placenta, hence blood circulates differently within the uterus. The baby with PPHN does not change over from fetal to normal newborn circulation. Blood flow is diverted from the lungs due to abnormally high blood pressure in the arteries that go to the lungs. This decreases the body’s supply of oxygen, causing significant injury to the brain and other organs.\nThe primary problem for newborns is that normal exchange of oxygen in the lung does not occur, so oxygen cannot be delivered to the body. Prostanoids are metabolites of fatty acid called 'arachidonic acid'. They have been shown to relax the lung bed blood vessels, improving blood flow to the lungs and helping with oxygenation in humans and animals. (Prostanoids are a class of drugs that dilate lung blood vessels and may help babies with PPHN. Prostacyclin (PGI₂) and prostaglandin E₁ (PGE₁) are two classes of prostanoids that have been used to treat PPHN in newborn babies.) The safety and effectiveness of these medicines have not been established."
}
] |
query-laysum
|
22939
|
[
{
"role": "user",
"content": "Abstract: Background\nEczema is a common, chronic, inflammatory skin condition that is frequently associated with atopic conditions, including asthma. Leukotriene receptor antagonists (LTRAs) have a corticosteroid-sparing role in asthma, but their role in eczema remains controversial. Currently available topical therapies for eczema are often poorly tolerated, and use of systemic agents is restricted by their adverse effect profile. A review of alternative treatments was therefore warranted.\nObjectives\nTo assess the possible benefits and harms of leukotriene receptor antagonists for eczema.\nSearch methods\nWe searched the following databases to September 2017: the Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase, and the GREAT database. We also searched five trial registries, and handsearched the bibliographies of all extracted studies for further relevant trials.\nSelection criteria\nRandomised controlled trials of LTRAs alone or in combination with other (topical or systemic) treatments compared with other treatments alone such as topical corticosteroids or placebo for eczema in the acute or chronic (maintenance) phase of eczema in adults and children.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. The primary outcome measures were change in disease severity, long-term symptom control, and adverse effects of treatment. Secondary outcomes were change in corticosteroid requirement, reduction of pruritis, quality of life, and emollient requirement. We used GRADE to assess the quality of the evidence for each outcome.\nMain results\nOnly five studies (including a total of 202 participants) met the inclusion criteria, all of which assessed oral montelukast; hence, we found no studies assessing other LTRAs. Treatment ranged from four to eight weeks, and outcomes were assessed at the end of treatment; therefore, we could only report short-term measurements (defined as less than three months follow-up from baseline). Montelukast dosing was 10 mg for adults (age 14 years and above) and 5 mg for children (age 6 years to 14 years). One study included children (aged 6 years and above) among their participants, while the remaining studies only included adults (participant age ranged from 16 to 70 years). The participants were diagnosed with moderate-to-severe eczema in four studies and moderate eczema in one study. The study setting was unclear in two studies, multicentre in two studies, and single centre in one study; the studies were conducted in Europe and Bangladesh. Two studies were industry funded. The comparator was placebo in three studies and conventional treatment in two studies. The conventional treatment comparator was a combination of antihistamines and topical corticosteroids (plus oral antibiotics in one study).\nFour of the studies did not adequately describe their randomisation or allocation concealment method and were considered as at unclear risk of selection bias. Only one study was at low risk of performance and detection bias. However, we judged all studies to be at low risk of attrition and reporting bias.\nWe found no evidence of a difference in disease severity of moderate-to-severe eczema after short-term use of montelukast (10 mg) when compared with placebo. 
The outcome was assessed using the modified EASI (Eczema Area and Severity Index) score and SASSAD (Six Area, Six Sign Atopic Dermatitis) severity score (standardised mean difference 0.29, with a positive score showing montelukast is favoured, 95% confidence interval (CI) -0.23 to 0.81; 3 studies; n = 131; low-quality evidence).\nWhen short-term montelukast (10 mg) treatment was compared with conventional treatment in one study, the mean improvement in severity of moderate-to-severe eczema was greater in the intervention group (measured using SCORAD (SCORing of Atopic Dermatitis) severity index) (mean difference 10.57, 95% CI 4.58 to 16.56; n = 31); however, another study of 32 participants found no significant difference between groups using the same measure (mean improvement was 25.2 points with montelukast versus 23.9 points with conventional treatment; no further numerical data provided). We judged the quality of the evidence as very low for this outcome, meaning the results are uncertain.\nAll studies reported their adverse event rate during treatment. Four studies (136 participants) reported no adverse events. In one study of 58 participants with moderate eczema who received montelukast 10 mg (compared with placebo), there was one case of septicaemia and one case of dizziness reported in the intervention group, both resulting in study withdrawal, although whether these effects were related to the medication is unclear. Mild side effects (e.g. headache and mild gastrointestinal disturbances) were also noted, but these were fairly evenly distributed between the montelukast and placebo groups. The quality of evidence for this outcome was low.\nNo studies specifically evaluated emollient requirement or quality of life. One study that administered treatment for eight weeks specifically evaluated pruritus improvement at the end treatment and topical corticosteroid use during treatment. We found no evidence of a difference between montelukast (10 mg) and placebo for both outcomes (low-quality evidence, n = 58). No other study assessed these outcomes.\nAuthors' conclusions\nThe findings of this review are limited to montelukast. There was a lack of evidence addressing the review question, and the quality of the available evidence for most of the measured outcomes was low. Some primary and secondary outcomes were not addressed at all, including long-term control.\nWe found no evidence of a difference between montelukast (10 mg) and placebo on disease severity, pruritus improvement, and topical corticosteroid use. Very low-quality evidence means we are uncertain of the effect of montelukast (10 mg) compared with conventional treatment on disease severity. Participants in only one study reported adverse events, which were mainly mild (low-quality evidence).\nThere is no evidence that LTRA is an effective treatment for eczema. Serious limitations were that all studies focused on montelukast and only included people with moderate-to-severe eczema, who were mainly adults; and that each outcome was evaluated with a small sample size, if at all.\nFurther large randomised controlled trials, with a longer treatment duration, of adults and children who have eczema of all severities may help to evaluate the effect of all types of LTRA, especially on eczema maintenance.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Eczema, also called 'atopic eczema' and 'dermatitis', is a common dry skin condition. Eczema can be mild, moderate, or severe depending on how itchy or red the skin is, how much skin is affected, and its impact on daily life. Eczema is sometimes linked to a group of conditions, including asthma. Corticosteroids are a commonly used treatment, but long-term treatment can produce unwanted side effects, such as skin thinning. LTRAs have been shown to improve asthma symptoms, and their use in eczema would reduce the amount of steroids used. LTRAs are not currently given for eczema, and whether they are effective for this condition is uncertain."
}
] |
query-laysum
|
22940
|
[
{
"role": "user",
"content": "Abstract: Background\nFollowing surgery, surgical wounds can be closed using a variety of devices including sutures (subcuticular or transdermal), staples and tissue adhesives. Subcuticular sutures are intradermal stitches (placed immediately below the epidermal layer). The increased availability of synthetic absorbable filaments (stitches which are absorbed by the body and do not have to be removed) has led to an increased use of subcuticular sutures. However, in non-obstetric surgery, there is still controversy about whether subcuticular sutures increase the incidence of wound complications.\nObjectives\nTo examine the efficacy and acceptability of subcuticular sutures for skin closure in non-obstetric surgery.\nSearch methods\nIn March 2019, we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nAll randomised controlled trials which compared subcuticular sutures with any other methods for skin closure in non-obstetric surgery were included in the review.\nData collection and analysis\nTwo review authors independently identified the trials, extracted data and carried out risk of bias and GRADE assessment of the certainty of the evidence.\nMain results\nWe included 66 studies (7487 participants); 11 included trials had more than two arms. Most trials had poorly-reported methodology, meaning that it is unclear whether they were at high risk of bias. Most trials compared subcuticular sutures with transdermal sutures, skin staples or tissue adhesives. Most outcomes prespecified in the review protocol were reported. The certainty of evidence varied from high to very low in the comparisons of subcuticular sutures with transdermal sutures or staples and tissue adhesives; the certainty of the evidence for the comparison with surgical tapes and zippers was low to very low. Most evidence was downgraded for imprecision or risk of bias.\nAlthough the majority of studies enrolled people who underwent CDC class 1 (clean) surgeries, two-thirds of participants were enrolled in studies which included CDC class 2 to 4 surgeries, such as appendectomies and gastrointestinal surgeries. Most participants were adults in a hospital setting.\nSubcuticular sutures versus transdermal sutures\nThere may be little difference in the incidence of SSI (risk ratio (RR) 1.10; 95% confidence interval (CI) 0.80 to 1.52; 3107 participants; low-certainty evidence).\nIt is uncertain whether subcuticular sutures reduce wound complications (RR 0.83; 95% CI 0.40 to 1.71; 1489 participants; very low-certainty evidence). Subcuticular sutures probably improve patient satisfaction (score from 1 to 10) (at 30 days; MD 1.60, 95% CI 1.32 to 1.88; 290 participants; moderate-certainty evidence). 
Wound closure time is probably longer when subcuticular sutures are used (MD 5.81 minutes; 95% CI 5.13 to 6.49 minutes; 585 participants; moderate-certainty evidence).\nSubcuticular sutures versus skin staples\nThere is moderate-certainty evidence that, when compared with skin staples, subcuticular sutures probably have little effect on SSI (RR 0.81, 95% CI 0.64 to 1.01; 4163 participants); but probably decrease the incidence of wound complications (RR 0.79, 95% CI 0.64 to 0.98; 2973 participants). Subcuticular sutures are associated with slightly higher patient satisfaction (score from 1 to 5) (MD 0.20, 95% CI 0.10 to 0.30; 1232 participants; high-certainty evidence). Wound closure time may also be longer compared with staples (MD 0.30 to 5.50 minutes; 1384 participants; low-certainty evidence).\nSubcuticular sutures versus tissue adhesives, surgical tapes and zippers\nThere is moderate-certainty evidence showing no clear difference in the incidence of SSI between participants treated with subcuticular sutures and those treated with tissue adhesives (RR 0.77, 95% CI 0.41 to 1.45; 869 participants). There is also no clear difference in the incidence of wound complications (RR 0.62, 95% CI 0.35 to 1.11; 1058 participants; low-certainty evidence). Subcuticular sutures may also achieve lower patient satisfaction ratings (score from 1 to 10) (MD -2.05, 95% CI -3.05 to -1.05; 131 participants) (low-certainty evidence). In terms of SSI incidence, the evidence is uncertain when subcuticular sutures are compared with surgical tapes (RR 1.31, 95% CI 0.40 to 4.27; 354 participants; very low-certainty evidence) or surgical zippers (RR 0.80, 95% CI 0.08 to 8.48; 424 participants; very low-certainty evidence). There may be little difference in the incidence of wound complications between participants treated with subcuticular sutures and those treated with surgical tapes (RR 0.90, 95% CI 0.61 to 1.34; 492 participants; low-certainty evidence). It is uncertain whether subcuticular sutures reduce the risk of wound complications compared with surgical zippers (RR 0.55, 95% CI 0.15 to 2.04; 424 participants; very low-certainty evidence). It is also uncertain whether it takes longer to close a wound with subcuticular sutures compared with tissue adhesives (MD -0.34 to 10.39 minutes; 895 participants), surgical tapes (MD 0.74 to 6.36 minutes; 169 participants) or zippers (MD 4.38 to 8.25 minutes; 424 participants) (very low-certainty evidence). No study reported results for patient satisfaction compared with surgical tapes or zippers.\nAuthors' conclusions\nThere is no clear difference in the incidence of SSI for subcuticular sutures in comparison with any other skin closure methods. Subcuticular sutures probably reduce wound complications compared with staples, and probably improve patient satisfaction compared with transdermal sutures or staples. However, tissue adhesives may improve patient satisfaction compared with subcuticular sutures, and transdermal sutures and skin staples may be quicker to apply than subcuticular sutures. The quality of the evidence ranged from high to very low; evidence for almost all comparisons was subject to some limitations. There seems to be no need for additional new trials to explore the comparison with staples because there are high-quality studies with large sample sizes and some ongoing studies. 
However, there is a need for studies exploring the comparisons with transdermal sutures, tissue adhesives, tapes and zippers, with high-quality studies and large sample sizes, including long-term assessments.\n\nGiven the provided abstract, please respond to the following query: \"What is the aim of this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this review was to find out whether subcuticular sutures (stitches placed under the skin) are effective for closing wounds after surgery. We were interested in all types of surgery except obstetric surgery (operations related to childbirth, e.g. caesarean sections). Cochrane researchers collected and analysed all studies related to this question and found 66 relevant randomised controlled trials. Randomised controlled trials are medical studies where patients are chosen at random to receive different treatments. This type of trial provides the most reliable health evidence."
}
] |
query-laysum
|
22941
|
[
{
"role": "user",
"content": "Abstract: Background\nUveitis is the most common extra-articular manifestation of juvenile idiopathic arthritis (JIA) and a potentially sight-threatening condition characterized by intraocular inflammation. Current treatment for JIA-associated uveitis (JIA-U) is largely based on physician experience, observational evidence and consensus guidelines, resulting in considerable variations in practice.\nObjectives\nTo evaluate the effectiveness and safety of tumor necrosis factor (TNF) inhibitors used for treatment of JIA-U.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE; Embase.com; PubMed; Latin American and Caribbean Health Sciences Literature Database (LILACS); ClinicalTrials.gov, and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We last searched the electronic databases on 3 February 2022.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing TNF inhibitors with placebo in participants with a diagnosis of JIA and uveitis who were aged 2 to 18 years old.\nData collection and analysis\nWe used standard Cochrane methodology and graded the certainty of the body of evidence for seven outcomes using the GRADE classification.\nMain results\nWe included three RCTs with 134 participants.\nOne study conducted in the USA randomized participants to etanercept or placebo (N = 12). Two studies, one conducted in the UK (N = 90) and one in France (N = 32), randomized participants to adalimumab or placebo. All studies were at low risk of bias. Initial pooled estimates suggested that TNF-inhibitors may result in little to no difference on treatment success defined as 0 to trace cells on Standardization of Uveitis Nomenclature (SUN)-grading; or two-step decrease in activity based on SUN grading (estimated risk ratio (RR) 0.66; 95% confidence interval (CI) 0.21 to 2.10; 2 studies; 43 participants; low-certainty evidence) or treatment failure defined as a two-step increase in activity based on SUN grading (RR 0.31; 95% CI 0.01 to 7.15; 1 study; 31 participants; low-certainty evidence). Further analysis using the individual trial definitions of treatment response and failure suggested a positive treatment effect of TNF inhibitors; a RR of treatment success of 2.60 (95% CI 1.30 to 5.20; 3 studies; 124 participants; low-certainty evidence), and RR of treatment failure of 0.23 (95% CI 0.11 to 0.50; 3 studies; 133 participants). Almost all the evidence was on adalimumab and the evidence on etanercept was very limited. For secondary outcomes, one study suggests that adalimumab may have little to no effect on risk of recurrence after induction of remission at three months (RR 2.50, 95% CI 0.31 to 20.45; 90 participants; very low-certainty evidence) and visual acuity, but the evidence is very uncertain; mean difference in longitudinal logMAR score change over six months was -0.01 (95% CI –0.06 to 0.03) and -0.02 (95% CI –0.07 to 0.03) using the best and worst logMAR measurement, respectively (low-certainty evidence). Low-certainty evidence from one study suggested that adalimumab treatment results in reduction of topical steroid doses at six months (hazard ratio 3.58; 95% CI 1.24 to 10.32; 74 participants who took one or more topical steroid per day at baseline). Adverse events, including injection site reactions and infections, were more common in the TNF inhibitor group. 
Serious adverse events were uncommon.\nAuthors' conclusions\nAdalimumab appears to increase the likelihood of treatment success and decrease the likelihood of treatment failure when compared with placebo. The evidence was less conclusive about a positive treatment effect with etanercept. Adverse events from JIA-U trials are in keeping with the known side effect profile of TNF inhibitors. Standard validated JIA-U outcome measures are required to homogenize assessment and to allow for comparison and analysis of multiple datasets.\n\nGiven the provided abstract, please respond to the following query: \"What does it mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our review suggests that adalimumab, one TNF inhibitor, increases the chance of improving ocular symptoms of uveitis and decrease the chance of worsening uveitis when compared with placebo."
}
] |
query-laysum
|
22942
|
[
{
"role": "user",
"content": "Abstract: Background\nAffordable, feasible and efficacious interventions to reduce neonatal infections and improve neonatal survival are needed. Chlorhexidine, a broad spectrum topical antiseptic agent, is active against aerobic and anaerobic organisms and reduces neonatal bacterial colonisation and may reduce infection.\nObjectives\nTo evaluate the efficacy of neonatal skin or cord care with chlorhexidine versus routine care or no treatment for prevention of infections in late preterm or term newborn infants in hospital and community settings.\nSearch methods\nWe searched CENTRAL, latest issue of The Cochrane Library, MEDLINE (1966 to November 2013), EMBASE (1980 to November 2013), and CINAHL (1982 to November 2013). Ongoing trials were detected by searching the following databases: www.clinicaltrials.gov and www.controlled-trials.com.\nSelection criteria\nCluster and individual patient randomised controlled trials of chlorhexidine use (for skin care, or cord care, or both) in term or late preterm neonates in hospital and community settings were eligible for inclusion. Three authors independently screened and selected studies for inclusion.\nData collection and analysis\nTwo review authors independently extracted data, and assessed study risk of bias. The quality of evidence for each outcome was assessed using GRADE. We calculated pooled risk ratios (RRs) and risk differences (RDs) with 95% confidence intervals (CIs), and presented results using GRADE 'Summary of findings' tables.\nMain results\nWe included 12 trials in this review. There were seven hospital-based and five community-based studies. In four studies maternal vaginal wash with chlorhexidine was done in addition to neonatal skin and cord care.\nNewborn skin or cord cleansing with chlorhexidine compared to usual care in hospitals\nLow-quality evidence from one trial showed that chlorhexidine cord cleansing compared to dry cord care may lead to no difference in neonatal mortality (RR 0.11, 95% CI 0.01 to 2.04). Moderate-quality evidence from two trials showed that chlorhexidine cord cleansing compared to dry cord care probably reduces the risk of omphalitis/infections (RR 0.48, 95% CI 0.28 to 0.84).\nLow-quality evidence from two trials showed that chlorhexidine skin cleansing compared to dry cord care may lead to no difference in omphalitis/infections (RR 0.88, 95% CI 0.56 to 1.39). None of the studies in this comparison reported effects of the treatments on neonatal mortality.\nNewborn skin or cord cleansing with chlorhexidine compared to usual care in the community\nHigh-quality evidence from three trials showed that chlorhexidine cord cleansing compared to dry cord care reduces neonatal mortality (RR 0.81, 95% CI 0.71 to 0.92) and omphalitis/infections (RR 0.48, 95% CI 0.40 to 0.57).\nHigh-quality evidence from one trial showed no difference between chlorhexidine skin cleansing and usual skin care on neonatal mortality (RR 1.03, 95% CI 0.87 to 1.23). None of the studies in this comparison reported effects of the treatments on omphalitis/infections.\nMaternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention (sterile saline solution) in hospitals\nModerate-quality evidence from one trial showed no difference between maternal vaginal chlorhexidine in addition to total body cleansing and no intervention on neonatal mortality (RR 0.98, 95% CI 0.67 to 1.42). 
High-quality evidence from two trials showed no difference between maternal vaginal chlorhexidine in addition to total body cleansing and no intervention on the risk of infections (RR 0.93, 95% CI 0.82 to 1.16).\nFindings from one trial showed that maternal vaginal cleansing in addition to total body cleansing results in increased risk of hypothermia (RR 1.33, 95% CI 1.19 to 1.49).\nMaternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention (sterile saline solution) in the community\nLow-quality evidence from one trial showed no difference between maternal vaginal chlorhexidine in addition to total body cleansing and no intervention on neonatal mortality (RR 0.20, 95% CI 0.01 to 4.03). Moderate-quality evidence from one trial showed that maternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention probably reduces the risk of neonatal infections (RR 0.69, 95% CI 0.49 to 0.95). These studies did not report effect on omphalitis.\nAuthors' conclusions\nThere is some uncertainty as to the effect of chlorhexidine applied to the umbilical cords of newborns in hospital settings on neonatal mortality. The quality of evidence for the effects on infection are moderate for cord application and low for application to skin. There is high-quality evidence that chlorhexidine skin or cord care in the community setting results in a 50% reduction in the incidence of omphalitis and a 12% reduction in neonatal mortality. Maternal vaginal chlorhexidine compared to usual care probably leads to no difference in neonatal mortality in hospital settings. Maternal vaginal chlorhexidine compared to usual care results in no difference in the risk of infections in hospital settings. The uncertainty over the effect of maternal vaginal chlorhexidine on mortality outcomes reflects small sample sizes and low event rates in the community settings.\n\nGiven the provided abstract, please respond to the following query: \"Maternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention (sterile saline solution) in the community\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Maternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention may lead to no difference in neonatal mortality. Maternal vaginal chlorhexidine in addition to total body cleansing compared to no intervention probably reduces the risk of neonatal infections (none of the studies included in this comparison reported effects on omphalitis)."
}
] |
query-laysum
|
22943
|
[
{
"role": "user",
"content": "Abstract: Background\nOral poisoning is a major cause of mortality and disability worldwide, with estimates of over 100,000 deaths due to unintentional poisoning each year and an overrepresentation of children below five years of age. Any effective intervention that laypeople can apply to limit or delay uptake or to evacuate, dilute or neutralize the poison before professional help arrives may limit toxicity and save lives.\nObjectives\nTo assess the effects of pre-hospital interventions (alone or in combination) for treating acute oral poisoning, available to and feasible for laypeople before the arrival of professional help.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, ISI Web of Science, International Pharmaceutical Abstracts, and three clinical trials registries to 4 December 2018, and we also carried out reference checking and citation searching.\nSelection criteria\nWe included randomized controlled trials comparing interventions (alone or in combination) that are feasible in a pre-hospital setting for treating acute oral poisoning patients, including but potentially not limited to activated charcoal (AC), emetics, cathartics, diluents, neutralizing agents and body positioning.\nData collection and analysis\nTwo review authors independently performed study selection, data collection and assessment. Primary outcomes of this review were incidence of mortality and adverse events, plus incidence and severity of symptoms of poisoning. Secondary outcomes were duration of symptoms of poisoning, drug absorption, and incidence of hospitalization and ICU admission.\nMain results\nWe included 24 trials involving 7099 participants. Using the Cochrane 'Risk of bias' tool, we assessed no study as being at low risk of bias for all domains. Many studies were poorly reported, so the risk of selection and detection biases were often unclear. Most studies reported important outcomes incompletely, and we judged them to be at high risk of reporting bias.\nAll but one study enrolled oral poisoning patients in an emergency department; the remaining study was conducted in a pre-hospital setting. Fourteen studies included multiple toxic syndromes or did not specify, while the other studies specifically investigated paracetamol (2 studies), carbamazepine (2 studies), tricyclic antidepressant (2 studies), yellow oleander (2 studies), benzodiazepine (1 study), or toxic berry intoxication (1 study). Twenty-one trials investigated the effects of activated charcoal (AC), administered as a single dose (SDAC) or in multiple doses (MDAC), alone or in combination with other first aid interventions (a cathartic) and/or hospital treatments. Six studies investigated syrup of ipecac plus other first aid interventions (SDAC + cathartic) versus ipecac alone. 
The collected evidence was mostly of low to very low certainty, often downgraded for indirectness, risk of bias or imprecision due to low numbers of events.\nFirst aid interventions that limit or delay the absorption of the poison in the body\nWe are uncertain about the effect of SDAC compared to no intervention on the incidence of adverse events in general (zero events in both treatment groups; 1 study, 451 participants) or vomiting specifically (Peto odds ratio (OR) 4.17, 95% confidence interval (CI) 0.30 to 57.26, 1 study, 25 participants), ICU admission (Peto OR 7.77, 95% CI 0.15 to 391.93, 1 study, 451 participants) and clinical deterioration (zero events in both treatment groups; 1 study, 451 participants) in participants with mixed types or paracetamol poisoning, as all evidence for these outcomes was of very low certainty. No studies assessed SDAC for mortality, duration of symptoms, drug absorption or hospitalization.\nOnly one study compared SDAC to syrup of ipecac in participants with mixed types of poisoning, providing very low-certainty evidence. Therefore we are uncertain about the effects on Glasgow Coma Scale scores (mean difference (MD) −0.15, 95% CI −0.43 to 0.13, 1 study, 34 participants) or incidence of adverse events (risk ratio (RR) 1.24, 95% CI 0.26 to 5.83, 1 study, 34 participants). No information was available concerning mortality, duration of symptoms, drug absorption, hospitalization or ICU admission.\nThis review also considered the added value of SDAC or MDAC to hospital interventions, which mostly included gastric lavage. No included studies investigated the use of body positioning in oral poisoning patients.\nFirst aid interventions that evacuate the poison from the gastrointestinal tract\nWe found one study comparing ipecac versus no intervention in toxic berry ingestion in a pre-hospital setting. Low-certainty evidence suggests there may be an increase in the incidence of adverse events, but the study did not report incidence of mortality, incidence or duration of symptoms of poisoning, drug absorption, hospitalization or ICU admission (103 participants).\nIn addition, we also considered the added value of syrup of ipecac to SDAC plus a cathartic and the added value of a cathartic to SDAC.\nNo studies used cathartics as an individual intervention.\nFirst aid interventions that neutralize or dilute the poison\nNo included studies investigated the neutralization or dilution of the poison in oral poisoning patients.\nAuthors' conclusions\nThe studies included in this review provided mostly low- or very low-certainty evidence about the use of first aid interventions for acute oral poisoning. A key limitation was the fact that only one included study actually took place in a pre-hospital setting, which undermines our confidence in the applicability of these results to this setting. Thus, the amount of evidence collected was insufficient to draw any conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "All but one study took place in a hospital setting, which means that the results cannot be directly applied to the lay setting. Because studies did not always report the methods they used, we are uncertain about the quality of the research conduct for many. Outcomes important to patients and pre-specified by us as important outcomes for this review were often absent or incompletely reported. Our certainty about the results of this review is mostly low to very low. Therefore future research is highly likely to change the findings."
}
] |