Browsing by Author "Briel, Matthias"
Now showing 1 - 7 of 7
- Item: A methodological survey of the analysis, reporting and interpretation of Absolute Risk ReductiOn in systematic revieWs (ARROW): a study protocol (2013)
  Authors: Alonso-Coello, Pablo; Carrasco-Labra, Alonso; Brignardello-Petersen, Romina; Neumann Burotto, Gonzalo Ignacio; Akl, Elie A.; Sun, Xin; Johnston, Bradley C.; Briel, Matthias; Busse, Jason W.; Glujovsky, Demián; Granados, Carlos E.; Iorio, Alfonso; Irfan, Affan; García, Laura M.; Mustafa, Reem A.; Ramirez-Morera, Anggie; Solà, Iván; Tikkinen, Kari A. O.; Ebrahim, Shanil; Vandvik, Per O.; Zhang, Yuqing; Selva, Anna; Sanabria, Andrea J.; Zazueta, Oscar E.; Vernooij, Robin W. M.; Schünemann, Holger J.; Guyatt, Gordon H.
  Abstract:
  Background: Clinicians, providers and guideline panels use absolute effects to weigh the advantages and downsides of treatment alternatives; relative measures alone have the potential to mislead readers. However, little is known about the reporting of absolute measures in systematic reviews. The objectives of our study are to determine the proportion of systematic reviews that report absolute measures of effect for the most important outcomes, and to ascertain how these measures are analyzed, reported and interpreted.
  Methods/design: We will conduct a methodological survey of systematic reviews published in 2010, drawing a 1:1 stratified random sample of Cochrane and non-Cochrane systematic reviews. We will calculate the proportion of systematic reviews reporting at least one absolute estimate of effect for the most patient-important outcome for the comparison of interest. We will conduct multivariable logistic regression analyses with the reporting of an absolute estimate of effect as the dependent variable and pre-specified study characteristics as the independent variables. For systematic reviews reporting an absolute estimate of effect, we will document the methods used for the analysis, reporting and interpretation of that estimate.
  Discussion: Our methodological survey will characterize current practice in the reporting of absolute estimates in systematic reviews. Our findings may influence recommendations on the reporting, conduct and interpretation of absolute estimates, and are likely to be of interest to systematic review authors, funding agencies, clinicians, guideline developers and journal editors. (An illustrative calculation of absolute versus relative effects appears after this listing.)
- Item: Compelling evidence from meta-epidemiological studies demonstrates overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors (2024)
  Authors: Wang, Ying; Parpia, Sameer; Couban, Rachel; Wang, Qi; Armijo-Olivo, Susan; Bassler, Dirk; Briel, Matthias; Brignardello-Petersen, Romina; Gluud, Lise Lotte; Keitz, Sheri A.; Letelier, Luz M.; Ravaud, Philippe; Schulz, Kenneth F.; Siemieniuk, Reed A. C.; Zeraatkar, Dena; Guyatt, Gordon H.
  Abstract:
  Objectives: To investigate the impact of potential risk of bias elements on effect estimates in randomized trials.
  Study Design and Setting: We conducted a systematic survey of meta-epidemiological studies examining the influence of potential risk of bias elements on effect estimates in randomized trials. We included only meta-epidemiological studies that either preserved the clustering of trials within meta-analyses (comparing effect estimates between trials with and without the potential risk of bias element within each meta-analysis, then combining across meta-analyses; between-trial comparisons) or preserved the clustering of substudies within trials (comparing effect estimates between substudies with and without the element, then combining across trials; within-trial comparisons). Separately for studies based on between- and within-trial comparisons, we extracted ratios of odds ratios (RORs) from each study and combined them using a random-effects model. We made overall inferences and assessed certainty of evidence based on Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) and the Instrument to assess the Credibility of Effect Modification Analyses. (A sketch of random-effects pooling of RORs appears after this listing.)
  Results: Forty-one meta-epidemiological studies (34 of between-trial, 7 of within-trial comparisons) proved eligible. Inadequate random sequence generation (ROR 0.94, 95% confidence interval [CI] 0.90-0.97) and inadequate allocation concealment (ROR 0.92, 95% CI 0.88-0.97) probably lead to effect overestimation (moderate certainty). Lack of patient blinding probably leads to overestimated effects for patient-reported outcomes (ROR 0.36, 95% CI 0.28-0.48; moderate certainty). Lack of blinding of outcome assessors results in effect overestimation for subjective outcomes (ROR 0.69, 95% CI 0.51-0.93; high certainty). The impact of blinding of patients or outcome assessors on other outcomes, and the impact of blinding of health-care providers, data collectors, or data analysts, remain uncertain. Trials stopped early for benefit probably overestimate effects (moderate certainty). Trials with imbalanced cointerventions may overestimate effects, while trials with missing outcome data may underestimate effects (low certainty). The influence of baseline imbalance, compliance, selective reporting, and intention-to-treat analysis remains uncertain.
  Conclusion: Failure to ensure adequate random sequence generation or allocation concealment probably results in modest overestimates of effects. Lack of patient blinding probably leads to substantial overestimates of effects for patient-reported outcomes. Lack of blinding of outcome assessors results in substantial effect overestimation for subjective outcomes. For other elements, though evidence for a consistent systematic overestimation of effect remains limited, failure to implement these safeguards may still introduce important bias. © 2023 Elsevier Inc. All rights reserved.
- Item: Completion and publication rates of randomized controlled trials in surgery: an empirical study (2015)
  Authors: Rosenthal, Rachel; Kasenda, Benjamin; Dell-Kuster, Salome; Von Elm, Erik; You, John; Neumann Burotto, Gonzalo Ignacio; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J.; Stegert, Mihaela; Tikkinen, Kari A. O.; Blümle, Anette; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail; Mertz, Dominik; Akl, Elie A.; Bassler, Dirk; Busse, Jason W.; Ferreira-González, Ignacio; Lamontagne, Francois; Nordmann, Alain; Gloy, Viktoria; Olu, Kelechi K.; Raatz, Heike; Moja, Lorenzo; Ebrahim, Shanil; Schandelmaier, Stefan; Sun, Xin; Vandvik, Per O.; Johnston, Bradley C.; Walter, Martin A.; Burnand, Bernard; Schwenkglenks, Matthias; Hemkens, Lars G.; Bucher, Heiner C.; Guyatt, Gordon H.; Briel, Matthias
- Item: Instruments assessing risk of bias of randomized trials frequently included items that are not addressing risk of bias issues (2022)
  Authors: Wang, Ying; Ghadimi, Maryam; Wang, Qi; Hou, Liangying; Zeraatkar, Dena; Iqbal, Atiya; Ho, Cameron; Yao, Liang; Hu, Malini; Ye, Zhikang; Couban, Rachel; Armijo-Olivo, Susan; Bassler, Dirk; Briel, Matthias; Gluud, Lise Lotte; Glasziou, Paul; Jackson, Rod; Keitz, Sheri A.; Letelier, Luz M.; Ravaud, Philippe; Schulz, Kenneth F.; Siemieniuk, Reed A. C.; Brignardello-Petersen, Romina; Guyatt, Gordon H.
  Abstract:
  Objectives: To establish whether items included in instruments published in the last decade for assessing risk of bias of randomized controlled trials (RCTs) are indeed addressing risk of bias.
  Study Design and Setting: We searched Medline, Embase, Web of Science, and Scopus from 2010 to October 2021 for instruments assessing risk of bias of RCTs. By extracting items and summarizing their essential content, we generated an item list. Items that two reviewers agreed clearly did not address risk of bias were excluded. We included the remaining items in a survey in which 13 experts judged the issue each item addresses: risk of bias, applicability, random error, reporting quality, or none of the above.
  Results: Seventeen eligible instruments included 127 unique items. After excluding 61 items deemed clearly not to address risk of bias, the item classification survey included 66 items, of which the majority of respondents deemed 20 items (30.3%) as addressing risk of bias and 11 (16.7%) as not addressing risk of bias; there was substantial disagreement for the remaining 35 (53.0%) items.
  Conclusion: Existing risk of bias instruments frequently include items that do not address risk of bias. For many items, experts disagree on whether or not they address risk of bias. © 2022 Elsevier Inc. All rights reserved.
- Item: Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review (BMJ Publishing Group, 2012)
  Authors: Akl, Elie A.; Briel, Matthias; You, John J.; Sun, Xin; Johnston, Bradley C.; Busse, Jason W.; Mulla, Sohail; Lamontagne, Francois; Bassler, Dirk; Vera, Claudio; Alshurafa, Mohamad; Katsios, Christina M.; Zhou, Qi; Cukierman Yaffe, Tali; Gangji, Azim; Mills, Edward J.; Walter, Stephen D.; Cook, Deborah J.; Schuenemann, Holger J.; Altman, Douglas G.; Guyatt, Gordon H.
  Abstract:
  Objective: To assess the reporting, extent, and handling of loss to follow-up and its potential impact on the estimates of the effect of treatment in randomised controlled trials. (A hypothetical sensitivity-analysis sketch appears after this listing.)
- Item: Reporting, handling and assessing the risk of bias associated with missing participant data in systematic reviews: a methodological survey (2015)
  Authors: Akl, Elie A.; Carrasco Labra, Alonso; Brignardello Petersen, Romina; Neumann Burotto, Gonzalo Ignacio; Johnston, Bradley C.; Sun, Xin; Briel, Matthias; Busse, Jason W.; Ebrahim, Shanil; Granados, Carlos; Iorio, Alfonso; Irfan, Affan; Martínez García, Laura; Mustafa, Reem A.; Ramírez Morera, Anggie; Selva, Anna; Solà, Ivan; Sanabria, Andrea Juliana; Tikkinen, Kari A. O.; Vandvik, Per O.; Vernooij, Robin W. M.; Zazueta, Oscar E.; Zhou, Qi; Guyatt, Gordon H.; Alonso Coello, Pablo
- Item: Specific instructions for estimating unclearly reported blinding status in randomized trials were reliable and valid (Elsevier Science Inc, 2012)
  Authors: Akl, Elie A.; Sun, Xin; Busse, Jason W.; Johnston, Bradley C.; Briel, Matthias; Mulla, Sohail; You, John J.; Bassler, Dirk; Lamontagne, Francois; Vera, Claudio; Alshurafa, Mohamad; Katsios, Christina M.; Heels Ansdell, Diane; Zhou, Qi; Mills, Ed; Guyatt, Gordon H.
  Abstract:
  Objective: To test the reliability and validity of specific instructions to classify blinding, when unclearly reported in randomized trials, as "probably done" or "probably not done."
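
The ARROW protocol (first item above) turns on the distinction between relative and absolute effect measures. As a minimal, hypothetical sketch (not part of the protocol; the function and all numbers are invented for illustration), the same relative risk reduction implies very different absolute risk reductions and numbers needed to treat depending on the baseline risk:

```python
# Hypothetical illustration of absolute vs. relative effects; not taken from
# the ARROW study. absolute_effect() and all numbers are invented for this sketch.

def absolute_effect(baseline_risk: float, relative_risk: float):
    """Return (absolute risk reduction, number needed to treat)."""
    treated_risk = baseline_risk * relative_risk
    arr = baseline_risk - treated_risk              # absolute risk reduction
    nnt = float("inf") if arr == 0 else 1.0 / arr   # number needed to treat
    return arr, nnt

# The same 25% relative risk reduction (relative risk 0.75) at two baseline risks
for baseline in (0.20, 0.02):                       # 20% vs. 2% control event rate
    arr, nnt = absolute_effect(baseline, relative_risk=0.75)
    print(f"baseline {baseline:.0%}: ARR = {arr:.1%}, NNT = {nnt:.0f}")
# baseline 20%: ARR = 5.0%, NNT = 20
# baseline 2%: ARR = 0.5%, NNT = 200
```

This is the gap the survey examines: a review reporting only the relative measure leaves the reader without the absolute effect that matters for decisions.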
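
The 2024 meta-epidemiological survey (second item above) combines ratios of odds ratios across comparisons with a random-effects model. The sketch below shows one common way to do that, DerSimonian-Laird pooling on the log scale; it is an assumption-laden illustration, not the study's analysis code, and the RORs and standard errors are invented.

```python
import math

# Hypothetical per-comparison ratios of odds ratios (RORs) and standard errors
# of log(ROR); invented numbers, not the study's data.
rors = [0.90, 0.95, 0.88, 1.02]
ses = [0.05, 0.04, 0.06, 0.07]

y = [math.log(r) for r in rors]          # analyze on the log scale
w = [1.0 / se**2 for se in ses]          # inverse-variance (fixed-effect) weights

# DerSimonian-Laird estimate of between-study variance tau^2
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights, pooled log(ROR), and a 95% confidence interval
w_re = [1.0 / (se**2 + tau2) for se in ses]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled ROR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Read consistently with the abstract, a pooled ROR below 1 means trials with the potential bias element yield larger effect estimates than trials without it, i.e., overestimation.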
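
The LOST-IT review (fifth item above) concerns how loss to follow-up can affect treatment-effect estimates. Below is a minimal, hypothetical sketch of that kind of sensitivity analysis: recomputing a trial's risk ratio under different assumptions about participants who were lost. The trial numbers and the worst-case assumption are invented, not taken from the review.

```python
# Hypothetical sensitivity analysis for loss to follow-up; all numbers invented.

def risk_ratio(events_trt, total_trt, events_ctl, total_ctl):
    """Risk ratio of treatment vs. control."""
    return (events_trt / total_trt) / (events_ctl / total_ctl)

# Observed (complete-case) data: events among participants actually followed up
events_trt, followed_trt, lost_trt = 30, 180, 20   # treatment arm
events_ctl, followed_ctl, lost_ctl = 45, 175, 25   # control arm

# Complete-case estimate ignores everyone lost to follow-up
print("complete case:",
      round(risk_ratio(events_trt, followed_trt, events_ctl, followed_ctl), 2))

# Worst-case assumption against the treatment: all lost treatment-arm
# participants had the event, no lost control-arm participant did
print("worst case   :",
      round(risk_ratio(events_trt + lost_trt, followed_trt + lost_trt,
                       events_ctl, followed_ctl + lost_ctl), 2))
# If the conclusion flips between these bounds (here 0.65 vs. 1.11), the
# estimate is sensitive to how participants lost to follow-up are handled.
```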