BMJ (Clinical Research Ed.) Mar 2021
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
Topics: Humans; Medical Writing; Meta-Analysis as Topic; Practice Guidelines as Topic; Quality Control; Research Design; Statistics as Topic; Systematic Reviews as Topic; Terminology as Topic
PubMed: 33782057
DOI: 10.1136/bmj.n71
Journal of Educational Evaluation For... 2021
Review
Appropriate sample size calculation and power analysis have become major issues in research and publication processes. However, calculating sample size and power is complex and requires broad statistical knowledge, personnel with programming skills are in short supply, and commercial programs are often too expensive to use in practice. This review article aimed to explain the basic concepts of sample size calculation and power analysis; the process of sample estimation; and how to calculate sample size using G*Power software (latest ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with 5 statistical examples. The null and alternative hypotheses, effect size, power, alpha (type I error), and type II error should be specified when calculating the sample size or power. G*Power is recommended for sample size and power calculations for various statistical methods (F, t, χ2, z, and exact tests) because it is easy to use and free. The process of sample estimation consists of establishing research goals and hypotheses, choosing appropriate statistical tests, choosing one of the 5 possible power analysis methods, inputting the required variables for analysis, and selecting the "calculate" button. This software is helpful for researchers to estimate the sample size and to conduct power analysis.
Topics: Humans; Research Design; Sample Size; Software
PubMed: 34325496
DOI: 10.3352/jeehp.2021.18.17
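The sample size calculation this review performs in G*Power can be approximated in plain Python. The sketch below is an assumption on my part, not G*Power's method: it uses the standard normal approximation n = 2((z₁₋α/₂ + z₁₋β)/d)² per group for a two-sided two-sample t test, whereas G*Power solves the exact noncentral t problem, so the two can differ by a subject or two.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided two-sample t test.

    Uses the normal approximation n = 2 * ((z_alpha + z_beta) / d) ** 2,
    where d is Cohen's standardized mean difference.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A medium effect (d = 0.5) at alpha = 0.05 and 80% power:
print(n_per_group(0.5))  # 63 per group (the exact t-based answer is slightly larger)
```

Larger effects need fewer subjects, which is why fixing the expected effect size before the study is the critical, and hardest, input.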
Psychiatry Research Jan 2020
Review
Implementation science is focused on maximizing the adoption, appropriate use, and sustainability of effective clinical practices in real world clinical settings. Many implementation science questions can be feasibly answered by fully experimental designs, typically in the form of randomized controlled trials (RCTs). Implementation-focused RCTs, however, usually differ from traditional efficacy- or effectiveness-oriented RCTs on key parameters. Other implementation science questions are more suited to quasi-experimental designs, which are intended to estimate the effect of an intervention in the absence of randomization. These designs include pre-post designs with a non-equivalent control group, interrupted time series (ITS), and stepped wedges, the last of which require all participants to receive the intervention, but in a staggered fashion. In this article we review the use of experimental designs in implementation science, including recent methodological advances for implementation studies. We also review the use of quasi-experimental designs in implementation science, and discuss the strengths and weaknesses of these approaches. This article is therefore meant to be a practical guide for researchers who are interested in selecting the most appropriate study design to answer relevant implementation science questions, and thereby increase the rate at which effective clinical practices are adopted, spread, and sustained.
Topics: Biomedical Research; Control Groups; Humans; Implementation Science; Randomized Controlled Trials as Topic; Research Design
PubMed: 31255320
DOI: 10.1016/j.psychres.2019.06.027
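The stepped-wedge layout described in this abstract is easy to visualize as a cluster-by-period matrix. The helper below is a hypothetical sketch (the function name and the one-cluster-per-step rollout are my assumptions, not from the article): 0 marks control periods, 1 marks intervention periods, and every cluster eventually crosses over.

```python
def stepped_wedge(n_clusters: int) -> list[list[int]]:
    """Build a cluster-by-period matrix for a basic stepped-wedge design.

    Period 0 is an all-control baseline; at each subsequent period one more
    cluster crosses over, so every cluster ends under the intervention.
    """
    n_periods = n_clusters + 1  # baseline period plus one step per cluster
    return [
        [1 if period > cluster else 0 for period in range(n_periods)]
        for cluster in range(n_clusters)
    ]

for row in stepped_wedge(4):
    print(row)
# [0, 1, 1, 1, 1]
# [0, 0, 1, 1, 1]
# [0, 0, 0, 1, 1]
# [0, 0, 0, 0, 1]
```

In practice the order in which clusters cross over is randomized; the staggered matrix is what distinguishes this design from a simple pre-post comparison.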
Value in Health: the Journal of the... Jan 2022
Review
Health economic evaluations are comparative analyses of alternative courses of action in terms of their costs and consequences. The Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement, published in 2013, was created to ensure health economic evaluations are identifiable, interpretable, and useful for decision making. It was intended as guidance to help authors report accurately which health interventions were being compared and in what context, how the evaluation was undertaken, what the findings were, and other details that may aid readers and reviewers in interpretation and use of the study. The new CHEERS 2022 statement replaces previous CHEERS reporting guidance. It reflects the need for guidance that can be more easily applied to all types of health economic evaluation, new methods and developments in the field, as well as the increased role of stakeholder involvement including patients and the public. It is also broadly applicable to any form of intervention intended to improve the health of individuals or the population, whether simple or complex, and without regard to context (such as health care, public health, education, social care, etc). This summary article presents the new CHEERS 2022 28-item checklist and recommendations for each item. The CHEERS 2022 statement is primarily intended for researchers reporting economic evaluations for peer reviewed journals as well as the peer reviewers and editors assessing them for publication. However, we anticipate familiarity with reporting requirements will be useful for analysts when planning studies. It may also be useful for health technology assessment bodies seeking guidance on reporting, as there is an increasing emphasis on transparency in decision making.
Topics: Checklist; Cost-Benefit Analysis; Economics, Medical; Humans; Publishing; Research Design
PubMed: 35031096
DOI: 10.1016/j.jval.2021.11.1351
Psychiatry Research Oct 2019
The traditional research pipeline that encourages a staged approach to moving an intervention from efficacy trials to the real world can take a long time. To address this issue, hybrid effectiveness-implementation designs were codified to promote examination of both effectiveness and implementation outcomes within a study. There are three types of hybrid designs and they vary based on their primary focus and the amount of emphasis on effectiveness versus implementation outcomes. A type 1 hybrid focuses primarily on the effectiveness outcomes of an intervention while exploring the "implementability" of the intervention. A type 2 hybrid has a dual focus on effectiveness and implementation outcomes; these designs allow for the simultaneous testing or piloting of implementation strategies during an effectiveness trial. A type 3 hybrid focuses primarily on implementation outcomes while also collecting effectiveness outcomes as they relate to uptake or fidelity of the intervention. This paper provides an introduction to these designs and describes each of the three types, design considerations, and examples for each.
Topics: Biomedical Research; Clinical Trials as Topic; Humans; Research Design; Treatment Outcome
PubMed: 31434011
DOI: 10.1016/j.psychres.2019.112513
JAMA Jan 2023
Randomized Controlled Trial
Topics: Research Design; Randomized Controlled Trials as Topic
PubMed: 36692577
DOI: 10.1001/jama.2022.24324
Statistical Methods in Medical Research Aug 2019
Binary logistic regression is one of the most frequently applied statistical approaches for developing clinical prediction models. Developers of such models often rely on an Events Per Variable criterion (EPV), notably EPV ≥10, to determine the minimal sample size required and the maximum number of candidate predictors that can be examined. We present an extensive simulation study in which we studied the influence of EPV, events fraction, number of candidate predictors, the correlations and distributions of candidate predictor variables, area under the ROC curve, and predictor effects on out-of-sample predictive performance of prediction models. The out-of-sample performance (calibration, discrimination and probability prediction error) of developed prediction models was studied before and after regression shrinkage and variable selection. The results indicate that EPV does not have a strong relation with metrics of predictive performance, and is not an appropriate criterion for (binary) prediction model development studies. We show that out-of-sample predictive performance can better be approximated by considering the number of predictors, the total sample size and the events fraction. We propose that the development of new sample size criteria for prediction models should be based on these three parameters, and provide suggestions for improving sample size determination.
Topics: Computer Simulation; Humans; Logistic Models; Models, Statistical; Research Design; Sample Size
PubMed: 29966490
DOI: 10.1177/0962280218784726
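The quantities this abstract compares are simple ratios, which a short sketch makes concrete. The function names below are mine, not from the paper: the code computes the conventional EPV alongside the three quantities the authors found more informative for out-of-sample performance (number of candidate predictors, total sample size, and events fraction).

```python
def epv(n_events: int, n_candidate_predictors: int) -> float:
    """Events per variable: the conventional rule of thumb demands EPV >= 10."""
    return n_events / n_candidate_predictors

def design_summary(n_total: int, n_events: int, n_candidate_predictors: int) -> dict:
    """Summarize the three quantities the simulation study found more informative,
    together with the traditional EPV criterion for comparison."""
    return {
        "n_total": n_total,
        "n_candidate_predictors": n_candidate_predictors,
        "events_fraction": n_events / n_total,
        "epv": epv(n_events, n_candidate_predictors),
    }

# A hypothetical development set: 500 patients, 100 events, 12 candidate predictors.
summary = design_summary(500, 100, 12)
print(round(summary["epv"], 2))    # 8.33 -> fails the EPV >= 10 rule of thumb
print(summary["events_fraction"])  # 0.2
```

The paper's point is precisely that the first number, on its own, predicts model performance poorly; the last three should drive sample size planning instead.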
Military Medical Research Feb 2020
Review
Methodological quality (risk of bias) assessment is an important step before a study is used. Accurately judging the study type is therefore the first priority, and choosing a proper tool is also important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, and interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding studies, prediction model impact studies, and prognostic prediction model studies), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypothesis testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can thus distinguish the types of medical studies and choose the appropriate tools. In short, comprehensively mastering the relevant knowledge and carrying out more practice are basic requirements for correctly assessing methodological quality.
Topics: Animals; Bias; Humans; Psychometrics; Reproducibility of Results; Research; Research Design
PubMed: 32111253
DOI: 10.1186/s40779-020-00238-8
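The review's core message (first judge the study type, then pick a matching tool) amounts to a lookup. The sketch below illustrates that idea; the tool names come from general knowledge of commonly used instruments, not from the review's own recommendations, and several alternatives exist for each design.

```python
# Study type -> one commonly used quality-assessment tool (illustrative only).
QUALITY_TOOLS: dict[str, str] = {
    "randomized controlled trial": "Cochrane RoB 2",
    "non-randomized interventional study": "ROBINS-I",
    "cohort study": "Newcastle-Ottawa Scale",
    "case-control study": "Newcastle-Ottawa Scale",
    "diagnostic study": "QUADAS-2",
    "animal study": "SYRCLE's risk of bias tool",
    "systematic review": "AMSTAR 2",
    "clinical practice guideline": "AGREE II",
}

def suggest_tool(study_type: str) -> str:
    """Return a candidate tool for a study type, defaulting to manual appraisal."""
    return QUALITY_TOOLS.get(study_type.strip().lower(),
                             "no standard tool; appraise manually")

print(suggest_tool("Cohort study"))      # Newcastle-Ottawa Scale
print(suggest_tool("ecological study"))  # no standard tool; appraise manually
```

Misclassifying the study type sends you to the wrong row of the table, which is exactly why the review treats judging study type as the first priority.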
Journal of Clinical Epidemiology Feb 2021
OBJECTIVES
To develop methods guidance to support the conduct of rapid reviews (RRs) produced within Cochrane and beyond, in response to requests for timely evidence syntheses for decision-making purposes including urgent health issues of high priority.
STUDY DESIGN AND SETTING
Interim recommendations were informed by a scoping review of the underlying evidence, primary methods studies conducted, and a survey sent to 119 representatives from 20 Cochrane entities, who were asked to rate and rank RR methods across stages of review conduct. Discussions among those with expertise in RR methods further informed the list of recommendations with accompanying rationales provided.
RESULTS
Based on survey results from 63 respondents (53% response rate), 26 RR methods recommendations are presented that either reached a high or moderate level of agreement or, in the absence of such agreement, scored highest. Where possible, we highlight how the recommendations align with Cochrane methods guidance for systematic reviews.
CONCLUSION
The Cochrane Rapid Reviews Methods Group offers new, interim guidance to support the conduct of RRs. Because evidence is currently lacking for some of the methodological shortcuts that RRs take, best practice remains provisional, and this guidance will need to be updated as additional abbreviated methods are evaluated.
Topics: Guidelines as Topic; Humans; Research Design; Research Report; Surveys and Questionnaires; Systematic Reviews as Topic
PubMed: 33068715
DOI: 10.1016/j.jclinepi.2020.10.007
Korean Journal of Anesthesiology Apr 2020
Review
A properly determined sample size is one of the important factors in scientific and persuasive research. A sample size that can guarantee both a clinically significant difference and adequate power for the phenomenon of interest, without imposing an excessive financial or medical burden, will always be the investigator's concern. In this paper, we review the essential factors for sample size calculation. We describe the primary endpoints that are the main concern of the study and the basis for calculating sample size, the statistics used to analyze the primary endpoints, type I error and power, and the effect size and its rationale. We also include a method for calculating an adjusted sample size that accounts for the dropout that inevitably occurs during research. Finally, examples of sample size calculations that are appropriately and incorrectly described in published papers are presented with explanations.
Topics: Biometry; Humans; Patient Dropouts; Research Design; Sample Size
PubMed: 32229812
DOI: 10.4097/kja.19497
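The dropout adjustment this review describes is a one-line calculation: inflate the computed sample size so that the expected number of completers still meets the target. A minimal sketch, with the function name being my own:

```python
from math import ceil

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a computed sample size so that, after the expected dropout,
    the number of completers still meets the required n."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return ceil(n_required / (1 - dropout_rate))

# 64 subjects needed per group, 20% expected dropout:
print(adjust_for_dropout(64, 0.20))  # 80: enrolling 80 leaves ~64 completers
```

Dividing by (1 - dropout rate), rather than multiplying n by (1 + dropout rate), is the correct direction: 20% attrition from 80 enrollees leaves 64, whereas 64 × 1.2 ≈ 77 would fall short.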