Trials, Feb 2017
Review
BACKGROUND
Randomized controlled trials (RCTs) form the foundation of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the evidence used to justify conducting an RCT has not been studied.
METHODS
We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty, except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting the trial, such as the average size of the studies cited, the number of studies cited, and the types of studies cited.
RESULTS
For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas for Psychiatry and General Surgery they were around 500 patients. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating even greater variation within each specialty. Standardizing by the sample size of the RCT explains some, but not all, of the discrepancies between and within specialties. On average, Ophthalmology papers cited the most review articles (2.96 per RCT), compared with fewer than 1.5 per RCT for all other specialties.
CONCLUSIONS
The justifications for conducting RCTs vary widely both within and between specialties and are not standardized.
Topics: Evidence-Based Medicine; Humans; Medicine; Patient Selection; Randomized Controlled Trials as Topic; Research Design; Sample Size; Specialization
PubMed: 28148278
DOI: 10.1186/s13063-017-1804-z

Journal of Educational Evaluation for Health Professions, 2020
Review
PURPOSE
This paper is a critical review of the descriptive phenomenological methodology in Korean nursing research. We propose constructive suggestions for the improvement of descriptive phenomenological methodology in light of Husserl's phenomenological approaches.
METHODS
Using the keywords 'phenomenology,' 'experience,' and 'nursing,' we identified and analyzed 64 Korean empirical phenomenological studies (selected from 282 studies) published in 14 Korean nursing journals from 2005 to 2018. PubMed and the Korea Citation Index were used to identify the studies.
RESULTS
Our analysis shows that all the reviewed articles used Giorgi's or Colaizzi's scientific phenomenological methodology, without critical attention to Husserl's philosophical phenomenological principles.
CONCLUSIONS
The use of scientific phenomenology in nursing research, which originated in North America, has become a global phenomenon, and Korean phenomenological nursing research has faithfully followed this scholarly trend. This paper argues that greater integration of Husserlian phenomenological principles, such as participant-centered bracketing and eidetic reduction, into scientific phenomenological methodology in nursing research is needed to ensure that scientific phenomenology lives up to its promise as a research methodology.
Topics: Empirical Research; Humans; Nursing Research; Philosophy; Qualitative Research; Republic of Korea; Research Design
PubMed: 32311867
DOI: 10.3352/jeehp.2020.17.13

NeuroImage, Aug 2023
Cognitive neuroscientists have been grappling with two related experimental design problems. First, the complexity of neuroimaging data (e.g. often hundreds of thousands of correlated measurements) and analysis pipelines demands bespoke, non-parametric statistical tests for valid inference, and these tests often lack an agreed-upon method for performing a priori power analyses. Thus, sample size determination for neuroimaging studies is often arbitrary or inferred from other putatively but questionably similar studies, which can result in underpowered designs - undermining the efficacy of neuroimaging research. Second, when meta-analyses estimate the sample sizes required to obtain reasonable statistical power, estimated sample sizes can be prohibitively large given the resource constraints of many labs. We propose the use of sequential analyses to partially address both of these problems. Sequential study designs - in which the data is analyzed at interim points during data collection and data collection can be stopped if the planned test statistic satisfies a stopping rule specified a priori - are common in the clinical trial literature, due to the efficiency gains they afford over fixed-sample designs. However, the corrections used to control false positive rates in existing approaches to sequential testing rely on parametric assumptions that are often violated in neuroimaging settings. We introduce a general permutation scheme that allows sequential designs to be used with arbitrary test statistics. By simulation, we show that this scheme controls the false positive rate across multiple interim analyses. Then, performing power analyses for seven evoked response effects seen in the EEG literature, we show that this sequential analysis approach can substantially outperform fixed-sample approaches (i.e. require fewer subjects, on average, to detect a true effect) when study designs are sufficiently well-powered. To facilitate the adoption of this methodology, we provide a Python package "niseq" with sequential implementations of common tests used for neuroimaging: cluster-based permutation tests, threshold-free cluster enhancement, t-max, F-max, and the network-based statistic with tutorial examples using EEG and fMRI data.
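To make the logic of a sequential permutation design concrete, the following is a minimal sketch, not the authors' scheme and not the niseq API: it pairs a nonparametric max-t sign-flip permutation test with a conservative Bonferroni split of alpha across the planned interim looks. The data layout, batch structure, and parameters are all assumptions for illustration; the paper's permutation scheme is more powerful than this union-bound correction.

# Minimal illustrative sketch (assumed data layout; not the niseq API).
# FWER is controlled conservatively by splitting alpha across the planned
# interim looks; a sign-flip max-t permutation test is run at each look.
import numpy as np

def max_t_sign_flip_p(data, n_perm=2000, seed=0):
    # data: (n_subjects, n_channels) array of, e.g., condition differences
    # in evoked-response amplitude. Valid under the sign-symmetry null.
    rng = np.random.default_rng(seed)
    n = data.shape[0]

    def max_abs_t(x):
        se = x.std(axis=0, ddof=1) / np.sqrt(n)
        return np.max(np.abs(x.mean(axis=0) / se))

    observed = max_abs_t(data)
    null = np.array([max_abs_t(data * rng.choice([-1.0, 1.0], size=(n, 1)))
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

def sequential_test(sample_batches, alpha=0.05):
    # Analyze after each batch of subjects arrives; stop as soon as the
    # look-wise threshold (alpha / number of planned looks) is crossed.
    k = len(sample_batches)
    collected = []
    for i, batch in enumerate(sample_batches, start=1):
        collected.append(batch)
        p = max_t_sign_flip_p(np.vstack(collected))
        if p < alpha / k:
            return f"reject H0 at look {i} (early stop)"
    return "fail to reject H0"

Because each look spends only a Bonferroni share of alpha, the overall false positive rate is bounded by alpha across all interim analyses; the cost is conservatism relative to the permutation-based correction the paper develops.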
Topics: Humans; Cognitive Neuroscience; Research Design; Sample Size; Magnetic Resonance Imaging; Neuroimaging
PubMed: 37348624
DOI: 10.1016/j.neuroimage.2023.120232

Medical Science Monitor: International Medical Journal of Experimental and Clinical Research, Jan 2011
Review
Obtaining and critically appraising evidence is clearly not enough to make better decisions in clinical care. The evidence must be linked to the clinician's expertise, the patient's individual circumstances (including values and preferences), and the clinical context and setting. We propose critical thinking and decision-making as the tools for making that link. Critical thinking is also called for in medical research and medical writing, especially where pre-canned methodologies are not enough. It is likewise involved in our exchanges of ideas at floor rounds, grand rounds, and case discussions; in our communications with patients and lay stakeholders in health care; and in our writing of research papers, grant applications, and grant reviews. Critical thinking is a learned process that benefits from teaching and guided practice, like any discipline in the health sciences. Training in critical thinking should be a part of, or a prerequisite for, the medical curriculum.
Topics: Communication; Decision Making; Education, Medical; Evidence-Based Medicine; Research Design; Thinking
PubMed: 21169920
DOI: 10.12659/msm.881321

Current Opinion in Rheumatology, May 2013
Review
PURPOSE OF REVIEW
To provide an overview of recently published articles describing or applying newer methods for evaluating comparative effectiveness research (CER) in rheumatoid arthritis (RA).
RECENT FINDINGS
Historically, clinical trials in RA have compared newer therapies against placebo. Newer trials designed to increase the relevance of trial results to real-world settings include head-to-head comparisons, some of which incorporate noninferiority, factorial, and crossover designs. Extensions of traditional meta-analysis, such as network meta-analysis, can combine direct and indirect evidence and compare multiple treatments with one another. Observational data used to support CER include disease registries, administrative claims data, and electronic medical records. Pooling and linking these data sources and applying newer epidemiologic methods to their analysis can provide more valid inferences regarding optimal treatment regimens for RA.
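To show how indirect evidence enters such comparisons, here is a minimal sketch of the Bucher adjusted indirect comparison, the simplest building block behind network meta-analysis; the drug labels and effect estimates are hypothetical.

# Bucher adjusted indirect comparison (hypothetical numbers): given log
# odds ratios for drug A vs placebo and drug B vs placebo, the indirect
# A-vs-B effect is their difference, and the variances add.
import math

d_A, se_A = 0.80, 0.20   # hypothetical log OR, drug A vs placebo
d_B, se_B = 0.55, 0.25   # hypothetical log OR, drug B vs placebo

d_AB = d_A - d_B                       # indirect A-vs-B effect
se_AB = math.sqrt(se_A**2 + se_B**2)   # variances of independent trials add
lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"indirect log OR (A vs B): {d_AB:.2f}, "
      f"95% CI {lo:.2f} to {hi:.2f} (OR {math.exp(d_AB):.2f})")

Full network meta-analysis generalizes this idea, weighting and combining every direct and indirect path through the evidence network while checking that the two kinds of evidence are consistent.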
SUMMARY
CER methods in RA include head-to-head clinical trials, advanced techniques to summarize and aggregate data across studies, approaches that enrich the data available in observational settings, and enhanced methods of analysis. Efforts to continue to apply and improve these methodologies will address key needs of clinicians, patients, and health policy decision-makers to generate evidence regarding real-world risks and benefits.
Topics: Antirheumatic Agents; Arthritis, Rheumatoid; Comparative Effectiveness Research; Humans; Meta-Analysis as Topic; Randomized Controlled Trials as Topic; Research Design
PubMed: 23508131
DOI: 10.1097/BOR.0b013e32835fd8c0

Trials, Dec 2023
Review
BACKGROUND
Retention of participants is important to ensure that trial results are valid and reliable. The SPIRIT guidelines (item 18b) require that "plans to promote participant retention and complete follow-up, including list of any outcome data to be collected for participants who discontinue or deviate from intervention protocols" be included in trial protocols. It is unknown how often protocols report this retention information. The purpose of our scoping review is to establish if, and how, trial teams report plans for retention during the design stage of a trial.
MATERIALS AND METHODS
We conducted a scoping review with searches in key databases (PubMed, Scopus, EMBASE, CINAHL (EBSCO), and Web of Science, 2014 to 2019 inclusive) to identify randomised controlled trial protocols. We produced descriptive statistics on the characteristics of the trial protocols and on adherence to SPIRIT item 18b, and conducted a narrative synthesis of the reported retention strategies.
RESULTS
Eight hundred and twenty-four protocols met our inclusion criteria. RCT protocols (n = 722) and pilot and feasibility trial protocols (n = 102) reported using the SPIRIT guidelines during protocol development 35% and 34.3% of the time, respectively. Of these protocols, only 9.5% and 11.4%, respectively, reported all aspects of SPIRIT item 18b ("plans to promote participant retention and to complete follow-up, including list of any outcome data for participants who discontinue or deviate from intervention protocols"). Of the RCT protocols, 36.8% included proactive "plans to promote participant retention", whether or not they reported using the SPIRIT guidelines. Most of these protocols planned "combined strategies" (48.1%); the joint most commonly reported combinations were "reminders and data collection location and method" and "reminders and monetary incentives". The most popular individual retention strategy was "reminders" (14.7%), followed by "monetary incentives - conditional" (10.2%). Of the pilot and feasibility protocols, 40.2% included proactive "plans to promote participant retention", with "combined strategies" again most frequent (46.3%); "monetary incentives - conditional" (22%) was the most popular individual retention strategy.
CONCLUSION
There is a lack of reporting of plans to promote participant retention in trial protocols. Proactive planning of retention strategies during the trial design stage is preferable to the reactive implementation of retention strategies. Prospective retention planning and clear communication in protocols may inform a more suitable choice, costing, and implementation of retention strategies and improve transparency in trial conduct.
Topics: Humans; Randomized Controlled Trials as Topic; Retention in Care; Research Design
PubMed: 38049833
DOI: 10.1186/s13063-023-07775-2

Experimental Physiology, Mar 2022
Review
Exercise physiology and sport science have traditionally used the null hypothesis of no difference to make decisions about experimental interventions. In this article, we aim to review the statistical approaches typically used by exercise physiologists and sport scientists for the design and analysis of experimental interventions, and to highlight the importance of including equivalence and non-inferiority studies, which address research questions beyond simply deciding whether an effect is present. We first briefly describe the most common approaches, along with their rationale, for investigating the effects of different interventions. We then discuss the main steps involved in the design and analysis of equivalence and non-inferiority studies, which are commonly performed in other research fields, with worked examples from exercise physiology and sport science scenarios. Finally, we provide recommendations for exercise physiologists and sport scientists who would like to apply these approaches in future research. We hope this work will promote the correct use of equivalence and non-inferiority designs in exercise physiology and sport science whenever the research context, conditions, applications, researchers' interests, or reasonable beliefs justify these approaches.
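As a concrete illustration of the analysis step, the following is a minimal sketch of the two one-sided tests (TOST) procedure, the standard frequentist analysis for equivalence; the scenario, the simulated group data, and the +/- 2.0 ml/kg/min equivalence margin are hypothetical.

# Minimal TOST sketch for a two-group equivalence question (hypothetical
# scenario: change in VO2max, ml/kg/min, under two training protocols,
# with an equivalence margin of +/- 2.0 ml/kg/min).
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high):
    # Two one-sided Welch t-tests against the equivalence bounds.
    # Equivalence is declared at level alpha if BOTH one-sided p-values
    # fall below alpha (for alpha = 0.05, equivalently the 90% CI for the
    # difference lies entirely inside [low, high]).
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return diff, max(p_lower, p_upper)

rng = np.random.default_rng(1)
group_a = rng.normal(4.0, 3.0, size=30)  # hypothetical VO2max changes
group_b = rng.normal(3.6, 3.0, size=30)
diff, p = tost_two_sample(group_a, group_b, low=-2.0, high=2.0)
print(f"mean difference = {diff:.2f}; TOST p = {p:.3f} "
      "(p < 0.05 -> equivalent within +/- 2.0 ml/kg/min)")

Note the reversal of roles relative to a standard test: here the null hypothesis is that the groups differ by at least the margin, so a small p-value supports equivalence.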
Topics: Exercise; Humans; Research Design; Sports
PubMed: 35041233
DOI: 10.1113/EP090171

Health Expectations: An International Journal of Public Participation in Health Care and Health Policy, Dec 2018
Review
CONTEXT
Engaging youth as partners in academic research projects offers many benefits for the youth and the research team. However, it is not always clear to researchers how to engage youth effectively to optimize the experience and maximize the impact.
OBJECTIVE
This article provides practical recommendations to help researchers engage youth in meaningful ways in academic research, from initial planning to project completion. These general recommendations can be applied to all types of research methodologies, from community action-based research to highly technical designs.
RESULTS
Youth can and do provide valuable input into academic research projects when their contributions are authentically valued, their roles are clearly defined, communication is clear, and their needs are taken into account. Researchers should be aware of the risk of tokenizing the youth they engage and work proactively to take their feedback into account in a genuine way. Some adaptations to regular research procedures are recommended to improve the success of the youth engagement initiative.
CONCLUSIONS
By following these guidelines, academic researchers can make youth engagement a key tenet of their youth-oriented research initiatives, increasing the feasibility, youth-friendliness, and ecological validity of their work and ultimately improving the value and impact of the results their research produces.
Topics: Adolescent; Communication; Community-Based Participatory Research; Cooperative Behavior; Humans; Mental Health; Program Development; Research Design; Research Personnel
PubMed: 29858526
DOI: 10.1111/hex.12795

The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, Nov 2022
Review
This review identifies frequent design and analysis errors in aging and senescence research and discusses best practices in study design, statistical methods, analyses, and interpretation. Recommendations are offered for how to avoid these problems. The following issues are addressed: (a) errors in randomization, (b) errors related to testing within-group instead of between-group differences, (c) failing to account for clustering, (d) failing to consider interference effects, (e) standardizing metrics of effect size, (f) maximum life-span testing, (g) testing for effects beyond the mean, (h) tests for power and sample size, (i) compression of morbidity versus survival curve squaring, and (j) other hot topics, including modeling high-dimensional data and complex relationships and assessing model assumptions and biases. We hope that bringing increased awareness of these topics to the scientific community will emphasize the importance of employing sound statistical practices in all aspects of aging and senescence research.
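As a concrete illustration of error (b), the sketch below contrasts the flawed approach of testing within-group change separately in each arm with a direct between-group test of the change scores; the data and effect sizes are simulated and hypothetical.

# Error (b) illustrated: "significant in treated, not in control" does not
# test whether the arms differ. Data are hypothetical pre/post scores
# (e.g., a functional measure in an aging-intervention study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 20
pre_t = rng.normal(50, 10, n)
post_t = pre_t + rng.normal(3.0, 6.0, n)   # treated arm: modest change
pre_c = rng.normal(50, 10, n)
post_c = pre_c + rng.normal(2.0, 6.0, n)   # control arm: similar change

# FLAWED: separate within-group paired tests. One arm may cross p < 0.05
# while the other does not, but the difference between "significant" and
# "not significant" is not itself a test of the group difference.
p_treated = stats.ttest_rel(post_t, pre_t).pvalue
p_control = stats.ttest_rel(post_c, pre_c).pvalue

# CORRECT: test the between-group difference directly, e.g., by comparing
# change scores between arms (ANCOVA on post-scores adjusting for
# pre-scores is a common, often more efficient, alternative).
p_between = stats.ttest_ind(post_t - pre_t, post_c - pre_c).pvalue

print(f"within treated p = {p_treated:.3f}, within control p = {p_control:.3f}")
print(f"between-group test of change p = {p_between:.3f}")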
Topics: Humans; Data Interpretation, Statistical; Research Design; Sample Size; Bias; Aging
PubMed: 34950945
DOI: 10.1093/gerona/glab382

Statistical Methods in Medical Research, Feb 2015
Review
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of the assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
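To ground the standard methodology the review starts from, here is a minimal sketch of DerSimonian-Laird random-effects pooling applied to a technical-performance metric; the per-study estimates (log within-subject coefficients of variation) and variances are hypothetical, and the review's point is precisely that alternatives can outperform this estimator when studies are small.

# DerSimonian-Laird random-effects pooling of a repeatability metric
# across test-retest studies (hypothetical numbers: log within-subject
# coefficient of variation, e.g., of an FDG-PET uptake measure).
import numpy as np

y = np.array([-2.30, -2.10, -2.55, -2.40, -2.20])  # per-study log wCV
v = np.array([0.040, 0.090, 0.060, 0.120, 0.050])  # within-study variances

# Fixed-effect weights and Cochran's Q for heterogeneity.
w = 1.0 / v
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
k = len(y)

# DerSimonian-Laird between-study variance (truncated at zero).
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and 95% CI, back-transformed to wCV.
w_re = 1.0 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
print(f"pooled wCV = {np.exp(y_re):.3f} "
      f"(95% CI {np.exp(lo):.3f} to {np.exp(hi):.3f}), tau^2 = {tau2:.3f}")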
Topics: Biomarkers; Diagnostic Imaging; Guidelines as Topic; Humans; Meta-Analysis as Topic; Reproducibility of Results; Research Design; Statistics as Topic
PubMed: 24872353
DOI: 10.1177/0962280214537394