World Neurosurgery, May 2022 (Review)
BACKGROUND
Stepped wedge cluster randomized trials enable rigorous evaluations of health intervention programs in pragmatic settings. In the present study, we aimed to update neurosurgeon scientists on the design of stepped wedge randomized trials.
METHODS
We present an overview of recent methodological developments for stepped wedge designs and an update on the newer associated methodological tools to aid future study designs.
RESULTS
We defined the stepped wedge trial design and reviewed the indications for the design in depth. In addition, key considerations, including mainstream methods of analysis and sample size determination, were discussed.
CONCLUSIONS
Stepped wedge designs can be attractive for studying intervention programs that aim to improve the delivery of patient care, especially when examining a small number of heterogeneous clusters.
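The rollout pattern that defines a stepped wedge design can be sketched in a few lines of code. The cluster and step counts below are illustrative assumptions, not values taken from the review, and a real trial would randomize the order in which clusters cross over rather than assign it cyclically.

```python
def stepped_wedge_schedule(n_clusters, n_steps):
    """Return a matrix (clusters x periods) of 0 = control, 1 = intervention.

    Periods = n_steps + 1: one baseline period, then one crossover per step.
    Every cluster starts under control and, once switched, stays on the
    intervention, so by the final period all clusters are exposed.
    Clusters are assigned evenly here (in practice, randomly) to steps.
    """
    n_periods = n_steps + 1
    schedule = []
    for c in range(n_clusters):
        crossover = (c % n_steps) + 1  # cluster c switches at this period
        schedule.append([1 if t >= crossover else 0 for t in range(n_periods)])
    return schedule

# Example: 6 clusters rolled out over 3 steps (4 measurement periods).
for row in stepped_wedge_schedule(6, 3):
    print(row)
```

The staircase of 1s in the printed matrix is the "wedge": the design gains efficiency from within-cluster before/after comparisons, which is why it suits a small number of heterogeneous clusters.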
Topics: Humans; Randomized Controlled Trials as Topic; Research Design; Sample Size
PubMed: 35505551
DOI: 10.1016/j.wneu.2021.10.136
BMC Medical Research Methodology, Apr 2022
BACKGROUND
The sample size calculation in a confirmatory diagnostic accuracy study is performed for co-primary endpoints because sensitivity and specificity are considered simultaneously. The initial sample size calculation in an unpaired or paired diagnostic study is based on assumptions about, among other factors, the prevalence of the disease and, in the paired design, the proportion of discordant test results between the experimental and the comparator test. The choice of the power for the individual endpoints impacts the sample size and overall power. Uncertain assumptions about the nuisance parameters can additionally affect the sample size.
METHODS
We develop an optimal sample size calculation considering co-primary endpoints to avoid an overpowered study in the unpaired and paired design. To adjust assumptions about the nuisance parameters during the study period, we introduce a blinded adaptive design for sample size re-estimation for the unpaired and the paired study design. A simulation study compares the adaptive design to the fixed design. For the paired design, the new approach is compared to an existing approach using an example study.
RESULTS
Due to blinding, the adaptive design does not inflate type I error rates. The adaptive design reaches the target power and re-estimates nuisance parameters without any relevant bias. Compared to the existing approach, the proposed methods lead to a smaller sample size.
CONCLUSIONS
We recommend the application of the optimal sample size calculation and a blinded adaptive design in confirmatory diagnostic accuracy studies. Together, they compensate for inefficiencies in the sample size calculation and help the study reach its aim.
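To illustrate why the co-primary endpoints and the nuisance parameters both drive the sample size, the sketch below computes a fixed-design total sample size from standard normal approximations for two single-proportion tests. It is a simplified stand-in, not the authors' optimal or blinded adaptive method, and every numeric input (target sensitivity and specificity, thresholds, prevalence) is an assumption for illustration.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_single_proportion(p1, p0, alpha, power):
    """Sample size for a one-sided test that a proportion exceeds p0,
    assuming a true value p1, via the standard normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    n = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2 \
        / (p1 - p0) ** 2
    return ceil(n)

def n_diagnostic_coprimary(sens1, sens0, spec1, spec0, prevalence,
                           alpha=0.025, power_each=0.9):
    """Total study size so both co-primary endpoints are adequately powered.

    Sensitivity is estimated only in diseased subjects and specificity only
    in non-diseased ones, so each per-endpoint size is inflated by the
    assumed prevalence. power_each is the per-endpoint power; overall power
    is lower because both endpoints must succeed simultaneously.
    """
    n_sens = n_single_proportion(sens1, sens0, alpha, power_each)
    n_spec = n_single_proportion(spec1, spec0, alpha, power_each)
    return max(ceil(n_sens / prevalence), ceil(n_spec / (1 - prevalence)))

# With a low assumed prevalence, sensitivity dominates the total size.
print(n_diagnostic_coprimary(0.90, 0.80, 0.90, 0.80, prevalence=0.3))
```

A mis-specified prevalence changes the total substantially, which is exactly the kind of nuisance-parameter uncertainty that motivates blinded re-estimation mid-study.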
Topics: Computer Simulation; Humans; Models, Statistical; Prevalence; Research Design; Sample Size; Sensitivity and Specificity
PubMed: 35439947
DOI: 10.1186/s12874-022-01564-2
Journal of Biopharmaceutical Statistics, 2016 (Meta-Analysis Review)
Important objectives in the development of stratified medicines include the identification and confirmation of subgroups of patients with a beneficial treatment effect and a positive benefit-risk balance. We report the results of a literature review on methodological approaches to the design and analysis of clinical trials investigating a potential heterogeneity of treatment effects across subgroups. The identified approaches are classified based on certain characteristics of the proposed trial designs and analysis methods. We distinguish between exploratory and confirmatory subgroup analysis; frequentist, Bayesian, and decision-theoretic approaches; and, last, fixed-sample, group-sequential, and adaptive designs, and we illustrate the available trial designs and analysis strategies with published case studies.
Topics: Biomarkers; Clinical Trials as Topic; Humans; Precision Medicine; Research Design
PubMed: 26378339
DOI: 10.1080/10543406.2015.1092034
Orphanet Journal of Rare Diseases, Oct 2018 (Review)
When only a limited number of patients is available, as in a rare disease, clinical trials in these small populations present several challenges, including statistical issues. This led to an EU FP7 call for proposals in 2013. One of the three projects funded was the Innovative Methodology for Small Populations Research (InSPiRe) project. This paper summarizes the main results of the project, which was completed in 2017. The InSPiRe project has led to the development of novel statistical methodology for clinical trials in small populations in four areas. We have explored new decision-making methods for small-population clinical trials, using a Bayesian decision-theoretic framework to compare costs with potential benefits. We have developed approaches for targeted treatment trials, enabling simultaneous identification of subgroups and confirmation of the treatment effect for these patients. We have worked on early-phase clinical trial design and on extrapolation from adult to pediatric studies, developing methods that enable the use of pharmacokinetic and pharmacodynamic data. Finally, we have developed improved robust meta-analysis methods for a small number of trials to support the planning, analysis, and interpretation of a trial, as well as to enable extrapolation between patient groups. In addition to scientific publications, we have contributed to regulatory guidance and produced free software to facilitate implementation of the novel methods.
Topics: Humans; Rare Diseases; Research Design
PubMed: 30359266
DOI: 10.1186/s13023-018-0919-y
PLoS One, 2022
Software Test Case Prioritization (TCP) is an effective approach in regression testing for tackling time and budget constraints. The major benefit of TCP is saving time by running the most important test cases first. Existing TCP techniques can be categorized as value-neutral and value-based approaches. Value-based techniques consider the cost of test cases and the severity of faults, whereas value-neutral techniques do not, assuming instead that all test cases have equal cost and all software faults have equal severity. Value-neutral techniques dominate the literature, but this assumption rarely holds in practice, so value-neutral TCP techniques are prone to produce unsatisfactory results. Closing this research gap requires a paradigm shift from value-neutral to value-based TCP techniques. Currently, very little work has been done in a value-based fashion, and to the best of the authors' knowledge, no comprehensive review of value-based, cost-cognizant TCP techniques is available in the literature. To address this problem, a systematic literature review (SLR) of value-based, cost-cognizant TCP techniques is presented in this paper. The core objective of this study is to consolidate the knowledge on value-based, cost-cognizant TCP techniques and to highlight open research problems in this domain. Initially, 165 papers were retrieved from prominent research repositories; of these, 21 were selected using defined inclusion/exclusion criteria and quality-assessment procedures. The research questions are answered through a thorough analysis of the selected papers, comparing their contributions in terms of the algorithm used, the performance evaluation metric, and the results-validation method. Twelve of the 21 papers used an algorithm in their technique, while 9 did not; the Particle Swarm Optimization (PSO) algorithm is the most commonly used. Four results-validation methods are used (empirical study, experiment, case study, and industrial case study), with the experiment method being the most common. Six performance evaluation metrics are used, with the cost-cognizant APFDc metric being the most common. This SLR shows that value orientation and cost cognition are vital for the TCP process to achieve its intended goals, and that the domain holds great research potential.
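The APFDc metric mentioned above is the cost-cognizant variant of APFD: it weights each fault's position of first detection by test cost and fault severity. A minimal sketch, with a made-up two-test example, might look like this (it assumes every fault is detected by at least one test in the suite).

```python
def apfd_c(order, costs, fault_matrix, severities):
    """Cost-cognizant Average Percentage of Faults Detected (APFDc).

    order        : list of test indices in prioritized execution order
    costs        : costs[i] = execution cost of test i
    fault_matrix : fault_matrix[i] = set of faults revealed by test i
    severities   : severities[f] = severity weight of fault f
    """
    total_cost = sum(costs)
    total_sev = sum(severities.values())
    numerator = 0.0
    for fault, sev in severities.items():
        # position in the prioritized order of the first test detecting it
        first = next(k for k, t in enumerate(order) if fault in fault_matrix[t])
        # cost of all tests from the detecting one onward, minus half the
        # detecting test's own cost (the standard APFDc correction)
        tail = sum(costs[t] for t in order[first:]) - 0.5 * costs[order[first]]
        numerator += sev * tail
    return numerator / (total_cost * total_sev)

# Two equal-cost tests; test 0 reveals both faults, test 1 reveals none,
# so running test 0 first scores strictly higher.
fault_matrix = {0: {"A", "B"}, 1: set()}
severities = {"A": 1.0, "B": 1.0}
print(apfd_c([0, 1], [1.0, 1.0], fault_matrix, severities))
print(apfd_c([1, 0], [1.0, 1.0], fault_matrix, severities))
```

With all costs and severities equal, this reduces to the ordinary APFD, which is exactly the value-neutral simplification the review argues against.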
Topics: Algorithms; Research Design; Software
PubMed: 35580089
DOI: 10.1371/journal.pone.0264972
Journal of Clinical Epidemiology, Oct 2021 (Review)
OBJECTIVES
To describe the prevalence of risks of bias in cluster-randomized trials of individual-level interventions, according to the Cochrane Risk of Bias tool.
STUDY DESIGN AND SETTING
Review undertaken in duplicate of a random sample of 40 primary reports of cluster-randomized trials of individual-level interventions.
RESULTS
The most commonly reported reasons for adopting cluster randomization were the need to avoid contamination (17, 42.5%) and practical considerations (14, 35%). Of the 40 trials, all but one were assessed as being at risk of bias. A majority (27, 67.5%) were assessed as at risk due to the timing of identification and recruitment of participants; many (21, 52.5%) due to an apparent lack of adequate allocation concealment; and many (22, 55%) due to selectively reported results, arising from a mixture of reasons including a lack of documentation of the primary outcome. Other risks occurred infrequently.
CONCLUSION
Many cluster-randomized trials evaluating individual-level interventions appear to be at risk of bias, mostly due to identification and recruitment biases. We recommend that investigators carefully consider the need for cluster randomization; follow recommended procedures to mitigate risks of identification and recruitment bias; and adhere to good reporting practices including clear documentation of primary outcome and allocation concealment methods.
Topics: Biomedical Research; Guidelines as Topic; Humans; Publication Bias; Randomized Controlled Trials as Topic; Research Design; Research Report
PubMed: 34197941
DOI: 10.1016/j.jclinepi.2021.06.021
Environmental Science & Technology, Mar 2014 (Review)
We have become progressively more concerned about the quality of some published ecotoxicology research. Others have also expressed concern. It is not uncommon for basic, but extremely important, factors to apparently be ignored. For example, exposure concentrations in laboratory experiments are sometimes not measured, and hence there is no evidence that the test organisms were actually exposed to the test substance, let alone at the stated concentrations. To try to improve the quality of ecotoxicology research, we suggest 12 basic principles that should be considered, not at the point of publication of the results, but during the experimental design. These principles range from carefully considering essential aspects of experimental design through to accurately defining the exposure, as well as unbiased analysis and reporting of the results. Although not all principles will apply to all studies, we offer these principles in the hope that they will improve the quality of the science that is available to regulators. Science is an evidence-based discipline and it is important that we and the regulators can trust the evidence presented to us. Significant resources often have to be devoted to refuting the results of poor research when those resources could be utilized more effectively.
Topics: Ecotoxicology; Research Design
PubMed: 24512103
DOI: 10.1021/es4047507
American Journal of Biological..., Nov 2022
OBJECTIVES
Previous research has shown that while missing data are common in bioarchaeological studies, they are seldom handled using statistically rigorous methods. The primary objective of this article is to evaluate the ability of imputation to manage missing data and encourage the use of advanced statistical methods in bioarchaeology and paleopathology. An overview of missing data management in biological anthropology is provided, followed by a test of imputation and deletion methods for handling missing data.
MATERIALS AND METHODS
Missing data were simulated on complete datasets of ordinal (n = 287) and continuous (n = 369) bioarchaeological data. Missing values were imputed using five imputation methods (mean, predictive mean matching, random forest, expectation maximization, and stochastic regression), and the success of each at recovering the parameters of the original dataset was compared with that of pairwise and listwise deletion.
RESULTS
In all instances, listwise deletion was least successful at approximating the original parameters. Imputation of continuous data was more effective than imputation of ordinal data. Overall, no single method performed best, and the amount of missing data proved a stronger predictor of imputation success than the choice of method.
DISCUSSION
These findings support the use of imputation methods over deletion for handling missing bioarchaeological and paleopathology data, especially when the data are continuous. Whereas deletion methods reduce sample size, imputation maintains sample size, improving statistical power and preventing bias from being introduced into the dataset.
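The trade-off described above, deletion shrinking the sample while imputation preserves it, can be illustrated with the simplest of the five methods (mean imputation) on simulated continuous data. The dataset and the 20% MCAR missingness rate below are illustrative assumptions, not the study's actual data.

```python
import random
from statistics import mean

def listwise_delete(rows):
    """Drop every case that has at least one missing (None) value."""
    return [r for r in rows if all(v is not None for v in r)]

def mean_impute(rows):
    """Replace each missing value with the observed mean of its variable."""
    n_vars = len(rows[0])
    col_means = [mean(r[j] for r in rows if r[j] is not None)
                 for j in range(n_vars)]
    return [[col_means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

random.seed(0)
# simulate a complete continuous dataset (n = 369, two variables), then
# delete ~20% of the values completely at random (MCAR)
complete = [[random.gauss(50, 10), random.gauss(100, 15)] for _ in range(369)]
observed = [[v if random.random() > 0.2 else None for v in r] for r in complete]

deleted = listwise_delete(observed)
imputed = mean_impute(observed)
# deletion shrinks the sample; imputation keeps all 369 cases
print(len(deleted), len(imputed))
```

Mean imputation preserves sample size and the observed mean, though (unlike predictive mean matching or stochastic regression) it shrinks the variance, which is one reason no single method wins across all scenarios.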
Topics: Archaeology; Sample Size; Research Design; Data Management; Bias
PubMed: 36790608
DOI: 10.1002/ajpa.24614
Proceedings of the National Academy of..., Mar 2018
We describe and demonstrate an empirical strategy useful for discovering and replicating empirical effects in psychological science. The method involves the design of a metastudy, in which many independent experimental variables, any of which may be moderators of an empirical effect, are indiscriminately randomized. Radical randomization yields rich datasets that can be used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols, and it enhances the generalizability of these claims. The strategy is made feasible by advances in hierarchical Bayesian modeling that allow for the pooling of information across unlike experiments and designs, and it is proposed here as a gold standard for replication research and exploratory research. The practical feasibility of the strategy is demonstrated with a replication of a study on subliminal priming.
Topics: Bayes Theorem; Biomedical Research; Data Interpretation, Statistical; Humans; Random Allocation; Research Design
PubMed: 29531092
DOI: 10.1073/pnas.1708285114
PLoS Biology, Mar 2018
Meta-research is the study of research itself: its methods, reporting, reproducibility, evaluation, and incentives. Given that science is the key driver of human progress, improving the efficiency of scientific investigation and yielding more credible and more useful research results can translate to major benefits. The research enterprise grows very fast. Both new opportunities for knowledge and innovation and new threats to validity and scientific integrity emerge. Old biases abound, and new ones continuously appear as novel disciplines emerge with different standards and challenges. Meta-research uses an interdisciplinary approach to study, promote, and defend robust science. Major disruptions are likely to happen in the way we pursue scientific investigation, and it is important to ensure that these disruptions are evidence based.
Topics: Interdisciplinary Research; Meta-Analysis as Topic; Reproducibility of Results; Research Design
PubMed: 29534060
DOI: 10.1371/journal.pbio.2005468