CPT: Pharmacometrics & Systems... Feb 2022
The full random-effects model (FREM) is a method for determining covariate effects in mixed-effects models. Covariates are modeled as random variables, described by mean and variance. The method captures the covariate effects in estimated covariances between individual parameters and covariates. This approach is robust against issues that may cause reduced performance in methods based on estimating fixed effects (e.g., correlated covariates where the effects cannot be simultaneously identified in fixed-effects methods). FREM covariate parameterization and transformation of covariate data records can be used to alter the covariate-parameter relation. Four relations (linear, log-linear, exponential, and power) were implemented and shown to provide estimates equivalent to their fixed-effects counterparts. Comparisons between FREM and mathematically equivalent full fixed-effects models (FFEMs) were performed in original and simulated data, in the presence and absence of non-normally distributed and highly correlated covariates. These comparisons show that both FREM and FFEM perform well in the examined cases, with a slightly better estimation accuracy of parameter interindividual variability (IIV) in FREM. In addition, FREM offers the unique advantage of letting a single estimation simultaneously provide covariate effect coefficient estimates and IIV estimates for any subset of the examined covariates, including the effect of each covariate in isolation. Such subsets can be used to apply the model across data sources with different sets of available covariates, or to communicate covariate effects in a way that is not conditional on other covariates.
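The four covariate-parameter relations named in the abstract can be written as simple transformations of a typical parameter value. The following is an illustrative Python sketch, not the authors' implementation; all names (`theta`, `cov`, `ref`, `beta`) are hypothetical.

```python
from math import exp, log

# Illustrative sketch of the four covariate-parameter relations named in
# the abstract. `theta` is a typical parameter value, `cov` an individual's
# covariate value, `ref` a reference covariate value, and `beta` the
# covariate effect coefficient -- all names are hypothetical.

def linear(theta, cov, ref, beta):
    return theta * (1 + beta * (cov - ref))

def log_linear(theta, cov, ref, beta):
    return theta * (1 + beta * (log(cov) - log(ref)))

def exponential(theta, cov, ref, beta):
    return theta * exp(beta * (cov - ref))

def power(theta, cov, ref, beta):
    return theta * (cov / ref) ** beta

# Example: a clearance of 10 scaled by body weight (80 kg vs. a 70 kg
# reference) with an allometric exponent of 0.75
cl = power(theta=10.0, cov=80.0, ref=70.0, beta=0.75)
```

At the reference covariate value, each relation returns the typical value `theta` unchanged, which is the property that makes the parameterizations comparable.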
Topics: Humans; Models, Statistical; Research Design
PubMed: 34984855
DOI: 10.1002/psp4.12741
BMC Medical Research Methodology, Aug 2021
Randomized Controlled Trial
BACKGROUND
Randomization is the foundation of any clinical trial involving treatment comparison. It helps mitigate selection bias, promotes similarity of treatment groups with respect to important known and unknown confounders, and contributes to the validity of statistical tests. Various restricted randomization procedures with different probabilistic structures and different statistical properties are available. The goal of this paper is to present a systematic roadmap for the choice and application of a restricted randomization procedure in a clinical trial.
METHODS
We survey available restricted randomization procedures for sequential allocation of subjects in a randomized, comparative, parallel group clinical trial with equal (1:1) allocation. We explore statistical properties of these procedures, including balance/randomness tradeoff, type I error rate and power. We perform head-to-head comparisons of different procedures through simulation under various experimental scenarios, including cases when common model assumptions are violated. We also provide some real-life clinical trial examples to illustrate the thinking process for selecting a randomization procedure for implementation in practice.
RESULTS
Restricted randomization procedures targeting 1:1 allocation vary in the degree of balance/randomness they induce, and more importantly, they vary in terms of validity and efficiency of statistical inference when common model assumptions are violated (e.g. when outcomes are affected by a linear time trend; measurement error distribution is misspecified; or selection bias is introduced in the experiment). Some procedures are more robust than others. Covariate-adjusted analysis may be essential to ensure validity of the results. Special considerations are required when selecting a randomization procedure for a clinical trial with very small sample size.
CONCLUSIONS
The choice of randomization design, data analytic technique (parametric or nonparametric), and analysis strategy (randomization-based or population model-based) are all very important considerations. Randomization-based tests are robust and valid alternatives to likelihood-based tests and should be considered more frequently by clinical investigators.
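As a concrete illustration of a restricted randomization procedure targeting 1:1 allocation, the sketch below implements permuted blocks, one of the standard procedures surveyed in work of this kind. It is a minimal Python sketch under assumed conventions (arms labeled "A" and "B", even block size), not code from the paper.

```python
import random

def permuted_block_randomization(n_subjects, block_size=4, seed=None):
    """Restricted randomization via permuted blocks targeting 1:1
    allocation: within each block, exactly half the subjects go to
    each arm, in a randomly shuffled order."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)          # random order within each block
        sequence.extend(block)
    return sequence[:n_subjects]    # truncate any final partial block

seq = permuted_block_randomization(20, block_size=4, seed=1)
```

The block size controls the balance/randomness tradeoff the abstract describes: small blocks force tight balance but make upcoming assignments more predictable (a selection-bias risk), while large blocks behave closer to complete randomization.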
Topics: Computer Simulation; Humans; Likelihood Functions; Random Allocation; Sample Size; Selection Bias
PubMed: 34399696
DOI: 10.1186/s12874-021-01303-z
Biostatistics (Oxford, England), Apr 2022
Divide-and-conquer (DAC) is a commonly used strategy to overcome the challenges of extraordinarily large data, by first breaking the dataset into a series of data blocks, then combining results from individual data blocks to obtain a final estimate. Various DAC algorithms have been proposed to fit a sparse predictive regression model in the $L_1$ regularization setting. However, many existing DAC algorithms remain computationally intensive when the sample size and the number of candidate predictors are both large. In addition, no existing DAC procedures provide inference for quantifying the accuracy of risk prediction models. In this article, we propose a screening and one-step linearization infused DAC (SOLID) algorithm to fit sparse logistic regression to massive datasets, by integrating the DAC strategy with a screening step and sequences of linearization. This enables us to maximize the likelihood with only selected covariates and perform penalized estimation via a fast approximation to the likelihood. To assess the accuracy of a predictive regression model, we develop a modified cross-validation (MCV) that utilizes the side products of SOLID, substantially reducing the computational burden. Compared with existing DAC methods, the MCV procedure is the first to make inference on accuracy. Extensive simulation studies suggest that the proposed SOLID and MCV procedures substantially outperform existing methods with respect to computational speed and achieve statistical efficiency similar to that of the full-sample-based estimator. We also demonstrate that the proposed inference procedure provides valid interval estimators. We apply the proposed SOLID procedure to develop and validate a classification model for disease diagnosis using narrative clinical notes based on electronic medical record data from Partners HealthCare.
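The basic divide-and-conquer idea described at the start of the abstract can be sketched in a few lines. This toy Python example fits an unpenalized logistic regression on each data block by Newton-Raphson and averages the block estimates; it is only the generic DAC baseline, not the SOLID algorithm (no screening step, one-step linearization, or $L_1$ penalty), and all names are hypothetical.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Plain Newton-Raphson maximum-likelihood fit of logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])              # observed information X'WX
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Simulate a large dataset with intercept 0.5 and slope -1.0
rng = np.random.default_rng(0)
n, true_beta = 20_000, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))

# Divide the rows into blocks, fit each block, average the estimates
blocks = np.array_split(np.arange(n), 10)
beta_dac = np.mean([fit_logistic(X[b], y[b]) for b in blocks], axis=0)
```

Averaging per-block maximum-likelihood estimates is the simplest combination rule; the computational pain points the abstract targets (many candidate predictors, penalized estimation, and inference on prediction accuracy) are exactly where this naive version breaks down.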
Topics: Algorithms; Computer Simulation; Humans; Logistic Models; Research Design
PubMed: 32909599
DOI: 10.1093/biostatistics/kxaa031
International Journal of Surgery... Jun 2022
Meta-Analysis Review
BACKGROUND
Revisional bariatric surgery is gaining increasing interest as long-term follow-up studies demonstrate an elevated failure rate of primary surgery due to insufficient weight loss, weight regain, or complications. This particularly concerns restrictive bariatric surgery, which has been widely adopted from the 1980s to the present through different procedures, notably vertical banded gastroplasty, laparoscopic adjustable gastric banding, and sleeve gastrectomy. The aim of this study is to define which revisional bariatric procedure performs best after failure of primary restrictive surgery.
METHODS
A systematic review and network meta-analysis of 39 studies was conducted following the PRISMA guidelines and the Cochrane protocol.
RESULTS
Biliopancreatic diversion with duodenal switch guarantees the best results in terms of weight loss (1- and 3-year %TWL MD: 12.38 and 28.42), followed by single-anastomosis duodenoileal bypass (9.24 and 19.13), one-anastomosis gastric bypass (7.16 and 13.1), and Roux-en-Y gastric bypass (4.68 and 7.3), compared with re-sleeve gastrectomy. Duodenal switch and Roux-en-Y gastric bypass are associated with an increased risk of late major morbidity (OR: 3.07 and 2.11, respectively) compared with re-sleeve gastrectomy, while no significant difference was highlighted for the other procedures. Re-sleeve gastrectomy is the revisional intervention most frequently burdened by weight recidivism; compared with it, patients undergoing single-anastomosis duodenoileal bypass have the lowest risk of weight regain (OR: 0.07).
CONCLUSION
Considering the analyzed outcomes altogether, single-anastomosis duodenoileal bypass and one-anastomosis gastric bypass are the best-performing revisional procedures after failure of restrictive surgery, owing to satisfactory short- and mid-term weight loss and low early and late morbidity. Moreover, single-anastomosis duodenoileal bypass carries a low risk of weight recidivism.
Topics: Bariatric Surgery; Gastrectomy; Gastric Bypass; Humans; Laparoscopy; Morbidity; Network Meta-Analysis; Obesity, Morbid; Reoperation; Retrospective Studies; Weight Gain; Weight Loss
PubMed: 35589051
DOI: 10.1016/j.ijsu.2022.106677
Chest, Jul 2020
Review
Case-control studies are one of the major observational study designs for performing clinical research. The advantages of these study designs over other study designs are that they are relatively quick to perform, economical, and easy to design and implement. Case-control studies are particularly appropriate for studying disease outbreaks, rare diseases, or outcomes of interest. This article describes several types of case-control designs, with simple graphical displays to help understand their differences. Study design considerations are reviewed, including sample size, power, and measures associated with risk factors for clinical outcomes. Finally, we discuss the advantages and disadvantages of case-control studies and provide a checklist for authors and a framework of considerations to guide reviewers' comments.
Topics: Case-Control Studies; Checklist; Guidelines as Topic; Humans; Research Design
PubMed: 32658653
DOI: 10.1016/j.chest.2020.03.009
Current Opinion in Urology, Mar 2018
Review
PURPOSE OF REVIEW
With the increasing incidence of small renal masses (SRMs), ablative technologies are becoming more commonly utilized. With any nascent treatment modality, the outcomes literature needs to be continually re-evaluated. The purpose of this review is to revisit the most up-to-date literature regarding the safety and efficacy of ablative treatments of renal lesions.
RECENT FINDINGS
Recent literature demonstrates that small renal tumor ablation is safe and effective. Although it does not have the same oncological efficacy as surgical extirpation, local recurrence-free survival has consistently been shown to be around 90%. Cryoablation and radiofrequency ablation have longer-term data demonstrating durable responses. Microwave ablation and irreversible electroporation are promising modalities, with longer-term data forthcoming. Complication rates and procedural morbidity of ablation are consistently lower than those of partial nephrectomy.
SUMMARY
Image-guided focal ablation is a valuable tool in the management of SRMs. Although it does not have the same efficacy as surgical extirpation, with the ability to perform repeat procedures and salvage surgery if necessary, oncologic outcomes are comparable to those of upfront surgery. Ultimately, longer-term studies and prospective trials are needed to further elucidate these modalities.
Topics: Ablation Techniques; Electroporation; Humans; Kidney Neoplasms; Microwaves; Nephrectomy; Patient Selection; Postoperative Complications; Treatment Outcome
PubMed: 29303914
DOI: 10.1097/MOU.0000000000000475
Statistics in Medicine, Nov 2021
In clinical trials, sample size re-estimation is often conducted at interim. The purpose is to determine whether the study will achieve its objectives if the treatment effect observed at interim persists until the end of the study. A traditional approach is to conduct a conditional power analysis for sample size based only on the observed treatment effect. This approach, however, does not take into consideration the variability of (i) the estimated treatment effect and (ii) the estimated variability associated with the treatment effect. Thus, the re-estimated sample sizes may not be robust and hence may not be reliable. In this article, two methods are proposed, namely the adjusted effect size (AES) approach and the iterated expectation/variance (IEV) approach, which account for the variability associated with the observed responses at interim. The proposed methods provide interval estimates of the sample size required for the intended trial, which is useful for making critical go/no-go decisions. Statistical properties of the proposed methods are evaluated in terms of control of the type I error rate and statistical power. The results show that the traditional approach performs poorly in controlling type I error inflation, whereas the IEV approach has the best performance in most cases. Additionally, all re-estimation approaches keep the statistical power above 80%; in particular, the IEV approach's statistical power, using an adjusted significance level, is above 95%. However, the IEV approach may lead to a greater increase in sample size when detecting a smaller effect size. In general, the IEV approach is effective when the effect size is large; otherwise, the AES approach is more suitable for controlling the type I error rate and keeping power above 80% with a more reasonable re-estimated sample size.
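The "traditional approach" the abstract critiques, conditional power under the assumption that the interim effect persists, can be sketched as follows. This is a generic textbook-style Python sketch (normal approximation, single final critical value, no alpha-spending shown), not the AES or IEV method proposed in the paper; all names are hypothetical.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_interim, info_frac, z_crit=1.96):
    """Conditional power under the 'current trend' assumption: the
    treatment effect observed at interim persists to the end.
    `info_frac` is the fraction of total information accrued at interim;
    the projected final drift is z_interim / sqrt(info_frac)."""
    drift = z_interim / sqrt(info_frac)
    return norm_cdf((drift - z_crit) / sqrt(1.0 - info_frac))

# Halfway through the trial with an interim z-statistic of 1.5
cp = conditional_power(z_interim=1.5, info_frac=0.5)
```

Because `z_interim` is itself a noisy estimate, a point value of conditional power inherits that noise, which is exactly the limitation that motivates the interval-based AES and IEV approaches in the paper.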
Topics: Clinical Trials as Topic; Humans; Research Design; Sample Size
PubMed: 34433225
DOI: 10.1002/sim.9175
Biometrics, Mar 2022
Comparing areas under the ROC curve (AUCs) is a popular approach to comparing prognostic biomarkers. The aim of this paper is to present an efficient method to control the family-wise error rate when multiple comparisons are performed. We suggest combining the max-t test and closed testing procedures. We build on previous work on asymptotic results for ROC curves and on general multiple testing methods to efficiently take into account both the correlations between the test statistics and the logical constraints between the null hypotheses. The proposed method results in a uniformly more powerful procedure than both the single-step max-t test procedure and popular stepwise extensions of the Bonferroni procedure, such as Bonferroni-Holm. As demonstrated in this paper, the method can be applied in most usual contexts, including the time-dependent context with right-censored data. We show how the method works in practice through a motivating example in which we compare several psychometric scores to predict the t-year risk of Alzheimer's disease. The example illustrates several multiple testing settings and demonstrates the advantage of using the proposed methods over common alternatives. R code has been made available to facilitate the use of the methods by others.
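For context, the Bonferroni-Holm step-down adjustment that the abstract uses as a comparator can be sketched in a few lines of Python. The proposed max-t closed-testing procedure additionally exploits the estimated correlations between the AUC test statistics, which is beyond this short illustrative sketch.

```python
def holm_adjust(p_values):
    """Bonferroni-Holm step-down adjusted p-values: sort p-values
    ascending, multiply the k-th smallest by (m - k + 1), and enforce
    monotonicity of the adjusted values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * p_values[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Three pairwise AUC comparisons (illustrative p-values)
adj = holm_adjust([0.01, 0.04, 0.03])
```

Holm only uses the ranked p-values, so it cannot benefit from positively correlated test statistics; accounting for those correlations is how the max-t closed-testing combination achieves its uniform power gain.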
Topics: ROC Curve; Research Design
PubMed: 33207001
DOI: 10.1111/biom.13401
Behaviour Research and Therapy, Jun 2019
Randomization tests for alternating treatments designs, multiple baseline designs, and withdrawal/reversal designs are well-established. Recent classifications, however, also mention the "changing criterion design" as a fourth important type of single-case experimental design. In this paper, we examine the potential of randomization tests for changing criterion designs. We focus on the rationale of the randomization test, the random assignment procedure, the choice of the test statistic, and the calculation of randomization test p-values. Two examples using empirical data and an R computer program to perform the calculations are provided. We discuss the problems associated with conceptualizing the changing criterion design as a variant of the multiple baseline design, the potential of the range-bound changing criterion design, experimental control as an all-or-none phenomenon, the necessity of random assignment for the statistical-conclusion validity of the randomization test, and the use of randomization tests in nonrandomized designs.
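The shared rationale of randomization tests, computing a p-value by comparing the observed test statistic against its distribution over all admissible random assignments, can be sketched generically. This toy Python example uses a mean-difference statistic over all ways of assigning measurement occasions to two conditions; the paper's changing criterion procedures randomize the moments of criterion change instead, so this sketch illustrates only the shared logic, not their specific design.

```python
from itertools import combinations

def randomization_test(scores, treatment_idx):
    """One-sided randomization test p-value: the proportion of equally
    likely assignments whose mean difference (treatment minus control)
    is at least as large as the observed one."""
    n, k = len(scores), len(treatment_idx)

    def mean_diff(idx):
        treat = [scores[i] for i in idx]
        ctrl = [scores[i] for i in range(n) if i not in idx]
        return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

    observed = mean_diff(set(treatment_idx))
    perms = [mean_diff(set(c)) for c in combinations(range(n), k)]
    return sum(d >= observed for d in perms) / len(perms)

# Six observations, the last three assigned to treatment (hypothetical data)
p = randomization_test([3, 4, 2, 9, 8, 10], treatment_idx=[3, 4, 5])
```

Because the observed assignment is always counted among the permutations, the smallest attainable p-value is 1 over the number of admissible assignments, which is why the random assignment procedure itself determines the resolution of the test.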
Topics: Humans; Random Allocation; Research Design; Software
PubMed: 30670306
DOI: 10.1016/j.brat.2019.01.005
Sports Medicine (Auckland, N.Z.), Feb 2019
Review
Sport nutrition is one of the fastest growing and evolving disciplines of sport and exercise science, demonstrated by a 4-fold increase in the number of research papers between 2012 and 2018. Indeed, the scope of contemporary nutrition-related research could range from discovery of novel nutrient-sensitive cell-signalling pathways to the assessment of the effects of sports drinks on exercise performance. For the sport nutrition practitioner, the goal is to translate innovations in research to develop and administer practical interventions that contribute to the delivery of winning performances. Accordingly, step one in the translation of research to practice should always be a well-structured critique of the translational potential of the existing scientific evidence. To this end, we present an operational framework (the "Paper-2-Podium Matrix") that provides a checklist of criteria to prompt the critical evaluation of performance nutrition-related research papers. In considering the (1) research context, (2) participant characteristics, (3) research design, (4) dietary and exercise controls, (5) validity and reliability of exercise performance tests, (6) data analytics, (7) feasibility of application, (8) risk/reward and (9) timing of the intervention, we aimed to provide a time-efficient framework to aid practitioners in their scientific appraisal of research. Ultimately, it is the combination of boldness of reform (i.e. innovations in research) and quality of execution (i.e. ease of administration of practical solutions) that is most likely to deliver the transition from paper to podium.
Topics: Athletic Performance; Decision Making; Diet; Humans; Patient Selection; Research Design; Sports; Sports Nutritional Physiological Phenomena; Translational Research, Biomedical
PubMed: 30671902
DOI: 10.1007/s40279-018-1005-2