European Journal of Obstetrics,... Oct 2021
Review
The clinical application of prediction models is increasing within the field of gynaecology and obstetrics, largely because clinicians and patients prefer individualized counselling and person-specific, more objective outcome assessment. To prevent the use of inadequate models, it is important that prediction model studies be constructed and reported correctly. For this purpose, the TRIPOD statement (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) was developed. The aim of this review is to provide an overview of published prediction models in benign gynaecology and to investigate to what extent these studies meet the TRIPOD criteria. We performed a literature search in PubMed, Embase and the Cochrane Library from inception to August 2020; additional articles were identified by searching the cross-references of relevant studies. Publications were included if the aim of the study was to develop a multivariable prediction model within the field of benign gynaecology. Two independent reviewers extracted the data, and the studies were analysed using a checklist derived from the TRIPOD criteria. The search yielded 2487 studies, including potential duplicates, of which twenty-two were eventually included. Of these, 91% screened their predictors by univariable analysis before developing a multivariable prediction model. Fifteen studies reported missing data, but not all of them described how these data were handled (9%). Four different internal validation methods were used across twenty studies. Fifteen studies (68%) presented prediction models with a C-index ≥ 0.7, which indicates a good model. Half of the studies (50%) did not assess calibration, and overall performance was described in only two studies (9%). External validation was performed in 9% of the studies.
The correct development of a prediction model within benign gynaecology, and subsequent transparent reporting of that development, is important to facilitate clinical use. Without transparent reporting, wrong assumptions can be made, leading to incorrect application of a specific prediction model. This overview shows that, external validation excepted, only one article met all the criteria. We therefore strongly recommend using the TRIPOD criteria when developing and validating a prediction model (study). In addition, prior to publication, content experts should critically and statistically review the prediction model; if too many criteria are not met, refusing publication should be considered.
Topics: Checklist; Female; Gynecology; Humans; Pregnancy; Prognosis; Research Design
PubMed: 34509878
DOI: 10.1016/j.ejogrb.2021.08.013
CPT: Pharmacometrics & Systems... Feb 2022
The full random-effects model (FREM) is a method for determining covariate effects in mixed-effects models. Covariates are modeled as random variables, described by mean and variance. The method captures the covariate effects in estimated covariances between individual parameters and covariates. This approach is robust against issues that may cause reduced performance in methods based on estimating fixed effects (e.g., correlated covariates where the effects cannot be simultaneously identified in fixed-effects methods). FREM covariate parameterization and transformation of covariate data records can be used to alter the covariate-parameter relation. Four relations (linear, log-linear, exponential, and power) were implemented and shown to provide estimates equivalent to their fixed-effects counterparts. Comparisons between FREM and mathematically equivalent full fixed-effects models (FFEMs) were performed in original and simulated data, in the presence and absence of non-normally distributed and highly correlated covariates. These comparisons show that both FREM and FFEM perform well in the examined cases, with a slightly better estimation accuracy of parameter interindividual variability (IIV) in FREM. In addition, FREM offers the unique advantage of letting a single estimation simultaneously provide covariate effect coefficient estimates and IIV estimates for any subset of the examined covariates, including the effect of each covariate in isolation. Such subsets can be used to apply the model across data sources with different sets of available covariates, or to communicate covariate effects in a way that is not conditional on other covariates.
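The core covariance-to-coefficient step can be sketched numerically. This is an illustrative toy with invented names and values, not the paper's mixed-effects implementation: it only shows how a linear covariate effect is implied by the estimated covariance between an individual parameter and a covariate, which is the relation FREM exploits.

```python
import numpy as np

# Toy illustration of FREM's covariance idea; all names and values are invented.
rng = np.random.default_rng(0)
n = 5000
true_slope = 0.75                    # covariate coefficient we hope to recover
cov_x = rng.normal(4.25, 0.2, n)     # covariate treated as a random variable
# Individual parameter deviations driven linearly by the covariate plus noise:
eta = true_slope * (cov_x - cov_x.mean()) + rng.normal(0.0, 0.1, n)

# Estimate the joint covariance of (parameter, covariate); the implied
# fixed-effect coefficient is the conditional-expectation slope cov/var.
Sigma = np.cov(eta, cov_x)
implied_slope = Sigma[0, 1] / Sigma[1, 1]
```

Because the slope is read off the covariance matrix after estimation, the same fit can report the effect of any subset of covariates without refitting, which is the flexibility the abstract highlights.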
Topics: Humans; Models, Statistical; Research Design
PubMed: 34984855
DOI: 10.1002/psp4.12741
BMC Medical Research Methodology Aug 2021
Randomized Controlled Trial
BACKGROUND
Randomization is the foundation of any clinical trial involving treatment comparison. It helps mitigate selection bias, promotes similarity of treatment groups with respect to important known and unknown confounders, and contributes to the validity of statistical tests. Various restricted randomization procedures with different probabilistic structures and different statistical properties are available. The goal of this paper is to present a systematic roadmap for the choice and application of a restricted randomization procedure in a clinical trial.
METHODS
We survey available restricted randomization procedures for sequential allocation of subjects in a randomized, comparative, parallel group clinical trial with equal (1:1) allocation. We explore statistical properties of these procedures, including balance/randomness tradeoff, type I error rate and power. We perform head-to-head comparisons of different procedures through simulation under various experimental scenarios, including cases when common model assumptions are violated. We also provide some real-life clinical trial examples to illustrate the thinking process for selecting a randomization procedure for implementation in practice.
RESULTS
Restricted randomization procedures targeting 1:1 allocation vary in the degree of balance/randomness they induce, and more importantly, they vary in terms of validity and efficiency of statistical inference when common model assumptions are violated (e.g. when outcomes are affected by a linear time trend; measurement error distribution is misspecified; or selection bias is introduced in the experiment). Some procedures are more robust than others. Covariate-adjusted analysis may be essential to ensure validity of the results. Special considerations are required when selecting a randomization procedure for a clinical trial with very small sample size.
CONCLUSIONS
The choice of randomization design, data analytic technique (parametric or nonparametric), and analysis strategy (randomization-based or population model-based) are all very important considerations. Randomization-based tests are robust and valid alternatives to likelihood-based tests and should be considered more frequently by clinical investigators.
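One restricted randomization procedure from this literature, permuted block randomization, together with a randomization-based test, can be sketched as follows. This is a generic illustration, not the paper's simulation code; the test shuffles labels freely, which only approximates re-running the blocked procedure, and all data are invented.

```python
import random

def permuted_block_randomization(n_subjects, block_size=4, seed=2024):
    """1:1 allocation via permuted blocks: each block of size 4 contains
    exactly two 'A' and two 'B' assignments in random order."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_subjects]

def randomization_test(outcomes, arms, n_perm=2000, seed=7):
    """Re-randomization p-value for the absolute difference in arm means.
    Free label shuffling is used as an approximation to re-running the
    blocked allocation procedure itself."""
    rng = random.Random(seed)
    def mean_diff(assignment):
        a = [y for y, g in zip(outcomes, assignment) if g == "A"]
        b = [y for y, g in zip(outcomes, assignment) if g == "B"]
        return abs(sum(a) / len(a) - sum(b) / len(b))
    observed = mean_diff(arms)
    hits = 0
    for _ in range(n_perm):
        perm = list(arms)
        rng.shuffle(perm)
        hits += mean_diff(perm) >= observed
    return hits / n_perm

arms = permuted_block_randomization(16)
noise = random.Random(1)
# Toy outcomes with a genuine treatment effect of 0.8 added for arm A.
outcomes = [(0.8 if g == "A" else 0.0) + noise.gauss(0.0, 0.3) for g in arms]
p_value = randomization_test(outcomes, arms)
```

With the built-in effect the p-value is near zero; under no effect it would be approximately uniform, which is the validity property randomization-based tests retain even when model assumptions fail.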
Topics: Computer Simulation; Humans; Likelihood Functions; Random Allocation; Sample Size; Selection Bias
PubMed: 34399696
DOI: 10.1186/s12874-021-01303-z
Biostatistics (Oxford, England) Apr 2022
Divide-and-conquer (DAC) is a commonly used strategy to overcome the challenges of extraordinarily large data, by first breaking the dataset into a series of data blocks, then combining results from individual data blocks to obtain a final estimation. Various DAC algorithms have been proposed to fit a sparse predictive regression model in the $L_1$ regularization setting. However, many existing DAC algorithms remain computationally intensive when sample size and number of candidate predictors are both large. In addition, no existing DAC procedures provide inference for quantifying the accuracy of risk prediction models. In this article, we propose a screening and one-step linearization infused DAC (SOLID) algorithm to fit sparse logistic regression to massive datasets, by integrating the DAC strategy with a screening step and sequences of linearization. This enables us to maximize the likelihood with only selected covariates and perform penalized estimation via a fast approximation to the likelihood. To assess the accuracy of a predictive regression model, we develop a modified cross-validation (MCV) that utilizes the side products of the SOLID, substantially reducing the computational burden. Compared with existing DAC methods, the MCV procedure is the first to make inference on accuracy. Extensive simulation studies suggest that the proposed SOLID and MCV procedures substantially outperform the existing methods with respect to computational speed and achieve similar statistical efficiency as the full sample-based estimator. We also demonstrate that the proposed inference procedure provides valid interval estimators. We apply the proposed SOLID procedure to develop and validate a classification model for disease diagnosis using narrative clinical notes based on electronic medical record data from Partners HealthCare.
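The basic divide-and-conquer idea can be sketched for logistic regression: fit each block separately, then combine (here, naively average) the estimates. This is a minimal sketch of the generic DAC strategy only, not SOLID itself, which adds screening and one-step linearization on top; data and dimensions are invented.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Unpenalized logistic regression via Newton-Raphson (IRLS).
    Stands in for the per-block fits of a divide-and-conquer scheme."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        hessian = X.T @ (W[:, None] * X)
        beta += np.linalg.solve(hessian, X.T @ (y - p))
    return beta

rng = np.random.default_rng(42)
n, d = 20000, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
true_beta = np.array([-0.5, 1.0, -1.0, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

# One-shot DAC: fit each of 10 blocks separately, then average the estimates.
blocks = np.array_split(np.arange(n), 10)
beta_dac = np.mean([fit_logistic(X[idx], y[idx]) for idx in blocks], axis=0)
beta_full = fit_logistic(X, y)
```

The averaged estimate tracks the full-sample fit closely here; each block only ever holds 1/10 of the data in memory, which is the computational payoff DAC methods trade on.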
Topics: Algorithms; Computer Simulation; Humans; Logistic Models; Research Design
PubMed: 32909599
DOI: 10.1093/biostatistics/kxaa031
Chest Jul 2020
Review
Case-control studies are one of the major observational study designs for performing clinical research. The advantages of these study designs over other study designs are that they are relatively quick to perform, economical, and easy to design and implement. Case-control studies are particularly appropriate for studying disease outbreaks, rare diseases, or outcomes of interest. This article describes several types of case-control designs, with simple graphical displays to help understand their differences. Study design considerations are reviewed, including sample size, power, and measures associated with risk factors for clinical outcomes. Finally, we discuss the advantages and disadvantages of case-control studies and provide a checklist for authors and a framework of considerations to guide reviewers' comments.
Topics: Case-Control Studies; Checklist; Guidelines as Topic; Humans; Research Design
PubMed: 32658653
DOI: 10.1016/j.chest.2020.03.009
Journal of Neurological Surgery. Part... Sep 2020
Review
BACKGROUND
The use of sham interventions in randomized controlled trials (RCTs) is essential to minimize bias. However, their use in surgical RCTs is rare and subject to ethical concerns. To date, no studies have looked at the use of sham interventions in RCTs in neurosurgery.
METHODS
This study evaluated the frequency, type, and indication of sham interventions in RCTs in neurosurgery. RCTs using sham interventions were also characterized in terms of design and risk of bias.
RESULTS
From a total of 1,102 identified RCTs in neurosurgery, 82 (7.4%) used sham interventions. The most common indication for the RCT was the treatment of pain (67.1%), followed by the treatment of movement disorders and other clinical problems (18.3%) and brain injuries (12.2%). The most commonly used sham interventions were saline injections into spinal structures (31.7%) and peripheral nerves (10.9%), followed by sham interventions in cranial surgery (26.8%) and spine surgery (15.8%). Insertion of probes or catheters for a sham lesion was performed in 14.6%. In terms of methodology, most RCTs using sham interventions were double blinded (76.5%), 9.9% were single blinded, and 13.6% did not report the type of blinding.
CONCLUSION
Sham-controlled RCTs in neurosurgery are feasible. Most aim to minimize bias and to evaluate the efficacy of pain management methods, especially in spinal disorders. The greatest proportion of sham-controlled RCTs involves different types of substance administration routes, with sham surgery being less commonly performed.
Topics: Double-Blind Method; Humans; Movement Disorders; Neurosurgical Procedures; Pain; Randomized Controlled Trials as Topic; Research Design
PubMed: 32438420
DOI: 10.1055/s-0040-1709161
International Journal of Surgery... Jun 2022
Meta-Analysis Review
BACKGROUND
Revisional bariatric surgery is gaining increasing interest as long-term follow-up studies demonstrate an elevated failure rate of primary surgery due to insufficient weight loss, weight regain or complications. This particularly concerns restrictive bariatric surgery, which has been widely adopted from the 1980s to the present through different procedures, notably vertical banded gastroplasty, laparoscopic adjustable gastric banding and sleeve gastrectomy. The aim of this study is to define which revisional bariatric procedure performs best after failure of primary restrictive surgery.
METHODS
A systematic review and network meta-analysis of 39 studies was conducted following the PRISMA guidelines and the Cochrane protocol.
RESULTS
Biliopancreatic diversion with duodenal switch guarantees the best results in terms of weight loss (1- and 3-year %TWL MD: 12.38 and 28.42), followed by single-anastomosis duodenoileal bypass (9.24 and 19.13), one-anastomosis gastric bypass (7.16 and 13.1), and Roux-en-Y gastric bypass (4.68 and 7.3), compared with re-sleeve gastrectomy. Duodenal switch and Roux-en-Y gastric bypass are associated with an increased risk of late major morbidity (OR: 3.07 and 2.11, respectively) compared with re-sleeve gastrectomy, while no significant difference was highlighted for the other procedures. Re-sleeve gastrectomy is the revisional intervention most frequently burdened by weight recidivism; compared with it, patients undergoing single-anastomosis duodenoileal bypass have the lowest risk of weight regain (OR: 0.07).
CONCLUSION
Considering the analyzed outcomes altogether, single-anastomosis duodenoileal bypass and one-anastomosis gastric bypass are the best-performing revisional procedures after failure of restrictive surgery, owing to satisfactory short- and mid-term weight loss and low early and late morbidity. Moreover, single-anastomosis duodenoileal bypass carries a low risk of weight recidivism.
Topics: Bariatric Surgery; Gastrectomy; Gastric Bypass; Humans; Laparoscopy; Morbidity; Network Meta-Analysis; Obesity, Morbid; Reoperation; Retrospective Studies; Weight Gain; Weight Loss
PubMed: 35589051
DOI: 10.1016/j.ijsu.2022.106677
Multivariate Behavioral Research 2022
There is an increasing need to analyze multivariate time series data due to the rapid development of data collection tools such as smartphone apps, wearable sensors, and brain imaging techniques. P-technique factor analysis allows researchers to establish a measurement model for these time series. Analyzing such data is challenging because they are often non-normal (e.g., steps, heart rate, sleep, mood, and brain signals) and correlated at nearby time points. We propose using a bootstrap procedure to accommodate both the non-normality and the dependency of nearby time points. We explore the statistical properties of the procedure with simulated data and illustrate the test with two empirical data sets. The simulation study shows that (1) the bootstrap procedure performed better than an existing analytic procedure for time series data with excessive kurtosis, and (2) the existing analytic procedure performed better than the bootstrap procedure for normal and skewed time series.
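The dependency-preserving resampling idea can be sketched with a generic moving block bootstrap. This is an assumption on my part about the flavor of procedure: the paper targets P-technique factor models, which this toy (bootstrapping the mean of a skewed, autocorrelated series) does not fit.

```python
import numpy as np

def moving_block_bootstrap(series, block_len, n_boot, stat, seed=0):
    """Resample overlapping blocks to preserve short-range serial dependence,
    then evaluate `stat` on each resampled series."""
    rng = np.random.default_rng(seed)
    n = len(series)
    starts = np.arange(n - block_len + 1)           # all possible block starts
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n_blocks)   # sample blocks w/ replacement
        resampled = np.concatenate(
            [series[s:s + block_len] for s in picks])[:n]
        out[b] = stat(resampled)
    return out

# AR(1) series with mean-zero chi-square innovations: non-normal AND dependent,
# the two features the bootstrap is meant to accommodate.
rng = np.random.default_rng(1)
n = 500
eps = rng.chisquare(3, n) - 3        # right-skewed, mean-zero innovations
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + eps[t]

boot_means = moving_block_bootstrap(x, block_len=25, n_boot=2000, stat=np.mean)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

An i.i.d. bootstrap on the same series would understate the sampling variability, because positively autocorrelated data carry less information per observation than independent data.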
Topics: Time Factors; Factor Analysis, Statistical; Research Design; Computer Simulation; Data Collection
PubMed: 33999744
DOI: 10.1080/00273171.2021.1919047
Journal of Plastic, Reconstructive &... Nov 2020
Review
Breast reconstruction with the DIEP flap is a well-accepted and well-established technique for autologous breast reconstruction. In the past, this reconstructive option was typically offered to a limited group of patients, as previous surgeries or a low BMI were considered an obstacle to the success of the procedure or to a satisfactory cosmetic outcome due to the lack of available tissue. Nowadays this is no longer the case, and DIEP flaps are performed routinely on slender patients and on women who have undergone previous liposuction or abdominal surgeries. This paper analyzes current surgical options for volume recruitment in patients with scanty abdominal tissue or with abdominal scars and presents our standardized approach to DIEP volume augmentation with the "Calzone style" bipedicled DIEP flap.
Topics: Female; Humans; Mammaplasty; Patient Selection; Surgical Flaps
PubMed: 32571688
DOI: 10.1016/j.bjps.2020.05.070
Statistics in Medicine Nov 2021
In clinical trials, sample size re-estimation is often conducted at interim. The purpose is to determine whether the study will achieve its objectives if the treatment effect observed at interim persists until the end of the study. A traditional approach is to conduct a conditional power analysis for sample size based only on the observed treatment effect. This approach, however, does not take into consideration the variabilities of (i) the observed (estimated) treatment effect and (ii) the observed (estimated) variability associated with the treatment effect. Thus, the re-estimated sample sizes may not be robust and hence may not be reliable. In this article, two methods are proposed, namely the adjusted effect size (AES) approach and the iterated expectation/variance (IEV) approach, which account for the variability associated with the observed responses at interim. The proposed methods provide interval estimates of the sample size required for the intended trial, which is useful for making critical go/no-go decisions. Statistical properties of the proposed methods are evaluated in terms of control of the type I error rate and statistical power. The results show that the traditional approach performs poorly in controlling type I error inflation, whereas the IEV approach has the best performance in most cases. Additionally, all re-estimation approaches keep the statistical power over 80%; in particular, the statistical power of the IEV approach, using an adjusted significance level, is over 95%. However, the IEV approach may lead to a greater increase in sample size when detecting a smaller effect size. In general, the IEV approach is effective when the effect size is large; otherwise, the AES approach is more suitable for controlling the type I error rate and keeping power over 80% with a more reasonable re-estimated sample size.
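The traditional plug-in re-estimation the authors critique, plus a crude interval obtained by feeding the confidence limits of the interim effect into the same formula, can be sketched as follows. The interval construction only illustrates why effect-estimate variability matters; it is not the paper's AES or IEV method, and the numbers are invented.

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def reestimated_n_per_arm(delta_hat, sd, alpha=0.05, power=0.80):
    """Classic fixed-design formula fed with the interim effect estimate:
    the 'traditional' conditional approach the paper critiques."""
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta_hat) ** 2)

def reestimated_n_interval(delta_hat, sd, n_interim, alpha=0.05, power=0.80):
    """Crude interval: plug the 95% confidence limits of the interim effect
    into the same formula. Illustrative only -- it mimics the motivation
    for, but is not, the paper's AES/IEV interval estimates."""
    se = sd * sqrt(2 / n_interim)           # SE of a two-arm mean difference
    big_eff = delta_hat + z(0.975) * se     # larger effect -> smaller n
    small_eff = max(delta_hat - z(0.975) * se, 1e-9)
    return (reestimated_n_per_arm(big_eff, sd, alpha, power),
            reestimated_n_per_arm(small_eff, sd, alpha, power))

point = reestimated_n_per_arm(delta_hat=0.4, sd=1.0)   # single plug-in answer
low, high = reestimated_n_interval(delta_hat=0.4, sd=1.0, n_interim=100)
```

The width of `(low, high)` makes the abstract's point concrete: with 100 subjects per arm at interim, the single plug-in sample size hides an order-of-magnitude range of plausible requirements.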
Topics: Clinical Trials as Topic; Humans; Research Design; Sample Size
PubMed: 34433225
DOI: 10.1002/sim.9175