Multivariate Behavioral Research, 2022
There is an increasing need to analyze multivariate time series data due to the rapid development of data collection tools such as smartphone apps, wearable sensors, and brain imaging techniques. P-technique factor analysis allows researchers to establish a measurement model for these time series. Analyzing such data is challenging because they are often non-normal (e.g., steps, heart rate, sleep, mood, and brain signals) and correlated at nearby time points. We propose using a bootstrap procedure to accommodate both the non-normality and the dependency of nearby time points. We explore the statistical properties with simulated data and illustrate the test with two empirical data sets. The simulation study showed that (1) the bootstrap procedure performed better than an existing analytic procedure for time series with excessive kurtosis, and (2) the analytic procedure performed better than the bootstrap procedure for normal and skewed time series.
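The abstract does not specify the resampling scheme. A standard way to bootstrap a time series while preserving the dependence between nearby time points is the moving block bootstrap; the sketch below is an illustration of that general idea, not the authors' exact procedure (the function name, block length, and toy heavy-tailed AR(1) data are all assumptions):

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Resample a (T x p) multivariate time series in overlapping blocks,
    preserving the dependence between nearby time points."""
    T = series.shape[0]
    n_blocks = -(-T // block_len)  # ceil(T / block_len)
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    return np.concatenate([series[s:s + block_len] for s in starts])[:T]

rng = np.random.default_rng(0)

# Toy bivariate AR(1)-style series with heavy-tailed (kurtotic) innovations
T = 200
noise = rng.standard_t(df=3, size=(T, 2))
series = np.empty((T, 2))
series[0] = noise[0]
for t in range(1, T):
    series[t] = 0.5 * series[t - 1] + noise[t]

boot = moving_block_bootstrap(series, block_len=10, rng=rng)
print(boot.shape)  # (200, 2): same length, within-block dependence preserved
```

A factor model would then be re-fit to each bootstrap series to build an empirical sampling distribution for the loadings.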
Topics: Time Factors; Factor Analysis, Statistical; Research Design; Computer Simulation; Data Collection
PubMed: 33999744
DOI: 10.1080/00273171.2021.1919047
Journal of Plastic, Reconstructive &..., Nov 2020 (Review)
Breast reconstruction with the DIEP flap is a well-accepted and well-established technique for autologous breast reconstruction. In the past, this reconstructive option was typically offered to a limited group of patients, as previous surgeries or a low BMI were considered obstacles to the success of the procedure or to a satisfactory cosmetic outcome due to the lack of available tissue. This is no longer the case: DIEP flaps are now performed routinely on slender patients and on women who have undergone previous liposuction or abdominal surgery. This paper analyzes current surgical options for volume recruitment in patients with scanty abdominal tissue or with abdominal scars and presents our standardized approach to DIEP volume augmentation with the "Calzone style" bipedicled DIEP flap.
Topics: Female; Humans; Mammaplasty; Patient Selection; Surgical Flaps
PubMed: 32571688
DOI: 10.1016/j.bjps.2020.05.070
Statistics in Medicine, Nov 2021
In clinical trials, sample size re-estimation is often conducted at interim. The purpose is to determine whether the study will achieve its objectives if the treatment effect observed at interim persists until the end of the study. A traditional approach is to conduct a conditional power analysis for sample size based only on the observed treatment effect. This approach, however, does not take into consideration the variability of (i) the estimated treatment effect and (ii) the estimated variability associated with the treatment effect. Thus, the resultant re-estimated sample sizes may not be robust and hence may not be reliable. In this article, two methods are proposed, namely the adjusted effect size (AES) approach and the iterated expectation/variance (IEV) approach, which account for the variability associated with the observed responses at interim. The proposed methods provide interval estimates of the sample size required for the intended trial, which is useful for making critical go/no-go decisions. Statistical properties of the proposed methods are evaluated in terms of control of the type I error rate and statistical power. The results show that the traditional approach performs poorly in controlling type I error inflation, whereas the IEV approach has the best performance in most cases. Additionally, all re-estimation approaches keep statistical power above 80%; in particular, the IEV approach's power, using an adjusted significance level, exceeds 95%. However, the IEV approach may lead to a greater increase in sample size when detecting a smaller effect size. In general, the IEV approach is effective when the effect size is large; otherwise, the AES approach is more suitable for controlling the type I error rate and keeping power above 80% with a more reasonable re-estimated sample size.
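For context, the traditional plug-in re-estimation that the abstract criticizes can be sketched in a few lines: the interim estimates of effect and standard deviation are treated as fixed truths. The function name and default z-quantiles below are illustrative (two-sided alpha = 0.05, 80% power), not taken from the paper; the AES and IEV approaches replace this point value with an interval that reflects interim estimation error.

```python
import math

def reestimate_n_per_arm(delta_hat, sigma_hat,
                         z_alpha=1.959964, z_beta=0.841621):
    """Plug-in per-arm sample size for a two-sample z-test:
    n = 2 * (sigma * (z_alpha + z_beta) / delta)^2, rounded up.
    Treats interim estimates delta_hat and sigma_hat as known constants,
    ignoring their sampling variability."""
    return math.ceil(2 * (sigma_hat * (z_alpha + z_beta) / delta_hat) ** 2)

# Interim estimate: effect of 0.5 SD units
print(reestimate_n_per_arm(delta_hat=0.5, sigma_hat=1.0))  # 63 per arm
```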
Topics: Clinical Trials as Topic; Humans; Research Design; Sample Size
PubMed: 34433225
DOI: 10.1002/sim.9175
Biostatistics (Oxford, England), Jan 2022
We introduce a novel Bayesian estimator for the class proportion in an unlabeled dataset, based on the targeted learning framework. The procedure requires the specification of a prior (and outputs a posterior) only for the target of inference, and yields a tightly concentrated posterior. When the scientific question can be characterized by a low-dimensional parameter functional, this focus on target prior and posterior distributions perfectly aligns with Bayesian subjectivism. We prove a Bernstein-von Mises-type result for our proposed Bayesian procedure, which guarantees that the posterior distribution converges to the distribution of an efficient, asymptotically linear estimator. In particular, the posterior is Gaussian, doubly robust, and efficient in the limit, under the only assumption that certain nuisance parameters are estimated at slower-than-parametric rates. We perform numerical studies illustrating the frequentist properties of the method. We also illustrate their use in a motivating application to estimate the proportion of embolic strokes of undetermined source arising from occult cardiac sources or large-artery atherosclerotic lesions. Though we focus on the motivating example of the proportion of cases in an unlabeled dataset, the procedure is general and can be adapted to estimate any pathwise differentiable parameter in a non-parametric model.
Topics: Bayes Theorem; Humans; Research Design
PubMed: 32529244
DOI: 10.1093/biostatistics/kxaa022
Behavior Research Methods, Jan 2023
Recent insights into problems with common statistical practice in psychology have motivated scientists to consider alternatives to the traditional frequentist approach of comparing p-values to a significance criterion. While these alternatives have worthwhile attributes, Francis (Behavior Research Methods, 40, 1524-1538, 2017) showed that many proposed test statistics for a two-sample t-test are based on precisely the same information in a given data set, and that, for a given sample size, one can convert from any statistic to the others. Here, we show that the same relationship holds for the equivalent of a one-sample t-test. We derive the relationships and provide an online app that performs the computations. A key conclusion of this analysis is that many types of tests are based on the same information, so the choice of approach should reflect the intent of the scientist and the appropriateness of the corresponding inferential framework for that intent.
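The simplest instance of this equivalence is easy to demonstrate: for a fixed sample size n, the one-sample t statistic and Cohen's d are deterministic, invertible functions of each other, so either one carries all the information the other does. A minimal Python sketch (the function names are illustrative, not from the paper or its app):

```python
import math

def one_sample_t(x):
    """One-sample t statistic for H0: population mean = 0."""
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))  # sample SD
    return m / (s / math.sqrt(n))

def cohens_d_from_t(t, n):
    """For fixed n, d = t / sqrt(n)."""
    return t / math.sqrt(n)

def t_from_cohens_d(d, n):
    """The map is invertible: t = d * sqrt(n)."""
    return d * math.sqrt(n)

x = [0.3, -0.1, 0.4, 0.2, 0.5, 0.1]
t = one_sample_t(x)
d = cohens_d_from_t(t, len(x))  # t is recoverable from d and n alone
```

The same one-to-one logic extends (for fixed n) to p-values and to the Bayesian statistics discussed in the paper.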
Topics: Humans; Data Interpretation, Statistical; Research Design; Sample Size; Bayes Theorem
PubMed: 35262898
DOI: 10.3758/s13428-021-01775-3
The Annals of Thoracic Surgery, Mar 2022
BACKGROUND
Whether robotic segmentectomies are advantageous is unclear. We describe our experience with the robot, comparing patient populations and outcomes with video-assisted thoracoscopic surgery (VATS) and open resection.
METHODS
Patients who underwent anatomic segmentectomy from 2004 to 2019 were reviewed. Resection methods were categorized as robotic, VATS, or open. Segmentectomies were categorized as simple or complex. Baseline characteristics and perioperative outcomes were analyzed for 2015 to 2019, after implementation of the Enhanced Recovery After Surgery (ERAS) pathway for all thoracic surgery patients, to minimize confounding from the ERAS protocol.
RESULTS
Since 2004, segmentectomies have increased, including robotic and complex segmentectomies. Of the 222 segmentectomies performed from 2015 to 2019, 77 (35%) were robotic, 40 (18%) VATS, and 105 (47%) open. More complex segmentectomies were performed in the robotic group than with VATS or open resection (45% vs 15% vs 22%; P < .001). Operative time for robotic resections was longer than for VATS and open (205 vs 147 vs 147 minutes; P < .001), but robotic resections had lower blood loss (50 vs 75 vs 100 mL; P < .001), fewer chest tube days (2 vs 2 vs 3 days; P = .004), and shorter lengths of stay (3 vs 3 vs 4 days; P < .001). Perioperative mortality was low in all groups. No robotic segmentectomy was converted to open, compared with 7.5% of VATS cases (P = .038). Prolonged air leak was less frequent for robotic than for open resection (4% vs 13%; P = .038).
CONCLUSIONS
Robotic segmentectomy has increased at our institution, with a concurrent rise in complex segmentectomies. Despite the greater complexity of these procedures, there were no conversions, and perioperative morbidity and mortality were low. Our results suggest that the robotic platform can facilitate the performance of complex anatomic segmentectomies.
Topics: Humans; Lung Neoplasms; Mastectomy, Segmental; Patient Selection; Pneumonectomy; Retrospective Studies; Robotic Surgical Procedures; Thoracic Surgery, Video-Assisted
PubMed: 33838123
DOI: 10.1016/j.athoracsur.2021.03.068
Biometrics, Mar 2022
Comparing areas under the ROC curve (AUCs) is a popular approach to comparing prognostic biomarkers. The aim of this paper is to present an efficient method to control the family-wise error rate when multiple comparisons are performed. We suggest combining the max-t test with the closed testing procedure. We build on previous work on asymptotic results for ROC curves and on general multiple testing methods to efficiently take into account both the correlations between the test statistics and the logical constraints between the null hypotheses. The proposed method results in a uniformly more powerful procedure than both the single-step max-t test and popular stepwise extensions of the Bonferroni procedure, such as Bonferroni-Holm. As demonstrated in this paper, the method can be applied in most usual contexts, including the time-dependent context with right-censored data. We show how the method works in practice through a motivating example in which we compare several psychometric scores for predicting the t-year risk of Alzheimer's disease. The example illustrates several multiple testing settings and demonstrates the advantage of the proposed methods over common alternatives. R code has been made available to facilitate the use of the methods by others.
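The single-step max-t ingredient can be sketched by simulating the joint null distribution of correlated test statistics from a multivariate normal with the estimated correlation matrix; the paper's contribution then layers closed testing on top of this for uniformly greater power. A rough Python illustration (function name, correlation, and simulation size are assumptions for the sketch, not the authors' R implementation):

```python
import numpy as np

def max_t_adjusted_pvalues(t_obs, corr, n_sim=100_000, seed=0):
    """Single-step max-t adjustment: compare each observed |t| to the
    simulated null distribution of max |t| over all comparisons."""
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(np.zeros(len(t_obs)), corr, size=n_sim)
    max_abs = np.abs(sims).max(axis=1)  # null distribution of the max statistic
    return np.array([(max_abs >= abs(t)).mean() for t in t_obs])

# Two correlated AUC-difference statistics (correlation assumed estimated)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
p_adj = max_t_adjusted_pvalues([2.5, 0.4], corr)
```

Because the correlation between the statistics is exploited, the correction is less severe than Bonferroni on the same hypotheses.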
Topics: ROC Curve; Research Design
PubMed: 33207001
DOI: 10.1111/biom.13401
The Journal of Evidence-based Dental..., Jun 2020 (Review)
OBJECTIVES
New methodological approaches, such as the umbrella review, constitute an important pathway for synthesizing the scientific evidence provided from studies with a high level of evidence. This study aims to summarize the results on the effectiveness of temporary anchorage devices (TADs) and the factors that contribute to their success or failure during orthodontic treatment in patients of different age groups and to identify the gaps in knowledge based on analysis of the scientific literature.
METHODS
An umbrella review of systematic reviews and meta-analyses was performed. A quality evaluation and a descriptive analysis of the included studies were conducted. The study protocol was registered at the International Prospective Register of Systematic Reviews (PROSPERO: CRD42018094463).
RESULTS
Seventeen systematic reviews and meta-analyses were considered (10 descriptive and 7 with meta-analysis; 12 of high quality and 5 of moderate quality). Variability was observed in the type of intervention and the type of system (TADs). Most of the studies reported high success rates (≥90%), and just one systematic review indicated a low rate of success (≤56%) for the mini-screws. All the studies discussed several factors related to the success of the TADs. These factors were classified as device-related factors, patient-related factors, procedure-related factors, and orthodontic treatment-related factors. Conceptual and methodological gaps were observed when considering the data analysis, the terminology used, and the orthodontic protocols.
CONCLUSIONS
The results should be analysed cautiously because of several research gaps related to the methodological quality and the high heterogeneity of the original studies and because of the necessity to add several clinical and sociodemographic variables to enrich the data analysis.
Topics: Humans; Meta-Analysis as Topic; Orthodontic Anchorage Procedures; Systematic Reviews as Topic
PubMed: 32473811
DOI: 10.1016/j.jebdp.2020.101402
Biometrics, Mar 2024
Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficient condition, checkable from observed data, that guarantees these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
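The Monte Carlo Fisher randomization test itself is simple to sketch; the expensive step the paper removes is inverting this test over a grid of effect sizes to form a confidence interval. A minimal Python illustration of the test with the difference-in-means statistic (function name and toy data are hypothetical):

```python
import numpy as np

def frt_pvalue(y, treat, n_perm=10_000, seed=0):
    """Monte Carlo Fisher randomization test of the sharp null of no
    treatment effect, using the difference-in-means statistic."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    treat = np.asarray(treat)
    obs = y[treat == 1].mean() - y[treat == 0].mean()
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(treat)   # re-randomize the treatment labels
        stat = y[perm == 1].mean() - y[perm == 0].mean()
        hits += abs(stat) >= abs(obs)
    return (hits + 1) / (n_perm + 1)    # include the observed assignment

# Toy small experiment: 3 treated, 3 control
y = [3.1, 2.8, 3.5, 1.2, 1.0, 1.4]
treat = [1, 1, 1, 0, 0, 0]
p = frt_pvalue(y, treat)
```

With six units there are only 20 possible assignments, of which 2 reach the observed |difference|, so the Monte Carlo p-value lands near 2/20 = 0.1.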
Topics: Confidence Intervals; Computer Simulation; Humans; Biometry; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Randomized Controlled Trials as Topic
PubMed: 38837900
DOI: 10.1093/biomtc/ujae051
Statistics in Medicine, Dec 2022 (Review)
Estimating relationships between multiple incomplete patient measurements requires methods to cope with missing values. Multiple imputation (MI) is one approach to addressing missing data by filling in plausible values for those that are missing. MI procedures can be classified into two broad types: joint modeling (JM) and fully conditional specification (FCS). JM fits a multivariate distribution for the entire set of variables, but it may be complex to define and implement. FCS imputes missing data variable by variable from a set of conditional distributions. In many studies, FCS is easier to define and implement than JM, but it may be based on incompatible conditional models. Imputation methods based on multilevel modeling show improved operating characteristics when imputing longitudinal data, but they can be computationally intensive, especially when imputing multiple variables simultaneously. We review current MI methods for incomplete longitudinal data and their implementation in widely accessible software. Using simulated data from the National Health and Aging Trends Study, we compare their performance for monotone and intermittent missing data patterns. Our simulations demonstrate that in a longitudinal study with a limited number of repeated observations and time-varying variables, FCS-Standard is a computationally efficient imputation method that is accurate and precise for univariate single-level and multilevel regression models. When the analyses comprise multivariate multilevel models, FCS-LMM-latent is a statistically valid procedure with overall more accurate estimates, but it requires more intensive computation. Imputation methods based on generalized linear multilevel models can lead to biased subject-level variance estimates when the statistical analyses involve hierarchical models.
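A toy FCS cycle can be sketched in a few lines: each incomplete variable is regressed on the others, its missing entries are redrawn from the fitted conditional model, and the cycle repeats until the imputations stabilize. The Python sketch below (function name and Gaussian noise model are deliberate simplifications of real FCS software such as R's mice or scikit-learn's IterativeImputer) shows the idea for continuous variables:

```python
import numpy as np

def fcs_impute(X, n_iter=10, seed=0):
    """Toy fully conditional specification: cycle through variables,
    filling each column's missing values from a linear regression on the
    other columns, plus Gaussian noise at the residual SD."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)           # initialize with column means
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid = X[obs, j] - A[obs] @ beta
            sd = resid.std(ddof=A.shape[1]) if obs.sum() > A.shape[1] else 0.0
            X[miss[:, j], j] = (A[miss[:, j]] @ beta
                                + rng.normal(0.0, sd, miss[:, j].sum()))
    return X

rng = np.random.default_rng(1)
Z = rng.normal(size=(50, 3))
Z[:, 1] += Z[:, 0]                      # correlated columns
Z[rng.random((50, 3)) < 0.1] = np.nan   # ~10% missing at random
imputed = fcs_impute(Z)
```

In proper multiple imputation this cycle is run several times with different seeds to produce multiple completed data sets, and the analyses are pooled across them.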
Topics: Humans; Longitudinal Studies; Models, Statistical; Biometry; Research Design; Software; Computer Simulation
PubMed: 36220138
DOI: 10.1002/sim.9592