Pharmaceutical Statistics, 2006
Fisher's least significant difference (LSD) procedure is a two-step testing procedure for pairwise comparisons of several treatment groups. In the first step of the procedure, a global test is performed for the null hypothesis that the expected means of all treatment groups under study are equal. If this global null hypothesis can be rejected at the pre-specified level of significance, then in the second step of the procedure, one is permitted in principle to perform all pairwise comparisons at the same level of significance (although in practice, not all of them may be of primary interest). Fisher's LSD procedure is known to preserve the experimentwise type I error rate at the nominal level of significance, if (and only if) the number of treatment groups is three. The procedure may therefore be applied to phase III clinical trials comparing two doses of an active treatment against placebo in the confirmatory sense (while in this case, no confirmatory comparison has to be performed between the two active treatment groups). The power properties of this approach are examined in the present paper. It is shown that the power of the first-step global test--and therefore the power of the overall procedure--may be substantially lower than the power of the pairwise comparison between the more-favourable active dose group and placebo. Achieving a certain overall power for this comparison with Fisher's LSD procedure--irrespective of the effect size at the less-favourable dose group--may require slightly larger treatment groups than would sizing the study with a simple Bonferroni alpha adjustment. Therefore, if Fisher's LSD procedure is used to avoid an alpha adjustment for phase III clinical trials, the potential loss of power due to the first-step global test should be considered at the planning stage.
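The two-step logic of the procedure can be sketched in a few lines of Python. This is a simplified illustration: the second step here uses plain two-sample t-tests rather than the classical LSD statistic with the pooled ANOVA error variance.

```python
from scipy import stats

def fisher_lsd(groups, alpha=0.05):
    """Two-step Fisher LSD sketch: a global one-way ANOVA F-test, then,
    only if it rejects, all pairwise t-tests at the same alpha."""
    f_stat, p_global = stats.f_oneway(*groups)
    if p_global >= alpha:
        return p_global, []          # stop: global null not rejected
    pairs = []
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            t, p = stats.ttest_ind(groups[i], groups[j])
            pairs.append(((i, j), p, p < alpha))
    return p_global, pairs

# Illustrative data: placebo vs. two active dose groups
placebo = [1.0, 1.2, 0.8, 1.1, 0.9]
low     = [1.5, 1.7, 1.4, 1.6, 1.8]
high    = [2.0, 2.2, 1.9, 2.1, 2.3]
p_global, results = fisher_lsd([placebo, low, high])
```

With three groups, the first-step F-test is what protects the experimentwise error rate; testing all pairs at the nominal level without it would inflate that rate.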
Topics: Analysis of Variance; Biometry; Clinical Trials as Topic; Clinical Trials, Phase III as Topic; Data Interpretation, Statistical; Endpoint Determination; Humans; Models, Statistical; Multivariate Analysis; Placebos; Research; Research Design; Sample Size; Technology, Pharmaceutical
PubMed: 17128424
DOI: 10.1002/pst.210
Human Factors, Sep 2023
OBJECTIVE
In future deep space exploration missions, crew will have to work more autonomously from Earth. Greater crew autonomy will increase dependence on automated systems. This study investigates the performance effects of different strategies to automate procedural work for space exploration operations.
BACKGROUND
The following strategies are investigated for performing procedural work:
• Manual Work: no procedure automation; the crew performs all actions.
• Shared Work: procedure automation performs some actions within a procedure while the crew performs the others.
• Supervised Work: procedure automation performs the procedure actions while the crew supervises the automation.
METHOD
Twenty-seven participants took part in a planetary habitat scenario-based simulation using electronic procedures with automatable actions to investigate the effect of these strategies on situation awareness (SA) and workload. This study used a modification of the Situation Presence Assessment Method to measure SA and the Bedford Workload Scale to measure subjective workload.
RESULTS
Mean response times and accuracy for SA queries show no significant difference among the three strategies. Bedford Workload ratings compared across the three strategies indicate that participants rated their workload as highest in the Manual Work condition, followed by the Shared Work condition, and lowest in the Supervised Work condition.
CONCLUSION
The study hypothesized that increased levels of automation would lead to lower subjective workload and decreased SA. Although no significant difference in SA was observed, subjective workload was lower under the automation strategies. Based on subjective ratings, 93% of participants preferred some form of automation, with 56% preferring the Shared Work automation condition.
Topics: Humans; Workload; Awareness; Task Performance and Analysis; Automation; Computer Simulation
PubMed: 35089111
DOI: 10.1177/00187208211060978
Journal of Pediatric Surgery, Jul 2020
Review
BACKGROUND/PURPOSE
Determining the appropriate sample size is an integral component of any well-designed research study, grant application, or scientific manuscript. Surgeons intuitively understand the concept of statistical power but often have limited knowledge of how to perform the calculations correctly. Our goal is to provide a strategy for pediatric surgeons to use when planning a study to determine the sample sizes required for detecting a clinically meaningful effect, which is important for interpreting and validating their results.
METHODS
We present a general 5-step approach for performing a sample size justification and statistical power analysis, and illustrate this approach using several surgical research examples. The 5 steps are: 1) Define the primary outcome of interest, 2) Define the magnitude of the effect or effect size and power desired, 3) Determine the appropriate statistics and statistical test that will be considered, 4) Perform the calculations to estimate the required sample size using software or a reference table, 5) Write the formal power and sample size statement for the manuscript, grant application, or project proposal.
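As an illustration of steps 2 through 4, the widely used normal-approximation formula for a two-sided comparison of two means, n per group = 2((z_{1-alpha/2} + z_{1-beta})/d)^2 with d the standardized effect size, can be computed directly; exact t-based software gives slightly larger values.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided
    two-sample comparison of means with standardized effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta  = norm.ppf(power)           # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium standardized effect (Cohen's d = 0.5) at 80% power:
print(n_per_group(0.5))   # 63 per group under the normal approximation
```

Reference tables or software (step 4) refine this with the t distribution, which is why published answers for d = 0.5 are often 64 per group rather than 63.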
CONCLUSIONS
Understanding sample size considerations and statistical power in the surgical research community will improve the quality of published articles. This primer can be used by pediatric surgeons in the process of determining the appropriate sample sizes for detecting a clinically meaningful effect with sufficient statistical power. Virtually all research studies in pediatric surgery should include a justification of sample size based on a power calculation as this leads to more meaningful inferences from the data and analysis.
TYPE OF STUDY
Review article.
LEVEL OF EVIDENCE
N/A.
Topics: General Surgery; Humans; Pediatrics; Research Design; Sample Size; Surgeons
PubMed: 31155391
DOI: 10.1016/j.jpedsurg.2019.05.007
Arthroscopy : the Journal of..., Dec 2006
Successful outcomes of hip arthroscopy are most clearly dependent on selecting appropriate patients. The indications are numerous and continue to evolve. These indications are summarized in this report. The anatomic architecture of the hip region imposes unique challenges to performing this procedure. As a surgeon's experience evolves, so will his or her indications for this operation. It is imperative to be knowledgeable about the technique, to exercise care with the procedure, and to be certain that it is being performed for proper reasons.
Topics: Arthroplasty, Replacement, Hip; Hip Joint; Humans; Patient Education as Topic; Patient Selection
PubMed: 17165215
DOI: 10.1016/j.arthro.2006.08.021
BMC Medical Research Methodology, Aug 2021
Randomized Controlled Trial
BACKGROUND
Randomization is the foundation of any clinical trial involving treatment comparison. It helps mitigate selection bias, promotes similarity of treatment groups with respect to important known and unknown confounders, and contributes to the validity of statistical tests. Various restricted randomization procedures with different probabilistic structures and different statistical properties are available. The goal of this paper is to present a systematic roadmap for the choice and application of a restricted randomization procedure in a clinical trial.
METHODS
We survey available restricted randomization procedures for sequential allocation of subjects in a randomized, comparative, parallel group clinical trial with equal (1:1) allocation. We explore statistical properties of these procedures, including balance/randomness tradeoff, type I error rate and power. We perform head-to-head comparisons of different procedures through simulation under various experimental scenarios, including cases when common model assumptions are violated. We also provide some real-life clinical trial examples to illustrate the thinking process for selecting a randomization procedure for implementation in practice.
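One concrete example of a restricted randomization procedure targeting 1:1 allocation is permuted-block randomization, sketched below (an illustrative sketch for intuition, not a procedure taken from the paper).

```python
import random

def permuted_blocks(n_subjects, block_size=4, seed=42):
    """Permuted-block randomization for 1:1 allocation: each block holds
    equal numbers of A and B in random order, so imbalance within the
    sequence never exceeds block_size // 2."""
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_subjects:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

alloc = permuted_blocks(20)
print(alloc.count("A"), alloc.count("B"))  # 10 10
```

The block size controls the balance/randomness tradeoff discussed in the paper: small blocks force tight balance but make upcoming assignments more predictable, which is one route to the selection bias examined in the simulations.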
RESULTS
Restricted randomization procedures targeting 1:1 allocation vary in the degree of balance/randomness they induce, and more importantly, they vary in terms of validity and efficiency of statistical inference when common model assumptions are violated (e.g. when outcomes are affected by a linear time trend; measurement error distribution is misspecified; or selection bias is introduced in the experiment). Some procedures are more robust than others. Covariate-adjusted analysis may be essential to ensure validity of the results. Special considerations are required when selecting a randomization procedure for a clinical trial with very small sample size.
CONCLUSIONS
The choice of randomization design, data analytic technique (parametric or nonparametric), and analysis strategy (randomization-based or population model-based) are all very important considerations. Randomization-based tests are robust and valid alternatives to likelihood-based tests and should be considered more frequently by clinical investigators.
Topics: Computer Simulation; Humans; Likelihood Functions; Random Allocation; Sample Size; Selection Bias
PubMed: 34399696
DOI: 10.1186/s12874-021-01303-z
Anesthesia and Analgesia, Apr 2010
BACKGROUND
Predictive variability of operating room (OR) times influences decision making on the day of surgery including when to start add-on cases, whether to move a case from one OR to another, and where to assign relief staff. One contributor to predictive variability is process variability, which arises among cases of the same procedure(s). Another contributor is parameter uncertainty, which is caused by small sample sizes of historical data.
METHODS
Process variability was quantified using absolute percentage errors of surgeons' bias-corrected estimates of OR time. The influence of procedure classification on process variability was studied using a dataset of 61,353 cases, each with 1 to 5 scheduled and actual Current Procedural Terminology (CPT) codes (i.e., a standardized vocabulary). Parameter uncertainty's sensitivity to sample size was quantified by studying ratios of 90% prediction bounds to medians, using a dataset of 65,661 cases that had previously been used to validate a Bayesian method for calculating 90% prediction bounds from combinations of surgeons' scheduled estimates and historical OR times.
RESULTS
(1) Process variability differed significantly among 11 groups of surgical specialty and case urgency (P < 0.0001). For example, absolute percentage errors exceeded the overall median of 22% for 57% of urgent spine surgery cases versus 42% of elective spine surgery cases. (2) Process variability was not increased when scheduled and actual CPTs differed (P = 0.23 without and P = 0.47 with stratification based on the 11 groups), because most differences represented known (planned) options inherent to procedures. (3) Process variability was not associated with incidence of procedures (P = 0.79), after excluding cataract surgery, a procedure with high relative variability. (4) Parameter uncertainty from uncommon procedures (0-2 historical cases) accounted for essentially all of the uncertainty in decisions dependent on estimates of OR times. The Bayesian method moderated the effect of small sample sizes on uncertainty in estimates of OR times. In contrast, from prior work, the use of broad categories of procedures reduces parameter uncertainty but at the expense of increased process variability.
CONCLUSIONS
For procedures with few historical data, the Bayesian method allows for effective case-duration prediction, permitting use of detailed procedure descriptions. Although fine resolution of scheduled procedures increases the chance of the performed procedure(s) differing from the scheduled procedure(s), this does not increase process variability. Future studies need both to address differences in process variability among specialties and to accept the limitation that findings from one specialty may not apply to others.
Topics: Bayes Theorem; Databases, Factual; Decision Making; Operating Rooms; Retrospective Studies; Sample Size; Surgical Procedures, Operative; Time Factors; Uncertainty
PubMed: 20357155
DOI: 10.1213/ANE.0b013e3181d3e79d
Vascular, 2006
Review
Endovascular repair of infrarenal abdominal aortic aneurysms (EVAR) has become a widely accepted treatment modality. The conventional approach to EVAR involves bilateral groin incisions to expose the femoral arteries followed by introducer sheath placement, which is typically performed with the use of general or epidural anesthesia. As technology trends toward less invasive methods and sheath sizes become smaller, the use of a total percutaneous approach to endovascular repair of aortic pathology is becoming more common. In this review, we present a brief history of percutaneous closure devices for common femoral artery access, factors important in patient selection, the technique of performing a percutaneous EVAR procedure, early and late complications, and overall outcomes of percutaneous approaches for the endovascular treatment of aortic pathology.
Topics: Aortic Aneurysm, Abdominal; Aortic Aneurysm, Thoracic; Blood Vessel Prosthesis Implantation; Humans; Minimally Invasive Surgical Procedures; Patient Selection; Postoperative Complications; Treatment Outcome
PubMed: 17038297
DOI: 10.2310/6670.2006.00051
Repetitive transcranial magnetic stimulation as a treatment for chronic tinnitus: a critical review. Otology & Neurotology : Official..., Feb 2013
Review
OBJECTIVE
Because chronic tinnitus is a condition that negatively impacts the quality of life for millions of people worldwide, a safe and effective treatment for tinnitus has been sought for decades. However, a true "cure" for the most common causes of tinnitus remains elusive. Repetitive transcranial magnetic stimulation (rTMS), a noninvasive procedure, has shown potential for reducing patients' perception or severity of tinnitus. This article provides background information about rTMS and reviews studies that investigated rTMS as a treatment for chronic tinnitus.
DATA SOURCES
PubMed and Medline databases (National Center for Biotechnology Information, U.S. National Library of Medicine) were searched for the terms repetitive transcranial magnetic stimulation, tinnitus, TMS, and rTMS in articles published from 1980 to 2012.
STUDY SELECTION
Articles included in this review were selected to represent a sampling of rTMS methodologies that have been used with tinnitus patients.
DATA EXTRACTION
Data extraction included sample size, TMS stimulation frequency, TMS stimulation intensity, number of pulses administered per session, number of TMS sessions, and method of tinnitus assessment.
DATA SYNTHESIS
Because of the heterogeneity of the studies reviewed, most of which had small populations of subjects, it was not appropriate to perform a meta-analysis. A systematic review of the literature was conducted to summarize and critique published research results.
CONCLUSION
Although optimism for the clinical use of rTMS as an effective treatment for tinnitus remains high among many researchers, clinicians, and patients, several key questions and procedural issues remain unresolved. Suggestions for improving rTMS research protocols are described and discussed.
Topics: Chronic Disease; Electromagnetic Fields; Humans; Magnetic Resonance Imaging; Patient Selection; Placebo Effect; Research Design; Sample Size; Tinnitus; Transcranial Magnetic Stimulation; Treatment Outcome
PubMed: 23444467
DOI: 10.1097/mao.0b013e31827b4d46
Briefings in Bioinformatics, Jan 2022
The growing availability of data in medical fields could help improve the performance of machine learning methods. However, with healthcare data, using multi-institutional datasets is challenging due to privacy and security concerns, so privacy-preserving machine learning methods are required. Thus, we use federated learning to train a shared global model on a central server that holds no private data, while each client keeps its sensitive data within its own institution. The scattered training data are thereby connected to improve model performance while preserving data privacy. However, in the federated training procedure, data errors or noise can reduce learning performance. We therefore introduce self-paced learning, which can effectively select high-confidence samples and drop noisy samples, improving the performance of the trained model and reducing the risk of data privacy leakage. We propose federated self-paced learning (FedSPL), which combines the advantages of federated learning and self-paced learning. The proposed FedSPL model was evaluated on gene expression data distributed across different institutions, where privacy concerns must be considered. The results demonstrate that the proposed FedSPL model is secure, i.e. it does not expose original records to other parties, and that the computational overhead during training is acceptable. Compared with learning methods based on the local data of each party alone, the proposed model can significantly improve the F1-score, by approximately 4.3%. We believe that the proposed method has the potential to benefit clinicians in gene selection and disease prognosis.
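The two ingredients can be illustrated schematically: a federated-averaging server step and a self-paced sample-selection step. This is a toy sketch under generic assumptions, not the authors' FedSPL implementation; the weights, losses, and threshold below are hypothetical.

```python
def federated_average(client_weights, client_sizes):
    """One server round sketch: average client model parameters weighted
    by local sample counts; raw data never leaves the clients (only
    parameters are shared)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
            for k in range(dim)]

def self_paced_select(losses, threshold):
    """Self-paced step sketch: keep only high-confidence (low-loss)
    sample indices; noisy samples above the threshold are dropped."""
    return [i for i, loss in enumerate(losses) if loss <= threshold]

# Two hypothetical institutions with 100 and 300 local samples:
global_w = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_w)          # [2.5, 3.5]
keep = self_paced_select([0.1, 2.3, 0.4], threshold=1.0)
print(keep)              # [0, 2]
```

In a self-paced scheme the threshold is typically relaxed over training rounds, so the model starts from easy, high-confidence samples and gradually admits harder ones.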
Topics: Humans; Machine Learning; Privacy; Research Design
PubMed: 34874995
DOI: 10.1093/bib/bbab498
BMJ Quality & Safety, Apr 2016
Meta-Analysis Review
IMPORTANCE
Optimal approaches to teaching bedside procedures are unknown.
OBJECTIVE
To identify effective instructional approaches in procedural training.
DATA SOURCES
We searched PubMed, EMBASE, Web of Science and Cochrane Library through December 2014.
STUDY SELECTION
We included research articles that addressed procedural training among physicians or physician trainees for 12 bedside procedures. Two independent reviewers screened 9312 citations and identified 344 articles for full-text review.
DATA EXTRACTION AND SYNTHESIS
Two independent reviewers extracted data from full-text articles.
MAIN OUTCOMES AND MEASURES
We included measurements as classified by translational science outcomes T1 (testing settings), T2 (patient care practices) and T3 (patient/public health outcomes). Due to incomplete reporting, we post hoc classified study outcomes as 'negative' or 'positive' based on statistical significance. We performed meta-analyses of outcomes on the subset of studies sharing similar outcomes.
RESULTS
We found 161 eligible studies (44 randomised controlled trials (RCTs), 34 non-RCTs and 83 uncontrolled trials). Simulation was the most frequently published educational mode (78%). Our post hoc classification showed that studies involving simulation, competency-based approaches and RCTs had higher frequencies of T2/T3 outcomes. Meta-analyses showed that simulation (risk ratio (RR) 1.54 vs 0.55 for studies with vs without simulation, p=0.013) and competency-based approaches (RR 3.17 vs 0.89, p<0.001) were effective forms of training.
CONCLUSIONS AND RELEVANCE
This systematic review of bedside procedural skills demonstrates that the current literature is heterogeneous and of varying quality and rigour. Evidence is strongest for the use of simulation and competency-based paradigms in teaching procedures, and these approaches should be the mainstay of programmes that train physicians to perform procedures. Further research should clarify differences among instructional methods (eg, forms of hands-on training) rather than among educational modes (eg, lecture vs simulation).
Topics: Clinical Competence; Curriculum; Female; Humans; Male; Methods; Patient Care; Point-of-Care Testing; Practice Guidelines as Topic; Randomized Controlled Trials as Topic
PubMed: 26543067
DOI: 10.1136/bmjqs-2014-003518