BMC Medical Research Methodology, Nov 2021
BACKGROUND
The natural indirect effect (NIE) and mediation proportion (MP) are two measures of primary interest in mediation analysis. The standard approach for mediation analysis is through the product method, which involves a model for the outcome conditional on the mediator and exposure and another model describing the exposure-mediator relationship. The purpose of this article is to comprehensively develop and investigate the finite-sample performance of NIE and MP estimators via the product method.
METHODS
For four common data types involving a continuous/binary outcome and a continuous/binary mediator, we propose closed-form interval estimators for the NIE and MP via the multivariate delta method and evaluate their empirical performance relative to the bootstrap approach. In addition, we observe that the rare outcome assumption is frequently invoked to approximate the NIE and MP with a binary outcome, although this approximation may lead to non-negligible bias when the outcome is common. We therefore introduce exact expressions for the NIE and MP with a binary outcome that do not require the rare outcome assumption and compare their performance with that of the approximate estimators.
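As a concrete illustration of the product method for the simplest case (continuous mediator, continuous outcome, linear models without exposure-mediator interaction), a minimal sketch of the NIE point estimate with a first-order delta-method (Sobel-type) confidence interval might look like the following; the function name and simulation settings are illustrative, not taken from the paper or the mediateP package.

```python
import numpy as np

def nie_product_method(x, m, y):
    """Product-method NIE for a continuous mediator and outcome.

    Mediator model:  M = a0 + a*X + e1
    Outcome model:   Y = b0 + c*X + b*M + e2
    NIE = a*b; delta-method variance = a^2*var(b) + b^2*var(a).
    """
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)

    # Mediator model: OLS of M on [1, X]
    Xd = np.column_stack([np.ones(n), x])
    coef_m, *_ = np.linalg.lstsq(Xd, m, rcond=None)
    a = coef_m[1]
    resid_m = m - Xd @ coef_m
    sigma2_m = resid_m @ resid_m / (n - 2)
    var_a = sigma2_m * np.linalg.inv(Xd.T @ Xd)[1, 1]

    # Outcome model: OLS of Y on [1, X, M]
    Xo = np.column_stack([np.ones(n), x, m])
    coef_y, *_ = np.linalg.lstsq(Xo, y, rcond=None)
    b = coef_y[2]
    resid_y = y - Xo @ coef_y
    sigma2_y = resid_y @ resid_y / (n - 3)
    var_b = sigma2_y * np.linalg.inv(Xo.T @ Xo)[2, 2]

    nie = a * b
    se = np.sqrt(a**2 * var_b + b**2 * var_a)  # first-order delta method
    return nie, (nie - 1.96 * se, nie + 1.96 * se)
```

The binary-outcome and binary-mediator cases in the paper replace these linear fits with logistic models and correspondingly more involved delta-method variances.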
RESULTS
Simulation studies suggest that the proposed interval estimator provides satisfactory coverage when the sample size is ≥500 for the scenarios with a continuous outcome, and when the sample size is ≥20,000 with ≥500 cases for the scenarios with a binary outcome. In the binary outcome scenarios, the approximate estimators based on the rare outcome assumption worked well when the outcome prevalence was less than 5% but could lead to substantial bias when the outcome is common; in contrast, the exact estimators performed well under all outcome prevalences considered.
CONCLUSIONS
Under sample sizes commonly encountered in epidemiology and public health research, the proposed interval estimator is valid for constructing confidence intervals. For a binary outcome, the exact estimator without the rare outcome assumption is more robust and stable for estimating the NIE and MP. An R package, mediateP, implements the point and variance estimation methods discussed in this paper.
Topics: Bias; Computer Simulation; Humans; Models, Statistical; Research Design; Sample Size
PubMed: 34800985
DOI: 10.1186/s12874-021-01425-4
BMJ Open, Oct 2023
INTRODUCTION
Exposure of pregnant women and newborns to secondhand smoke (SHS) can lead to adverse maternal and neonatal health outcomes. Among expectant and new fathers, who are the main source of SHS exposure for pregnant women, new mothers and babies, smoking rates remain high. A partner's pregnancy potentially constitutes a critical period where expectant and new fathers are motivated to quit smoking. However, there is no consensus on the optimal form and delivery of smoking cessation and relapse-prevention interventions. We present a systematic review and network meta-analysis protocol that aims to synthesise and evaluate the effectiveness of smoking cessation and relapse-prevention interventions tailored for this population.
METHODS AND ANALYSIS
To identify relevant studies, we will conduct a comprehensive search, in English and Chinese, of 10 electronic databases. The review will include randomised and quasi-randomised controlled trials that compare behavioural interventions (tailored and non-tailored), with or without the addition of pharmacotherapy, against usual care or a minimal or placebo control for assisting expectant and new fathers to quit smoking and preventing smoking relapse. The primary outcome of interest is self-reported and/or biochemically verified smoking abstinence at ≥1-month follow-up. Two reviewers will independently screen, select and extract relevant studies, and perform a quality assessment. Disagreements will be resolved by consensus or third-party adjudication. The Cochrane Risk of Bias tool V.2 will be used to assess the risk of bias in the included studies. We will synthesise the results of the systematic review through pooled quantitative analyses using a network meta-analysis. Sensitivity and subgroup analyses will be performed.
ETHICS AND DISSEMINATION
Ethical approval is not required for this systematic review of published data. The findings will be disseminated via peer-reviewed publication.
PROSPERO REGISTRATION NUMBER
CRD42022340617.
Topics: Humans; Female; Infant, Newborn; Pregnancy; Male; Smoking Cessation; Network Meta-Analysis; Systematic Reviews as Topic; Pregnant Women; Fathers; Meta-Analysis as Topic
PubMed: 37802607
DOI: 10.1136/bmjopen-2023-071745
NeuroImage, Aug 2023
Cognitive neuroscientists have been grappling with two related experimental design problems. First, the complexity of neuroimaging data (e.g. often hundreds of thousands of correlated measurements) and analysis pipelines demands bespoke, non-parametric statistical tests for valid inference, and these tests often lack an agreed-upon method for performing a priori power analyses. Thus, sample size determination for neuroimaging studies is often arbitrary or inferred from other putatively but questionably similar studies, which can result in underpowered designs - undermining the efficacy of neuroimaging research. Second, when meta-analyses estimate the sample sizes required to obtain reasonable statistical power, estimated sample sizes can be prohibitively large given the resource constraints of many labs. We propose the use of sequential analyses to partially address both of these problems. Sequential study designs - in which the data is analyzed at interim points during data collection and data collection can be stopped if the planned test statistic satisfies a stopping rule specified a priori - are common in the clinical trial literature, due to the efficiency gains they afford over fixed-sample designs. However, the corrections used to control false positive rates in existing approaches to sequential testing rely on parametric assumptions that are often violated in neuroimaging settings. We introduce a general permutation scheme that allows sequential designs to be used with arbitrary test statistics. By simulation, we show that this scheme controls the false positive rate across multiple interim analyses. Then, performing power analyses for seven evoked response effects seen in the EEG literature, we show that this sequential analysis approach can substantially outperform fixed-sample approaches (i.e. require fewer subjects, on average, to detect a true effect) when study designs are sufficiently well-powered. 
To facilitate the adoption of this methodology, we provide a Python package "niseq" with sequential implementations of common tests used for neuroimaging: cluster-based permutation tests, threshold-free cluster enhancement, t-max, F-max, and the network-based statistic with tutorial examples using EEG and fMRI data.
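As a toy illustration of the general idea (not the niseq implementation), a permutation scheme for a sequential two-sample test can control the false positive rate across interim looks by building the null distribution from the maximum statistic over all looks, with a single label permutation applied consistently to every look. All names and settings below are assumptions for the sketch.

```python
import numpy as np

def sequential_perm_test(x, labels, looks, n_perm=1000, alpha=0.05, seed=0):
    """Toy sequential two-sample permutation test.

    At each interim look (a sample-size cutoff), the observed statistic is
    the absolute difference in group means on the data accumulated so far.
    The permutation null is the max of that statistic across all looks,
    computed with one label permutation applied consistently to every look,
    which controls the familywise false-positive rate over looks.
    """
    rng = np.random.default_rng(seed)
    x, labels = np.asarray(x, float), np.asarray(labels)

    def stat(data, lab):
        return abs(data[lab == 1].mean() - data[lab == 0].mean())

    # Null distribution of the max statistic over looks
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(labels)
        null_max[i] = max(stat(x[:n], perm[:n]) for n in looks)
    crit = np.quantile(null_max, 1 - alpha)

    # Monitor: stop at the first look whose observed statistic exceeds crit
    for n in looks:
        if stat(x[:n], labels[:n]) > crit:
            return True, n  # rejected, with the sample size at stopping
    return False, looks[-1]
```

The efficiency gain the abstract describes comes from the early-stopping branch: when the effect is real and sizeable, data collection can end at the first look rather than the planned maximum. The niseq package generalizes this idea to arbitrary neuroimaging test statistics such as cluster-based permutation tests and TFCE.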
Topics: Humans; Cognitive Neuroscience; Research Design; Sample Size; Magnetic Resonance Imaging; Neuroimaging
PubMed: 37348624
DOI: 10.1016/j.neuroimage.2023.120232
Hospital Practice (1995), Dec 2021
BACKGROUND
Mounting literature describes increased procedure volume and improvement in procedural skills following implementation of procedural curricula and standardized rotations, generally requiring at least two weeks and incorporating dedicated lecture and didactic efforts. It is unknown whether shorter rotations that feature self-directed curricula can achieve similar outcomes.
METHODS
House staff participated in a one-week procedure rotation that coincided with preexisting non-clinical blocks ('jeopardy'). It provided an online curriculum as well as opportunities to perform procedures under interprofessional supervision. Inpatient procedure volumes were tallied before and after implementation of the rotation. During the first year of the rotation (academic year 2013-2014), house staff completed a knowledge-based quiz and a Likert-based survey (range 1-5) addressing confidence in performing procedures and satisfaction with procedural training.
RESULTS
Ninety-five of 99 house staff participated in the intervention (96% response rate). The total number of procedures performed by the Division of Hospital Medicine increased from an average of 74 per year over the four years prior to the introduction of the rotation to 291 per year during the third year of the rotation. The knowledge-based quiz score improved from a pre-intervention mean of 50% to a post-intervention mean of 61% (P = 0.020). Confidence in performing procedures improved from a pre-intervention mean of 2.37 to a post-intervention mean of 2.59 (P < 0.001). Satisfaction with procedural training improved from a pre-intervention mean of 2.48 to a post-intervention mean of 2.69 (P < 0.001).
CONCLUSION
A one-week procedure rotation with a self-directed curriculum was introduced into the curriculum of an internal medicine residency program and was associated with increased procedure volume and sustained improvement in house staff knowledge, confidence, and satisfaction with procedural training.
Topics: Attitude of Health Personnel; Clinical Competence; Curriculum; Educational Measurement; Humans; Internal Medicine; Internship and Residency; Quality Improvement
PubMed: 34291702
DOI: 10.1080/21548331.2021.1959747
Current Opinion in Organ Transplantation, Apr 2023
Review
PURPOSE OF REVIEW
Combined heart and liver transplantation (CHLT) is an uncommon but increasingly performed procedure with rising need as the population who has undergone Fontan palliation for single ventricle physiology grows. This article reviews the current literature to summarize what is known about patient selection and outcomes and highlights the questions that remain.
RECENT FINDINGS
Congenital heart disease (CHD) with Fontan-associated liver disease (FALD) has surpassed noncongenital heart disease as the most common indication for CHLT. In patients with failing Fontan physiology, accurate assessment of recoverability of liver injury remains challenging and requires multifaceted evaluation to determine who would benefit from isolated versus dual organ transplantation. Patient survival has improved over time without significant differences between those with and without a diagnosis of CHD. En bloc surgical technique and best use of intraoperative mechanical circulatory support are topics of interest as the field continues to evolve.
SUMMARY
A more refined understanding of appropriate patient selection and indication-specific outcomes will develop as we gain more experience with this complex operation and perform prospective, randomized studies.
Topics: Humans; Liver Transplantation; Patient Selection; Prospective Studies; Heart Transplantation; Heart Defects, Congenital; Retrospective Studies
PubMed: 36454232
DOI: 10.1097/MOT.0000000000001041
Journal of the American College of..., Dec 2020
BACKGROUND
Emergency physicians must maintain procedural skills, but clinical opportunities may be insufficient. We sought to determine how often practicing emergency physicians in academic, community and freestanding emergency departments (EDs) perform 4 procedures: central venous catheterization (CVC), tube thoracostomy, tracheal intubation, and lumbar puncture (LP).
METHODS
This was a retrospective study evaluating emergency physician procedural performance over a 12-month period. We collected data from the electronic records of 18 EDs in one healthcare system. The study EDs included higher and lower volume, academic, community and freestanding, and trauma and non-trauma centers. The main outcome measures were median number of procedures performed. We examined differences in procedural performance by physician years in practice, facility type, and trauma status.
RESULTS
Over 12 months, 182 emergency physicians performed 1582 of 2805 procedures (56%) and supervised a resident, nurse practitioner, or physician assistant in an additional 1223 of the procedures (43%). Median (interquartile range) physician performance for each procedure was CVC 0 [0, 2], tube thoracostomy 0 [0, 0], tracheal intubation 3 [0.25, 8], and LP 0 [0, 2]. The percentage of emergency physicians who did not perform at least one of each procedure during the 1-year time frame ranged from 25.3% (tracheal intubation) to 76.4% (tube thoracostomy). Physicians who work at high-volume EDs (>50,000 visits per year) performed nearly twice as many tracheal intubations, CVCs, and LPs as those at low-volume or freestanding EDs when normalized per 1000 visits. Years out of training were inversely related to the total number of procedures performed. Emergency physicians at trauma centers performed almost 3 times as many tracheal intubations and almost 4 times as many CVCs as those at non-trauma centers.
CONCLUSION
In a large healthcare system, regardless of ED type, emergency physicians infrequently performed the 4 procedures studied. Physicians at high-volume EDs and trauma centers, as well as recent graduates, performed more procedures. Our study adds to a growing body of research suggesting that clinical frequency alone may be insufficient for all emergency physicians to maintain competency.
PubMed: 33392575
DOI: 10.1002/emp2.12238
PloS One, 2023
Various methods are available to determine optimal cutpoints for diagnostic measures. Unfortunately, many authors fail to report the precision at which these optimal cutpoints are estimated and use sample sizes that are not suitable to achieve adequate precision. The aim of the present study is to evaluate methods for estimating the variance of cutpoint estimates from published descriptive statistics ('post hoc') and to discuss sample size planning for estimating cutpoints. We performed a simulation study using widely used methods to optimize the Youden index (the empirical, normal, and transformed normal methods) and three methods to determine confidence intervals (the delta method, the parametric bootstrap, and the nonparametric bootstrap). We found that both the delta method and the parametric bootstrap are suitable for post hoc calculation of confidence intervals, depending on the sample size, the distribution of marker values, and the correctness of model assumptions. On average, the parametric bootstrap in combination with normal-theory-based cutpoint estimation has the best coverage. The delta method performs very well for normally distributed data, except in small samples, and is computationally more efficient. Obviously, not every combination of distributions, cutpoint optimization methods, and optimized metrics can be simulated, and much of the literature is concerned specifically with cutpoints and confidence intervals for the Youden index. This complicates sample size planning for studies that estimate optimal cutpoints. As a practical tool, we introduce a web application that allows users to run simulations of the width and coverage of confidence intervals using the percentile bootstrap with various distributions and cutpoint optimization methods.
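A minimal sketch of one combination studied here, the empirical Youden-index cutpoint paired with a nonparametric percentile-bootstrap confidence interval, assuming higher marker values indicate disease (function names and settings are illustrative):

```python
import numpy as np

def youden_cutpoint(neg, pos):
    """Empirical cutpoint maximizing the Youden index J = sens + spec - 1,
    scanning candidates at the observed marker values
    (higher marker values taken to indicate disease)."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    cands = np.unique(np.concatenate([neg, pos]))
    sens = np.array([(pos >= c).mean() for c in cands])
    spec = np.array([(neg < c).mean() for c in cands])
    return cands[np.argmax(sens + spec)]

def bootstrap_ci(neg, pos, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric percentile-bootstrap CI for the Youden-optimal cutpoint:
    resample cases and controls separately, re-estimate the cutpoint."""
    rng = np.random.default_rng(seed)
    est = np.array([
        youden_cutpoint(rng.choice(neg, size=len(neg), replace=True),
                        rng.choice(pos, size=len(pos), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])
```

The width of the resulting interval is exactly the quantity the abstract argues should drive sample size planning: a narrow interval requires far more subjects than a merely "significant" cutpoint.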
Topics: Sample Size; Confidence Intervals; Computer Simulation; Software
PubMed: 36595525
DOI: 10.1371/journal.pone.0279693
Therapeutic Innovation & Regulatory..., Mar 2023
When simultaneous comparisons are performed, a procedure must be employed to control the overall significance level (i.e., the Type I error rate). Hochberg's stepwise testing procedure is often used for this purpose; here we address determination of the sample size needed to achieve a specified power for two pairwise comparisons when observations follow a normal distribution. Three different scenarios are considered: subsets defined by a baseline criterion, two treatments compared to a control, and one set of subjects nested within the other. The solutions for these three scenarios differ and are examined. Sample sizes for differences in success probabilities under binomial distributions are presented using the asymptotic normal approximation. The sample sizes and power using Hochberg's procedure are compared to the corresponding results using the Bonferroni approach.
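Hochberg's step-up procedure itself is simple to state: order the p-values from largest to smallest and compare the i-th largest with alpha/i; at the first sorted p-value that passes, reject it and every hypothesis with a smaller p-value. A minimal sketch (illustrative, not code from the paper):

```python
def hochberg(pvals, alpha=0.05):
    """Hochberg's step-up multiple testing procedure.

    Sort p-values in decreasing order and test the i-th largest
    (i = 1, 2, ...) against alpha / i. Once any sorted p-value passes,
    it and all smaller p-values are rejected.
    Returns rejection decisions in the original order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):  # rank 1 = largest p-value
        if pvals[idx] <= alpha / rank:
            # reject this hypothesis and every one with a smaller p-value
            for j in order[rank - 1:]:
                reject[j] = True
            break
    return reject
```

For example, with p-values 0.04 and 0.03 at alpha = 0.05, Hochberg rejects both (the largest, 0.04, is below 0.05/1), whereas Bonferroni at 0.025 per test rejects neither; this extra power is what drives the sample size savings over Bonferroni that the abstract compares.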
Topics: Humans; Sample Size; Research Design
PubMed: 36280651
DOI: 10.1007/s43441-022-00468-z
The American Journal of Gastroenterology, Feb 2020
Observational Study
INTRODUCTION
Formative colonoscopy direct observation of procedural skills (DOPS) assessments were updated in 2016 and incorporated into UK training but lack validity evidence. We aimed to appraise the validity of DOPS assessments, benchmark performance, and evaluate competency development during training in diagnostic colonoscopy.
METHODS
This prospective national study identified colonoscopy DOPS submitted over an 18-month period to the UK training e-portfolio. Generalizability analyses were conducted to evaluate internal structure validity and reliability. Benchmarking was performed using receiver operating characteristic (ROC) analyses. Learning curves for DOPS items and domains were studied, and multivariable analyses were performed to identify predictors of DOPS competency.
RESULTS
Across 279 training units, 10,749 DOPS submitted for 1,199 trainees were analyzed. The acceptable reliability threshold (G > 0.70) was achieved with 3 assessors performing 2 DOPS each. DOPS competency rates correlated with the unassisted cecal intubation rate (rho 0.404, P < 0.001). Demonstrating competency in 90% of assessed items provided optimal sensitivity (90.2%) and specificity (87.2%) for benchmarking overall DOPS competence. This threshold was attained in the following order: "preprocedure" (50-99 procedures), "endoscopic nontechnical skills" and "postprocedure" (150-199), "management" (200-249), and "procedure" (250-299) domains. At the item level, competency in "proactive problem solving" (rho 0.787) and "loop management" (rho 0.780) correlated most strongly with the overall DOPS rating (P < 0.001) and were the last to develop. Lifetime procedure count, DOPS count, trainer specialty, easier case difficulty, and higher cecal intubation rate were significant multivariable predictors of DOPS competence.
DISCUSSION
This study establishes milestones for competency acquisition during colonoscopy training and provides novel validity and reliability evidence to support colonoscopy DOPS as a competency assessment tool.
Topics: Clinical Competence; Colonoscopy; Gastroenterology; General Surgery; Humans; Nurse Specialists; Observation; Reproducibility of Results; United Kingdom
PubMed: 31738285
DOI: 10.14309/ajg.0000000000000426
Briefings in Bioinformatics, Jan 2022
The growing availability of data in medical fields could help improve the performance of machine learning methods. However, with healthcare data, using multi-institutional datasets is challenging due to privacy and security concerns, so privacy-preserving machine learning methods are required. We therefore use federated learning, in which a central server trains a shared global model without holding any private data, and each client keeps its sensitive data within its own institution. The scattered training data are thus connected to improve model performance while preserving data privacy. However, in the federated training procedure, data errors or noise can reduce learning performance. We therefore introduce self-paced learning, which can effectively select high-confidence samples and drop highly noisy samples, improving the performance of the trained model and reducing the risk of data privacy leakage. We propose federated self-paced learning (FedSPL), which combines the advantages of federated learning and self-paced learning. The proposed FedSPL model was evaluated on gene expression data distributed across institutions where privacy concerns must be considered. The results demonstrate that the proposed FedSPL model is secure, i.e. it does not expose original records to other parties, and that the computational overhead during training is acceptable. Compared with learning methods trained on each party's local data alone, the proposed model significantly improves the predicted F1-score by approximately 4.3%. We believe the proposed method has the potential to benefit clinicians in gene selection and disease prognosis.
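The self-paced selection step at the heart of this idea can be sketched as hard thresholding on per-sample loss, with the threshold (the "age" parameter) relaxed over rounds so that harder samples enter training later. The toy example below is illustrative and unrelated to the actual FedSPL gene-expression models: it uses the same selection rule on a single client to downweight noisy samples when estimating a simple statistic.

```python
import numpy as np

def self_paced_select(losses, lam):
    """Self-paced learning weight assignment (hard thresholding):
    keep samples whose current loss is below the age parameter lam,
    drop the rest. lam grows over rounds, so harder samples
    enter training later."""
    losses = np.asarray(losses, float)
    return (losses < lam).astype(float)

def robust_mean(x, lam0=1.0, growth=1.5, rounds=5):
    """Toy local training loop: fit a mean estimate with high-loss
    (likely noisy) samples excluded each round, relaxing lam gradually."""
    x = np.asarray(x, float)
    est, lam = np.median(x), lam0
    for _ in range(rounds):
        w = self_paced_select((x - est) ** 2, lam)  # squared loss per sample
        if w.sum() > 0:
            est = (w * x).sum() / w.sum()  # refit on selected samples only
        lam *= growth
    return est
```

In the federated setting described by the abstract, each client would apply such a selection rule to its own private samples before contributing model updates, so noisy local records neither degrade the global model nor leave the institution.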
Topics: Humans; Machine Learning; Privacy; Research Design
PubMed: 34874995
DOI: 10.1093/bib/bbab498