Medical Physics, Jan 2021
PURPOSE
Reconstructing images from undersampled k-space data is an ill-posed inverse problem. As a solution to this problem, we propose a method to reconstruct magnetic resonance (MR) images directly from k-space data using a recurrent neural network.
METHODS
A novel neural network architecture named "ETER-net" is developed as a unified solution to reconstruct MR images from undersampled k-space data, in which two bidirectional recurrent neural networks (bi-RNNs) and a convolutional neural network (CNN) perform domain transformation and de-aliasing, respectively. To demonstrate the practicality of the proposed method, we conducted model optimization, cross-validation, and network pruning using in-house data from a 3T MRI scanner and the public "FastMRI" dataset.
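The paper's code is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of the idea as described: bi-RNNs sweep the two axes of k-space to learn the domain transformation, and a small CNN de-aliases the result. All layer sizes, module choices (GRU, two conv layers), and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EterNetSketch(nn.Module):
    """Illustrative sketch: two bi-RNNs transform k-space to the image
    domain, then a small CNN de-aliases (hypothetical architecture)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.row_rnn = nn.GRU(2, hidden, bidirectional=True, batch_first=True)
        self.col_rnn = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, 1)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, kspace):             # (batch, rows, cols, 2): real/imag
        b, r, c, _ = kspace.shape
        x = kspace.reshape(b * r, c, 2)    # sweep each row across columns
        x, _ = self.row_rnn(x)
        x = x.reshape(b, r, c, -1).permute(0, 2, 1, 3).reshape(b * c, r, -1)
        x, _ = self.col_rnn(x)             # sweep each column across rows
        x = x.reshape(b, c, r, -1).permute(0, 2, 1, 3)
        img = self.proj(x).squeeze(-1).unsqueeze(1)   # (batch, 1, rows, cols)
        return self.cnn(img)               # de-aliased image estimate

recon = EterNetSketch()(torch.randn(1, 64, 64, 2))   # -> (1, 1, 64, 64)
```

Because the network learns the k-space-to-image mapping rather than applying a fixed Fourier transform, the same forward pass can in principle accept data from various scanning trajectories, which is what makes the approach a unified solution.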
RESULTS
The experimental results showed that the proposed method can accurately reconstruct images from undersampled k-space data. The size of the proposed model was optimized, and cross-validation was performed to demonstrate the robustness of the proposed method. For the in-house dataset (R = 4), the proposed method achieved nMSE = 1.09% and SSIM = 0.938. For the "FastMRI" dataset, it achieved nMSE = 1.05% and SSIM = 0.931 for R = 4, and nMSE = 3.12% and SSIM = 0.884 for R = 8. The performance of the pruned model trained with a loss function that included L2 regularization remained consistent for pruning ratios of up to 70%.
CONCLUSIONS
The proposed method is an end-to-end MR image reconstruction method based on recurrent neural networks. It performs direct mapping of the input k-space data and the reconstructed images, operating as a unified solution that is applicable to various scanning trajectories.
Topics: Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Research Design
PubMed: 33128235
DOI: 10.1002/mp.14566

Biometrics, Dec 2022
Biomedical research is increasingly data rich, with studies comprising ever-growing numbers of features. The larger a study, the higher the likelihood that a substantial portion of the features are redundant and/or contain contamination (outlying values). This poses serious challenges, which are exacerbated when sample sizes are relatively small. Effective and efficient approaches to perform sparse estimation in the presence of outliers are critical for these studies and have received considerable attention in the last decade. We contribute to this area by considering high-dimensional regressions contaminated by multiple mean-shift outliers affecting both the response and the design matrix. We develop a general framework and use mixed-integer programming to simultaneously perform feature selection and outlier detection with provably optimal guarantees. We prove theoretical properties of our approach: a necessary and sufficient condition for the robustly strong oracle property, where the number of features can increase exponentially with the sample size; the optimal estimation of parameters; and the breakdown point of the resulting estimates. Moreover, we provide computationally efficient procedures to tune integer constraints and warm-start the algorithm. We show the superior performance of our proposal compared to existing heuristic methods through simulations, and use it to study the relationships between childhood obesity and the human microbiome.
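As an illustration of the general idea (not the authors' exact formulation), a mean-shift model y = Xβ + γ + ε can be fit with cardinality constraints on both the coefficients β and the per-observation shifts γ via a big-M mixed-integer program. A minimal cvxpy sketch, where the bound M and the budgets k and m are assumed inputs:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p, k, m = 50, 100, 5, 3            # samples, features, sparsity budgets (toy)
X = rng.standard_normal((n, p))
y = X[:, :k] @ np.ones(k) + rng.standard_normal(n)
y[:m] += 10.0                          # plant a few mean-shift outliers

beta = cp.Variable(p)                  # regression coefficients
gamma = cp.Variable(n)                 # per-observation mean shifts
z = cp.Variable(p, boolean=True)       # feature-selection indicators
w = cp.Variable(n, boolean=True)       # outlier indicators
M = 20.0                               # big-M bound (an assumption)

constraints = [
    cp.abs(beta) <= M * z,             # beta_j nonzero only if z_j = 1
    cp.abs(gamma) <= M * w,            # gamma_i nonzero only if w_i = 1
    cp.sum(z) <= k,                    # at most k active features
    cp.sum(w) <= m,                    # at most m flagged outliers
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ beta - gamma)), constraints)
prob.solve(solver=cp.SCIP)             # requires a mixed-integer-capable solver
print("selected features:", np.flatnonzero(z.value > 0.5))
print("flagged outliers: ", np.flatnonzero(w.value > 0.5))
```

Because the indicator variables make this a mixed-integer quadratic program, a global solver can certify optimality, which is what distinguishes this line of work from heuristic alternatives.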
Topics: Child; Humans; Pediatric Obesity; Algorithms; Sample Size; Probability
PubMed: 34437713
DOI: 10.1111/biom.13553

Therapeutic Innovation & Regulatory Science, May 2020
BACKGROUND
Although a number of studies have quantitatively measured the burden on investigative sites of administering increasingly complex protocol designs, robust scholarly research has not been performed to quantify the burden that patients face as participants in clinical trials.
METHODS
This paper presents the results of a cross-sectional pilot study conducted by the Tufts Center for the Study of Drug Development and ZS Associates among nearly 600 patients via an online survey administered between February and March 2019. Respondents rated the perceived burden of 60 commonly administered protocol procedures. The relationship between overall patient burden (derived by aggregating mean perceived burden ratings for individual procedures) and performance (eg, screen failure and retention rates, clinical trial cycle times) was assessed for a cross-sectional sample of 137 individual protocols. Descriptive statistics, significance tests, and univariate analyses were performed.
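A hypothetical illustration of how such a composite might be assembled (the column names, ratings, and data are invented, not taken from the study):

```python
import pandas as pd

# Invented per-protocol data: mean perceived burden rating of each procedure
# used, plus one protocol-level performance measure.
ratings = pd.DataFrame({
    "protocol": ["A", "A", "A", "B", "B"],
    "procedure": ["blood draw", "MRI", "biopsy", "blood draw", "diary"],
    "mean_burden": [2.1, 5.4, 8.7, 2.1, 3.3],
})
performance = pd.DataFrame({
    "protocol": ["A", "B"],
    "cycle_time_days": [420, 310],
})

# Composite burden per protocol: aggregate the mean ratings of its procedures.
composite = (ratings.groupby("protocol")["mean_burden"]
             .mean().rename("composite_burden"))
merged = performance.merge(composite, on="protocol")
print(merged.corr(numeric_only=True))  # univariate association, as in the study
```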
RESULTS
Strong positive, statistically significant associations were observed between a composite measure of patient burden and protocol-specific design and performance measures, most notably study visits above the tolerable mean and the study conduct duration from first patient first visit to last patient last visit.
CONCLUSIONS
The study results suggest a new and viable approach to optimize protocol design and improve patient engagement.
Topics: Cross-Sectional Studies; Humans; Patient Participation; Pilot Projects; Research Design
PubMed: 33301141
DOI: 10.1007/s43441-019-00092-4

British Journal of Nursing (Mark Allen Publishing), Apr 2023
INTRODUCTION
The insertion of vascular access devices (VADs) is the most common invasive procedure performed in acute medicine, and cancer patients undergo multiple invasive vascular access procedures. Our aim is to identify the type of evidence available regarding the best choice of VAD for cancer patients undergoing systemic anti-cancer therapy (SACT). In this article, the authors present the scoping review protocol, which will systematically report all published and unpublished literature on the use of VADs for the infusion of SACT in oncology.
INCLUSION CRITERIA
To be included, studies must focus on people or populations aged 18 years or older and report on vascular access in cancer patients. The concept is the variety of VAD use in cancer, together with reported insertion and post-insertion complications. The context is the intravenous administration of SACT, whether in a cancer centre or a non-cancer setting.
METHODS
The JBI scoping review methodology framework will guide the conduct of this scoping review. Electronic databases (CINAHL, Cochrane, Medline and Embase) will be searched. Grey literature sources and the reference lists of key studies will be reviewed to identify those appropriate for inclusion. No date limits will be used in the searches and studies will be limited to the English language. Two reviewers will independently screen all titles and abstracts and full-text studies for inclusion, and a third reviewer will arbitrate disagreements. All bibliographic data, study characteristics and indicators will be collected and charted using a data extraction tool.
Topics: Humans; Neoplasms; Research Design; Review Literature as Topic
PubMed: 37027405
DOI: 10.12968/bjon.2023.32.7.S18

Journal of Clinical Epidemiology, Dec 2022
Review
OBJECTIVE
To explore the impact of methodological choices on the results of meta-analyses (MAs), with acupuncture for smoking cessation as a case study.
STUDY DESIGN AND SETTING
After performing an umbrella review (using MEDLINE, the Cochrane Library, the Wan Fang database, and the Chinese Journal Full-text Database, searched in March 2018) of MAs exploring the use of acupuncture for smoking cessation, we extracted all randomized controlled trials. Numerous MAs were then performed, one for every possible combination of various methodological choices (e.g., characteristics of the intervention and control procedures, outcome, publication status, language), to assess the vibration of effects and, more precisely, the existence of a Janus effect, that is, whether the 10th and 90th percentiles of the distribution of effect sizes were in opposite directions.
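A hypothetical sketch of this "multiverse" procedure: enumerate the analytic choices with itertools.product, run one meta-analysis per combination, and inspect the percentiles of the pooled estimates. The trial data, choice names, and the simple inverse-variance pooling are all illustrative assumptions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
# Invented per-trial effect estimates (yi) and variances (vi) for 39 trials.
trials = [{"yi": rng.normal(0.1, 0.4), "vi": rng.uniform(0.02, 0.2),
           "published": rng.random() < 0.8, "english": rng.random() < 0.6,
           "sham_control": rng.random() < 0.5} for _ in range(39)]

# Each methodological choice: None = include all trials, True = restrict.
choices = {"publication": [None, True],
           "language": [None, True],
           "control": [None, True]}

pooled = []
for pub, lang, ctrl in itertools.product(*choices.values()):
    subset = [t for t in trials
              if (pub is None or t["published"])
              and (lang is None or t["english"])
              and (ctrl is None or t["sham_control"])]
    if len(subset) < 2:
        continue
    w = np.array([1 / t["vi"] for t in subset])   # inverse-variance weights
    y = np.array([t["yi"] for t in subset])
    pooled.append(np.sum(w * y) / np.sum(w))      # fixed-effect pooled estimate

p10, p90 = np.percentile(pooled, [10, 90])
print(f"Janus effect: {p10 < 0 < p90} (10th={p10:.2f}, 90th={p90:.2f})")
```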
RESULTS
After including 7 MAs and 39 randomized controlled trials, we performed 496,528 MAs. The effect size was negative at the 10th percentile (-0.1, favoring controls) and positive at the 90th percentile (1.17, favoring acupuncture). In all, 104,491 MAs showed a statistically significant difference in favor of acupuncture, whereas 392,037 failed to demonstrate the efficacy of acupuncture (including 96 that showed a statistically significant difference in favor of the control).
CONCLUSION
The methodological choices made in performing pairwise MAs can result in substantial vibration of effects, occasionally leading to opposite results.
Topics: Humans; Acupuncture Therapy; Databases, Factual; Physical Therapy Modalities; Smoking Cessation; Vibration; Meta-Analysis as Topic; Randomized Controlled Trials as Topic
PubMed: 36150547
DOI: 10.1016/j.jclinepi.2022.09.001

Journal of Pediatric Surgery, Jul 2020
Review
BACKGROUND/PURPOSE
Determining the appropriate sample size is an integral component of any well-designed research study, grant application, or scientific manuscript. Surgeons intuitively understand the concept of statistical power but often have limited knowledge of how to perform the calculations correctly. Our goal is to provide a strategy that pediatric surgeons can use when planning a study to determine the sample size required to detect a clinically meaningful effect, which is important for interpreting and validating their results.
METHODS
We present a general 5-step approach for performing a sample size justification and statistical power analysis, and illustrate this approach using several surgical research examples. The 5 steps are: 1) define the primary outcome of interest; 2) define the magnitude of the effect (effect size) and the power desired; 3) determine the appropriate statistics and statistical test to be used; 4) perform the calculations to estimate the required sample size using software or a reference table; and 5) write the formal power and sample size statement for the manuscript, grant application, or project proposal.
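As a hypothetical worked example of steps 2-4 (the outcome and rates are invented): suppose the primary outcome is a complication rate expected to fall from 20% to 10%, to be detected with 80% power at a two-sided alpha of 0.05. Using statsmodels:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Step 2 (assumed numbers): 20% vs 10% complication rate; 80% power, alpha 0.05.
effect_size = proportion_effectsize(0.20, 0.10)   # Cohen's h

# Steps 3-4: two-sample test of proportions (normal approximation).
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided")
print(f"Required sample size: {n_per_group:.0f} per group")  # roughly 98
# Step 5: write it up, e.g., "Approximately 98 patients per group provide 80%
# power to detect a reduction from 20% to 10% at a two-sided alpha of 0.05."
```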
CONCLUSIONS
Understanding sample size considerations and statistical power in the surgical research community will improve the quality of published articles. This primer can be used by pediatric surgeons in the process of determining the appropriate sample sizes for detecting a clinically meaningful effect with sufficient statistical power. Virtually all research studies in pediatric surgery should include a justification of sample size based on a power calculation as this leads to more meaningful inferences from the data and analysis.
TYPE OF STUDY
Review article.
LEVEL OF EVIDENCE
N/A.
Topics: General Surgery; Humans; Pediatrics; Research Design; Sample Size; Surgeons
PubMed: 31155391
DOI: 10.1016/j.jpedsurg.2019.05.007

BMJ Open, Nov 2022
INTRODUCTION
Rapid sequence intubation (RSI) is an advanced airway technique for performing endotracheal intubation in patients at high risk of aspiration. Although RSI is recognised as a life-saving technique and is performed by many physicians in various settings (emergency departments, intensive care units), there is still a lack of consensus on several features of the procedure, most notably patient positioning. Experts have previously commented on the unique drawbacks and benefits of various positions, and studies have been published comparing patient positions and how they can affect endotracheal intubation in the context of RSI. The purpose of this systematic review is to compile the existing evidence to understand and compare how different patient positions can affect the success of RSI.
METHODS AND ANALYSIS
We will use MEDLINE, EMBASE and the Cochrane Library to source studies from 1946 to 2021 that evaluate the impact of patient positioning on endotracheal intubation in the context of RSI. We will include randomised controlled trials, case-control studies, prospective/retrospective cohort studies and mannequin simulation studies for consideration in this systematic review. Subsequently, we will generate a Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram to display how we selected the final studies for inclusion in the review. Two independent reviewers will complete the study screening, selection and extraction, with a third reviewer available to address any conflicts. The reviewers will extract these data in accordance with our outcomes of interest and display them in a table format to highlight patient-relevant outcomes and difficult airway management outcomes. We will use the Risk of Bias tool and the Newcastle-Ottawa Scale to assess included studies for bias.
ETHICS AND DISSEMINATION
This systematic review does not require ethics approval, as all patient-centred data will be reported from published studies.
PROSPERO REGISTRATION NUMBER
CRD42022289773.
Topics: Humans; Intubation, Intratracheal; Patient Positioning; Prospective Studies; Rapid Sequence Induction and Intubation; Research Design; Retrospective Studies; Systematic Reviews as Topic; Randomized Controlled Trials as Topic
PubMed: 36332945
DOI: 10.1136/bmjopen-2022-062988

Journal of Biomedical Informatics, Aug 2023
Review
The imputation of missing values in multivariate time series (MTS) data is critical in ensuring data quality and producing reliable data-driven predictive models. Apart from many statistical approaches, a few recent studies have proposed state-of-the-art deep learning methods to impute missing values in MTS data. However, the evaluation of these deep methods is limited to one or two data sets, low missing rates, and completely random missing value types. This survey performs six data-centric experiments to benchmark state-of-the-art deep imputation methods on five time series health data sets. Our extensive analysis reveals that no single imputation method outperforms the others on all five data sets. The imputation performance depends on data types, individual variable statistics, missing value rates, and types. Deep learning methods that jointly perform cross-sectional (across variables) and longitudinal (across time) imputations of missing values in time series data yield statistically better data quality than traditional imputation methods. Although computationally expensive, deep learning methods are practical given the current availability of high-performance computing resources, especially when data quality and sample size are of paramount importance in healthcare informatics. Our findings highlight the importance of data-centric selection of imputation methods to optimize data-driven predictive models.
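A hypothetical sketch of the masking-style evaluation such benchmarks rest on: hide a fraction of the observed values, impute, and score the imputations against the held-out truth. The two baselines below (temporal interpolation and column-mean fill) are simple stand-ins for the deep models surveyed, and the data are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Toy multivariate time series: 200 time points x 4 variables (invented names).
ts = pd.DataFrame(rng.standard_normal((200, 4)).cumsum(axis=0),
                  columns=["hr", "sbp", "spo2", "temp"])

# Hide 30% of entries completely at random (MCAR) as evaluation targets.
mask = rng.random(ts.shape) < 0.30
corrupted = ts.mask(mask)

def rmse(imputed):
    return np.sqrt(((imputed.values - ts.values) ** 2)[mask].mean())

# Longitudinal baseline: interpolate along time within each variable.
print("interpolation RMSE:", rmse(corrupted.interpolate().bfill()))
# Cross-sectional baseline: fill with each variable's column mean.
print("mean-fill RMSE:    ", rmse(corrupted.fillna(corrupted.mean())))
```

Varying the masking rate and mechanism (e.g., missing in blocks rather than at random) reproduces the kind of data-centric comparison the survey advocates.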
Topics: Benchmarking; Research Design; Time Factors; Cross-Sectional Studies; Surveys and Questionnaires
PubMed: 37429511
DOI: 10.1016/j.jbi.2023.104440

Biometrics, Dec 2019
Errors-in-variables models in high-dimensional settings pose two challenges in application. First, the number of observed covariates is larger than the sample size, while only a small number of covariates are true predictors under an assumption of model sparsity. Second, the presence of measurement error can result in severely biased parameter estimates and also affects the ability of penalized methods such as the lasso to recover the true sparsity pattern. A new estimation procedure called SIMulation-SELection-EXtrapolation (SIMSELEX) is proposed. This procedure makes double use of lasso methodology: first, the lasso is used to estimate sparse solutions in the simulation step, after which a group lasso is implemented to perform variable selection. The SIMSELEX estimator is shown to perform well in variable selection and has significantly lower estimation error than naive estimators that ignore measurement error. SIMSELEX can be applied in a variety of errors-in-variables settings, including linear models, generalized linear models, and Cox survival models. It is furthermore shown in the Supporting Information how SIMSELEX can be applied to spline-based regression models. A simulation study is conducted to compare the SIMSELEX estimators with existing methods in the linear and logistic model settings, and to evaluate their performance relative to naive methods in the Cox and spline models. Finally, the method is used to analyze a microarray dataset that contains gene expression measurements of favorable histology Wilms tumors.
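A hypothetical sketch of the SIMEX idea behind the simulation step, with scikit-learn's Lasso as a stand-in; the quadratic extrapolation to lambda = -1 is a common SIMEX choice and an assumption here, not the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, sigma_u = 200, 50, 0.5        # sigma_u: measurement-error SD (assumed known)
beta_true = np.zeros(p)
beta_true[:5] = 1.0
X_true = rng.standard_normal((n, p))
W = X_true + sigma_u * rng.standard_normal((n, p))   # observed, error-prone X
y = X_true @ beta_true + rng.standard_normal(n)

# SIMulation step: add *extra* noise at levels lambda, refit the lasso each time.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
coefs = np.array([
    np.mean([Lasso(alpha=0.1).fit(
        W + np.sqrt(lam) * sigma_u * rng.standard_normal((n, p)), y).coef_
        for _ in range(20)], axis=0)    # average over noise replicates
    for lam in lambdas])                # shape: (len(lambdas), p)

# EXtrapolation step: fit a quadratic in lambda per coefficient and evaluate
# it at lambda = -1, i.e., the hypothetical error-free data.
beta_simex = np.array([np.polyval(np.polyfit(lambdas, coefs[:, j], 2), -1.0)
                       for j in range(p)])
print("largest extrapolated coefficients:", np.argsort(-np.abs(beta_simex))[:5])
```

The paper then applies a group lasso for the final variable selection (the SELection step), which this toy code omits.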
Topics: Gene Expression Profiling; Humans; Linear Models; Logistic Models; Methods; Microarray Analysis; Models, Statistical; Proportional Hazards Models; Sample Size; Scientific Experimental Error; Wilms Tumor
PubMed: 31260084
DOI: 10.1111/biom.13112

ALTEX, 2024
Many laboratory procedures generate data on the properties of chemicals, but they cannot be equated with toxicological "test methods". This apparent discrepancy is not limited to in vitro testing using animal-free new approach methods (NAM); it also applies to animal-based testing approaches. Here, we give a brief overview of the differences between data generation and the setup or use of a complete test method. While excellent literature is available on this topic for specialists (the GIVIMP guidance; the ToxTemp overview), a brief overview and easily accessible entry point may be useful for a broader community. We provide a single figure to summarize all test method elements and processes required in the development (setup and adaptation) of a test method. The exposure scheme, the endpoint, and the test system are briefly outlined as fundamental elements of any test method, together with a rationale for why they alone are not sufficient. We then explain the importance and role of the purpose definition (including some information on what is modelled) and the prediction model, also known as the data interpretation procedure, which depends on the purpose definition, as further essential elements. This connection exemplifies that all fundamental elements are interdependent, and none can be omitted. Finally, we discuss validation as a measure to provide confidence in the reliability, performance, and relevance of a test method. In this sense, validation may be considered a sixth fundamental element for the practical use of test methods.
Topics: Animals; Humans; Reproducibility of Results; Research Design; Biological Science Disciplines
PubMed: 38207287
DOI: 10.14573/altex.2401041