BMC Musculoskeletal Disorders, Jun 2024
Meta-Analysis
Taping is increasingly used to manage proprioceptive deficits, but existing reviews of its impact have shortcomings. To assess the effects of taping accurately, separate meta-analyses for different population groups and tape types are needed, covering both between- and within-group comparisons. Following PRISMA guidelines, a literature search was conducted across eight databases (Web of Science, PEDro, PubMed, EBSCO, Scopus, ERIC, SportDiscus, PsycINFO) and one register (CENTRAL) using the keywords "tape" and "proprioception". Of 1372 records, 91 studies involving 2718 individuals met the inclusion criteria of the systematic review. The meta-analyses revealed significant between- and within-group reductions in repositioning error with taping compared to no tape (Hedges' g: -0.39, p < 0.001) and placebo taping (Hedges' g: -1.20, p < 0.001). Subgroup and sensitivity analyses confirmed the robustness of the overall between- and within-group analyses. The between-group results further demonstrated that elastic tape and rigid tape were similarly effective at improving repositioning error in both healthy and fatigued populations. Additional analyses of the threshold to detection of passive motion and the active movement extent discrimination apparatus revealed no significant influence of taping. In conclusion, the findings highlight the potential of taping to enhance joint repositioning accuracy compared with no tape or placebo taping. Further research should uncover the underlying mechanisms and refine the application of taping for diverse populations with proprioceptive deficits.
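The effect sizes above are reported as Hedges' g, a standardized mean difference with a small-sample bias correction. As a reminder of the formula (a minimal pure-Python sketch with hypothetical numbers, not the authors' code or data):

```python
import math

def hedges_g(group_a, group_b):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J.
    A negative g here means group_a (e.g. taped) had smaller errors."""
    n1, n2 = len(group_a), len(group_b)
    m1 = sum(group_a) / n1
    m2 = sum(group_b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    # Pooled standard deviation
    s = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    # Hedges' approximate bias correction
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * (m1 - m2) / s

# Hypothetical repositioning errors (degrees): taped vs. no tape
taped = [2.0, 3.0, 2.5, 3.5]
no_tape = [4.0, 5.0, 4.5, 5.5]
print(hedges_g(taped, no_tape))  # negative: smaller errors with tape
```

A negative pooled g, as in the review, therefore indicates that taping reduced repositioning error relative to the comparator.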
Topics: Humans; Proprioception; Athletic Tape
PubMed: 38890668
DOI: 10.1186/s12891-024-07571-2
Current Hypertension Reports, Jul 2024
Review
PURPOSE OF REVIEW
Machine learning (ML) approaches are an emerging alternative for healthcare risk prediction. We aimed to synthesise the literature on ML and classical regression studies exploring potential prognostic factors and to compare prediction performance for pre-eclampsia.
RECENT FINDINGS
From 9382 studies retrieved, 82 were included. Sixty-six publications exclusively reported eighty-four classical regression models predicting variable timing of onset of pre-eclampsia. Six publications reported purely ML algorithms, whilst another 10 reported both ML algorithms and classical regression models on the same sample; in 8 of these 10, the ML algorithms outperformed the classical regression models. The most frequent prognostic factors were age, pre-pregnancy body mass index, chronic medical conditions, parity, prior history of pre-eclampsia, mean arterial pressure, uterine artery pulsatility index, placental growth factor, and pregnancy-associated plasma protein A. The top-performing ML algorithms were random forest (area under the curve (AUC) = 0.94, 95% confidence interval (CI) 0.91-0.96) and extreme gradient boosting (AUC = 0.92, 95% CI 0.90-0.94). The competing risk model had similar performance (AUC = 0.92, 95% CI 0.91-0.92) to a neural network. Calibration performance was not reported in the majority of publications. ML algorithms outperformed classical regression models in pre-eclampsia prediction, with random forest and boosting-type algorithms performing best. Further research should compare ML algorithms and classical regression models on the same samples with the same evaluation metrics, and external validation of ML algorithms is warranted to gain insight into their generalisability.
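The AUC values cited above (e.g. 0.94 for random forest) have a direct probabilistic reading: the chance that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A dependency-free sketch of that rank formulation, with made-up scores purely for illustration:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores: 1 = developed pre-eclampsia
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc_score(labels, scores))  # 11/12 pairs correctly ordered, ~0.917
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values above 0.9, as reported for the tree ensembles, are considered excellent discrimination.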
Topics: Humans; Pre-Eclampsia; Pregnancy; Female; Machine Learning; Algorithms; Prognosis; Regression Analysis; Risk Assessment; Risk Factors; Predictive Value of Tests
PubMed: 38806766
DOI: 10.1007/s11906-024-01297-1
Multiple Sclerosis and Related Disorders, Jul 2024
Meta-Analysis Review
BACKGROUND
Magnetic resonance imaging [MRI] findings in Neuromyelitis optica spectrum disorder [NMOSD] and Multiple Sclerosis [MS] patients can help discriminate between the two conditions. For instance, U-fiber and Dawson's finger-type lesions are suggestive of MS, whereas linear ependymal lesions raise the possibility of NMOSD. Recently, artificial intelligence [AI] models have been used to discriminate between NMOSD and MS based on MRI features. In this study, we aimed to systematically review the capability of AI algorithms to discriminate NMOSD from MS based on MRI features.
METHOD
We searched PubMed, Scopus, Web of Sciences, Embase, and IEEE databases up to August 2023. All studies that used AI-based algorithms to discriminate between NMOSD and MS using MRI features were included, without any restriction in time, region, race, and age. Data on NMOSD and MS patients, Aquaporin-4 antibodies [AQP4-Ab] status, diagnosis criteria, performance metrics (accuracy, sensitivity, specificity, and AUC), artificial intelligence paradigm, MR imaging, and used features were extracted. This study is registered with PROSPERO, CRD42023465265.
RESULTS
Fifteen studies were included in this systematic review, with sample sizes ranging between 53 and 351; in total, 1,362 MS patients and 1,118 NMOSD patients were included. AQP4-Ab was positive in 94.9% of NMOSD patients across 9 studies. Eight studies used machine learning [ML] as a classifier, while 7 used deep learning [DL]. AI models based on MRI alone or MRI plus clinical features yielded a pooled accuracy of 82% (95% CI: 78-86%), sensitivity of 83% (95% CI: 79-88%), and specificity of 80% (95% CI: 75-86%). In subgroup analysis, using only MRI features yielded an accuracy, sensitivity, and specificity of 83% (95% CI: 78-88%), 81% (95% CI: 76-87%), and 84% (95% CI: 79-89%), respectively.
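The pooled sensitivity and specificity above are derived from each study's 2x2 confusion counts. As a reminder of the definitions, a small sketch with hypothetical counts (not taken from the included studies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of true NMOSD cases flagged as NMOSD.
    Specificity: share of MS cases correctly not flagged as NMOSD."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical classifier: 83 of 100 NMOSD detected, 80 of 100 MS cleared
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=80, fp=20)
print(sens, spec)  # 0.83 and 0.80
```

Meta-analyses of diagnostic accuracy typically pool these per-study proportions with a random-effects or bivariate model rather than simply averaging them.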
CONCLUSION
AI models based on MRI features showed a high potential to discriminate between NMOSD and MS. However, heterogeneity in MR imaging, model evaluation, and reporting performance metrics, among other confounders, affected the reliability of our results. Well-designed studies on multicentric datasets, standardized imaging and evaluation protocols, and detailed transparent reporting of results are needed to reach optimal performance.
Topics: Humans; Neuromyelitis Optica; Magnetic Resonance Imaging; Multiple Sclerosis; Artificial Intelligence; Algorithms; Diagnosis, Differential
PubMed: 38781885
DOI: 10.1016/j.msard.2024.105682
The American Journal of Gastroenterology, May 2024
INTRODUCTION
Accurate risk prediction can facilitate screening and early detection of pancreatic cancer (PC). We conducted a systematic review to critically evaluate effectiveness of machine learning (ML) and artificial intelligence (AI) techniques applied to electronic health records (EHR) for PC risk prediction.
METHODS
Ovid MEDLINE(R), Ovid EMBASE, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, Scopus, and Web of Science were searched for articles that utilized ML/AI techniques to predict PC, published between January 1, 2012, and February 1, 2024. Study selection and data extraction were conducted by 2 independent reviewers. Critical appraisal and data extraction were performed using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies. Risk of bias and applicability were examined using the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
RESULTS
Thirty studies including 169,149 PC cases were identified. Logistic regression was the most frequent modeling method. Twenty studies utilized a curated set of known PC risk predictors or those identified by clinical experts. ML model discrimination performance (C-index) ranged from 0.57 to 1.0. Missing data were underreported, and most studies did not implement explainable-AI techniques or report exclusion time intervals.
DISCUSSION
AI/ML models for PC risk prediction using known risk factors perform reasonably well and may have near-term applications in identifying cohorts for targeted PC screening if validated in real-world data sets. The combined use of structured and unstructured EHR data using emerging AI models while incorporating explainable-AI techniques has the potential to identify novel PC risk factors, and this approach merits further study.
PubMed: 38752654
DOI: 10.14309/ajg.0000000000002870
Radiotherapy and Oncology : Journal of..., Jul 2024
Meta-Analysis Review
BACKGROUND AND PURPOSE
We performed this systematic review and meta-analysis to investigate the performance of ML in detecting genetic mutation status in NSCLC patients.
MATERIALS AND METHODS
We conducted a systematic search of PubMed, Cochrane, Embase, and Web of Science up until July 2023. We discussed the genetic mutation status of EGFR, ALK, KRAS, and BRAF, as well as the mutation status at different sites of EGFR.
RESULTS
We included a total of 128 original studies, of which 114 constructed ML models based on radiomic features mainly extracted from CT, MRI, and PET-CT data. From a genetic mutation perspective, 121 studies focused on EGFR mutation status analysis. In the validation set, for the detection of EGFR mutation status, the aggregated c-index was 0.760 (95%CI: 0.706-0.814) for clinical feature-based models, 0.772 (95%CI: 0.753-0.791) for CT-based radiomics models, 0.816 (95%CI: 0.776-0.856) for MRI-based radiomics models, and 0.750 (95%CI: 0.712-0.789) for PET-CT-based radiomics models. When combined with clinical features, the aggregated c-index was 0.807 (95%CI: 0.781-0.832) for CT-based radiomics models, 0.806 (95%CI: 0.773-0.839) for MRI-based radiomics models, and 0.822 (95%CI: 0.789-0.854) for PET-CT-based radiomics models. In the validation set, the aggregated c-indexes for radiomics-based models to detect mutation status of ALK and KRAS, as well as the mutation status at different sites of EGFR were all greater than 0.7.
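The abstract does not state how the per-study c-indexes were aggregated; a common choice for pooling such estimates is a DerSimonian-Laird random-effects model, sketched here purely as an illustration of the technique (the inputs are hypothetical, not the review's data):

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooling: estimate between-study variance tau^2
    from Cochran's Q, then combine with inverse-variance weights."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, estimates)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, estimates))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)       # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, estimates)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se

# Hypothetical per-study c-indexes and their variances
pooled, se = dersimonian_laird([0.74, 0.78, 0.80], [0.004, 0.003, 0.005])
print(round(pooled, 3))  # ~0.772 (here Q < k-1, so tau^2 truncates to 0)
```

When tau^2 is positive, the random-effects weights shrink toward equality, widening the pooled confidence interval to reflect between-study heterogeneity.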
CONCLUSION
The use of radiomics-based methods for early discrimination of EGFR mutation status in NSCLC demonstrates relatively high accuracy. However, the influence of clinical variables cannot be overlooked in this process. In addition, future studies should examine the accuracy of radiomics in identifying the mutation status of genes other than EGFR.
Topics: Humans; Lung Neoplasms; Machine Learning; Mutation; Carcinoma, Non-Small-Cell Lung; Positron Emission Tomography Computed Tomography; ErbB Receptors; Proto-Oncogene Proteins p21(ras)
PubMed: 38734145
DOI: 10.1016/j.radonc.2024.110325
The Journal of Trauma and Acute Care..., May 2024
BACKGROUND
Haemorrhage is a leading cause of preventable death in trauma. Accurately predicting a patient's blood transfusion requirement is essential but can be difficult. Machine learning (ML) is a field of artificial intelligence that is emerging within medicine for accurate prediction modelling. This systematic review aimed to identify and evaluate all ML models that predict blood transfusion in trauma.
METHODS
This systematic review was registered on the International Prospective Register of Systematic Reviews (CRD4202237110). MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials were systematically searched. Publications reporting an ML model that predicted blood transfusion in injured adult patients were included. Data extraction and risk of bias assessment were performed using validated frameworks. Data were synthesised narratively due to significant heterogeneity.
RESULTS
Twenty-five ML models for blood transfusion prediction in trauma were identified. Models incorporated diverse predictors and varied ML methodologies. Predictive performance was variable but eight models achieved excellent discrimination (AUROC >0.9) and nine models achieved good discrimination (AUROC >0.8) in internal validation. Only two models reported measures of calibration. Four models have been externally validated in prospective cohorts: the Bleeding Risk Index, Compensatory Reserve Index, the Marsden model and the Mina model. All studies were considered at high risk of bias often due to retrospective datasets, small sample size and lack of external validation.
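Only two of the models reported calibration measures. One simple overall measure that reflects both calibration and discrimination is the Brier score, the mean squared error of the predicted probabilities; a minimal sketch with hypothetical numbers, not the reviewed models:

```python
def brier_score(outcomes, probabilities):
    """Mean squared difference between predicted transfusion probability
    and the observed 0/1 outcome; 0 is perfect, 0.25 matches always
    guessing 0.5."""
    n = len(outcomes)
    return sum((p - y) ** 2 for y, p in zip(outcomes, probabilities)) / n

# Hypothetical: 1 = patient required transfusion
outcomes = [1, 0, 1, 0]
probs = [0.9, 0.2, 0.7, 0.1]
print(brier_score(outcomes, probs))  # ~0.0375 = (0.01+0.04+0.09+0.01)/4
```

Discrimination (AUROC) alone cannot detect a model whose probabilities are systematically too high or too low, which is why reviews like this one flag missing calibration reporting.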
DISCUSSION
This review identified twenty-five ML models developed to predict blood transfusion requirement after injury. Seventeen ML models demonstrated good to excellent performance in silico, but only four models have been externally validated. To date, ML models demonstrate the potential for early and individualised blood transfusion prediction, but further research is critically required to narrow the gap between ML model development and clinical application.
LEVEL OF EVIDENCE
Systematic Review Without Meta-Analysis, Level IV.
PubMed: 38720200
DOI: 10.1097/TA.0000000000004385
The Journal of Vascular Access, Apr 2024
Review
OBJECTIVE
Failure to mature and early stenosis remain the Achilles' heel of hemodialysis arteriovenous fistula (AVF) creation. The maturation and patency of an AVF can be influenced by a variety of demographic, comorbidity, and anatomical factors. This study aims to review prediction models of AVF maturation and patency, including both risk scores and machine learning models.
DATA SOURCES AND REVIEW METHODS
A literature search was performed on PubMed, Scopus, and Embase to identify eligible articles. The quality of the studies was assessed using the Prediction model Risk Of Bias ASsessment (PROBAST) Tool. The performance (discrimination and calibration) of the included models was extracted.
RESULTS
Fourteen studies (seven using risk score approaches; seven using machine learning approaches) were included in the review. Among them, 12 studies were rated as high or unclear risk of bias, and six were rated as high or unclear concern for applicability. The C-statistic (a measure of model discrimination) was reported in five studies using a risk score approach (0.70-0.886) and three using machine learning methods (0.80-0.85). Model calibration was reported in three studies. A failure-to-mature risk score developed in one of the studies has been externally validated in three different patient populations; however, its discrimination degraded significantly (C-statistic: 0.519-0.53).
CONCLUSION
The performance of existing predictive models for AVF maturation and patency is underreported. The models showed satisfactory performance in their own study populations, but there was a high risk of bias in the methodology used to build some of them, and the reviewed models either lack external validation or showed reduced performance in external cohorts.
PubMed: 38658814
DOI: 10.1177/11297298241237830
Translational Vision Science &..., Apr 2024
PURPOSE
The purpose of this study was to assess the current use and reliability of artificial intelligence (AI)-based algorithms for analyzing cataract surgery videos.
METHODS
A systematic review of the literature on intra-operative analysis of cataract surgery videos with machine learning techniques was performed. Cataract diagnosis and detection algorithms were excluded. The resulting algorithms were compared, descriptively analyzed, and their metrics summarized or visually reported. The reproducibility and reliability of the methods and results were assessed using a modified version of the Medical Image Computing and Computer-Assisted Intervention (MICCAI) checklist.
RESULTS
Thirty-eight of the 550 screened studies were included: 20 addressed instrument detection or tracking, 9 focused on phase discrimination, and 8 predicted skill and complications. Instrument detection achieves an area under the receiver operating characteristic curve (ROC AUC) between 0.976 and 0.998, instrument tracking an mAP between 0.685 and 0.929, phase recognition an ROC AUC between 0.773 and 0.990, and complication or surgical skill prediction an ROC AUC between 0.570 and 0.970.
CONCLUSIONS
The studies showed wide variation in quality and pose a replication challenge due to the small number of public datasets (none for manual small-incision cataract surgery) and rarely published source code. There is no standard for reported outcome metrics, and validation of models on external datasets is rare, making comparisons difficult. The data suggest that instrument tracking and phase detection work well, but surgical skill and complication recognition remain a challenge for deep learning.
TRANSLATIONAL RELEVANCE
This overview of cataract surgery analysis with AI models provides translational value for improving training of the clinician by identifying successes and challenges.
Topics: Humans; Artificial Intelligence; Reproducibility of Results; Algorithms; Software; Cataract
PubMed: 38618893
DOI: 10.1167/tvst.13.4.20
Heart, Lung & Circulation, Apr 2024
Review
BACKGROUND AND AIM
Risk adjustment following percutaneous coronary intervention (PCI) is vital for clinical quality registries, performance monitoring, and clinical decision-making. There remains significant variation in the accuracy and nature of the risk adjustment models used in international PCI registries/databases. Therefore, this systematic review aims to summarise preoperative variables associated with 30-day mortality among patients undergoing PCI, and the methodologies used in risk adjustment.
METHOD
The MEDLINE, EMBASE, CINAHL, and Web of Science databases were systematically searched through October 2022, without language restriction, to identify preoperative independent variables related to 30-day mortality following PCI. Information was summarised descriptively following the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies. The quality and risk of bias of all included articles were assessed using the Prediction Model Risk Of Bias Assessment Tool. Two independent investigators took part in screening and quality assessment.
RESULTS
The search yielded 2,941 studies, of which 42 articles were included in the final assessment. Logistic regression, Cox proportional hazards models, and machine learning were used by 27 (64.3%), 14 (33.3%), and one (2.4%) article, respectively. A total of 74 independent preoperative variables were identified as significantly associated with 30-day mortality following PCI. Variables repeatedly used across models included, but were not limited to, age (n=36, 85.7%), renal disease (n=29, 69.0%), diabetes mellitus (n=17, 40.5%), cardiogenic shock (n=14, 33.3%), gender (n=14, 33.3%), ejection fraction (n=13, 30.9%), acute coronary syndrome (n=12, 28.6%), and heart failure (n=10, 23.8%). Nine (21.4%) studies used missing-value imputation; 15 (35.7%) articles reported the model's discrimination, with values ranging from 0.501 (95% confidence interval [CI] 0.472-0.530) to 0.928 (95% CI 0.900-0.956); and four studies (9.5%) validated the model on external/out-of-sample data.
CONCLUSIONS
Risk adjustment models need further improvement in quality through inclusion of a parsimonious set of clinically relevant variables, appropriate handling of missing values, model validation, and use of machine learning methods.
PubMed: 38570260
DOI: 10.1016/j.hlc.2024.01.021
Artificial Intelligence in Medicine, Apr 2024
BACKGROUND AND OBJECTIVES
We aimed to analyze the study designs, modeling approaches, and performance evaluation metrics in studies using machine learning techniques to develop clinical prediction models for children and adolescents with COVID-19.
METHODS
We searched four databases for articles published between 01/01/2020 and 10/25/2023, describing the development of multivariable prediction models using any machine learning technique for predicting several outcomes in children and adolescents who had COVID-19.
RESULTS
We included ten articles: six (60% [95% confidence interval (CI) 0.31-0.83]) were predictive diagnostic models and four (40% [95% CI 0.17-0.69]) were prognostic models. All models were developed to predict a binary outcome (n=10/10, 100% [95% CI 0.72-1]). The most frequently predicted outcome was disease detection (n=3/10, 30% [95% CI 0.11-0.60]). The most commonly used machine learning models were tree-based models (n=12/33, 36.3% [95% CI 0.17-0.47]) and neural networks (n=9/27, 33.2% [95% CI 0.15-0.44]).
CONCLUSION
Our review revealed that attention is required to address several problems: small sample sizes, inconsistent reporting of data preparation, biases in data sources, and missing reports of calibration and discrimination metrics, hyperparameters, and other details that would allow reproduction by other researchers and improve the methodology.
Topics: Child; Humans; Adolescent; Reproducibility of Results; COVID-19; Algorithms; Machine Learning; Neural Networks, Computer
PubMed: 38553164
DOI: 10.1016/j.artmed.2024.102824