Asian Spine Journal Jun 2024
The purpose of this narrative review was to comprehensively elaborate the various components of artificial intelligence (AI), their applications in spine surgery, practical concerns, and future directions. Over the years, spine surgery has been continuously transformed in various aspects, including diagnostic strategies, surgical approaches, procedures, and instrumentation, to provide better-quality patient care. Surgeons have also augmented their surgical expertise with rapidly growing technological advancements. AI is an advancing field that has the potential to revolutionize many aspects of spine surgery. We performed a comprehensive narrative review of the various aspects of AI and machine learning in spine surgery. To elaborate on the current role of AI in spine surgery, the literature was reviewed using the PubMed and Google Scholar databases for articles published in English in the last 20 years. The initial searches using the keywords "artificial intelligence" AND "spine," "machine learning" AND "spine," and "deep learning" AND "spine" returned 78, 60, and 37 articles on PubMed and 11,500, 4,610, and 2,270 articles on Google Scholar, respectively. After the initial screening and the exclusion of unrelated articles, duplicates, and non-English articles, 405 articles were identified. After the second stage of screening, 93 articles were included in the review. Studies have shown that AI can be used to analyze patient data and provide personalized treatment recommendations in spine care. AI also provides valuable insights for planning surgeries and assists with precise surgical maneuvers and decision-making during procedures. As more data become available and with further advancements, AI is likely to improve patient outcomes.
PubMed: 38917854
DOI: 10.31616/asj.2023.0382
EBioMedicine Jun 2024
BACKGROUND
Accurate prediction of the optimal dose for β-lactam antibiotics in neonatal sepsis is challenging. We aimed to evaluate whether a reliable clinical decision support system (CDSS) based on machine learning (ML) can assist clinicians in making optimal dose selections.
METHODS
Five β-lactam antibiotics (amoxicillin, ceftazidime, cefotaxime, meropenem and latamoxef), commonly used to treat neonatal sepsis, were selected. The CDSS was constructed by incorporating the drug, patient, dosage, pharmacodynamic, and microbiological factors. The CatBoost ML algorithm was used to build the CDSS. Real-world studies were used to evaluate the CDSS performance. Virtual trials were used to compare the CDSS-optimized doses with guideline-recommended doses.
FINDINGS
For a specific drug, by entering the patient characteristics and pharmacodynamic (PD) target (50%/70%/100% fraction of time that the free drug concentration is above the minimal inhibitory concentration [fT > MIC]), the CDSS can determine whether the planned dosing regimen will achieve the PD target and suggest an optimal dose. The prediction accuracy of all five drugs was >80.0% in the real-world validation. Compared with the population pharmacokinetic (PopPK) model, the overall accuracy, precision, recall, and F1-score improved by 10.7%, 22.1%, 64.2%, and 43.1%, respectively. Using the CDSS-optimized doses, the average probability of target concentration attainment increased by 58.2% compared with the guideline-recommended doses.
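The fT > MIC target described above can be illustrated with a toy one-compartment pharmacokinetic calculation. This is a minimal sketch, not the authors' model; all parameter values below are hypothetical, and single-dose first-order elimination (no accumulation) is assumed:

```python
import numpy as np

def fraction_time_above_mic(dose_mg_per_kg, vd_l_per_kg, half_life_h,
                            free_fraction, mic_mg_per_l, interval_h):
    """Fraction of one dosing interval during which the *free* drug
    concentration exceeds the MIC, assuming single-dose first-order
    elimination in a one-compartment model (no accumulation)."""
    k = np.log(2) / half_life_h                  # elimination rate constant
    t = np.linspace(0.0, interval_h, 2001)       # time grid over the interval
    c_free = free_fraction * (dose_mg_per_kg / vd_l_per_kg) * np.exp(-k * t)
    return float(np.mean(c_free > mic_mg_per_l))

# Hypothetical example: 50 mg/kg dose, Vd 0.5 L/kg, 4 h half-life,
# 70% unbound drug, MIC 16 mg/L, 12 h dosing interval.
ft_mic = fraction_time_above_mic(50, 0.5, 4, 0.7, 16, 12)
print(f"fT > MIC = {ft_mic:.0%}")
```

A CDSS like the one described would search candidate doses until this fraction meets the chosen 50%/70%/100% target.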
INTERPRETATION
An ML-based CDSS was successfully constructed to assist clinicians in selecting optimal β-lactam antibiotic doses.
FUNDING
This work was supported by the National Natural Science Foundation of China; Distinguished Young and Middle-aged Scholar of Shandong University; National Key Research and Development Program of China.
PubMed: 38917512
DOI: 10.1016/j.ebiom.2024.105221
JCO Clinical Cancer Informatics Jun 2024
PURPOSE
The estimation of prognosis and life expectancy is critical in the care of patients with advanced cancer. To aid clinical decision-making, we built a prognostic strategy combining a machine learning (ML) model with explainable artificial intelligence to predict 1-year survival after palliative radiotherapy (RT) for bone metastasis.
MATERIALS AND METHODS
Data collected in the multicentric PRAIS trial were extracted for 574 eligible adults diagnosed with metastatic cancer. The primary end point was overall survival (OS) at 1 year (1-year OS) after the start of RT. Candidate predictors consisted of 13 clinical and tumor-related pre-RT patient characteristics, seven dosimetric and treatment-related variables, and 45 pre-RT laboratory variables. ML models were developed and internally validated in Python. The effectiveness of each model was evaluated in terms of discrimination. A Shapley Additive Explanations (SHAP) explainability analysis was performed to infer the global and local feature importance and to understand the reasons for correct and misclassified predictions.
RESULTS
The best-performing model for the classification of 1-year OS was the extreme gradient boosting algorithm, with AUC and F1-score values equal to 0.805 and 0.802, respectively. The SHAP technique revealed that higher chance of 1-year survival is associated with low values of interleukin-8, higher values of hemoglobin and lymphocyte count, and the nonuse of steroids.
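The SHAP attributions reported above rest on the Shapley value from cooperative game theory; for a small number of features it can be computed exactly by brute force over feature subsets. A minimal sketch of the principle (the toy model, its weights, and the feature values are hypothetical, not the trial's XGBoost model):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all subsets, with absent features set to baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                without = [x[j] if j in subset else baseline[j] for j in range(n)]
                with_i = list(without)
                with_i[i] = x[i]          # add feature i to the coalition
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (model(with_i) - model(without))
    return phi

# Toy risk score with hypothetical weights for IL-8, hemoglobin, lymphocytes.
model = lambda f: 2.0 * f[0] - 1.0 * f[1] - 0.5 * f[2]
phi = shapley_values(model, x=[3.0, 1.0, 2.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # contributions sum to model(x) - model(baseline)
```

In practice, the SHAP library approximates these values efficiently for tree ensembles rather than enumerating subsets.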
CONCLUSION
An explainable ML approach can provide a reliable prediction of 1-year survival after RT in patients with advanced cancer. The implementation of SHAP analysis provides an intelligible explanation of individualized risk prediction, enabling oncologists to identify the best strategy for patient stratification and treatment selection.
Topics: Humans; Machine Learning; Bone Neoplasms; Palliative Care; Male; Female; Prognosis; Aged; Middle Aged; Algorithms
PubMed: 38917384
DOI: 10.1200/CCI.24.00027
PloS One 2024
Access to brushes allows for natural scratching behaviors in cattle, especially in confined indoor settings. Cattle are motivated to use brushes, but brush use varies with multiple factors, including social hierarchy and health. Brush use might serve as an indicator of cow health or welfare, but practical application of these measures requires accurate and automated monitoring tools. This study describes a machine learning approach to monitor brush use by dairy cattle. We aimed to capture daily brush use by integrating data on the rotation of a mechanical brush with data on cow identity derived from either 1) low-frequency radio frequency identification (RFID) or 2) a computer vision system using fiducial markers. We found that the computer vision system outperformed the RFID system in accuracy and that the machine learning algorithms enhanced the precision of the brush use estimates. This study presents the first description of a fiducial marker-based computer vision system for monitoring individual cattle behavior in a group setting; this approach could be applied to develop automated measures of other behaviors, with the potential to better assess welfare and improve care for farm animals.
Topics: Animals; Cattle; Behavior, Animal; Machine Learning; Dairying; Radio Frequency Identification Device; Female; Algorithms; Animal Welfare
PubMed: 38917231
DOI: 10.1371/journal.pone.0305671
PloS One 2024
Global warming, caused by greenhouse gas emissions, is a major challenge for all human societies. To ensure that ambitious carbon neutrality and sustainable economic development goals are met, regional human activities and their impacts on carbon emissions must be studied. Guizhou Province is a typical karst area in China that predominantly uses fossil fuels. In this study, a backpropagation (BP) neural network and an extreme learning machine (ELM) model, which are advantageous due to their nonlinear processing capabilities, were used to predict carbon emissions in Guizhou Province from 2020 to 2040. The carbon emissions were calculated using conversion and inventory compilation methods with energy consumption data, and the results showed an "S"-shaped growth trend. Twelve influencing factors were initially considered; the five with the strongest correlations were then selected using the grey correlation analysis method. A prediction model for carbon emissions in Guizhou Province was established. The prediction performance of a whale optimization algorithm (WOA)-ELM model was found to be higher than that of the BP neural network and ELM models. Baseline, high-speed, and low-carbon scenarios were analyzed, and the size and timing of peak carbon emissions in Guizhou Province from 2020 to 2040 were predicted using the WOA-ELM model.
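The grey correlation (grey relational) screening mentioned above ranks candidate factors by how closely their series track the reference series. A minimal sketch with hypothetical, pre-normalized toy data and the conventional distinguishing coefficient ρ = 0.5:

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Grey relational grade of each factor sequence against the
    reference sequence (all sequences assumed pre-normalized)."""
    ref = np.asarray(reference, dtype=float)
    fac = np.asarray(factors, dtype=float)          # shape: (n_factors, n_years)
    delta = np.abs(fac - ref)                       # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi.mean(axis=1)                          # grade = mean coefficient

# Hypothetical normalized series: emissions vs. two candidate driving factors.
emissions = [0.2, 0.4, 0.6, 0.8, 1.0]
factors = [[0.2, 0.4, 0.6, 0.8, 1.0],   # tracks emissions exactly
           [1.0, 0.1, 0.9, 0.2, 0.5]]   # weakly related
grades = grey_relational_grades(emissions, factors)
print(grades)  # the closer to 1, the stronger the association
```

Factors with the highest grades would be retained as model inputs, as in the study's screening of twelve candidates down to five.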
Topics: China; Neural Networks, Computer; Carbon; Global Warming; Humans; Algorithms; Machine Learning
PubMed: 38917224
DOI: 10.1371/journal.pone.0296596
PloS One 2024
Deep learning, a pivotal branch of artificial intelligence, has increasingly influenced the financial domain with its advanced data processing capabilities. This paper introduces Factor-GAN, an innovative framework that utilizes Generative Adversarial Networks (GAN) technology for factor investing. Leveraging a comprehensive factor database comprising 70 firm characteristics, Factor-GAN integrates deep learning techniques with the multi-factor pricing model, thereby elevating the precision and stability of investment strategies. To explain the economic mechanisms underlying deep learning, we conduct a subsample analysis of the Chinese stock market. The findings reveal that the deep learning-based pricing model significantly enhances return prediction accuracy and factor investment performance in comparison to linear models. Particularly noteworthy is the superior performance of the long-short portfolio under Factor-GAN, demonstrating an annualized return of 23.52% with a Sharpe ratio of 1.29. During the transition from state-owned enterprises (SOEs) to non-SOEs, our study discerns shifts in factor importance, with liquidity and volatility gaining significance while fundamental indicators diminish. Additionally, A-share listed companies display a heightened emphasis on momentum and growth indicators relative to their dual-listed counterparts. This research holds profound implications for the expansion of explainable artificial intelligence research and the exploration of financial technology applications.
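The annualized return and Sharpe ratio quoted above are standard portfolio statistics. A minimal sketch of how they are computed, using a hypothetical series of monthly long-short returns (not the paper's data):

```python
import numpy as np

def annualized_stats(periodic_returns, periods_per_year=12):
    """Annualized (geometric) return and Sharpe ratio, assuming a
    zero risk-free rate, from a series of periodic portfolio returns."""
    r = np.asarray(periodic_returns, dtype=float)
    ann_return = (1.0 + r).prod() ** (periods_per_year / len(r)) - 1.0
    sharpe = r.mean() / r.std(ddof=1) * np.sqrt(periods_per_year)
    return ann_return, sharpe

# Hypothetical monthly returns of a long-short factor portfolio.
returns = [0.03, -0.01, 0.02, 0.04, -0.02, 0.03, 0.01, 0.02,
           -0.01, 0.03, 0.02, 0.01]
ann, sharpe = annualized_stats(returns)
print(f"annualized return {ann:.2%}, Sharpe {sharpe:.2f}")
```

The study's reported 23.52% annualized return and 1.29 Sharpe ratio for the Factor-GAN long-short portfolio would come out of a calculation of this shape.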
Topics: Investments; Deep Learning; Models, Economic; Commerce; Neural Networks, Computer; Humans; Artificial Intelligence; China
PubMed: 38917175
DOI: 10.1371/journal.pone.0306094
PloS One 2024
Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking from CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device, without considering patient ethnicity. In addition, previous works have considered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible, both impairing integration into the clinical workflow. Here, our aim is to analyze the clinical applicability of a lightweight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks with multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. The localization performance of our approach resulted in a mean distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. This performance was clinically acceptable (i.e., error ≤ 2 mm) for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully (i.e., with ≤ 2 mm or ≤ 2° error) in 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference (at the p < 0.05 level), as measured by the Mann-Whitney U test with Benjamini-Hochberg correction. Moreover, our method is computationally light, providing predictions with mean durations of 0.77 s and 2.27 s with single-machine GPU and CPU computing, respectively. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
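The mean distance and ≤ 2 mm success rate reported above are computed from predicted and ground-truth landmark coordinates. A minimal sketch with hypothetical toy coordinates (not the study's data):

```python
import numpy as np

def landmark_errors(pred, gt, threshold_mm=2.0):
    """Mean/std of per-landmark Euclidean error (mm) and the share of
    landmarks localized within the clinical threshold."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    dist = np.linalg.norm(pred - gt, axis=1)        # error per landmark, mm
    within = float(np.mean(dist <= threshold_mm))   # success rate
    return dist.mean(), dist.std(), within

# Hypothetical predictions vs. ground truth for four 3D landmarks (mm).
pred = [[0, 0, 1], [10, 10, 10], [5, 5, 8], [2, 0, 0]]
gt   = [[0, 0, 0], [10, 13, 10], [5, 5, 6], [2, 0, 1.5]]
mean_mm, std_mm, sdr = landmark_errors(pred, gt)
print(f"mean error {mean_mm:.2f} mm, {sdr:.0%} within 2 mm")
```

Aggregating these per-scan statistics over a cohort yields figures of the kind reported for the Finnish and Thai cohorts.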
Topics: Humans; Cephalometry; Deep Learning; Cone-Beam Computed Tomography; Imaging, Three-Dimensional; Male; Female; Anatomic Landmarks; Finland; Adult; Thailand; Young Adult; Adolescent
PubMed: 38917161
DOI: 10.1371/journal.pone.0305947
PloS One 2024
In recent years, the classification and identification of surface materials on Earth have emerged as fundamental yet challenging research topics in the fields of geoscience and remote sensing (RS). The classification of multi-modality RS data still poses certain challenges, despite the notable advancements achieved by deep learning technology in RS image classification. In this work, a deep learning architecture based on a convolutional neural network (CNN) is proposed for the classification of multi-modality RS image data. The network, called CMR-Net, introduces a cross-modality reconstruction (CMR) module in the multi-modality feature fusion stage: a plug-and-play module for cross-modal fusion reconstruction that compactly integrates features extracted from multiple modalities of remote sensing data, enabling effective information exchange and feature integration. To validate the proposed scheme, extensive experiments were conducted on two multi-modality RS datasets: the Houston2013 dataset, consisting of hyperspectral (HS) and light detection and ranging (LiDAR) data, and the Berlin dataset, comprising HS and synthetic aperture radar (SAR) data. The results demonstrate the effectiveness and superiority of the proposed CMR-Net compared to several state-of-the-art methods for multi-modality RS data classification.
Topics: Remote Sensing Technology; Neural Networks, Computer; Deep Learning; Image Processing, Computer-Assisted; Algorithms
PubMed: 38917124
DOI: 10.1371/journal.pone.0304999
PloS One 2024
Climate variability has become one of the most pressing issues of our time, affecting various aspects of the environment, including the agriculture sector. This study examines the impact of climate variability on maize yield for all agro-ecological zones and administrative regions in Ghana using annual data from 1992 to 2019. The study also employs a stacking ensemble learning model (SELM) to predict maize yield in the different regions, taking random forest (RF), support vector machine (SVM), gradient boosting (GB), decision tree (DT), and linear regression (LR) as base models. The findings reveal that maize production in the regions of Ghana is inconsistent, with some regions showing high variability. All the climate variables considered have a positive impact on maize yield, with lower temperature variability in the Guinea savanna zones and higher temperature variability in the Volta Region. Carbon dioxide (CO2) also plays a significant role in predicting maize yield across all regions of Ghana. Among the machine learning models utilized, the stacking ensemble model consistently performed better in many regions, such as the Western, Upper East, Upper West, and Greater Accra regions. These findings are important for understanding the impact of climate variability on maize yield in Ghana; they highlight regional disparities in yield and the need for advanced forecasting techniques to inform agricultural planning and decision-making on food security.
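A stacking ensemble like the SELM described above fits base models, collects their out-of-fold predictions as meta-features, and trains a meta-learner on those predictions. A minimal NumPy-only sketch with two toy base learners standing in for the RF/SVM/GB/DT/LR bases (the synthetic data and learners are hypothetical, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))          # e.g., a climate variable
y = 1.5 * X[:, 0] + rng.normal(0, 0.5, 200)    # e.g., maize yield

def fit_linear(Xtr, ytr):
    """Least-squares linear base learner; returns a predictor."""
    A = np.c_[Xtr, np.ones(len(Xtr))]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return lambda Xn: np.c_[Xn, np.ones(len(Xn))] @ coef

def fit_mean(Xtr, ytr):
    """Trivial base learner: predicts the training mean."""
    m = ytr.mean()
    return lambda Xn: np.full(len(Xn), m)

base_fitters = [fit_linear, fit_mean]

# Out-of-fold base predictions become the meta-learner's features.
meta_X = np.zeros((len(X), len(base_fitters)))
for fold in np.array_split(np.arange(len(X)), 5):
    train = np.setdiff1d(np.arange(len(X)), fold)
    for j, fitter in enumerate(base_fitters):
        meta_X[fold, j] = fitter(X[train], y[train])(X[fold])

# Meta-learner: least squares on the stacked base predictions.
A = np.c_[meta_X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
stacked_pred = A @ w

rmse = lambda p: float(np.sqrt(np.mean((p - y) ** 2)))
print("stacked RMSE:", rmse(stacked_pred))
print("mean-baseline RMSE:", rmse(np.full(len(y), y.mean())))
```

Out-of-fold prediction is the key design choice: it prevents the meta-learner from rewarding base models that merely memorized the training data.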
Topics: Zea mays; Ghana; Machine Learning; Climate Change; Support Vector Machine; Agriculture; Climate; Crops, Agricultural; Carbon Dioxide; Temperature
PubMed: 38917094
DOI: 10.1371/journal.pone.0305762
PloS One 2024
Deep learning, a subset of machine learning that utilizes neural networks, has seen significant advancements in recent years. These advancements have led to breakthroughs in a wide range of fields, from natural language processing to computer vision, and have the potential to revolutionize many industries and organizations. Deep learning models have also demonstrated exceptional performance in the identification and mapping of seagrass images. However, these models, particularly the popular convolutional neural networks (CNNs), require architectural engineering and hyperparameter tuning. This paper proposes a Deep Neuroevolutionary (DNE) model that automates the architectural engineering and hyperparameter tuning of CNN models by developing and using a novel metaheuristic algorithm named 'Boosted Atomic Orbital Search (BAOS)'. The proposed BAOS is an improved version of the recently proposed Atomic Orbital Search (AOS) algorithm, which is based on principles of the atomic model and quantum mechanics; BAOS leverages the Lévy flight technique to boost the performance of AOS. The proposed DNE algorithm (BAOS-CNN) is trained, evaluated, and compared with six popular optimisation algorithms on a patch-based multi-species seagrass dataset. The BAOS-CNN model achieves the highest overall accuracy (97.48%) among the seven evolutionary-based CNN models. It also achieves state-of-the-art overall accuracies of 92.30% and 93.5% on the publicly available four-class and five-class versions of the 'DeepSeagrass' dataset, respectively. The multi-species seagrass dataset is available at: https://ro.ecu.edu.au/datasets/141/.
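The Lévy flight perturbation used to boost AOS is commonly generated with Mantegna's algorithm, which mixes many small moves with rare long jumps. A minimal sketch (β = 1.5 is a typical choice in metaheuristics, not necessarily the paper's setting):

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, beta=1.5, rng=None):
    """Heavy-tailed Levy-flight steps via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    rng = rng or np.random.default_rng(0)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)

steps = levy_steps(10_000)
# Mostly small moves punctuated by rare long jumps (heavy tail):
print(np.median(np.abs(steps)), np.abs(steps).max())
```

In a search algorithm, candidate solutions are perturbed by such steps, balancing local refinement against occasional long-range exploration.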
Topics: Algorithms; Neural Networks, Computer; Deep Learning
PubMed: 38917071
DOI: 10.1371/journal.pone.0281568