Toxicological Sciences : An Official... Jan 2023
Physiologically based pharmacokinetic (PBPK) models are useful tools in drug development and risk assessment of environmental chemicals. PBPK model development requires the collection of species-specific physiological parameters and chemical-specific absorption, distribution, metabolism, and excretion (ADME) parameters, which can be a time-consuming and expensive process. This raises a need to create computational models capable of predicting input parameter values for PBPK models, especially for new compounds. In this review, we summarize an emerging paradigm for integrating PBPK modeling with machine learning (ML) or artificial intelligence (AI)-based computational methods. This paradigm includes 3 steps: (1) obtain time-concentration PK data and/or ADME parameters from publicly available databases, (2) develop ML/AI-based approaches to predict ADME parameters, and (3) incorporate the ML/AI models into PBPK models to predict PK summary statistics (eg, area under the curve and maximum plasma concentration). We also discuss a neural network architecture, the "neural ordinary differential equation" (Neural-ODE), that can provide better predictive capabilities than other ML methods when used to directly predict time-series PK profiles. In order to support applications of ML/AI methods for PBPK model development, several challenges should be addressed: (1) as more data become available, it is important to expand the training set to include structurally diverse compounds and thereby improve the prediction accuracy of ML/AI models; (2) due to the black box nature of many ML models, lack of sufficient interpretability is a limitation; (3) Neural-ODE has great potential for generating time-series PK profiles for new compounds with limited ADME information, but its application remains to be explored. Despite existing challenges, ML/AI approaches will continue to facilitate the efficient development of robust PBPK models for a large number of chemicals.
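The PK summary statistics named in this abstract (AUC and maximum plasma concentration, Cmax) can be illustrated with a minimal one-compartment oral-absorption model, a far simpler structure than a full PBPK model. This is an illustrative sketch only; all parameter values (dose, ka, ke, volume) are hypothetical:

```python
# Minimal sketch: a one-compartment oral-absorption PK model integrated with a
# simple Euler scheme, producing the summary statistics named in the review
# (AUC and Cmax). All parameter values are hypothetical.

def simulate_pk(dose=100.0, ka=1.0, ke=0.2, volume=10.0, t_end=24.0, dt=0.01):
    """Return (times, concentrations) for a one-compartment model.

    dA_gut/dt = -ka * A_gut                  (absorption from the gut depot)
    dC/dt     =  ka * A_gut / V - ke * C     (plasma concentration)
    """
    a_gut, conc = dose, 0.0
    times, concs = [0.0], [0.0]
    t = 0.0
    while t < t_end:
        d_gut = -ka * a_gut
        d_conc = ka * a_gut / volume - ke * conc
        a_gut += d_gut * dt
        conc += d_conc * dt
        t += dt
        times.append(t)
        concs.append(conc)
    return times, concs

def summary_stats(times, concs):
    """Cmax and trapezoidal AUC over the simulated window."""
    cmax = max(concs)
    auc = sum((concs[i] + concs[i + 1]) / 2 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    return cmax, auc

times, concs = simulate_pk()
cmax, auc = summary_stats(times, concs)
```

In the paradigm described above, an ML/AI model would supply the chemical-specific parameters (here ka, ke, volume) for a new compound, after which the ODE system yields the predicted PK profile and its summary statistics.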
Topics: Artificial Intelligence; Drug Development; Machine Learning; Models, Biological; Neural Networks, Computer; Risk Assessment
PubMed: 36156156
DOI: 10.1093/toxsci/kfac101 -
Current Opinion in Ophthalmology Sep 2020
Review
PURPOSE OF REVIEW
In this article, we review the current state of artificial intelligence applications in retinopathy of prematurity (ROP) and provide insight on challenges as well as strategies for bringing these algorithms to the bedside.
RECENT FINDINGS
In the past few years, there has been a dramatic shift from machine learning approaches based on feature extraction to 'deep' convolutional neural networks for artificial intelligence applications. Several artificial intelligence approaches for ROP have demonstrated adequate proof-of-concept performance in research studies. The next steps are to determine whether these algorithms are robust to variable clinical and technical parameters in practice. Integration of artificial intelligence into ROP screening and treatment is limited by the generalizability of the algorithms, that is, their ability to maintain performance on unseen data, and by the challenge of integrating the technology into new or existing clinical workflows.
SUMMARY
Real-world implementation of artificial intelligence for ROP diagnosis will require massive efforts targeted at developing standards for data acquisition, true external validation, and demonstration of feasibility. We must now focus on ethical, technical, clinical, regulatory, and financial considerations to bring this technology to the infant bedside to realize the promise offered by this technology to reduce preventable blindness from ROP.
Topics: Algorithms; Artificial Intelligence; Humans; Image Interpretation, Computer-Assisted; Infant, Newborn; Machine Learning; Neural Networks, Computer; Retinopathy of Prematurity
PubMed: 32694266
DOI: 10.1097/ICU.0000000000000680 -
Journal of Diabetes Science and... Mar 2018
In the past decade, diabetes management has been transformed by the addition of continuous glucose monitoring and insulin pump data. More recently, a wide variety of functions and physiologic variables, such as heart rate, hours of sleep, number of steps walked, and movement, have become available through wristbands or watches. New data, such as hydration, geolocation, and barometric pressure, will be incorporated in the future. All these parameters, when analyzed, can support decision-making by patients and doctors. Similar new scenarios have appeared in most medical fields, such that in recent years there has been increased interest in the development and application of artificial intelligence (AI) methods for decision support and knowledge acquisition. Multidisciplinary research teams comprising computer engineers and doctors are increasingly common, mirroring the need for cooperation in this new field. AI, as a science, can be defined as the ability to make computers do things that would require intelligence if done by humans. Increasingly, diabetes-related journals have been incorporating publications focused on AI tools applied to diabetes. In summary, diabetes management has undergone a deep transformation that forces diabetologists to incorporate skills from new areas. This newly required knowledge includes AI tools, which have become part of diabetes health care. The aim of this article is to explain, in an easy and plain way, the most widely used AI methodologies, to promote the involvement of health care providers (doctors and nurses) in this field.
Topics: Artificial Intelligence; Decision Support Systems, Clinical; Diabetes Mellitus; Humans; Machine Learning
PubMed: 28539087
DOI: 10.1177/1932296817710475 -
Circulation. Cardiovascular Quality and... Sep 2019
Topics: Artificial Intelligence; Electrocardiography; Forecasting; Humans; Machine Learning
PubMed: 31525077
DOI: 10.1161/CIRCOUTCOMES.119.006021 -
Ophthalmology. Glaucoma 2022
On September 3, 2020, the Collaborative Community on Ophthalmic Imaging conducted its first 2-day virtual workshop on the role of artificial intelligence (AI) and related machine learning techniques in the diagnosis and treatment of various ophthalmic conditions. In a session entitled "Artificial Intelligence for Glaucoma," a panel of glaucoma specialists, researchers, industry experts, and patients convened to share current research on the application of AI to commonly used diagnostic modalities, including fundus photography, OCT imaging, standard automated perimetry, and gonioscopy. The conference participants focused on the use of AI as a tool for disease prediction, highlighted its ability to address inequalities, and presented the limitations of and challenges to its clinical application. The panelists' discussion addressed AI and health equities from clinical, societal, and regulatory perspectives.
Topics: Artificial Intelligence; Diagnostic Imaging; Diagnostic Techniques, Ophthalmological; Glaucoma; Humans; Machine Learning
PubMed: 35218987
DOI: 10.1016/j.ogla.2022.02.010 -
European Urology Focus Jul 2021
Review
A better understanding of the tumor immune microenvironment (TIME) could lead to accurate diagnosis, prognosis, and treatment stratification. Although molecular analyses at the tissue and/or single cell level could reveal the cellular status of the tumor microenvironment, these approaches lack information related to spatial-level cellular distribution, co-organization, and cell-cell interaction in the TIME. With the emergence of computational pathology coupled with machine learning (ML) and artificial intelligence (AI), ML- and AI-driven spatial TIME analyses of pathology images could revolutionize our understanding of the highly heterogeneous and complex molecular architecture of the TIME. In this review we highlight recent studies on spatial TIME analysis of pathology slides using state-of-the-art ML and AI algorithms. PATIENT SUMMARY: This mini-review reports recent advances in machine learning and artificial intelligence for spatial analysis of the tumor immune microenvironment in pathology slides. This information can help in understanding the spatial heterogeneity and organization of cells in patient tumors.
Topics: Artificial Intelligence; Humans; Machine Learning; Neoplasms; Spatial Analysis; Tumor Microenvironment
PubMed: 34353733
DOI: 10.1016/j.euf.2021.07.006 -
Frontiers in Public Health 2023
Meta-Analysis Review
BACKGROUND
Infectious keratitis (IK) is a sight-threatening condition requiring immediate definite treatment. The need for prompt treatment heavily depends on timely diagnosis. The diagnosis of IK, however, is challenged by the drawbacks of the current "gold standard." The poorly differentiated clinical features, the possibility of low microbial culture yield, and the duration for culture are the culprits of delayed IK treatment. Deep learning (DL) is a recent artificial intelligence (AI) advancement that has been demonstrated to be highly promising in making automated diagnosis in IK with high accuracy. However, its exact accuracy is not yet elucidated. This article is the first systematic review and meta-analysis that aims to assess the accuracy of available DL models to correctly classify IK based on etiology compared to the current gold standards.
METHODS
A systematic search was carried out in PubMed, Google Scholar, ProQuest, ScienceDirect, Cochrane, and Scopus. The keywords used were "Keratitis," "Corneal ulcer," "Corneal diseases," "Corneal lesions," "Artificial intelligence," "Deep learning," and "Machine learning." Studies that included slit lamp photography of the cornea and a validity assessment of DL performance were considered. The primary outcomes reviewed were the accuracy and classification capability of the machine learning/DL algorithms. We analyzed the extracted data with MetaXL 5.2 software.
RESULTS
A total of eleven articles from 2002 to 2022 were included, with a total dataset of 34,070 images. All studies used convolutional neural networks (CNNs), with ResNet and DenseNet being the most frequently used architectures. Most AI models outperformed their human counterparts, with a pooled area under the curve (AUC) of 0.851 and accuracy of 96.6% in differentiating IK vs. non-IK, and a pooled AUC of 0.895 and accuracy of 64.38% in classifying bacterial keratitis (BK) vs. fungal keratitis (FK).
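The pooled AUC values in this abstract were produced by the meta-analysis software; for a single classifier, AUC reduces to the Mann-Whitney statistic, the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch, using purely hypothetical scores and labels:

```python
# Minimal sketch of how an AUC like those pooled above is computed for a single
# classifier: the Mann-Whitney formulation, i.e. the probability that a randomly
# chosen positive case scores higher than a randomly chosen negative case.
# The scores and labels below are purely hypothetical.

def auc_from_scores(labels, scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pos/neg pairs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores: 4 infectious-keratitis images (1) and 4 controls (0)
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]

auc = auc_from_scores(labels, scores)  # -> 0.875
```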
CONCLUSION
This study demonstrated that DL algorithms have high potential for diagnosing and classifying IK, with accuracy comparable to, if not better than, that of trained corneal experts. However, various factors, such as the unique architecture of each DL model, overfitting, the image quality of the datasets, and the complex nature of IK itself, still hamper the universal applicability of DL in daily clinical practice.
Topics: Humans; Artificial Intelligence; Keratitis; Algorithms; Machine Learning; Neural Networks, Computer
PubMed: 38074720
DOI: 10.3389/fpubh.2023.1239231 -
Sensors (Basel, Switzerland) Oct 2019
Recently, significant developments have been achieved in the field of artificial intelligence, in particular the introduction of deep learning technology, which has improved learning and prediction accuracy to unprecedented levels, especially when dealing with big data and high-resolution images. Significant developments have also occurred in medical signal processing, measurement techniques, and health monitoring, such as vital biological signs for biomedical systems and the noise and vibration of mechanical systems, which are measured by instruments that generate large data sets. These big data sets, ultimately driven by high population growth, require artificial intelligence techniques to analyse and model. In this Special Issue, papers are presented on the latest signal processing and deep learning techniques used for health monitoring of biomedical and mechanical systems.
Topics: Algorithms; Artificial Intelligence; Deep Learning; Monitoring, Physiologic; Signal Processing, Computer-Assisted
PubMed: 31683518
DOI: 10.3390/s19214727 -
Sensors (Basel, Switzerland) Dec 2022
This paper presents the findings of a detailed and comprehensive review of the technical literature aimed at identifying the current and future research challenges of tactical autonomy. It discusses in detail the current state-of-the-art artificial intelligence (AI), machine learning (ML), and robot technologies, and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the critical technical and operational challenges that arise when attempting to build fully autonomous systems for advanced military and defense applications. Our paper surveys the state-of-the-art AI methods available for tactical autonomy. To the best of our knowledge, this is the first work that addresses the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in robotics and the autonomous systems community. We hope it encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain, and that it serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings.
Topics: Humans; Artificial Intelligence; Robotics; Machine Learning; Physicians; Forecasting
PubMed: 36560285
DOI: 10.3390/s22249916 -
Balkan Medical Journal Jan 2023
Review
In the field of computer science known as artificial intelligence, algorithms imitate reasoning tasks that are typically performed by humans. The techniques that allow machines to learn and improve at tasks such as recognition and prediction, which form the basis of clinical practice, are referred to as machine learning, a subfield of artificial intelligence. The number of artificial intelligence- and machine learning-related publications in clinical journals has grown exponentially, driven by recent developments in computation and the accessibility of simple tools. However, clinicians are often not included in data science teams, which may limit the clinical relevance, explainability, workflow compatibility, and quality-improvement potential of artificial intelligence solutions, resulting in a language barrier between clinicians and artificial intelligence developers. Healthcare practitioners sometimes lack a basic understanding of artificial intelligence research because the approach is difficult for non-specialists to understand. Furthermore, many editors and reviewers of medical publications may not be familiar with the fundamental ideas behind these technologies, which may prevent journals from publishing high-quality artificial intelligence studies or, worse still, allow the publication of low-quality work. In this review, we aim to improve readers' artificial intelligence literacy and critical thinking. To that end, we concentrate on what we consider the 10 most important qualities of artificial intelligence research: valid scientific purpose, high-quality data set, robust reference standard, robust input, no information leakage, optimal bias-variance tradeoff, proper model evaluation, proven clinical utility, transparent reporting, and open science. Before designing a study, one should define a sound scientific purpose. The study should then be backed by a high-quality data set, robust input, and a solid reference standard.
The artificial intelligence development pipeline should prevent information leakage. For the models, optimal bias-variance tradeoff should be achieved, and generalizability assessment must be adequately performed. The clinical value of the final models must also be established. After the study, thought should be given to transparency in publishing the process and results as well as open science for sharing data, code, and models. We hope this work may improve the artificial intelligence literacy and mindset of the readers.
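One of the qualities listed above, "no information leakage," can be made concrete with a small sketch: preprocessing statistics (here, standardization) must be computed on the training split only and then applied unchanged to the test split, so that test data never influence model development. The data values below are hypothetical:

```python
# Minimal sketch of leakage-free preprocessing: fit standardization statistics
# on the training split only, then apply them unchanged to the test split.
# The tiny dataset is hypothetical.

def mean_std(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def standardize(values, m, s):
    return [(v - m) / s for v in values]

data = [4.0, 8.0, 6.0, 2.0, 10.0, 12.0]   # hypothetical feature column
train, test = data[:4], data[4:]           # split BEFORE computing statistics

m, s = mean_std(train)                     # statistics from the training split only
train_scaled = standardize(train, m, s)
test_scaled = standardize(test, m, s)      # test data never influence m or s
```

Fitting the statistics on the full dataset before splitting would leak information about the test distribution into training, inflating the apparent performance, which is exactly the failure mode the review warns against.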
Topics: Humans; Artificial Intelligence; Machine Learning; Algorithms
PubMed: 36578657
DOI: 10.4274/balkanmedj.galenos.2022.2022-11-51