Veterinary Radiology & Ultrasound, Dec 2022 (Review)
The prevalence and pervasiveness of artificial intelligence (AI) applied to medical images in veterinary and human medicine is rapidly increasing. This article provides essential definitions of AI for medical imaging, with a focus on veterinary radiology. Machine learning methods common in medical image analysis are compared, and a detailed description is given of the convolutional neural networks commonly used in deep learning classification and regression models. A brief introduction to natural language processing (NLP) and its utility in machine learning is also provided: NLP can economize the creation of the "truth data" needed when training AI systems for both diagnostic radiology and radiation oncology applications. The goal of this publication is to give veterinarians, veterinary radiologists, and radiation oncologists the background needed to understand AI-focused research projects and publications.
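As a concrete illustration of the convolution operation at the heart of the CNNs this review describes, the sketch below applies a hand-written 2D convolution with a Sobel kernel to a tiny synthetic image (illustrative only; the function, image, and kernel are our own stand-ins, not material from the article):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny synthetic image whose
# right half is bright, as an edge in a radiograph might be.
image = np.zeros((5, 5))
image[:, 2:] = 1.0
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
edges = conv2d(image, sobel_x)
print(edges)  # responds strongly (|value| = 4) at the brightness boundary
```

In a real CNN the kernel weights are learned from labeled images rather than fixed by hand; stacking many such filters with nonlinearities yields the classification and regression models the review covers.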
Topics: Animals; Humans; Artificial Intelligence; Deep Learning; Diagnostic Imaging; Machine Learning; Radiology
PubMed: 36514230
DOI: 10.1111/vru.13160

Machine learning and AI-based approaches for bioactive ligand discovery and GPCR-ligand recognition. Methods (San Diego, Calif.), Aug 2020 (Review)
In the last decade, machine learning and artificial intelligence applications have received a significant boost in performance and attention in both academic research and industry. The success behind most of the recent state-of-the-art methods can be attributed to the latest developments in deep learning. When applied to various scientific domains that are concerned with the processing of non-tabular data, for example, image or text, deep learning has been shown to outperform not only conventional machine learning but also highly specialized tools developed by domain experts. This review aims to summarize AI-based research for GPCR bioactive ligand discovery with a particular focus on the most recent achievements and research trends. To make this article accessible to a broad audience of computational scientists, we provide instructive explanations of the underlying methodology, including overviews of the most commonly used deep learning architectures and feature representations of molecular data. We highlight the latest AI-based research that has led to the successful discovery of GPCR bioactive ligands. However, an equal focus of this review is on the discussion of machine learning-based technology that has been applied to ligand discovery in general and has the potential to pave the way for successful GPCR bioactive ligand discovery in the future. This review concludes with a brief outlook highlighting the recent research trends in deep learning, such as active learning and semi-supervised learning, which have great potential for advancing bioactive ligand discovery.
Topics: Artificial Intelligence; Deep Learning; Drug Discovery; Ligands; Machine Learning; Neural Networks, Computer; Receptors, G-Protein-Coupled; Software; Supervised Machine Learning
PubMed: 32645448
DOI: 10.1016/j.ymeth.2020.06.016

Methods in Molecular Biology, 2022 (Review)
The discovery and development of drugs is a long and expensive process with a high attrition rate. Computational drug discovery contributes to ligand discovery and optimization by using models that describe the properties of ligands and their interactions with biological targets. In recent years, artificial intelligence (AI) has made remarkable modeling progress, driven by new algorithms and by increases in computing power and storage capacity, which allow large amounts of data to be processed in a short time. This review presents the current state of the art of AI methods applied to drug discovery, with a focus on structure- and ligand-based virtual screening, library design and high-throughput analysis, drug repurposing and drug sensitivity, de novo design, chemical reactions and synthetic accessibility, ADMET, and quantum mechanics.
Topics: Artificial Intelligence; Deep Learning; Drug Design; Ligands; Machine Learning
PubMed: 34731478
DOI: 10.1007/978-1-0716-1787-8_16

Clinical Epigenetics, Apr 2020 (Review)
BACKGROUND
Machine learning is a sub-field of artificial intelligence that utilises large data sets to make predictions about future events. Although most algorithms used in machine learning were developed as far back as the 1950s, the advent of big data, in combination with dramatically increased computing power, has spurred renewed interest in the technology over the last two decades.
MAIN BODY
Within the medical field, machine learning shows promise in the development of assistive clinical tools for the detection of diseases such as cancer and for the prediction of disease. Recent advances in deep learning, a sub-discipline of machine learning that requires less user input but more data and processing power, have provided even greater promise in assisting physicians to reach accurate diagnoses. Within genetics and its sub-field epigenetics, both prime examples of complex data, machine learning methods are on the rise, as the field of personalised medicine aims to treat the individual based on their genetic and epigenetic profiles.
CONCLUSION
We now have an ever-growing number of reported epigenetic alterations in disease, and this offers a chance to increase the sensitivity and specificity of future diagnostics and therapies. To date, however, only a limited number of studies have applied machine learning to epigenetics; these span a wide variety of disease states and have mostly used supervised machine learning methods.
Topics: DNA Methylation; Diagnosis; Disease; Epigenesis, Genetic; Epigenomics; Humans; Machine Learning; Precision Medicine; Supervised Machine Learning; Unsupervised Machine Learning
PubMed: 32245523
DOI: 10.1186/s13148-020-00842-4

PLoS Medicine, Dec 2018
Machine Learning Special Issue Guest Editors Suchi Saria, Atul Butte, and Aziz Sheikh cut through the hyperbole with an accessible and accurate portrayal of the forefront of machine learning in clinical translation.
Topics: Artificial Intelligence; Diagnosis, Computer-Assisted; Humans; Machine Learning; Medicine
PubMed: 30596635
DOI: 10.1371/journal.pmed.1002721

Computational Intelligence and..., 2022
Recently, the domain of artificial intelligence (AI) has expanded to include finance, education, health, and mining. AI shapes the performance of systems built on new technologies, especially in the educational environment. The multiagent system (MAS) is an intelligent-system paradigm for facilitating the e-learning process in educational settings: it allows agents to interact easily, which in turn supports the use of feature selection. Feature selection methods select the important and relevant features from a database, helping machine learning algorithms achieve high performance. This paper proposes a multiagent-based system that combines machine learning algorithms with feature selection methods to enhance the e-learning process by predicting pass-or-fail results. The univariate and Extra Trees feature selection methods are used to select the essential attributes from the database. Five machine learning algorithms, Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), Naive Bayes (NB), and K-nearest neighbors (KNN), are applied to both the full and the selected feature sets. The results showed that the algorithms trained on features chosen by the Extra Trees method achieved the highest performance under both cross-validation and test-set evaluation.
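The pipeline the abstract describes, tree-based feature selection followed by several classifiers scored by cross-validation, can be sketched with scikit-learn. The synthetic dataset and the particular hyperparameters below are our own illustrative choices, not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a student-records table: 300 rows, 20 features,
# only 5 of which are informative for the pass/fail label.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Extra Trees ranks features by impurity-based importance;
# SelectFromModel keeps those scoring above the mean importance.
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0))
X_sel = selector.fit_transform(X, y)

results = {}
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    results[name] = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```

The same pattern extends to the paper's other classifiers (DT, NB, KNN) by adding them to the list.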
Topics: Algorithms; Artificial Intelligence; Bayes Theorem; Computer-Assisted Instruction; Machine Learning; Support Vector Machine
PubMed: 35140765
DOI: 10.1155/2022/2941840

Medical Physics, Jan 2022
PURPOSE
Organ-at-risk contouring is still a bottleneck in radiotherapy, with many deep learning methods falling short of promised results when evaluated on clinical data. We investigate the accuracy and time savings resulting from the use of an interactive machine learning method for an organ-at-risk contouring task.
METHODS
We implement an open-source interactive machine learning software application that facilitates corrective annotation of deep-learning-generated contours on X-ray CT images. A trained physician contoured 933 hearts using our software by delineating the first image, starting model training, and then correcting the model predictions for all subsequent images. These corrections were added to the training data, which was used to continuously train the assisting model. Of the 933 hearts, the same physician also contoured the first 10 and the last 10 in Eclipse (Varian) to enable comparison in terms of accuracy and duration.
RESULTS
We find strong agreement with manual delineations, with a Dice score of 0.95. Annotations created with corrective annotation also take less time as more images are annotated, yielding substantial time savings over manual methods. After 923 images had been delineated, hearts took on average 2 min 2 s to delineate, including the time to evaluate the initial model prediction and apply the needed corrections, compared with 7 min 1 s when delineating manually.
CONCLUSIONS
Our experiment demonstrates that interactive machine learning with corrective annotation provides a fast and accessible way for non-computer-scientists to train deep learning models to segment their own structures of interest as part of routine clinical workflows.
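The Dice score used above to quantify agreement between contours is straightforward to compute from two binary masks; the toy masks below are illustrative, not data from the study:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Two 6x6-pixel square masks on a 10x10 grid, offset by one row:
# 30 of 36 pixels overlap, so Dice = 2*30 / (36 + 36) = 5/6.
m1 = np.zeros((10, 10)); m1[2:8, 2:8] = 1
m2 = np.zeros((10, 10)); m2[3:9, 2:8] = 1
print(round(dice_score(m1, m2), 3))  # 0.833
```

A Dice score of 1.0 means perfect overlap; the study's reported 0.95 indicates near-complete agreement between the interactive and manual contours.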
Topics: Deep Learning; Heart; Image Processing, Computer-Assisted; Machine Learning; Tomography, X-Ray Computed
PubMed: 34783028
DOI: 10.1002/mp.15353

Microscopy (Oxford, England), Feb 2022 (Review)
We review the growing use of machine learning in electron microscopy (EM), driven in part by the availability of fast detectors operating at kilohertz frame rates, which produce data sets too large to process with manually implemented algorithms. We summarize the various network architectures and error metrics that have been applied to a range of EM-related problems, including denoising and inpainting. We then review their application in both the physical and life sciences, highlighting how conventional networks and training data have been specifically modified for EM.
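As an example of the error metrics such reviews compare, peak signal-to-noise ratio (PSNR), a standard measure of denoising quality, can be computed directly. The random "micrograph" below is a synthetic stand-in, not EM data:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                            # synthetic "micrograph"
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print(f"{psnr(clean, noisy):.1f} dB")                   # around 26 dB for sigma = 0.05
```

A denoising network is then judged by how much it raises the PSNR of its output relative to the noisy input.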
Topics: Algorithms; Cryoelectron Microscopy; Deep Learning; Machine Learning; Microscopy, Electron
PubMed: 35275181
DOI: 10.1093/jmicro/dfab043

World Journal of Gastroenterology, May 2021 (Review)
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high-dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity can be especially well leveraged on histopathological images, which capture cellular and structural features in their high-resolution, microscopic perspectives. These methodologies have already demonstrated promising performance in a variety of applications, such as disease classification, cancer grading, structural and cellular localization, and prognostic prediction. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, making these disciplines prime candidates for the integration of these technologies. Gastroenterologists have already been primed to consider the impact of these algorithms, as the development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for the future integration of these methods and for driving interdisciplinary collaboration on emerging studies. To provide an overview of the application of these methodologies to gastrointestinal and hepatological histopathology slides, this review discusses general ML and DL concepts, introduces recent and emerging literature using these methods, and covers the challenges to be addressed in further advancing the field.
Topics: Algorithms; Deep Learning; Humans; Machine Learning
PubMed: 34092975
DOI: 10.3748/wjg.v27.i20.2545

The Plant Journal, Sep 2022 (Review)
Advances in high-throughput omics technologies are leading plant biology research into the era of big data. Machine learning (ML) plays an important role in plant systems biology because of its excellent performance and wide applicability in the analysis of big data. However, to achieve ideal performance, supervised ML algorithms require large numbers of labeled samples as training data. In some cases it is impossible, or prohibitively expensive, to obtain enough labeled training data; here the paradigms of unsupervised learning (UL) and semi-supervised learning (SSL) play an indispensable role. In this review, we first introduce the basic concepts of ML techniques, as well as some representative UL and SSL algorithms, including clustering, dimensionality reduction, self-supervised learning (self-SL), positive-unlabeled (PU) learning, and transfer learning. We then review recent advances and applications of the UL and SSL paradigms in both plant systems biology and plant phenotyping research. Finally, we discuss the limitations and highlight the significance and challenges of UL and SSL strategies in plant systems biology.
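A minimal sketch of the unsupervised workflow described above, dimensionality reduction followed by clustering, using scikit-learn; the synthetic matrix stands in for an omics expression table and is our own construction:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 60 "samples" x 100 "genes": two groups drawn from shifted distributions,
# mimicking two conditions with different expression profiles.
group_a = rng.normal(0.0, 1.0, size=(30, 100))
group_b = rng.normal(2.0, 1.0, size=(30, 100))
X = np.vstack([group_a, group_b])

X_2d = PCA(n_components=2).fit_transform(X)                  # no labels used
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)
print(labels[:30].tolist(), labels[30:].tolist())            # groups separate cleanly
```

No labels are ever supplied; the structure is recovered from the data alone, which is exactly what makes UL attractive when labeled plant samples are scarce.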
Topics: Algorithms; Machine Learning; Plants; Supervised Machine Learning; Systems Biology
PubMed: 35821601
DOI: 10.1111/tpj.15905