Indian Journal of Cancer 2021
Review
Artificial intelligence (AI) has found its way into every sphere of human life including the field of medicine. Detection of cancer might be AI's most altruistic and convoluted challenge to date in the field of medicine. Embedding AI into various aspects of cancer diagnostics would help with the tedious, repetitive, time-consuming job of lesion detection, remove opportunities for human error, and cut costs and time. This would be of great value in cancer screening programs. By using AI algorithms, data from digital images from radiology and pathology that are imperceptible to the human eye can be identified (radiomics and pathomics). Correlating radiomics and pathomics with clinico-demographic-therapy-morbidity-mortality profiles will lead to a greater understanding of cancers. Specific imaging phenotypes have been found to be associated with specific gene-determined molecular pathways involved in cancer pathogenesis (radiogenomics). All these developments would not only help to personalize oncologic practice but also lead to the development of new imaging biomarkers. AI algorithms in oncoimaging and oncopathology will broadly have the following uses: cancer screening (detection of lesions), characterization and grading of tumors, and clinical decision-making and prognostication. However, AI cannot be a foolproof panacea, nor can it supplant the role of humans. It can, however, be a powerful and useful complement to human insight and deeper understanding. Multiple issues like standardization, validity, ethics, privacy, finances, legal liability, training, accreditation, etc., need to be overcome before the vast potential of AI in diagnostic oncology can be fully harnessed.
Topics: Artificial Intelligence; Deep Learning; Humans; Machine Learning; Neoplasms
PubMed: 34975094
DOI: 10.4103/ijc.IJC_399_20
The Scientific World Journal 2015
Topics: Machine Learning; Medical Informatics
PubMed: 25692180
DOI: 10.1155/2015/825267
Bioinformatics (Oxford, England) Oct 2019
MOTIVATION
In a predictive modeling setting, if sufficient details of the system behavior are known, one can build and use a simulation for making predictions. When sufficient system details are not known, one typically turns to machine learning, which builds a black-box model of the system using a large dataset of input sample features and outputs. We consider a setting which is between these two extremes: some details of the system mechanics are known but not enough for creating simulations that can be used to make high quality predictions. In this context we propose using approximate simulations to build a kernel for use in kernelized machine learning methods, such as support vector machines. The results of multiple simulations (under various uncertainty scenarios) are used to compute similarity measures between every pair of samples: sample pairs are given a high similarity score if they behave similarly under a wide range of simulation parameters. These similarity values, rather than the original high dimensional feature data, are used to build the kernel.
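The pairwise-similarity construction described above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's models: the toy `simulate` function and all parameter choices here are hypothetical stand-ins for a real approximate mechanistic simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(sample, theta):
    # Toy stand-in for an approximate mechanistic simulation:
    # returns a discrete outcome for one sample under parameters theta.
    return int(np.dot(sample, theta) > 0)

# Each row is one sample's (high-dimensional) feature vector.
X = rng.normal(size=(6, 4))

# Run every sample through many simulations with perturbed parameters
# (the "various uncertainty scenarios").
thetas = rng.normal(size=(50, 4))
outcomes = np.array([[simulate(x, th) for th in thetas] for x in X])

# SimKern-style similarity: fraction of scenarios in which two samples
# behave the same. This matrix, not the raw features, becomes the kernel
# (e.g. passed to an SVM implementation that accepts precomputed kernels).
K = (outcomes[:, None, :] == outcomes[None, :, :]).mean(axis=2)

assert K.shape == (6, 6)
assert np.allclose(np.diag(K), 1.0)  # a sample always agrees with itself
```

Note that a similarity matrix built this way is symmetric by construction, but positive semi-definiteness is not guaranteed in general; kernelized methods may need a correction step for arbitrary similarity functions.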
RESULTS
We demonstrate and explore the simulation-based kernel (SimKern) concept using four synthetic complex systems-three biologically inspired models and one network flow optimization model. We show that, when the number of training samples is small compared to the number of features, the SimKern approach dominates over no-prior-knowledge methods. This approach should be applicable in all disciplines where predictive models are sought and informative yet approximate simulations are available.
AVAILABILITY AND IMPLEMENTATION
The Python SimKern software, the demonstration models (in MATLAB, R), and the datasets are available at https://github.com/davidcraft/SimKern.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
Topics: Machine Learning; Software; Support Vector Machine
PubMed: 30903692
DOI: 10.1093/bioinformatics/btz199
Anesthesiology Clinics Sep 2021
Review
With the tremendous volume of data captured during surgeries and procedures, critical care, and pain management, the field of anesthesiology is uniquely suited for the application of machine learning, neural networks, and closed loop technologies. In the past several years, this area has expanded immensely in both interest and clinical applications. This article provides an overview of the basic tenets of machine learning, neural networks, and closed loop devices, with emphasis on the clinical applications of these technologies.
Topics: Anesthesia; Anesthesiology; Artificial Intelligence; Deep Learning; Humans; Machine Learning
PubMed: 34392886
DOI: 10.1016/j.anclin.2021.03.012
Critical Care (London, England) Apr 2019
Topics: Data Analysis; Evidence-Based Practice; Humans; Machine Learning
PubMed: 31014378
DOI: 10.1186/s13054-019-2424-7
International Journal of Molecular... Jun 2022
In recent years, deep learning has emerged as a highly active research field, achieving great success in various machine learning areas, including image processing, speech recognition, and natural language processing, and now rapidly becoming a dominant tool in biomedicine [...].
Topics: Computational Biology; Deep Learning; Image Processing, Computer-Assisted; Machine Learning; Natural Language Processing
PubMed: 35743052
DOI: 10.3390/ijms23126610
Sensors (Basel, Switzerland) Jun 2021
Review
Embedded systems technology is undergoing a phase of transformation owing to novel advancements in computer architecture and breakthroughs in machine learning applications. The areas of application of embedded machine learning (EML) include accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there is a critical drawback to the efficient implementation of ML algorithms in embedded applications: machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-constrained environments such as embedded and mobile devices. To implement these compute- and memory-intensive algorithms efficiently within the embedded and mobile computing space, innovative optimization techniques are required at both the algorithm and hardware levels. To this end, this survey explores current research trends in this area. First, we present a brief overview of compute-intensive machine learning algorithms such as hidden Markov models (HMMs), k-nearest neighbors (k-NNs), support vector machines (SVMs), Gaussian mixture models (GMMs), and deep neural networks (DNNs). We then consider the optimization techniques currently adopted to squeeze these computationally and memory-intensive algorithms into resource-limited embedded and mobile environments, and discuss the implementation of these algorithms in microcontroller units, mobile devices, and hardware accelerators. Finally, we give a comprehensive overview of key application areas of EML technology, point out key research directions, and highlight key take-away lessons for future research in the embedded machine learning domain.
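One family of optimization techniques surveyed in this literature, weight quantization, can be illustrated briefly. This is a generic sketch of symmetric affine 8-bit quantization, not any specific framework's scheme; the weight values are made up, and real deployments typically rely on per-tensor or per-channel schemes provided by embedded ML toolchains.

```python
import numpy as np

# Toy float32 weight tensor standing in for a trained model layer.
w = np.array([-0.82, 0.11, 0.45, -0.03, 0.98], dtype=np.float32)

# Symmetric 8-bit quantization: map the float range onto int8 so the
# weights occupy a quarter of the memory, a key step for fitting models
# into resource-constrained embedded devices.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to check the reconstruction error introduced.
w_hat = q.astype(np.float32) * scale

assert q.dtype == np.int8
assert np.max(np.abs(w - w_hat)) <= scale  # error bounded by one step
```

The trade-off is explicit here: memory shrinks fourfold, at the cost of a bounded per-weight rounding error, which is why quantization is usually validated against task accuracy before deployment.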
Topics: Algorithms; Computers, Handheld; Machine Learning; Neural Networks, Computer; Support Vector Machine
PubMed: 34203119
DOI: 10.3390/s21134412
Machine learning and AI-based approaches for bioactive ligand discovery and GPCR-ligand recognition.
Methods (San Diego, Calif.) Aug 2020
Review
In the last decade, machine learning and artificial intelligence applications have received a significant boost in performance and attention in both academic research and industry. The success behind most of the recent state-of-the-art methods can be attributed to the latest developments in deep learning. When applied to various scientific domains that are concerned with the processing of non-tabular data, for example, image or text, deep learning has been shown to outperform not only conventional machine learning but also highly specialized tools developed by domain experts. This review aims to summarize AI-based research for GPCR bioactive ligand discovery with a particular focus on the most recent achievements and research trends. To make this article accessible to a broad audience of computational scientists, we provide instructive explanations of the underlying methodology, including overviews of the most commonly used deep learning architectures and feature representations of molecular data. We highlight the latest AI-based research that has led to the successful discovery of GPCR bioactive ligands. However, an equal focus of this review is on the discussion of machine learning-based technology that has been applied to ligand discovery in general and has the potential to pave the way for successful GPCR bioactive ligand discovery in the future. This review concludes with a brief outlook highlighting the recent research trends in deep learning, such as active learning and semi-supervised learning, which have great potential for advancing bioactive ligand discovery.
Topics: Artificial Intelligence; Deep Learning; Drug Discovery; Ligands; Machine Learning; Neural Networks, Computer; Receptors, G-Protein-Coupled; Software; Supervised Machine Learning
PubMed: 32645448
DOI: 10.1016/j.ymeth.2020.06.016
Systematic Reviews Dec 2020
BACKGROUND
Despite existing research on text mining and machine learning for title and abstract screening, the role of machine learning within systematic literature reviews (SLRs) for health technology assessment (HTA) remains unclear, given the lack of extensive testing and of guidance from HTA agencies. We sought to address two knowledge gaps: extending ML algorithms to provide a reason for exclusion, in line with current practice, and determining optimal parameter settings for feature-set generation and ML algorithms.
METHODS
We used abstract and full-text selection data from five large SLRs (n = 3089 to 12,769 abstracts) across a variety of disease areas. Each SLR was split into training and test sets. We developed a multi-step algorithm to categorize each citation into the following categories: included; excluded for each PICOS criterion; or unclassified. We used a bag-of-words approach for feature-set generation and compared machine learning algorithms using support vector machines (SVMs), naïve Bayes (NB), and bagged classification and regression trees (CART) for classification. We also compared alternative training set strategies: using full data versus downsampling (i.e., reducing excludes to balance includes/excludes because machine learning algorithms perform better with balanced data), and using inclusion/exclusion decisions from abstract versus full-text screening. Performance comparisons were in terms of specificity, sensitivity, accuracy, and matching the reason for exclusion.
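The two data-preparation steps described above, downsampling and bag-of-words feature generation with a minimum word count, can be sketched as follows. The citations, labels, and threshold here are purely illustrative (the paper's best model dropped words occurring fewer than five times; a threshold of 2 is used for this tiny toy set).

```python
import random
from collections import Counter

random.seed(0)

# Illustrative screening data: (abstract text, include/exclude label);
# excludes dominate, as in real SLR abstract screening.
citations = (
    [("randomized controlled trial of therapy in adults", "incl")] * 3
    + [("in vitro study of cell line response", "excl")] * 27
)

# Downsampling: keep every include, draw an equal number of excludes,
# yielding the balanced training set the abstract describes.
incl = [c for c in citations if c[1] == "incl"]
excl = [c for c in citations if c[1] == "excl"]
balanced = incl + random.sample(excl, k=len(incl))

# Bag-of-words feature set over the balanced data, dropping rare words.
counts = Counter(w for text, _ in balanced for w in text.split())
vocab = sorted(w for w, c in counts.items() if c >= 2)
features = [[text.split().count(w) for w in vocab] for text, _ in balanced]

assert len(balanced) == 2 * len(incl)
assert all(len(row) == len(vocab) for row in features)
```

The resulting feature matrix is what would then be handed to a classifier such as an SVM, naïve Bayes, or bagged CART model for the include/exclude decision.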
RESULTS
The best-fitting model (optimized sensitivity and specificity) was based on the SVM algorithm using training data based on full-text decisions, downsampling, and excluding words occurring fewer than five times. The sensitivity and specificity of this model ranged from 94 to 100%, and 54 to 89%, respectively, across the five SLRs. On average, 75% of excluded citations were excluded with a reason and 83% of these citations matched the reviewers' original reason for exclusion. Sensitivity significantly improved when both downsampling and abstract decisions were used.
CONCLUSIONS
ML algorithms can improve the efficiency of the SLR process and the proposed algorithms could reduce the workload of a second reviewer by identifying exclusions with a relevant PICOS reason, thus aligning with HTA guidance. Downsampling can be used to improve study selection, and improvements using full-text exclusions have implications for a learn-as-you-go approach.
Topics: Algorithms; Bayes Theorem; Data Mining; Humans; Machine Learning; Support Vector Machine; Systematic Reviews as Topic
PubMed: 33308292
DOI: 10.1186/s13643-020-01520-5
PLoS Computational Biology Jul 2019
Review
Omic data analysis is steadily growing as a driver of basic and applied molecular biology research. Core to the interpretation of complex and heterogeneous biological phenotypes are computational approaches in the fields of statistics and machine learning. In parallel, constraint-based metabolic modeling has established itself as the main tool to investigate large-scale relationships between genotype, phenotype, and environment. The development and application of these methodological frameworks have occurred independently for the most part, whereas the potential of their integration for biological, biomedical, and biotechnological research is less known. Here, we describe how machine learning and constraint-based modeling can be combined, reviewing recent works at the intersection of both domains and discussing the mathematical and practical aspects involved. We overlap systematic classifications from both frameworks, making them accessible to nonexperts. Finally, we delineate potential future scenarios, propose new joint theoretical frameworks, and suggest concrete points of investigation for this joint subfield. A multiview approach merging experimental and knowledge-driven omic data through machine learning methods can incorporate key mechanistic information in an otherwise biologically-agnostic learning process.
Topics: Computational Biology; Deep Learning; Genome; Genotype; Machine Learning; Metabolic Networks and Pathways; Phenotype
PubMed: 31295267
DOI: 10.1371/journal.pcbi.1007084