Journal of Integrative Bioinformatics Dec 2023 (Review)
Epilepsy is a neurological disorder (the third most common, following stroke and migraines). A key aspect of its diagnosis is the presence of seizures that occur without a known cause and the potential for new seizures to occur. Machine learning has shown potential as a cost-effective alternative for rapid diagnosis. In this study, we review the current state of machine learning in the detection and prediction of epileptic seizures. The objective of this study is to portray the existing machine learning methods for seizure prediction. Internet bibliographical searches were conducted to identify relevant literature on the topic. Through cross-referencing from key articles, additional references were obtained to provide a comprehensive overview of the techniques. As the aim of this paper is not a pure bibliographical review of the subject, the publications cited here have been selected from among many others based on their number of citations. To implement accurate diagnostic and treatment tools, it is necessary to achieve a balance between prediction time, sensitivity, and specificity. This balance can be achieved using deep learning algorithms. The best performance and results are often achieved by combining multiple techniques and features, but this approach can also increase computational requirements.
Topics: Humans; Deep Learning; Electroencephalography; Seizures; Epilepsy; Machine Learning; Algorithms
PubMed: 38099461
DOI: 10.1515/jib-2023-0002
American Journal of Transplantation Mar 2024
Topics: Transplants; Machine Learning
PubMed: 38061462
DOI: 10.1016/j.ajt.2023.11.015
Biomedical Engineering Online Dec 2023 (Review)
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems across many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools for diagnosing retinal conditions and glaucoma, as well as other ocular diseases. However, designing and implementing AI models using large imaging datasets is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma, as well as the necessary steps and requirements for the successful development of AI models in glaucoma.
Topics: Humans; Artificial Intelligence; Deep Learning; Glaucoma; Machine Learning; Ophthalmology
PubMed: 38102597
DOI: 10.1186/s12938-023-01187-8
Journal of Medical Internet Research Jan 2024
Machine learning (ML) has seen impressive growth in health science research due to its capacity for handling complex data to perform a range of tasks, including unsupervised learning, supervised learning, and reinforcement learning. To aid health science researchers in understanding the strengths and limitations of ML and to facilitate its integration into their studies, we present here a guideline for integrating ML into an analysis through a structured framework, covering steps from framing a research question to study design and analysis techniques for specialized data types.
Topics: Humans; Machine Learning; Reinforcement, Psychology; Research Design; Research Personnel
PubMed: 38289657
DOI: 10.2196/50890
Chemical Reviews Jul 2023 (Review)
Small data are often used in scientific and engineering research due to various constraints, such as time, cost, ethics, privacy, security, and technical limitations in data acquisition. However, because big data have been the focus for the past decade, small data and their challenges have received little attention, even though these challenges are technically more severe in machine learning (ML) and deep learning (DL) studies. Overall, the small data challenge is often compounded by issues such as data diversity, imputation, noise, imbalance, and high dimensionality. Fortunately, the current big data era is characterized by technological breakthroughs in ML, DL, and artificial intelligence (AI), which enable data-driven scientific discovery, and many advanced ML and DL technologies developed for big data have inadvertently provided solutions for small data problems. As a result, significant progress has been made in ML and DL for small data challenges in the past decade. In this review, we summarize and analyze several emerging potential solutions to small data challenges in molecular science, including the chemical and biological sciences. We review both basic machine learning algorithms, such as linear regression, logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM), kernel learning (KL), random forest (RF), and gradient boosting trees (GBT), and more advanced techniques, including artificial neural networks (ANN), convolutional neural networks (CNN), U-Net, graph neural networks (GNN), generative adversarial networks (GAN), long short-term memory (LSTM), autoencoders, transformers, transfer learning, active learning, graph-based semi-supervised learning, the combination of deep learning with traditional machine learning, and physical model-based data augmentation. We also briefly discuss the latest advances in these methods. Finally, we conclude the survey with a discussion of promising trends in small data challenges in molecular science.
Topics: Artificial Intelligence; Machine Learning; Algorithms; Electric Power Supplies; Neural Networks, Computer
PubMed: 37384816
DOI: 10.1021/acs.chemrev.3c00189
Aging Clinical and Experimental Research Nov 2023 (Review)
The increasing access to health data worldwide is driving a resurgence in machine learning research, including data-hungry deep learning algorithms. More computationally efficient algorithms now offer unique opportunities to enhance diagnosis, risk stratification, and individualised approaches to patient management. Such opportunities are particularly relevant for the management of older patients, a group that is characterised by complex multimorbidity patterns and significant interindividual variability in homeostatic capacity, organ function, and response to treatment. Clinical tools that utilise machine learning algorithms to determine the optimal choice of treatment are slowly gaining the necessary approval from governing bodies and being implemented into healthcare, with significant implications for virtually all medical disciplines during the next phase of digital medicine. Beyond obtaining regulatory approval, a crucial element in implementing these tools is the trust and support of the people who use them. In this context, an increased understanding by clinicians of artificial intelligence and machine learning algorithms provides an appreciation of the possible benefits, risks, and uncertainties, and improves the chances of successful adoption. This review provides a broad taxonomy of machine learning algorithms, followed by a more detailed description of each algorithm class, their purpose and capabilities, and examples of their applications, particularly in geriatric medicine. Additional focus is given to the clinical implications and challenges of relying on devices with reduced interpretability, and to the progress made in counteracting the latter via the development of explainable machine learning.
Topics: Humans; Aged; Artificial Intelligence; Algorithms; Machine Learning; Geriatrics
PubMed: 37682491
DOI: 10.1007/s40520-023-02552-2
Journal of Korean Medical Science Feb 2024
Topics: Humans; Machine Learning; Shock, Septic; Clinical Medicine
PubMed: 38317454
DOI: 10.3346/jkms.2024.39.e68
Sensors (Basel, Switzerland) Jul 2023 (Review)
This paper reviews reported machine learning approaches in the field of Brillouin distributed fiber optic sensors (DFOSs). The increasing popularity of Brillouin DFOSs stems from their capability to continuously monitor temperature and strain along kilometer-long optical fibers, rendering them attractive for industrial applications such as the structural health monitoring of large civil infrastructures and pipelines. In recent years, machine learning has been integrated into Brillouin DFOS signal processing, resulting in fast and enhanced temperature, strain, and humidity measurements without increasing the system's cost. Machine learning has also contributed to enhanced spatial resolution in Brillouin optical time domain analysis (BOTDA) systems and shorter measurement times in Brillouin optical frequency domain analysis (BOFDA) systems. This paper provides an overview of the machine learning methodologies applied in Brillouin DFOSs, as well as future perspectives in this area.
Topics: Fiber Optic Technology; Optical Devices; Optical Fibers; Humidity; Machine Learning
PubMed: 37448034
DOI: 10.3390/s23136187
Nutrients Apr 2024 (Review)
In Industry 4.0, where the automation and digitalization of entities and processes are fundamental, artificial intelligence (AI) is increasingly becoming a pivotal tool offering innovative solutions in various domains. In this context, nutrition, a critical aspect of public health, is no exception among the fields influenced by the integration of AI technology. This study aims to comprehensively investigate the current landscape of AI in nutrition, providing a deep understanding of the potential of AI, machine learning (ML), and deep learning (DL) in the nutrition sciences and highlighting potential challenges and future directions. A hybrid approach combining the systematic literature review (SLR) guidelines and the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines was adopted to systematically analyze the scientific literature from a search of major databases on artificial intelligence in the nutrition sciences. A rigorous study selection was conducted using appropriate eligibility criteria, followed by a methodological quality assessment ensuring the robustness of the included studies. This review identifies several AI applications in nutrition, spanning smart and personalized nutrition, dietary assessment, food recognition and tracking, predictive modeling for disease prevention, and disease diagnosis and monitoring. The selected studies demonstrated the versatility of machine learning and deep learning techniques in handling complex relationships within nutritional datasets. This study provides a comprehensive overview of the current state of AI applications in the nutrition sciences and identifies challenges and opportunities. With the rapid advancement of AI, its integration into nutrition holds significant promise to enhance individual nutritional outcomes and optimize dietary recommendations.
Researchers, policymakers, and healthcare professionals can utilize this research to design future projects and support evidence-based decision-making in AI for nutrition and dietary guidance.
Topics: Humans; Artificial Intelligence; Deep Learning; Machine Learning; Nutritional Status; Automation
PubMed: 38613106
DOI: 10.3390/nu16071073
Journal of Medical Internet Research May 2024
The number of papers presenting machine learning (ML) models that are being submitted to and published in the Journal of Medical Internet Research and other JMIR Publications journals has steadily increased. Editors and peer reviewers involved in the review process for such manuscripts often go through multiple review cycles to enhance the quality and completeness of reporting. The use of reporting guidelines or checklists can help ensure consistency in the quality of submitted (and published) scientific manuscripts and, for example, avoid instances of missing information. In this Editorial, the editors of JMIR Publications journals discuss the general JMIR Publications policy regarding authors' application of reporting guidelines and specifically focus on the reporting of ML studies in JMIR Publications journals, using the Consolidated Reporting of Machine Learning Studies (CREMLS) guidelines, with an example of how authors and other journals could use the CREMLS checklist to ensure transparency and rigor in reporting.
Topics: Machine Learning; Humans; Guidelines as Topic; Prognosis; Checklist
PubMed: 38696776
DOI: 10.2196/52508