Scientific Reports, Aug 2023
In a rapidly transforming world, farm data is growing exponentially. Realizing the importance of this data, researchers are looking for new solutions to analyse it and make farming predictions. Artificial Intelligence, with its capacity to handle big data, is rapidly becoming popular. It can also handle non-linear, noisy data and is not limited by the conditions required for conventional data analysis. This study was therefore undertaken to compare the most popular machine learning (ML) algorithms and rank them by their ability to make predictions on sheep farm data spanning 11 years. Data were cleaned and prepared before analysis, with winsorization used for outlier removal. Principal component analysis (PCA) and feature selection (FS) were applied, and on that basis three datasets were created for bodyweight prediction: PCA (only PCA used), PCA + FS (both techniques used for dimensionality reduction), and FS (only feature selection used). Among the 12 ML algorithms evaluated, the correlations between true and predicted bodyweights for the MARS algorithm, Bayesian ridge regression, Ridge regression, Support Vector Machines, Gradient boosting, Random forests, XgBoost, Artificial neural networks, Classification and regression trees, Polynomial regression, K nearest neighbours and Genetic Algorithms were 0.993, 0.992, 0.991, 0.991, 0.991, 0.99, 0.99, 0.984, 0.984, 0.957, 0.949 and 0.734, respectively. The top five algorithms for bodyweight prediction were MARS, Bayesian ridge regression, Ridge regression, Support Vector Machines and Gradient boosting. In total, 12 machine learning models were developed for the prediction of bodyweights in sheep in the present study.
It may be said that machine learning techniques can perform predictions with reasonable accuracy and can thus help in drawing inferences and making forward-looking predictions on farms, supporting economic prosperity, performance improvement and, ultimately, food security.
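The workflow the abstract describes (winsorization for outliers, PCA or feature selection for dimensionality reduction, then ranking regressors by the correlation between true and predicted bodyweights) can be sketched roughly as follows. This is an illustrative outline on synthetic low-rank data, not the study's code; the feature counts, percentile limits and ridge penalty are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))                                 # latent animal traits
X = Z @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(500, 8))  # stand-in farm features
y = 3 * Z[:, 0] + Z[:, 1] + 0.3 * rng.normal(size=500)        # bodyweight proxy

# 1) Winsorization: clip each feature at its 5th/95th percentiles.
lo, hi = np.percentile(X, [5, 95], axis=0)
Xw = np.clip(X, lo, hi)

# 2) PCA via SVD: project centred data onto the top principal components
#    (a feature-selection variant would instead keep the columns most
#    correlated with y, and PCA + FS would chain both steps).
Xc = Xw - Xw.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:3].T                                         # keep 3 components

# 3) One candidate model: ridge regression in closed form.
def ridge_fit(X, y, alpha=1.0):
    A = np.column_stack([np.ones(len(X)), X])
    I = np.eye(A.shape[1]); I[0, 0] = 0                       # leave intercept unpenalized
    return np.linalg.solve(A.T @ A + alpha * I, A.T @ y)

def ridge_predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

# 4) Score by the correlation between true and predicted values on held-out data.
tr, te = slice(0, 400), slice(400, 500)
w = ridge_fit(X_pca[tr], y[tr])
r = np.corrcoef(y[te], ridge_predict(w, X_pca[te]))[0, 1]
print(f"ridge on PCA features: r = {r:.3f}")
```

Repeating step 4 for each candidate model on each of the three datasets yields the kind of correlation ranking the study reports.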
Topics: Animals; Sheep; Artificial Intelligence; Bayes Theorem; Algorithms; Neural Networks, Computer; Machine Learning; Support Vector Machine
PubMed: 37582936
DOI: 10.1038/s41598-023-40528-4
Texas Heart Institute Journal, Mar 2022
Artificial intelligence and machine learning are rapidly gaining popularity in every aspect of our daily lives, and cardiovascular medicine is no exception. Here, we provide physicians with an overview of the past, present, and future of artificial intelligence applications in cardiovascular medicine. We describe essential and powerful examples of machine-learning applications in industry and elsewhere. Finally, we discuss the latest technologic advances, as well as the benefits and limitations of artificial intelligence and machine learning in cardiovascular medicine.
Topics: Algorithms; Artificial Intelligence; Forecasting; Humans; Machine Learning
PubMed: 35481866
DOI: 10.14503/THIJ-20-7527
Current Opinion in Structural Biology, Jun 2022 (Review)
Machine learning methods, in particular convolutional neural networks, have been applied to a variety of problems in cryo-EM and macromolecular crystallographic structure solution. However, they still have only limited acceptance by the community, mainly in areas where they replace repetitive work and allow for easy visual checking, such as particle picking, crystal centering or crystal recognition. With Artificial Intelligence (AI) based protein fold prediction currently revolutionizing the field, it is clear that their scope could be much wider. However, whether we will be able to exploit this potential fully will depend on the manner in which we use machine learning: training data must be well-formulated, methods need to utilize appropriate architectures, and outputs must be critically assessed, which may even require explaining AI decisions.
Topics: Artificial Intelligence; Machine Learning; Neural Networks, Computer; Proteins
PubMed: 35436699
DOI: 10.1016/j.sbi.2022.102368
Journal of Neurophysiology, Dec 2021 (Review)
Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress toward what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitute valid explanations of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Toward this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.
Topics: Artificial Intelligence; Deep Learning; Humans; Neurosciences
PubMed: 34644128
DOI: 10.1152/jn.00195.2021
Neuron, Sep 2019 (Review)
Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
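The notion of inductive bias above can be illustrated with a toy sketch (not from the review; the data, noise level and polynomial degrees are arbitrary choices): two learners fit the same training data almost perfectly, yet their built-in assumptions make them generalize very differently to an input outside the training range.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2 * x_train + 0.3 + 0.05 * rng.normal(size=8)   # noisy linear truth

# Same data, two inductive biases: a rigid linear model vs. a flexible degree-7 model.
lin = np.polyfit(x_train, y_train, 1)
flex = np.polyfit(x_train, y_train, 7)

# A "new situation": an input well outside the training range [0, 1].
x_new = 2.0
truth = 2 * x_new + 0.3
err_lin = abs(np.polyval(lin, x_new) - truth)
err_flex = abs(np.polyval(flex, x_new) - truth)
print(f"linear bias error: {err_lin:.3f}, flexible bias error: {err_flex:.3f}")
```

Both models have near-zero training error; only the bias that matches the true structure of the data extrapolates well, which is the sense in which robust generalization needs good inductive biases.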
Topics: Algorithms; Artificial Intelligence; Bias; Brain; Deep Learning; Generalization, Psychological; Humans; Machine Learning; Neural Networks, Computer; Neurosciences
PubMed: 31557461
DOI: 10.1016/j.neuron.2019.08.034
IEEE Journal of Biomedical and Health Informatics, Aug 2022
During the past decades, many automated image analysis methods have been developed for colonoscopy. Real-time implementation of the most promising methods during colonoscopy has been tested in clinical trials, including several recent multi-center studies. All trials have shown results that may contribute to prevention of colorectal cancer. We summarize the past and present development of colonoscopy video analysis methods, focusing on two categories of artificial intelligence (AI) technologies used in clinical trials. These are (1) analysis and feedback for improving colonoscopy quality and (2) detection of abnormalities. Our survey includes methods that use traditional machine learning algorithms on carefully designed hand-crafted features as well as recent deep-learning methods. Lastly, we present the gap between current state-of-the-art technology and desirable clinical features and conclude with future directions of endoscopic AI technology development that will bridge the current gap.
Topics: Algorithms; Artificial Intelligence; Colonoscopy; Deep Learning; Humans; Machine Learning
PubMed: 35316197
DOI: 10.1109/JBHI.2022.3160098
International Journal of Environmental Research and Public Health, Mar 2022 (Review)
Artificial intelligence can be used to realise new types of protective devices and assistance systems, so its importance for occupational safety and health is continuously increasing. However, established risk-mitigation measures in software development are only partially suitable for AI systems, which also introduce new sources of risk. Risk management for systems using AI must therefore be adapted to these new problems. This work aims to contribute to that adaptation by identifying relevant sources of risk for AI systems. For this purpose, the differences between AI systems, especially those based on modern machine learning methods, and classical software were analysed, and the current research fields of trustworthy AI were evaluated. On this basis, a taxonomy was created that provides an overview of various AI-specific sources of risk. These new sources of risk should be taken into account in the overall risk assessment of a system based on AI technologies, examined for their criticality and managed accordingly at an early stage to prevent later system failure.
Topics: Artificial Intelligence; Machine Learning; Occupational Health; Software; Technology
PubMed: 35329328
DOI: 10.3390/ijerph19063641
Drug Discovery Today, Apr 2023 (Review)
Over the past decade, the amount of biomedical data available has grown at unprecedented rates. Increased automation technology and larger data volumes have encouraged the use of machine learning (ML) or artificial intelligence (AI) techniques for mining such data and extracting useful patterns. Because the identification of chemical entities with desired biological activity is a crucial task in drug discovery, AI technologies have the potential to accelerate this process and support decision making. In addition, the advent of deep learning (DL) has shown great promise in addressing diverse problems in drug discovery, such as de novo molecular design. Herein, we will appraise the current state-of-the-art in AI-assisted drug discovery, discussing the recent applications covering generative models for chemical structure generation, scoring functions to improve binding affinity and pose prediction, and molecular dynamics to assist in the parametrization, featurization and generalization tasks. Finally, we will discuss current hurdles and the strategies to overcome them, as well as potential future directions.
Topics: Artificial Intelligence; Drug Discovery; Machine Learning; Intelligence; Drug Design
PubMed: 36736583
DOI: 10.1016/j.drudis.2023.103516
Brazilian Oral Research, 2021
Artificial intelligence (AI) is a general term used to describe the development of computer systems which can perform tasks that normally require human cognition. Machine learning (ML) is one subfield of AI, where computers learn rules from data, capturing its intrinsic statistical patterns and structures. Neural networks (NNs) have been increasingly employed for ML on complex data. The application of multilayered NNs is referred to as "deep learning", which has recently been investigated in dentistry. Convolutional neural networks (CNNs) are mainly used for processing large and complex imagery data, as they are able to extract image features like edges, corners, shapes, and macroscopic patterns using layers of filters. CNN algorithms enable tasks like image classification, object detection and segmentation. The literature involving AI in dentistry has increased rapidly, so methodological guidance for designing, conducting and reporting studies must be rigorously followed, including the improvement of datasets. The limited interaction between the dental field and the technical disciplines, however, remains a hurdle for applicable dental AI. Similarly, dental users must understand why and how AI applications work and be able to appraise their decisions critically. Generalizable and robust AI applications will eventually prove helpful for clinicians and patients alike.
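The "layers of filters" idea above can be shown with a minimal toy (not dental software, and deliberately without a deep-learning framework): a single convolutional filter sliding over an image responds strongly at edges, the kind of low-level feature a CNN's early layers learn. The image and filter below are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid (no-padding) 2-D convolution-style correlation of img with kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "radiograph": a dark background with one bright square region.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# A Sobel-style filter that responds to vertical edges (positive on
# dark-to-bright transitions, negative on bright-to-dark ones).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

feature_map = conv2d(img, sobel_x)
print(feature_map.shape)  # → (6, 6)
```

A real CNN learns many such filters per layer (rather than using hand-crafted ones) and stacks layers so that later filters respond to corners, shapes and larger patterns built from these edge responses.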
Topics: Artificial Intelligence; Deep Learning; Dentistry; Humans; Machine Learning; Neural Networks, Computer
PubMed: 34406309
DOI: 10.1590/1807-3107bor-2021.vol35.0094
Sensors (Basel, Switzerland), May 2022 (Review)
Nondestructive evaluation (NDE) techniques are used in many industries to evaluate the properties of components and inspect for flaws and anomalies in structures without altering the part's integrity or causing damage to the component being tested. This includes monitoring materials' condition (Material State Awareness, MSA) and the health of structures (Structural Health Monitoring, SHM). NDE techniques are highly valuable tools that help prevent potential losses and hazards arising from the failure of a component while saving time and cost by not compromising its future usage. Artificial Intelligence (AI) and Machine Learning (ML) techniques, in turn, can help automate data collection and analysis, provide new insights, and potentially improve detection performance quickly and with little effort, at considerable cost savings. This paper presents a survey of state-of-the-art AI/ML techniques for NDE and the application of related smart technologies, including Machine Vision (MV) and Digital Twins, in NDE.
Topics: Artificial Intelligence; Forecasting; Machine Learning; Technology
PubMed: 35684675
DOI: 10.3390/s22114055