Nutrition Journal Jun 2024
BACKGROUND
Low fruit and vegetable consumption is a leading contributor to non-communicable disease risk. However, understanding of barriers and facilitators to fruit and vegetable intake in rural settings is limited. This study used a mixed methods approach to determine the barriers and facilitators to increasing fruit and vegetable intake in rural Australian adults and to identify if these varied by gender.
METHODS
Quantitative and qualitative data were used from the 2019 Active Living Census, completed by adults living in north-west Victoria, Australia. Data were collected on fruit and vegetable intakes and barriers and facilitators to meeting fruit and vegetable recommendations. Multivariate logistic regression analyses were used to investigate the association between facilitators, classified using the socio-ecological framework, and meeting recommendations. Machine learning was used to automate content analysis of open ended information on barriers.
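The regression step described above can be sketched as follows. This is a minimal illustration with simulated data, not the study's analysis; the facilitator names and effect sizes are assumptions chosen for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical binary facilitators (1 = facilitator present)
never_smoked = rng.integers(0, 2, n)
no_alcohol = rng.integers(0, 2, n)
community_garden = rng.integers(0, 2, n)
X = np.column_stack([never_smoked, no_alcohol, community_garden])

# Simulate "meets the recommendation" with a positive log-odds term per facilitator
log_odds = -1.0 + 0.7 * never_smoked + 0.4 * no_alcohol + 0.2 * community_garden
y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # exponentiated coefficients are odds ratios
for name, or_ in zip(["never smoked", "no alcohol", "community garden"], odds_ratios):
    print(f"{name}: OR = {or_:.2f}")
```

Exponentiating the fitted coefficients is what turns a logistic model into the odds ratios reported in the results.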
RESULTS
A total of 13,464 adults were included in the quantitative analysis (51% female; mean age 48 [SE 0.17] years), with 48% and 19% of participants consuming the recommended two serves of fruit and five serves of vegetables daily, respectively. The strongest facilitators of fruit consumption were at the individual level: never smoked (OR 2.12, 95% CI 1.83-2.45) and not drinking alcohol (OR 1.47, 95% CI 1.31-1.64). The strongest facilitators of vegetable consumption were found at all levels, i.e., the individual level: used to smoke (OR 1.48, 95% CI 1.21-1.80); the social-environmental level: living with three or more people (OR 1.41, 95% CI 1.22-1.63); and the physical-environmental level: use of community gardens (OR 1.20, 95% CI 1.07-1.34). Qualitative analyses (fruit n = 5,919; vegetable n = 9,601) showed that barriers to fruit consumption included a preference for other snacks and a desire to limit sugar content, whilst lack of time and unachievable guidelines were barriers for vegetables. Barriers and facilitators differed by gender: females experienced barriers due to having a more varied diet, while males reported a dislike of the taste.
CONCLUSIONS
Barriers and facilitators to fruit and vegetable consumption among rural Australian adults were identified across all levels of the socio-ecological framework and varied between fruit and vegetables and by gender. Strategies that address individual, social, and physical-level barriers are required to improve consumption.
Topics: Humans; Vegetables; Male; Female; Fruit; Middle Aged; Rural Population; Adult; Diet; Victoria; Feeding Behavior; Australia; Aged
PubMed: 38943157
DOI: 10.1186/s12937-024-00972-y
BMC Public Health Jun 2024
BACKGROUND
Coronavirus disease 2019 (COVID-19), a global public health crisis, continues to pose challenges despite preventive measures. The daily rise in COVID-19 cases is concerning, and the testing process is both time-consuming and costly. While several models have been created to predict mortality in COVID-19 patients, only a few have shown sufficient accuracy. Machine learning (ML) algorithms offer a promising approach to data-driven prediction of clinical outcomes, surpassing traditional statistical modeling. Leveraging ML algorithms could potentially provide a solution for predicting mortality in hospitalized COVID-19 patients in Ethiopia. Therefore, the aim of this study is to develop and validate machine learning models for accurately predicting mortality in hospitalized COVID-19 patients in Ethiopia.
METHODS
Our study involved analyzing electronic medical records of COVID-19 patients who were admitted to public hospitals in Ethiopia. Specifically, we developed seven different machine learning models to predict COVID-19 patient mortality: J48 decision tree, random forest (RF), k-nearest neighbors (k-NN), multi-layer perceptron (MLP), Naïve Bayes (NB), eXtreme gradient boosting (XGBoost), and logistic regression (LR). We then compared the performance of these models on data from a cohort of 696 patients. To evaluate the effectiveness of the models, we utilized metrics derived from the confusion matrix, such as sensitivity, specificity, and precision, together with the receiver operating characteristic (ROC) score.
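A model comparison of this kind can be sketched as below. The data are a synthetic stand-in for the 696-patient cohort (no patient records are used), only three of the seven model families are shown, and the metrics are derived from the confusion matrix exactly as named in the text:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic stand-in: 696 "patients", 23 features, binary mortality label
X, y = make_classification(n_samples=696, n_features=23, n_informative=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}
results = {}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    results[name] = {
        "sensitivity": tp / (tp + fn),  # recall of the positive class
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }
print(results)
```

Holding out a stratified test split keeps the confusion-matrix counts comparable across models.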
RESULTS
The study included a total of 696 patients, with a higher number of females (440 patients, accounting for 63.2%) than males. The median age of the participants was 35.0 years, with an interquartile range of 18-79. After conducting different feature selection procedures, 23 features were identified as predictors of mortality; gender, intensive care unit (ICU) admission, and alcohol drinking/addiction were the top three predictors of COVID-19 mortality, whereas loss of smell, loss of taste, and hypertension were the three weakest predictors. The experimental results revealed that the k-nearest neighbors (k-NN) algorithm outperformed the other machine learning algorithms, achieving an accuracy of 95.25%, sensitivity of 95.30%, precision of 92.7%, specificity of 93.30%, an F1 score of 93.98%, and a receiver operating characteristic (ROC) score of 96.90%. These findings highlight the effectiveness of the k-NN algorithm in predicting COVID-19 outcomes based on the selected features.
CONCLUSION
Our study has developed an innovative model that utilizes hospital data to accurately predict the mortality risk of COVID-19 patients. The main objective of this model is to prioritize early treatment for high-risk patients and optimize strained healthcare systems during the ongoing pandemic. By integrating machine learning with comprehensive hospital databases, our model effectively classifies patients' mortality risk, enabling targeted medical interventions and improved resource management. Among the various methods tested, the k-nearest neighbors (k-NN) algorithm demonstrated the highest accuracy, allowing for early identification of high-risk patients. Through k-NN feature identification, we identified 23 predictors that contribute significantly to predicting COVID-19 mortality. The top five predictors are gender (female), intensive care unit (ICU) admission, alcohol drinking, smoking, and symptoms of headache and chills. This advancement holds great promise for enhancing healthcare outcomes and decision-making during the pandemic. By providing services and prioritizing patients based on the identified predictors, healthcare facilities and providers can improve individuals' chances of survival. This model provides valuable insights that can guide healthcare professionals in allocating resources and delivering appropriate care to those at highest risk.
Topics: Humans; COVID-19; Machine Learning; Ethiopia; Male; Female; Middle Aged; Adult; Algorithms; Aged; SARS-CoV-2; Hospitalization; Electronic Health Records; Young Adult; Adolescent
PubMed: 38943093
DOI: 10.1186/s12889-024-19196-0
BMC Medical Informatics and Decision... Jun 2024
Review
BACKGROUND
Clinical medicine offers a promising arena for applying Machine Learning (ML) models. However, despite numerous studies employing ML in medical data analysis, only a fraction have impacted clinical care. This article underscores the importance of utilising ML in medical data analysis, recognising that ML alone may not adequately capture the full complexity of clinical data, thereby advocating for the integration of medical domain knowledge in ML.
METHODS
The study conducts a comprehensive review of prior efforts in integrating medical knowledge into ML and maps these integration strategies onto the phases of the ML pipeline, encompassing data pre-processing, feature engineering, model training, and output evaluation. The study further explores the significance and impact of such integration through a case study on diabetes prediction. Here, clinical knowledge, encompassing rules, causal networks, intervals, and formulas, is integrated at each stage of the ML pipeline, resulting in a spectrum of integrated models.
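One of the integration strategies mapped onto the feature-engineering phase can be illustrated as below: encoding clinical rules as extra model inputs. The glucose and BMI thresholds are widely used clinical cut-offs chosen for illustration, not the specific rules from the paper's diabetes case study:

```python
import numpy as np

def engineer_clinical_features(glucose, bmi):
    """Append rule-based flags (domain knowledge) to the raw measurements.

    Thresholds follow common clinical cut-offs: fasting glucose >= 126 mg/dL
    (diabetic range), 100-125 mg/dL (pre-diabetic range), BMI >= 30 (obesity).
    """
    return np.column_stack([
        glucose,
        bmi,
        (glucose >= 126).astype(float),                       # rule: diabetic-range glucose
        ((glucose >= 100) & (glucose < 126)).astype(float),   # rule: pre-diabetic range
        (bmi >= 30).astype(float),                            # rule: obesity flag
    ])

glucose = np.array([90.0, 110.0, 140.0])
bmi = np.array([22.0, 31.0, 28.0])
X = engineer_clinical_features(glucose, bmi)
print(X.shape)  # (3, 5)
```

Feeding such rule-derived flags alongside raw values is one way a downstream model can benefit from guideline knowledge without changing the learning algorithm itself.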
RESULTS
The findings highlight the benefits of integration in terms of accuracy, interpretability, data efficiency, and adherence to clinical guidelines. In several cases, integrated models outperformed purely data-driven approaches, underscoring the potential for domain knowledge to enhance ML models through improved generalisation. In other cases, the integration was instrumental in enhancing model interpretability and ensuring conformity with established clinical guidelines. Notably, knowledge integration also proved effective in maintaining performance under limited data scenarios.
CONCLUSIONS
By illustrating various integration strategies through a clinical case study, this work provides guidance to inspire and facilitate future integration efforts. Furthermore, the study identifies the need to refine domain knowledge representation and fine-tune its contribution to the ML model as the two main challenges to integration and aims to stimulate further research in this direction.
Topics: Machine Learning; Humans; Decision Support Systems, Clinical
PubMed: 38943085
DOI: 10.1186/s12911-024-02582-4
BMC Cardiovascular Disorders Jun 2024
BACKGROUND
Pulmonary transit time (PTT) can be measured automatically from arterial input function (AIF) images of dual-sequence first-pass perfusion imaging. PTT has been validated against invasive cardiac catheterisation, correlating with both cardiac output and left ventricular filling pressure (both important prognostic markers in heart failure). We hypothesized that prolonged PTT is associated with clinical outcomes in patients with heart failure.
METHODS
We recruited outpatients with a recent diagnosis of non-ischaemic heart failure with left ventricular ejection fraction (LVEF) < 50% on referral echocardiogram. Patients were followed up by a review of medical records for major adverse cardiovascular events (MACE), defined as all-cause mortality, heart failure hospitalization, ventricular arrhythmia, stroke or myocardial infarction. PTT was measured automatically from the low-resolution AIF dynamic series of both the left ventricle (LV) and right ventricle (RV) acquired during rest perfusion imaging, as the time (in seconds) between the centroids of the LV and RV indicator-dilution curves.
RESULTS
Patients (N = 294) were followed up for a median of 2.0 years, during which 37 patients (12.6%) had at least one MACE event. On univariate Cox regression analysis there was a significant association between PTT and MACE (hazard ratio (HR) 1.16, 95% confidence interval (CI) 1.08-1.25, P = 0.0001). There was also a significant association between PTT and heart failure hospitalisation (HR 1.15, 95% CI 1.02-1.29, P = 0.02) and a moderate correlation between PTT and N-terminal pro B-type natriuretic peptide (NT-proBNP; r = 0.51, P < 0.001). PTT remained predictive of MACE after adjustment for clinical and imaging factors but was no longer significant once adjusted for NT-proBNP.
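Under the proportional-hazards model used here, a per-unit HR compounds multiplicatively, so the implied relative hazard for a patient with a longer PTT can be sketched in a few lines (the 5-second difference is an arbitrary illustrative value, not a figure from the study):

```python
# HR of 1.16 is per unit of PTT (here, per second). For a patient whose PTT is
# delta seconds longer, the proportional-hazards model multiplies the hazard by HR**delta.
hr_per_second = 1.16

def relative_hazard(delta_seconds, hr=hr_per_second):
    return hr ** delta_seconds

print(round(relative_hazard(5), 2))  # 1.16**5 ≈ 2.10
```

This is why even a modest per-second HR can translate into a roughly doubled hazard across a few seconds of PTT difference.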
CONCLUSIONS
PTT measured automatically during CMR perfusion imaging in patients with recent onset non-ischaemic heart failure is predictive of MACE and in particular heart failure hospitalisation. PTT derived in this way may be a non-invasive marker of haemodynamic congestion in heart failure and future studies are required to establish if prolonged PTT identifies those who may warrant closer follow-up or medicine optimisation to reduce the risk of future adverse events.
Topics: Humans; Heart Failure; Male; Female; Middle Aged; Aged; Predictive Value of Tests; Time Factors; Prognosis; Ventricular Function, Left; Myocardial Perfusion Imaging; Stroke Volume; Risk Factors; Pulmonary Circulation; Natriuretic Peptide, Brain; Peptide Fragments; Pulmonary Artery; Risk Assessment; Ventricular Function, Right; Magnetic Resonance Imaging
PubMed: 38943084
DOI: 10.1186/s12872-024-04003-w
Nature Medicine Jun 2024
Metastasis occurs frequently after resection of pancreatic cancer (PaC). In this study, we hypothesized that multi-parametric analysis of pre-metastatic liver biopsies would classify patients according to their metastatic risk, timing and organ site. Liver biopsies obtained during pancreatectomy from 49 patients with localized PaC and 19 control patients with non-cancerous pancreatic lesions were analyzed, combining metabolomic, tissue and single-cell transcriptomics and multiplex imaging approaches. Patients were followed prospectively (median 3 years) and classified into four recurrence groups: early (<6 months after resection) or late (>6 months after resection) liver metastasis (LiM); extrahepatic metastasis (EHM); and disease-free survivors (no evidence of disease (NED)). Overall, PaC livers exhibited signs of augmented inflammation compared to controls. Enrichment of neutrophil extracellular traps (NETs), Ki-67 upregulation and decreased liver creatine significantly distinguished those with future metastasis from NED. Patients with future LiM were characterized by scant T cell lobular infiltration, less steatosis and higher levels of citrullinated H3 compared to patients who developed EHM, who had overexpression of interferon target genes (MX1 and NR1D1) and an increase of CD11B natural killer (NK) cells. Upregulation of sortilin-1 and prominent NETs, together with the lack of T cells and a reduction in CD11B NK cells, differentiated patients with early-onset LiM from those with late-onset LiM. Liver profiles of NED closely resembled those of controls. Using the above parameters, a machine-learning-based model was developed that successfully predicted the metastatic outcome at the time of surgery with 78% accuracy. Therefore, multi-parametric profiling of liver biopsies at the time of PaC diagnosis may determine metastatic risk and organotropism and guide clinical stratification for optimal treatment selection.
PubMed: 38942992
DOI: 10.1038/s41591-024-03075-7
Nature Aging Jun 2024
Investigating the genetic underpinnings of human aging is essential for unraveling the etiology of and developing actionable therapies for chronic diseases. Here, we characterize the genetic architecture of the biological age gap (BAG; the difference between machine learning-predicted age and chronological age) across nine human organ systems in 377,028 participants of European ancestry from the UK Biobank. The BAGs were computed using cross-validated support vector machines, incorporating imaging, physical traits and physiological measures. We identify 393 genomic loci-BAG pairs (P < 5 × 10⁻⁸) linked to the brain, eye, cardiovascular, hepatic, immune, metabolic, musculoskeletal, pulmonary and renal systems. Genetic variants associated with the nine BAGs are predominantly specific to the respective organ system (organ specificity) while exerting pleiotropic links with other organ systems (interorgan cross-talk). We find that genetic correlation between the nine BAGs mirrors their phenotypic correlation. Further, a multiorgan causal network established from two-sample Mendelian randomization and latent causal variable models revealed potential causality between chronic diseases (for example, Alzheimer's disease and diabetes), modifiable lifestyle factors (for example, sleep duration and body weight) and multiple BAGs. Our results illustrate the potential for improving human organ health via a multiorgan network, including lifestyle interventions and drug repurposing strategies.
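The BAG construction described above (cross-validated age prediction, then predicted minus chronological age) can be sketched as follows. This uses simulated features and scikit-learn's SVR rather than the study's pipeline; the feature-generation scheme is purely illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(40, 70, n)  # chronological age

# Hypothetical organ-imaging features that partly track chronological age
features = np.column_stack([age + rng.normal(0, 5, n) for _ in range(10)])

# Out-of-fold predictions: each person's predicted age comes from a model
# that never saw them during training, avoiding optimistic bias in the BAG
model = make_pipeline(StandardScaler(), SVR())
predicted_age = cross_val_predict(model, features, age, cv=5)

bag = predicted_age - age  # biological age gap: predicted minus chronological
print(f"mean |BAG| = {np.abs(bag).mean():.1f} years")
```

Using `cross_val_predict` rather than in-sample predictions is the "cross-validated" part of the abstract's description.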
PubMed: 38942983
DOI: 10.1038/s43587-024-00662-8
Nature Computational Science Jun 2024
Review
Partial differential equations (PDEs) are among the most universal and parsimonious descriptions of natural physical laws, capturing a rich variety of phenomenology and multiscale physics in a compact and symbolic representation. Here, we examine several promising avenues of PDE research that are being advanced by machine learning, including (1) discovering new governing PDEs and coarse-grained approximations for complex natural and engineered systems, (2) learning effective coordinate systems and reduced-order models to make PDEs more amenable to analysis, and (3) representing solution operators and improving traditional numerical algorithms. In each of these fields, we summarize key advances, ongoing challenges, and opportunities for further development.
PubMed: 38942926
DOI: 10.1038/s43588-024-00643-2
Scientific Reports Jun 2024
We aimed to identify the clinical subtypes in individuals starting long-term care in Japan and examined their association with prognoses. Using linked medical insurance claims data and survey data for care-need certification in a large city, we identified participants who started long-term care. Grouping them based on 22 diseases recorded in the past 6 months using fuzzy c-means clustering, we examined the longitudinal association between clusters and death or care-need level deterioration within 2 years. We analyzed 4,648 participants (median age 83 [interquartile range 78-88] years, female 60.4%) between October 2014 and March 2019 and categorized them into (i) musculoskeletal and sensory, (ii) cardiac, (iii) neurological, (iv) respiratory and cancer, (v) insulin-dependent diabetes, and (vi) unspecified subtypes. The results of clustering were replicated in another city. Compared with the musculoskeletal and sensory subtype, the adjusted hazard ratio (95% confidence interval) for death was 1.22 (1.05-1.42), 1.81 (1.54-2.13), and 1.21 (1.00-1.46) for the cardiac, respiratory and cancer, and insulin-dependent diabetes subtypes, respectively. The care-need levels more likely worsened in the cardiac, respiratory and cancer, and unspecified subtypes than in the musculoskeletal and sensory subtype. In conclusion, distinct clinical subtypes exist among individuals initiating long-term care.
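Fuzzy c-means, the clustering method used here, can be sketched in a few lines of NumPy. This is a generic implementation on toy 2-D data, not the study's 22-disease analysis:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: soft memberships U (n, c) and cluster centers (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # random memberships, each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = Um.T @ X / Um.sum(axis=0)[:, None]  # fuzzily weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        w = d ** (-2 / (m - 1))
        U = w / w.sum(axis=1, keepdims=True)          # standard membership update
    return U, centers

# Toy data: two well-separated groups of 2-D "disease profiles"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X, c=2)
print(U.sum(axis=1)[:3])  # each individual's memberships sum to 1
```

Unlike hard k-means, each individual receives a degree of membership in every subtype, which suits clinical profiles that straddle categories.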
Topics: Humans; Female; Aged; Male; Japan; Cluster Analysis; Aged, 80 and over; Long-Term Care; Prognosis; Neoplasms
PubMed: 38942898
DOI: 10.1038/s41598-024-65699-6
Scientific Reports Jun 2024
Remote sensing has been increasingly used in precision agriculture. Buoyed by developments in the miniaturization of sensors and platforms, contemporary remote sensing offers data at resolutions fine enough to respond to within-farm variations. LiDAR point clouds offer features amenable to modelling structural parameters of crops. Early prediction of crop growth parameters helps farmers and other stakeholders dynamically manage farming activities. The objective of this work is the development and application of a deep learning framework to predict plant-level crop height and crown area at different growth stages for vegetable crops. LiDAR point clouds were acquired using a terrestrial laser scanner on five dates during the growth cycles of tomato, eggplant and cabbage on the experimental research farms of the University of Agricultural Sciences, Bengaluru, India. We implemented a hybrid deep learning framework combining distinct features of long short-term memory (LSTM) and gated recurrent unit (GRU) networks to predict plant height and crown area. These predictions were validated against ground truth measurements. The findings demonstrate that plant-level structural parameters can be predicted well ahead of crop growth stages with around 80% accuracy. Notably, the LSTM and GRU models on their own exhibited limitations in capturing variations in structural parameters. Conversely, the hybrid model offered significantly improved predictions, particularly for crown area, with error rates for height prediction ranging from 5 to 12% and deviations more evenly balanced between overestimation and underestimation. However, the prediction quality is relatively low at the advanced growth stage, closer to harvest, whereas it is stable across the three different crops. The results indicate a robust relationship between the features of the LiDAR point cloud and the auto-feature maps of the deep learning methods adapted for plant-level crop structural characterization. This approach effectively captured the inherent temporal growth pattern of the crops, highlighting the potential of deep learning for precision agriculture applications.
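The GRU component of such a hybrid recurrent framework can be illustrated with a single-cell forward pass in NumPy. Dimensions and weights here are arbitrary, and a real implementation would use a deep learning library with trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step for input x (d_in,) and hidden state h (d_h,).

    p holds weights W_* (d_h, d_in), U_* (d_h, d_h) and biases b_* (d_h,)
    for the update gate (z), reset gate (r) and candidate state (n).
    """
    z = sigmoid(p["W_z"] @ x + p["U_z"] @ h + p["b_z"])        # update gate
    r = sigmoid(p["W_r"] @ x + p["U_r"] @ h + p["b_r"])        # reset gate
    n = np.tanh(p["W_n"] @ x + p["U_n"] @ (r * h) + p["b_n"])  # candidate state
    return (1 - z) * n + z * h  # blend candidate with the previous state

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
shapes = {"W": (d_h, d_in), "U": (d_h, d_h), "b": (d_h,)}
p = {f"{kind}_{gate}": rng.normal(0, 0.1, shapes[kind])
     for gate in "zrn" for kind in "WUb"}

h = np.zeros(d_h)
for t in range(5):  # unroll over a 5-step observation sequence
    h = gru_step(rng.normal(size=d_in), h, p)
print(h.shape)  # (4,)
```

The gating mechanism is what lets recurrent models of this kind carry growth information across observation dates.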
Topics: Deep Learning; Crops, Agricultural; Remote Sensing Technology; Vegetables; India; Agriculture; Solanum lycopersicum; Solanum melongena
PubMed: 38942825
DOI: 10.1038/s41598-024-65322-8
Scientific Reports Jun 2024
In tuberculosis (TB), chest radiography (CXR) patterns are highly variable, mimicking pneumonia and many other diseases. This study aims to evaluate the efficacy of Google Teachable Machine, a deep neural network-based image classification tool, for developing an algorithm to predict the probability of TB on CXRs. The training dataset included 348 TB CXRs and 3,806 normal CXRs for training TB detection. We also collected 1,150 abnormal CXRs and 627 normal CXRs for training abnormality detection. For external validation, we collected 250 CXRs from our hospital. We also compared the accuracy of the algorithm to that of five pulmonologists and radiological reports. In external validation, the AI algorithm showed areas under the curve (AUC) of 0.951 and 0.975 in validation datasets 1 and 2. The accuracy of the pulmonologists on validation dataset 2 showed an AUC range of 0.936-0.995. When abnormal CXRs other than TB were added, the AUC decreased for both the human readers (0.843-0.888) and the AI algorithm (0.828). Combining human readers with the AI algorithm increased the AUC to 0.862-0.885. The TB CXR AI algorithm developed using Google Teachable Machine in this study is effective, with accuracy close to that of experienced clinical physicians, and may be helpful for detecting tuberculosis on CXR.
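The AUC values reported above can be computed directly from classifier scores via the Mann-Whitney formulation; here is a minimal sketch with toy labels and scores (not data from the study):

```python
import numpy as np

def auc_from_scores(labels, scores):
    """AUC as the probability that a random positive outscores a random negative
    (Mann-Whitney U formulation; ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy check: mostly higher scores for TB-positive films should yield a high AUC
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc_from_scores(labels, scores))  # 8 of 9 positive-negative pairs ranked correctly
```

This rank-based view explains why AUC is insensitive to the threshold chosen for calling a film TB-positive.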
Topics: Humans; Deep Learning; Tuberculosis, Pulmonary; Radiography, Thoracic; Algorithms; Female; Male; Middle Aged; Adult; Area Under Curve
PubMed: 38942819
DOI: 10.1038/s41598-024-65703-z