Scientific Reports Jul 2024
With the increase in dependency on digital devices, the incidence of myopia, a precursor of various ocular diseases, has risen significantly. Because myopia and eyeball volume are related, myopia progression can be monitored through eyeball volume estimation. However, existing methods are limited because the eyeball shape is disregarded during estimation. We propose an automated eyeball volume estimation method from computed tomography images that incorporates prior knowledge of the actual eyeball shape. This study involves data preprocessing, image segmentation, and volume estimation steps, the last of which combines the truncated cone formula with an integral equation. We obtained eyeball image masks using U-Net, HFCN, DeepLab v3+, SegNet, and HardNet-MSEG. Data from 200 subjects were used for volume estimation, and manually extracted eyeball volumes were used for validation. U-Net performed best among the segmentation models, and the proposed volume estimation method outperformed comparative methods on all evaluation metrics, with a correlation coefficient of 0.819, a mean absolute error of 0.640, and a mean squared error of 0.554. The proposed method surpasses existing methods, provides accurate eyeball volume estimation for monitoring the progression of myopia, and could potentially aid in the diagnosis of ocular diseases. It could also be extended to volume estimation of other ocular structures.
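The slice-based estimation idea can be sketched as follows: given per-slice segmentation mask areas, summing areas times slice spacing approximates the volume, and the truncated cone (frustum) formula refines this using pairs of adjacent slices. This is a minimal illustrative sketch, not the paper's implementation; function names and units are assumptions.

```python
import math

def volume_disc_sum(mask_areas_mm2, slice_thickness_mm):
    """Naive disc summation: each slice's area times the slice spacing."""
    return sum(a * slice_thickness_mm for a in mask_areas_mm2)

def volume_truncated_cones(mask_areas_mm2, slice_thickness_mm):
    """Truncated-cone (frustum) formula between adjacent slices:
    V = h/3 * (A1 + A2 + sqrt(A1 * A2)), summed over slice pairs."""
    total = 0.0
    for a1, a2 in zip(mask_areas_mm2, mask_areas_mm2[1:]):
        total += slice_thickness_mm / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
    return total
```

For equal-area slices the two estimates differ only because the frustum sum covers the gaps between slice centres (N − 1 intervals) rather than one disc per slice.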
Topics: Humans; Tomography, X-Ray Computed; Neural Networks, Computer; Eye; Myopia; Female; Male; Adult; Image Processing, Computer-Assisted; Middle Aged; Young Adult
PubMed: 38956139
DOI: 10.1038/s41598-024-64913-9
Scientific Reports Jul 2024
Breast cancer is the most commonly diagnosed cancer among women worldwide. Breast cancer patients experience significant distress relating to their diagnosis and treatment. Managing this distress is critical for improving the lifespan and quality of life of breast cancer survivors. This study aimed to assess the level of distress in breast cancer survivors and analyze the variables that significantly affect distress using machine learning techniques. A survey was conducted with 641 adult breast cancer patients using the National Comprehensive Cancer Network Distress Thermometer tool. Participants identified various factors that caused distress. Five machine learning models were used to classify patients into mild and severe distress groups. The survey results indicated that 57.7% of the participants experienced severe distress. The three best-performing models indicated that depression, dealing with a partner, housing, work/school, and fatigue are the primary indicators. Among the emotional problems, depression, fear, worry, loss of interest in regular activities, and nervousness were determined to be significant predictive factors. Therefore, machine learning models can be effectively applied to determine the various factors influencing distress in breast cancer patients who have completed primary treatment, thereby identifying breast cancer patients who are vulnerable to distress in clinical settings.
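The mild/severe grouping step can be sketched as a simple cutoff on the 0–10 Distress Thermometer score. The cutoff of 4 used below is the widely cited NCCN threshold for clinically significant distress; the paper's exact split is an assumption here, as are the function names.

```python
def split_by_distress(scores, cutoff=4):
    """Split NCCN Distress Thermometer scores (0-10 scale) into
    mild (< cutoff) and severe (>= cutoff) groups."""
    mild = [s for s in scores if s < cutoff]
    severe = [s for s in scores if s >= cutoff]
    return mild, severe

def severe_rate(scores, cutoff=4):
    """Fraction of respondents falling in the severe group."""
    _, severe = split_by_distress(scores, cutoff)
    return len(severe) / len(scores)
```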
Topics: Humans; Breast Neoplasms; Female; Machine Learning; Cancer Survivors; Middle Aged; Adult; Psychological Distress; Quality of Life; Stress, Psychological; Aged; Depression; Surveys and Questionnaires
PubMed: 38956137
DOI: 10.1038/s41598-024-65132-y
Nature Communications Jul 2024
Chemical probes are an indispensable tool for translating biological discoveries into new therapies, but they are increasingly difficult to identify since novel therapeutic targets are often hard-to-drug proteins. We introduce the FRASE-based hit-finding robot (FRASE-bot) to expedite drug discovery for unconventional therapeutic targets. FRASE-bot mines available 3D structures of ligand-protein complexes to create a database of FRAgments in Structural Environments (FRASE). The FRASE database can be screened to identify structural environments similar to those in the target protein and seed the target structure with relevant ligand fragments. A neural network model is used to retain fragments with the highest likelihood of being native binders. The seeded fragments then inform ultra-large-scale virtual screening of commercially available compounds. We apply FRASE-bot to identify ligands for Calcium and Integrin Binding protein 1 (CIB1), a promising drug target implicated in triple negative breast cancer. FRASE-based virtual screening identifies a small-molecule CIB1 ligand (with binding confirmed in a TR-FRET assay) showing specific cell-killing activity in CIB1-dependent cancer cells, but not in CIB1-depletion-insensitive cells.
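The environment-matching step can be caricatured as a nearest-neighbour search over structural-environment descriptors. This is a toy cosine-similarity sketch; the descriptor vectors, field names, and ranking criterion are invented for illustration and are not FRASE-bot's actual features.

```python
import math

def cosine(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def screen_fragments(target_env, frase_db, top_k=2):
    """Rank database fragments by how closely their structural
    environment matches the target pocket's descriptor."""
    ranked = sorted(frase_db,
                    key=lambda entry: cosine(target_env, entry["env"]),
                    reverse=True)
    return [entry["fragment"] for entry in ranked[:top_k]]
```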
Topics: Humans; Antineoplastic Agents; Ligands; Drug Discovery; Calcium-Binding Proteins; Cell Line, Tumor; Computer Simulation; Triple Negative Breast Neoplasms; Protein Binding; Neural Networks, Computer
PubMed: 38956119
DOI: 10.1038/s41467-024-49892-9
Scientific Reports Jul 2024
Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. In addition, we have optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap. This optimisation significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net achieves strong performance metrics across these datasets. It obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy (acc) scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting both robustness and generalisability across various retinal image datasets. This combination of qualities makes LMBiS-Net a promising tool for various clinical applications.
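The reported metrics (sensitivity, specificity, accuracy, F1) are all standard functions of the pixel-level confusion counts, where a "positive" is a vessel pixel. A minimal sketch of their definitions, with hypothetical counts:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Pixel-level metrics commonly reported for vessel segmentation,
    computed from true/false positives and negatives."""
    sensitivity = tp / (tp + fn)            # recall on vessel pixels
    specificity = tn / (tn + fp)            # recall on background pixels
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sen": sensitivity, "spe": specificity,
            "acc": accuracy, "f1": f1}
```

Note that with vessel pixels being a small minority of the image, accuracy and specificity are dominated by background pixels, which is why sensitivity and F1 are reported alongside them.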
Topics: Retinal Vessels; Humans; Neural Networks, Computer; Deep Learning; Image Processing, Computer-Assisted; Algorithms
PubMed: 38956117
DOI: 10.1038/s41598-024-63496-9
Scientific Data Jul 2024
Around 20% of complete blood count samples necessitate visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to the visual examination of red blood cell (RBC) morphology/shapes. True/non-artifact teardrop-shaped RBCs and schistocytes/fragmented RBCs are commonly associated with serious medical conditions that could be fatal, while increased ovalocytes are associated with almost all types of anemias. Twenty-five distinct blood smears, each from a different patient, were manually prepared, stained, and then sorted into four groups. Each group underwent imaging using different cameras integrated into light microscopes with 40X objective lenses, resulting in a total of 47K+ field images/patches. Two hematologists worked cell by cell to provide one million+ segmented RBCs with their XYWH coordinates and classified 240K+ RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development/testing of deep learning-based (DL) automation of RBC morphology/shape examination, including specific normalization of blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI), one for semi-automated image processing and another for training/testing of a DL-based image classifier.
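Using the per-cell XYWH coordinates to cut RBC patches out of a field image is a one-liner over image rows. A minimal sketch, assuming (x, y) is the top-left corner of the box (a common but here unverified convention) and representing the image as a row-major list of rows:

```python
def crop_xywh(image, box):
    """Crop one RBC patch from a field image given an (x, y, w, h) box.
    (x, y) is assumed to be the top-left corner; `image` is a
    row-major list of pixel rows."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]
```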
Topics: Erythrocytes; Humans; Microscopy; Deep Learning; Image Processing, Computer-Assisted
PubMed: 38956115
DOI: 10.1038/s41597-024-03570-z
Scientific Reports Jul 2024 (Randomized Controlled Trial)
Trainees develop surgical technical skills by learning from experts who provide context for successful task completion, identify potential risks, and guide correct instrument handling. This expert-guided training faces significant limitations in objectively assessing skills in real-time and tracking learning. It is unknown whether AI systems can effectively replicate the nuanced real-time feedback, risk identification, and guidance in mastering surgical technical skills that expert instructors offer. This randomized controlled trial compared real-time AI feedback to in-person expert instruction. Ninety-seven medical trainees completed a 90-min simulation training with five practice tumor resections followed by a realistic brain tumor resection. They were randomly assigned to (1) real-time AI feedback, (2) in-person expert instruction, or (3) no real-time feedback. Performance was assessed using a composite score and an Objective Structured Assessment of Technical Skills (OSATS) rating, rated by blinded experts. Training with real-time AI feedback (n = 33) resulted in significantly better performance outcomes compared to no real-time feedback (n = 32) and in-person instruction (n = 32) (.266 [95% CI .107 to .425], p < .001; .332 [95% CI .173 to .491], p = .005, respectively). Learning from AI resulted in similar OSATS ratings (4.30 vs 4.11, p = 1) compared to in-person training with expert instruction. Intelligent systems may refine the way operating skills are taught, providing tailored, quantifiable feedback and actionable instructions in real-time.
Topics: Humans; Clinical Competence; Artificial Intelligence; Female; Male; Adult; Simulation Training
PubMed: 38956112
DOI: 10.1038/s41598-024-65716-8
Scientific Data Jul 2024
PubMed: 38956109
DOI: 10.1038/s41597-024-03548-x
Scientific Reports Jul 2024
Distinguishing between microscopic variances in temperature in both space and time with high precision can open up new opportunities in optical sensing. In this paper, we present a novel approach to optically measure temperature from the fluorescence of erbium:ytterbium doped tellurite glass, with fast temporal resolution at micron-scale localisation over an area with sub-millimetre spatial dimensions. This confocal-based approach provides a micron-scale image of temperature variations over a 200 µm × 200 µm field of view at sub-1-second time intervals. We test our sensing platform by monitoring the real-time evaporation of a water droplet over a wide field of view and track its evaporative cooling effect on the glass, where we report a net temperature change of 6.97 K ± 0.03 K. This result showcases a confocal approach to thermometry that provides high temporal and spatial resolution over a microscopic field of view, with the goal of providing real-time measures of temperature on the micro-scale.
PubMed: 38956091
DOI: 10.1038/s41598-024-65529-9
Scientific Reports Jul 2024
Celiac Disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study utilizes Raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for distinguishing celiac disease cases from healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed. The accuracy rates for these models were found to be 86.67%, 90.76%, 86.67%, and 95.00%, respectively. Comparative validation results revealed that the DRSN model exhibited the best performance, with an AUC value and accuracy of 97.60% and 95%, respectively. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease.
Topics: Celiac Disease; Humans; Spectrum Analysis, Raman; Deep Learning; Female; Male; Adult; Neural Networks, Computer; Case-Control Studies; Middle Aged
PubMed: 38956075
DOI: 10.1038/s41598-024-64621-4
Scientific Data Jul 2024
Patients with congenital heart disease often have cardiac anatomy that deviates significantly from normal, frequently requiring multiple heart surgeries. Image segmentation from a preoperative cardiovascular magnetic resonance (CMR) scan would enable creation of patient-specific 3D surface models of the heart, which have potential to improve surgical planning, enable surgical simulation, and allow automatic computation of quantitative metrics of heart function. However, there is no publicly available CMR dataset for whole-heart segmentation in patients with congenital heart disease. Here, we release the HVSMR-2.0 dataset, comprising 60 CMR scans alongside manual segmentation masks of the 4 cardiac chambers and 4 great vessels. The images showcase a wide range of heart defects and prior surgical interventions. The dataset also includes masks of required and optional extents of the great vessels, enabling fairer comparisons across algorithms. Detailed diagnoses for each subject are also provided. By releasing HVSMR-2.0, we aim to encourage development of robust segmentation algorithms and clinically relevant tools for congenital heart disease.
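Comparisons across segmentation algorithms on datasets like HVSMR-2.0 are typically made with the Dice overlap coefficient between a predicted mask and the manual reference mask. A minimal sketch over flat binary masks (the dataset's own evaluation protocol is not specified here):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as flat 0/1
    sequences of equal length: 2|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0
```

For multi-structure labels (4 chambers, 4 great vessels), Dice is usually computed per label and then averaged.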
Topics: Humans; Heart Defects, Congenital; Magnetic Resonance Imaging; Heart; Imaging, Three-Dimensional; Algorithms
PubMed: 38956063
DOI: 10.1038/s41597-024-03469-9