Klinische Monatsblätter für... Jun 2024
Topics: Humans; Artificial Intelligence; Biomarkers; Corneal Diseases; Cornea
PubMed: 38941997
DOI: 10.1055/a-2296-6702 -
British Journal of Hospital Medicine... Jun 2024Patients with neck of femur fractures present a tremendous public health problem that leads to a high incidence of death and dysfunction. An essential factor is the... (Review)
Review
Patients with neck of femur fractures present a tremendous public health problem that leads to a high incidence of death and dysfunction. An essential factor is the postoperative length of stay, which heavily impacts hospital costs and the quality of care. As an extension of traditional statistical methods, machine learning (ML) provides the possibility of accurately predicting the length of hospital stay. This review assesses how machine learning can effectively use healthcare data to predict the outcomes of patients with operatively managed neck of femur fractures. A narrative literature review on the use of artificial intelligence to predict outcomes in neck of femur fractures was undertaken to understand the field and the critical considerations of its application. The papers and any relevant references were scrutinised against specific inclusion and exclusion criteria, yielding thirteen papers for analysis. The critical themes identified were the different models, the 'black box' conundrum, predictor identification, validation methodology, and the need to improve efficiency and quality of care. Through reviewing these themes, current issues and potential avenues for advancing the field are explored. This review has demonstrated that the use of machine learning in orthopaedic pathways is in its infancy. Further work is needed to leverage this technology effectively to improve outcomes.
Topics: Humans; Femoral Neck Fractures; Artificial Intelligence; Length of Stay; Machine Learning
PubMed: 38941973
DOI: 10.12968/hmed.2024.0034 -
Computers in Biology and Medicine Jun 2024Convolutional neural networks (CNNs) are the most widely used deep-learning framework for decoding electroencephalograms (EEGs) due to their exceptional ability to...
BACKGROUND AND OBJECTIVES
Convolutional neural networks (CNNs) are the most widely used deep-learning framework for decoding electroencephalograms (EEGs) due to their exceptional ability to extract hierarchical features from high-dimensional EEG data. Traditionally, CNNs have primarily utilized multi-channel raw EEG data as the input tensor; however, the performance of CNN-based EEG decoding may be enhanced by incorporating phase information alongside amplitude information.
METHODS
This study introduces a novel CNN architecture called the Hilbert-transformed (HT) and raw EEG network (HiRENet), which incorporates both raw and HT EEG as inputs. This concurrent use of HT and raw EEG aims to integrate phase information with existing amplitude information, potentially offering a more comprehensive reflection of functional connectivity across various brain regions. The HiRENet model was developed using two CNN frameworks: ShallowFBCSPNet and a CNN with a residual block (ResCNN). The performance of the HiRENet model was assessed using a lab-made EEG database to classify human emotions, comparing three input modalities: raw EEG, HT EEG, and a combination of both signals. Additionally, the computational complexity was evaluated to validate the computational efficiency of the ResCNN design.
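The core input construction described above, pairing raw EEG (amplitude) with its Hilbert transform (phase) as joint CNN input, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code; the stacking axis and function names are assumptions, and the Hilbert transform is computed via an FFT so the sketch needs only numpy:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via FFT (assumes even-length signals).

    The analytic signal is x + i*HT(x); HT(x) is its imaginary part.
    """
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0      # DC and Nyquist bins unchanged
    h[1:n // 2] = 2.0           # double the positive frequencies
    analytic = np.fft.ifft(X * h, axis=-1)
    return analytic.imag

def build_hirenet_input(raw_eeg):
    """Stack raw EEG with its Hilbert transform along a new leading axis.

    raw_eeg: array of shape (channels, samples).
    Returns shape (2, channels, samples): amplitude and phase views.
    """
    return np.stack([raw_eeg, hilbert_transform(raw_eeg)], axis=0)
```

For a pure cosine the Hilbert transform is the corresponding sine, which makes the construction easy to sanity-check.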
RESULTS
The HiRENet model based on ResCNN achieved the highest classification accuracy, with 86.03% for valence and 84.01% for arousal classifications, surpassing traditional CNN methodologies. Considering computational efficiency, ResCNN demonstrated superiority over ShallowFBCSPNet in terms of speed and inference time, despite having a higher parameter count.
CONCLUSION
Our experimental results show that the proposed HiRENet is a potentially useful option for improving the overall performance of deep learning-based EEG decoding.
PubMed: 38941902
DOI: 10.1016/j.compbiomed.2024.108788 -
Computer Methods and Programs in... Jun 2024To develop a clinically reliable deep learning model to differentiate glioblastoma (GBM) from solitary brain metastasis (SBM) by providing predictive uncertainty...
BACKGROUND AND OBJECTIVES
To develop a clinically reliable deep learning model to differentiate glioblastoma (GBM) from solitary brain metastasis (SBM) by providing predictive uncertainty estimates and interpretability.
METHODS
A total of 469 patients (300 GBM, 169 SBM) were enrolled in the institutional training set. Deep ensembles based on DenseNet121 were trained on multiparametric MRI. Model performance was validated in an external test set of 143 patients (101 GBM, 42 SBM). Entropy values for each input were evaluated as the uncertainty measure; based on these values, the datasets were split into high- and low-uncertainty groups. In addition, entropy values of out-of-distribution (OOD) data from an unknown class (257 patients with meningioma) were compared to assess the model's uncertainty estimates. Model interpretability was further evaluated by the localization accuracy of the model.
RESULTS
On the external test set, the area under the curve (AUC), accuracy, sensitivity and specificity of the deep ensembles were 0.83 (95 % confidence interval [CI] 0.76-0.90), 76.2 %, 54.8 % and 85.2 %, respectively. The performance was higher in the low-uncertainty group than in the high-uncertainty group, with AUCs of 0.91 (95 % CI 0.83-0.98) and 0.58 (95 % CI 0.44-0.71), indicating that entropy-based uncertainty assessment identified reliable predictions in the low-uncertainty group. Further, the deep ensembles classified a high proportion (90.7 %) of predictions on OOD data as uncertain, showing robustness under dataset shift. Interpretability, evaluated by localization accuracy, provided further reliability in the "low-uncertainty and high-localization accuracy" subgroup, with an AUC of 0.98 (95 % CI 0.95-1.00).
CONCLUSIONS
Empirical assessment of uncertainty and interpretability in deep ensembles provides evidence for the robustness of prediction, offering a clinically reliable model in differentiating GBM from SBM.
PubMed: 38941861
DOI: 10.1016/j.cmpb.2024.108288 -
Medical Image Analysis Jun 2024The conventional pretraining-and-finetuning paradigm, while effective for common diseases with ample data, faces challenges in diagnosing data-scarce occupational...
The conventional pretraining-and-finetuning paradigm, while effective for common diseases with ample data, faces challenges in diagnosing data-scarce occupational diseases like pneumoconiosis. Recently, large language models (LLMs) have exhibited unprecedented abilities in conducting multiple tasks in dialogue, bringing new opportunities to diagnosis. A common strategy might involve using adapter layers for vision-language alignment and diagnosis in a dialogic manner. Yet, this approach often requires optimizing extensive learnable parameters in the text branch and the dialogue head, potentially diminishing the LLMs' efficacy, especially with limited training data. In our work, we innovate by eliminating the text branch and substituting the dialogue head with a classification head. This approach is a more effective way of harnessing LLMs for diagnosis with fewer learnable parameters. Furthermore, to balance the retention of detailed image information with progression towards an accurate diagnosis, we introduce the contextual multi-token engine, which adaptively generates diagnostic tokens. Additionally, we propose the information emitter module, which unidirectionally emits information from image tokens to diagnosis tokens. Comprehensive experiments validate the superiority of our methods.
PubMed: 38941859
DOI: 10.1016/j.media.2024.103248 -
Medical Image Analysis Jun 2024The automated segmentation of Intracranial Arteries (IA) in Digital Subtraction Angiography (DSA) plays a crucial role in the quantification of vascular morphology,...
The automated segmentation of Intracranial Arteries (IA) in Digital Subtraction Angiography (DSA) plays a crucial role in the quantification of vascular morphology, significantly contributing to computer-assisted stroke research and clinical practice. Current research primarily focuses on the segmentation of single-frame DSA using proprietary datasets. However, these methods face challenges due to the inherent limitation of single-frame DSA, which only partially displays vascular contrast, thereby hindering accurate vascular structure representation. In this work, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences. We establish a comprehensive benchmark for evaluating DIAS, covering full, weak, and semi-supervised segmentation methods. Specifically, we propose the vessel sequence segmentation network, in which the sequence feature extraction module effectively captures spatiotemporal representations of intravascular contrast, achieving intracranial artery segmentation in 2D+Time DSA sequences. For weakly-supervised IA segmentation, we propose a novel scribble learning-based image segmentation framework, which, under the guidance of scribble labels, employs cross pseudo-supervision and consistency regularization to improve the performance of the segmentation network. Furthermore, we introduce the random patch-based self-training framework, aimed at alleviating the performance constraints encountered in IA segmentation due to the limited availability of annotated DSA data. Our extensive experiments on the DIAS dataset demonstrate the effectiveness of these methods as potential baselines for future research and clinical applications. The dataset and code are publicly available at https://doi.org/10.5281/zenodo.11401368 and https://github.com/lseventeen/DIAS.
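The cross pseudo-supervision scheme mentioned for weakly-supervised segmentation, two branches in which each is trained on the other's hard pseudo-labels over unlabeled pixels, can be illustrated with a small numpy sketch. This is a simplification of the paper's framework; function names and shapes are assumptions:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy of predicted probabilities against integer labels.

    probs: (n_pixels, n_classes) softmax outputs; labels: (n_pixels,) ints.
    """
    picked = probs[np.arange(len(labels)), labels]
    return -np.log(np.clip(picked, 1e-12, None)).mean()

def cross_pseudo_supervision_loss(p1, p2):
    """Each branch is supervised by the other's hard pseudo-labels.

    p1, p2: (n_pixels, n_classes) softmax outputs of the two branches
    on the same unlabeled image.
    """
    y1 = p1.argmax(axis=-1)  # branch 1's pseudo-labels
    y2 = p2.argmax(axis=-1)  # branch 2's pseudo-labels
    return cross_entropy(p1, y2) + cross_entropy(p2, y1)
```

The loss is small when the two branches agree confidently and large when they disagree, which is what drives the consistency signal on unlabeled data.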
PubMed: 38941857
DOI: 10.1016/j.media.2024.103247 -
Neural Networks : the Official Journal... Jun 2024Despite the tremendous success of convolutional neural networks (CNNs) in computer vision, the mechanism of CNNs still lacks clear interpretation. Currently, class...
Despite the tremendous success of convolutional neural networks (CNNs) in computer vision, the mechanism of CNNs still lacks clear interpretation. Class activation mapping (CAM), a popular visualization technique for interpreting a CNN's decisions, has drawn increasing attention. Gradient-based CAMs are efficient, but their performance is heavily affected by vanishing and exploding gradients. In contrast, gradient-free CAMs avoid computing gradients and produce more understandable results; however, they are quite time-consuming because hundreds of forward inferences per image are required. In this paper, we propose Cluster-CAM, an effective and efficient gradient-free CNN interpretation algorithm. Cluster-CAM significantly reduces the number of forward propagations by splitting the feature maps into clusters. Furthermore, we propose an artful strategy to forge a cognition-base map and cognition-scissors from the clustered feature maps; the final saliency heatmap is produced by merging these cognition maps. Qualitative results show that Cluster-CAM produces heatmaps whose highlighted regions match human cognition more precisely than existing CAMs, and quantitative evaluation further demonstrates the superiority of Cluster-CAM in both effectiveness and efficiency.
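The efficiency gain comes from clustering the target layer's feature maps so that only one masked forward pass per cluster is needed instead of one per channel. A minimal numpy sketch of that clustering step, with a tiny k-means implementation, follows; it illustrates the general idea only, not the authors' cognition-base/cognition-scissors algorithm, and all names are assumptions:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Bare-bones k-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if cluster empties
                centers[j] = X[labels == j].mean(0)
    return labels

def cluster_cam_masks(feature_maps, k):
    """Group C feature maps into k clusters and average each cluster.

    feature_maps: (C, H, W) activations from the target conv layer.
    Returns up to k averaged maps, so only k masked forward passes
    are needed instead of C.
    """
    C, H, W = feature_maps.shape
    labels = kmeans(feature_maps.reshape(C, -1), k)
    return np.stack([feature_maps[labels == j].mean(0)
                     for j in range(k) if np.any(labels == j)])
```

Each averaged map would then be used as a spatial mask on the input for one forward pass, and the resulting scores weight the maps in the final heatmap.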
PubMed: 38941740
DOI: 10.1016/j.neunet.2024.106473 -
Neural Networks : the Official Journal... Jun 2024Video frame interpolation methods aim to synthesize new frames between existing ones in order to increase a video's frame rate. However, current...
Video frame interpolation methods aim to synthesize new frames between existing ones in order to increase a video's frame rate. However, current methods are prone to image blurring and spurious artifacts in challenging scenarios involving occlusions and discontinuous motion. Moreover, they typically rely on optical flow estimation, which adds modeling complexity and computational cost. To address these issues, we introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames through a novel hierarchical pyramid module. The module not only extracts global semantic relationships and spatial details from input frames with different receptive fields, enabling the model to capture intricate motion patterns, but also effectively reduces the required computational cost and complexity. Subsequently, a cross-scale motion structure estimates and refines intermediate flow maps from the extracted features. This facilitates the interplay between input frame features and flow maps during interpolation and markedly improves the precision of the intermediate flow estimates. Finally, a carefully designed loss centered on the intermediate flow guides its prediction, substantially refining the accuracy of the estimated flow maps. Experiments illustrate that MA-VFI surpasses several representative VFI methods across various datasets, and can enhance efficiency while maintaining commendable efficacy.
PubMed: 38941737
DOI: 10.1016/j.neunet.2024.106433 -
Journal of Educational Evaluation For... 2024
PubMed: 38910267
DOI: 10.3352/jeehp.2024.21.9 -
Journal of Imaging Informatics in... Jun 2024The aim of this study was to investigate the effect of iterative motion correction (IMC) on reducing artifacts in brain magnetic resonance imaging (MRI) with deep...
The aim of this study was to investigate the effect of iterative motion correction (IMC) on reducing artifacts in brain magnetic resonance imaging (MRI) with deep learning reconstruction (DLR). The study included 10 volunteers (between September 2023 and December 2023) for quantitative analyses and 30 patients (between June 2022 and July 2022) for qualitative analyses. Volunteers were instructed to remain still during the first MRI with a fluid-attenuated inversion recovery (FLAIR) sequence and to move during the second scan. IMCoff DLR images were reconstructed from the raw data of the former acquisition; IMCon and IMCoff DLR images were reconstructed from the latter. After registration of the motion images, the structural similarity index measure (SSIM) was calculated using the motionless images as reference. For the qualitative analyses, IMCon and IMCoff FLAIR DLR images of the patients were reconstructed and evaluated by three blinded readers in terms of motion artifacts, noise, and overall quality. The SSIM for IMCon images was 0.952, higher than that for IMCoff images (0.949) (p < 0.001). In the qualitative analyses, although two of the three readers rated noise as increased in IMCon images (both p < 0.001), all readers agreed that motion artifacts and overall quality were significantly better in IMCon images than in IMCoff images (all p < 0.001). In conclusion, IMC reduced motion artifacts in brain FLAIR DLR images while maintaining similarity to the motionless images.
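SSIM, the quantitative metric used in this study, compares luminance, contrast, and structure between two images. A simplified single-window version is shown below for illustration; real evaluations, and most likely this study, use a sliding-window implementation such as scikit-image's `structural_similarity`:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM computed over whole images.

    A simplification: the standard metric averages SSIM over local
    sliding windows rather than using global statistics.
    """
    C1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    C2 = (0.03 * data_range) ** 2  # stabilizer for contrast/structure
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Identical images score exactly 1, and the score drops as noise or motion artifacts reduce the structural agreement with the motionless reference, which is how the 0.952 vs 0.949 comparison above should be read.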
PubMed: 38942939
DOI: 10.1007/s10278-024-01184-w