Diagnostics (Basel, Switzerland), Jan 2023
Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to solve this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed as a technique to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, thus outperforming the CNN-based transfer-learning models and vision transformer models trained from scratch. The technique can, hence, be applied in a clinical setting to improve the early diagnosis of breast cancer.
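As a hedged illustration of the transfer-learning recipe described above, the sketch below fine-tunes an ImageNet-pretrained ViT with a new two-class head for benign-versus-malignant classification; the torchvision backbone, the frozen-backbone warm-up, and the hyperparameters are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of ViT transfer learning for benign/malignant mammogram
# classification. Backbone, head size, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)  # ImageNet-pretrained
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # benign vs. malignant

# Optionally freeze the backbone and train only the new head at first.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("heads")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of 224x224 RGB mammogram crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```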
PubMed: 36672988
DOI: 10.3390/diagnostics13020178
Sensors (Basel, Switzerland), Sep 2022
Identifying an individual based on their physical or behavioral characteristics is known as biometric recognition. Gait is one of the most reliable biometrics due to its advantages, such as being perceivable at a long distance and difficult to replicate. Existing works mostly leverage Convolutional Neural Networks for gait recognition. Convolutional Neural Networks perform well in image recognition tasks; however, they lack an attention mechanism to emphasize the significant regions of the image. The attention mechanism encodes information in the image patches, which helps the model learn the substantial features in specific regions. In light of this, this work employs the Vision Transformer (ViT) with an attention mechanism for gait recognition, referred to as Gait-ViT. In the proposed Gait-ViT, the gait energy image is first obtained by averaging the series of images over the gait cycle. The images are then split into patches and transformed into sequences by flattening and patch embedding. Position embedding, along with patch embedding, is applied to the sequence of patches to restore their positional information. Subsequently, the sequence of vectors is fed to the Transformer encoder to produce the final gait representation. For classification, the first element of the sequence is sent to the multi-layer perceptron to predict the class label. The proposed method obtained 99.93% on CASIA-B, 100% on OU-ISIR D, and 99.51% on OU-LP, demonstrating the ability of the Vision Transformer model to outperform state-of-the-art methods.
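The pipeline above (gait energy image, patch and position embedding, Transformer encoder, classification from the first sequence element) can be sketched as follows; every dimension and module choice here is an illustrative assumption rather than the Gait-ViT specification.

```python
# Sketch of a Gait-ViT style pipeline: gait energy image -> patches ->
# patch + position embedding -> Transformer encoder -> CLS token -> MLP head.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

def gait_energy_image(silhouettes: torch.Tensor) -> torch.Tensor:
    """Average a (T, H, W) sequence of binary silhouettes over a gait cycle."""
    return silhouettes.float().mean(dim=0)

class TinyGaitViT(nn.Module):
    def __init__(self, img=64, patch=8, dim=128, depth=4, heads=4, n_classes=124):
        super().__init__()                     # n_classes=124: assumed subject count
        n_patches = (img // patch) ** 2
        # Flatten each patch and project it to the embedding dimension.
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # position embedding
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, n_classes)  # MLP head on the first element

    def forward(self, gei):                    # gei: (B, 1, 64, 64)
        x = self.embed(gei).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])              # first element of the sequence
```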
Topics: Biometry; Gait; Neural Networks, Computer; Pattern Recognition, Automated
PubMed: 36236462
DOI: 10.3390/s22197362
Sensors (Basel, Switzerland), Jan 2023
Human faces are a core part of our identity and expression, and thus, understanding facial geometry is key to capturing this information. Automated systems that seek to make use of this information must have a way of modeling facial features that makes them accessible. Hierarchical, multi-level architectures have the capability of capturing the different resolutions of representation involved. In this work, we propose using a hierarchical transformer architecture as a means of capturing a robust representation of facial geometry. We further demonstrate the versatility of our approach by using this transformer as a backbone to support three facial representation problems: face anti-spoofing, facial expression representation, and deepfake detection. The combination of effective fine-grained details alongside global attention representations makes this architecture an excellent candidate for these facial representation problems. We conduct numerous experiments, first showcasing the ability of our approach to address common issues in facial modeling (pose, occlusions, and background variation) and capture facial symmetry, then demonstrating its effectiveness on three supplemental tasks.
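One way to realize a shared hierarchical-transformer backbone driving the three tasks named above is sketched below; the Swin-T backbone, pooled-feature heads, and output sizes are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a shared hierarchical-transformer backbone with three task heads
# (anti-spoofing, expression, deepfake detection). Backbone and head
# dimensions are assumptions.
import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

class FaceMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)
        feat_dim = backbone.head.in_features
        backbone.head = nn.Identity()               # expose pooled features
        self.backbone = backbone
        self.spoof_head = nn.Linear(feat_dim, 2)    # live vs. spoof
        self.expr_head = nn.Linear(feat_dim, 7)     # 7 expression classes (assumed)
        self.deepfake_head = nn.Linear(feat_dim, 2) # real vs. fake

    def forward(self, x):
        f = self.backbone(x)                        # shared facial representation
        return self.spoof_head(f), self.expr_head(f), self.deepfake_head(f)
```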
Topics: Humans; Face; Learning; Facial Expression
PubMed: 36679725
DOI: 10.3390/s23020929
Medical Image Analysis, Dec 2023
Any clinically deployed image-processing pipeline must be robust to the full range of inputs it may be presented with. One popular approach to this challenge is to develop predictive models that can provide a measure of their uncertainty. Another approach is to use generative modelling to quantify the likelihood of inputs. Inputs with a low enough likelihood are deemed to be out-of-distribution and are not presented to the downstream predictive model. In this work, we evaluate several approaches to segmentation with uncertainty for the task of segmenting bleeds in 3D CT of the head. We show that these models can fail catastrophically when operating in the far out-of-distribution domain, often providing predictions that are both highly confident and wrong. We propose to instead perform out-of-distribution detection using the Latent Transformer Model: a VQ-GAN is used to provide a highly compressed latent representation of the input volume, and a transformer is then used to estimate the likelihood of this compressed representation of the input. We demonstrate that this approach can identify images that are both far- and near-out-of-distribution, as well as provide spatial maps that highlight the regions considered to be out-of-distribution. Furthermore, we find a strong relationship between an image's likelihood and the quality of a model's segmentation on it, demonstrating that this approach is viable for filtering out unsuitable images.
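A minimal sketch of the likelihood-scoring idea follows: assuming inputs have already been compressed to discrete VQ-GAN codes, a small autoregressive transformer (a stand-in for a trained model, with assumed sizes) yields a per-position negative log-likelihood whose mean serves as the OOD score and whose reshape to the latent grid gives a spatial map.

```python
# Sketch of likelihood-based OOD scoring over VQ-GAN latent codes: an
# autoregressive transformer scores each discrete code; high NLL flags
# out-of-distribution inputs. The tiny model stands in for a trained one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLatentTransformer(nn.Module):
    def __init__(self, vocab=1024, dim=256, depth=4, heads=4, seq_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, depth)
        self.out = nn.Linear(dim, vocab)

    def forward(self, codes):                       # codes: (B, L) int64
        L = codes.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(L)  # causal mask
        h = self.enc(self.tok(codes) + self.pos[:, :L], mask=mask)
        return self.out(h)                          # next-code logits

def ood_nll_map(model, codes):
    """Per-position negative log-likelihood; its mean is the OOD score and
    reshaping it to the latent grid gives a spatial OOD map."""
    logits = model(codes[:, :-1])                   # predict each next code
    nll = F.cross_entropy(
        logits.transpose(1, 2), codes[:, 1:], reduction="none"
    )
    return nll                                      # (B, L-1)
```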
Topics: Humans; Image Processing, Computer-Assisted; Probability; Uncertainty
PubMed: 37778102
DOI: 10.1016/j.media.2023.102967
Diagnostics (Basel, Switzerland), Jul 2023
Left ventricular ejection fraction (LVEF) plays an essential role in the assessment of cardiac function, providing quantitative data support for the medical diagnosis of heart disease. Robust evaluation of the ejection fraction relies on accurate left ventricular (LV) segmentation of echocardiograms. Because manual echocardiographic analysis involves human bias and expensive labor, deep-learning algorithms have been developed to help human experts in segmentation tasks. Most previous work is based on convolutional neural network (CNN) structures and has achieved good results. However, the region occupied by the left ventricle in echocardiography is large, so the limited receptive field of CNNs leaves much room for improvement in the effectiveness of LV segmentation. In recent years, Vision Transformer models have demonstrated their effectiveness and universality in traditional semantic segmentation tasks. Inspired by this, we propose two models that use two different pure Transformers as the basic framework for LV segmentation in echocardiography: one combines the Swin Transformer and K-Net, and the other uses Segformer. We evaluate these two models on the EchoNet-Dynamic dataset for LV segmentation and compare the quantitative metrics with other LV segmentation models. The experimental results show that the mean Dice similarity scores of the two models are 92.92% and 92.79%, respectively, outperforming most previous mainstream CNN models. In addition, we found that for some samples that were not easily segmented, both our models successfully recognized the valve region and separated the left ventricle from the left atrium, whereas the CNN model segmented them together as a single part. Therefore, it becomes possible to obtain accurate segmentation results through simple post-processing, by filtering out the parts with the largest circumference or pixel area. These promising results prove the effectiveness of the two models and reveal the potential of the Transformer structure in echocardiographic segmentation.
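One plausible reading of the post-processing step above is a connected-component filter that keeps only the largest region by pixel area; the sketch below (binary numpy mask, scipy labelling) is an assumption about that step, not the authors' code.

```python
# Sketch of simple connected-component post-processing: keep only the
# largest foreground region (by pixel area) of a binary segmentation mask.
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Return a mask containing only the largest connected foreground region."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask                            # nothing segmented; pass through
    sizes = np.bincount(labels.ravel())[1:]    # pixel area of each component
    keep = np.argmax(sizes) + 1                # label of the largest component
    return (labels == keep).astype(mask.dtype)
```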
PubMed: 37510109
DOI: 10.3390/diagnostics13142365
BMC Medical Imaging, Sep 2023 (Review)
BACKGROUND
Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications. Recently, many researchers have developed vision transformer-based AI methods for lung cancer diagnosis and prognosis.
OBJECTIVE
This scoping review aims to identify recent developments in vision transformer-based AI methods for lung cancer imaging applications. It provides key insights into how vision transformers have complemented the performance of AI and deep learning methods for lung cancer. The review also identifies the datasets that contributed to advancing the field.
METHODS
In this review, we searched the PubMed, Scopus, IEEE Xplore, and Google Scholar online databases. The search terms included intervention terms (vision transformers) and the task (e.g., lung cancer, adenocarcinoma). Two reviewers independently screened titles and abstracts to select relevant studies and performed the data extraction. A third reviewer was consulted to validate the inclusions and exclusions. Finally, a narrative approach was used to synthesize the data.
RESULTS
Of the 314 retrieved studies, this review included 34 studies published from 2020 to 2022. The most commonly addressed task in these studies was the classification of lung cancer types, such as lung squamous cell carcinoma versus lung adenocarcinoma, and identifying benign versus malignant pulmonary nodules. Other applications included survival prediction for lung cancer patients and segmentation of the lungs. The studies lacked clear strategies for clinical translation. The Swin Transformer was a popular choice among researchers; however, many other architectures were also reported in which the vision transformer was combined with convolutional neural networks or a UNet model. Researchers have used the publicly available lung cancer datasets of the Lung Image Database Consortium and The Cancer Genome Atlas. One study used a cluster of 48 GPUs, while other studies used one, two, or four GPUs.
CONCLUSION
It can be concluded that vision transformer-based models are increasing in popularity for developing AI methods for lung cancer applications. However, their computational complexity and clinical relevance are important factors to consider in future research. This review provides valuable insights for researchers in the field of AI and healthcare to advance the state of the art in lung cancer diagnosis and prognosis. We provide an interactive dashboard at lung-cancer.onrender.com/.
Topics: Humans; Artificial Intelligence; Prognosis; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung; Multiple Pulmonary Nodules
PubMed: 37715137
DOI: 10.1186/s12880-023-01098-z
Journal of Pathology Informatics, 2023
Over 150 000 Americans are diagnosed with colorectal cancer (CRC) every year, and annually over 50 000 individuals will die from CRC, necessitating improvements in screening, prognostication, disease management, and therapeutic options. Tumor metastasis is the primary factor related to the risk of recurrence and mortality. Yet, screening for nodal and distant metastasis is costly and invasive, and incomplete resection may hamper adequate assessment. Signatures of the tumor-immune microenvironment (TIME) at the primary site can provide valuable insights into the aggressiveness of the tumor and the effectiveness of various treatment options. Spatially resolved transcriptomics technologies offer an unprecedented characterization of TIME through high multiplexing, yet their scope is constrained by cost. Meanwhile, it has long been suspected that histological, cytological, and macroarchitectural tissue characteristics correlate well with molecular information (e.g., gene expression). Thus, a method for predicting transcriptomics data through inference of RNA patterns from whole slide images (WSI) is a key step in studying metastasis at scale. In this work, we collected tissue from 4 stage-III (pT3) matched colorectal cancer patients for spatial transcriptomics profiling. The Visium spatial transcriptomics (ST) assay was used to measure transcript abundance for 17 943 genes at up to 5000 55-micron (i.e., 1-10 cells) spots per patient, sampled in a honeycomb pattern and co-registered with hematoxylin and eosin (H&E)-stained WSIs. The Visium ST assay measures expression at these spots through tissue permeabilization of mRNAs, which are captured by spatially (i.e., x-y positional coordinates) barcoded, gene-specific oligo probes. WSI subimages were extracted around each co-registered Visium spot and were used to predict the expression at these spots using machine learning models. We prototyped and compared several convolutional, transformer, and graph convolutional neural networks to predict spatial RNA patterns at the Visium spots, under the hypothesis that the transformer- and graph-based approaches better capture relevant spatial tissue architecture. We further analyzed the models' ability to recapitulate spatial autocorrelation statistics using SPARK and SpatialDE. Overall, the results indicate that the transformer- and graph-based approaches were unable to outperform the convolutional neural network architecture, though they exhibited optimal performance for relevant disease-associated genes. Initial findings suggest that different neural networks operating at different scales are relevant for capturing distinct disease pathways (e.g., epithelial to mesenchymal transition). We add further evidence that deep learning models can accurately predict gene expression in whole slide images and comment on understudied factors that may increase their external applicability (e.g., tissue context). Our preliminary work will motivate further investigation of inferring molecular patterns from whole slide images as metastasis predictors and in other applications.
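The spot-level prediction setup described above can be sketched as follows: crop a subimage around each co-registered spot coordinate and regress per-gene expression with a CNN. The patch size, ResNet-18 backbone, and lack of boundary handling are illustrative assumptions; only the gene count comes from the abstract.

```python
# Sketch of patch-to-expression regression: a WSI subimage around each
# Visium spot is mapped to per-gene expression by a CNN. Backbone and
# patch size are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

N_GENES = 17943  # number of genes measured per spot (from the abstract)

def extract_patch(wsi: torch.Tensor, x: int, y: int, size: int = 224):
    """Crop a (3, size, size) subimage centred on a spot's x-y coordinate.
    Boundary handling is omitted for brevity."""
    half = size // 2
    return wsi[:, y - half:y + half, x - half:x + half]

class SpotExpressionCNN(nn.Module):
    """CNN regressor mapping a spot-centred patch to per-gene expression."""
    def __init__(self, n_genes: int = N_GENES):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_genes)

    def forward(self, patch):
        return self.backbone(patch)   # (B, n_genes) predicted expression
```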
PubMed: 37114077
DOI: 10.1016/j.jpi.2023.100308
Sensors (Basel, Switzerland), Oct 2022
Image super-resolution (ISR) technology aims to enhance resolution and improve image quality. It is widely applied in real-world applications related to image processing, especially medical images, but has been applied relatively little to anime image production. Furthermore, contemporary ISR tools are often based on convolutional neural networks (CNNs), while few methods attempt to use transformers, which perform well in other advanced vision tasks. In this work, we propose an anime image super-resolution (AISR) method based on the Swin Transformer. The work was carried out in several stages. First, shallow feature extraction was employed to obtain a feature map of the input image's low-frequency information, which mainly approximates the distribution of detailed information in the spatial structure (shallow features). Next, we applied deep feature extraction to extract the image's semantic information (deep features). Finally, the image reconstruction step combines shallow and deep features to upsample the feature maps and performs sub-pixel convolution to obtain many feature map channels. The novelty of the proposal is the enhancement of the low-frequency information using a Gaussian filter and the introduction of different window sizes to replace the patch merging operations in the Swin Transformer. A high-quality anime dataset was constructed to improve the model's robustness in the online regime. We trained our model on this dataset and tested the model quality. We implemented anime image super-resolution tasks at different magnifications (2×, 4×, 8×). The results were compared numerically and graphically with those delivered by conventional CNN-based and transformer-based methods, using the standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. The series of experiments and an ablation study show that our proposal outperforms the others.
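Two ingredients named above lend themselves to a short sketch: a Gaussian filter to isolate low-frequency (shallow) information, and sub-pixel convolution (PixelShuffle) for upsampling. The kernel size, channel count, and scale factor below are assumptions, not the paper's settings.

```python
# Sketch of two super-resolution ingredients: Gaussian low-frequency
# extraction and sub-pixel convolution upsampling. Sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.transforms.functional import gaussian_blur

def low_frequency(x: torch.Tensor) -> torch.Tensor:
    """Gaussian-filtered copy of the input approximating its low frequencies."""
    return gaussian_blur(x, kernel_size=[5, 5], sigma=[1.5, 1.5])

class SubPixelUpsampler(nn.Module):
    """Produce scale**2 channel groups, then rearrange them into space."""
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale**2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # (C*s^2, H, W) -> (C, sH, sW)

    def forward(self, feats):
        return self.shuffle(self.conv(feats))
```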
Topics: Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Signal-To-Noise Ratio; Neural Networks, Computer; Electric Power Supplies
PubMed: 36365830
DOI: 10.3390/s22218126
Sensors (Basel, Switzerland), Oct 2022
Building segmentation is crucial for applications ranging from map production to urban planning. It remains a challenge due to CNNs' inability to model global context and Transformers' high memory requirements. In this study, 10 CNN and Transformer models were generated and compared. Alongside our proposed Residual-Inception U-Net (RIU-Net), U-Net, Residual U-Net, and Attention Residual U-Net, four CNN architectures (Inception, Inception-ResNet, Xception, and MobileNet) were implemented as encoders in U-Net-based models. Lastly, two Transformer-based approaches (Trans U-Net and Swin U-Net) were also used. The Massachusetts Buildings Dataset and the Inria Aerial Image Labeling Dataset were used for training and evaluation. On the Inria dataset, RIU-Net achieved the highest IoU score, F1 score, and test accuracy, with 0.6736, 0.7868, and 92.23%, respectively. On the Massachusetts Small dataset, Attention Residual U-Net achieved the highest IoU and F1 scores, with 0.6218 and 0.7606, and Trans U-Net reached the highest test accuracy, with 94.26%. On the Massachusetts Large dataset, Residual U-Net accomplished the highest IoU and F1 scores, with 0.6165 and 0.7565, and Attention Residual U-Net attained the highest test accuracy, with 93.81%. The results showed that RIU-Net was particularly successful on the Inria dataset. On the Massachusetts datasets, Residual U-Net, Attention Residual U-Net, and Trans U-Net provided successful results.
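The IoU and F1 metrics reported above can be computed directly from binary masks; a minimal numpy sketch (function name and epsilon are my own) follows.

```python
# Sketch of the reported evaluation metrics: IoU and F1 score for a binary
# building mask versus ground truth, computed from pixel counts.
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """IoU and F1 (Dice) for boolean prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # true positive pixels
    fp = np.logical_and(pred, ~gt).sum()    # false positives
    fn = np.logical_and(~pred, gt).sum()    # false negatives
    iou = tp / (tp + fp + fn + eps)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return iou, f1
```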
Topics: Data Collection; Image Processing, Computer-Assisted; Neural Networks, Computer
PubMed: 36236721
DOI: 10.3390/s22197624
Heliyon, Dec 2023
A supervised deep learning network like the UNet has performed well in segmenting brain anomalies such as lesions and tumours. However, such methods were proposed to perform on single-modality or multi-modality images. We use the Hybrid UNet Transformer (HUT) to improve performance in single-modality lesion segmentation and multi-modality brain tumour segmentation. The HUT consists of two pipelines running in parallel, one UNet-based and the other Transformer-based. The Transformer-based pipeline relies on feature maps in the intermediate layers of the UNet decoder during training. The HUT network takes in the available modalities of 3D brain volumes and embeds the brain volumes into voxel patches. The transformers in the system improve global attention and long-range correlation between the voxel patches. In addition, we introduce a self-supervised training approach in the HUT framework to enhance the overall segmentation performance. We demonstrate that HUT performs better than the state-of-the-art network SPiN in single-modality segmentation on the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset, by 4.84% in Dice score and a significant 41% in Hausdorff distance. HUT also performed well on brain scans in the Brain Tumour Segmentation (BraTS20) dataset, demonstrating an improvement over the state-of-the-art network nnUnet of 0.96% in Dice score and 4.1% in Hausdorff distance.
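The two metrics quoted above, Dice score and Hausdorff distance, can be sketched for binary masks as follows; this 2D scipy version (assuming non-empty masks) only illustrates the computation, whereas the papers evaluate 3D volumes.

```python
# Sketch of the two reported segmentation metrics: Dice score and a
# symmetric Hausdorff distance between binary masks, using scipy.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + eps)

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground point sets
    (assumes both masks are non-empty)."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```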
PubMed: 38046150
DOI: 10.1016/j.heliyon.2023.e22412