Frontiers in Neural Circuits, 2018
Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges of segmenting the larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often over-rate the quality of a segmentation, as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scale, either in a supervised setting, where ground truth is available, or in an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. We also propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentration of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations.
We also introduce visualization software to help users interpret the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets, providing novel insights on example segmentations.
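The "orphan" fragment measure described above can be computed without ground truth. A minimal toy sketch of the idea (the function name and input encoding are hypothetical simplifications, not the paper's actual API):

```python
def orphan_fraction(segment_ids, synaptic_segment_ids):
    """Fraction of segments that touch no synapse ('orphans').

    segment_ids: all segment labels in the volume.
    synaptic_segment_ids: labels that contain at least one pre- or
    post-synaptic site. Both encodings are illustrative only.
    """
    synaptic = set(synaptic_segment_ids)
    orphans = [s for s in segment_ids if s not in synaptic]
    return len(orphans) / len(segment_ids)
```

A high orphan fraction flags a segmentation dominated by small, disconnected fragments, which is exactly the kind of failure the abstract argues conventional metrics under-report.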
Topics: Animals; Connectome; Databases, Factual; Drosophila; Image Processing, Computer-Assisted; Mushroom Bodies; Neurons; Pattern Recognition, Automated; Synapses
PubMed: 30483069
DOI: 10.3389/fncir.2018.00102
Computerized Medical Imaging and..., Jan 2022
Whole-brain segmentation is a crucial pre-processing step for many neuroimaging analysis pipelines. Accurate and efficient whole-brain segmentations are important for many neuroimage analysis tasks to provide clinically relevant information. Several recently proposed convolutional neural networks (CNNs) perform whole-brain segmentation using individual 2D slices or 3D patches as inputs due to graphics processing unit (GPU) memory limitations, and use sliding windows to perform whole-brain segmentation during inference. However, these approaches lack global and spatial information about the entire brain and lead to compromised efficiency during both training and testing. We introduce a 3D hemisphere-based CNN for automatic whole-brain segmentation of T1-weighted magnetic resonance images of adult brains. First, we trained a localization network to predict bounding boxes for both hemispheres. Then, we trained a segmentation network to segment one hemisphere, and segmented the opposing hemisphere by reflecting it across the mid-sagittal plane. Our network shows high performance both in terms of segmentation efficiency and accuracy (0.84 overall Dice similarity and 6.1 mm overall Hausdorff distance) in segmenting 102 brain structures. On multiple independent test datasets, our method demonstrated a competitive performance in the subcortical segmentation task and a high consistency in volumetric measurements of intra-session scans.
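The reflect-and-reuse idea above, where one network segments one hemisphere and the other hemisphere is mirrored across the mid-sagittal plane, reduces to a crop followed by a flip along the left-right axis. A minimal sketch with nested lists (all names and the axis convention are illustrative assumptions):

```python
def crop_box(volume, box):
    """Crop a bounding box ((x0,x1),(y0,y1),(z0,z1)) from a 3D volume
    stored as a nested list indexed [x][y][z]."""
    (x0, x1), (y0, y1), (z0, z1) = box
    return [[row[z0:z1] for row in plane[y0:y1]]
            for plane in volume[x0:x1]]

def reflect_sagittal(hemisphere):
    """Mirror a cropped hemisphere across the mid-sagittal plane.

    Assuming x is the left-right axis, reversing x reflects the
    volume so a network trained on one hemisphere can be applied
    to the other.
    """
    return hemisphere[::-1]
```

In practice this would operate on image arrays, but the pipeline shape is the same: localize a hemisphere box, crop, reflect if needed, segment, then reflect the labels back.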
Topics: Brain; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Neuroimaging
PubMed: 34839147
DOI: 10.1016/j.compmedimag.2021.102000
PloS One, 2023
Cardiovascular diseases related to the right side of the heart, such as pulmonary hypertension, are among the leading causes of death in the Mexican (and worldwide) population. To avoid invasive techniques such as cardiac catheterization, improving the segmentation performance of medical echocardiographic systems can be an option for early detection of diseases related to the right side of the heart. While current medical imaging systems perform well when automatically segmenting the left side of the heart, they typically struggle to segment the right-side cavities. This paper presents a robust cardiac segmentation algorithm based on the popular U-NET architecture, capable of accurately segmenting the four cavities with a reduced training dataset. Moreover, we propose two additional steps to improve the quality of the results of our machine learning model: 1) a segmentation algorithm capable of accurately detecting cone shapes (as it has been trained and refined with multiple data sources), and 2) a post-processing step which refines the shape and contours of the segmentation based on heuristics provided by the clinicians. Our results demonstrate that the proposed techniques achieve segmentation accuracy comparable to state-of-the-art methods on datasets commonly used for this practice, as well as on datasets compiled by our medical team. Furthermore, we tested the validity of the post-processing correction step within the same sequence of images and demonstrated its consistency with manual segmentations performed by clinicians.
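A heuristic post-processing step of the kind described could, for instance, close small gaps along a mask so contours stay connected. A toy 1D sketch (the rule and function name are invented for illustration; the paper's actual clinician-derived heuristics are not specified here):

```python
def fill_small_gaps(mask, max_gap=1):
    """Fill runs of 0s no longer than max_gap that are flanked by 1s.

    mask: flat 0/1 list. A toy stand-in for contour-smoothing
    heuristics applied after the network's prediction.
    """
    out = list(mask)
    i, n = 0, len(mask)
    while i < n:
        if mask[i] == 0:
            j = i
            while j < n and mask[j] == 0:
                j += 1
            # fill only interior gaps that are short enough
            if 0 < i and j < n and (j - i) <= max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out
```

The same flanked-gap rule extends to 2D contours, where it is usually implemented with morphological closing.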
Topics: Image Processing, Computer-Assisted; Heuristics; Algorithms; Heart; Machine Learning
PubMed: 37889912
DOI: 10.1371/journal.pone.0293560
NeuroImage. Clinical, 2022
Accurate segmentation of surgical resection sites is critical for clinical assessments and neuroimaging research applications, including resection extent determination, predictive modeling of surgery outcome, and masking image processing near resection sites. In this study, an automated resection cavity segmentation algorithm is developed for analyzing postoperative MRI of epilepsy patients and deployed in an easy-to-use graphical user interface (GUI) that estimates remnant brain volumes, including postsurgical hippocampal remnant tissue. This retrospective study included postoperative T1-weighted MRI from 62 temporal lobe epilepsy (TLE) patients who underwent resective surgery. The resection site was manually segmented and reviewed by a neuroradiologist (JMS). A majority-vote ensemble algorithm was used to segment surgical resections, using 3 U-Net convolutional neural networks trained on axial, coronal, and sagittal slices, respectively. The algorithm was trained using 5-fold cross-validation, with data partitioned into training (N = 27), testing (N = 9), and validation (N = 9) sets, and evaluated on a separate held-out test set (N = 17). Algorithm performance was assessed using the Dice-Sørensen coefficient (DSC), Hausdorff distance, and volume estimates. Additionally, we deploy a fully automated, GUI-based pipeline that compares resection segmentations with preoperative imaging and reports estimates of resected brain structures. The cross-validation and held-out test median DSCs were 0.84 ± 0.08 and 0.74 ± 0.22 (median ± interquartile range) respectively, which approach inter-rater reliability between radiologists (0.84-0.86) as reported in the literature. Median 95% Hausdorff distances were 3.6 mm and 4.0 mm respectively, indicating high segmentation boundary confidence. Automated and manual resection volume estimates were highly correlated for both cross-validation (r = 0.94, p < 0.0001) and held-out test subjects (r = 0.87, p < 0.0001).
Automated and manual segmentations overlapped in all 62 subjects, indicating a low false negative rate. In control subjects (N = 40), the classifier segmented no voxels (N = 33), <50 voxels (N = 5), or small volumes < 0.5 cm³ (N = 2), indicating a low false positive rate that can be controlled via thresholding. There was strong agreement between postoperative hippocampal remnant volumes determined using automated and manual resection segmentations (r = 0.90, p < 0.0001, mean absolute error = 6.3%), indicating that automated resection segmentations can permit quantification of postoperative brain volumes after epilepsy surgery. Applications include quantification of postoperative remnant brain volumes, correction of deformable registration, and localization of removed brain regions for network modeling.
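The majority-vote ensemble over the axial, coronal, and sagittal U-Nets amounts to a per-voxel vote among three binary predictions. A minimal sketch on flat 0/1 lists (the encoding is an illustrative simplification; real predictions would be resampled 3D arrays):

```python
def majority_vote(masks):
    """Per-voxel majority vote over binary masks from multiple views.

    masks: equally sized flat 0/1 lists, e.g. the axial, coronal,
    and sagittal network predictions on a common voxel grid.
    """
    threshold = len(masks) // 2 + 1  # e.g. 2 of 3 votes
    return [1 if sum(votes) >= threshold else 0
            for votes in zip(*masks)]
```

Requiring agreement from at least two of three orthogonally trained networks suppresses view-specific false positives, which is consistent with the low false positive rate reported in controls.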
Topics: Humans; Deep Learning; Retrospective Studies; Reproducibility of Results; Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Epilepsy
PubMed: 35988342
DOI: 10.1016/j.nicl.2022.103154
Journal of Neuroimaging : Official..., May 2022
BACKGROUND AND PURPOSE
Corpus callosum (CC) atrophy is predictive of future disability in multiple sclerosis (MS). However, current segmentation methods are either labor- or computationally intensive. We therefore developed an automated deep learning-based CC segmentation tool and hypothesized that its output would correlate with disability.
METHODS
A cohort of 631 MS patients (449 females, baseline age 41 ± 11 years) with both 3-dimensional T1-weighted and T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI was used for development. Data from 204 patients were manually segmented to train convolutional neural networks to extract the midsagittal intracranial and CC areas. The remaining data were used to compare segmentations with FreeSurfer and benchmark the outputs with regard to clinical correlations. A reproducibility cohort of 9 MS patients scanned at both 1.5 and 3 Tesla was used to evaluate segmentation robustness.
RESULTS
The deep learning-based tool was accurate in selecting the appropriate slice for segmentation (98% accuracy within 3 mm of the manual ground truth) and in segmenting the CC (Dice coefficient .88-.91) and intracranial areas (.97-.98). Accuracy was lower in cases with greater atrophy. Reproducibility was excellent (intraclass correlation coefficient > .90) for T1-weighted scans and moderate to good for FLAIR (.74-.75). Segmentations were associated with baseline and future (average follow-up time 6-7 years) Expanded Disability Status Scale (ρ = -.13 to -.24) and Symbol Digit Modalities Test (r = .18-.29) scores.
CONCLUSIONS
We present a fully automatic deep learning-based CC segmentation tool optimized to modern imaging in MS with clinical correlations on par with computationally expensive alternatives.
Topics: Adult; Atrophy; Corpus Callosum; Deep Learning; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Multiple Sclerosis; Reproducibility of Results
PubMed: 35083815
DOI: 10.1111/jon.12972
Journal of Medical Imaging (Bellingham,..., Oct 2017
Duchenne muscular dystrophy (DMD) is a childhood-onset neuromuscular disease that results in the degeneration of muscle, starting in the extremities before progressing to more vital areas, such as the lungs. Respiratory failure and pneumonia due to respiratory muscle weakness lead to hospitalization and early mortality. However, tracking the disease in this region can be difficult, as current methods are based on breathing tests and are incapable of distinguishing between muscle involvements. Cine MRI scans give insight into respiratory muscle movements, but the images suffer from low spatial resolution and a poor signal-to-noise ratio. Thus, a robust lung segmentation method is required for accurate analysis of the lung and respiratory muscle movement. We deployed a deep learning approach that utilizes sequence-specific prior information to assist the segmentation of the lung in cine MRI. More specifically, we adopt a holistically nested network to conduct image-to-image holistic training and prediction. One frame of the cine MRI is used for training, and the model is applied to the remainder of the sequence ([Formula: see text] frames). We applied this method to cine MRIs of the lung in the axial, sagittal, and coronal planes. Characteristic lung motion patterns during the breathing cycle were then derived from the segmentations and used for diagnosis. Our data set consisted of 31 young boys, age [Formula: see text] years, 15 of whom suffered from DMD. The remaining 16 subjects were age-matched healthy volunteers. For validation, slices from inspiratory and expiratory cycles were manually segmented and compared with results obtained from our method. The Dice similarity coefficient for the deep learning-based method was [Formula: see text] for the sagittal view, [Formula: see text] for the axial view, and [Formula: see text] for the coronal view. The holistically nested network approach was compared with an approach using Demons registration and showed superior performance.
These results suggest that the deep learning-based method reliably and accurately segments the lung across the breathing cycle.
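The Dice similarity coefficient used for validation here (and in several of the other studies above) is 2|A∩B| / (|A| + |B|). A minimal sketch on flat binary masks:

```python
def dice(a, b):
    """Dice similarity coefficient of two binary masks (flat 0/1 lists).

    Returns 1.0 when both masks are empty, a common convention.
    """
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0
```

Dice rewards overlap relative to combined mask size, which is why it is the default overlap metric across the segmentation papers in this list.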
PubMed: 29226176
DOI: 10.1117/1.JMI.4.4.041310
BioRxiv : the Preprint Server For..., Mar 2024
Cells are a fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, most models in use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach for mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve human-level performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected across various imaging modalities. We show that CellSAM has strong zero-shot performance and can be improved with a few examples via few-shot learning. We also show that CellSAM can unify bioimaging analysis workflows such as spatial transcriptomics and cell tracking. A deployed version of CellSAM is available at https://cellsam.deepcell.org/.
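The two-stage design described above, in which CellFinder proposes boxes and SAM turns each box prompt into a mask, can be sketched generically; both callables below are hypothetical stand-ins, not the actual CellSAM API:

```python
def segment_cells(image, detect_boxes, mask_from_box):
    """Two-stage prompting sketch: a detector proposes bounding
    boxes, and a promptable mask model converts each box into a
    per-cell mask.

    detect_boxes, mask_from_box: illustrative stand-ins for a
    CellFinder-style detector and a SAM-style promptable model.
    """
    return [mask_from_box(image, box) for box in detect_boxes(image)]
```

Decoupling detection from mask generation is what lets a single promptable model generalize: only the detector needs to know "what is a cell" in each imaging domain.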
PubMed: 38045277
DOI: 10.1101/2023.11.17.567630
Journal of Sports Science & Medicine, Sep 2015
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA, and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5%) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. The positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points
Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest.
Internal rotation and fixation of the lower limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis.
Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries.
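The coefficient of variation reported for reliability above is the sample standard deviation expressed as a percentage of the mean across repeated measurements of the same segment. A minimal sketch:

```python
def cv_percent(repeated_measurements):
    """Coefficient of variation (%) across repeated measurements,
    using the sample (n - 1) standard deviation."""
    n = len(repeated_measurements)
    mean = sum(repeated_measurements) / n
    variance = sum((x - mean) ** 2
                   for x in repeated_measurements) / (n - 1)
    return (variance ** 0.5) / mean * 100.0
```

A CV ≤ 2.4% across analysts, as reported here, means repeated segmental masses differ from their mean by only a few percent of the measured value.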
PubMed: 26336349
DOI: No ID Found
BMC Bioinformatics, Feb 2016
BACKGROUND
Robust methods for the segmentation and analysis of cells in 3D time sequences (3D+t) are critical for quantitative cell biology. While many automated methods for segmentation perform very well, few generalize reliably to diverse datasets. Such automated methods could significantly benefit from at least minimal user guidance. Identification and correction of segmentation errors in time-series data is of prime importance for proper validation of the subsequent analysis. The primary contribution of this work is a novel method for interactive segmentation and analysis of microscopy data, which learns from and guides user interactions to improve overall segmentation.
RESULTS
We introduce an interactive cell analysis application, called CellECT, for 3D+t microscopy datasets. The core segmentation tool is watershed-based and allows the user to add, remove or modify existing segments by means of manipulating guidance markers. A confidence metric learns from the user interaction and highlights regions of uncertainty in the segmentation for the user's attention. User corrected segmentations are then propagated to neighboring time points. The analysis tool computes local and global statistics for various cell measurements over the time sequence. Detailed results on two large datasets containing membrane and nuclei data are presented: a 3D+t confocal microscopy dataset of the ascidian Phallusia mammillata consisting of 18 time points, and a 3D+t single plane illumination microscopy (SPIM) dataset consisting of 192 time points. Additionally, CellECT was used to segment a large population of jigsaw-puzzle shaped epidermal cells from Arabidopsis thaliana leaves. The cell coordinates obtained using CellECT are compared to those of manually segmented cells.
CONCLUSIONS
CellECT provides tools for convenient segmentation and analysis of 3D+t membrane datasets by incorporating human interaction into automated algorithms. Users can modify segmentation results through the help of guidance markers, and an adaptive confidence metric highlights problematic regions. Segmentations can be propagated to multiple time points, and once a segmentation is available for a time sequence cells can be analyzed to observe trends. The segmentation and analysis tools presented here generalize well to membrane or cell wall volumetric time series datasets.
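The marker-guided watershed at CellECT's core can be illustrated in one dimension: user-placed markers seed labels, and labels flood outward through the intensity profile, with basins growing from their markers. A toy sketch (the real tool operates on 3D image data; this 1D flooding rule is an illustrative simplification):

```python
def marker_watershed_1d(heights, markers):
    """Toy 1D marker-controlled watershed.

    heights: intensity profile; valleys flood first.
    markers: dict index -> label, e.g. user-placed guidance markers.
    Unlabeled positions adopt a neighbor's label, visited in
    ascending height, so each basin grows from its marker.
    """
    labels = dict(markers)
    order = sorted(range(len(heights)), key=lambda i: heights[i])
    changed = True
    while changed:
        changed = False
        for i in order:
            if i in labels:
                continue
            for j in (i - 1, i + 1):
                if 0 <= j < len(heights) and j in labels:
                    labels[i] = labels[j]
                    changed = True
                    break
    return [labels.get(i) for i in range(len(heights))]
```

Adding, removing, or moving a marker and re-running the flood is exactly the style of interaction the tool exposes: the markers determine how many segments exist and where their boundaries fall.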
Topics: Algorithms; Animals; Arabidopsis; Biological Evolution; Cell Nucleus; Computational Biology; Humans; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional; Microscopy; Plant Leaves; Urochordata
PubMed: 26887436
DOI: 10.1186/s12859-016-0927-7
Mechanisms of Development, Mar 2020
The segment-polarity gene engrailed is required for segmentation in the early Drosophila embryo. Loss of Engrailed function results in segmentation defects that vary in severity from pair-rule phenotypes to a lawn phenotype lacking obvious signs of segmentation. During segmentation, Engrailed is expressed in stripes with single-segment periodicity in Drosophila, a pattern conserved in all arthropods examined so far. To define segments, the segmental stripes of Engrailed induce the segmental stripes of wingless at each parasegmental boundary. However, segmentation functions of engrailed orthologs in non-Drosophila arthropods have yet to be reported. Here, we analyzed the functions of the Tribolium ortholog of engrailed (Tc-engrailed) during embryonic segmentation. Larval cuticles in which Tc-engrailed had been knocked down showed segmentation phenotypes including incomplete segment formation and loss of a group of segments. In agreement with these cuticle segmentation defects, segments developed incompletely and irregularly, or did not form at all, in Tribolium germbands in which Tc-engrailed was knocked down. Furthermore, germbands in which Tc-engrailed was knocked down did not properly express the segmental stripes of wingless. Taken together with the conserved expression patterns of Engrailed in arthropod segmentation, our data suggest that Tc-engrailed is required for embryonic segmentation in Tribolium, and that the genetic mechanism of Engrailed inducing wingless expression is conserved at least between Drosophila and Tribolium.
Topics: Animals; Arthropods; Body Patterning; Drosophila; Drosophila Proteins; Gene Expression Regulation, Developmental; Genes, Insect; Phenotype; Tribolium
PubMed: 31778794
DOI: 10.1016/j.mod.2019.103594