BMC Oral Health, Sep 2023
OBJECTIVE
Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require teeth segmentations with high accuracy and resolution. Manual teeth segmentation, the gold standard up until now, is time-consuming, tedious, and observer-dependent. This study aims to develop an automated teeth segmentation and labeling system using deep learning.
MATERIAL AND METHODS
As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-net in combination with a rule-based heuristic algorithm and a combinatorial search algorithm was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set consisting of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions.
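The intersection over union used here as the accuracy measure follows its standard definition for binary masks; a minimal illustrative sketch (not the authors' implementation):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)
```

An IoU of 1.0 means the predicted and ground-truth segmentations coincide exactly; the reported mean of 0.915 indicates a large overlap on average.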
RESULTS
The model achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. The optical inspection showed excellent position agreements between the automatically and manually segmented teeth components. Minor flaws were mostly seen at the edges.
CONCLUSION
The proposed method forms a promising foundation for time-effective and observer-independent teeth segmentation and labeling on intra-oral scans.
CLINICAL SIGNIFICANCE
Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics, implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
Topics: Humans; Deep Learning; Algorithms; Calcium Sulfate; Dental Care; Physical Examination
PubMed: 37670290
DOI: 10.1186/s12903-023-03362-8
Investigative Ophthalmology & Visual Science, Dec 2023
PURPOSE
To investigate the flow pattern in unconventional outflow and its correlation with conventional outflow in mouse eyes.
METHODS
Fluorescent microspheres were injected into the anterior chamber of one eye of anesthetized C57BL/6J mice (n = 4), followed by perfused fixation with 4% paraformaldehyde in situ after 45 minutes. Post-euthanasia, the injected eyes were enucleated, further immersion fixed, and dissected into 12 equal radial segments. Both sides of each segment were imaged using a confocal microscope after nuclear counterstaining. Both unconventional and conventional outflow patterns of each eye were analyzed by ImageJ and ZEN 2.3 imaging software.
RESULTS
Segmental outflow patterns were observed in both the ciliary body (CB) and the supraciliary space and suprachoroidal space (SCS). In the CB, the tracer intensity was the lowest at 12 o'clock and highest at 9 o'clock, whereas in the SCS it was the lowest at 2 o'clock and the highest at 10 o'clock. Consequently, a segmental unconventional outflow was observed, with the lowest and highest flow regions in the superior and temporal quadrants, respectively. The overall segmental uveoscleral outflow showed no correlation with trabecular outflow (P > 0.05). Four different outflow patterns were observed: (1) low-flow regions in both outflows, (2) primarily a high-flow region in conventional outflow, (3) primarily a high-flow region in unconventional outflow, and (4) high-flow regions in both outflows.
CONCLUSIONS
Uveoscleral outflow is segmental and unrelated to the trabecular segmental outflow. These findings will lead to future studies to identify the best location for the placement of drainage devices and drug delivery.
Topics: Mice; Animals; Mice, Inbred C57BL; Ciliary Body; Anterior Chamber; Coloring Agents; Drainage
PubMed: 38117243
DOI: 10.1167/iovs.64.15.26
Science Advances, Jan 2024
Review
Spatiotemporal patterns widely occur in biological, chemical, and physical systems. Particularly, embryonic development displays a diverse gamut of repetitive patterns established in many tissues and organs. Branching treelike structures in lungs, kidneys, livers, pancreases, and mammary glands as well as digits and bones in appendages, teeth, and palates are just a few examples. A fascinating instance of repetitive patterning is the sequential segmentation of the primary body axis, which is conserved in all vertebrates and many arthropods and annelids. In these species, the body axis elongates at the posterior end of the embryo containing an unsegmented tissue. Meanwhile, segments sequentially bud off from the anterior end of the unsegmented tissue, laying down an exquisite repetitive pattern and creating a segmented body plan. In vertebrates, the paraxial mesoderm is sequentially divided into somites. In this review, we will discuss the most prominent models, the most puzzling experimental data, and outstanding questions in vertebrate somite segmentation.
Topics: Animals; Body Patterning; Somites; Mesoderm; Vertebrates; Embryonic Development; Gene Expression Regulation, Developmental
PubMed: 38277458
DOI: 10.1126/sciadv.adk8937
Current Problems in Diagnostic Radiology, 2023
Hepatosplenomegaly is commonly diagnosed by radiologists based on single-dimension measurements and heuristic cut-offs. Volumetric measurements may be more accurate for diagnosing organ enlargement. Artificial intelligence techniques may be able to automatically calculate liver and spleen volume and facilitate more accurate diagnosis. After IRB approval, 2 convolutional neural networks (CNN) were developed to automatically segment the liver and spleen on a training dataset comprising 500 single-phase, contrast-enhanced CT abdomen and pelvis examinations. A separate dataset of ten thousand sequential examinations at a single institution was segmented with these CNNs. Performance was evaluated on a 1% subset and compared with manual segmentations using Sorensen-Dice coefficients and Pearson correlation coefficients. Radiologist reports were reviewed for diagnosis of hepatomegaly and splenomegaly and compared with calculated volumes. Abnormal enlargement was defined as greater than 2 standard deviations above the mean. Median Dice coefficients for liver and spleen segmentation were 0.988 and 0.981, respectively. Pearson correlation coefficients of CNN-derived estimates of organ volume against the gold-standard manual annotation were 0.999 for the liver and spleen (P < 0.001). Average liver volume was 1556.8 ± 498.7 cc and average spleen volume was 194.6 ± 123.0 cc. There were significant differences in average liver and spleen volumes between male and female patients. Thus, the volume thresholds for ground-truth determination of hepatomegaly and splenomegaly were determined separately for each sex. Radiologist classification of hepatomegaly was 65% sensitive, 91% specific, with a positive predictive value (PPV) of 23% and a negative predictive value (NPV) of 98%. Radiologist classification of splenomegaly was 68% sensitive, 97% specific, with a PPV of 50% and an NPV of 99%.
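The Dice coefficient and the diagnostic statistics reported above follow their standard definitions; a minimal sketch under those standard definitions (illustrative, not the study's code):

```python
def dice(pred, truth):
    """Sorensen-Dice coefficient for two collections of voxel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both segmentations empty: perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how prevalent organomegaly is in the cohort, which is why a highly specific reader can still show a low PPV here.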
Convolutional neural networks can accurately segment the liver and spleen and may be helpful to improve radiologist accuracy in the diagnosis of hepatomegaly and splenomegaly.
PubMed: 37277270
DOI: 10.1067/j.cpradiol.2023.05.005
PLoS Computational Biology, Sep 2023
Segmenting visual stimuli into distinct groups of features and visual objects is central to visual function. Classical psychophysical methods have helped uncover many rules of human perceptual segmentation, and recent progress in machine learning has produced successful algorithms. Yet, the computational logic of human segmentation remains unclear, partially because we lack well-controlled paradigms to measure perceptual segmentation maps and compare models quantitatively. Here we propose a new, integrated approach: given an image, we measure multiple pixel-based same-different judgments and perform model-based reconstruction of the underlying segmentation map. The reconstruction is robust to several experimental manipulations and captures the variability of individual participants. We demonstrate the validity of the approach on human segmentation of natural images and composite textures. We show that image uncertainty affects measured human variability, and it influences how participants weigh different visual features. Because any putative segmentation algorithm can be inserted to perform the reconstruction, our paradigm affords quantitative tests of theories of perception as well as new benchmarks for segmentation algorithms.
Topics: Humans; Uncertainty; Algorithms; Vision, Ocular; Machine Learning; Image Processing, Computer-Assisted
PubMed: 37747914
DOI: 10.1371/journal.pcbi.1011483
ArXiv, Oct 2023
Segmenting visual stimuli into distinct groups of features and visual objects is central to visual function. Classical psychophysical methods have helped uncover many rules of human perceptual segmentation, and recent progress in machine learning has produced successful algorithms. Yet, the computational logic of human segmentation remains unclear, partially because we lack well-controlled paradigms to measure perceptual segmentation maps and compare models quantitatively. Here we propose a new, integrated approach: given an image, we measure multiple pixel-based same-different judgments and perform model-based reconstruction of the underlying segmentation map. The reconstruction is robust to several experimental manipulations and captures the variability of individual participants. We demonstrate the validity of the approach on human segmentation of natural images and composite textures. We show that image uncertainty affects measured human variability, and it influences how participants weigh different visual features. Because any putative segmentation algorithm can be inserted to perform the reconstruction, our paradigm affords quantitative tests of theories of perception as well as new benchmarks for segmentation algorithms.
PubMed: 36824425
DOI: No ID Found
European Radiology Experimental, Dec 2023
PURPOSE
To determine if pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods.
METHODS
A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. The performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test.
RESULTS
Our model outperformed no-new-Net for the pelvic/ovarian lesions in cross-validation, on the evaluation and test set by a significant margin (p values being 4 × 10, 3 × 10, 4 × 10, respectively), and for the omental lesions on the evaluation set (p = 1 × 10). Our model did not perform significantly differently in segmenting pelvic/ovarian lesions (p = 0.371) compared to a trainee radiologist. On an independent test set, the model achieved a DSC performance of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian and 61 ± 24 for omental lesions.
CONCLUSION
Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to a trainee-level radiologist for pelvic/ovarian lesions.
RELEVANCE STATEMENT
Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and researchers for building complex analysis pipelines.
KEY POINTS
• The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images has been presented.
• Automated segmentation of ovarian cancer lesions can be comparable with manual segmentation of trainee radiologists.
• Careful hyperparameter tuning can provide models significantly outperforming strong state-of-the-art baselines.
Topics: Humans; Female; Deep Learning; Ovarian Cysts; Ovarian Neoplasms; Neural Networks, Computer; Tomography, X-Ray Computed
PubMed: 38057616
DOI: 10.1186/s41747-023-00388-z
Medical Image Analysis, Oct 2023
Review
Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Due to the laborious and time-consuming nature of manual segmentation of large-scale EM datasets, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM throughout the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, like heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially regarding large-scale models and unlabeled images for learning generic features across EM datasets.
Topics: Humans; Deep Learning; Image Processing, Computer-Assisted; Microscopy, Electron; Algorithms; Imaging, Three-Dimensional
PubMed: 37572414
DOI: 10.1016/j.media.2023.102920