PLoS One 2020
Dynamic environments often contain features that change at slightly different times. Here we investigated how sensitivity to these slight timing differences depends on spatial relationships among stimuli. Stimuli comprised bilaterally presented plaid pairs that rotated, or radially expanded and contracted to simulate depth movement. Left and right hemifield stimuli initially moved in the same or opposite directions, then reversed directions at various asynchronies. College students judged whether the direction reversed first on the left or right, a temporal order judgment (TOJ). TOJ thresholds remained similar across conditions that required tracking only one depth plane, or bilaterally synchronized depth planes. However, when stimuli required simultaneously tracking multiple depth planes (counter-phased across hemifields), TOJ thresholds doubled or tripled. This effect depended on perceptual set. Increasing the certainty with which participants simultaneously tracked multiple depth planes reduced TOJ thresholds by 45 percent. Even complete certainty, though, failed to reduce multiple-depth-plane TOJ thresholds to levels obtained with single or bilaterally synchronized depth planes. Overall, the results demonstrate that global depth perception can alter local timing sensitivity. More broadly, the findings reflect a coarse-to-fine spatial influence on how we sense time.
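TOJ thresholds like these are typically estimated by fitting a psychometric function to the proportion of "right first" responses across asynchronies. A minimal sketch of that kind of fit follows; the data, the cumulative-Gaussian model, and the starting values are illustrative, not the study's actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, mu, sigma):
    """Cumulative Gaussian: P('right first') as a function of SOA (ms).
    Positive SOA means the right stimulus reversed first."""
    return norm.cdf(soa, loc=mu, scale=sigma)

# Hypothetical TOJ data: stimulus-onset asynchronies (ms) and the
# proportion of 'right first' responses observed at each asynchrony.
soas = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
p_right = np.array([0.05, 0.12, 0.30, 0.52, 0.71, 0.90, 0.97])

# Bound sigma away from zero so the scale parameter stays positive.
(mu, sigma), _ = curve_fit(psychometric, soas, p_right,
                           p0=[0.0, 50.0],
                           bounds=([-200.0, 1.0], [200.0, 200.0]))

# Fitted sigma is a common TOJ threshold estimate (the asynchrony needed
# to move response probability from 50% to ~84%); mu is the point of
# subjective simultaneity (PSS).
print(f"PSS = {mu:.1f} ms, threshold (sigma) = {sigma:.1f} ms")
```

Doubling or tripling of the threshold, as reported above, would appear as a proportional increase in the fitted sigma.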
Topics: Depth Perception; Humans; Models, Theoretical; Motion; Perceptual Masking; Photic Stimulation; Psychometrics; Reproducibility of Results; Sensory Thresholds; Time Factors; Time Perception
PubMed: 31971977
DOI: 10.1371/journal.pone.0228080
Neuron May 2021
Predators use vision to hunt, and hunting success is one of evolution's main selection pressures. However, how viewing strategies and visual systems are adapted to predation is unclear. Tracking predator-prey interactions of mice and crickets in 3D, we find that mice trace crickets with their binocular visual fields and that monocular mice are poor hunters. Mammalian binocular vision requires ipsi- and contralateral projections of retinal ganglion cells (RGCs) to the brain. Large-scale single-cell recordings and morphological reconstructions reveal that only a small subset (9 of 40+) of RGC types in the ventrotemporal mouse retina innervate ipsilateral brain areas (ipsi-RGCs). Selective ablation of ipsi-RGCs (<2% of RGCs) in the adult retina drastically reduces the hunting success of mice. Stimuli based on ethological observations indicate that five ipsi-RGC types reliably signal prey. Thus, viewing strategies align with a spatially restricted and cell-type-specific set of ipsi-RGCs that supports binocular vision to guide predation.
Topics: Animals; Depth Perception; Functional Laterality; Mice; Predatory Behavior; Retinal Ganglion Cells; Vision, Binocular; Visual Pathways
PubMed: 33784498
DOI: 10.1016/j.neuron.2021.03.010
Advanced Materials (Deerfield Beach,... May 2022
The biological visual system encodes optical information into spikes and processes them through neural networks, enabling high-throughput visual processing on an ultralow energy budget. This has inspired a wide spectrum of devices that imitate this neural process, although precisely mimicking the procedure remains an open challenge. Here, a highly bio-realistic photoelectric spiking neuron for visual depth perception is presented. The spikes generated by the TaO memristive spiking encoders span a biologically similar frequency range of 1-200 Hz at sub-microwatt power. The spiking encoder is integrated with a photodetector and a network of neuromorphic transistors, for information collection and recognition tasks, respectively. The distance-dependent response and eye fatigue of biological visual systems are mimicked with this photoelectric spiking neuron. The simulated depth perception shows improved recognition by adapting to sights at different distances. These results can advance bioinspired and robotic systems, endowing them with depth perception and power efficiency at the same time.
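The rate coding described here, where spike frequency tracks stimulus intensity across the 1-200 Hz range, can be sketched in a few lines. The encoding function and all parameters below are illustrative stand-ins, not a model of the TaO memristive device:

```python
import numpy as np

def encode_intensity(intensity, f_min=1.0, f_max=200.0,
                     duration=1.0, dt=1e-3, rng=None):
    """Map a normalized light intensity (0..1) to a stochastic spike
    train whose mean rate spans the 1-200 Hz range quoted above."""
    rng = rng or np.random.default_rng(0)
    rate = f_min + intensity * (f_max - f_min)   # target mean rate, Hz
    n_steps = int(duration / dt)
    # Bernoulli approximation to a Poisson process: one draw per time bin.
    spikes = rng.random(n_steps) < rate * dt
    return spikes, rate

spikes_dim, rate_dim = encode_intensity(0.1)      # dim stimulus
spikes_bright, rate_bright = encode_intensity(0.9)  # bright stimulus
print(rate_dim, rate_bright)  # 20.9 Hz vs 180.1 Hz target rates
```

A brighter input thus produces a denser spike train, which downstream neuromorphic circuitry can integrate for recognition.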
Topics: Depth Perception; Neural Networks, Computer; Neurons; Visual Perception
PubMed: 35305270
DOI: 10.1002/adma.202201895
Journal of Neuroengineering and... Aug 2021
Observational Study
BACKGROUND
Augmented Reality (AR)-based interventions are applied in neurorehabilitation with increasing frequency. Depth perception is required for the intended interaction within AR environments. However, it remains unclear whether patients after stroke with impaired visuospatial perception (VSP) are able to perceive depth in the AR environment.
METHODS
Different aspects of VSP (stereovision and spatial localization/visuoconstruction) were assessed in 20 patients after stroke (mean age: 64 ± 14 years) and 20 healthy subjects (HS, mean age: 28 ± 8 years) using clinical tests. The group of HS was recruited to assess the validity of the developed AR tasks in testing stereovision. To measure perception of holographic objects, three distance judgment tasks and one three-dimensionality task were designed. The effect of impaired stereovision on performance in each AR task was analyzed. AR task performance was modeled by aspects of VSP using separate regression analyses for HS and for patients.
RESULTS
In HS, stereovision had a significant effect on the performance in all AR distance judgment tasks (p = 0.021, p = 0.002, p = 0.046) and in the three-dimensionality task (p = 0.003). Individual quality of stereovision significantly predicted the accuracy in each distance judgment task and was highly related to the ability to perceive holograms as three-dimensional (p = 0.001). In stroke survivors, impaired stereovision degraded performance on only one distance judgment task (p = 0.042), whereas the three-dimensionality task was unaffected (p = 0.317). Regression analyses confirmed that patients' quality of stereovision had no impact on AR task performance, while spatial localization/visuoconstruction significantly predicted the accuracy of distance estimation for geometric objects in two AR tasks.
CONCLUSION
Impairments in VSP reduce the ability to estimate distance and to perceive three-dimensionality in an AR environment. While stereovision is key for task performance in HS, spatial localization/visuoconstruction is predominant in patients. Since impairments in VSP are present after stroke, these findings might be crucial when AR is applied for neurorehabilitative treatment. In order to maximize the therapy outcome, the design of AR games should be adapted to patients' impaired VSP. Trial registration: The trial was not registered, as it was an observational study.
Topics: Adult; Aged; Augmented Reality; Depth Perception; Humans; Judgment; Middle Aged; Stroke; Task Performance and Analysis; Young Adult
PubMed: 34419086
DOI: 10.1186/s12984-021-00920-5
Investigative Ophthalmology & Visual... Sep 2021
PURPOSE
Our visual system compares the inputs received from the two eyes to estimate the relative depths of features in the retinal image. We investigated how an imbalance in the strength of the input received from the two eyes affects stereopsis. We also explored the level of agreement between different measurements of sensory eye imbalance.
METHODS
We measured the sensory eye imbalance and stereoacuity of 30 normally sighted participants. We made our measurements using a modified amblyoscope. The sensory eye imbalance was assessed through three methods: the difference between monocular contrast thresholds, the difference in dichoptic masking weight, and the contribution of each eye to a fused binocular percept. We refer to these as the "threshold imbalance," "masking imbalance," and "fusion imbalance," respectively. The stereoacuity threshold was measured by having subjects discriminate which of four circles was displaced in depth. All of our tests were performed using stimuli of the same spatial frequency (2.5 cycles/degree).
RESULTS
We found a relationship between stereoacuity and sensory eye imbalance. However, this was only the case for fusion imbalance measurement (ρ = 0.52; P = 0.003). Neither the threshold imbalance nor the masking imbalance was significantly correlated with stereoacuity. We also found the threshold imbalance was correlated with both the fusion and masking imbalances (r = 0.46, P = 0.011 and r = 0.49, P = 0.005, respectively). However, a nonsignificant correlation was found between the fusion and masking imbalances.
CONCLUSIONS
Our findings suggest that there exist multiple types of sensory eye dominance that can be assessed by different tasks. We find that only imbalances in dominance that bias the fused percept are correlated with stereoacuity.
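The correlation pattern reported above (Spearman's ρ for the ranked stereoacuity relationship, Pearson's r between imbalance measures) follows a standard analysis recipe. A minimal sketch, using synthetic stand-in data rather than the study's measurements:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(42)

# Synthetic stand-ins (n = 30, matching the study's sample size): a
# 'fusion imbalance' score and a stereoacuity threshold that worsens
# (increases) with the imbalance.
n = 30
fusion_imbalance = rng.normal(0.0, 1.0, n)
stereoacuity = 20 + 15 * fusion_imbalance + rng.normal(0, 10, n)

# Rank-based correlation, robust to monotonic nonlinearity.
rho, p_rho = spearmanr(fusion_imbalance, stereoacuity)
# Linear correlation, as used between the imbalance measures.
r, p_r = pearsonr(fusion_imbalance, stereoacuity)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f}); Pearson r = {r:.2f}")
```

With real data, a significant ρ here would mirror the fusion-imbalance result (ρ = 0.52, P = 0.003), while a nonsignificant value would mirror the threshold and masking imbalances.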
Topics: Adult; Aged; Amblyopia; Contrast Sensitivity; Depth Perception; Dominance, Ocular; Female; Humans; Male; Middle Aged; Perceptual Masking; Sensory Thresholds; Vision, Binocular; Visual Acuity; Young Adult
PubMed: 34515732
DOI: 10.1167/iovs.62.12.10
Optics Express Mar 2020
Foveation and (de)focus are two important visual factors in designing near-eye displays. Foveation can reduce computational load by lowering display detail towards the visual periphery, while focal cues can reduce vergence-accommodation conflict, thereby lessening visual discomfort in using near-eye displays. We performed two psychophysical experiments to investigate the relationship between foveation and focus cues. The first study measured blur discrimination sensitivity as a function of visual eccentricity, where we found discrimination thresholds significantly lower than previously reported. The second study measured depth discrimination thresholds, where we found a clear dependency on visual eccentricity. We discuss the study results and suggest directions for further investigation.
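Discrimination thresholds of this kind are commonly measured with adaptive staircases. Below is a minimal simulation of a 2-down/1-up staircase run against a hypothetical 2AFC observer (all parameters illustrative, not the study's procedure); this rule converges to the stimulus level yielding roughly 70.7% correct:

```python
import numpy as np

def staircase_threshold(true_threshold, start=1.0, step=0.05,
                        n_trials=300, rng=None):
    """Simulate a 2-down/1-up staircase on a simulated 2AFC observer.

    The observer's probability of a correct response rises logistically
    around `true_threshold`; the staircase converges near the level
    giving ~70.7% correct responses.
    """
    rng = rng or np.random.default_rng(1)
    level, streak, last_dir = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        # 2AFC observer: 50% guessing floor, logistic rise near threshold.
        p_correct = 0.5 + 0.5 / (1 + np.exp(-(level - true_threshold) / 0.05))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:          # two correct in a row -> step down
                streak = 0
                if last_dir == +1:   # direction change = a reversal
                    reversals.append(level)
                last_dir = -1
                level = max(level - step, 0.0)
        else:                        # one error -> step up
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    # Threshold estimate: average the last few reversal levels.
    return float(np.mean(reversals[-8:]))

est = staircase_threshold(0.5)
print(f"estimated threshold ~ {est:.3f} (true value 0.5)")
```

Running the same staircase at several eccentricities is one way to trace out the eccentricity dependence reported above.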
Topics: Adult; Depth Perception; Humans; Middle Aged; Photic Stimulation; Sensory Thresholds; Visual Perception; Young Adult
PubMed: 32225914
DOI: 10.1364/OE.28.006734
European Archives of... Jul 2021
Randomized Controlled Trial
PURPOSE
The current standard endoscopic technique provides high-resolution visualisation up to Full HD and even 4K. A recent development is 3D endoscopes providing a three-dimensional picture, which supposedly gives additional information about depth, anatomical details, and orientation in the surgical field. Since the 3D-endoscopic technique is new, there is little scientific evidence on whether it provides advantages for the surgeon over the 2D-endoscopic standard technique in functional endoscopic sinus surgery (FESS). This study compares the standard 2D-endoscopic surgical technique with the new commercially available 3D-endoscopic technique.
METHODS
The prospective randomized interventional multicenter study included a total of 80 referred patients with chronic rhinosinusitis, with and without polyps, without prior surgery. A bilateral FESS procedure was performed: one side with the 2D-endoscopic technique, the other side with the 3D-endoscopic technique. The duration of surgery was measured. Additionally, a questionnaire containing 20 items was completed by 4 different surgeons, rating subjective impressions of visualisation and handling.
RESULTS
2D imaging was superior to 3D apart from "recognition of details", "depth perception" and "3D effect". For usability properties, 2D was superior to 3D apart from "weight of endoscopes". Mean duration of surgery was 26.1 min for 2D and 27.4 min for 3D, a non-significant difference (P = 0.219).
CONCLUSION
Three-dimensional endoscopy features improved depth perception and recognition of anatomic details but worse overall picture quality. It is useful for teaching purposes, yet 2D techniques provide a better outcome in terms of feasibility for routine endoscopic approaches.
Topics: Depth Perception; Endoscopes; Endoscopy; Humans; Imaging, Three-Dimensional; Prospective Studies
PubMed: 33373011
DOI: 10.1007/s00405-020-06495-6
Archivos de La Sociedad Espanola de... Oct 2021
Topics: Depth Perception; Lens, Crystalline
PubMed: 34620479
DOI: 10.1016/j.oftale.2021.07.002
Progress in Retinal and Eye Research May 2022
Review
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Topics: Depth Perception; Environment; Humans; Vision, Binocular
PubMed: 34624515
DOI: 10.1016/j.preteyeres.2021.101014
IEEE Transactions on Visualization and... Nov 2023
This paper investigates the accuracy of Augmented Reality (AR) technologies, particularly commercially available optical see-through displays, in depicting virtual content inside the human body for surgical planning. Their inherent limitations result in inaccuracies in perceived object positioning. We examine how occlusion, specifically with opaque surfaces, affects perceived depth of virtual objects at arm's-length working distances. A custom apparatus with a half-silvered mirror was developed, providing accurate depth cues excluding occlusion, differing from commercial displays. We carried out a study, contrasting our apparatus with a HoloLens 2, involving a depth estimation task under varied surface complexities and illuminations. In addition, we explored the effects of creating a virtual "hole" in the surface. Subjects' depth estimation accuracy and confidence were assessed. Results showed more depth estimation variation with the HoloLens 2 and significant depth error beneath complex occluding surfaces. However, creating a virtual hole significantly reduced depth errors and increased subjects' confidence, irrespective of accuracy enhancement. These findings have important implications for the design and use of mixed-reality technologies in surgical applications, and industrial applications such as using virtual content to guide maintenance or repair of components hidden beneath the opaque outer surface of equipment. A free copy of this paper and all supplemental materials are available at https://bit.ly/3YbkwjU.
Topics: Humans; Arm; Computer Graphics; Augmented Reality; User-Computer Interface; Depth Perception
PubMed: 37782607
DOI: 10.1109/TVCG.2023.3320239