Perception 2015
Topics: Depth Perception; Humans; Optic Flow; Optical Illusions; Photic Stimulation; Vision Disparity
PubMed: 26422897
DOI: 10.1068/p4405ed -
Philosophical Transactions of the Royal..., Jun 2016
Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity were all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue 'Vision in our three-dimensional world'.
Topics: Adult; Distance Perception; Female; Humans; Male; Vision Disparity; Young Adult
PubMed: 27269596
DOI: 10.1098/rstb.2015.0253 -
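The distance scaling at the heart of depth constancy follows from standard stereo geometry: for interocular separation I and viewing distance D, a depth interval d produces a relative disparity of roughly I·d/D², so recovering a constant d requires scaling disparity by the square of the estimated distance. A minimal sketch of that small-angle approximation (the function name and example values are illustrative, not from the paper):

```python
def depth_from_disparity(delta, distance, ipd=0.065):
    """Small-angle approximation: depth interval d ~= delta * D**2 / I.

    delta    -- relative disparity in radians
    distance -- viewing distance D in metres
    ipd      -- interocular separation I in metres (typical adult value)
    """
    return delta * distance**2 / ipd

# The same retinal disparity signals four times the depth at twice the
# distance, which is why constancy requires an estimate of viewing distance.
near = depth_from_disparity(0.00065, 1.0)   # ~1 cm at 1 m
far = depth_from_disparity(0.00065, 2.0)    # ~4 cm at 2 m
```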
Philosophical Transactions of the Royal..., Jan 2023 (Review)
Stereoscopic depth perception is possible with luminance-defined target velocities at least as high as 600°/s, up to the limit of 30 Hz imposed by the high temporal-frequency cut-off of the eye. The limitation for perceiving depth from stereo disparity of moving targets is not their velocity but the temporal frequency bandwidth of the eye, which is affected by adaptation state. Stereoacuity for a depth shift in a horizontally moving grating depends not on spatial disparity between corresponding luminance points in spatial units of arc min, but on the spatial shift as a fixed proportion of the period of the grating, in other words, on the phase angle difference between the two eyes, as is also the case for obliquely orientated, stationary gratings. Phase differences explain not only the classic Pulfrich stereophenomenon but also its equivalent with dynamic visual noise, and a new effect in which depth results from interocular phase differences in luminance modulation. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Topics: Vision Disparity; Depth Perception; Visual Acuity; Vision, Ocular; Vision, Binocular; Motion Perception
PubMed: 36511411
DOI: 10.1098/rstb.2021.0462 -
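The phase-based account above has a simple geometric consequence: a fixed interocular phase difference corresponds to a spatial disparity that grows with the grating's period, i.e. shrinks with spatial frequency. A small sketch of that conversion (names and example values are illustrative, not from the paper):

```python
def phase_to_disparity(phase_deg, sf_cpd):
    """Convert an interocular phase difference (degrees) into a spatial
    disparity (arcmin) for a grating of spatial frequency sf_cpd (cycles/deg)."""
    period_arcmin = 60.0 / sf_cpd            # one grating period, in arcmin
    return (phase_deg / 360.0) * period_arcmin

# A fixed phase threshold implies a spatial threshold that scales with the
# period: 36 deg of phase is 6 arcmin at 1 cpd but only 3 arcmin at 2 cpd.
```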
Vision Research, Mar 2021
We describe a new unified model to explain both binocular fusion and depth perception over a broad range of depths. At each location, the model consists of an array of paired spatial-frequency filters with different relative horizontal shifts (position disparity) and interocular phase disparities of 0°, 90°, ±180°, or −90°. The paired filters with different spatial profiles (non-zero phase disparity) compute interocular misalignment and provide phase-disparity energy (binocular fusion energy) to drive selection of the appropriate filters along the position-disparity space until the misalignment is eliminated and sensory fusion is achieved locally. The paired filters with identical spatial profiles (0° phase disparity) compute the position-disparity energy. After sensory fusion, the combination of position and possible residual phase-disparity energies is calculated for binocular depth perception. Binocular fusion occurs at multiple scales following a coarse-to-fine process. At a given location, the apparent depth is the weighted sum of fusion shifts combined with residual phase disparity in all spatial-frequency channels; the weights depend on stimulus spatial frequency and stimulus contrast. To test the theory, we measured disparity minimum and maximum thresholds (Dmin and Dmax) at three spatial frequencies and with different interocular contrast levels. The stimuli were Random-Gabor-Patch (RGP) stereograms consisting of Gabor patches with random positions and phases but a fixed spatial frequency. The two eyes viewed identical arrays of patches except that one eye's array could be shifted horizontally and could differ in contrast. Our experiments and modeling reveal two contrast normalization mechanisms: (1) Energy Normalization (EN): binocular energy is normalized with monocular energy after the site of binocular combination, which predicts constant Dmin thresholds when stimulus contrast in the two eyes is varied; (2) DSKL-model interocular interactions: monocular contrasts are normalized before the binocular combination site through interocular contrast gain-control and gain-enhancement mechanisms, which predicts contrast-dependent Dmax thresholds. We tested a range of models and found that a model consisting of a second-order pathway with DSKL interocular interactions and a first-order pathway with EN at each spatial-frequency band can account for both the Dmin and Dmax data very well. Simulations show that the model makes reasonable predictions of suprathreshold depth perception.
Topics: Contrast Sensitivity; Depth Perception; Humans; Vision Disparity; Vision, Binocular
PubMed: 33359897
DOI: 10.1016/j.visres.2020.11.009 -
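The disparity-energy idea this model builds on can be illustrated with a stripped-down, single-scale computation: quadrature (even/odd) filter pairs are applied to each eye's image, and summed binocular energy peaks when the right-eye filters are displaced by the actual interocular shift. This is only a sketch of the general position-disparity energy computation, not the authors' full model (no phase-disparity channels, fusion dynamics, coarse-to-fine selection, or contrast normalization); all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
left = rng.standard_normal(n)            # 1-D "left eye" luminance profile
true_shift = 4                           # interocular shift, in samples
right = np.roll(left, true_shift)        # right eye: same profile, displaced

# Quadrature (even/odd) Gabor filters: one spatial-frequency channel
x = np.arange(-32, 33)
sigma, f = 8.0, 1.0 / 16.0
gauss = np.exp(-x**2 / (2 * sigma**2))
even, odd = gauss * np.cos(2 * np.pi * f * x), gauss * np.sin(2 * np.pi * f * x)

# Monocular simple-cell responses at every position
Le, Lo = (np.convolve(left, k, mode="valid") for k in (even, odd))
Re, Ro = (np.convolve(right, k, mode="valid") for k in (even, odd))

def binocular_energy(d, margin=16):
    """Summed binocular energy for a right-eye filter displaced by d samples
    (a pure position-disparity unit)."""
    le, lo = Le[margin:-margin], Lo[margin:-margin]
    re = Re[margin + d : len(Re) - margin + d]
    ro = Ro[margin + d : len(Ro) - margin + d]
    return np.sum((le + re)**2 + (lo + ro)**2)

candidates = range(-8, 9, 4)
best = max(candidates, key=binocular_energy)
print(best)   # the energy peak recovers the interocular shift
```

In the full model such energies would feed fusion and normalization stages; here the argmax over candidate displacements simply recovers the shift.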
Journal of Vision, Mar 2022
Stereopsis plays an important role in depth perception; for it to do so reliably, disparity-defined depth should not vary with distance. However, studies of stereoscopic depth constancy often report systematic distortions in depth judgments over distance, particularly for virtual stimuli. Our aim was to understand how depth estimation is impacted by viewing distance and display-based cue conflicts by replicating physical objects with virtual counterparts. To this end, we measured perceived depth using virtual textured half-cylinders and identical three-dimensional (3D) printed versions at two viewing distances under monocular and binocular conditions. Virtual stimuli were viewed using a mirror stereoscope and an Oculus Rift head-mounted display (HMD), while physical stimuli were viewed in a controlled test environment. Depth judgments were similar in both virtual apparatuses, which suggests that variations in the viewing geometry and optics of the HMD have little impact on perceived depth. When viewing physical stimuli binocularly, judgments were accurate and exhibited stereoscopic depth constancy. However, in all cases, depth was underestimated for virtual stimuli and failed to achieve depth constancy. It is clear that depth constancy is only complete for cue-rich physical stimuli and that the failure of constancy in virtual stimuli is due to the presence of the vergence-accommodation conflict. Further, our post hoc analysis revealed that prior experience with virtual and physical environments had a strong effect on depth judgments. That is, performance in virtual environments was enhanced by limited exposure to a related task using physical objects.
Topics: Accommodation, Ocular; Depth Perception; Humans; Judgment; Mathematics; Vision Disparity
PubMed: 35315875
DOI: 10.1167/jov.22.4.9 -
Philosophical Transactions of the Royal..., Jun 2016 (Review)
In addition to depth cues afforded by binocular vision, the brain processes relative motion signals to perceive depth. When an observer translates relative to their visual environment, the relative motion of objects at different distances (motion parallax) provides a powerful cue to three-dimensional scene structure. Although perception of depth based on motion parallax has been studied extensively in humans, relatively little is known regarding the neural basis of this visual capability. We review recent advances in elucidating the neural mechanisms for representing depth-sign (near versus far) from motion parallax. We examine a potential neural substrate in the middle temporal visual area for depth perception based on motion parallax, and we explore the nature of the signals that provide critical inputs for disambiguating depth-sign. This article is part of the themed issue 'Vision in our three-dimensional world'.
Topics: Animals; Depth Perception; Humans; Macaca; Motion Perception; Vision Disparity; Vision, Binocular
PubMed: 27269599
DOI: 10.1098/rstb.2015.0256 -
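A common first-order description of the geometry behind this cue is the motion/pursuit ratio: for an observer fixating a point while translating, the ratio of a feature's retinal velocity to the pursuit (eye rotation) velocity approximates the feature's depth relative to the fixation distance, and the sign of the ratio carries depth-sign (near versus far). A minimal sketch under that approximation (the sign convention and names are our assumptions, not from the review):

```python
def depth_from_parallax(retinal_vel, pursuit_vel, fixation_dist):
    """First-order motion/pursuit approximation: d/f ~= retinal/pursuit.

    retinal_vel   -- retinal image velocity of the feature (deg/s, signed)
    pursuit_vel   -- pursuit eye velocity (deg/s, signed)
    fixation_dist -- fixation distance f (metres)

    Returns the signed depth offset d from the fixation plane; with this
    (assumed) convention, a positive ratio means farther than fixation.
    """
    return (retinal_vel / pursuit_vel) * fixation_dist

offset = depth_from_parallax(0.5, 2.0, 1.0)   # 0.25 m beyond fixation
```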
Investigative Ophthalmology & Visual..., Jan 2022
PURPOSE
We developed a stereo task that is based on a motion direction discrimination to examine the role that depth can play in disambiguating motion direction.
METHODS
In this study, we quantified normal adults' static and dynamic (i.e., laterally moving) stereoscopic performance using a psychophysical task, where we dichoptically presented randomly arranged, limited lifetime Gabor elements at two depth planes (one plane was at the fixation plane and the other at an uncrossed disparity relative to the fixation plane). Each plane contained half of the elements. For the dynamic condition, all elements were vertically oriented and moved to the left in one plane and to the right in another plane; for the static condition, the elements were horizontally oriented in one plane and vertically oriented in another plane.
RESULTS
For the range of motion speed that we measured (from 0.17°/s to 5.33°/s), we observed clear speed tuning of the stereo sensitivity (P = 3.0 × 10⁻⁵). The shape of this tuning did not significantly change with different spatial frequencies. We also found a significant difference in stereo sensitivity between stereopsis with static and laterally moving stimuli (speed = 0.67°/s; P = 0.004). This difference was not evident when we matched the task between the static and moving stimuli.
CONCLUSIONS
We report that lateral motion modulates human global depth perception. This motion/stereo constraint is related to motion velocity, not stimulus temporal frequency. We speculate that the processing of motion-based stereopsis of the kind reported here occurs in dorsal extrastriate cortex.
Topics: Adult; Depth Perception; Female; Humans; Male; Motion Perception; Psychophysics; Reference Values; Vision Disparity; Vision, Binocular; Visual Cortex; Young Adult
PubMed: 35077551
DOI: 10.1167/iovs.63.1.32 -
Philosophical Transactions of the Royal..., Jan 2023 (Review)
The dominant inferential approach to human 3D perception assumes a model of spatial encoding based on a physical description of objects and space. Prevailing models based on this physicalist approach assume that the visual system infers an objective, unitary and mostly veridical representation of the external world. However, careful consideration of the phenomenology of 3D perception challenges these assumptions. I review important aspects of phenomenology, psychophysics and neurophysiology which suggest that human visual perception of 3D objects and space is underwritten by distinct and dissociated spatial encodings that are optimized for specific regions of space. Specifically, I argue that 3D perception is underwritten by at least three distinct encodings for (1) egocentric distance perception at the ambulatory scale, (2) exocentric distance (scaled depth) perception optimized for near space, and (3) perception of object shape and layout (unscaled depth). This tripartite division can more satisfactorily account for the phenomenology, psychophysics and adaptive logic of human 3D perception. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Topics: Humans; Depth Perception; Psychophysics; Distance Perception; Visual Perception; Space Perception
PubMed: 36511412
DOI: 10.1098/rstb.2021.0454 -
Vision Research, Jan 2021
To calibrate stereoscopic depth from disparity, our visual system must compensate for an object's egocentric location. Ideally, the perceived three-dimensional shape and size of objects in visual space should be invariant with their location, such that rigid objects have a consistent identity and shape. These percepts should be accurate enough to support both perceptual judgments and visually guided interaction. This theoretical note reviews the relationship of stereoscopic depth constancy to the geometry of stereoscopic space and seemingly esoteric concepts like the horopter. We argue that to encompass the full scope of stereoscopic depth constancy, researchers need to consider not just distance but also direction, that is, 3D egocentric location in space. Judgments of surface orientation need to take into account the shape of the horopter, and the computation of metric depth (when tasks depend on it) must compensate for direction as well as distance to calibrate disparities. We show that the concept of the horopter underlies these considerations and that the relationship between depth constancy and the horopter should be more explicit in the literature.
Topics: Depth Perception; Humans; Judgment; Mathematics; Vision Disparity
PubMed: 33161145
DOI: 10.1016/j.visres.2020.10.003 -
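The horopter's role can be made concrete with a little 2-D geometry: the relative disparity of a target equals the difference between the angles the interocular axis subtends at the target and at fixation, so every point on the circle through the two eyes and the fixation point (the theoretical Vieth-Müller circle) carries zero disparity, regardless of direction. A sketch (the coordinate frame and names are ours):

```python
import math

def relative_disparity(target, fixation, ipd=0.065):
    """Angular disparity (radians) of target relative to fixation.

    Eyes sit at (+/- ipd/2, 0); points are (x, z) in metres, z straight ahead.
    The azimuth difference between the two eyes' lines of sight to a point
    equals the angle the interocular axis subtends at that point.
    """
    def subtended(p):
        x, z = p
        return math.atan2(x + ipd / 2, z) - math.atan2(x - ipd / 2, z)
    return subtended(target) - subtended(fixation)

fix = (0.0, 1.0)
crossed = relative_disparity((0.0, 0.5), fix)    # nearer on the midline: > 0

# Any point on the Vieth-Muller circle through the eyes and fixation has
# zero disparity, even well off the midline (centre (0, k) by construction):
k = (1 - (0.065 / 2)**2) / 2
vm_point = (0.3, k + math.sqrt((1 - k)**2 - 0.3**2))
```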
Philosophical Transactions of the Royal..., Jun 2016 (Review)
One of the most powerful forms of depth perception capitalizes on the small relative displacements, or binocular disparities, in the images projected onto each eye. The brain employs these disparities to facilitate various computations, including sensori-motor transformations (reaching, grasping), scene segmentation and object recognition. In accordance with these different functions, disparity activates a large number of regions in the brain of both humans and monkeys. Here, we review how disparity processing evolves along different regions of the ventral visual pathway of macaques, emphasizing research based on both correlational and causal techniques. We will discuss the progression in the ventral pathway from a basic absolute disparity representation to a more complex three-dimensional shape code. We will show that, in the course of this evolution, the underlying neuronal activity becomes progressively more bound to the global perceptual experience. We argue that these observations most probably extend beyond disparity processing per se, and pertain to object processing in the ventral pathway in general. We conclude by posing some important unresolved questions whose answers may significantly advance the field, and broaden its scope. This article is part of the themed issue 'Vision in our three-dimensional world'.
Topics: Animals; Depth Perception; Macaca; Vision Disparity; Visual Pathways
PubMed: 27269602
DOI: 10.1098/rstb.2015.0259