PloS One 2012
Predicting the trajectories of moving objects in our surroundings is important for many life scenarios, such as driving, walking, reaching, hunting and combat. We determined human subjects' performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance. The differential activity during motion prediction vs. direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those that underlie visual motion memory and imagery. Our results set the stage for the examination of the effects of deficiencies in these networks, such as those caused by aging and mental disorders, on visual motion prediction and its consequences on mobility related daily activities.
Topics: Adult; Brain Mapping; Contrast Sensitivity; Female; Humans; Male; Motion Perception; Task Performance and Analysis; Visual Perception
PubMed: 22768145
DOI: 10.1371/journal.pone.0039854
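The prediction task in the record above requires extrapolating observed motion in time to estimate a future object location. The abstract gives no equations, so the following is only a minimal sketch of the simplest such computation, constant-velocity extrapolation during occlusion; positions and velocities in degrees of visual angle are hypothetical.

```python
def extrapolate_position(p0, velocity, dt):
    """Predict an object's 2-D location after dt seconds of occlusion,
    assuming constant velocity. p0 = (x, y), velocity = (vx, vy)."""
    x, y = p0
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

# Object last seen at (2.0, 1.0) deg, moving at (3.0, 0.0) deg/s,
# occluded for 0.5 s -> predicted location (3.5, 1.0)
print(extrapolate_position((2.0, 1.0), (3.0, 0.0), 0.5))
```

A subject's prediction error could then be scored as the distance between this extrapolated point and the judged location.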
Current Biology : CB Jun 2010
Topics: Animals; Behavior, Animal; Humans; Research; Visual Perception
PubMed: 20560155
DOI: 10.1016/j.cub.2010.04.036
eLife Apr 2019
We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated 'Bouma's Law' of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
Topics: Crowding; Discrimination, Psychological; Fixation, Ocular; Humans; Pattern Recognition, Visual; Perceptual Masking; Photic Stimulation; Space Perception; Visual Fields; Visual Perception
PubMed: 31038458
DOI: 10.7554/eLife.42512
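The scaling argument in the record above rests on a simple linear rule: pooling-region size grows in proportion to retinal eccentricity, and Bouma's law for crowding corresponds to a proportionality constant of about one half. The sketch below illustrates that rule; the specific scaling constants are illustrative assumptions, not values taken from the paper.

```python
def pooling_diameter(eccentricity_deg, scaling):
    """Diameter of a pooling region that grows linearly with
    eccentricity: diameter = scaling * eccentricity."""
    return scaling * eccentricity_deg

# Illustrative scaling factors (assumed, not from the paper):
V2_LIKE = 0.5   # roughly Bouma's law, ~half the eccentricity
V1_LIKE = 0.25  # smaller, V1-like receptive fields

for ecc in (5.0, 10.0, 20.0):
    print(ecc, pooling_diameter(ecc, V2_LIKE), pooling_diameter(ecc, V1_LIKE))
```

The paper's claim can be read against this rule: metamers for natural images required scaling at least as small as the V1-like value, not the V2-like one.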
Scientific Reports Feb 2017
When viewing ambiguous stimuli, people tend to perceive some interpretations more frequently than others. Such perceptual biases impose various types of constraints on visual perception, and accordingly, have been assumed to serve distinct adaptive functions. Here we demonstrated the interaction of two functionally distinct biases in bistable biological motion perception, one regulating perception based on the statistics of the environment - the viewing-from-above (VFA) bias, and the other with the potential to reduce costly errors resulting from perceptual inference - the facing-the-viewer (FTV) bias. When compatible, the two biases reinforced each other, enhancing the bias strength and inducing fewer perceptual reversals than when they were in conflict. In the conflicting condition, by contrast, the biases competed with each other, with the dominant percept varying with visual cues that modulated the two biases separately in opposite directions. Crucially, the way the two biases interact does not depend on the dominant bias at the individual level, and cannot be accounted for by a single bias alone. These findings provide compelling evidence that humans robustly integrate biases with different adaptive functions in visual perception. It may be evolutionarily advantageous to dynamically reweight diverse biases in the sensory context to resolve perceptual ambiguity.
Topics: Bias; Computer Simulation; Depth Perception; Female; Humans; Male; Motion Perception; Visual Perception
PubMed: 28165061
DOI: 10.1038/srep42018
Journal of Optometry 2017
OBJECTIVE
Detection and identification of moving targets is of paramount importance in everyday life, even if it is not widely tested in optometric practice, mostly for technical reasons. There are clear indications in the literature that in perception of moving targets, vision and hearing interact, for example in noisy surrounds and in understanding speech. The main aim of visual perception, the ability that optometry aims to optimize, is the identification of objects, from everyday objects to letters, as well as the spatial orientation of subjects in natural surrounds. To subserve this aim, corresponding visual and acoustic features from the rich spectrum of signals supplied by natural environments have to be combined.
METHODS
Here, we investigated the influence of an auditory motion stimulus on visual motion detection, both with a concrete (left/right movement) and an abstract auditory motion (increase/decrease of pitch).
RESULTS
We found that incongruent audiovisual stimuli led to significantly inferior detection compared to the visual only condition. Additionally, detection was significantly better in abstract congruent than incongruent trials. For the concrete stimuli the detection threshold was significantly better in asynchronous audiovisual conditions than in the unimodal visual condition.
CONCLUSION
We find a clear but complex pattern of partly synergistic and partly inhibitory audio-visual interactions. Asynchrony appears to play only a positive role in audiovisual motion detection, while incongruence is mostly disruptive in simultaneous abstract configurations but not in concrete ones. As with speech perception in hearing-impaired patients, patients suffering from visual deficits should be able to benefit from acoustic information.
Topics: Auditory Perception; Female; Healthy Volunteers; Humans; Male; Motion Perception; Visual Perception; Young Adult
PubMed: 28237358
DOI: 10.1016/j.optom.2016.12.003
PloS One 2023
The complex relationship between attention and visual perception can be exemplified and investigated through the attentional blink, which is characterised by impaired attention to the second of two target stimuli when both occur within 200-500 ms. The attentional blink has been well studied in experimental lab settings. However, despite the rise of online methods for behavioural research, their suitability for studying the attentional blink has not yet been fully addressed; the main concern is the lack of control over, and the timing variability of, stimulus presentation. Here, we investigated the suitability of online testing for studying the attentional blink with visual objects. Our results show a clear attentional blink effect between 200 and 400 ms following the distractor, including a Lag 1 sparing effect in line with previous research, despite significant inter-subject and timing variability. This work demonstrates the suitability of online methods for studying the attentional blink with visual objects, opening new avenues to explore its underlying processes.
Topics: Attentional Blink; Photic Stimulation; Visual Perception
PubMed: 37535646
DOI: 10.1371/journal.pone.0289623
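The lag-to-time mapping behind the attentional-blink window in the record above is straightforward: in a rapid serial visual presentation (RSVP) stream with a fixed stimulus-onset asynchrony (SOA), a target at lag n appears n × SOA after the first target. A minimal sketch, assuming a hypothetical 100 ms SOA (the abstract does not state one):

```python
def lag_to_interval_ms(lag, soa_ms=100):
    """Target-to-target interval for a given lag in an RSVP stream
    with a fixed stimulus-onset asynchrony (SOA)."""
    return lag * soa_ms

def in_blink_window(lag, soa_ms=100, window=(200, 500)):
    """True if the second target falls inside the classic
    attentional-blink window; lag 1 is excluded (lag-1 sparing)."""
    t = lag_to_interval_ms(lag, soa_ms)
    return lag > 1 and window[0] <= t <= window[1]

# With a 100 ms SOA, lag 1 is spared and lags 2-5 fall in the blink
print([lag for lag in range(1, 9) if in_blink_window(lag)])  # [2, 3, 4, 5]
```

Under online testing, the per-trial SOA would vary, which is exactly the timing-variability concern the study addresses.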
Nature Neuroscience Sep 2015
Each time a locomoting fly turns, the visual image sweeps over the retina and generates a motion stimulus. Classic behavioral experiments suggested that flies use active neural-circuit mechanisms to suppress the perception of self-generated visual motion during intended turns. Direct electrophysiological evidence, however, has been lacking. We found that visual neurons in Drosophila receive motor-related inputs during rapid flight turns. These inputs arrived with a sign and latency appropriate for suppressing each targeted cell's visual response to the turn. Precise measurements of behavioral and neuronal response latencies supported the idea that motor-related inputs to optic flow-processing cells represent internal predictions of the expected visual drive induced by voluntary turns. Motor-related inputs to small object-selective visual neurons could reflect either proprioceptive feedback from the turn or internally generated signals. Our results in Drosophila echo the suppression of visual perception during rapid eye movements in primates, demonstrating common functional principles of sensorimotor processing across phyla.
Topics: Animals; Drosophila melanogaster; Eye Movements; Female; Motion Perception; Neurons; Photic Stimulation; Saccades; Visual Perception
PubMed: 26237362
DOI: 10.1038/nn.4083
Journal of Vision May 2020
The perception of motion is considered critical for performing everyday tasks, such as locomotion and driving, and relies on different levels of visual processing. However, it is unclear whether healthy aging differentially affects motion processing at specific levels of processing, or whether performance at central and peripheral spatial eccentricities is altered to the same extent. The aim of this study was to explore the effects of aging on hierarchically different components of motion processing: the minimum displacement of dots to perceive motion (Dmin), the minimum contrast and speed to determine the direction of motion, spatial surround suppression of motion, global motion coherence (translational and radial), and biological motion. We measured motion perception in both central vision and at 15° eccentricity, comparing performance in 20 older (60-79 years) and 20 younger (19-34 years) adults. Older adults had significantly elevated thresholds, relative to younger adults, for motion contrast, speed, Dmin, and biological motion. The differences between younger and older participants were of similar magnitude in central and peripheral vision, except for surround suppression of motion, which was weaker in central vision for the older group, but stronger in the periphery. Our findings demonstrate that the effects of aging are not uniform across all motion tasks. Whereas the performance of some tasks in the periphery can be predicted from the results in central vision, the effects of age on surround suppression of motion show markedly different characteristics between central and peripheral vision.
Topics: Adult; Age Factors; Aged; Aging; Automobile Driving; Humans; Middle Aged; Motion; Motion Perception; Vision, Ocular; Visual Fields; Visual Perception; Young Adult
PubMed: 32433734
DOI: 10.1167/jov.20.5.8
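Surround suppression of motion, one component in the record above, is commonly quantified by comparing thresholds for a large stimulus against a small one. The sketch below shows one conventional index (a log threshold ratio); the index form and the example thresholds are assumptions for illustration, not the paper's own measure or data.

```python
import math

def suppression_index(threshold_large, threshold_small):
    """Log10 ratio of thresholds (e.g. minimum duration or contrast)
    for a large vs. a small moving stimulus; values > 0 indicate
    surround suppression (the large stimulus is harder to judge)."""
    return math.log10(threshold_large / threshold_small)

# Hypothetical duration thresholds: a large grating needing 80 ms vs.
# a small one needing 40 ms gives a positive suppression index.
print(round(suppression_index(80.0, 40.0), 3))  # 0.301
```

With an index like this, the paper's central-vs-peripheral contrast amounts to comparing the sign and size of the index across eccentricities and age groups.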
Vision Research Jul 2019
Visual perception is thought to be supported by a stabilization mechanism integrating information over time, resulting in a systematic attractive bias in experimental contexts. Previous studies show that this effect, whereby a current stimulus appears more similar to the one previous to it, depends on attention, suggesting an active high-level mechanism that modulates perception. Here, we test the hypothesis that such a mechanism generalizes across different stimulus formats or sensory modalities, effectively abstracting from the low-level properties of the stimuli. Participants performed a numerosity discrimination task, with task-relevant dot-array stimuli preceded by a sequence of visual (flashes) or auditory (tones) stimuli encompassing different numerosities. Our results show a clear attractive bias induced by visual sequential numerosity affecting an array of simultaneously presented dots, thus operating across different stimulus formats. Conversely, auditory sequences did not affect the judgment on visual numerosities. Overall, our results demonstrate that serial dependence in numerosity perception operates according to the abstract representation of numerical magnitude of visual stimuli irrespective of their format. These results thus support the idea that a high-level mechanism mediates visual stability and continuity, which integrates relevant information over time irrespective of the low-level sensory properties of the stimuli.
Topics: Adult; Attention; Bias; Female; Humans; Judgment; Male; Pattern Recognition, Visual; Photic Stimulation; Visual Perception; Young Adult
PubMed: 31078663
DOI: 10.1016/j.visres.2019.04.011
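The attractive bias in the record above is often modelled as the current estimate being pulled a fixed fraction of the way toward the previous stimulus. This is only a minimal sketch of that idea; the weighting scheme and the 0.2 weight are assumptions, not the paper's fitted model.

```python
def serially_biased_estimate(current, previous, weight=0.2):
    """Attractive serial dependence: the reported magnitude is pulled
    toward the previous stimulus by a fraction `weight` (0..1)."""
    return current + weight * (previous - current)

# A 20-dot array preceded by a 30-item sequence is judged slightly
# larger than 20 under an attractive bias.
print(serially_biased_estimate(20, 30))  # 22.0
```

The paper's format-independence finding corresponds, in this toy model, to `previous` being an abstract numerical magnitude from a visual sequence rather than a matched low-level stimulus; the null result for auditory sequences would correspond to a weight of zero.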
Strabismus 2015
BACKGROUND/AIMS
The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS.
METHODS
Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips.
RESULTS
As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16-62 years) decreased (eg, 110", 280", 340", and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident.
CONCLUSION
If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made.
Topics: Adolescent; Adult; Cues; Depth Perception; Female; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Vision Tests; Vision, Binocular; Visual Acuity; Visual Perception
PubMed: 26669421
DOI: 10.3109/09273972.2015.1107600