The Journal of Neuroscience, Jun 2023
Does our perception of an object change once we discover what function it serves? We showed human participants (N = 48, 31 females and 17 males) pictures of unfamiliar objects either together with keywords matching their function, leading to semantically informed perception, or together with nonmatching keywords, resulting in uninformed perception. We measured event-related potentials to investigate at which stages in the visual processing hierarchy these two types of object perception differed from one another. We found that semantically informed compared with uninformed perception was associated with larger amplitudes in the N170 component (150-200 ms), reduced amplitudes in the N400 component (400-700 ms), and a late decrease in alpha/beta band power. When the same objects were presented once more without any information, the N400 and event-related power effects persisted, and we also observed enlarged amplitudes in the P1 component (100-150 ms) in response to objects for which semantically informed perception had taken place. Consistent with previous work, this suggests that obtaining semantic information about previously unfamiliar objects alters aspects of their lower-level visual perception (P1 component), higher-level visual perception (N170 component), and semantic processing (N400 component, event-related power). Our study is the first to show that such effects occur instantly after semantic information has been provided for the first time, without requiring extensive learning.

There has been a long-standing debate about whether or not higher-level cognitive capacities, such as semantic knowledge, can influence lower-level perceptual processing in a top-down fashion. Here we show, for the first time, that information about the function of previously unfamiliar objects immediately influences cortical processing within less than 200 ms. Of note, this influence does not require training or experience with the objects and related semantic information.
Therefore, our study is the first to show effects of cognition on perception while ruling out the possibility that prior knowledge merely acts by preactivating or altering stored visual representations. Instead, this knowledge seems to alter perception online, thus providing a compelling case against the impenetrability of perception by cognition.
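The component effects described above (P1, N170, N400) are conventionally quantified as the mean voltage of the trial-averaged ERP within a fixed time window. The sketch below illustrates that windowed-amplitude measure on simulated data; the sampling rate, trial count, and injected N170-like deflection are all hypothetical, not taken from the study.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_start, t_end):
    """Mean ERP voltage (µV) in [t_start, t_end) seconds.

    epochs: (n_trials, n_samples) array of single-trial voltages
    times:  (n_samples,) array of sample times in seconds
    """
    erp = epochs.mean(axis=0)                       # trial average -> ERP waveform
    mask = (times >= t_start) & (times < t_end)     # select the component window
    return erp[mask].mean()

# Hypothetical demo: 200 trials, 1 s epochs at 500 Hz, with a negative
# deflection injected into the 150-200 ms (N170) window.
rng = np.random.default_rng(0)
fs = 500
times = np.arange(fs) / fs
epochs = rng.normal(0.0, 1.0, size=(200, fs))
epochs[:, (times >= 0.15) & (times < 0.20)] -= 5.0  # simulated N170-like dip

n170 = mean_window_amplitude(epochs, times, 0.15, 0.20)   # strongly negative
baseline = mean_window_amplitude(epochs, times, 0.40, 0.70)  # near zero
```

A condition effect such as "larger N170 amplitude for informed perception" would then be a comparison of this windowed mean across the two keyword conditions.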
Topics: Humans; Male; Female; Evoked Potentials; Semantics; Electroencephalography; Visual Perception; Learning
PubMed: 37286353
DOI: 10.1523/JNEUROSCI.2038-22.2023
Perception, Mar 2024
Aristotle believed that objects fell at a constant velocity. However, Galileo Galilei showed that when an object falls, gravity causes it to accelerate. Regardless, Aristotle's claim raises the possibility that people's visual perception of falling motion might be biased away from acceleration towards constant velocity. We tested this idea by requiring participants to judge whether a ball moving in a simulated naturalistic setting appeared to accelerate or decelerate as a function of its motion direction and the amount of acceleration/deceleration. We found that the point of subjective constant velocity (PSCV) differed between up and down but not between left and right motion directions. The PSCV difference between up and down indicated that more acceleration was needed for a downward-falling object to appear at constant velocity than for an upward "falling" object. We found no significant differences in sensitivity to acceleration for the different motion directions. Generalized linear mixed modeling determined that participants relied predominantly on acceleration when making these judgments. Our results support the idea that Aristotle's belief may in part be due to a bias that reduces the perceived magnitude of acceleration for falling objects, a bias not revealed in previous studies of the perception of visual motion.
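A point of subjective constant velocity (PSCV) is typically estimated by fitting a psychometric function to the proportion of "accelerating" responses across acceleration levels and reading off the 50% point. The sketch below does this with a cumulative Gaussian; the acceleration levels and response proportions are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pscv(accels, p_accel):
    """Fit a cumulative-Gaussian psychometric function.

    Returns (pscv, spread): the acceleration at which 'accelerating'
    responses reach 50% (the PSCV) and the function's spread parameter.
    """
    f = lambda x, mu, sigma: norm.cdf(x, mu, sigma)
    (mu, sigma), _ = curve_fit(f, accels, p_accel, p0=[0.0, 1.0])
    return mu, sigma

# Hypothetical data: accelerations in m/s^2 (negative = deceleration)
# and the proportion of "accelerating" responses for one motion direction.
accels  = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
p_accel = np.array([0.02, 0.10, 0.30, 0.60, 0.90, 0.99])

pscv, spread = fit_pscv(accels, p_accel)  # pscv > 0: extra acceleration
                                          # needed to look constant
```

A PSCV above zero, as in this toy fit, corresponds to the paper's finding for downward motion: physical acceleration is required before the fall looks like constant velocity.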
Topics: Humans; Motion Perception; Acceleration; Visual Perception; Gravitation
PubMed: 38304970
DOI: 10.1177/03010066241228681
Journal of Vision, Sep 2021
Evidence of perceptual changes that accompany motor activity has been limited primarily to audition and somatosensation. Here we asked whether motor learning results in changes to visual motion perception. We designed a reaching task in which participants were trained to make movements along several directions, while the visual feedback was provided by an intrinsically ambiguous moving stimulus directly tied to hand motion. We find that training improves coherent motion perception and that changes in movement are correlated with perceptual changes. No perceptual changes are observed in passive training even when observers were provided with an explicit strategy to facilitate single motion perception. A Bayesian model suggests that movement training promotes the fine-tuning of the internal representation of stimulus geometry. These results emphasize the role of sensorimotor interaction in determining the persistent properties in space and time that define a percept.
Topics: Bayes Theorem; Hand; Humans; Motion; Motion Perception; Visual Perception
PubMed: 34529006
DOI: 10.1167/jov.21.10.13
Proceedings of the National Academy of..., Jul 2021
Recurrent loops in the visual cortex play a critical role in visual perception, which is likely not mediated by purely feed-forward pathways. However, the development of recurrent loops is poorly understood. The role of recurrent processing has been studied using visual backward masking, a perceptual phenomenon in which a visual stimulus is rendered invisible by a following mask, possibly because of the disruption of recurrent processing. Anatomical studies have reported that recurrent pathways are immature in early infancy. This raises the possibility that younger infants process visual information mainly in a feed-forward manner, and thus, they might be able to perceive visual stimuli that adults cannot see because of backward masking. Here, we show that infants under 7 mo of age are immune to visual backward masking and that masked stimuli remain visible to younger infants while older infants cannot perceive them. These results suggest that recurrent processing is immature in infants under 7 mo and that they are able to perceive objects even without recurrent processing. Our findings indicate that the algorithm for visual perception drastically changes in the second half of the first year of life.
Topics: Facial Recognition; Female; Form Perception; Humans; Infant; Male; Perceptual Masking; Photic Stimulation; Reproducibility of Results; Visual Perception
PubMed: 34162737
DOI: 10.1073/pnas.2103040118
Journal of Experimental Child Psychology, Jul 2024
Perceiving motion in depth is important in everyday life, especially motion in relation to the body. Visual and auditory cues inform us about motion in space when presented in isolation from each other, but the most comprehensive information is obtained through the combination of both of these cues. We traced the development of infants' ability to discriminate between visual motion trajectories across peripersonal space and to match these with auditory cues specifying the same peripersonal motion. We measured 5-month-old (n = 20) and 9-month-old (n = 20) infants' visual preferences for visual motion toward or away from their body (presented simultaneously and side by side) across three conditions: (a) visual displays presented alone, (b) paired with a sound increasing in intensity, and (c) paired with a sound decreasing in intensity. Both groups preferred approaching motion in the visual-only condition. When the visual displays were paired with a sound increasing in intensity, neither group showed a visual preference. When a sound decreasing in intensity was played instead, the 5-month-olds preferred the receding (spatiotemporally congruent) visual stimulus, whereas the 9-month-olds preferred the approaching (spatiotemporally incongruent) visual stimulus. We speculate that in the approaching sound condition, the behavioral salience of the sound could have led infants to focus on the auditory information alone, in order to prepare a motor response, and to neglect the visual stimuli. In the receding sound condition, instead, the difference in response patterns in the two groups may have been driven by infants' emerging motor abilities and their developing predictive processing mechanisms supporting and influencing each other.
Topics: Humans; Infant; Female; Male; Motion Perception; Auditory Perception; Cues; Child Development; Visual Perception; Depth Perception; Acoustic Stimulation
PubMed: 38615600
DOI: 10.1016/j.jecp.2024.105921
Journal of Vision, Sep 2022
Perceptual history influences current perception, readily revealed by visual priming (the facilitation of responses on repeated presentations of similar stimuli) and by serial dependence (systematic biases toward the previous stimuli). We asked whether the two phenomena shared perceptual mechanisms. We modified the standard "priming of pop-out" paradigm to measure both priming and serial dependence concurrently. The stimulus comprised three grating patches, one or two red and the other(s) green. Participants identified the color singleton (either red or green), and reproduced its orientation. Trial sequences were designed to maximize serial dependence, and long runs of priming color and position. The results showed strong effects of priming, both on reaction times and accuracy, which accumulated steadily over time, as generally reported in the literature. The serial dependence effects were also strong, but did not depend on previous color, nor on the run length. Reaction times measured under various conditions of repetition or change of priming color or position were reliably correlated with imprecision in orientation reproduction, but reliably uncorrelated with magnitude of serial dependence. The results suggest that visual priming and serial dependence are mediated by different neural mechanisms. We propose that priming affects sensitivity, possibly via attention-like mechanisms, whereas serial dependence affects criteria, two orthogonal dimensions in signal detection theory.
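In equal-variance signal detection theory, sensitivity (d') and criterion (c) are computed from hit and false-alarm rates and can vary independently, which is the sense in which the abstract calls them orthogonal. The sketch below shows both measures on invented rates: the second pair is chosen so that d' stays roughly constant while c shifts, mirroring a criterion effect without a sensitivity effect.

```python
from scipy.stats import norm

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: returns (d', c).

    d' = z(hit) - z(fa)          -> sensitivity
    c  = -(z(hit) + z(fa)) / 2   -> response criterion
    """
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -0.5 * (zh + zf)

# Hypothetical illustration: a manipulation that shifts the criterion
# (like serial dependence) while leaving sensitivity nearly unchanged.
d_base, c_base = dprime_criterion(0.84, 0.16)  # symmetric rates -> c ~ 0
d_bias, c_bias = dprime_criterion(0.93, 0.31)  # similar d', liberal c
```

A priming-like effect would instead move d' while leaving c in place, which is why the two measures can dissociate empirically.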
Topics: Attention; Bias; Color Perception; Humans; Pattern Recognition, Visual; Reaction Time; Visual Perception
PubMed: 36053134
DOI: 10.1167/jov.22.10.1
Developmental Cognitive Neuroscience, Dec 2023
Review
Rhythmic visual stimulation (RVS), the periodic presentation of visual stimuli to elicit a rhythmic brain response, is increasingly applied to reveal insights into early neurocognitive development. Our systematic review identified 69 studies applying RVS in 0- to 6-year-olds. RVS has long been used to study the development of the visual system and applications have more recently been expanded to uncover higher cognitive functions in the developing brain, including overt and covert attention, face and object perception, numerical cognition, and predictive processing. These insights are owed to the unique benefits of RVS, such as the targeted frequency and stimulus-specific neural responses, as well as a remarkable signal-to-noise ratio. Yet, neural mechanisms underlying the RVS response are still poorly understood. We discuss critical challenges and avenues for future research, and the unique potential the method holds. With this review, we provide a resource for researchers interested in the breadth of developmental RVS research and hope to inspire the future use of this cutting-edge method in developmental cognitive neuroscience.
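The high signal-to-noise ratio the review highlights comes from frequency tagging: the brain response concentrates in a single spectral bin at the stimulation frequency, so SNR can be computed as the amplitude there divided by the amplitude of neighboring bins. The sketch below demonstrates this on simulated data; the sampling rate, stimulation frequency, and neighbor counts are illustrative choices, not values from any reviewed study.

```python
import numpy as np

def snr_at_frequency(signal, fs, f_stim, n_neighbors=10, skip=1):
    """Amplitude SNR at f_stim: spectral amplitude at the stimulation
    frequency divided by the mean amplitude of neighboring bins,
    skipping `skip` bins on each side of the target to avoid leakage."""
    amp = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    i = np.argmin(np.abs(freqs - f_stim))           # target bin
    lo = amp[max(i - skip - n_neighbors, 0): i - skip]
    hi = amp[i + skip + 1: i + skip + 1 + n_neighbors]
    noise = np.concatenate([lo, hi]).mean()
    return amp[i] / noise

# Hypothetical demo: 10 s of "EEG" at 250 Hz with a 6 Hz response
# buried in noise, as in a 6 Hz rhythmic stimulation design.
rng = np.random.default_rng(1)
fs, dur, f_stim = 250, 10, 6.0
t = np.arange(fs * dur) / fs
eeg = 0.8 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1, t.size)

snr = snr_at_frequency(eeg, fs, f_stim)  # well above 1 despite heavy noise
```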
Topics: Humans; Child; Electroencephalography; Photic Stimulation; Evoked Potentials, Visual; Brain; Attention; Visual Perception
PubMed: 37948945
DOI: 10.1016/j.dcn.2023.101315
Consciousness and Cognition, May 2022
Review
Human visual perception is efficient, flexible and context-sensitive. The Bayesian brain view explains this with probabilistic perceptual inference integrating prior experience and knowledge through top-down influences. Advances in machine learning, such as Artificial Neural Networks (ANNs), have enabled considerable progress in computer vision. Unlike humans, these networks do not yet adaptively draw on meaningful and task-relevant contextual cues and prior knowledge. We propose ideas to better align human and computer vision, applied to facial expression recognition. We review evidence of knowledge-augmented and context-sensitive face perception in humans and approaches trying to leverage such sources of information in computer vision. We discuss how both fields can establish an epistemic loop: Redesigning synthetic systems with inspiration from the Bayesian-brain framework could make networks more flexible and useful for human-machine interaction. In turn, employing ANNs as scientific tools will widen the scope of empirical research into human knowledge-augmented perception.
Topics: Artificial Intelligence; Bayes Theorem; Brain; Facial Recognition; Humans; Visual Perception
PubMed: 35427846
DOI: 10.1016/j.concog.2022.103301
Journal of Vision, Sep 2021
The question of what peripheral vision is good for, especially in pattern recognition, is one of the most important and controversial issues in cognitive science. In a series of experiments, we provide substantial evidence that observers' behavioral performance in the periphery is consistently superior to central vision for topological change detection, while nontopological change detection deteriorates with increasing eccentricity. These experiments generalize the topological account of object perception in the periphery to different kinds of topological changes (i.e., including introduction, disappearance, and change in number of holes) in comparison with a broad spectrum of geometric properties (e.g., luminance, similarity, spatial frequency, perimeter, and shape of the contour). Moreover, when the stimuli were scaled according to cortical magnification factor and the task difficulty was well controlled by adjusting luminance of the background, the advantage of topological change detection in the periphery remained. The observed advantage of topological change detection in the periphery supports the view that the topological definition of objects provides a coherent account for object perception in peripheral vision, allowing pattern recognition with limited acuity.
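Scaling stimuli by the cortical magnification factor, as in the control experiment above, is commonly approximated with a linear formula in which stimulus size grows with eccentricity so that each stimulus spans a roughly constant cortical extent. The sketch below uses that standard linear approximation; the doubling eccentricity E2 = 2.5 deg is an illustrative value, not the one used in the paper.

```python
def m_scaled_size(size_fovea_deg, eccentricity_deg, e2=2.5):
    """Scale stimulus size with eccentricity using the common linear
    cortical-magnification approximation:

        S(e) = S0 * (1 + e / E2)

    S0 is the foveal size (deg); E2 (deg) is the eccentricity at which
    the required size doubles. E2 = 2.5 deg is illustrative only.
    """
    return size_fovea_deg * (1 + eccentricity_deg / e2)

# A 1-deg foveal stimulus, equated across eccentricities of 0-20 deg:
sizes = [m_scaled_size(1.0, e) for e in (0, 5, 10, 20)]
```

Under this scaling, a residual peripheral advantage (as found here for topological change detection) cannot be attributed to unequal cortical representation of the stimuli.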
Topics: Form Perception; Humans; Pattern Recognition, Visual; Visual Perception
PubMed: 34570176
DOI: 10.1167/jov.21.10.19
Philosophical Transactions of the Royal..., Sep 2023
Review
The definition of the visual cortex is primarily based on the evidence that lesions of this area impair visual perception. However, this does not exclude that the visual cortex may process more than information of retinal origin alone, or that other brain structures contribute to vision. Indeed, research across the past decades has shown that non-visual information, such as neural activity related to reward expectation and value, locomotion, working memory and other sensory modalities, can modulate primary visual cortical responses to retinal inputs. Nevertheless, the function of this non-visual information is poorly understood. Here we review recent evidence, coming primarily from studies in rodents, arguing that non-visual and motor effects in visual cortex play a role in visual processing itself, for instance by disentangling direct auditory effects on visual cortex from effects of sound-evoked orofacial movement. These findings are placed in a broader framework casting vision in terms of predictive processing under control of frontal, reward- and motor-related systems. In contrast to the prevalent notion that vision is exclusively constructed by the visual cortical system, we propose that visual percepts are generated by a larger network, the extended visual system, spanning other sensory cortices, supramodal areas and frontal systems. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Topics: Motivation; Visual Perception; Visual Cortex; Sound; Causality
PubMed: 37545313
DOI: 10.1098/rstb.2022.0336