Consciousness and Cognition, Jul 2022
Review
Visual illusions provide a compelling case for the idea that perception and belief may remain incongruent. This can be explained by modular theories of mind, but it is not straightforwardly accommodated by the Predictive Processing framework, which takes perceptual and cognitive predictions to derive from the same underlying inferential hierarchy. Recent insights concerning the neural implementation of Predictive Processing may help elucidate this. Specifically, prior information is proposed to be approximated by mechanisms in both the top-down and bottom-up streams of information processing. While the former is context-dependent and flexible in updating, the latter is context-independent and difficult to revise. We propose that a stable divergence between perception and belief may emerge when flexible prior information at higher hierarchical levels contradicts inflexible prior information at lower ones. This allows Predictive Processing to account for conflicting percepts and beliefs while still maintaining a hierarchical and unitary conception of cognition.
Topics: Cognition; Humans; Illusions; Visual Perception
PubMed: 35679724
DOI: 10.1016/j.concog.2022.103334
Attention, Perception & Psychophysics, Apr 2021
Multiple-object tracking studies consistently reveal attentive tracking limits of approximately three to five items. How do factors such as visual grouping and ensemble perception impact these capacity limits? Which heuristics lead to the perception of multiple objects as a group? This work investigates the role of grouping in multiple-object tracking and, more specifically, identifies the heuristics that lead to the formation and perception of ensembles in dynamic contexts. First, we show that group tracking limits are approximately four groups of objects, independent of the number of items that compose the groups. Further, we show that group tracking performance declines as inter-object spacing increases. We also demonstrate the role of group rigidity in tracking performance: disruptions to common fate negatively impact ensemble tracking ability. The findings from this work contribute to our overall understanding of the perception of dynamic groups of objects. They characterize the properties that determine the formation and perception of dynamic object ensembles, and they inform development and design decisions that must account for the cognitive limitations involved in tracking groups of objects.
Topics: Attention; Humans; Motion Perception; Perception; Space Perception; Visual Perception
PubMed: 33409901
DOI: 10.3758/s13414-020-02219-4
Neuron, Oct 2022
Review
Substantial experimental, theoretical, and computational insights into sensory processing have been derived from the phenomena of perceptual multistability, in which two or more percepts alternate or switch in response to a single sensory input. Here, we review a range of findings suggesting that alternations can be seen as internal choices by the brain responding to values. We discuss how elements of external, experimenter-controlled values and internal, uncertainty- and aesthetics-dependent values influence multistability. We then consider the implications for the involvement in switching of regions, such as the anterior cingulate cortex, that are more conventionally tied to value-dependent operations such as cognitive control and foraging.
Topics: Brain; Uncertainty; Vision, Binocular; Visual Perception
PubMed: 36041434
DOI: 10.1016/j.neuron.2022.07.024
Attention, Perception & Psychophysics, Apr 2021
In a glance, observers can evaluate gist characteristics from crowds of faces, such as the average emotional tenor or the average family resemblance. Prior research suggests that high-level ensemble percepts rely on holistic and viewpoint-invariant information. However, it is also possible that feature-based analysis was sufficient to yield successful ensemble percepts in many situations. To confirm that ensemble percepts can be extracted holistically, we asked observers to report the average emotional valence of Mooney face crowds. Mooney faces are two-tone, shadow-defined images that cannot be recognized in a part-based manner. To recognize features in a Mooney face, one must first recognize the image as a face by processing it holistically. Across experiments, we demonstrated that observers successfully extracted the average emotional valence from crowds that were spatially distributed or viewed in a rapid temporal sequence. In a subsequent set of experiments, we maximized holistic processing by including only those Mooney faces that were difficult to recognize when inverted. Under these conditions, participants remained highly sensitive to the average emotional valence of Mooney face crowds. Taken together, these experiments provide evidence that ensemble perception can operate selectively on holistic representations of human faces, even when feature-based information is not readily available.
Topics: Emotions; Humans; Orientation, Spatial; Perception
PubMed: 33241531
DOI: 10.3758/s13414-020-02173-1
PLoS Biology, Aug 2019
The number of distinct tactile percepts exceeds the number of receptor types in the skin, signifying that perception cannot be explained by a one-to-one mapping from a single receptor channel to a corresponding percept. The abundance of touch experiences results from multiplexing (the coexistence of multiple codes within a single channel, which increases that channel's available information content) and from the mixture of receptor channels by divergence and convergence. When a neuronal representation emerges through the combination of receptor channels, perceptual uncertainty can occur: a perceptual judgment is affected by a stimulus feature that would ideally be excluded from the task. Though uncertainty seems at first glance to reflect nonoptimality in sensory processing, it is actually a consequence of efficient coding mechanisms that exploit prior knowledge about the objects being touched. Studies that analyze how perceptual judgments are "fooled" by variations in sensory input can reveal the neuronal mechanisms underlying the tactile experience.
Topics: Judgment; Neurons; Touch; Touch Perception; Uncertainty
PubMed: 31454344
DOI: 10.1371/journal.pbio.3000430
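The information gain from multiplexing described in this abstract can be illustrated with a toy entropy calculation (a sketch of the general principle only, not an analysis from the paper; the symbol counts are illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A channel whose rate code distinguishes 4 stimulus intensities
# carries at most log2(4) = 2 bits per symbol.
rate_only = entropy([0.25] * 4)

# Multiplexing an independent 2-level temporal code onto the same
# channel yields 4 * 2 = 8 joint symbols: up to 3 bits per symbol.
multiplexed = entropy([0.125] * 8)

print(rate_only, multiplexed)  # 2.0 3.0
```

The extra bit comes for free only if the two codes can be read out independently, which is exactly what the coexistence of codes within one channel provides.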
Proceedings of the National Academy of..., Dec 2017
Clinical Trial
A fundamental problem in extracting scene structure is distinguishing different physical sources of image structure. Light reflected by an opaque surface covaries with local surface orientation, whereas light transported through the body of a translucent material does not. This suggests the possibility that the visual system may use the covariation of local surface orientation and intensity as a cue to the opacity of surfaces. We tested this hypothesis by manipulating the contrast of luminance gradients and the surface geometries to which they belonged and assessed how these manipulations affected the perception of surface opacity/translucency. We show that (i) identical luminance gradients can appear either translucent or opaque depending on the relationship between luminance and perceived 3D surface orientation, (ii) illusory percepts of translucency can be induced by embedding opaque surfaces in diffuse light fields that eliminate the covariation between surface orientation and intensity, and (iii) illusory percepts of opacity can be generated when transparent materials are embedded in a light field that generates images where surface orientation and intensity covary. Our results provide insight into how the visual system distinguishes opaque surfaces and light-permeable materials and why discrepancies arise between the perception and physics of opacity and translucency. These results suggest that the most significant information used to compute the perceived opacity and translucency of surfaces arises at a level of representation where 3D shape is made explicit.
Topics: Contrast Sensitivity; Female; Humans; Male; Perceptual Masking
PubMed: 29229812
DOI: 10.1073/pnas.1711416115
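The proposed cue, covariation of image intensity with local surface orientation, can be sketched with a toy Lambertian simulation (hypothetical setup, not the authors' stimuli or analysis): under a directional light, shading tracks slant; under a fully diffuse light field, it does not, which is the condition the paper uses to induce illusory translucency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unit surface normals on the upper hemisphere.
n = rng.normal(size=(500, 3))
n[:, 2] = np.abs(n[:, 2])
n /= np.linalg.norm(n, axis=1, keepdims=True)

light = np.array([0.0, 0.0, 1.0])  # directional light from above

# Opaque Lambertian surface: intensity covaries with orientation.
slant = np.arccos(np.clip(n[:, 2], -1.0, 1.0))  # angle from light
opaque_intensity = np.clip(n @ light, 0.0, None)

# Fully diffuse light field: intensity is orientation-independent.
diffuse_intensity = np.full(len(n), 0.5)

r_opaque = np.corrcoef(slant, opaque_intensity)[0, 1]
print(r_opaque)                    # strongly negative covariation
print(np.std(diffuse_intensity))   # zero: no covariation to exploit
```

In this toy model, the visual system could in principle read the orientation-intensity correlation as an opacity cue, and its absence as evidence of light transport through the material.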
Nature Communications, Dec 2022
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
Topics: Humans; Motion Perception; Bayes Theorem; Visual Perception; Motion; Psychophysics
PubMed: 36456546
DOI: 10.1038/s41467-022-34805-5
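The paper's hierarchical model is far richer, but the flavor of online Expectation-Maximization it builds on can be sketched with a minimal two-source example (a generic textbook construction, not the authors' algorithm; all names and parameters are illustrative):

```python
import numpy as np

def online_em_step(x, mu, pi, sigma=1.0, lr=0.05):
    """One online EM update for a two-component 1-D Gaussian mixture.
    E-step: posterior responsibilities for the new sample.
    M-step: stochastic update of the means and mixing weights."""
    lik = pi * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    r = lik / lik.sum()            # E-step: posterior over sources
    mu = mu + lr * r * (x - mu)    # M-step: move the responsible mean
    pi = (1 - lr) * pi + lr * r    # M-step: update mixing weights
    return mu, pi

rng = np.random.default_rng(1)
# Interleaved stream of velocity samples from two latent motion sources.
stream = rng.permutation(np.concatenate([rng.normal(-3, 1, 500),
                                         rng.normal(3, 1, 500)]))
mu, pi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
for x in stream:
    mu, pi = online_em_step(x, mu, pi)
print(np.round(np.sort(mu), 1))  # means recover the two latent sources
```

The point of the online formulation is the same as in the paper: each incoming sample updates the latent-structure estimate immediately, so no batch of observations ever needs to be stored.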
The Journal of the Acoustical Society..., Dec 2022
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
Topics: Humans; Speech Perception; Illusions; Visual Perception; Language; Speech; Auditory Perception; Photic Stimulation; Acoustic Stimulation
PubMed: 36586857
DOI: 10.1121/10.0015262
Cognitive, Affective & Behavioral..., Oct 2020
Previous studies have demonstrated that highly narcissistic individuals perceive themselves as grandiose and devalue, and sometimes overvalue, others. These results are mainly based on behavioural data, and we still know little about the underlying neural correlates of such perceptual processes. To this end, we investigated event-related potential (ERP) components of visual face processing (P1 and N170) and their variations with narcissism. Participants (N = 59) completed the Narcissistic Admiration and Rivalry Questionnaire and were shown pictures of their own face, a celebrity's face, and a stranger's face. Variations of P1 and N170 with Admiration and Rivalry were analysed using multilevel models. Results revealed moderating effects of both narcissism dimensions on the ERP components of interest. Participants with either high Admiration or low Rivalry scores showed a lower P1 amplitude when viewing their own face compared with a celebrity's face. Moreover, the Self-Stranger difference in the N170 component (a higher N170 amplitude in the Self condition) was larger for higher Rivalry scores. The findings showed, for the first time, variations of both narcissism dimensions with ERPs of early face processing. We related these effects to processes of attentional selection, expectancy-driven perception, and the mobilisation of defensive systems. The results demonstrated that by linking self-report instruments to P1 and N170, and possibly to other ERP components, we might better understand self- and other-perception in narcissism.
Topics: Adult; Electroencephalography; Evoked Potentials; Facial Recognition; Female; Humans; Male; Narcissism; Self Concept; Social Perception; Young Adult
PubMed: 32803683
DOI: 10.3758/s13415-020-00818-0
Trends in Cognitive Sciences, Oct 2023
Review
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Topics: Humans; Adult; Auditory Perception; Spatial Processing; Cues; Acoustic Stimulation; Visual Perception
PubMed: 37208286
DOI: 10.1016/j.tics.2023.04.012
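The integration of spatial cues across modalities under changing cue reliabilities, as discussed in this review, is classically formalized as maximum-likelihood (reliability-weighted) cue combination. A minimal sketch of that standard model (the numbers are illustrative, not data from the review):

```python
def integrate(s_a, var_a, s_v, var_v):
    """Maximum-likelihood cue combination: each cue is weighted by its
    reliability (inverse variance), and the fused estimate's variance
    is lower than that of either cue alone."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    s_hat = w_a * s_a + (1 - w_a) * s_v
    var_hat = 1 / (1 / var_a + 1 / var_v)
    return s_hat, var_hat

# Noisy auditory location estimate at 10 deg (variance 4) combined
# with a precise visual estimate at 2 deg (variance 1):
s_hat, var_hat = integrate(10.0, 4.0, 2.0, 1.0)
print(round(s_hat, 1), round(var_hat, 1))  # 3.6 0.8
```

The fused estimate is pulled toward the more reliable visual cue, which is why vision typically "captures" sound in spatial tasks; developmental changes in cue reliability shift these weights over ontogeny.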