Journal of Neurophysiology, Nov 2022
Self-motion through an environment induces various sensory signals, e.g., visual, vestibular, auditory, or tactile. Numerous studies have investigated the role of visual and vestibular stimulation in the perception of self-motion direction (heading). Here, we investigated the rarely considered interaction of visual and tactile stimuli in heading perception. Participants were presented with optic flow simulating forward self-motion across a horizontal ground plane (visual), airflow toward the participants' forehead (tactile), or both. In separate blocks of trials, participants indicated perceived heading from unimodal visual or tactile or bimodal sensory signals. In bimodal trials, presented headings were either spatially congruent or incongruent, with a maximum offset of 30° between visual and tactile heading. To investigate the reference frame in which visuo-tactile heading is encoded, we varied head and eye orientation during presentation of the stimuli. Visual and tactile stimuli were designed to achieve comparable precision of heading reports between modalities. Nevertheless, in bimodal trials heading perception was dominated by the visual stimulus. A change of head orientation had no significant effect on perceived heading, whereas, surprisingly, a change in eye orientation affected tactile heading perception. Overall, we conclude that tactile flow is more important to heading perception than previously thought. We investigated heading perception from visual-only (optic flow), tactile-only (tactile flow), or bimodal self-motion stimuli in conditions varying in head and eye position. Overall, heading perception was body- or world-centered, non-Bayes-optimal, and revealed a centripetal bias. Although visually dominated, bimodal heading perception revealed a significant influence of tactile flow.
Topics: Humans; Motion Perception; Optic Flow; Vestibule, Labyrinth; Touch Perception; Touch; Photic Stimulation; Visual Perception
PubMed: 36259667
DOI: 10.1152/jn.00231.2022
Current Opinion in Psychology, Oct 2019
Review
It is well established that attention improves performance on many visual tasks. However, for more than 100 years, psychologists, philosophers, and neurophysiologists have debated its phenomenology: whether attention actually changes one's subjective experience. Here, we show that it is possible to objectively and quantitatively investigate the effects of attention on subjective experience. First, we review evidence showing that attention alters the appearance of many static and dynamic basic visual dimensions, which mediate changes in appearance of higher-level perceptual aspects. Then, we summarize current views on how attention alters appearance. These findings have implications for our understanding of perception and attention, illustrating that attention affects not only how we perform in visual tasks but actually alters our experience of the visual world.
Topics: Attention; Brain; Contrast Sensitivity; Humans; Photic Stimulation; Visual Perception
PubMed: 30572280
DOI: 10.1016/j.copsyc.2018.10.010
Proceedings. Biological Sciences, Dec 2023
We reveal a unique form of visual perception in infants that precedes the feature integration of colour and motion. Visual perception is established by the integration of multiple features, such as colour and motion direction. Feature integration relies on the ongoing interplay between feedforward and feedback loops, yet our understanding of this causal connection remains incomplete. Researchers have explored the role of recurrent processing in feature integration by studying a visual illusion called 'misbinding', wherein visual features are erroneously merged, resulting in a percept distinct from the originally presented stimuli. Anatomical investigations have revealed that the neural pathways responsible for recurrent connections are underdeveloped in early infancy. Therefore, younger infants could potentially perceive the physically presented visual information that adults miss owing to misbinding. Here, we demonstrate that infants less than half a year old showed no misbinding and thus perceived the physically presented visual information, whereas infants more than half a year old perceived incorrectly integrated visual information, showing misbinding. Our findings indicate that recurrent processing barely functions in infants younger than six months of age and that visual information that would otherwise be integrated is perceived as presented, without integration.
Topics: Adult; Humans; Infant; Motion Perception; Visual Perception; Illusions
PubMed: 38052443
DOI: 10.1098/rspb.2023.2134
Perception, 2001
Topics: Auditory Perception; Humans; Perceptual Masking; Sound Localization; Visual Perception
PubMed: 11383188
DOI: 10.1068/p3004ed
Philosophical Transactions of the Royal..., Feb 2009
Review
How does an animal conceal itself from visual detection by other animals? This review seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints (a 'constraint' here meaning a set of simplifying assumptions) emerge as potentially powerful principles for understanding concealment. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other salient visual attributes such as motion, are central here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers toward issues that may assist in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects.
Topics: Adaptation, Biological; Animals; Appetitive Behavior; Pigmentation; Visual Fields; Visual Perception
PubMed: 18990671
DOI: 10.1098/rstb.2008.0218
Annual Review of Vision Science, Sep 2017
Review
Visual textures are a class of stimuli with properties that make them well suited for addressing general questions about visual function at the levels of behavior and neural mechanism. They have structure across multiple spatial scales, they put the focus on the inferential nature of visual processing, and they help bridge the gap between stimuli that are analytically convenient and the complex, naturalistic stimuli that have the greatest biological relevance. Key questions that are well suited for analysis via visual textures include the nature and structure of perceptual spaces, modulation of early visual processing by task, and the transformation of sensory stimuli into patterns of population activity that are relevant to perception.
Topics: Attention; Discrimination, Psychological; Humans; Pattern Recognition, Visual; Visual Cortex; Visual Fields; Visual Perception
PubMed: 28937948
DOI: 10.1146/annurev-vision-102016-061316
Vision Research, Aug 2019
Topics: Humans; Psychophysics; Reading; Visual Perception
PubMed: 31194983
DOI: 10.1016/j.visres.2019.06.002
NeuroImage, Oct 2022
Research on face perception has revealed highly specialized visual mechanisms, such as configural processing, and provided markers of interindividual differences (including disease risks and alterations) in visuo-perceptual abilities relevant to social cognition. Is face perception unique in the degree or kind of its mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing (a stronger response to facing than to non-facing dyads and greater susceptibility to stimulus inversion for facing than for non-facing dyads) and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that individual performance in body-dyad perception is reliable, stable over time, and correlated with individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms and brain areas, analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic visual processes that lay the foundations for understanding others, their relationships, and their interactions.
Topics: Brain; Facial Recognition; Humans; Pattern Recognition, Visual; Social Perception; Visual Cortex; Visual Perception
PubMed: 35878724
DOI: 10.1016/j.neuroimage.2022.119506
The Journal of Neuroscience : the..., May 2023
From moment to moment, the visual properties of objects in the world fluctuate because of external factors like ambient lighting, occlusion, and eye movements, and because of internal (proximal) noise. Despite this variability in the incoming information, our perception is stable. Serial dependence, the behavioral attraction of current perceptual responses toward previously seen stimuli, may reveal a mechanism underlying stability: a spatiotemporally tuned operator that smooths over spurious fluctuations. The current study examined the neural underpinnings of serial dependence by recording the electroencephalographic (EEG) brain response of female and male human observers to prototypical objects (faces, cars, and houses) and morphs that mixed properties of two prototypes. Behavior was biased toward previously seen objects. Representational similarity analysis (RSA) revealed that responses evoked by visual objects contained information about the previous stimulus. The trace of previous representations in the response to the current object occurred immediately on object appearance, suggesting that serial dependence arises from a brain state or set that precedes processing of new input. However, the brain response to current visual objects was not representationally similar to the trace they leave on subsequent object representations. These results reveal that while past stimulus history influences current representations, this influence does not imply a shared neural code between the previous trial (memory) and the current trial (perception). The perception of visual objects is pulled toward instances of that object seen in the recent past. The neural underpinnings of this serial dependence remain to be fully investigated. The present study examined electroencephalographic (EEG) responses to faces, cars, and houses, and ambiguous between-category morphs.
With representational similarity analysis (RSA), we showed (1) object-specific neural patterns that differentiate the three categories; (2) that the response to the current object contains information about the previous object, mirroring behavioral serial dependence; (3) that the object-specific neural pattern about the past was different from that in the current response, revealing that while past stimulus history influences current representations, this does not imply a shared neural code between the previous trial (memory) and the current trial (perception).
Topics: Male; Humans; Female; Visual Perception; Brain; Eye Movements; Brain Mapping; Electroencephalography; Pattern Recognition, Visual; Photic Stimulation
PubMed: 36944487
DOI: 10.1523/JNEUROSCI.2068-22.2023
Annual Review of Neuroscience, Jul 2023
Review
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation and how they might be relied upon for sensory-driven decision-making, during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Topics: Motion Perception; Cues; Visual Perception; Vestibule, Labyrinth; Cerebral Cortex
PubMed: 37428601
DOI: 10.1146/annurev-neuro-120722-100503