Journal of Neurophysiology, Jul 2021
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings that were presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust when the two stimuli were aligned within ±250 ms; across subjects, the bias in this window was 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar. The effect of relative timing on visual-inertial integration in heading perception had not been previously examined. This study finds that visual heading direction influences inertial heading perception when timing differences are within ±250 ms. This suggests that visual-inertial stimuli can be integrated over a wider temporal range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: the inertial system can only sense acceleration, while the visual system senses position but encodes velocity.
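The per-trial bias measure can be made concrete in a few lines; a minimal sketch (the function name and the example trial are illustrative, not from the paper):

```python
import numpy as np

def bias_toward_visual(reported_deg, inertial_deg, visual_offset_deg):
    """Signed bias of a reported inertial heading toward the visual heading.

    Positive values mean the report was pulled in the direction of the
    visual offset; the raw angular error is wrapped to [-180, 180) first.
    """
    err = (np.asarray(reported_deg) - np.asarray(inertial_deg) + 180.0) % 360.0 - 180.0
    # Sign by the direction of the visual offset so "toward visual" is
    # positive whether the offset was clockwise or counterclockwise.
    return err * np.sign(visual_offset_deg)

# Hypothetical trial: inertial heading 35 deg, visual offset +60 deg,
# report of 47 deg -> 12 deg of bias toward the visual heading.
print(bias_toward_visual(47.0, 35.0, 60.0))  # 12.0
```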
Topics: Adult; Aged; Female; Head Movements; Humans; Male; Middle Aged; Motion Perception; Photic Stimulation; Time Factors; Vestibule, Labyrinth; Visual Perception; Young Adult
PubMed: 34191637
DOI: 10.1152/jn.00351.2020

PloS One, 2024
Object and scene perception are intertwined. When objects are expected to appear within a particular scene, they are detected and categorised with greater speed and accuracy. This study examined whether such context effects also moderate the perception of social objects such as faces. Female and male faces were embedded in scenes with a stereotypical female or male context. Semantic congruency of these scene contexts influenced the categorisation of faces (Experiment 1). These effects were bi-directional, such that face sex also affected scene categorisation (Experiment 2), suggesting concurrent automatic processing of both levels. In contrast, the more elementary task of face detection was not affected by semantic scene congruency (Experiment 3), even when scenes were previewed prior to face presentation (Experiment 4). This pattern of results indicates that semantic scene context can affect categorisation of faces. However, the earlier perceptual stage of detection appears to be encapsulated from the cognitive processes that give rise to this contextual interference.
Topics: Humans; Female; Male; Adult; Young Adult; Photic Stimulation; Reaction Time; Face; Pattern Recognition, Visual; Visual Perception; Semantics; Facial Recognition; Adolescent
PubMed: 38865378
DOI: 10.1371/journal.pone.0304288

Journal of Vision, Mar 2022
Visual orientation plays an important role in postural control, but the specific characteristics of the postural response to orientation remain unknown. In this study, we investigated the relationship between postural response and the subjective visual vertical (SVV) as a function of scene orientation. We presented a virtual room including everyday objects through a head-mounted display and measured head tilt around the naso-occipital axis. The room orientation varied from 165° counterclockwise to 180° clockwise around the center of the display in 15° increments. In a separate session, we also conducted a rod adjustment task to record each participant's SVV in the tilted room. We applied a weighted vector sum model to head tilt and SVV error and obtained the weights of three visual cues to orientation: frame, horizon, and polarity. We found significant contributions from all visual cues to both head tilt and SVV error. For SVV error, frame cues made the largest contribution, whereas polarity cues made the smallest. For head tilt, there was no clear difference across visual cue types, although the order of contributions was similar to the SVV. These findings suggest that multiple visual cues to orientation are involved in postural control and imply different representations of vertical orientation across postural control and perception.
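The weighted-sum idea lends itself to a compact fitting sketch. Below, each cue class is approximated as a sinusoidal pull on the perceived vertical whose period matches the cue's rotational symmetry (frame 90°, horizon 180°, polarity 360°); this simplified parameterization and the placeholder data are assumptions for illustration, not the authors' exact model or fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def predicted_error(theta_deg, w_frame, w_horizon, w_polarity):
    """Predicted SVV error (or head tilt) at scene orientation theta_deg.

    Each visual cue pulls the vertical with a sinusoid at the period of the
    cue's symmetry: frame structure repeats every 90 deg, the horizon every
    180 deg, and object polarity every 360 deg. The three weights are the
    free parameters recovered by the fit.
    """
    t = np.deg2rad(theta_deg)
    return (w_frame * np.sin(4 * t)      # 90-deg period
            + w_horizon * np.sin(2 * t)  # 180-deg period
            + w_polarity * np.sin(t))    # 360-deg period

# Scene orientations from 165 deg CCW to 180 deg CW in 15-deg steps.
theta = np.arange(-165, 181, 15).astype(float)
svv_error = predicted_error(theta, 5.0, 2.0, 1.0)  # placeholder "data"
weights, _ = curve_fit(predicted_error, theta, svv_error)
print(weights)  # recovers [5.0, 2.0, 1.0]
```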
Topics: Cues; Humans; Orientation; Postural Balance; Space Perception; Visual Perception
PubMed: 35234839
DOI: 10.1167/jov.22.4.1

Vision Research, Nov 2021
Center-surround antagonism, a ubiquitous feature of visual processing, usually leads to inferior perception for a large stimulus compared with a small one. For example, it is more difficult to judge the motion direction of a large high-contrast pattern than that of a small one. However, this spatial suppression in the motion dimension had only been reported for luminance motion and was not found for chromatic motion. Given that center-surround suppression only occurs for strong visual inputs, we hypothesized that previous failures to find spatial suppression of chromatic motion might be due to weak chromatic motion signals induced by stimuli within a limited parameter range. In this study, we used phase-shift discrimination and motion-direction discrimination tasks to measure motion spatial suppression induced by stimuli of two spatial frequencies (0.5 and 2 cpd) and two contrasts (low and high). We found that spatial suppression of chromatic motion was stably observed for stimuli of high spatial frequency (2 cpd) and high contrast, whereas spatial summation occurred for stimuli of low spatial frequency (0.5 cpd). Intriguingly, there were no correlations between the spatial suppression of luminance motion and that of chromatic motion, implying that the two types of spatial suppression do not originate from the same neural processing. Our findings indicate that spatial suppression also exists for chromatic motion and that the mechanism underlying the spatial suppression of chromatic motion differs from that of luminance motion.
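Spatial suppression and summation of this kind are conventionally quantified with a log-ratio suppression index borrowed from the luminance-motion literature; the threshold values below are hypothetical, and the paper's tasks may define thresholds differently.

```python
import numpy as np

def suppression_index(threshold_large, threshold_small):
    """Log ratio of discrimination thresholds for large vs. small stimuli.

    Positive values indicate spatial suppression (large stimuli are harder
    to judge); negative values indicate spatial summation (large stimuli
    are easier).
    """
    return np.log10(threshold_large / threshold_small)

# Hypothetical thresholds for a 2-cpd, high-contrast chromatic grating:
print(suppression_index(12.0, 6.0))  # +0.30 -> suppression
# Hypothetical thresholds for a 0.5-cpd grating:
print(suppression_index(4.0, 8.0))   # -0.30 -> summation
```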
Topics: Color Perception; Contrast Sensitivity; Humans; Motion; Motion Perception; Psychophysics; Visual Perception
PubMed: 34385078
DOI: 10.1016/j.visres.2021.07.014

Cognition, Feb 2023
Humans can effortlessly assess the complexity of the visual stimuli they encounter. However, our understanding of how we do this, and of the factors that shape our perception of scene complexity, remains unclear, especially for the natural scenes in which we are constantly immersed. We introduce several new datasets to further our understanding of human perception of scene complexity. Our first dataset (VISC-C) contains 800 scenes and 800 corresponding two-dimensional complexity annotations gathered from human observers, allowing exploration of how complexity perception varies across a scene. Our second dataset (VISC-CI) consists of inverted scenes (reflections on the horizontal axis) with corresponding complexity maps collected from human observers. Inverting images in this fashion disrupts the semantic characteristics of a scene for human viewers, and hence allows analysis of the impact of semantics on perceptual complexity. We analysed perceptual complexity from both a single-score and a two-dimensional perspective, by evaluating a set of calculable and observable perceptual features grounded in psychological research (clutter, symmetry, entropy, and openness). We assessed these factors' relationship to complexity via hierarchical regression analyses, tested the efficacy of various neural models against our datasets, and validated our perceptual features against a large and varied complexity dataset consisting of nearly 5,000 images. Our results indicate that both global image properties and semantic features are important for complexity perception. We further verified this by combining the identified perceptual features with the output of a neural network predictor capable of extracting semantics, and found that this increased the amount of explained human variance in complexity beyond that of low-level measures alone. Finally, we dissected our best-performing prediction network, determining that artificial neurons learn to extract both global image properties and semantic details from scenes for complexity prediction. Based on our experimental results, we propose the "dual information" framework of complexity perception, hypothesising that humans rely on both low-level image features and high-level semantic content to evaluate the complexity of images.
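Hierarchical regression of the kind described here enters predictor blocks in stages and tests the variance explained by each addition. A minimal sketch with statsmodels (the feature names and synthetic data are illustrative, not the paper's predictors):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 800  # one rating per scene, as in VISC-C

# Block 1: low-level features (e.g., clutter, entropy, symmetry).
low_level = rng.normal(size=(n, 3))
# Block 2: a semantic feature (e.g., a network-derived semantic score).
semantic = rng.normal(size=(n, 1))
# Synthetic complexity ratings with a genuine semantic contribution.
ratings = low_level @ np.array([0.4, 0.3, 0.2]) \
          + 0.5 * semantic[:, 0] + rng.normal(0, 1.0, n)

m1 = sm.OLS(ratings, sm.add_constant(low_level)).fit()
m2 = sm.OLS(ratings, sm.add_constant(np.hstack([low_level, semantic]))).fit()
print(f"R^2, low-level only:      {m1.rsquared:.3f}")
print(f"R^2, plus semantic block: {m2.rsquared:.3f}")
# F-test on the R^2 increment: does the semantic block add explained variance?
f_val, p_val, df_diff = m2.compare_f_test(m1)
print(f"F({df_diff:.0f}) = {f_val:.2f}, p = {p_val:.2g}")
```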
Topics: Humans; Visual Perception; Learning; Semantics
PubMed: 36399902
DOI: 10.1016/j.cognition.2022.105319

Journal of Neurophysiology, Oct 2022 (Review)
Conventional computational theories limit our understanding of how action and perception are controlled. In an alternative scheme, the nervous system controls the values of physical and neurophysiological parameters that predetermine the choice of the spatial frames of reference (FRs) for action and perception. For example, all possible eye positions, Q, can be considered as comprising a spatial FR in which extraocular muscles (EOMs) stabilize gaze directions. The origin or referent point of this FR is a specific threshold eye position, R, at which EOMs can be quiescent but are activated depending on the difference between Q and R. Starting before eye motion, shifts in R cause displacement of the FR and resetting of the stable equilibrium position to which the eyes are forced to move. Rather than corollary discharge, the depiction of visual images integrated across the entire retina in the shifted spatial FR is responsible for remapping visual receptive fields (RFs) and for visual constancy. These suggestions are illustrated in computer models of saccades in the referent control framework in humans and monkeys. The existence of three types of visual RF remapping during saccades is suggested. Properly scaled, the shifts in R underlying a saccade are transmitted to motoneurons of arm muscles to guide reach-to-grasp motion in the same, eye-centered FR. Some predictions of the proposed control scheme have been verified, and new tests are suggested. The scheme is applicable to several eye-hand coordination deficits, including micrographia in Parkinson's disease, and explains why vision helps deafferented subjects diminish movement deficits.
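The core referent-control idea, that a shift of the referent position R resets the equilibrium toward which the plant is driven, can be caricatured with a damped-spring eye plant. A toy sketch; the stiffness and damping values are illustrative, not fitted oculomotor parameters:

```python
import numpy as np

def simulate_saccade(q0_deg, r_new_deg, k=600.0, b=40.0, dt=0.001, t_end=0.15):
    """Toy referent-control saccade.

    Shifting the referent eye position R (before any eye motion) moves the
    equilibrium of the plant; the eye position Q is then driven toward R by
    forces that depend on the difference Q - R, modeled here as a damped
    spring.
    """
    q, dq = q0_deg, 0.0
    trajectory = []
    for _ in range(int(t_end / dt)):
        ddq = -k * (q - r_new_deg) - b * dq  # restoring force toward R
        dq += ddq * dt                       # semi-implicit Euler step
        q += dq * dt
        trajectory.append(q)
    return np.array(trajectory)

# A 10-deg referent shift produces a saccade-like transition of the eye
# from 0 deg to roughly 10 deg within ~150 ms.
traj = simulate_saccade(0.0, 10.0)
print(traj[-1])  # close to 10.0
```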
Topics: Humans; Movement; Saccades; Vision, Ocular; Visual Perception
PubMed: 36070246
DOI: 10.1152/jn.00531.2021

Journal of Vision, Jun 2020
Visual crowding, the deleterious influence of nearby objects on object recognition, is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation in which crowding may play a critical role. To investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). To further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.
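Bouma's rule of thumb, the first diagnostic criterion above, is compact enough to state in code: crowding is expected when flankers fall within roughly half the target's eccentricity. A minimal check (the function and the example values are illustrative):

```python
def within_crowding_zone(target_ecc_deg, spacing_deg, b=0.5):
    """Bouma's rule of thumb: a peripheral target is crowded when the
    target-flanker spacing is less than about b * eccentricity (b ~ 0.5).

    This binary check ignores the radial-tangential anisotropy, i.e., that
    real crowding zones extend farther radially than tangentially.
    """
    return spacing_deg < b * target_ecc_deg

# A pedestrian at 8 deg eccentricity: clutter within ~4 deg should crowd it.
print(within_crowding_zone(target_ecc_deg=8.0, spacing_deg=3.0))  # True
print(within_crowding_zone(target_ecc_deg=8.0, spacing_deg=5.0))  # False
```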
Topics: Adult; Automobile Driving; Crowding; Eye Movements; Female; Humans; Male; Pattern Recognition, Visual; Recognition, Psychology; Saccades; Visual Perception; Young Adult
PubMed: 32492098
DOI: 10.1167/jov.20.6.1

Attention, Perception & Psychophysics, May 2024
We can efficiently grasp various features of the outside world using summary statistics. Among these statistics, variance is an index of information homogeneity or reliability. Previous research has shown that visual variance information in the context of spatial integration is encoded directly as a unique feature, and that currently perceived variance can be distorted by that of preceding stimuli. In this study, we focused on variance perception in temporal integration. We investigated whether any variance aftereffects occurred for visual size and auditory pitch. Furthermore, to examine the mechanism of cross-modal variance perception, we also investigated whether variance aftereffects occur between modalities. Four experimental conditions (combinations of the sensory modalities of adaptor and test: visual-to-visual, visual-to-auditory, auditory-to-auditory, and auditory-to-visual) were conducted. Participants observed a sequence of visual or auditory stimuli perturbed in size or pitch with a certain variance and performed a variance classification task before and after the variance adaptation phase. We found that for visual size, within-modality adaptation to small or large variance resulted in a variance aftereffect, indicating that variance judgments are biased away from the variance of the adapting stimulus. For auditory pitch, within-modality adaptation to small variance caused a variance aftereffect. For cross-modal combinations, adaptation to small variance in visual size resulted in a variance aftereffect, but the effect was weak, and no variance aftereffect occurred in the other conditions. These findings indicate that the variance information of sequentially presented stimuli is encoded independently in the visual and auditory domains.
Topics: Humans; Male; Female; Young Adult; Adult; Pitch Perception; Figural Aftereffect; Size Perception; Visual Perception; Auditory Perception; Adaptation, Physiological
PubMed: 37100981
DOI: 10.3758/s13414-023-02705-5

The Journal of Neuroscience, Feb 2022
Covert spatial attention (without concurrent eye movements) improves performance in many visual tasks (e.g., orientation discrimination and visual search). However, the two covert attention systems, endogenous (voluntary) and exogenous (involuntary), exhibit differential effects on performance in tasks mediated by spatial and temporal resolution, suggesting an underlying mechanistic difference. We investigated whether these differences manifest in sensory tuning by assessing whether and how endogenous and exogenous attention differentially alter the representation of two basic visual dimensions: orientation and spatial frequency (SF). The same human observers detected a grating embedded in noise in two separate experiments (with endogenous or exogenous attention cues). Reverse correlation was used to infer the underlying neural representation from behavioral responses, and we linked our results to established neural computations via a normalization model of attention. Both endogenous and exogenous attention similarly improved performance at the attended location by enhancing the gain of all orientations without changing tuning width. In the SF dimension, endogenous attention enhanced the gain of SFs above and below the target SF, whereas exogenous attention only enhanced those above. Additionally, exogenous attention shifted peak sensitivity to SFs above the target SF, whereas endogenous attention did not. Both covert attention systems modulated sensory tuning via the same computation (gain changes), but the strength of the gain differed: compared with endogenous attention, exogenous attention produced a stronger orientation gain enhancement but a weaker overall SF gain enhancement. These differences in sensory tuning may underlie the differential effects of endogenous and exogenous attention on performance. Covert spatial attention is a fundamental aspect of cognition and perception that allows us to selectively process and prioritize incoming visual information at a given location. There are two types: endogenous (voluntary) and exogenous (involuntary). Both typically improve visual perception, but there are instances where endogenous attention improves perception while exogenous attention hinders it. Whether and how such differences extend to sensory representations was unknown. Here we show that both endogenous and exogenous attention mediate perception via the same neural computation, gain changes, but that the strength of the orientation gain and the range of enhanced spatial frequencies depend on the type of attention deployed. These findings reveal that the two attention systems differentially reshape the tuning of features coded in striate cortex.
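The normalization model of attention referenced above has a compact form: population response equals attention-weighted stimulus drive divided by a pooled suppressive drive plus a constant. A one-dimensional sketch over orientation (the pooling kernel and all parameters are illustrative, not the study's fitted model):

```python
import numpy as np

def normalization_model(stim_drive, attn_gain, sigma=0.1):
    """1-D sketch of a Reynolds-Heeger-style normalization model of attention.

    Response = attention-weighted excitatory drive, divided by a broadly
    pooled suppressive drive plus a semisaturation constant sigma.
    """
    excitatory = stim_drive * attn_gain
    x = np.linspace(-3, 3, stim_drive.size)
    kernel = np.exp(-0.5 * x ** 2)   # broad Gaussian pooling kernel
    kernel /= kernel.sum()
    suppressive = np.convolve(excitatory, kernel, mode="same")
    return excitatory / (suppressive + sigma)

# Orientation-tuned stimulus drive (deg) and a multiplicative attention gain:
theta = np.linspace(-90, 90, 181)
drive = np.exp(-0.5 * (theta / 20.0) ** 2)
r_neutral = normalization_model(drive, attn_gain=1.0)
r_attended = normalization_model(drive, attn_gain=2.0)
# The gain change scales responses without narrowing tuning width,
# consistent with the orientation-gain result described above.
print(r_attended.max() / r_neutral.max())  # > 1
```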
Topics: Adult; Attention; Brain; Female; Humans; Male; Visual Perception
PubMed: 34965975
DOI: 10.1523/JNEUROSCI.0892-21.2021

Attention, Perception & Psychophysics, Nov 2022
Research has shown that visual moving and multisensory stimuli can efficiently convey rhythmic information. It is possible, therefore, that the previously reported auditory dominance in rhythm perception is due to the use of nonoptimal visual stimuli. Yet it remains unknown whether exposure to multisensory or visual-moving rhythms benefits the processing of rhythms consisting of nonoptimal static visual stimuli. Using a perceptual learning paradigm, we tested whether the visual component of the multisensory training pair affects processing of metric simple, two-integer-ratio, nonoptimal visual rhythms. Participants were trained with static (AVstat), moving-inanimate (AVinan), or moving-animate (AVan) visual stimuli along with auditory tones and a regular beat. In the pre- and posttraining tasks, participants responded whether two static-visual rhythms differed or not. Results showed improved posttraining performance for all training groups, irrespective of the type of visual stimulation. To assess whether this benefit was auditory driven, we introduced visual-only training with a moving (Vinan) or static (Vstat) stimulus and a regular beat. Comparisons between Vinan and Vstat showed that, even in the absence of auditory information, training with visual-only moving or static stimuli resulted in enhanced posttraining performance. Overall, our findings suggest that both audiovisual and visual-only (static or moving) training can benefit the processing of nonoptimal visual rhythms.
Topics: Humans; Auditory Perception; Acoustic Stimulation; Photic Stimulation; Learning; Visual Perception
PubMed: 36241841
DOI: 10.3758/s13414-022-02569-1