Vision Research, Oct 1993
We examine two hypotheses about the functional segregation of color and motion perception, using a motion nulling task. The most common interpretation of functional segregation, that motion perception depends only on one of the three dimensions of color, is rejected. We propose and test an alternative formulation of functional segregation: that motion perception depends on a univariate motion signal driven by all three color dimensions, and that the motion signal is determined by the product of the stimulus contrast and a term that depends only on the relative cone excitations. Two predictions of this model are confirmed. First, motion nulling is transitive: when two stimuli each null a third stimulus, they also null one another. Second, motion nulling is homogeneous: if two stimuli null one another, they continue to null one another when their contrasts are scaled equally. We describe how to apply our formulation of functional segregation to other behavioral and physiological measurements.
Topics: Color Perception; Contrast Sensitivity; Humans; Judgment; Motion Perception; Visual Fields
PubMed: 8266653
DOI: 10.1016/0042-6989(93)90010-t
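The product rule proposed in this abstract can be illustrated with a toy numerical sketch (not the authors' code; the function names and color terms below are hypothetical illustrative values): each stimulus's motion signal is its contrast times a term that depends only on its chromatic direction, and two opposed stimuli null when their signals match. Transitivity and homogeneity then follow directly.

```python
def motion_signal(contrast, f_color):
    # Proposed model: signal = contrast * f(color direction), where
    # f depends only on the relative cone excitations of the stimulus.
    return contrast * f_color

def nulls(stim1, stim2, tol=1e-9):
    # Two stimuli moving in opposite directions null when their
    # univariate motion signals are equal in magnitude.
    return abs(motion_signal(*stim1) - motion_signal(*stim2)) < tol

# Three stimuli as (contrast, hypothetical color term); all yield signal 0.2
a = (0.10, 2.0)
b = (0.05, 4.0)
c = (0.20, 1.0)

# Transitivity: b nulls a and c nulls a, so b and c null one another
transitive = nulls(a, b) and nulls(a, c) and nulls(b, c)

# Homogeneity: scaling both contrasts by the same factor preserves the null
homogeneous = nulls((3 * a[0], a[1]), (3 * b[0], b[1]))
```

Both predicted properties hold for any choice of color terms, because equality of products is preserved under relabeling and common scaling.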
Nature Neuroscience, Nov 2023
Inhibitory interactions between opponent neuronal pathways constitute a common circuit motif across brain areas and species. However, in most cases, synaptic wiring and biophysical, cellular and network mechanisms generating opponency are unknown. Here, we combine optogenetics, voltage and calcium imaging, connectomics, electrophysiology and modeling to reveal multilevel opponent inhibition in the fly visual system. We uncover a circuit architecture in which a single cell type implements direction-selective, motion-opponent inhibition at all three network levels. This inhibition, mediated by GluClα receptors, is balanced with excitation in strength, despite tenfold fewer synapses. The different opponent network levels constitute a nested, hierarchical structure operating at increasing spatiotemporal scales. Electrophysiology and modeling suggest that distributing this computation over consecutive network levels counteracts a reduction in gain, which would result from integrating large opposing conductances at a single instance. We propose that this neural architecture provides resilience to noise while enabling high selectivity for relevant sensory information.
Topics: Animals; Drosophila; Neurons; Synapses; Motion Perception; Visual Pathways
PubMed: 37783895
DOI: 10.1038/s41593-023-01443-z
Journal of Vision, Apr 2018
Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that gives rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out," three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of objective, forced-choice methods for revealing the mechanisms underlying motion transparency perception.
Topics: Female; Humans; Male; Motion; Motion Perception; Photic Stimulation; Psychophysics; Visual Perception
PubMed: 29614154
DOI: 10.1167/18.4.5
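The standard/comparison construction described above can be sketched as follows (a minimal illustration, not the study's stimulus code; the function name, ranges, and notch values are assumptions): the standard draws dot speeds from a full uniform range, while the comparison draws from the same range with a notch removed by rejection sampling.

```python
import random

def sample_speeds(n, lo, hi, notch=None, rng=random):
    # Draw n dot speeds from a uniform distribution on [lo, hi].
    # For the comparison stimulus, values falling inside the notch
    # interval (a, b) are rejected and redrawn, removing that band.
    speeds = []
    while len(speeds) < n:
        s = rng.uniform(lo, hi)
        if notch is not None and notch[0] < s < notch[1]:
            continue  # reject speeds inside the notch
        speeds.append(s)
    return speeds

# Standard: full uniform range; comparison: same range, central band removed
standard = sample_speeds(100, 1.0, 9.0)
comparison = sample_speeds(100, 1.0, 9.0, notch=(4.0, 6.0))
```

Widening the notch increases the bimodality of the comparison's speed distribution, which is the manipulated variable in the odd-one-out judgment.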
Vision Research, Nov 2019
To process the motion of objects, humans need to consider information about the up-down direction, obtained through various cues such as the direction of gravity in the environment, visual polarity, and body direction. This study investigates the effects of the up-down direction, as obtained from these cues, on motion perception, with a focus on acceleration perception. We presented participants with objects moving at various accelerations and measured the physical acceleration perceived as constant velocity. We examined the effect of the up-down direction indicated by visual polarity by manipulating the posture of the observer, thereby changing the relationship between the up-down direction indicated by the gravity cue and that indicated by visual polarity. The results showed that the up-down direction indicated by gravity affected motion perception. Moreover, the up-down direction indicated by visual polarity affected motion perception when the observer's body direction and the physical gravity direction differed. On the other hand, the up-down direction indicated by visual polarity did not affect motion perception when the body direction coincided with the physical gravity direction. Overall, the results suggest that the up-down directions indicated by gravity, visual polarity, and body direction are integrated non-linearly in the perceived acceleration of visual motion.
Topics: Acceleration; Adult; Cues; Female; Gravitation; Humans; Male; Motion Perception; Orientation; Photic Stimulation; Posture; Young Adult
PubMed: 31542657
DOI: 10.1016/j.visres.2019.08.005
eLife, Jun 2022
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Topics: Animals; Cues; Motion; Motion Perception; Temporal Lobe; Vision Disparity
PubMed: 35642599
DOI: 10.7554/eLife.74971
Journal of Physiological Anthropology..., Nov 2004
The neural mechanisms for the perception of face and motion were studied using psychophysical threshold measurements, event-related potentials (ERPs), and functional magnetic resonance imaging (fMRI). A face-specific ERP component, N170, was recorded over the posterior temporal cortex. Removal of the high-spatial-frequency components of the face altered the perception of familiar faces significantly, and familiarity can facilitate the cortico-cortical processing of facial perceptions. Similarly, the high-spatial-frequency components of the face seemed to be crucial for the recognition of facial expressions. Aging and visuospatial impairments affected motion perception significantly. Two distinct components of motion ERPs, N170 and P200, were recorded over the parietal region. The former was related to horizontal motion perception while the latter reflected the perception of radial optic flow motion. The results of fMRI showed that horizontal movements of objects and radial optic flow motion were perceived differently in the V5/MT and superior parietal lobe. We conclude that an integrated approach can provide useful information on spatial and temporal processing of face and motion non-invasively.
Topics: Cognition Disorders; Evoked Potentials, Visual; Face; Humans; Magnetic Resonance Imaging; Motion; Motion Perception; Recognition, Psychology; Visual Perception
PubMed: 15599074
DOI: 10.2114/jpa.23.273
Social Cognitive and Affective..., Mar 2023
Although the ability to detect the actions of other living beings is key for adaptive social behavior, it is still unclear if biological motion perception is specific to human stimuli. Biological motion perception involves both bottom-up processing of movement kinematics ('motion pathway') and top-down reconstruction of movement from changes in the body posture ('form pathway'). Previous research using point-light displays has shown that processing in the motion pathway depends on the presence of a well-defined, configural shape (objecthood) but not necessarily on whether that shape depicts a living being (animacy). Here, we focused on the form pathway. Specifically, we combined electroencephalography (EEG) frequency tagging with apparent motion to study how objecthood and animacy influence posture processing and the integration of postures into movements. By measuring brain responses to repeating sequences of well-defined or pixelated images (objecthood), depicting human or corkscrew agents (animacy), performing either fluent or non-fluent movements (movement fluency), we found that movement processing was sensitive to objecthood but not animacy. In contrast, posture processing was sensitive to both. Together, these results indicate that reconstructing biological movements from apparent motion sequences requires a well-defined but not necessarily an animate shape. Instead, stimulus animacy appears to be relevant only for posture processing.
Topics: Humans; Photic Stimulation; Movement; Motion Perception; Posture; Social Behavior
PubMed: 36905406
DOI: 10.1093/scan/nsad014
Journal of Vestibular Research :..., 2008 (Review)
This review focusses attention on a ragged edge of our knowledge of self-motion perception, where understanding ends but there are experimental results to indicate that present approaches to analysis are inadequate. Although self-motion perception displays processes of "top-down" construction, it is typically analyzed as if it is nothing more than a deformation of the stimulus, using a "bottom-up" and input/output approach beginning with the transduction of the stimulus. Analysis often focusses on the extent to which passive transduction of the movement stimulus is accurate. Some perceptual processes that deform or transform the stimulus arise from the way known properties of sensory receptors contribute to perceptual accuracy or inaccuracy. However, further constructive processes in self-motion perception that involve discrete transformations are not well understood. We introduce constructive perception with a linguistic example which displays familiar discrete properties, then look closely at self-motion perception. Examples of self-motion perception begin with cases in which constructive processes transform particular properties of the stimulus. These transformations allow the nervous system to compose whole percepts of movement; that is, self-motion perception acts at a whole-movement level of analysis, rather than passively transducing individual cues. These whole-movement percepts may be paradoxical. In addition, a single stimulus may give rise to multiple perceptions. After reviewing self-motion perception studies, we discuss research methods for delineating principles of the constructed perception of self-motion. The habit of viewing self-motion illusions only as continuous deformations of the stimulus may be blinding the field to other perceptual phenomena, including those best characterized using the mathematics of discrete transformations or mathematical relationships relating sensory modalities in novel, sometimes discrete ways. 
Analysis of experiments such as these is required to mathematically formalize elements of self-motion perception, the transformations they may undergo, consistency principles, and logical structure underlying multiplicity of perceptions. Such analysis will lead to perceptual rules analogous to those recognized in visual perception.
Topics: Gravitation; Humans; Motion Perception; Optical Illusions; Rotation; Visual Perception
PubMed: 19542599
DOI: No ID Found
The Journal of Neuroscience : the..., Apr 2017
Recent work from several groups has shown that perception of various visual attributes in human observers at a given moment is biased toward what was recently seen. This positive serial dependency is a kind of temporal averaging that exploits short-term correlations in visual scenes to reduce noise and stabilize perception. To date, this stabilizing "continuity field" has been demonstrated for stable visual attributes such as orientation and face identity, yet it would be counterproductive to apply it to dynamic attributes for which change sensitivity is needed. Here, we tested this using motion direction discrimination, predicting a negative perceptual dependency: a contrastive relationship that enhances sensitivity to change. Surprisingly, our data showed a cubic-like pattern of dependencies with positive and negative components. By interleaving various stimulus combinations, we separated the components and isolated a positive perceptual dependency for motion and a negative dependency for orientation. A weighted linear sum of the separate dependencies described the original cubic pattern well. The positive dependency for motion shows an integrative perceptual effect and was unexpected, although it is consistent with work on motion priming. These findings suggest that a perception-stabilizing continuity field occurs pervasively, even when it obscures sensitivity to dynamic stimuli.

Recent studies show that visual perception at a given moment is not entirely veridical, but rather biased toward recently seen stimuli: a positive serial dependency. This temporal smoothing process helps perceptual continuity by preserving stable aspects of the visual scene over time, yet, for dynamic stimuli, temporal smoothing would blur dynamics and reduce sensitivity to change. We tested whether this process is selective for stable attributes by examining dependencies in motion perception.
We found a clear positive dependency for motion, suggesting that positive perceptual dependencies are pervasive. We also found a concurrent negative (contrastive) dependency for orientation. Both dependencies combined linearly to determine perception, showing that the brain can calculate contrastive and integrative dependencies simultaneously from recent stimulus history when making perceptual decisions.
Topics: Adult; Discrimination, Psychological; Female; Humans; Male; Motion Perception; Orientation, Spatial; Repetition Priming; Space Perception
PubMed: 28330878
DOI: 10.1523/JNEUROSCI.4601-15.2017
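The "weighted linear sum" decomposition reported above can be sketched numerically (a toy illustration, not the study's fitted model; the derivative-of-Gaussian curve shape and all amplitudes and widths are assumptions): an attractive motion component dominating near zero plus a broader repulsive orientation component yields the cubic-like pattern as a function of the previous-minus-current stimulus difference.

```python
import math

def dog(delta, amplitude, width):
    # First-derivative-of-Gaussian curve, a shape commonly used to
    # describe serial-dependence bias vs. previous-current difference.
    return amplitude * delta * math.exp(-((delta / width) ** 2))

def combined_bias(delta, w_motion=1.0, w_orientation=1.0):
    # Positive (attractive) motion component, narrow; negative
    # (repulsive) orientation component, broad. Illustrative values.
    positive = dog(delta, amplitude=0.8, width=20.0)
    negative = dog(delta, amplitude=-0.5, width=60.0)
    return w_motion * positive + w_orientation * negative
```

With these illustrative parameters the sum is attractive for small differences and repulsive for large ones, reproducing the qualitative cubic-like shape that the separated components described.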
Experimental Brain Research, Mar 2024
Users of automated vehicles will engage in other activities and take their eyes off the road, making them prone to motion sickness. To resolve this, the current paper validates models predicting sickness in response to motion and visual conditions. We validate published models of vestibular and visual sensory integration that have been used for predicting motion sickness through sensory conflict. We use naturalistic driving data and laboratory motion (and vection) paradigms, such as sinusoidal translation and rotation at different frequencies, Earth-Vertical Axis Rotation, Off-Vertical Axis Rotation, Centrifugation, Somatogravic Illusion, and Pseudo-Coriolis, to evaluate different models for both motion perception and motion sickness. We investigate the effects of visual motion perception in terms of rotational velocity (visual flow) and verticality. According to our findings, the SVC model, a 6DOF model based on the Subjective Vertical Conflict (SVC) theory, with visual rotational velocity input is effective at estimating motion sickness. However, it does not correctly replicate motion perception in paradigms such as roll-tilt perception during centrifuge, pitch perception during somatogravic illusion, and pitch perception during pseudo-Coriolis motions. On the other hand, the Multi-Sensory Observer Model (MSOM) accurately models motion perception in all considered paradigms, but does not effectively capture the frequency sensitivity of motion sickness, and the effects of vision on sickness. For both models (SVC and MSOM), the visual perception of rotational velocity strongly affects sickness and perception. Visual verticality perception does not (yet) contribute to sickness prediction, and contributes to perception prediction only for the somatogravic illusion. 
In conclusion, the SVC model with visual rotation velocity feedback is the current preferred option to design vehicle control algorithms for motion sickness reduction, while the MSOM best predicts perception. A unified model that jointly captures perception and motion sickness remains to be developed.
Topics: Humans; Motion Perception; Illusions; Space Perception; Motion Sickness; Rotation
PubMed: 38253934
DOI: 10.1007/s00221-023-06747-x