Developmental Science, May 2020
Motor experiences and active exploration during early childhood may affect individual differences in a wide range of perceptual and cognitive abilities. In the current study, we suggest that active exploration of objects facilitates the ability to process object forms and magnitudes, which in turn impacts the development of numerosity perception. We tested our hypothesis by conducting a preregistered active exploration intervention with 59 8-month-old infants. The minimal intervention consisted of actively playing with and exploring blocks once a day for 8 weeks. To control for possible training effects on attention, we used book reading as a control condition. Pre- and post-test assessments using eye-tracking showed that block play improved visual form perception, with infants becoming better at detecting a deviant shape. Furthermore, using three control tasks, we showed that the intervention specifically improved infants' ability to process visual forms and that the effect could not be explained by a domain-general improvement in attention or visual perception. We found that the intervention did not improve numerosity perception and suggest that, given the sequential nature of our hypothesis, a longer time frame might be needed to see improvements in this ability. Our findings indicate that giving infants more opportunities for play and exploration has positive effects on their visual form perception, which in turn could help their understanding of geometrical concepts.
Topics: Attention; Child; Comprehension; Female; Form Perception; Humans; Infant; Male; Play and Playthings; Visual Perception
PubMed: 31721368
DOI: 10.1111/desc.12923

eLife, Jan 2022
Can direct stimulation of primate V1 substitute for a visual stimulus and mimic its perceptual effect? To address this question, we developed an optical-genetic toolkit to 'read' neural population responses using widefield calcium imaging, while simultaneously using optogenetics to 'write' neural responses into V1 of behaving macaques. We focused on the phenomenon of visual masking, where detection of a dim target is significantly reduced by a co-localized medium-brightness mask (Cornsweet and Pinsker, 1965; Whittle and Swanston, 1974). Using our toolkit, we tested whether V1 optogenetic stimulation can recapitulate the perceptual masking effect of a visual mask. We find that, similar to a visual mask, low-power optostimulation can significantly reduce visual detection sensitivity, that a sublinear interaction between visual- and optogenetic-evoked V1 responses could account for this perceptual effect, and that these neural and behavioral effects are spatially selective. Our toolkit and results open the door for further exploration of perceptual substitutions by direct stimulation of sensory cortex.
Topics: Animals; Macaca mulatta; Male; Neurons; Optogenetics; Perceptual Masking; Photic Stimulation; Proof of Concept Study; Visual Cortex; Visual Perception
PubMed: 34982033
DOI: 10.7554/eLife.68393

Attention, Perception & Psychophysics, Feb 2024
Visual scenes are too complex for one to immediately perceive all their details. As suggested by Gestalt psychologists, grouping similar scene elements and perceiving their summary statistics provides one shortcut for evaluating scene gist. Perceiving ensemble statistics overcomes processing, attention, and memory limits, facilitating higher-order scene understanding. Ensemble perception spans simple and complex dimensions (e.g., circle size, face emotion), includes various statistics (mean, range), and inherently spans space and/or time, when sets are presented scattered across the visual scene and/or sequentially in rapid series. Furthermore, ensemble perception occurs explicitly, when observers are asked to judge set mean, and also automatically/implicitly, when observers are engaged in an orthogonal task. We now study relationships among these ensemble-perception phenomena, testing explicit and implicit ensemble perception; for sets varying in circle size, line orientation, or disc brightness; and with spatial, temporal, or spatio-temporal presentation. Following ensemble set presentation, observers were asked if a test image, or which of two test images, had been present in the set. Confirming previous results, responses reflected implicit mean perception, depending on test image distance from the mean, and on its being within or outside the ensemble range. Subsequent experiments asked the same observers to explicitly judge whether test images were larger, more clockwise, or brighter than the set mean, or which of two test images was closer to the mean. Comparing implicit and explicit mean perception, we find that explicit ensemble averaging is more precise than implicit mean perception, for each ensemble variable and presentation mode. Implications are discussed regarding possible separate mechanisms for explicit versus implicit ensemble perception.
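As a rough illustration of the quantities this abstract describes (not the authors' analysis code), the sketch below computes a test item's distance from an ensemble mean and checks whether it falls inside the ensemble's range, the two factors the study reports as driving implicit membership judgments. The function name and values are hypothetical.

```python
def mean_and_range_membership(set_values, test_value):
    """For an ensemble of, e.g., circle sizes, return the set mean,
    the test item's distance from that mean, and whether the test
    item falls within the ensemble's range."""
    mean = sum(set_values) / len(set_values)
    distance = abs(test_value - mean)
    in_range = min(set_values) <= test_value <= max(set_values)
    return mean, distance, in_range

# Hypothetical set of circle diameters and a test item
sizes = [10.0, 12.0, 14.0, 16.0, 18.0]
mean, dist, inside = mean_and_range_membership(sizes, 15.0)
```

On this toy set the mean is 14.0, so the test item lies 1.0 away from it and inside the ensemble's range, the condition under which the abstract reports test images are most likely to be judged as set members.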
Topics: Humans; Attention; Emotions; Perception; Visual Perception
PubMed: 37821745
DOI: 10.3758/s13414-023-02784-4

Proceedings of the Royal Society B: Biological Sciences, Jul 2020
Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.
Topics: Humans; Size Perception; Space Perception; Visual Acuity; Visual Fields; Visual Perception
PubMed: 32635869
DOI: 10.1098/rspb.2020.0825

The Journal of Neuroscience, Dec 2023
During binocular rivalry, conflicting images are presented one to each eye and perception alternates stochastically between them. Despite stable percepts between alternations, modeling suggests that neural signals representing the two images change gradually, and that the durations of stable percepts are determined by the time required for these signals to reach a threshold that triggers an alternation. However, direct physiological evidence for such signals has been lacking. Here, we identify a neural signal in the human visual cortex that shows these predicted properties. We measured steady-state visual evoked potentials (SSVEPs) in 84 human participants (62 females, 22 males) who were presented with orthogonal gratings, one to each eye, flickering at different frequencies. Participants indicated their percept while EEG data were collected. The time courses of the SSVEP amplitudes at the two frequencies were then compared across different percept durations, within participants. For all durations, the amplitude of signals corresponding to the suppressed stimulus increased and the amplitude corresponding to the dominant stimulus decreased throughout the percept. Critically, longer percepts were characterized by more gradual increases in the suppressed signal and more gradual decreases of the dominant signal. Changes in signals were similar and rapid at the end of all percepts, presumably reflecting perceptual transitions. These features of the SSVEP time courses are well predicted by a model in which perceptual transitions are produced by the accumulation of noisy signals. Identification of this signal underlying binocular rivalry should allow strong tests of neural models of rivalry, bistable perception, and neural suppression. During binocular rivalry, two conflicting images are presented to the two eyes and perception alternates between them, with switches occurring at seemingly random times.
Rivalry is an important and longstanding model system in neuroscience, used for understanding neural suppression, intrinsic neural dynamics, and even the neural correlates of consciousness. All models of rivalry propose that it depends on gradually changing neural activity that on reaching some threshold triggers the perceptual switches. This manuscript reports the first physiological measurement of neural signals with that set of properties in human participants. The signals, measured with EEG in human observers, closely match the predictions of recent models of rivalry, and should pave the way for much future work.
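The accumulation-to-threshold account described in this abstract can be sketched numerically. The toy simulation below is not the authors' model: a noisy signal drifts toward a fixed threshold, and the first-passage time stands in for a percept's duration. All parameter names and values are illustrative.

```python
import random

def simulate_percept_durations(n_percepts=1000, threshold=1.0,
                               drift=0.01, noise=0.05, seed=0):
    """Toy accumulation-to-threshold model of rivalry: on each time
    step a signal grows by a small drift plus Gaussian noise, and a
    perceptual switch fires when it crosses the threshold. Returns
    the simulated percept durations (in time steps)."""
    rng = random.Random(seed)
    durations = []
    for _ in range(n_percepts):
        level, t = 0.0, 0
        while level < threshold:
            level += drift + rng.gauss(0.0, noise)
            level = max(level, 0.0)  # signal strength cannot go negative
            t += 1
        durations.append(t)
    return durations

durations = simulate_percept_durations()
mean_dur = sum(durations) / len(durations)
```

Because the noise term varies from run to run, the simulated durations are broadly distributed even though drift and threshold are fixed, mirroring the seemingly random switch times the abstract describes.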
Topics: Male; Female; Humans; Visual Perception; Vision, Binocular; Evoked Potentials, Visual; Photic Stimulation; Visual Cortex; Vision Disparity
PubMed: 37907256
DOI: 10.1523/JNEUROSCI.1325-23.2023

Cortex, Oct 2022
Healthy aging is associated with decline in social, emotion, and identity perception, which is frequently attributed to deterioration of structures involved in social inference. Because (corrected) visual acuity remains intact, this decline is generally believed to be unlikely to result from perceptual aberrations. Nevertheless, the present study examines whether more particular perceptual aberrations may be present in healthy aging that could in principle contribute to such difficulties, specifically whether deficits in configural processing impair the perception of faces. Across two signal detection experiments, we required a group of healthy older adults and matched younger adults to detect changes in images of faces that could differ either at the local, featural level or in the configuration of those features. In support of our hypothesis, older adults were particularly impaired in detecting configural changes relative to detecting changes in features. The impairments were found for both upright and inverted faces and were similar in a task with images of inanimate objects (houses). Drift diffusion modelling suggested that this decline was related to reduced evidence accumulation rather than a tendency to make configural judgments based on less evidence. These findings indicate that domain-general problems processing configural information contribute to the difficulties with face processing in healthy aging, and may in principle contribute to a range of higher-level social difficulties, with implications also for other groups exhibiting similar patterns in perception and understanding.
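Signal detection experiments like those described here typically summarize performance as sensitivity (d'), the separation between hit and false-alarm rates in z-units. A minimal sketch of the standard computation, with purely hypothetical trial counts (not data from the paper):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a yes/no change-detection task, using a
    standard log-linear correction so hit and false-alarm rates never
    reach exactly 0 or 1 (which would make the z-transform undefined)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one observer in the two change conditions
featural = d_prime(hits=80, misses=20, false_alarms=15, correct_rejections=85)
configural = d_prime(hits=60, misses=40, false_alarms=15, correct_rejections=85)
```

With these invented counts the featural condition yields a higher d' than the configural condition, the qualitative pattern the abstract reports for older adults.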
Topics: Aged; Emotions; Facial Recognition; Healthy Aging; Humans; Judgment; Pattern Recognition, Visual
PubMed: 36087432
DOI: 10.1016/j.cortex.2022.05.026

Attention, Perception & Psychophysics, Jul 2022
Maintaining object correspondence among multiple moving objects is an essential task of the perceptual system in many everyday life activities. A substantial body of research has confirmed that observers are able to track multiple target objects amongst identical distractors based only on their spatiotemporal information. However, naturalistic tasks typically involve the integration of information from more than one modality, and there is limited research investigating whether auditory and audio-visual cues improve tracking. In two experiments, we asked participants to track either five target objects or three versus five target objects amongst similarly indistinguishable distractor objects for 14 s. During the tracking interval, the target objects bounced occasionally against the boundary of a centralised orange circle. A visual cue, an auditory cue, neither or both coincided with these collisions. Following the motion interval, the participants were asked to indicate all target objects. Across both experiments and both set sizes, our results indicated that visual and auditory cues increased tracking accuracy although visual cues were more effective than auditory cues. Audio-visual cues, however, did not increase tracking performance beyond the level of purely visual cues for both high and low load conditions. We discuss the theoretical implications of our findings for multiple object tracking as well as for the principles of multisensory integration.
Topics: Attention; Auditory Perception; Cues; Humans; Motion; Motion Perception; Photic Stimulation; Visual Perception
PubMed: 35610410
DOI: 10.3758/s13414-022-02492-5

The Journal of Neuroscience, Dec 2022
Social information is some of the most ambiguous content we encounter in our daily lives, yet in experimental contexts, percepts of social interactions-that is, whether an interaction is present and if so, the nature of that interaction-are often dichotomized as correct or incorrect based on experimenter-assigned labels. Here, we investigated the behavioral and neural correlates of subjective (or conscious) social perception using data from the Human Connectome Project in which participants (N = 1049; 486 men, 562 women) viewed animations of geometric shapes during fMRI and indicated whether they perceived a social interaction or random motion. Critically, rather than experimenter-assigned labels, we used observers' own reports of "Social" or "Non-social" to classify percepts and characterize brain activity, including leveraging a particularly ambiguous animation perceived as "Social" by some but "Non-social" by others to control for visual input. Behaviorally, observers were biased toward perceiving information as social (vs non-social); and neurally, observer reports (compared with experimenter labels) explained more variance in activity across much of the brain. Using "Unsure" reports, we identified several regions that responded parametrically to perceived socialness. Neural responses to social versus non-social content diverged early in time and in the cortical hierarchy. Finally, individuals with higher internalizing trait scores showed both a higher response bias toward "Social" and an inverse relationship with activity in default mode and visual association areas while scanning for social information. Findings underscore the subjective nature of social perception and the importance of using observer reports to study percepts of social interactions. Simple animations involving two or more geometric shapes have been used as a gold standard to understand social cognition and impairments therein.
Yet, experimenter-assigned labels of what is social versus non-social are frequently used as a ground truth, despite the fact that percepts of such ambiguous social stimuli are highly subjective. Here, we used behavioral and fMRI data from a large sample of neurotypical individuals to show that participants' responses reveal subtle behavioral biases, help us study neural responses to social content more precisely, and covary with internalizing trait scores. Our findings underscore the subjective nature of social perception and the importance of considering observer reports in studying behavioral and neural dynamics of social perception.
Topics: Male; Humans; Female; Social Interaction; Brain; Consciousness; Magnetic Resonance Imaging; Perception; Social Perception; Visual Perception; Motion Perception
PubMed: 36280263
DOI: 10.1523/JNEUROSCI.0859-22.2022

Scientific Reports, Mar 2022
Sensory differences between autistic and neurotypical populations are well documented and have often been explained by either weak-central-coherence or excitation/inhibition-imbalance cortical theories. We tested these theories with perceptual multistability paradigms, in which dissimilar images presented to each eye generate dynamic cyclopean percepts based on ongoing cortical grouping and suppression processes. We studied perceptual multistability with interocular grouping (IOG), which requires the simultaneous integration and suppression of image fragments from both eyes, and conventional binocular rivalry (CBR), which requires only global suppression of either eye, in 17 autistic adults and 18 neurotypical participants. We used a hidden Markov model as a tool to analyze the multistable dynamics of these processes. Overall, the dynamics of multistable perception were slower (i.e., there were longer durations and fewer transitions among perceptual states) in the autistic group than in the neurotypical group for both IOG and CBR. The weighted Markovian transition distributions revealed key differences between the groups and paradigms. The results indicate overall lower levels of suppression and decreased levels of grouping in autistic than in neurotypical participants, consistent with elements of excitation/inhibition-imbalance and weak-central-coherence theories.
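A first-order Markov transition matrix over reported percepts gives a flavor of the dynamics that a hidden-Markov analysis of rivalry summarizes. The simplified sketch below is not the authors' method; the state labels and example sequence are invented.

```python
from collections import Counter

def transition_matrix(states, labels=("left", "right", "mixed")):
    """Estimate a first-order Markov transition matrix from a sequence
    of reported perceptual states. Each row gives the empirical
    probability of moving from one state to each possible next state."""
    pairs = Counter(zip(states, states[1:]))
    matrix = {}
    for a in labels:
        total = sum(pairs[(a, b)] for b in labels)
        matrix[a] = {b: (pairs[(a, b)] / total if total else 0.0)
                     for b in labels}
    return matrix

# Invented report sequence: eye-dominant percepts plus a mixed state,
# standing in for the grouped/piecemeal states that IOG produces
seq = ["left", "left", "mixed", "right", "right", "right", "mixed", "left"]
tm = transition_matrix(seq)
```

Slower multistable dynamics, as reported for the autistic group, would show up in such a matrix as heavier diagonal entries (a higher probability of staying in the current state) and correspondingly fewer off-diagonal transitions.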
Topics: Adult; Autistic Disorder; Consciousness; Humans; Photic Stimulation; Vision, Binocular; Visual Perception
PubMed: 35288609
DOI: 10.1038/s41598-022-08108-0

Proceedings of the National Academy of Sciences, Aug 2021
Attention alters perception across the visual field. Typically, endogenous (voluntary) and exogenous (involuntary) attention similarly improve performance in many visual tasks, but they have differential effects in some tasks. Extant models of visual attention assume that the effects of these two types of attention are identical and consequently do not explain differences between them. Here, we develop a model of spatial resolution and attention that distinguishes between endogenous and exogenous attention. We focus on texture-based segmentation as a model system because it has revealed a clear dissociation between both attention types. For a texture for which performance peaks at parafoveal locations, endogenous attention improves performance across eccentricity, whereas exogenous attention improves performance where the resolution is low (peripheral locations) but impairs it where the resolution is high (foveal locations) for the scale of the texture. Our model emulates sensory encoding to segment figures from their background and predict behavioral performance. To explain attentional effects, endogenous and exogenous attention require separate operating regimes across visual detail (spatial frequency). Our model reproduces behavioral performance across several experiments and simultaneously resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and foveal impairments by exogenous attention. Overall, we unveil a computational dissociation between each attention type and provide a generalizable framework for predicting their effects on perception across the visual field.
Topics: Animals; Attention; Computer Simulation; Humans; Models, Biological; Primates; Visual Perception
PubMed: 34389680
DOI: 10.1073/pnas.2106436118