Scientific Reports Jul 2021
Developmental prosopagnosia (DP) is a selective neurodevelopmental condition defined by lifelong impairments in face recognition. Despite much research, the extent to which DP is associated with broader visual deficits beyond face processing is unclear. Here we investigate whether DP is accompanied by deficits in colour perception. We tested a large sample of 92 DP individuals and 92 sex/age-matched controls using the well-validated Ishihara and Farnsworth-Munsell 100-Hue tests to assess red-green colour deficiencies and hue discrimination abilities. Group-level analyses show comparable performance between DP and control individuals across both tests, and single-case analyses indicate that the prevalence of colour deficits is low and comparable to that in the general population. Our study clarifies that DP is not linked to colour perception deficits and constrains theories of DP that seek to account for a larger range of visual deficits beyond face recognition.
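The single-case analyses mentioned above typically compare one individual's score against the control distribution. A standard tool for this in the case-study literature is Crawford and Howell's modified t-test; it is an assumption that this study used exactly this test, and the example numbers below are invented.

```python
import math

# Hedged sketch of a single-case comparison of the kind mentioned above:
# Crawford & Howell's modified t-test asks whether one individual's score
# is abnormally low relative to a control sample.

def crawford_howell_t(case_score, ctrl_mean, ctrl_sd, n_controls):
    """t statistic (df = n_controls - 1) for a single case vs. controls."""
    return (case_score - ctrl_mean) / (
        ctrl_sd * math.sqrt(1.0 + 1.0 / n_controls))

# Hypothetical example: one case scoring 60 on a colour test, against 92
# controls with mean 80 and SD 10 (n chosen to echo the study's sample)
t = crawford_howell_t(60.0, 80.0, 10.0, 92)
```

The resulting t is referred to a Student's t distribution with n - 1 degrees of freedom to estimate how abnormal the case's score is.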
Topics: Adult; Color Perception; Discrimination, Psychological; Electroencephalography; Facial Recognition; Female; Humans; Male; Middle Aged; Pattern Recognition, Visual; Photic Stimulation; Prosopagnosia; Visual Perception; Young Adult
PubMed: 34215772
DOI: 10.1038/s41598-021-92840-6 -
Neuroscience Bulletin Jan 2023
Review
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve estimation of self-motion direction, often in a statistically optimal (Bayesian) way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal components of these signals may mediate different functions, a possibility that requires future studies.
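The statistically optimal integration referred to above is the standard maximum-likelihood cue-combination rule: each cue is weighted by its reliability (inverse variance), and the combined estimate is more precise than either cue alone. A minimal sketch, with illustrative numbers not drawn from any cited study:

```python
import math

# Reliability-weighted (maximum-likelihood) combination of a visual and
# a vestibular heading estimate, the normative model in the review.

def integrate_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Combine two heading estimates; weights are inverse variances."""
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    mu_comb = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    # Combined variance is smaller than either single-cue variance
    sigma_comb = math.sqrt((sigma_vis**2 * sigma_vest**2)
                           / (sigma_vis**2 + sigma_vest**2))
    return mu_comb, sigma_comb

# Example: optic flow says heading 2.0 deg (sd 2.0), the vestibular cue
# says 6.0 deg (sd 4.0); the combined estimate lies nearer the more
# reliable visual cue.
mu, sigma = integrate_cues(2.0, 2.0, 6.0, 4.0)  # mu = 2.8, sigma ~ 1.79
```

Behavioural tests of Bayesian optimality compare the measured bimodal discrimination threshold against this predicted sigma.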
Topics: Animals; Humans; Motion Perception; Bayes Theorem; Optic Flow; Cues; Vestibule, Labyrinth; Photic Stimulation; Visual Perception
PubMed: 35821337
DOI: 10.1007/s12264-022-00916-8 -
Cognition & Emotion Feb 2019
Review
One of the biggest challenges in the study of emotion-cognition interaction is addressing the question of whether and how emotions influence processes of perception as distinct from other, higher-level cognitive processes. Most theories of emotion agree that an emotion episode begins with a sensory experience - such as a visual percept - that elicits a cascade of affective, cognitive, physiological, and/or behavioural responses (the ordering and inclusion of those latter components being forever debated). However, for decades, a subset of philosophers and scientists have suggested that the presumed perception → emotion relationship is in fact bidirectional, with emotion also altering the perceptual process. In the present review we reflect on the history and empirical support (or, some might argue, lack thereof) for the notion that emotion influences visual perception. We examine the ways in which researchers have attempted to test the question, and the reasons this pursuit is so difficult. As is the case with the ongoing debate about the cognitive penetrability of perception, we conclude that nothing is conclusive in the debate about the emotional penetrability of perception. We nonetheless don rose-coloured glasses as we look forward to the future of this research topic.
Topics: Cognition; Emotions; Female; Humans; Visual Perception
PubMed: 30636535
DOI: 10.1080/02699931.2018.1561424 -
Neural Networks : the Official Journal... Jul 2023
The contrast sensitivity function (CSF) is a fundamental signature of the visual system that has been measured extensively in several species. It is defined by the visibility threshold for sinusoidal gratings at all spatial frequencies. Here, we investigated the CSF in deep neural networks using the same 2AFC contrast detection paradigm as in human psychophysics. We examined 240 networks pretrained on several tasks. To obtain their corresponding CSFs, we trained a linear classifier on top of the features extracted from the frozen pretrained networks. The linear classifier is trained exclusively on a contrast discrimination task with natural images: it must decide which of two input images has the higher contrast. The network's CSF is then measured by detecting which of two images contains a sinusoidal grating of varying orientation and spatial frequency. Our results demonstrate that characteristics of the human CSF are manifested in deep networks both in the luminance channel (a band-limited inverted-U-shaped function) and in the chromatic channels (two low-pass functions of similar properties). The exact shape of the networks' CSF appears to be task-dependent. The human CSF is better captured by networks trained on low-level visual tasks such as image denoising or autoencoding. However, a human-like CSF also emerges in mid- and high-level tasks such as edge detection and object recognition. Our analysis shows that a human-like CSF appears in all architectures but at different depths of processing: some at early layers, others at intermediate and final layers.
Overall, these results suggest that (i) deep networks model the human CSF faithfully, making them suitable candidates for applications in image quality and compression; (ii) efficient, purposeful processing of the natural world drives the CSF shape; and (iii) visual representations from all levels of the visual hierarchy contribute to the tuning curve of the CSF, in turn implying that a function we intuitively think of as modulated by low-level visual features may arise as a consequence of pooling from a larger set of neurons at all levels of the visual system.
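The grating-detection measurement described above can be sketched with a trivial stand-in "observer" (off-DC Fourier energy) in place of a pretrained network plus linear classifier. All names here are hypothetical; a noiseless observer detects any contrast, so the threshold collapses to the smallest contrast tested, whereas a network's internal noise and feature pooling produce the band-limited CSF shape reported in the paper.

```python
import numpy as np

def make_grating(size, cycles_per_image, contrast, orientation=0.0):
    """Sinusoidal luminance grating in [0, 1] with mean 0.5."""
    y, x = np.mgrid[0:size, 0:size] / size
    u = x * np.cos(orientation) + y * np.sin(orientation)
    return 0.5 + 0.5 * contrast * np.sin(2.0 * np.pi * cycles_per_image * u)

def observer_energy(img):
    """Toy detector: total spectral energy once the mean is removed."""
    return np.abs(np.fft.fft2(img - img.mean())).sum()

def detection_threshold(size, freq, contrasts):
    """Lowest tested contrast whose grating the observer can detect."""
    blank = observer_energy(np.full((size, size), 0.5))
    for c in sorted(contrasts):
        if observer_energy(make_grating(size, freq, c)) > blank + 1e-6:
            return c
    return None

# Sensitivity = 1 / threshold; sweeping freq traces out a CSF-like curve.
thr = detection_threshold(64, 4, [0.01, 0.02, 0.05, 0.1])
```

For a real network, `observer_energy` would be replaced by the frozen network's features plus the contrast-trained linear read-out, and the threshold estimated psychophysically.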
Topics: Humans; Contrast Sensitivity; Visual Perception; Neurons; Neural Networks, Computer; Psychophysics; Pattern Recognition, Visual
PubMed: 37156217
DOI: 10.1016/j.neunet.2023.04.032 -
Current Opinion in Psychology Oct 2019
Review
It is well established that attention improves performance on many visual tasks. However, for more than 100 years, psychologists, philosophers, and neurophysiologists have debated its phenomenology: whether attention actually changes one's subjective experience. Here, we show that it is possible to objectively and quantitatively investigate the effects of attention on subjective experience. First, we review evidence showing that attention alters the appearance of many static and dynamic basic visual dimensions, which mediate changes in the appearance of higher-level perceptual aspects. Then, we summarize current views on how attention alters appearance. These findings have implications for our understanding of perception and attention, illustrating that attention affects not only how we perform in visual tasks but also how we experience the visual world.
Topics: Attention; Brain; Contrast Sensitivity; Humans; Photic Stimulation; Visual Perception
PubMed: 30572280
DOI: 10.1016/j.copsyc.2018.10.010 -
Journal of Vestibular Research :... 2016
BACKGROUND
Perception of upright is often assessed by aligning a luminous line to the subjective visual vertical (SVV).
OBJECTIVE
Here we investigated the effects of visual line rotation and viewing eye on SVV responses and whether there was any change with head tilt.
METHODS
SVV was measured using a forced-choice paradigm and by combining the following conditions in 22 healthy subjects: head position (20° left tilt, upright, and 20° right tilt), viewing eye (left eye, both eyes, and right eye), and direction of visual line rotation (clockwise [CW] and counterclockwise [CCW]).
RESULTS
The accuracy and precision of SVV responses did not differ between the viewing-eye conditions in any head position (P > 0.05, Kruskal-Wallis test). The accuracy of SVV responses was significantly different between the CW and CCW line rotations (P ≈ 0.0001; Kruskal-Wallis test), and SVV was tilted in the same direction as the line rotation. This effect of line rotation was, however, not consistent across head tilts and was present only in the upright and right-tilt head positions. The accuracy of SVV responses showed higher variability among subjects in the left head-tilt position, with no significant difference between the CW and CCW line rotations (P > 0.05; post-hoc Dunn's test).
CONCLUSIONS
In spite of the challenges to estimating upright with head tilt, normal subjects did remarkably well irrespective of the viewing eye. The physiological significance of the asymmetry in the effect of line rotation between the head-tilt positions is unclear, but it may suggest a lateralizing effect of head tilt on the visual perception of upright.
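In a forced-choice SVV paradigm like the one above, accuracy and precision are typically read off a psychometric function: the bias (point of subjective equality) and slope of a cumulative Gaussian fitted to the proportion of "tilted clockwise" responses. The sketch below simulates such data and recovers both parameters; all values are illustrative, not taken from the study.

```python
import math
import numpy as np

def cum_gauss(x, mu, sigma):
    """Cumulative normal: probability of a 'clockwise' response."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = np.random.default_rng(0)
angles = np.linspace(-6.0, 6.0, 13)   # line tilt in degrees, CCW negative
true_mu, true_sigma = 1.0, 1.5        # simulated bias and sensory noise
n_trials = 200                        # simulated trials per angle

# Simulated counts of "clockwise" responses at each tested angle
k = np.array([rng.binomial(n_trials, cum_gauss(a, true_mu, true_sigma))
              for a in angles])

# Maximum-likelihood fit by brute-force grid search (numpy only)
best_ll, pse, slope_sd = -np.inf, None, None
for mu in np.linspace(-3.0, 3.0, 121):
    for sg in np.linspace(0.5, 4.0, 71):
        p = np.clip([cum_gauss(a, mu, sg) for a in angles], 1e-6, 1 - 1e-6)
        ll = np.sum(k * np.log(p) + (n_trials - k) * np.log(1.0 - p))
        if ll > best_ll:
            best_ll, pse, slope_sd = ll, mu, sg

# pse estimates the bias (deg, the SVV error); slope_sd the precision
```

A grid search is used here purely to stay dependency-free; in practice one would use a numerical optimizer or a dedicated psychometric-fitting package.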
Topics: Adult; Female; Head Movements; Humans; Male; Middle Aged; Ocular Physiological Phenomena; Orientation; Reproducibility of Results; Rotation; Vertical Dimension; Vision, Binocular; Vision, Monocular; Visual Perception; Young Adult
PubMed: 26890421
DOI: 10.3233/VES-160565 -
Neuroscience and Biobehavioral Reviews Sep 2014
Review
Transcranial magnetic stimulation (TMS) continues to deliver on its promise as a research tool. In this review article we focus on the application of TMS to early visual cortex (V1, V2, V3) in studies of visual perception and visual awareness. Depending on the asynchrony between visual stimulus onset and TMS pulse (SOA), TMS can suppress visual perception, allowing one to track the time course of functional relevance (chronometry) of early visual cortex for vision. This procedure has revealed multiple masking effects ('dips'), some consistently (∼+100ms SOA) but others less so (∼-50ms, ∼-20ms, ∼+30ms, ∼+200ms SOA). We review the state of TMS masking research, focusing on the evidence for these multiple dips, the relevance of several experimental parameters to the obtained 'masking curve', and the use of multiple measures of visual processing (subjective measures of awareness, objective discrimination tasks, priming effects). Lastly, we consider possible future directions for this field. We conclude that while TMS masking has yielded many fundamental insights into the chronometry of visual perception already, much remains unknown. Not only are there several temporal windows when TMS pulses can induce visual suppression, even the well-established 'classical' masking effect (∼+100ms) may reflect more than one functional visual process.
Topics: Animals; Humans; Occipital Lobe; Transcranial Magnetic Stimulation; Visual Perception
PubMed: 25010557
DOI: 10.1016/j.neubiorev.2014.06.017 -
ELife Jul 2019
The human visual system is tasked with recovering the different physical sources of optical structure that generate our retinal images. Separate research has focused on understanding how the visual system estimates (a) environmental sources of image structure and (b) blur induced by the eye's limited focal range, but little is known about how the visual system distinguishes environmental sources from optical defocus. Here, we present evidence that this is a fundamental perceptual problem and provide insights into how and when the visual system succeeds and fails in solving it. We show that fully focused surface shading can be misperceived as defocused and that optical blur can be misattributed to the material properties and shape of surfaces. We further reveal how these misperceptions depend on the relationship between shading gradients and sharp contours, and conclude that computations of blur are inherently linked to computations of surface shape, material, and illumination.
Topics: Form Perception; Humans; Optical Phenomena; Photic Stimulation; Visual Perception
PubMed: 31298655
DOI: 10.7554/eLife.48214 -
Trends in Neurosciences Apr 2015
Review
Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness.
Topics: Animals; Awareness; Eye Movements; Humans; Motion Perception; Vision, Ocular; Visual Pathways; Visual Perception
PubMed: 25765322
DOI: 10.1016/j.tins.2015.02.002 -
Perception Sep 2017
Review
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Topics: Humans; Imagination; Touch Perception; Visual Perception
PubMed: 28399717
DOI: 10.1177/0301006617699225