Journal of Vision, May 2024
The visual system often maintains a relatively stable perception even in a noisy visual environment. This crucial function is reflected in a visual perception phenomenon, serial dependence, in which recent stimulus history systematically biases current visual decisions. Although serial dependence effects have been revealed in numerous studies, few studies have examined whether serial dependence requires visual awareness. Using the continuous flash suppression (CFS) technique to render grating stimuli invisible, we investigated whether serial dependence effects could emerge at the unconscious level. In an orientation adjustment task, subjects viewed a randomly oriented grating and reported their orientation perception via an adjustment response. Subjects performed a series of three types of trial pairs. The first two types, in which subjects were instructed to make a response or no response to the first trial of the pair, respectively, were used to measure serial dependence at the conscious level; the third type, in which the grating stimulus in the first trial of the pair was masked by a CFS stimulus, was used to measure serial dependence at the unconscious level. One-back serial dependence effects for the second trial of the pairs were evaluated. We found significant serial dependence effects at the conscious level, whether in the absence (Experiment 1) or presence (Experiment 2) of CFS stimuli, but failed to find the effects at the unconscious level, corroborating the view that serial dependence requires visual awareness.
Topics: Humans; Awareness; Photic Stimulation; Male; Visual Perception; Young Adult; Female; Adult; Perceptual Masking; Orientation
PubMed: 38787568
DOI: 10.1167/jov.24.5.9
The Journal of the Acoustical Society..., May 2024
Medial olivocochlear (MOC) efferents modulate outer hair cell motility through specialized nicotinic acetylcholine receptors to support encoding of signals in noise. Transgenic mice lacking the alpha9 subunits of these receptors (α9KOs) have normal hearing in quiet and noise, but lack classic cochlear suppression effects and show abnormal temporal, spectral, and spatial processing. Mice deficient for both the alpha9 and alpha10 receptor subunits (α9α10KOs) may exhibit more severe MOC-related phenotypes. Like α9KOs, α9α10KOs have normal auditory brainstem response (ABR) thresholds and weak MOC reflexes. Here, we further characterized auditory function in α9α10KO mice. Wild-type (WT) and α9α10KO mice had similar ABR thresholds and acoustic startle response amplitudes in quiet and noise, and similar frequency and intensity difference sensitivity. α9α10KO mice had larger ABR Wave I amplitudes than WTs in quiet and noise. Other ABR metrics of hearing-in-noise function yielded conflicting findings regarding α9α10KO susceptibility to masking effects. α9α10KO mice also had larger startle amplitudes in tone backgrounds than WTs. Overall, α9α10KO mice had grossly normal auditory function in quiet and noise, although their larger ABR amplitudes and hyperreactive startles suggest some auditory processing abnormalities. These findings contribute to the growing literature showing mixed effects of MOC dysfunction on hearing.
Topics: Animals; Mice, Knockout; Evoked Potentials, Auditory, Brain Stem; Noise; Auditory Threshold; Receptors, Nicotinic; Acoustic Stimulation; Reflex, Startle; Perceptual Masking; Behavior, Animal; Mice; Mice, Inbred C57BL; Cochlea; Male; Phenotype; Olivary Nucleus; Auditory Pathways; Female; Auditory Perception; Hearing
PubMed: 38738939
DOI: 10.1121/10.0025985
Cognition, Jul 2024
Human observers often exhibit remarkable consistency in remembering specific visual details, such as certain face images. This phenomenon is commonly attributed to visual memorability, a collection of stimulus attributes that enhance the long-term retention of visual information. However, the exact contributions of visual memorability to visual memory formation remain elusive, as these effects could emerge anywhere from early perceptual encoding to post-perceptual memory consolidation processes. To clarify this, we tested three key predictions from the hypothesis that visual memorability facilitates early perceptual encoding that supports the formation of visual short-term memory (VSTM) and the retention of visual long-term memory (VLTM). First, we examined whether memorability benefits in VSTM encoding manifest early, even within the constraints of a brief stimulus presentation (100-200 ms; Experiment 1). We achieved this by manipulating stimulus presentation duration in a VSTM change detection task using high- or low-memorability face images while ensuring they were equally familiar to the participants. Second, we assessed whether this early memorability benefit increases the likelihood of VSTM retention, even with post-stimulus masking designed to interrupt post-perceptual VSTM consolidation processes (Experiment 2). Last, we investigated the durability of memorability benefits by manipulating memory retention intervals from seconds to 24 h (Experiment 3). Across experiments, our data suggest that visual memorability has an early impact on VSTM formation, persisting across variable retention intervals and predicting subsequent VLTM overnight. Combined, these findings highlight that visual memorability enhances visual memory within 100-200 ms following stimulus onset, resulting in robust memory traces resistant to post-perceptual interruption and long-term forgetting.
Topics: Humans; Young Adult; Adult; Male; Female; Memory, Long-Term; Memory, Short-Term; Visual Perception; Facial Recognition; Memory Consolidation; Adolescent
PubMed: 38733867
DOI: 10.1016/j.cognition.2024.105810
Trends in Hearing, 2024 (Randomized Controlled Trial)
Negativity bias is a cognitive bias that results in negative events being perceptually more salient than positive ones. For hearing care, this means that hearing aid benefits can potentially be overshadowed by adverse experiences. Research has shown that sustaining focus on positive experiences has the potential to mitigate negativity bias. The purpose of the current study was to investigate whether a positive focus (PF) intervention can improve speech-in-noise abilities for experienced hearing aid users. Thirty participants were randomly allocated to a control or PF group (N = 2 × 15). Prior to hearing aid fitting, all participants filled out the short form of the Speech, Spatial and Qualities of Hearing scale (SSQ12) based on their own hearing aids. At the first visit, they were fitted with study hearing aids, and speech-in-noise testing was performed. Both groups then wore the study hearing aids for two weeks and sent daily text messages reporting hours of hearing aid use to an experimenter. In addition, the PF group was instructed to focus on positive listening experiences and to also report them in the daily text messages. After the 2-week trial, all participants filled out the SSQ12 questionnaire based on the study hearing aids and completed the speech-in-noise testing again. Speech-in-noise performance and SSQ12 Qualities score were improved for the PF group but not for the control group. This finding indicates that the PF intervention can improve subjective and objective hearing aid benefits.
Topics: Humans; Hearing Aids; Male; Female; Speech Intelligibility; Speech Perception; Aged; Noise; Middle Aged; Correction of Hearing Impairment; Persons With Hearing Impairments; Perceptual Masking; Hearing Loss; Audiometry, Speech; Surveys and Questionnaires; Aged, 80 and over; Time Factors; Acoustic Stimulation; Hearing; Treatment Outcome
PubMed: 38656770
DOI: 10.1177/23312165241246616
Journal of Vision, Apr 2024
Saccadic choice tasks use eye movements as a response method, typically in a task where observers are asked to saccade as quickly as possible to an image of a prespecified target category. Using this approach, face-selective saccades have been observed within 100 ms poststimulus. When taking into account oculomotor processing, this suggests that faces can be detected in as little as 70 to 80 ms. It has therefore been suggested that face detection must occur during the initial feedforward sweep, since this latency leaves little time for feedback processing. In the current experiment, we tested this hypothesis using backward masking, a technique shown to primarily disrupt feedback processing while leaving feedforward activation relatively intact. Based on minimum saccadic reaction time (SRT), we found that face detection benefited from ultra-fast, accurate saccades within 110 to 160 ms and that these eye movements are obtainable even under extreme masking conditions that limit perceptual awareness. However, masking did significantly increase the median SRT for faces. In the manual responses, we found remarkable detection accuracy for faces and houses, even when participants indicated having no visual experience of the test images. These results provide evidence for the view that the saccadic bias to faces is initiated by coarse information used to categorize faces in the feedforward sweep but that, in most cases, additional processing is required to quickly reach the threshold for saccade initiation.
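For context, minimum saccadic reaction time in this paradigm is typically estimated as the earliest latency bin beginning a run of consecutive bins in which correct saccades reliably outnumber errors. A sketch of that style of estimator on synthetic latencies (the bin width, run length, significance level, and latency distributions are illustrative assumptions, not this study's values):

```python
import numpy as np
from scipy.stats import binomtest

def min_srt(correct_lat, error_lat, bin_ms=10, alpha=0.05, run=5):
    """Earliest bin (ms) starting `run` consecutive bins in which correct
    saccades significantly outnumber errors (one-sided binomial test)."""
    edges = np.arange(0, 500 + bin_ms, bin_ms)
    c, _ = np.histogram(correct_lat, edges)
    e, _ = np.histogram(error_lat, edges)
    sig = [n > 0 and binomtest(int(k), int(n), 0.5,
                               alternative="greater").pvalue < alpha
           for k, n in zip(c, c + e)]
    for i in range(len(sig) - run + 1):
        if all(sig[i:i + run]):
            return int(edges[i])
    return None  # no reliable minimum SRT found

# Synthetic latencies (ms): fast, frequent correct saccades; slower, rarer errors
rng = np.random.default_rng(0)
srt = min_srt(rng.normal(150, 30, 300), rng.normal(220, 50, 100))
```

The run requirement guards against a single bin reaching significance by chance; with sharper correct-latency distributions the estimate moves earlier, which is how ultra-fast face-selective saccades are detected.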
Topics: Humans; Saccades; Eye Movements; Cognition; Reaction Time
PubMed: 38630459
DOI: 10.1167/jov.24.4.16
BioRxiv: the Preprint Server For..., Apr 2024
Our perceptual system bins elements of the speech signal into categories to make speech perception manageable. Here, we aimed to test whether hearing speech in categories (as opposed to a continuous/gradient fashion) affords yet another benefit to speech recognition: parsing noisy speech at the "cocktail party." We measured speech recognition in a simulated 3D cocktail party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1-4 talkers) and via forward vs. time-reversed maskers, promoting more and less informational masking (IM), respectively. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more/less categorical hearing and thus test putative links between categorization and real-world speech-in-noise skills. We first show that listeners can only monitor up to ~3 talkers despite up to 5 in the soundscape and streaming is not related to extended high-frequency hearing thresholds (though QuickSIN scores are). We then confirm speech streaming accuracy and speed decline with additional competing talkers and amidst forward compared to reverse maskers with added IM. Dividing listeners into "discrete" vs. "continuous" categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show the degree of IM experienced at the cocktail party is predicted by their degree of categoricity in phoneme labeling; more discrete listeners are less susceptible to IM than their gradient responding peers. Our results establish a link between speech categorization skills and cocktail party processing, with a categorical (rather than gradient) listening strategy benefiting degraded speech perception. 
These findings imply that figure-ground deficits common in many disorders might arise through a surprisingly simple mechanism: a failure to properly bin sounds into categories.
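The paper's exact criterion for splitting "discrete" vs. "continuous" categorizers is not given in this abstract; one simple proxy consistent with the description (binary vs. graded VAS judgments) is the fraction of visual-analog-scale ratings landing near the scale endpoints. An illustrative sketch, with the edge cutoff as an assumption:

```python
import numpy as np

def categoricity_index(vas_ratings, edge=0.15):
    """Fraction of 0-1 VAS ratings within `edge` of either endpoint.
    Near 1 -> binary, 'discrete' labeler; lower -> gradient labeler."""
    r = np.asarray(vas_ratings, dtype=float)
    return float(np.mean((r <= edge) | (r >= 1.0 - edge)))

# A discrete listener piles responses at the endpoints; a gradient listener
# spreads them across the scale
discrete = [0.02, 0.05, 0.96, 0.99, 0.03, 0.98, 0.01, 0.97]
gradient = [0.30, 0.45, 0.55, 0.62, 0.48, 0.70, 0.38, 0.52]
```

Under the study's conclusion, listeners scoring high on a measure like this should show less informational masking in the multi-talker task than low scorers.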
PubMed: 38617284
DOI: 10.1101/2024.04.03.587795
eLife, Apr 2024
Visual detection is a fundamental natural task. Detection becomes more challenging as the similarity between the target and the background in which it is embedded increases, a phenomenon termed 'similarity masking'. To test the hypothesis that V1 contributes to similarity masking, we used voltage-sensitive dye imaging (VSDI) to measure V1 population responses while macaque monkeys performed a detection task under varying levels of target-background similarity. Paradoxically, we find that during an initial transient phase, V1 responses to the target are enhanced, rather than suppressed, by target-background similarity. This effect reverses in the second phase of the response, so that in this phase V1 signals are positively correlated with the behavioral effect of similarity. Finally, we show that a simple model with delayed divisive normalization can qualitatively account for our findings. Overall, our results support the hypothesis that a nonlinear gain control mechanism in V1 contributes to perceptual similarity masking.
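The delayed-divisive-normalization account can be made concrete with a toy two-phase model: a similar background adds somewhat to the target-tuned drive immediately, but feeds fully into a normalization pool that arrives with a delay, so similarity boosts the transient response and suppresses the sustained one. The numbers below are illustrative assumptions, not the authors' fitted parameters:

```python
import numpy as np

def v1_response(t, background_in_drive, background_in_pool,
                onset=10, delay=20, sigma=0.2):
    """Toy delayed divisive normalization: response = drive / (sigma + pool),
    where the normalization pool lags stimulus onset by `delay` steps.
    The pool always includes the target's own drive (1.0)."""
    drive = (1.0 + background_in_drive) * (t >= onset)
    pool = (1.0 + background_in_pool) * (t >= onset + delay)
    return drive / (sigma + pool)

t = np.arange(100)
# Similar background: partially drives the target channel, fully drives the pool
r_similar = v1_response(t, background_in_drive=0.3, background_in_pool=1.0)
# Dissimilar background: contributes to neither (target alone)
r_dissimilar = v1_response(t, background_in_drive=0.0, background_in_pool=0.0)

early = slice(10, 30)   # transient phase, before delayed normalization arrives
late = slice(30, 100)   # sustained phase, normalization active
```

In the early window the similar-background response is larger (the paradoxical enhancement); once the delayed pool engages, the ordering reverses, matching the sign flip reported in the abstract.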
Topics: Animals; Primates; Macaca; Perceptual Masking; Voltage-Sensitive Dye Imaging
PubMed: 38592269
DOI: 10.7554/eLife.89570
BioRxiv: the Preprint Server For..., Mar 2024
Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution-a hallmark of both electric/CI (from current spread) and acoustic (from broadened tuning) hearing with sensorineural hearing loss-degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, predictions from a physiologically plausible model of temporal-coherence-based segregation suggest that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
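The core claim that broadened tuning raises cross-channel correlation is easy to demonstrate: if each peripheral channel picks up a leakage-weighted mixture of two independent source envelopes, the between-channel correlation grows with the leakage (a stand-in for current spread or broadened cochlear filters). A minimal sketch under that simplifying assumption, not the paper's physiological model:

```python
import numpy as np

def channel_correlation(leak, n=20000, seed=0):
    """Correlation between two channels, each centred on one of two
    independent source envelopes; `leak` is the fraction of the other
    source admitted by the (broadened) filter."""
    rng = np.random.default_rng(seed)
    e1, e2 = rng.random(n), rng.random(n)   # independent source envelopes
    ch1 = e1 + leak * e2                    # channel tuned to source 1
    ch2 = e2 + leak * e1                    # channel tuned to source 2
    return float(np.corrcoef(ch1, ch2)[0, 1])

narrow = channel_correlation(leak=0.1)  # sharp tuning: channels near-independent
broad = channel_correlation(leak=0.8)   # current-spread-like leakage
```

Analytically the correlation is 2λ/(1+λ²) for leakage λ, so it rises steeply with filter broadening; a temporal-coherence grouping mechanism then loses the contrast it needs to bind channels to the correct source.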
PubMed: 38586037
DOI: 10.1101/2024.03.11.584489
iScience, Apr 2024
The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.
PubMed: 38558934
DOI: 10.1016/j.isci.2024.109295
Journal of Vision, Apr 2024
Perceptual confidence is thought to arise from metacognitive processes that evaluate the underlying perceptual decision evidence. We investigated whether metacognitive access to perceptual evidence is constrained by the hierarchical organization of visual cortex, where high-level representations tend to be more readily available for explicit scrutiny. We found that the ability of human observers to evaluate their confidence did depend on whether they performed a high-level or low-level task on the same stimuli, but was also affected by manipulations that occurred long after the perceptual decision. Confidence in low-level perceptual decisions degraded with more time between the decision and the response cue, especially when backward masking was present. Confidence in high-level tasks was immune to backward masking and benefitted from additional time. These results can be explained by a model assuming confidence heavily relies on postdecisional internal representations of visual stimuli that degrade over time, where high-level representations are more persistent.
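The proposed model, in which confidence is read out from post-decisional representations that decay over time, can be caricatured in a few lines: low-level traces decay quickly and masking accelerates that decay, while high-level traces persist and are mask-immune. The functional form, time constants, and masking penalty below are illustrative assumptions that capture the persistence asymmetry, though not the observed benefit of extra time for high-level confidence:

```python
import numpy as np

def trace_fidelity(t_ms, tau_ms, mask_rate=0.0):
    """Exponential decay of a post-decisional representation; backward
    masking adds an extra decay rate (illustrative assumption)."""
    return np.exp(-t_ms * (1.0 / tau_ms + mask_rate))

t = 500.0  # ms between the perceptual decision and the confidence cue
low_unmasked = trace_fidelity(t, tau_ms=400.0)
low_masked = trace_fidelity(t, tau_ms=400.0, mask_rate=1 / 200.0)  # mask hurts
high_level = trace_fidelity(t, tau_ms=4000.0)  # persistent, mask-immune
```

On this sketch, confidence resolution tracks remaining fidelity at cue time: low-level confidence degrades with delay and collapses under masking, while high-level confidence stays largely intact, in line with the pattern reported here.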
Topics: Humans; Metacognition; Mental Processes; Decision Making
PubMed: 38558159
DOI: 10.1167/jov.24.4.2