Attention, Perception & Psychophysics, Apr 2024
There is an increasing body of evidence suggesting that low-level perceptual processes are involved in crossmodal correspondences. In this study, we investigate the involvement of the superior colliculi in three basic crossmodal correspondences: elevation/pitch, lightness/pitch, and size/pitch. Using a psychophysical design, we modulate visual input to the superior colliculus to test whether it is required for behavioural crossmodal congruency effects to manifest in an unspeeded multisensory discrimination task. In the elevation/pitch task, superior colliculus involvement is required for a behavioural congruency effect to manifest. In the lightness/pitch and size/pitch tasks, we observed a behavioural congruency effect regardless of superior colliculus involvement. These results suggest that the elevation/pitch correspondence may be processed differently from other low-level crossmodal correspondences. The implications of a distributed model of crossmodal correspondence processing in the brain are discussed.
Topics: Humans; Superior Colliculi; Male; Female; Adult; Young Adult; Pattern Recognition, Visual; Size Perception; Attention; Pitch Discrimination; Association; Psychoacoustics; Orientation
PubMed: 38418807
DOI: 10.3758/s13414-024-02866-x
BioRxiv: the Preprint Server For..., Feb 2024
Recognizing speech in noise, such as in a busy street or restaurant, is an essential listening task whose difficulty varies across acoustic environments and noise levels. Yet current cognitive models are unable to account for changing real-world hearing sensitivity. Here, using natural and perturbed background sounds, we demonstrate that the spectrum and modulation statistics of environmental backgrounds drastically impact human word recognition accuracy, and they do so independently of the noise level. These sound statistics can facilitate or hinder recognition: at the same noise level, accuracy can range from 0% to 100%, depending on the background. To explain this perceptual variability, we optimized a biologically grounded hierarchical model consisting of frequency-tuned cochlear filters and subsequent mid-level modulation-tuned filters that account for central auditory tuning. Low-dimensional summary statistics from the mid-level model accurately predict single-trial perceptual judgments, accounting for more than 90% of the perceptual variance across backgrounds and noise levels and substantially outperforming a cochlear model. Furthermore, perceptual transfer functions in the mid-level auditory space identify multi-dimensional natural sound features that impact recognition. Thus, speech recognition in natural backgrounds involves interference from multiple summary statistics that are well described by an interpretable, low-dimensional auditory model. Since this framework relates salient natural sound cues to single-trial perceptual judgments, it may improve outcomes for auditory prosthetics and clinical measurements of real-world hearing sensitivity.
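As an illustration of the kind of two-stage model this abstract describes, the sketch below computes crude "cochlear" band envelopes from a spectrogram and pools their temporal-modulation energy into a low-dimensional summary vector. All filter shapes, band edges, and modulation-rate bins here are hypothetical placeholders, not the authors' fitted model:

```python
import numpy as np

def cochlear_spectrogram(x, sr, n_bands=24, frame=1024, hop=512):
    """Crude cochlear stage: STFT magnitudes pooled into log-spaced bands
    (a stand-in for gammatone filtering; illustrative only)."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    spec = np.abs(np.array([np.fft.rfft(win * x[i * hop:i * hop + frame])
                            for i in range(n_frames)]))       # (time, freq)
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    edges = np.logspace(np.log10(100), np.log10(sr / 2), n_bands + 1)
    bands = np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(bands + 1e-9)                               # (time, band)

def modulation_summary(band_env, frame_rate, mod_edges=(0.5, 4.0, 16.0)):
    """Mid-level stage: temporal-modulation energy per cochlear band,
    pooled into coarse modulation-rate bins -> low-dim summary statistics."""
    env = band_env - band_env.mean(axis=0)                    # remove DC per band
    mod = np.abs(np.fft.rfft(env, axis=0))                    # modulation spectrum
    rates = np.fft.rfftfreq(env.shape[0], 1 / frame_rate)
    return np.array([mod[(rates >= lo) & (rates < hi)].mean()
                     for lo, hi in zip(mod_edges[:-1], mod_edges[1:])])
```

In the study's framework, summary vectors of this kind, computed on each background sound, would then be regressed against single-trial word-recognition judgments.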
PubMed: 38405870
DOI: 10.1101/2024.02.13.579526
Seminars in Hearing, Feb 2024 (Review)
Tinnitus acoustic therapy is defined as any use of sound where the intent is to alter the tinnitus perception and/or the reactions to tinnitus in a clinically beneficial way. The parameters of sound that may cause beneficial effects, however, are currently only theorized with limited data supporting their effectiveness. Residual inhibition is the temporary suppression or elimination of tinnitus that is usually observed following appropriate auditory stimulation. Our pilot study investigated the effects of a therapeutic acoustic stimulus that was individually customized to maximize residual inhibition of tinnitus and extend its duration to determine if there could be a sustained suppression of the tinnitus signal (i.e., reduced tinnitus loudness) and a reduction in the psychological and emotional reactions to tinnitus. This pilot study had two objectives: (1) to evaluate the feasibility of residual inhibition technique therapy through daily use of hearing aids and (2) to determine its effects by measuring reactionary changes in tinnitus with the Tinnitus Functional Index (TFI) and perceptual changes in tinnitus loudness. A total of 20 adults (14 males, 6 females; mean age: 58 years, SD = 12.88) with chronic tinnitus were enrolled in a four-visit study that consisted of the following: (1) baseline visit and initiation of the intervention period, (2) a 1-month postintervention visit, (3) 2-month postintervention visit and initiation of a wash-out period, and (4) a 3-month visit to assess the wash-out period and any lasting effects of the intervention. The intervention consisted of fitting bilateral hearing aids and creating an individualized residual inhibition stimulus that was streamed via Bluetooth from a smartphone application to the hearing aids. The participants were instructed to wear the hearing aids and stream the residual inhibition stimulus all waking hours for the 2-month intervention period. 
During the wash-out period, the participants were instructed to use the hearing aids for amplification, but the residual inhibition stimulus was discontinued. At all visits, the participants completed the TFI, study-specific self-report measures to document perceptions of tinnitus, a psychoacoustic test battery consisting of tinnitus loudness and pitch matching, and a residual inhibition test battery consisting of minimum masking and minimum residual inhibition levels. At the end of the trial, participants were interviewed about the study experience and acceptability of the residual inhibition treatment technique. Repeated measures analyses of variance (ANOVA) were conducted on the two main outcomes (TFI total score and tinnitus loudness) across all four visits. The results showed a significant main effect of visit on the TFI total score (p < 0.0001). Specifically, the results indicated a significant reduction in TFI total scores from baseline to the 1-month postintervention period, which remained stable across the 2-month postintervention period and the wash-out period. The ANOVA results did not show a significant change in tinnitus loudness as a function of visit (p = 0.480). The majority of the participants reported a positive experience with the study intervention at their exit interview. This pilot study demonstrated that residual inhibition as a sound therapy for tinnitus, specifically through the daily use of hearing aids, was feasible and acceptable to individuals suffering from chronic tinnitus. In addition, participants showed improvement in reactions to tinnitus as demonstrated by sustained reduction in TFI scores on average over the course of the treatment period. Achieving residual inhibition may also provide patients a feeling of control over their tinnitus, and this may have a synergistic effect in reducing the psychological and emotional distress associated with tinnitus.
There was no significant reduction in long-term tinnitus loudness resulting from the residual inhibition treatment; however, the current pilot study may not have had sufficient power to detect such a change. The combination of tinnitus suppression and improved psychosocial/emotional reactions to tinnitus may result in a better quality of life in both the short and long term. A larger-scale study is needed to determine the validity of using residual inhibition as a clinical therapy option and to ascertain any effects on both perception and reactions to tinnitus.
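The TFI analysis described above is a standard one-way repeated-measures ANOVA across the four visits. A minimal sketch of the main-effect F computation, run here on simulated scores rather than the study's data:

```python
import numpy as np

def rm_anova_f(data):
    """F statistic for the main effect of visit in a one-way
    repeated-measures ANOVA; `data` is shaped (subjects, visits)."""
    n, k = data.shape
    grand = data.mean()
    ss_visit = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subject = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_error = ((data - grand) ** 2).sum() - ss_visit - ss_subject
    df_visit, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_visit / df_visit) / (ss_error / df_error)
    return f, df_visit, df_error
```

Removing the between-subject sum of squares from the error term is what distinguishes the repeated-measures F from an ordinary one-way ANOVA.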
PubMed: 38370522
DOI: 10.1055/s-0043-1770153
Nature Communications, Feb 2024
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems, ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
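A toy version of the beat-based mechanisms the abstract names can be sketched by splitting the pairwise partial interactions of a dyad into slow-beat and fast-beat (roughness) terms. The cutoffs, weights, and amplitude rolloff below are arbitrary illustrations, not the paper's fitted model:

```python
import numpy as np

def partials(f0, n=8, rolloff=0.8):
    """Harmonic complex tone: n partials with geometric amplitude rolloff."""
    freqs = f0 * np.arange(1, n + 1)
    amps = rolloff ** np.arange(n)
    return freqs, amps

def interference_terms(f0_a, f0_b, slow_cut=15.0, fast_cut=300.0):
    """Split pairwise partial interactions of a dyad into slow-beat energy
    and fast-beat (roughness) energy, weighted by partial amplitudes."""
    fa, aa = partials(f0_a)
    fb, ab = partials(f0_b)
    slow = fast = 0.0
    for f1, a1 in zip(fa, aa):
        for f2, a2 in zip(fb, ab):
            df = abs(f1 - f2)
            if 0 < df < slow_cut:
                slow += a1 * a2
            elif slow_cut <= df < fast_cut:
                fast += a1 * a2
    return slow, fast

def consonance_score(f0_a, f0_b, w_slow=0.5, w_fast=2.0):
    """Toy composite score: penalize fast beats, mildly reward slow beats."""
    slow, fast = interference_terms(f0_a, f0_b)
    return w_slow * slow - w_fast * fast
```

Even with these toy parameters, a perfect fifth scores higher than a near-semitone dyad, reproducing the qualitative roughness effect; the paper's actual model is fitted to the behavioral judgments and also includes a harmonicity term.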
Topics: Humans; Psychoacoustics; Music; Auditory Perception; Emotions; Judgment; Acoustic Stimulation
PubMed: 38369535
DOI: 10.1038/s41467-024-45812-z
Data in Brief, Apr 2024
The present database contains the brain activity of subjective tinnitus sufferers while they identify their tinnitus sound. Its main objective is to provide spontaneous electroencephalographic (EEG) activity at rest, and evoked EEG activity recorded while tinnitus sufferers attempt to identify their tinnitus sound among 54 example tinnitus sounds. For the database, 37 volunteers were recruited: 15 without tinnitus (Control Group, CG) and 22 with tinnitus (Tinnitus Group, TG). EEG was recorded from 30 channels under two conditions: 1) a resting condition, in which the volunteer remained at rest with eyes open for two minutes; and 2) an active condition, in which the volunteer had to identify his/her sound stimulus by pressing a key. For the active condition, a tinnitus-sound library was generated in accordance with the most typical acoustic properties of tinnitus. The library consisted of ten pure tones (250 Hz, 500 Hz, 1 kHz, 2 kHz, 3 kHz, 3.5 kHz, 4 kHz, 6 kHz, 8 kHz, 10 kHz), a white noise (WN), a high-frequency narrow-band noise (NBH, 4 kHz-10 kHz), a medium-frequency narrow-band noise (NBM, 1 kHz-4 kHz), a low-frequency narrow-band noise (NBL, 250 Hz-1 kHz), the ten pure tones combined with WN, the ten pure tones combined with NBH, the ten with NBM, and the ten with NBL. In total, 54 tinnitus sounds were presented to both groups. In the case of the CG, volunteers had to identify a sound at 3.5 kHz. In addition to the EEG data, a CSV file with audiometric and psychoacoustic information about the volunteers is provided. For the TG, this information comprises: 1) hearing level, 2) type of tinnitus, 3) tinnitus frequency, 4) tinnitus perception, 5) the Hospital Anxiety and Depression Scale (HADS), and 6) the Tinnitus Functional Index (TFI). For the CG, it comprises: 1) hearing level and 2) HADS.
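The 54-item library has a simple compositional structure: 10 tones, 4 noises, and the 40 tone-plus-noise mixtures. A sketch of how such a library could be generated with NumPy (synthesis details such as duration, mixing gain, and normalization are assumptions, not taken from the dataset):

```python
import numpy as np

SR = 44100  # assumed sample rate

def pure_tone(freq, dur=1.0, sr=SR):
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)

def band_noise(lo, hi, dur=1.0, sr=SR, rng=None):
    """Band-limited noise via FFT-domain masking of white noise."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(int(sr * dur))
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(f < lo) | (f > hi)] = 0
    y = np.fft.irfft(spec, n=len(x))
    return y / np.max(np.abs(y))

def build_library(dur=1.0):
    """54 stimuli: 10 tones, WN, NBH, NBM, NBL, plus each tone mixed
    with each of the four noises (10 x 4 = 40 mixtures)."""
    tones = [250, 500, 1000, 2000, 3000, 3500, 4000, 6000, 8000, 10000]
    noises = {"WN": band_noise(20, SR / 2 - 1, dur),
              "NBH": band_noise(4000, 10000, dur),
              "NBM": band_noise(1000, 4000, dur),
              "NBL": band_noise(250, 1000, dur)}
    lib = {f"tone_{f}": pure_tone(f, dur) for f in tones}
    lib.update(noises)
    for name, noise in noises.items():
        for f in tones:
            lib[f"tone_{f}+{name}"] = 0.5 * (pure_tone(f, dur) + noise)
    return lib
```

The count works out as 10 + 4 + 40 = 54, matching the library size described in the abstract.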
PubMed: 38357451
DOI: 10.1016/j.dib.2024.110142
Hearing Research, Mar 2024
Spectro-temporal modulation (STM) detection sensitivity has been shown to be associated with speech-in-noise reception in hearing-impaired (HI) individuals. Based on previous research, a recent study [Zaar, Simonsen, Dau, and Laugesen (2023). Hear Res 427:108650] introduced an STM test paradigm with audibility compensation, employing STM stimulus variants using noise and complex tones as carrier signals. The study demonstrated that the test was suitable for the target population of elderly individuals with moderate-to-severe hearing loss and showed promising predictions of speech-reception thresholds (SRTs) measured in a realistic setup with spatially distributed speech and noise maskers and linear audibility compensation. The present study further investigated the suggested STM test with respect to (i) test-retest variability for the most promising STM stimulus variants, (ii) its predictive power with respect to realistic speech-in-noise reception with non-linear hearing-aid amplification, (iii) its connection to effects of directionality and noise reduction (DIR+NR) hearing-aid processing, and (iv) its relation to DIR+NR preference. Thirty elderly HI participants were tested in a combined laboratory and field study, collecting STM thresholds with a complex-tone-based and a noise-based STM stimulus design, SRTs with spatially distributed speech and noise maskers using hearing aids with non-linear amplification and two different levels of DIR+NR, as well as subjective reports and preference ratings obtained in two field periods with the two DIR+NR hearing-aid settings.
The results indicate that the noise-carrier based STM test variant (i) showed optimal test-retest properties, (ii) yielded a highly significant correlation with SRTs (R=0.61) exceeding and complementing the predictive power of the audiogram, (iii) yielded significant correlation (R=0.51) with the DIR+NR-induced SRT benefit, and (iv) did not provide significant correlation with subjective preference for DIR+NR settings in the field. Overall, the suggested STM test represents a valuable tool for diagnosing speech-reception problems that remain when hearing-aid amplification has been provided and the resulting need for and benefit from DIR+NR hearing-aid processing.
Topics: Humans; Aged; Speech; Hearing Aids; Speech Perception; Hearing Loss; Hearing; Hearing Loss, Sensorineural
PubMed: 38281473
DOI: 10.1016/j.heares.2024.108949
International Journal of Preventive..., 2023
BACKGROUND
Noise is one of the most important harmful factors in the environment. There are limited studies on the effect of noise loudness on brain signals and attention. The main objective of this study was to investigate the relationship between exposure to different loudness levels and a brain index, types of attention, and subjective evaluation.
METHODS
Four noises with different loudness levels were generated. Sixty-four male students participated in this study. Each subject performed the integrated visual and auditory continuous performance test (IVA-2) test before and during exposure to noise loudness signals while their electroencephalography was recorded. Finally, the alpha-to-gamma ratio (AGR), five types of attention, and the subjective evaluation results were examined.
RESULTS
During exposure to the loudness levels, the AGR and the measured types of attention decreased while NASA Task Load Index (NASA-TLX) scores increased. Noise exposure at lower loudness levels (65 and 75 phon) led to greater attention dysfunction than at higher loudness levels. The AGR changed significantly during exposure to 65 and 75 phon with auditory stimuli; with visual stimuli, this significant change was observed at all loudness levels except 85 phon. Divided and sustained attention changed significantly during exposure to all loudness levels with visual stimuli. The AGR had a significant inverse correlation with the total NASA-TLX score during noise exposure.
CONCLUSIONS
These results can lead to the design of methods to control the psychological effects of noise at specific frequencies (250 and 4000 Hz) and can prevent non-auditory damage to human cognitive performance in industrial and urban environments.
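The alpha-to-gamma ratio (AGR) used in this study is a band-power ratio computed from the EEG spectrum. A minimal sketch using a periodogram estimate; the band edges below are the conventional alpha and gamma ranges, and the study's exact estimator may differ:

```python
import numpy as np

def band_power(x, sr, lo, hi):
    """Mean power in [lo, hi] Hz via the periodogram of one EEG channel."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(len(x), 1 / sr)
    m = (f >= lo) & (f <= hi)
    return spec[m].mean()

def alpha_gamma_ratio(x, sr, alpha=(8, 13), gamma=(30, 45)):
    """Alpha-to-gamma ratio (AGR): alpha band power over gamma band power."""
    return band_power(x, sr, *alpha) / band_power(x, sr, *gamma)
```

A decreasing AGR during exposure, as reported here, corresponds to relatively less alpha and/or more gamma power.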
PubMed: 38264555
DOI: 10.4103/ijpvm.ijpvm_395_22
Sensors (Basel, Switzerland), Dec 2023
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen whether auditory and rhythmic support could increase attention for visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but more appropriate for eliciting attention and P3a event-related potentials (ERPs). In this study, the aim was to distinguish between targets and distractors based on the subject's electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points the model used to make its predictions using saliency maps. We were able to successfully perform the aforementioned classification task for both the IS and CS scenarios, reaching classification accuracies up to 76%. In accordance with the literature, the model primarily used the parietal-occipital electrodes between 200 ms and 300 ms after the stimulus to make its prediction. The findings from this research contribute to the development of more effective P300-based brain-computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
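As a rough illustration of the single-trial target/distractor classification described here, the sketch below extracts mean amplitudes in the 200-300 ms window the saliency maps highlighted and applies a nearest-class-mean rule to simulated epochs. The paper's actual ML pipelines are more sophisticated; this only shows the shape of the problem:

```python
import numpy as np

def window_features(epochs, sr, t0=0.2, t1=0.3):
    """Mean amplitude per channel in the 200-300 ms post-stimulus window,
    the region the saliency maps highlighted."""
    i0, i1 = int(t0 * sr), int(t1 * sr)
    return epochs[:, :, i0:i1].mean(axis=2)          # (trials, channels)

class NearestClassMean:
    """Minimal stand-in for the paper's ML classifiers (illustration only)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Trained on epochs where targets carry an ERP-like bump in that window, even this rule separates the classes; real EEG noise and inter-subject variability make the task far harder, hence the reported 76% ceiling.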
Topics: Humans; Artificial Intelligence; Acoustic Stimulation; Electroencephalography; Attention; Event-Related Potentials, P300; Evoked Potentials
PubMed: 38067961
DOI: 10.3390/s23239588
PloS One, 2023
There is debate whether the foundations of consonance and dissonance are rooted in culture or in psychoacoustics. In order to disentangle the contributions of culture and psychoacoustics, we considered automatic responses to the perfect fifth and the major second (flattened by 25 cents) intervals alongside conscious evaluations of the same intervals across two cultures and two levels of musical expertise. Four groups of participants completed the tasks: expert performers of Lithuanian Sutartinės, English-speaking musicians in Western diatonic genres, Lithuanian non-musicians, and English-speaking non-musicians. Sutartinės singers were chosen as this style of singing is an example of 'beat diaphony', where the intervals between parts form predominantly rough sonorities and audible beats. There was no difference in automatic responses to the intervals, suggesting that an aversion to acoustically rough intervals is not governed by cultural familiarity but may have a physical basis in how the human auditory system works. However, conscious evaluations resulted in group differences, with Sutartinės singers rating the flattened major second as more positive than did the other groups. The results are discussed in the context of recent developments in consonance and dissonance research.
Topics: Humans; Music; Singing; Psychoacoustics; Recognition, Psychology; Consciousness; Acoustic Stimulation; Auditory Perception
PubMed: 38051728
DOI: 10.1371/journal.pone.0294645
Scientific Reports, Nov 2023 (Meta-Analysis)
Sensorimotor synchronization strategies have frequently been used for gait rehabilitation in different neurological populations. Despite these positive effects on gait, the attentional processes required to dynamically attend to the auditory stimuli need elaboration. Here, we investigate auditory attention in neurological populations compared with healthy controls, as quantified by EEG recordings. The literature was systematically searched in the PubMed and Web of Science databases. The inclusion criterion was investigation of auditory attention quantified by EEG recordings in neurological populations in cross-sectional studies. In total, 35 studies were included, with participants with Parkinson's disease (PD), stroke, Traumatic Brain Injury (TBI), Multiple Sclerosis (MS), and Amyotrophic Lateral Sclerosis (ALS). A meta-analysis was performed separately on P3 amplitude and latency to examine the differences between neurological populations and healthy controls. Overall, neurological populations showed impairments in auditory processing in terms of magnitude and delay compared with healthy controls. Considering individual auditory processes, and thereafter selecting and/or designing the auditory structure during sensorimotor synchronization paradigms in neurological physical rehabilitation, is recommended.
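The pooling step of a meta-analysis like this is commonly a random-effects model. A sketch of the DerSimonian-Laird estimator on per-study effect sizes and variances (illustrative; the abstract does not specify which pooling method was used):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2.
    `effects` and `variances` are per-study effect sizes and their variances."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1 / variances                                   # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()              # heterogeneity statistic
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_star = 1 / (variances + tau2)                     # random-effects weights
    pooled = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    return pooled, se, tau2
```

When the studies are homogeneous, tau^2 collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) pooled mean.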
Topics: Humans; Cross-Sectional Studies; Attention; Parkinson Disease; Gait; Electroencephalography
PubMed: 38030693
DOI: 10.1038/s41598-023-47597-5