Healthcare (Basel, Switzerland), Jul 2023
Musicians are reported to have enhanced auditory processing. This study aimed to assess auditory perception in Greek musicians with respect to their musical specialization and to compare their auditory processing with that of non-musicians. The auditory processing elements evaluated were speech recognition in babble, rhythmic advantage in speech recognition, short-term working memory, temporal resolution, and frequency discrimination threshold detection. Each group comprised 12 participants. The three experimental groups tested were western classical musicians, Byzantine chanters, and percussionists; the control group consisted of 12 non-musicians. The results revealed: (i) a rhythmic advantage for word recognition in noise for classical musicians (12.42) compared to Byzantine musicians (9.83), as well as for musicians compared to non-musicians (120.50, p = 0.019), (ii) a better frequency discrimination threshold for Byzantine musicians (3.17, p = 0.002) compared to the other two musician groups in the 2000 Hz region, and (iii) significantly better working memory for musicians (123.00, p = 0.025) compared to non-musicians. Musical training enhances elements of auditory processing and may be used as an additional rehabilitation approach during auditory training, focusing on specific types of music for specific auditory processing deficits.
PubMed: 37510468
DOI: 10.3390/healthcare11142027
Hearing Research, Jul 2024
Combining cochlear implants with binaural acoustic hearing via preserved hearing in the implanted ear(s) is commonly referred to as combined electric and acoustic stimulation (EAS). EAS fittings can provide patients with significant benefit for speech recognition in complex noise, perceived listening difficulty, and horizontal-plane localization as compared to traditional bimodal hearing conditions with contralateral, monaural acoustic hearing. However, EAS benefit varies across patients, and the degree of benefit is not reliably related to the underlying audiogram. Previous research has indicated that EAS benefit for speech recognition in complex listening scenarios and for localization is significantly correlated with patients' binaural cue sensitivity, namely sensitivity to interaural time differences (ITD). For pure tones, interaural phase differences (IPD) and ITD can be understood as two perspectives on the same phenomenon: through simple mathematical conversion, one can be transformed into the other, illustrating their inherent interrelation for spatial hearing abilities. However, assessing binaural cue sensitivity is not part of the clinical assessment battery, as psychophysical tasks are time-consuming, require training to reach asymptotic performance, and demand specialized programming and software, all of which render them clinically unfeasible. In this study, we investigated the possibility of objectively measuring binaural cue sensitivity with the acoustic change complex (ACC), elicited by imposing an IPD of varying degrees at the stimulus midpoint. Ten adult listeners with normal hearing were assessed on tasks of behavioral and objective binaural cue sensitivity for carrier frequencies of 250 and 1000 Hz. Results suggest that (1) ACC amplitude increases with IPD; (2) ACC-based IPD sensitivity at 250 Hz is significantly correlated with behavioral ITD sensitivity; and (3) participants were more sensitive to IPDs at 250 Hz than at 1000 Hz.
Thus, this objective measure of IPD sensitivity may hold clinical application for pre- and post-operative assessment for individuals meeting candidacy indications for cochlear implantation with low-frequency acoustic hearing preservation as this relatively quick and objective measure may provide clinicians with information identifying patients most likely to derive benefit from EAS technology.
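For pure tones, the IPD-to-ITD conversion the abstract refers to is just a rescaling by the carrier frequency. A minimal sketch of that relationship (function names are illustrative, not from the study):

```python
def ipd_to_itd(ipd_deg: float, freq_hz: float) -> float:
    """Convert an interaural phase difference (degrees) to the
    equivalent interaural time difference (seconds) for a pure tone."""
    return (ipd_deg / 360.0) / freq_hz

def itd_to_ipd(itd_s: float, freq_hz: float) -> float:
    """Inverse conversion: ITD (seconds) to IPD (degrees)."""
    return itd_s * freq_hz * 360.0

# The same phase shift maps to a larger time difference at lower
# carrier frequencies: a 90 degree IPD is a 1 ms ITD at 250 Hz,
# but only a 0.25 ms ITD at 1000 Hz.
print(ipd_to_itd(90, 250))   # 0.001 s
print(ipd_to_itd(90, 1000))  # 0.00025 s
```

This frequency dependence is one reason sensitivity at 250 Hz and 1000 Hz cannot be compared on phase alone.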
Topics: Humans; Acoustic Stimulation; Cochlear Implants; Cues; Sound Localization; Female; Speech Perception; Male; Cochlear Implantation; Adult; Middle Aged; Auditory Threshold; Electric Stimulation; Audiometry, Pure-Tone; Persons With Hearing Impairments; Time Factors; Aged; Noise; Perceptual Masking; Young Adult; Hearing; Psychoacoustics
PubMed: 38763034
DOI: 10.1016/j.heares.2024.109020
Behavioral Sciences (Basel, Switzerland), Oct 2023
Looking for the Edge of the World: How 3D Immersive Audio Produces a Shift from an Internalised Inner Voice to Unsymbolised Affect-Driven Ways of Thinking and Heightened Sensory Awareness.
In this practice-based case study, we investigate the subjective aesthetic and affective responses to a shift from 2D stereo-based modelling to 3D object-based Dolby Atmos in an audio installation artwork. Dolby Atmos is an object-based audio format, released in 2012 but only recently incorporated into more public-facing formats, that allows an effectively infinite number of sound-object 'placements'. Our analysis focuses on the artist Sadia Sadia's 30-channel audio installation 'Notes to an Unknown Lover', based on her book of free verse poetry of the same title, which was rebuilt and reformatted in a Dolby Atmos specified studio. The effectiveness of three-dimensional (3D) object-based audio is interrogated against more traditional stereo and two-dimensional (2D) formats regarding the expression and communication of emotion, and we examine what effect altered spatiality with an infinite number of placements has on the psychoacoustic and neuroaesthetic response to the text. We provide a unique examination of the consequences of a shift from 2D to wholly encompassing object-based audio in a text-based artist's audio installation work. These findings may also have promising applications for health and well-being issues.
PubMed: 37887508
DOI: 10.3390/bs13100858
Journal of Clinical Sleep Medicine :..., Mar 2024
Clinical Trial
STUDY OBJECTIVES
Hypoglossal nerve stimulation is an established therapy for sleep apnea syndrome. Whether this therapy also has an effect on snoring and nighttime noise exposure, and how strong that effect may be, has not been objectively investigated thus far; this was the aim of this study.
METHODS
In 15 participants (14 males; age: 30-72 years; mean: 51.7 years), polysomnography and acoustic measurements were performed before and after hypoglossal nerve stimulation.
RESULTS
The therapy led to a significant improvement in sleep apnea (apnea-hypopnea index from 35.8 events/h to 11.2 events/h, p < .001). Acoustic parameters showed a highly significant reduction in the average sound pressure level (42.9 dB(A) to 36.4 dB(A), p < .001), averaged A-weighted sound energy (LAeq; 33.1 dB(A) to 28.7 dB(A), p < .001), snoring index (1,068 to 506, p < .001), percentage snoring time (29.7% to 14.1%, p < .001), and psychoacoustic snore score, the latter being a measure of annoyance due to snoring (47.9 to 24.5, p < .001).
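The LAeq values reported above are energy averages, not arithmetic means of decibel readings. A hedged sketch of that standard computation (the function name is illustrative, not from the study):

```python
import math

def laeq(levels_db: list[float]) -> float:
    """Equivalent continuous level: average the sound *energy*
    (10**(L/10)) over the samples, then convert back to decibels."""
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Energy averaging weights loud intervals far more than a plain mean:
print(laeq([50.0, 30.0]))  # ~47.0 dB, not the arithmetic mean of 40
```

This is why a reduction of a few dB(A) in LAeq, as reported here, reflects a substantial drop in emitted sound energy.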
CONCLUSIONS
This study was able to show for the first time by means of objective acoustic and psychoacoustic parameters that hypoglossal nerve stimulation can not only cause a significant improvement in sleep apnea but also has a positive effect on snoring and thus noise annoyance experienced by the bed partner.
CLINICAL TRIAL REGISTRATION
Registry: German Clinical Trials Register; Name: Effect of Hypoglossal Nerve Stimulation on Snoring: An Evaluation Using Objective Acoustic Parameters; URL: https://drks.de/search/de/trial/DRKS00032354; Identifier: DRKS00032354.
CITATION
Fischer R, Vielsmeier V, Kuehnel TS, et al. Effect of hypoglossal nerve stimulation on snoring: an evaluation using objective acoustic parameters. J Clin Sleep Med. 2024;20(3):363-370.
Topics: Adult; Aged; Humans; Male; Middle Aged; Acoustics; Hypoglossal Nerve; Polysomnography; Sleep Apnea Syndromes; Snoring; Female
PubMed: 38426848
DOI: 10.5664/jcsm.10868
BioRxiv : the Preprint Server For..., Feb 2024
Recognizing speech in noise, such as in a busy street or restaurant, is an essential listening task where the task difficulty varies across acoustic environments and noise levels. Yet, current cognitive models are unable to account for changing real-world hearing sensitivity. Here, using natural and perturbed background sounds, we demonstrate that spectrum and modulation statistics of environmental backgrounds drastically impact human word recognition accuracy, and they do so independently of the noise level. These sound statistics can facilitate or hinder recognition: at the same noise level, accuracy can range from 0% to 100%, depending on the background. To explain this perceptual variability, we optimized a biologically grounded hierarchical model, consisting of frequency-tuned cochlear filters and subsequent mid-level modulation-tuned filters that account for central auditory tuning. Low-dimensional summary statistics from the mid-level model accurately predict single-trial perceptual judgments, accounting for more than 90% of the perceptual variance across backgrounds and noise levels, and substantially outperforming a cochlear model. Furthermore, perceptual transfer functions in the mid-level auditory space identify multi-dimensional natural sound features that impact recognition. Thus, speech recognition in natural backgrounds involves interference of multiple summary statistics that are well described by an interpretable, low-dimensional auditory model. Since this framework relates salient natural sound cues to single-trial perceptual judgments, it may improve outcomes for auditory prosthetics and clinical measurements of real-world hearing sensitivity.
PubMed: 38405870
DOI: 10.1101/2024.02.13.579526
Attention, Perception & Psychophysics, Oct 2023
There have been numerous studies investigating the perception of non-native sounds by listeners with different first language (L1) backgrounds. However, research needs to expand to under-researched languages and incorporate predictions conducted under the assumptions of new speech models. This study aimed to investigate the perception of Dutch vowels by Cypriot Greek adult listeners and test the predictions of cross-linguistic acoustic and perceptual similarity. The predictions of acoustic similarity were formed using a machine-learning algorithm. Listeners completed a classification test, which served as the baseline for developing the predictions of perceptual similarity by employing the framework of the Universal Perceptual Model (UPM), and an AXB discrimination test; the latter allowed the evaluation of both acoustic and perceptual predictions. The findings indicated that listeners classified each non-native vowel as one or more L1 vowels, while discrimination accuracy over the non-native contrasts was moderate. In addition, cross-linguistic acoustic similarity predicted to a large extent the classification of non-native sounds in terms of L1 categories, and both acoustic and perceptual similarity predicted the discrimination accuracy of all contrasts. In line with prior work, these findings demonstrate that acoustic and perceptual cues are reliable predictors of non-native contrast discrimination and that the UPM can make accurate estimations of the discrimination patterns of non-native listeners.
Topics: Adult; Humans; Greece; Phonetics; Speech Acoustics; Speech Perception; Language; Acoustics
PubMed: 37740154
DOI: 10.3758/s13414-023-02781-7
Nature Communications, Feb 2024
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even as far as to induce preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
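The updated model described above combines three perceptual terms into a single rating. A deliberately simplified sketch of such a weighted combination, with entirely hypothetical component values and weights (the paper's actual model is fit to the behavioral data):

```python
def consonance_score(harmonicity: float, roughness: float,
                     slow_beats: float,
                     w_h: float = 1.0, w_r: float = 1.0,
                     w_s: float = 0.5) -> float:
    """Composite consonance rating: liking of harmonicity,
    disliking of fast beats (roughness), liking of slow beats.
    All inputs and weights here are illustrative placeholders."""
    return w_h * harmonicity - w_r * roughness + w_s * slow_beats

# Raising roughness alone lowers the predicted rating:
print(consonance_score(0.8, 0.1, 0.2))  # higher
print(consonance_score(0.8, 0.5, 0.2))  # lower
```

The point of the dissociation reported in the paper is that timbral manipulations can move these components independently, so a single-mechanism model cannot capture the observed preferences.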
Topics: Humans; Psychoacoustics; Music; Auditory Perception; Emotions; Judgment; Acoustic Stimulation
PubMed: 38369535
DOI: 10.1038/s41467-024-45812-z
Computers in Human Behavior, Sep 2023
The acquisition of advanced gestures is a challenge in various domains of proficient sensorimotor performance. For example, orchestral violinists must move in sync with the lead violinist's gestures. To help train these gestures, an educational music play-back system was developed using a HoloLens 2 simulated AR environment and an avatar representation of the lead violinist. This study aimed to investigate the impact of using a 2D or 3D representation of the lead violinist's avatar on students' learning experience in the AR environment. To assess the learning outcome, the study employed a longitudinal experiment design, in which eleven participants practiced two pieces of music in four trials, evenly spaced over a month. Participants were asked to mimic the avatar's gestures as closely as possible when it came to using the bow, including bowing, articulations, and dynamics. The study compared the similarities between the avatar's gestures and those of the participants at the biomechanical level, using motion capture measurements, as well as the smoothness of the participants' movements. Additionally, presence and perceived difficulty were assessed using questionnaires. The results suggest that using a 3D representation of the avatar leads to better gesture resemblance and a higher experience of presence compared to a 2D representation. The 2D representation, however, showed a learning effect, but this was not observed in the 3D condition. The findings suggest that the 3D condition benefits from stereoscopic information that enhances spatial cognition, making it more effective in relation to sensorimotor performance. Overall, the 3D condition had a greater impact on performance than on learning. This work concludes with recommendations for future efforts directed towards AR-based advanced gesture training to address the challenges related to measurement methodology and participants' feedback on the AR application.
PubMed: 37663430
DOI: 10.1016/j.chb.2023.107810
Scientific Reports, Apr 2024
Temporal envelope modulations (TEMs) are one of the most important features that cochlear implant (CI) users rely on to understand speech. Electroencephalographic assessment of TEM encoding could help clinicians to predict speech recognition more objectively, even in patients unable to provide active feedback. The acoustic change complex (ACC) and the auditory steady-state response (ASSR) evoked by low-frequency amplitude-modulated pulse trains can be used to assess TEM encoding with electrical stimulation of individual CI electrodes. In this study, we focused on amplitude modulation detection (AMD) and amplitude modulation frequency discrimination (AMFD) with stimulation of a basal versus an apical electrode. In twelve adult CI users, we (a) assessed behavioral AMFD thresholds and (b) recorded cortical auditory evoked potentials (CAEPs), AMD-ACC, AMFD-ACC, and ASSR in a combined 3-stimulus paradigm. We found that the electrophysiological responses were significantly higher for apical than for basal stimulation. Peak amplitudes of AMFD-ACC were small and (therefore) did not correlate with speech-in-noise recognition. We found significant correlations between speech-in-noise recognition and (a) behavioral AMFD thresholds and (b) AMD-ACC peak amplitudes. AMD and AMFD hold potential to develop a clinically applicable tool for assessing TEM encoding to predict speech recognition in CI users.
Topics: Adult; Humans; Psychoacoustics; Speech Perception; Speech; Acoustic Stimulation; Cochlear Implants; Cochlear Implantation; Evoked Potentials, Auditory
PubMed: 38589483
DOI: 10.1038/s41598-024-58225-1
The Journal of the Acoustical Society..., Apr 2024
The auditory sensitivity of a small songbird, the red-cheeked cordon bleu, was measured using the standard methods of animal psychophysics. Hearing in cordon bleus is similar to that of other small passerines, with best hearing in the frequency region from 2 to 4 kHz and sensitivity declining at a rate of about 10 dB/octave below 2 kHz and about 35 dB/octave as frequency increases from 4 to 9 kHz. While critical ratios are similar to those of other songbirds, the long-term average power spectrum of cordon bleu song falls above the frequency region of best hearing in this species.
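The roll-off figures above can be read as a threshold elevation proportional to the number of octaves from the edge of the best-hearing region. A small illustrative helper (names and usage are assumptions, not from the study):

```python
import math

def threshold_shift_db(freq_hz: float, edge_hz: float,
                       slope_db_per_octave: float) -> float:
    """Sensitivity loss (dB) at freq_hz relative to the edge of the
    best-hearing region, given a roll-off in dB per octave."""
    octaves = abs(math.log2(freq_hz / edge_hz))
    return slope_db_per_octave * octaves

# Below the 2 kHz edge, sensitivity declines ~10 dB/octave:
print(threshold_shift_db(500, 2000, 10))   # 20.0 dB (two octaves down)
# Above 4 kHz the roll-off is steeper, ~35 dB/octave:
print(threshold_shift_db(9000, 4000, 35))  # ~41 dB at 9 kHz
```

The steep high-frequency slope is what makes the mismatch with the song spectrum notable: song energy concentrated above 4 kHz falls on a rapidly declining part of the audiogram.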
Topics: Animals; Vocalization, Animal; Hearing; Songbirds; Acoustic Stimulation; Auditory Threshold; Male; Psychoacoustics; Sound Spectrography; Female
PubMed: 38656337
DOI: 10.1121/10.0025764