Brain Sciences, Sep 2023
PURPOSE
Student audiology training in tinnitus evaluation and management is heterogeneous and has been found to be insufficient. We designed a new clinical simulation laboratory for training students in psychoacoustic measurement of tinnitus: one student plays the role of the tinnitus patient, wearing a device that produces a tinnitus-like sound in one ear, while another student plays the role of the audiologist and evaluates their condition. The objective of the study was to evaluate this new tinnitus clinical simulation laboratory from the students' perspective.
METHOD
This study reports the findings from twenty-one audiology students (20 female and 1 male, mean age = 29, SD = 7.7) who participated in this laboratory as part of a mandatory audiology class at Université Laval in Quebec. Three students had hearing loss (one mild, two moderate). All students played the roles of both the clinician and the patient, alternately. They then completed a questionnaire about their overall experience of the laboratory.
RESULTS
The qualitative analysis revealed three main themes: "Benefits of the laboratory on future practice", "Barriers and facilitators of the psychoacoustic assessment", and "Awareness of living with tinnitus". The participants reported that this experience would have a positive impact on their ability to manage tinnitus patients in their future career.
CONCLUSION
This fast, cheap, and effective clinical simulation method could be used by audiology and other healthcare educators to strengthen students' skills and confidence in tinnitus evaluation and management. The protocol is made available to all interested parties.
PubMed: 37759939
DOI: 10.3390/brainsci13091338
Behavior Research Methods, Jan 2024
HALT (The Headphone and Loudspeaker Test) Part II is a continuation of HALT Part I. The main goals of this study (HALT Part II) were (a) to develop screening tests and strategies to discriminate headphones from loudspeakers, (b) to come up with a methodological approach to combine more than two screening tests, and (c) to estimate data quality and required sample sizes for the application of screening tests. Screening Tests A and B were developed based on psychoacoustic effects. In a first laboratory study (N = 40), the two tests were evaluated with four different playback devices (circumaural and intra-aural headphones; external and laptop loudspeakers). In a final step, the two screening tests A and B and a previously established test C were validated in an Internet-based study (N = 211). Test B showed the best single-test performance (sensitivity = 80.0%, specificity = 83.2%, AUC = .844). Following an epidemiological approach, the headphone prevalence (17.67%) was determined to calculate positive and negative predictive values. For a user-oriented, parameter-based selection of suitable screening tests and the simple application of screening strategies, an online tool was programmed. HALT Part II is assumed to be a reliable procedure for planning and executing screenings to detect headphone and loudspeaker playback. Our methodological approach can be used as a generic technique for optimizing the application of any screening tests in psychological research. HALT Part I and II complement each other to form a comprehensive overall concept to control for playback conditions in Internet experiments.
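The epidemiological step in the abstract (turning sensitivity, specificity, and prevalence into positive and negative predictive values) is a direct application of Bayes' rule. A minimal sketch using the figures reported for Test B (sensitivity 80.0%, specificity 83.2%, headphone prevalence 17.67%); the function name is illustrative, not from the paper:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence              # true-positive rate in the population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate
    tn = specificity * (1 - prevalence)        # true-negative rate
    fn = (1 - sensitivity) * prevalence        # false-negative rate
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Figures reported for Test B in the abstract
ppv, npv = predictive_values(0.800, 0.832, 0.1767)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

With these inputs the PPV comes out around 0.51, illustrating why the low headphone prevalence matters: even a test with good sensitivity and specificity yields many false positives when the condition being screened for is uncommon.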
Topics: Humans; Acoustic Stimulation; Predictive Value of Tests; Data Accuracy; Prevalence
PubMed: 36650403
DOI: 10.3758/s13428-022-02048-3
PeerJ, 2023
BACKGROUND
Most studies on pitch shift provoked by hearing loss have been conducted using pure tones. However, many sounds encountered in everyday life are harmonic complex tones. In the present study, psychoacoustic experiments using complex tones were performed on healthy participants, and the possible mechanisms that cause pitch shift due to hearing loss are discussed.
METHODS
Two experiments were performed in this study. In experiment 1, two tones were presented, and the participants were asked to select the tone that was higher in pitch. Partials with frequencies less than 250, 500, 750, or 1,000 Hz were eliminated from the harmonic complex tones and used as test tones to simulate low-tone hearing loss. Each tone pair was constructed such that the tone with a lower fundamental frequency (F0) was higher in terms of the frequency of the lowest partial. Furthermore, partials whose frequencies were greater than 1,300 or 1,600 Hz were also eliminated from these test tones to simulate high-tone hearing loss or modified sounds that patients may hear in everyday life. When a tone with a lower F0 was perceived as higher in pitch, it was considered a pitch shift from the expected tone. In experiment 2, tonal sequences were constructed to create a passage of the song "Lightly Row." Similar to experiment 1, partials of harmonic complex tones were eliminated from the tones. After listening to these tonal sequences, the participants were asked if the sequences sounded correct based on the melody or off-key.
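The stimulus manipulation described above, deleting partials below a low-frequency cutoff (simulated low-tone loss) and above a high-frequency cutoff (simulated high-tone loss), can be sketched as follows. The cutoff values come from the experiment; the F0, equal amplitudes, duration, and sampling rate are illustrative assumptions, not the study's parameters:

```python
import math

def filtered_harmonics(f0, low_cut, high_cut, max_freq=8000.0):
    """Harmonic frequencies of a complex tone with fundamental `f0`,
    keeping only partials whose frequencies lie in [low_cut, high_cut]."""
    partials = []
    n = 1
    while n * f0 <= max_freq:
        f = n * f0
        if low_cut <= f <= high_cut:
            partials.append(f)
        n += 1
    return partials

def synthesize(partials, duration=0.5, sr=16000):
    """Equal-amplitude sum of sinusoids at the given partial frequencies."""
    n_samples = int(duration * sr)
    return [
        sum(math.sin(2 * math.pi * f * t / sr) for f in partials)
        for t in range(n_samples)
    ]

# Simulated low-tone loss (partials below 500 Hz removed) combined with
# simulated high-tone loss (partials above 1600 Hz removed), for a 220 Hz tone
print(filtered_harmonics(220.0, 500.0, 1600.0))
```

Note how the lowest surviving partial (660 Hz, the third harmonic) is well above the 220 Hz fundamental, which is the condition under which the pitch shifts reported in the results can arise.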
RESULTS
The results showed that the pitch shifts and the melody sounds off-key when lower partials are eliminated from complex tones, especially when a greater number of high-frequency components are also eliminated.
CONCLUSION
Considering that these experiments were performed on healthy participants, the results suggest that the pitch shifts from the expected tone when patients with hearing loss hear certain complex tones, regardless of the underlying etiology of the hearing loss.
Topics: Humans; Hearing Loss; Deafness; Hearing; Computer Simulation; Niacinamide
PubMed: 37727688
DOI: 10.7717/peerj.16053
Computers in Human Behavior, Sep 2023
The acquisition of advanced gestures is a challenge in various domains of proficient sensorimotor performance. For example, orchestral violinists must move in sync with the lead violinist's gestures. To help train these gestures, an educational music playback system was developed using a HoloLens 2 AR environment and an avatar representation of the lead violinist. This study aimed to investigate the impact of using a 2D or 3D representation of the lead violinist's avatar on students' learning experience in the AR environment. To assess the learning outcome, the study employed a longitudinal experiment design, in which eleven participants practiced two pieces of music in four trials, evenly spaced over a month. Participants were asked to mimic the avatar's gestures as closely as possible in their use of the bow, including bowing, articulations, and dynamics. The study compared the similarities between the avatar's gestures and those of the participants at the biomechanical level, using motion capture measurements, as well as the smoothness of the participants' movements. Additionally, presence and perceived difficulty were assessed using questionnaires. The results suggest that using a 3D representation of the avatar leads to better gesture resemblance and a higher experience of presence compared to a 2D representation. The 2D representation, however, showed a learning effect that was not observed in the 3D condition. The findings suggest that the 3D condition benefits from stereoscopic information that enhances spatial cognition, making it more effective in relation to sensorimotor performance. Overall, the 3D condition had a greater impact on performance than on learning. This work concludes with recommendations for future efforts directed towards AR-based advanced gesture training to address the challenges related to measurement methodology and participants' feedback on the AR application.
PubMed: 37663430
DOI: 10.1016/j.chb.2023.107810
BioRxiv : the Preprint Server For..., Feb 2024
Recognizing speech in noise, such as on a busy street or in a restaurant, is an essential listening task whose difficulty varies across acoustic environments and noise levels. Yet current cognitive models are unable to account for changing real-world hearing sensitivity. Here, using natural and perturbed background sounds, we demonstrate that the spectrum and modulation statistics of environmental backgrounds drastically impact human word-recognition accuracy, and that they do so independently of the noise level. These sound statistics can facilitate or hinder recognition: at the same noise level, accuracy can range from 0% to 100%, depending on the background. To explain this perceptual variability, we optimized a biologically grounded hierarchical model consisting of frequency-tuned cochlear filters and subsequent mid-level modulation-tuned filters that account for central auditory tuning. Low-dimensional summary statistics from the mid-level model accurately predict single-trial perceptual judgments, accounting for more than 90% of the perceptual variance across backgrounds and noise levels and substantially outperforming a cochlear model. Furthermore, perceptual transfer functions in the mid-level auditory space identify multi-dimensional natural sound features that impact recognition. Thus, speech recognition in natural backgrounds involves interference from multiple summary statistics that are well described by an interpretable, low-dimensional auditory model. Since this framework relates salient natural sound cues to single-trial perceptual judgments, it may improve outcomes for auditory prosthetics and clinical measurements of real-world hearing sensitivity.
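The "noise level" held constant across backgrounds in such paradigms is conventionally set as a signal-to-noise ratio in dB, by rescaling the background relative to the speech before mixing. A minimal sketch under that assumption, with plain sample lists and toy signals standing in for real speech and background recordings:

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(speech, background, snr_db):
    """Scale `background` so the speech-to-background RMS ratio equals
    `snr_db` (in dB), then add the two signals sample by sample."""
    gain = rms(speech) / (rms(background) * 10 ** (snr_db / 20))
    return [s + gain * b for s, b in zip(speech, background)]

# Toy stand-ins: a 440 Hz "speech" tone and a second tone as "background",
# mixed at -3 dB SNR (background slightly louder than the speech)
sr = 8000
speech = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
background = [math.sin(2 * math.pi * 731 * t / sr + 1.0) for t in range(sr)]
mixed = mix_at_snr(speech, background, -3.0)
```

The abstract's central point is that two backgrounds rescaled to the same SNR by this procedure can still produce wildly different recognition accuracy, because their spectrum and modulation statistics differ.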
PubMed: 38405870
DOI: 10.1101/2024.02.13.579526
Ear and Hearing
Prelingually Deaf Children With Cochlear Implants Show Better Perception of Voice Cues and Speech in Competing Speech Than Postlingually Deaf Adults With Cochlear Implants.
OBJECTIVES
Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults.
DESIGN
CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender.
RESULTS
CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children.
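The thresholds above are reported in semitones, a logarithmic frequency measure defined by st = 12 * log2(f2 / f1). A small sketch of the conversion between semitone distances and frequency ratios, applied to the reported thresholds:

```python
import math

def semitones(f1, f2):
    """Distance in semitones between two frequencies."""
    return 12 * math.log2(f2 / f1)

def ratio_from_semitones(st):
    """Frequency ratio corresponding to a semitone distance."""
    return 2 ** (st / 12)

# An octave (frequency doubling) is exactly 12 semitones
print(semitones(220.0, 440.0))

# The reported CI-children F0 threshold of 5.92 st, expressed as the
# F0 ratio needed for a reliable difference (vs. 9.19 st for CI adults)
print(ratio_from_semitones(5.92), ratio_from_semitones(9.19))
```

This makes the group difference concrete: a 5.92 st threshold corresponds to roughly a 41% change in F0, while 9.19 st corresponds to roughly a 70% change.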
CONCLUSION
Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled those of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore that F0 and VTL cues are present in the CI signal to some degree and suggest that other factors contribute to the perception challenges faced by CI adults.
Topics: Humans; Cochlear Implants; Deafness; Male; Speech Perception; Female; Cues; Child; Adult; Cochlear Implantation; Young Adult; Adolescent; Voice; Case-Control Studies; Child, Preschool; Middle Aged
PubMed: 38616318
DOI: 10.1097/AUD.0000000000001489
Attention, Perception & Psychophysics, Oct 2023
There have been numerous studies investigating the perception of non-native sounds by listeners with different first language (L1) backgrounds. However, research needs to expand to under-researched languages and to incorporate predictions made under the assumptions of newer speech models. This study aimed to investigate the perception of Dutch vowels by Cypriot Greek adult listeners and to test the predictions of cross-linguistic acoustic and perceptual similarity. The predictions of acoustic similarity were formed using a machine-learning algorithm. Listeners completed a classification test, which served as the baseline for developing the predictions of perceptual similarity within the framework of the Universal Perceptual Model (UPM), and an AXB discrimination test; the latter allowed the evaluation of both the acoustic and the perceptual predictions. The findings indicated that listeners classified each non-native vowel as one or more L1 vowels, while discrimination accuracy over the non-native contrasts was moderate. In addition, cross-linguistic acoustic similarity largely predicted the classification of non-native sounds in terms of L1 categories, and both acoustic and perceptual similarity predicted the discrimination accuracy of all contrasts. In line with prior findings, these results demonstrate that acoustic and perceptual cues are reliable predictors of non-native contrast discrimination and that the UPM can make accurate predictions about the discrimination patterns of non-native listeners.
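The abstract does not say which machine-learning algorithm produced the acoustic-similarity predictions, but the underlying idea, mapping each non-native vowel token onto its acoustically closest L1 category, can be sketched with a simple nearest-centroid rule over formant values. All formant numbers below are illustrative placeholders, not data from the study:

```python
import math

# Hypothetical mean (F1, F2) values in Hz for a few L1 vowel categories
l1_vowels = {
    "i": (300, 2300),
    "e": (450, 2000),
    "a": (750, 1300),
    "o": (500, 900),
    "u": (350, 800),
}

def closest_l1_category(f1, f2):
    """Map a non-native vowel token to the acoustically nearest
    L1 category (Euclidean distance in F1/F2 space)."""
    return min(
        l1_vowels,
        key=lambda v: math.dist((f1, f2), l1_vowels[v]),
    )

# A non-native vowel token with F1 = 430 Hz, F2 = 1950 Hz
print(closest_l1_category(430, 1950))
```

A real acoustic-similarity model would use more dimensions (duration, F3, spectral dynamics) and a trained classifier rather than fixed centroids, but the classification-to-nearest-category logic is the same.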
Topics: Adult; Humans; Greece; Phonetics; Speech Acoustics; Speech Perception; Language; Acoustics
PubMed: 37740154
DOI: 10.3758/s13414-023-02781-7
Nature Communications, Feb 2024
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems, ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
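The beat-based components of such a model can be illustrated by counting the slow and fast beat rates between the partials of two harmonic tones. This is only a toy sketch of those two components (it omits the harmonicity term), and the band edges and weights are illustrative, not the paper's fitted values:

```python
# Illustrative band edges, not fitted parameters
SLOW_BEAT_MAX = 20.0   # Hz: beats slower than this tend to be liked
FAST_BEAT_MAX = 300.0  # Hz: beats up to roughly this rate contribute roughness

def dyad_beats(f0_a, f0_b, n_harmonics=8):
    """Beat rates between every pair of partials of two harmonic tones."""
    partials_a = [n * f0_a for n in range(1, n_harmonics + 1)]
    partials_b = [n * f0_b for n in range(1, n_harmonics + 1)]
    return [abs(fa - fb) for fa in partials_a for fb in partials_b]

def consonance_score(f0_a, f0_b, w_slow=0.01, w_fast=0.02):
    """Toy composite: liking of slow beats minus disliking of fast beats."""
    beats = dyad_beats(f0_a, f0_b)
    slow = sum(1 for b in beats if 0 < b <= SLOW_BEAT_MAX)
    fast = sum(1 for b in beats if SLOW_BEAT_MAX < b <= FAST_BEAT_MAX)
    return w_slow * slow - w_fast * fast

# A just perfect fifth (3:2) versus a slightly mistuned fifth
print(consonance_score(220.0, 330.0), consonance_score(220.0, 336.0))
```

The point of the paper's timbral manipulations is precisely that reshaping the partials fed into computations like `dyad_beats` changes which intervals minimize roughness, which is how inharmonic intervals can come to be preferred.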
Topics: Humans; Psychoacoustics; Music; Auditory Perception; Emotions; Judgment; Acoustic Stimulation
PubMed: 38369535
DOI: 10.1038/s41467-024-45812-z
Scientific Reports, Apr 2024
Temporal envelope modulations (TEMs) are one of the most important features that cochlear implant (CI) users rely on to understand speech. Electroencephalographic assessment of TEM encoding could help clinicians to predict speech recognition more objectively, even in patients unable to provide active feedback. The acoustic change complex (ACC) and the auditory steady-state response (ASSR) evoked by low-frequency amplitude-modulated pulse trains can be used to assess TEM encoding with electrical stimulation of individual CI electrodes. In this study, we focused on amplitude modulation detection (AMD) and amplitude modulation frequency discrimination (AMFD) with stimulation of a basal versus an apical electrode. In twelve adult CI users, we (a) assessed behavioral AMFD thresholds and (b) recorded cortical auditory evoked potentials (CAEPs), AMD-ACC, AMFD-ACC, and ASSR in a combined 3-stimulus paradigm. We found that the electrophysiological responses were significantly higher for apical than for basal stimulation. Peak amplitudes of AMFD-ACC were small and (therefore) did not correlate with speech-in-noise recognition. We found significant correlations between speech-in-noise recognition and (a) behavioral AMFD thresholds and (b) AMD-ACC peak amplitudes. AMD and AMFD hold potential to develop a clinically applicable tool for assessing TEM encoding to predict speech recognition in CI users.
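The stimuli here are low-frequency amplitude-modulated pulse trains. A sketch of generating the pulse-amplitude sequence for such a stimulus (a sinusoidal envelope applied to a fixed-rate train; the rate, modulation frequency, depth, and duration below are illustrative, not the study's parameters):

```python
import math

def am_pulse_train(pulse_rate=900.0, mod_freq=40.0, depth=1.0, duration=0.1):
    """Amplitude of each pulse in a fixed-rate train whose amplitudes follow
    a sinusoidal envelope: a(t) = (1 + depth*sin(2*pi*fm*t)) / (1 + depth),
    normalized so amplitudes stay in [0, 1]."""
    n_pulses = int(duration * pulse_rate)
    amps = []
    for k in range(n_pulses):
        t = k / pulse_rate  # time of the k-th pulse
        amps.append((1 + depth * math.sin(2 * math.pi * mod_freq * t))
                    / (1 + depth))
    return amps

amps = am_pulse_train()
print(len(amps))
```

In a CI context each amplitude would then be mapped to a current level on the stimulated electrode; detecting the 40 Hz envelope (AMD) or a change in its frequency (AMFD) is what the behavioral and electrophysiological measures in the study assess.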
Topics: Adult; Humans; Psychoacoustics; Speech Perception; Speech; Acoustic Stimulation; Cochlear Implants; Cochlear Implantation; Evoked Potentials, Auditory
PubMed: 38589483
DOI: 10.1038/s41598-024-58225-1
Frontiers in Neuroscience, 2024
Acute ischemic stroke, characterized by a localized reduction in blood flow to specific areas of the brain, has been shown to affect binaural auditory perception. In a previous study conducted during the acute phase of ischemic stroke, two tasks of binaural hearing were performed: binaural tone-in-noise detection, and lateralization of stimuli with interaural time- or level differences. Various lesion-specific, as well as individual, differences in binaural performance between patients in the acute phase of stroke and a control group were demonstrated. For the current study, we re-invited the same group of patients, whereupon a subgroup repeated the experiments during the subacute and chronic phases of stroke. Similar to the initial study, this subgroup consisted of patients with lesions in different locations, including cortical and subcortical areas. At the group level, the results from the tone-in-noise detection experiment remained consistent across the three measurement phases, as did the number of deviations from normal performance in the lateralization task. However, the performance in the lateralization task exhibited variations over time among individual patients. Some patients demonstrated improvements in their lateralization abilities, indicating recovery, whereas others' lateralization performance deteriorated during the later stages of stroke. Notably, our analyses did not reveal consistent patterns for patients with similar lesion locations. These findings suggest that recovery processes are more individual than the acute effects of stroke on binaural perception. Individual impairments in binaural hearing abilities after the acute phase of ischemic stroke have been demonstrated and should therefore also be targeted in rehabilitation programs.
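Lateralization stimuli with interaural time differences are typically parameterized by the ITD a real source would produce. A common textbook approximation is Woodworth's formula, ITD = (r/c)(theta + sin theta), for a distant source at azimuth theta; a minimal sketch (the head radius is a standard textbook value, not from this study):

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth approximation of the interaural time difference (seconds)
    for a distant source at the given azimuth (0 deg = straight ahead),
    assuming a spherical head of radius `head_radius` (m) and speed of
    sound `c` (m/s)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# ITD grows from 0 at the midline toward the side of the head
for az in (0, 30, 60, 90):
    print(az, round(itd_woodworth(az) * 1e6), "microseconds")
```

For a typical adult head this gives a maximum ITD of roughly 650 to 700 microseconds at 90 degrees, the upper end of the range that lateralization tasks like the one described here sample.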
PubMed: 38482140
DOI: 10.3389/fnins.2024.1322762